Lightricks unveils LTX-2: the first production-grade open-source model for synchronized audio-video generation
The AI video generation field has seen a major breakthrough. Lightricks, a well-known creative software company, has officially launched LTX-2, its new-generation open-source foundation model. Billed as the industry's first complete creative engine capable of generating synchronized audio and video with native 4K support, it signals AI video production's entry into production-grade use.

Three modes cover the full creative workflow
LTX-2 moves beyond the single-model approach and introduces three working modes, each matched to a different creative scenario:
LTX-2 Fast (quick mode) is designed for creative brainstorming: it produces high-fidelity 6-to-10-second videos in a matter of seconds, with synchronized sound effects generated automatically. It is the first mode to deliver 4K-class video in seconds, bringing unprecedented speed to rapid concept validation and creative iteration.
LTX-2 Pro (professional mode) balances quality and speed for team collaboration and client presentations. It raises image quality and precision while keeping delivery fast, helping creative teams align on direction before formal production; it is well suited to agencies and brand owners reviewing proposals.
LTX-2 Ultra (flagship mode) targets film-grade delivery, supporting output at 4K resolution and 50 frames per second. Combined with its precise audio-video synchronization, it can be used directly in professional scenarios such as film visual effects and high-end brand films, with no post-production reprocessing required.
Technical parameters at the top of the industry
From a technical perspective, LTX-2 covers both core functions, text-to-video and image-to-video, generating 6 to 10 seconds of content per run (a 15-second version is coming soon). Resolution options include Full HD 1080p, 1440p, and 4K 2160p, with a 720p option planned to meet diverse needs.
Notably, LTX-2 currently supports the 16:9 landscape format, with a 9:16 portrait format in the pipeline. Users can toggle audio on or off per job, producing either silent video or video with synchronized sound. These flexible configurations let the model adapt to production needs ranging from social media short videos to cinema-grade content.
A significant cost advantage
On computational cost, LTX-2 shows strong engineering optimization. Compared with similar products on the market, it cuts compute consumption by 50% and runs smoothly on consumer-grade graphics cards. This removes professional-grade AI video generation's dependence on enterprise infrastructure, giving independent creators and small and medium-sized teams low-barrier access to top-tier creative tools.
On pricing, Fast mode starts at $0.04 per second of generated video, Pro mode at $0.08, and Ultra mode at $0.16, with the exact fee adjusted according to resolution and whether audio is enabled. Compared with the labor and time costs of traditional video production, this pricing is highly competitive.
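To put those base rates in perspective, here is a minimal cost sketch using only the per-second figures above; the resolution and audio adjustments mentioned in the announcement are not modeled and would change the totals.

```python
# Minimal cost sketch based on the announced base per-second rates.
# Resolution and audio surcharges are not modeled, so real invoices may differ.
RATES_PER_SECOND = {"fast": 0.04, "pro": 0.08, "ultra": 0.16}  # USD per second

def estimate_cost(mode: str, duration_seconds: float) -> float:
    """Return the base generation cost for a clip in the given mode."""
    return RATES_PER_SECOND[mode] * duration_seconds

for mode in RATES_PER_SECOND:
    print(f"10-second clip, {mode}: ${estimate_cost(mode, 10):.2f}")
# -> fast: $0.40, pro: $0.80, ultra: $1.60
```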
Open-source strategy empowers the developer ecosystem
Lightricks is taking a genuinely open approach: core code components, datasets, and inference tools are already public on GitHub, and the complete model weights are planned for release this autumn. Developers can access the LTX-2 API through mainstream platforms such as Fal, Replicate, and ComfyUI, and customize or extend it for their own business scenarios.
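As a minimal sketch of what programmatic access could look like through one of those platforms (the model identifier and input fields below are illustrative assumptions, not the documented LTX-2 schema), a Replicate-style call might be:

```python
# Hypothetical call to a hosted LTX-2 endpoint via the Replicate Python client.
# The model slug and input keys are assumptions for illustration; check the
# platform's model page for the real schema. Requires REPLICATE_API_TOKEN.
import replicate

output = replicate.run(
    "lightricks/ltx-2",  # hypothetical model identifier
    input={
        "prompt": "Drone shot over a coastal city at sunset, cinematic lighting",
        "duration": 8,          # seconds, within the announced 6-10 s range
        "resolution": "1080p",
        "audio": True,          # request synchronized audio
    },
)
print(output)  # typically a URL or file reference to the rendered video
```

ComfyUI access, by contrast, is node-based inside its graph editor rather than scripted, while Fal exposes a similar hosted-inference client.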
The model has been fully integrated into the LTX platform, and API access is gradually being opened to early partners by application. Zeev Farbman, CEO of Lightricks, said: "The diffusion model is no longer just a simulation of the production process; it is productivity itself. LTX-2 combines audio-visual synchronization, ultra-high-definition picture quality, longer generation, and extreme efficiency, aiming to empower everyone from independent creators to corporate teams."
Redefining the professional creative workflow
LTX-2's three-tier mode design mirrors the real creative process: Fast mode for exploring ideas, Pro mode for aligning the team, and Ultra mode for delivering the final product. The design follows the principle of matching rendering quality to creative intent, avoiding wasted compute during the testing phase.
For advertising agencies, film and television directors, and brand marketing teams, LTX-2 offers unprecedented flexibility: every stage from conception to delivery can be completed on a single platform, with no need to switch between tools. This consolidation of the workflow significantly shortens production cycles and frees creative teams to focus on the content itself.
With the official launch of LTX-2, AI video generation technology is moving from the laboratory to real-world scenarios. The combination of audio-video synchronization, native 4K, consumer-grade hardware support, and open-source transparency may reshape the competitive landscape of the entire digital content production industry.