Luma AI has introduced a new generative video model, Ray2, now available on the Dream Machine platform for paid subscribers. The model generates high-quality, realistic videos from text prompts in seconds. At launch, Ray2 supports text-to-video generation only; image-to-video, video-to-video, and editing features are planned for future updates.
"Introducing Ray2, a new frontier in video generative models. Scaled to 10x compute, Ray2 creates realistic videos with natural and coherent motion, unlocking new freedoms of creative expression and visual storytelling. Available now."
— Luma AI (@LumaLabsAI), January 15, 2025
Ray2 is built on a multimodal transformer architecture trained directly on video data, producing visually consistent, dynamic clips of five to ten seconds. Its defining strength is natural motion, with smooth transitions and more cinematic framing. Users steer the output with detailed text prompts, such as "a person running through snow with explosions" or "hands slicing steak with rising steam."
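For developers with API access, a text-to-video request against Luma's Dream Machine API would plausibly look like the minimal sketch below. The endpoint path, model name ("ray-2"), and response fields are assumptions based on the publicly documented generations API and may differ from the current reference, so treat this as an illustration rather than a verified integration.

```python
import os
import time
import requests

API_BASE = "https://api.lumalabs.ai/dream-machine/v1"  # assumed endpoint
HEADERS = {
    "Authorization": f"Bearer {os.environ['LUMAAI_API_KEY']}",
    "Content-Type": "application/json",
}

# Submit a text-to-video generation request (model name assumed to be "ray-2").
resp = requests.post(
    f"{API_BASE}/generations",
    headers=HEADERS,
    json={
        "prompt": "a person running through snow with explosions",
        "model": "ray-2",
    },
)
resp.raise_for_status()
generation = resp.json()

# Poll until the clip is rendered, then print the resulting video URL.
while generation.get("state") not in ("completed", "failed"):
    time.sleep(5)
    generation = requests.get(
        f"{API_BASE}/generations/{generation['id']}", headers=HEADERS
    ).json()

if generation["state"] == "completed":
    print("Video URL:", generation["assets"]["video"])
else:
    print("Generation failed:", generation.get("failure_reason"))
```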
The Ray2 model is available to Dream Machine paid subscribers, and Luma AI plans to bring it to Amazon Bedrock, expanding access for developers and businesses. This opens the model to a wide audience, including content creators, marketers, and professionals in gaming, entertainment, and e-commerce.
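If Ray2 does arrive on Amazon Bedrock, invoking it would presumably follow the asynchronous pattern Bedrock already uses for video models such as Amazon Nova Reel: start an async job, then poll until the rendered video is written to S3. The model ID and input schema in the sketch below are placeholders, not confirmed values.

```python
import time
import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")

# Start an asynchronous video generation job.
# "luma.ray-v2:0" and the modelInput fields are assumed placeholders;
# check the Bedrock model catalog for the actual identifier and schema.
job = client.start_async_invoke(
    modelId="luma.ray-v2:0",
    modelInput={"prompt": "hands slicing steak with rising steam"},
    outputDataConfig={
        "s3OutputDataConfig": {"s3Uri": "s3://my-bucket/ray2-output/"}
    },
)

# Poll the job until Bedrock reports completion; the finished video
# is delivered to the S3 prefix configured above.
while True:
    status = client.get_async_invoke(invocationArn=job["invocationArn"])
    if status["status"] in ("Completed", "Failed"):
        print("Job ended with status:", status["status"])
        break
    time.sleep(10)
```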