Runway has announced the release of a new generative AI model called Gen-4, now available to both individual and enterprise clients. The model produces high-quality videos while maintaining consistency of characters, locations, and objects across different scenes. Gen-4 lets users generate new images and videos from visual references and instructions, preserving stylistic and content integrity without any additional training.
Today we're introducing Gen-4, our new series of state-of-the-art AI models for media generation and world consistency. Gen-4 is a significant step forward for fidelity, dynamic motion and controllability in generative media.
Gen-4 Image-to-Video is rolling out today to all paid…
— Runway (@runwayml) March 31, 2025
According to Runway, Gen-4 sets a new standard in video generation, surpassing the previous Gen-3 Alpha model. It is distinguished by its ability to generate dynamic videos with realistic motion and close adherence to the specified prompts and parameters. Runway also emphasizes that Gen-4 significantly improves the model's understanding of real-world physics, making it useful for creating complex video narratives.
This new model is part of Runway’s suite of video generation tools and helps the company stand out among competitors such as OpenAI and Google.