The examples Meta presented show Movie Gen generating videos of animals swimming or surfing, and using real users' photos to depict them performing actions such as painting. The model can also synchronize sound effects with video content, significantly broadening its range of uses. Movie Gen can edit existing videos as well: in one example, the tool placed pom-poms in the hands of a man running through the desert; in another, it turned dry asphalt into puddles beneath a skateboarder.
According to Meta, generated videos can run up to 16 seconds, while the accompanying audio track can be up to 45 seconds long. In Meta's evaluations, Movie Gen performed on par with competing products.
The tool's release comes amid an ongoing debate in Hollywood over the use of AI in film production, sparked when OpenAI unveiled its Sora model earlier this year. Unlike its Llama series of language models, Meta does not plan to make Movie Gen available to developers; instead, it will collaborate with entertainment industry representatives and integrate the tool into its own products.
To create Movie Gen, the company used a combination of licensed and publicly available data, according to a research paper published by Meta. The company also said it continues to assess the risks associated with AI use, including the potential creation of deepfakes, a particular concern during elections in various countries.