Character.AI has announced its new generative video model, AvatarFX, currently available in closed beta. The model animates platform characters in a range of styles, from realistic to cartoonish, and can generate videos not only from text prompts but also from uploaded images. This lets users turn a photo of a person into a video in which the face, hands, and body move in sync with speech or a song.
📽️Say hello to AvatarFX — our cutting-edge video generation model. Cinematic. Expressive. Mind-blowing. Dive in: https://t.co/aF5zDrKLIK #CharacterAI #AvatarFX

— Character.AI (@character_ai) April 22, 2025
According to Character.AI, the model supports long videos, can handle dialogue between multiple characters, and lets users set keyframes for more precise control over a scene. The company also claims that, unlike popular tools such as Pika or Runway Gen‑3, AvatarFX does not restrict clip length or resolution, and that it reduces the animation artifacts typical of such models.
The model is already available to CAI+ subscribers, and a waitlist is open for a wider audience on the web and mobile platforms. The company plans to eventually open access to everyone.
To prevent misuse, the company is adding watermarks to generated videos and applying filters that detect and alter images of real people, particularly minors, politicians, and public figures. These measures are intended to prevent the creation of misleading or manipulative videos.