Google has introduced Magenta RealTime, an open AI model for real-time music creation. Its distinguishing feature is that it can be steered with text prompts, audio samples, or a combination of the two, letting users experiment with different ways of shaping the music as it plays. The model is built on the Transformer architecture, has 800 million parameters, and was trained on approximately 190,000 hours of mostly instrumental music.
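To make the prompting workflow more concrete, here is a minimal sketch of what a chunked, stateful generation loop could look like in Python. All names in it (the magenta_rt package, MagentaRT, embed_style, generate_chunk) are assumptions for illustration rather than the confirmed interface; the official GitHub and Hugging Face pages have the actual API.

```python
# A minimal sketch of real-time-style generation with Magenta RealTime.
# NOTE: all names below (magenta_rt, MagentaRT, embed_style, generate_chunk)
# are assumptions for illustration; consult the official repo for the real API.
from magenta_rt import system

mrt = system.MagentaRT()  # load the ~800M-parameter model (TPU/GPU recommended)

# Describe the target sound with text; an audio clip could be embedded the
# same way and blended with the text prompt to steer the style.
style = mrt.embed_style("warm lo-fi piano with soft drums")

state, chunks = None, []
for _ in range(8):  # each step yields the next short chunk of audio
    chunk, state = mrt.generate_chunk(state=state, style=style)
    chunks.append(chunk)  # play or buffer the chunk as it arrives
```

Because generation happens chunk by chunk with carried-over state, the style embedding could in principle be swapped between iterations to steer the music while it plays, which is the kind of live interaction the real-time design targets.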
The model is suitable for both enthusiasts and musicians who want to try creating music with AI. Anyone interested can test Magenta RT for free in Google Colab using a TPU runtime.
The code and the model itself are available under open licenses on GitHub and Hugging Face, allowing developers and musicians to freely use and modify the tool. Google plans to add support for running the model locally, expand customization options, and publish a research paper on Magenta RT in the near future.