In 2023, the track “Heart on My Sleeve,” created with AI to mimic the voices of Drake and The Weeknd, garnered millions of plays and drew attention to the problem of identifying fake music. In response, music companies have begun deploying systems to detect synthetic audio at multiple stages of the music supply chain: from preparing model training data, to uploading tracks to platforms, to licensing rights.
Startups such as Vermillio and Musical AI are building tools that automatically tag AI-generated elements in tracks by attaching the corresponding metadata. These systems decompose a composition into individual components, such as vocal timbre and melodic phrases, making it possible to detect and label the fragments created by AI.
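Neither startup has published its metadata format, so as a rough illustration only, per-segment AI-provenance tags of the kind described above could be represented as a sidecar record like the following. The schema, field names, and the `tag_segments` helper are all hypothetical, not Vermillio's or Musical AI's actual format:

```python
import json

# Hypothetical schema: per-segment AI-provenance tags for a track,
# stored as sidecar metadata (not any vendor's real format).
def tag_segments(track_id, segments):
    """segments: list of (start_sec, end_sec, component, ai_generated) tuples."""
    total = sum(end - start for start, end, _, _ in segments)
    ai_time = sum(end - start for start, end, _, ai in segments if ai)
    return {
        "track_id": track_id,
        "segments": [
            {"start": s, "end": e, "component": c, "ai_generated": ai}
            for s, e, c, ai in segments
        ],
        # Fraction of component-time flagged as AI-generated.
        "ai_share": round(ai_time / max(total, 1e-9), 3),
    }

meta = tag_segments("demo-001", [
    (0.0, 30.0, "vocals", True),   # e.g. an AI-cloned vocal line
    (0.0, 30.0, "melody", False),  # e.g. a human-written melodic phrase
])
print(json.dumps(meta, indent=2))
```

A downstream platform could read such a record at upload time and decide whether to label, demote, or reject the track based on the flagged share.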
Platforms such as YouTube and Deezer have already deployed internal systems that tag synthetic audio at upload. Deezer filters fully AI-generated tracks, reduces their visibility in search and recommendations, and plans to begin labeling such tracks for users. According to Deezer’s Head of Innovation Aurélien Hérault, in April the system flagged about 20 percent of new uploads as fully AI-generated.
Some companies, including Spawning AI, are developing a “Do Not Train” protocol that lets artists and rights holders mark their works as off-limits for AI training. However, adoption of this approach is only beginning, and there is currently no single standard for applying such labels.
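Spawning’s opt-out mechanism is commonly described as an `ai.txt` file served at a site’s root, analogous to `robots.txt`. The directives below are an illustrative sketch of that idea, not a verbatim copy of the spec; the exact field names and matching rules should be checked against Spawning’s documentation:

```
# ai.txt at the site root, alongside robots.txt
# (directive syntax is illustrative; consult the spec for exact fields)
User-Agent: *
Disallow: *.mp3
Disallow: *.wav
Disallow: *.flac
```

The fragility the article points to is visible here: a crawler that has never heard of `ai.txt` simply ignores it, which is why a single agreed-upon standard matters.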