Google, together with researchers from the Wild Dolphin Project, has introduced DolphinGemma, an AI model that opens a new avenue in the study of dolphin communication. By pairing the model with Pixel smartphones, scientists can, for the first time, not only analyze dolphin sounds in real time but also predict which vocalization is likely to come next.
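The prediction step works much like text autocomplete: vocalizations are converted into a sequence of discrete audio tokens, and the model estimates what comes next in the sequence. The sketch below is only a toy illustration of that idea using a simple bigram counter; the vocabulary size, the fake "tokenized recording", and the functions are placeholders, not DolphinGemma's actual tokenizer or architecture.

```python
import numpy as np

# Toy illustration of the predictive idea described above: treat a dolphin
# recording as a sequence of discrete audio tokens and predict the next one.
# The real model is a far larger neural network; this bigram counter is only
# a stand-in for the "predict the next signal" step.

VOCAB = 16  # hypothetical size of the audio-token vocabulary (assumption)

def train_bigram(token_seq):
    """Count token-to-token transitions and normalize into probabilities."""
    counts = np.ones((VOCAB, VOCAB))  # Laplace smoothing avoids zero rows
    for prev, nxt in zip(token_seq, token_seq[1:]):
        counts[prev, nxt] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def predict_next(probs, current_token):
    """Return the most likely next audio token given the current one."""
    return int(np.argmax(probs[current_token]))

# Fake "tokenized recording": in a real pipeline these IDs would come from
# an audio tokenizer running over hydrophone input, not random numbers.
rng = np.random.default_rng(0)
recording = rng.integers(0, VOCAB, size=500).tolist()

model = train_bigram(recording)
print("predicted next token after token 3:", predict_next(model, 3))
```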
DolphinGemma is built around Google's audio technologies, so a Pixel phone can stand in for specialized recording equipment. This greatly simplifies data collection in the field and cuts both power consumption and cost, which matters most for teams working directly in the open sea.
Scientists are already running underwater experiments with the Pixel 6, and this summer they plan to switch to the Pixel 9, which can run deep-learning models and pattern-matching analysis at the same time. This setup lets them not only record dolphin behavior but also generate sounds the animals could potentially understand, paving the way toward genuine interspecies communication.
Google has announced that DolphinGemma will be released as an open model as early as this summer. Although it was trained on recordings of Atlantic spotted dolphins, it could also be adapted to other species, including bottlenose and spinner dolphins. Open access to such a tool gives researchers around the world a chance to move closer to decoding communication among these intelligent animals.