Google is collaborating with researchers to learn how to decode dolphin vocalizations “in the quest for interspecies communication.”
The DolphinGemma AI model, announced today, aims to decode the clicks, whistles, and squawks dolphins make to enhance “our potential connection with the marine world.”
A mother dolphin (left) and the AI-powered version of the whistle she makes to call her calf (right) (Credit: Google)
Google trained the Gemma-based model on a “vast, labeled dataset” of dolphin sounds compiled by the Wild Dolphin Project (WDP), which has run the world’s longest-running underwater dolphin research project since 1985, Google says.
“I’ve been waiting for this for 40 years,” says Dr. Denise Herzing, founder and research director of the Wild Dolphin Project, in the video below. “Feeding dolphin sounds into an AI model like DolphinGemma will give us a really good look at if there are subtleties that humans can’t pick out. You’re going to understand what priorities they have, what they talk about.”
When researchers are in the field, they can record sounds on Pixel phones and analyze them with DolphinGemma. The model first converts the audio into discrete tokens with Google’s SoundStream tokenizer, then looks for patterns and sequences in those tokens. This can “uncover hidden structures and potential meanings within the dolphins’ natural communication—a task previously requiring immense human effort,” Google says.
Conducting this research on Pixel phones can significantly reduce costs and the need for custom hardware, Google says. The model can also predict the subsequent sounds a dolphin may make, “much like how large language models for human language predict the next word or token in a sentence.”
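Google has not published DolphinGemma’s code or interfaces, so the following is only a rough sketch of the two-stage idea described above: turn raw audio into discrete tokens, then predict the next token from the ones seen so far. The energy-bucketing “tokenizer” and bigram counter below are invented stand-ins for SoundStream and the Gemma-based sequence model, not anything Google has released.

```python
# Toy sketch (not Google's code): tokenize audio, then predict the next token.
import numpy as np
from collections import defaultdict, Counter

def tokenize(audio: np.ndarray, frame: int = 160, n_tokens: int = 16) -> list[int]:
    """Map fixed-size audio frames to integer tokens by bucketing their energy."""
    frames = audio[: len(audio) // frame * frame].reshape(-1, frame)
    energy = frames.std(axis=1)
    edges = np.quantile(energy, np.linspace(0, 1, n_tokens + 1)[1:-1])
    return np.digitize(energy, edges).tolist()

class BigramPredictor:
    """Counts which token tends to follow which -- a stand-in for a sequence model."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def fit(self, tokens: list[int]) -> None:
        for prev, nxt in zip(tokens, tokens[1:]):
            self.counts[prev][nxt] += 1

    def predict_next(self, token: int) -> int | None:
        following = self.counts.get(token)
        return following.most_common(1)[0][0] if following else None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    recording = rng.normal(size=16_000)   # 1 second of fake "hydrophone" audio
    tokens = tokenize(recording)
    model = BigramPredictor()
    model.fit(tokens)
    print("next token after", tokens[-1], "->", model.predict_next(tokens[-1]))
```

The real pipeline relies on a learned neural audio codec (SoundStream) and a transformer-style sequence model; the sketch only illustrates the shape of the workflow.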
Whistles (left) and burst pulses (right) generated during early testing of DolphinGemma. (Credit: Google)
Understanding dolphins is one thing, but what about speaking to them? That’s going to take a bit more work. Google says “eventually” the effort “may establish a shared vocabulary with the dolphins for interactive communication.” It would rely on pattern recognition as well as synthetic sounds the dolphins could learn, like teaching them a new language.
To that end, the WDP is working with the Georgia Institute of Technology, which developed an underwater computer system called CHAT (Cetacean Hearing Augmentation Telemetry), to teach the dolphins a simple, shared vocabulary.
CHAT introduces new artificial whistles, each associated with an object the dolphins enjoy, such as “sargassum, seagrass, or scarves.” Eventually, the dolphins may learn to mimic those whistles to request the items, a form of basic communication.
Atlantic spotted dolphins (Credit: Google)
DolphinGemma and CHAT can also work together, with DolphinGemma’s “predictive power [helping] CHAT anticipate and identify potential mimics earlier in the vocalization sequence, increasing the speed at which researchers can react to the dolphins and [making] interactions more fluid and reinforcing.”
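Neither Google nor the WDP has detailed how that integration works, but the general idea can be sketched: take the tokens heard so far, append the model’s predicted next tokens, and compare the result against the token patterns of CHAT’s known synthetic whistles, so a likely mimic is flagged before the dolphin even finishes it. The whistle patterns and matching function below are entirely hypothetical, for illustration only.

```python
# Hypothetical sketch (not the WDP/Georgia Tech CHAT software): match heard +
# predicted tokens against the token patterns of known synthetic whistles.
from difflib import SequenceMatcher

# Assumed token patterns for CHAT's object whistles -- invented for illustration.
KNOWN_WHISTLES = {
    "sargassum": [3, 3, 7, 9, 9, 4],
    "seagrass":  [1, 5, 5, 2, 8, 8],
    "scarf":     [6, 2, 2, 9, 1, 1],
}

def early_match(heard_plus_predicted: list[int], threshold: float = 0.8) -> str | None:
    """Return the closest known whistle's label if similarity clears the threshold."""
    best_label, best_score = None, 0.0
    for label, pattern in KNOWN_WHISTLES.items():
        score = SequenceMatcher(None, heard_plus_predicted, pattern).ratio()
        if score > best_score:
            best_label, best_score = label, score
    return best_label if best_score >= threshold else None

# Example: three tokens heard so far plus three the model predicts next.
print(early_match([3, 3, 7] + [9, 9, 4]))   # -> "sargassum"
```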
The WDP will deploy DolphinGemma this field season. Future versions could help study cetacean species beyond Atlantic spotted dolphins, such as bottlenose or spinner dolphins, though each would require fine-tuning for its specific vocalizations.