When baby mice cry, they do it to a beat that is synchronized to the rise and fall of their own breath. It’s a pattern that researchers say could help explain why human infants can cry at birth — and how they learn to speak.
Mice are born with a cluster of cells in the brainstem that appears to coordinate the rhythms of breathing and vocalizations, a team reports in the journal Neuron.
If similar cells exist in human newborns, they could serve as an important building block for speech: the ability to produce one or many syllables between each breath. The cells also could explain why so many human languages are spoken at roughly the same tempo.
“This suggests that there is a hardwired network of neurons that is fundamental to speech,” says Dr. Kevin Yackle, the study’s senior author and a researcher at the University of California, San Francisco.
Scientists who study human speech have spent decades debating how much of our ability is innate and how much is learned. The research adds to the evidence that human speech relies — at least in part — on biological “building blocks” that are present from birth, says David Poeppel, a professor of psychology and neural science at New York University who was not involved in the study.
But “there is just a big difference between a mouse brain and a human brain,” Poeppel says. So the human version of this building block may not look the same.
When baby mice cry to mom, a rhythm emerges
The study emerged from research on the ultrasonic distress calls that a newborn mouse makes when it is separated from its mother.
“We call it cries because the ultimate purpose is to have the mom find them and take them back to the nest,” Yackle says.
Analysis of the cries showed that they had similarities to the production of syllables in human speech.
“We saw that within a single breath there could be multiple cries, and when these occurred, they were recurring in a rhythm,” Yackle says. “So then the question was, can we find the origin of this cry rhythm?”
The team began tracing the numerous signals that control the muscles involved in producing the cries. Then they began looking for places in the brain where there was an overlap between cells involved in vocalizing and cells involved in breathing.
“There really is only one area in the brainstem that has an overlap,” Yackle says. “And so this is what really [brought] our attention to this node or cluster of cells.”
To make sure they’d found the right cluster, the team removed these cells from some newborn mice. The animals continued to breathe normally. But they stopped producing cries, or made cries that had no rhythm.
In other mice, the team tried electrically stimulating the cluster of cells. The animals immediately began producing rhythmic cries.
A brain circuit that could help explain human speech
Yackle suspects that a similar cluster of cells exists in humans. That would explain why we are able to cry from birth, and why even our earliest cries are coordinated with breathing and contain the rhythm of adult speech.
The presence of cells that act as a sort of metronome for human speech would explain why “people say about three to six syllables per second, no matter what language you’re speaking,” Poeppel says.
The cells could also provide some of the linguistic “LEGO blocks” we use to construct words and sentences, Poeppel says.
“Our words come out as a string of sound,” he says. “But you have to break it up into little parts.” The study may show how one of those parts — rhythmic syllables — is generated.
If that’s the case, fluent speech would still require people to learn how to adjust or override the innate systems that control both breathing and the production of sounds, Poeppel says.
“You can inhale deeply and then say just ‘bah,’” he says. “So that’s one syllable in one breath. But you can also inhale and go ‘bah, bah, bah, bah, bah, bah, bah, bah, bah.’”
And, of course, speech also involves many other brain circuits and networks that are far more complicated, Poeppel says. These allow us to do things like adjust our inflection, access a huge vocabulary, and, ultimately, transform ideas into a stream of sounds that can be decoded by another human brain.