Study Shows How Human Brain Detects the 'Music' of Speech

Researchers at UC San Francisco have identified neurons in the human brain that respond to pitch changes in spoken language, which are essential to clearly conveying both meaning and emotion.

The study was published online August 24, 2017, in Science by the lab of Edward Chang, MD, a professor of neurological surgery at the UCSF Weill Institute for Neurosciences, and was led by Claire Tang, a fourth-year graduate student in the Chang lab, MedicalXpress reports.

"One of the lab's missions is to understand how the brain converts sounds into meaning," Tang said. "What we're seeing here is that there are neurons in the brain's neocortex that are processing not just what words are being said, but how those words are said."

Changes in vocal pitch during speech, part of what linguists call speech prosody, are a fundamental part of human communication, nearly as fundamental as melody is to music. In tonal languages such as Mandarin Chinese, pitch changes can completely alter the meaning of a word, but even in a non-tonal language like English, differences in pitch can significantly change the meaning of a spoken sentence.

For instance, "Sarah plays soccer," in which "Sarah" is spoken with a descending pitch, can be used by a speaker to communicate that Sarah, rather than some other person, plays soccer; in contrast, stressing the final word ("Sarah plays soccer") indicates that Sarah plays soccer rather than some other game. And adding a rising pitch at the end of a sentence ("Sarah plays soccer?") indicates that the sentence is a question. A pitch contour of this kind can be traced computationally, as in the sketch below.
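
To make the idea of a pitch contour concrete, here is a minimal sketch, not taken from the study itself, of how the fundamental frequency (F0) of a recorded utterance can be traced over time using the open-source librosa library; the audio file name is a hypothetical placeholder.

    import librosa
    import numpy as np

    # Load a recording of an utterance such as "Sarah plays soccer?"
    # (the file name here is a made-up placeholder).
    y, sr = librosa.load("sarah_plays_soccer.wav", sr=None)

    # pYIN estimates the fundamental frequency (vocal pitch) frame by
    # frame, returning NaN for unvoiced frames such as many consonants.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),  # ~65 Hz, below typical speech pitch
        fmax=librosa.note_to_hz("C7"),  # well above typical speech pitch
        sr=sr,
    )
    times = librosa.times_like(f0, sr=sr)

    # A contour that rises toward the end of the utterance is one
    # acoustic cue that the sentence is a question.
    voiced = ~np.isnan(f0)
    print(f"Median pitch: {np.nanmedian(f0):.1f} Hz")
    print(f"Last voiced pitch: {f0[voiced][-1]:.1f} Hz at {times[voiced][-1]:.2f} s")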

The brain's ability to interpret these changes in tone on the fly is particularly remarkable, given that each speaker also has their own typical vocal pitch and style (that is, some people have low voices, others have high voices, and others seem to end even statements as if they were questions). Moreover, the brain must track and interpret these pitch changes while simultaneously parsing which consonants and vowels are being uttered, what words they form, and how those words are being combined into phrases and sentences, with all of this happening on a millisecond scale.
Image caption: Neural populations in the human auditory cortex respond to changes in vocal pitch in speech.
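
One standard way prosody researchers separate a speaker's relative pitch movements from their absolute voice register, offered here as an illustrative sketch rather than as the study's own analysis, is to express each pitch value in semitones relative to that speaker's median F0.

    import numpy as np

    def relative_pitch_semitones(f0_hz):
        # Express an F0 contour in semitones relative to the speaker's
        # own median pitch; NaN (unvoiced) frames pass through unchanged.
        median_f0 = np.nanmedian(f0_hz)
        return 12.0 * np.log2(f0_hz / median_f0)

    # The same rising contour spoken by a low voice and a high voice
    # yields nearly identical relative-pitch trajectories.
    low_voice = np.array([100.0, 105.0, 112.0, 125.0])   # Hz
    high_voice = np.array([200.0, 210.0, 224.0, 250.0])  # Hz
    print(relative_pitch_semitones(low_voice))
    print(relative_pitch_semitones(high_voice))

Because the high voice in this toy example is exactly an octave above the low one, the two printed trajectories are identical, which is the kind of speaker-independent representation of pitch change that the brain must compute.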
