Looking at how we listen
Wednesday 29 May 2013
UC graduate students are known for their braininess. At UC Berkeley, Adeen Flinker is piecing together a picture of the brain's intricacies and of how the human auditory system is wired.
Like the mute button on a TV remote control, our brains filter out unwanted noise so we can focus on what we’re listening to. But when it comes to following our own speech, a recent study by neuroscience doctoral student Flinker shows that instead of one homogeneous mute button, we have a network of volume settings that can selectively silence and amplify the sounds we make and hear.
He was part of a team that tracked the electrical signals emitted from the brains of hospitalized epilepsy patients. They discovered that neurons in one part of the patients’ hearing mechanism were dimmed when they talked, while neurons in other parts lit up.
The study offers new clues about how we hear ourselves above the noise of our surroundings and monitor what we say. Until this finding, it was not clear how the human auditory system is wired.
“We used to think that the human auditory system is mostly suppressed during speech, but we found closely knit patches of cortex with very different sensitivities to our own speech that paint a more complicated picture,” said Flinker.
“We found evidence of millions of neurons firing together every time you hear a sound right next to millions of neurons ignoring external sounds but firing together every time you speak,” Flinker added. “Such a mosaic of responses could play an important role in how we are able to distinguish our own speech from that of others.”
While the study doesn’t specifically address why humans need to track their own speech so closely, Flinker theorizes that, among other things, tracking our own speech is important for language development, monitoring what we say and adjusting to various noise environments.
“Whether it’s learning a new language or talking to friends in a noisy bar, we need to hear what we say and change our speech dynamically according to our needs and environment,” Flinker said.