Adding subtitles is a well-known way to make difficult-to-hear speech easier to understand – for example, TV subtitles provide a huge benefit to hearing-impaired individuals and are commonly added to interviews with heavily accented speakers of English. Subtitles do more than aid understanding, however: they also create the illusion that the speech itself is clearer. Scientists have long debated how information gained from written text, or other forms of prior knowledge, is exploited by our perceptual systems to help interpret noisy or ambiguous speech.
A new study published in the Journal of Neuroscience by CBU scientists Ediz Sohoglu, Jonathan Peelle, Bob Carlyon and Matt Davis used MEG to show how the brain combines sounds and prior knowledge to make sense of noisy speech. Volunteers in the experiment heard degraded recordings of spoken words and rated how clear each one sounded. Before some spoken words, volunteers read a written word that sometimes matched the speech they were about to hear. When volunteers had this prior knowledge, they reported that the speech sounded clearer – exactly as if it were less noisy. However, auditory brain responses went in opposite directions in these two cases: the MEG signal in the superior temporal gyrus increased for speech that was physically clearer, but decreased for speech that sounded clearer because of matching written text. This pattern is consistent with a theory of brain function called ‘predictive coding’, whereby the brain constantly predicts the sounds it expects to hear so that only unexpected sensory information (‘prediction error’) is processed in detail. This finding joins previous MEG research in providing important insights into the neural basis of uniquely human skills in speech perception, and helps explain how providing subtitles to hearing-impaired individuals changes their experience of speech. You can read the paper here.
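The core idea of predictive coding can be illustrated with a toy calculation: prediction error is simply the mismatch between the sound that is heard and the sound the brain predicted. The sketch below is not from the paper – the feature vectors and numbers are invented purely for illustration – but it shows why a matching written word leaves less unexplained signal for the auditory system to process, even when the speech itself is unchanged.

```python
# Toy sketch of prediction error (illustrative numbers only, not from the study).

def prediction_error(heard, predicted):
    """Total mismatch between heard and predicted sound features."""
    return sum(abs(h - p) for h, p in zip(heard, predicted))

# The same degraded speech input in both conditions.
noisy_speech = [0.2, 0.9, 0.4]

# Without a matching written word: a flat, uninformative prediction.
flat_prediction = [0.5, 0.5, 0.5]

# With a matching written word: the prediction closely anticipates the input.
informed_prediction = [0.25, 0.85, 0.45]

err_without = prediction_error(noisy_speech, flat_prediction)
err_with = prediction_error(noisy_speech, informed_prediction)

# Prior knowledge leaves less unexplained signal to process in detail,
# mirroring the reduced superior temporal gyrus response in the study.
assert err_with < err_without
```

In this simple picture, physically clearer speech changes the input itself, while prior knowledge changes the prediction – which is why the two manipulations can make speech sound equally clear yet drive the measured brain response in opposite directions.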