Our ability to understand speech, especially in noisy environments, is significantly improved by cross-modal cues such as lip reading and subtitles. How the brain creates these predictions, and reconciles them against what was actually heard, is highly controversial.
In this study, led by scientists and neurologists from the MRC CBU, patients with a rare speech disorder called nonfluent variant primary progressive aphasia (nfvPPA) were recruited over four years from hospitals in Cambridge, Oxford, London and Newcastle to have cutting-edge brain scans (7T fMRI) that can decode the pattern of neural activity in the brain. These patients have damage to the frontal areas of the brain that are involved in producing speech, but not to the temporal lobe areas that have traditionally been thought to be most important for understanding speech.
Patients and matched healthy individuals listened to speech shortly after reading words that either matched the speech, making it sound clearer, or mismatched, causing the brain to make the wrong prediction about what would be heard next. The brain scans showed that the brain regions that control mouth movements (left precentral gyrus) represented not only the sound patterns of spoken words (phonology) but also prediction errors. Importantly, the degree to which they represented these prediction errors was related to how much information the error contained about how the prediction could be improved next time. In simple terms, the scans showed that we use our motor cortex to imagine reading subtitles out loud and work out what sound that would produce, so that we can compare the sounds we actually hear against this prediction.
The brain scans also showed what happens when the prediction is incorrect and the brain has to work out what was really heard. A different brain region, the left inferior frontal gyrus, often known as Broca’s area, represented verified and violated predictions independently. This region facilitated the reconciliation of the prediction and the sound in the ‘mind’s ear’ – the echoic memory regions of the anterior superior temporal gyrus. This process could not be completed by the brains of the patients, which performed well when predictions were verified but were inflexible when the predictions were incorrect.
Overall, this work fundamentally changes and improves the way that we understand how the brain creates and uses predictions for speech, and how this can go wrong in patients with aphasia. It would not have been possible without the patients, their families, and the healthy volunteers, who have our eternal gratitude.
The full paper can be found here: https://www.sciencedirect.com/science/article/pii/S2211124723004333
Cope TE, Sohoglu E, … Patterson K, Davis MH*, Rowe JB*. “Temporal lobe perceptual predictions for speech are instantiated in motor cortex and reconciled by inferior frontal cortex.” Cell Reports 42.5 (2023).
*Joint senior authors