Human listeners are impressively successful at recognising words and understanding speech. The speed and accuracy of word recognition have been explained in terms of Bayesian inference: listeners combine prior knowledge, or predictions for upcoming speech sounds, with the sounds they hear to identify words quickly and accurately.
Recent work from MRC CBU scientists Ed Sohoglu (now at the University of Sussex), Loes Beckers (now at Radboud University Medical Center, Nijmegen, NL) and MRC CBU Programme Leader Matt Davis has revealed the specific computations that the brain performs when recognising spoken words. The team combined two types of brain imaging (fMRI and MEG) with computational models and neural decoding analyses to show how listeners recognise familiar words like “bingo” or “snigger” and made-up words like “binger” or “sniggo”. Their work shows that a region of the brain called the superior temporal gyrus computes prediction errors (the difference between predicted and heard sounds) during the second syllable of these items. These prediction error computations explain the speed with which listeners recognise familiar words and how they learn previously unfamiliar words.
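To make the idea of a prediction error concrete, here is a minimal Python sketch of this kind of computation. It is not the model used in the paper: the lexicon, the “bindle” competitor, the one-hot syllable features and the prior probabilities are all invented for illustration.

```python
# Toy sketch (not the published model): Bayesian word prediction and
# prediction error for a two-syllable item. All lexicon entries, feature
# vectors and prior values are made up for illustration.
import numpy as np

# Hypothetical lexicon: each word is a pair of syllables plus a prior
# probability standing in for word frequency.
lexicon = {
    "bingo":  (["bin", "go"],  0.6),
    "binger": (["bin", "ger"], 0.0),   # made-up word: no lexical prior
    "bindle": (["bin", "dle"], 0.4),   # hypothetical competitor
}

# Hypothetical acoustic features for each second syllable (one-hot,
# purely to keep the arithmetic readable).
syllable_features = {
    "go":  np.array([1.0, 0.0, 0.0]),
    "ger": np.array([0.0, 1.0, 0.0]),
    "dle": np.array([0.0, 0.0, 1.0]),
}

def predicted_second_syllable(first_syllable):
    """Posterior-weighted prediction of second-syllable features,
    given the first syllable and the lexical priors."""
    candidates = [(sylls, p) for sylls, p in lexicon.values()
                  if sylls[0] == first_syllable and p > 0]
    total = sum(p for _, p in candidates)
    prediction = np.zeros(3)
    for sylls, p in candidates:
        prediction += (p / total) * syllable_features[sylls[1]]
    return prediction

def prediction_error(heard_syllable, prediction):
    """Difference between heard and predicted sound features."""
    return syllable_features[heard_syllable] - prediction

pred = predicted_second_syllable("bin")
# Familiar word: the heard syllable was partly predicted, so the error is small.
print("error for 'bingo' :", np.abs(prediction_error("go", pred)).sum())
# Made-up word: the heard syllable was unpredicted, so the error is large.
print("error for 'binger':", np.abs(prediction_error("ger", pred)).sum())
```

In this toy example the second syllable of the familiar word yields a smaller prediction error than the second syllable of the made-up word, which is the qualitative pattern the study links to fast recognition of familiar words and to learning of new ones.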
Their findings on how typical listeners succeed in recognising speech have implications for studying individuals with hearing or language impairments, who can struggle to understand speech or to learn new words.
You can read more about this work in their published paper:
Sohoglu, E., Beckers, L., Davis, M.H. (2024) Convergent neural signatures of speech prediction error are a biological marker of spoken word recognition. Nature Communications, 15:9984.
https://doi.org/10.1038/s41467-024-53782-5
External web links:
https://profiles.sussex.ac.uk/p161597-ediz-sohoglu/
https://www.mosaics-eid.eu/network/esrs/esr2-loes-beckers/
https://www.mrc-cbu.cam.ac.uk/people/matt.davis/