Using non-invasive brain imaging methods, such as electroencephalography (EEG), researchers can detect or “decode” neural activity that becomes time-locked to sensory signals in the environment, including speech sounds.
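To give a flavour of how this kind of decoding works, below is a minimal, self-contained sketch in Python. It simulates EEG that weakly tracks a speech envelope and then reconstructs that envelope with a lagged linear (ridge-regression) decoder. The sampling rate, channel count, time lags, coupling strength, and regularisation are all illustrative assumptions, not the analysis used in the study.

```python
# Illustrative sketch only: simulated EEG and a lagged linear decoder
# (ridge regression) that reconstructs a "speech envelope" from the EEG.
# All parameters below are arbitrary assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)
fs = 64                        # sampling rate in Hz (assumed)
n_samples = fs * 60            # one minute of data
n_channels = 32                # assumed EEG montage size
lags = np.arange(0, fs // 4)   # use EEG from roughly the first 250 ms after each stimulus sample

# Simulate a slowly varying speech envelope and EEG that weakly tracks it.
envelope = np.convolve(rng.standard_normal(n_samples), np.ones(8) / 8, mode="same")
channel_weights = rng.standard_normal(n_channels)
eeg = 0.5 * envelope[:, None] * channel_weights + rng.standard_normal((n_samples, n_channels))

# Lagged design matrix: each row holds EEG from all channels at several delays.
X = np.zeros((n_samples, n_channels * len(lags)))
for i, lag in enumerate(lags):
    X[:n_samples - lag, i * n_channels:(i + 1) * n_channels] = eeg[lag:]

# Train on the first half of the recording, test on the second half.
half = n_samples // 2
X_train, X_test = X[:half], X[half:]
y_train, y_test = envelope[:half], envelope[half:]

# Ridge regression decoder: w = (X'X + lambda * I)^-1 X'y
lam = 1e3
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(X.shape[1]), X_train.T @ y_train)

# Decoding "accuracy": correlation between the reconstructed and true envelope.
reconstruction = X_test @ w
r = np.corrcoef(reconstruction, y_test)[0, 1]
print(f"Envelope reconstruction accuracy (Pearson r): {r:.2f}")
```

A correlation of this kind, computed between the brain-based reconstruction and the actual sound, is the sort of decoding accuracy that studies in this area compare across listening conditions.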
Neural decoding techniques could also be applied in clinical settings, for example to assess a patient’s hearing health. Speech perception, however, involves a complex interplay between sensory reception and cognitive processes originating in the brain, such as learning and memory. Although this interaction is important for successful speech perception, it is a source of ambiguity for neural decoding, which may capture both sensory and cognition-related brain activity.
In this study, scientists from the MRC CBU played an audiobook to participants whilst recording their brain activity using EEG. To disentangle the processing of the sound itself from cognitive processing, they systematically varied the audio quality of the audiobook, as well as whether or not the participants could understand the speech they listened to, thereby limiting their access to semantic prediction and other cognitive tools that listeners may use to make sense of a poor-quality signal.
The results suggest that neural decoding can reveal how well speech sounds are tracked by the listener’s brain activity, and that decoding accuracy is also enhanced when the listener understands what they are hearing. These findings support theories arguing that “top-down” brain systems, such as language and attention, influence how “bottom-up” sensory signals are processed; however, the differences in decoding were relatively small, and unintelligible speech was processed in a broadly similar way to intelligible speech. Because decoding therefore mainly reflects how the brain tracks the sound itself, with only a modest boost from comprehension, neural decoding could be a viable clinical tool that audiologists and other medical professionals might use to assess and improve hearing outcomes in the future.
The full paper can be read here: https://doi.org/10.1177/23312165241266316