Speech perception is fundamentally a multisensory process. In natural conversation, visual speech cues extracted from the speaker's head and facial movements can substantially enhance both perception and comprehension of speech. In my PhD research, I use psycholinguistic (online behavioural) and multivariate model-based neuroimaging (EEG/MEG) approaches to investigate the cognitive and neural mechanisms underlying audiovisual benefit for speech perception.
A central focus of my research is individual differences in audiovisual speech perception, which are substantial in the general population. By uncovering the mechanisms that distinguish successful integration of visual and auditory speech signals (e.g. in skilled lipreaders), I hope my work can contribute to the development of speech rehabilitation programmes, for example for people with age-related hearing loss.
My PhD is supervised by Dr Matt Davis and Dr Máté Aller, and funded by the MRC DTP, School of Clinical Medicine, and the Cambridge Trust.