
Speech is an intrinsically multisensory signal. In adverse listening conditions, access to visual cues such as the articulatory movements of the speaker's tongue, teeth, lips, and jaw can substantially improve comprehension. Visual speech is therefore an important target in rehabilitation for individuals with hearing loss, helping to preserve receptive speech and communication.

However, the neural mechanisms by which visual cues support speech perception, and the causes of the substantial individual differences in audiovisual speech perception, remain unclear. My research uses behavioural and neuroimaging (MEG) data to investigate correlates of this inter-individual variability and to identify the mechanisms that distinguish successful integration of visual and auditory speech.

My PhD is supervised by Matt Davis and Máté Aller, and funded by the MRC DTP, School of Clinical Medicine, and the Cambridge Trust.
