
Lexical information drives perceptual learning of distorted speech: evidence from the comprehension of noise-vocoded sentences

Matthew H. Davis, Ingrid S. Johnsrude, Alexis Hervais-Adelman, Karen Taylor and Carolyn McGettigan

MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge, UK

Abstract:

Speech comprehension is resistant to acoustic distortion, reflecting listeners’ ability to adjust perceptual processes to match the incoming speech. For noise-vocoded sentences, a manipulation that removes spectral detail from speech, word-report scores improved from near 0% to 70% correct over 30 sentences (Experiment 1). Learning was enhanced if listeners heard distorted sentences while they knew the identity of the undistorted target (Experiments 2 and 3). Learning was absent when listeners were trained with nonword sentences (Experiment 4), although the meaning of the training sentences did not affect learning (Experiment 5). Perceptual learning of noise-vocoded speech thus depends on higher-level information, consistent with top-down, lexically driven learning. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.
