Ediz Sohoglu
Research Staff, Hearing and language group
01223 273636
Research summary

My research aims to understand the brain bases of hearing and speech perception: How are sound vibrations transformed into perceptual representations of spoken words, music and other sounds of the acoustic world?

I use a range of techniques from cognitive neuroscience: human psychophysics, functional brain imaging (MEG, EEG, fMRI), and computational modelling.

Publications

E. Sohoglu and M. Chait (2016) Detecting and representing predictable structure during auditory scene analysis. eLife. 5: e19113.

E. Sohoglu and M. Davis (2016) Perceptual learning of degraded speech by minimizing prediction error. PNAS. 113(12): E1747-56.

E. Sohoglu and M. Chait (2016) Neural dynamics of change detection in crowded acoustic scenes. NeuroImage. 126: 164-172.

E. Sohoglu, J. Peelle, R. Carlyon, M. Davis (2014) Top-down influences of written text on perceived clarity of degraded speech. Journal of Experimental Psychology: Human Perception and Performance. 40(1): 186-99.

S. Amitay, J. Guiraud, E. Sohoglu, O. Zobay, B. Edmonds, Y.-X. Zhang, D. Moore (2013) Human decision making based on variations in internal noise: An EEG study. PLOS ONE. 8(7): e68928.

E. Sohoglu, J. Peelle, R. Carlyon, M. Davis (2012) Predictive top-down integration of prior knowledge during speech perception. Journal of Neuroscience. 32(25): 8443-53.

K. Molloy, D. Moore, E. Sohoglu, S. Amitay (2012) Less is more: latent learning is maximized by shorter training sessions in auditory perceptual learning. PLOS ONE. 7(5): e36929.

S. Amitay, L. Halliday, J. Taylor, E. Sohoglu, D. Moore (2010) Motivation and intelligence drive auditory perceptual learning. PLOS ONE. 5(3): e9816.