The goal of this programme is to develop computational models of basic cognitive processes such as memory, reading, and speech perception. Most of the modelling takes place within a Bayesian framework. The modelling enterprise is driven by behavioural and neuroimaging studies which provide data to guide the development of the models and to test their predictions.
Projects
Short and long-term memory
This programme investigates both the nature of short-term memory and the relationship between short- and long-term memory. We try to answer basic questions such as “How is information coded in short-term memory?”, “How does long-term memory help you remember information in the short term?”, and “How is information transferred from short- to long-term memory?”. If you have to remember something for a few seconds, it’s much easier if the information is familiar: it’s easier to remember your telephone number than a random sequence of digits, and easier to remember very common words than very rare words. There are many ways that information in long-term memory might help short-term memory. For example, very familiar patterns might form ‘chunks’ (BBC, IBM, etc.). From a Bayesian perspective, the process of remembering can be seen as combining prior information from long-term memory with evidence from short-term memory to generate the best interpretation of that evidence. We can build computational models of short-term memory that capture how these different possibilities might operate, and then design experiments to test between them.
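As a toy illustration of this prior-plus-evidence idea (not one of the lab’s published models), the Python sketch below combines a noisy short-term trace of an item with a long-term frequency prior over a tiny made-up vocabulary. The candidate words, frequencies, and noise level are all illustrative assumptions.

```python
# A minimal sketch of the Bayesian view described above: recall combines a
# long-term-memory prior over candidate items with noisy short-term-memory
# evidence to find the most probable interpretation. Vocabulary, frequencies,
# and noise level are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

candidates = ["cat", "cot", "cut", "kit"]                 # hypothetical vocabulary
log_prior = np.log(np.array([0.50, 0.30, 0.15, 0.05]))    # LTM familiarity (assumed)

def log_likelihood(trace, word, noise=10.0):
    """Log-probability of a noisy STM trace given that `word` was encoded.
    Letters are coded as integers and corrupted with Gaussian noise."""
    encoded = np.array([ord(c) for c in word], dtype=float)
    return -np.sum((trace - encoded) ** 2) / (2 * noise ** 2)

# Encode "cat" into a noisy short-term trace.
trace = np.array([ord(c) for c in "cat"], dtype=float) + rng.normal(0, 10.0, 3)

log_post = np.array([log_prior[i] + log_likelihood(trace, w)
                     for i, w in enumerate(candidates)])
post = np.exp(log_post - log_post.max())
post /= post.sum()

for w, p in zip(candidates, post):
    # Familiar (high-prior) items win when the trace alone is ambiguous.
    print(f"P({w} | trace) = {p:.3f}")
```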
Neuroimaging of short-term memory
Neuroimaging studies of short-term memory have generally attempted to identify which regions of the brain are involved in functions such as storage, retrieval and rehearsal. A limitation of this approach is that it’s hard to know whether a brain region that is active while information is being maintained in memory is actually storing or representing that information, or merely controlling or attending to memory processes. In our work we use multi-voxel pattern analysis, a technique designed to analyse patterns of information in the brain, to identify the areas responsible for storing information, and also to identify brain areas involved in learning. The focus of this work is on how we remember sequences in short-term memory and how we come to develop more enduring representations of those sequences in long-term memory. Remembering the order in which events occur is vital for all sorts of everyday activities; for example, there’s no point remembering the digits in a telephone number unless you also remember the order in which they appear.
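The sketch below shows the basic logic of a multi-voxel pattern analysis, using simulated rather than real fMRI data; the voxel counts, trial numbers and the two sequence conditions are illustrative assumptions. If a classifier can decode which sequence was held in memory from the spatial pattern of activity in a region, that region carries information about the sequence, not just a general level of engagement.

```python
# A minimal MVPA-style decoding sketch on simulated "voxel" patterns.
# All numbers (trials, voxels, signal strength) are illustrative assumptions.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

n_trials, n_voxels = 80, 100
labels = np.repeat([0, 1], n_trials // 2)        # two remembered sequences

# Simulated voxel patterns: each sequence evokes a slightly different
# spatial pattern in this region, buried in trial-by-trial noise.
pattern_a = rng.normal(0, 1, n_voxels)
pattern_b = rng.normal(0, 1, n_voxels)
signals = np.where(labels[:, None] == 0, pattern_a, pattern_b)
data = signals * 0.3 + rng.normal(0, 1, (n_trials, n_voxels))

# Cross-validated decoding: above-chance accuracy implies the region's
# activity pattern carries information about which sequence was in memory.
clf = LinearSVC()
scores = cross_val_score(clf, data, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```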
Reading
How do people read? Let’s start by pretending that human perception were perfect. How would a perfect, or optimally designed, system work? Might people come pretty close to behaving like an optimal system? Perhaps rather surprisingly, it seems that they do. If we assume that perception works by collecting noisy evidence from the input (in this case, from the earliest stages of the visual system), we can construct a formal model of how people should behave when reading individual words, or when performing common laboratory tasks such as deciding whether a string of letters forms a real word or a nonsense word. This is the principle behind the Bayesian Reader model (Norris, 2006, 2009; Norris & Kinoshita, 2012). This simple idea turns out to give a principled explanation of a wide range of experimental data on reading.
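To make the evidence-accumulation idea concrete, here is a schematic Python sketch rather than a reimplementation of the published Bayesian Reader: the reader repeatedly samples noisy perceptual evidence about a printed word, updates a posterior over a tiny lexicon that starts from a word-frequency prior, and responds once one candidate’s posterior probability crosses a threshold. The lexicon, frequencies, noise level and threshold are illustrative assumptions.

```python
# A schematic sketch of Bayesian evidence accumulation for word recognition.
# Not the published model: lexicon, frequencies, noise and threshold are assumed.
import numpy as np

rng = np.random.default_rng(2)

lexicon = {"word": 0.6, "work": 0.3, "ward": 0.1}     # hypothetical frequencies
words = list(lexicon)
log_post = np.log(np.array(list(lexicon.values())))   # start from the frequency prior

def letters_to_vec(w):
    return np.array([ord(c) for c in w], dtype=float)

stimulus = letters_to_vec("word")
noise = 8.0          # perceptual noise on each sample
threshold = 0.95     # respond once one word's posterior probability exceeds this

for step in range(1, 200):
    sample = stimulus + rng.normal(0, noise, stimulus.shape)   # one noisy glimpse
    # Update each word's log-posterior with the log-likelihood of the sample.
    for i, w in enumerate(words):
        log_post[i] += -np.sum((sample - letters_to_vec(w)) ** 2) / (2 * noise ** 2)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    if post.max() > threshold:
        print(f"identified '{words[post.argmax()]}' after {step} samples")
        break
```

Because the prior is the word’s frequency, high-frequency words need less perceptual evidence to reach the decision threshold, which is the kind of frequency effect the model is meant to capture.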
The latest version of this model (Norris & Kinoshita, Psychological Review, 2012) addresses the question of how people represent the order of letters in words. The model explains how we can read the famous “Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy” email.
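One intuitive way to see why jumbled words remain readable is to assume that letter-position information is itself noisy, so each observed letter provides evidence about letters at nearby positions and transpositions cost relatively little. The toy scoring function below illustrates that idea; it is not the Norris & Kinoshita model, and the candidate words and position-uncertainty value are assumptions.

```python
# A toy illustration of noisy letter-position coding: each stimulus letter is
# matched against nearby positions in a candidate word, weighted by a Gaussian
# over positional distance. Candidates and the position-uncertainty are assumed.
import numpy as np

def match(stimulus, word, pos_sd=1.0):
    """Sum, over stimulus letters, of their best position-weighted match in `word`."""
    score = 0.0
    for i, s in enumerate(stimulus):
        weights = [np.exp(-(i - j) ** 2 / (2 * pos_sd ** 2)) * (s == c)
                   for j, c in enumerate(word)]
        score += max(weights, default=0.0)
    return score

# The transposed string still matches "cambridge" far better than its rivals.
for candidate in ["cambridge", "cartridge", "cambodia"]:
    print(candidate, round(match("cmabrigde", candidate), 2))
```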
Speech recognition
Our work on speech recognition focusses on building a computational model of how people recognise words in continuous speech. This is exemplified by the Shortlist B model that I developed with James McQueen (Norris & McQueen, 2008). Shortlist B represents a considerable advance over the original Shortlist model. First, it uses more realistic input derived from perceptual confusion data. Second, and much more importantly, it replaces the interactive activation framework of the original Shortlist model (now known as Shortlist A, for ‘activation’) with Bayesian methods. The model’s behaviour follows almost entirely from the simple assumption that listeners approximate optimal Bayesian recognisers. One consequence is that the model is much simpler than either the original Shortlist model or TRACE: it requires far fewer parameters. The model simulates data on speech segmentation, word frequency, and perceptual similarity. The paper also describes a Bayesian implementation of the Merge model (Norris, McQueen & Cutler, 2000), based on the procedures described in the Bayesian Reader model (Norris, 2006).
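The toy sketch below illustrates the spirit of Bayesian competition between alternative parses of continuous input; it is not the published Shortlist B implementation. Letters stand in for phonemes, the input is assumed to be heard without error, and the mini-lexicon and word frequencies are invented. Each way of segmenting the input into words receives a probability proportional to the product of its words’ frequency-based priors, so “a nice” beats “an ice” simply because its words are jointly more probable.

```python
# A toy sketch of Bayesian competition between segmentations of continuous
# input. Letters stand in for phonemes; lexicon and frequencies are assumed.
import numpy as np

lexicon = {"a": 0.40, "an": 0.20, "ice": 0.05, "nice": 0.30, "iced": 0.05}

def parses(segment):
    """All ways of exhaustively parsing `segment` into lexicon words."""
    if not segment:
        return [[]]
    results = []
    for n in range(1, len(segment) + 1):
        word = segment[:n]
        if word in lexicon:
            results += [[word] + rest for rest in parses(segment[n:])]
    return results

input_string = "anice"            # ambiguous: "a nice" vs "an ice"
paths = parses(input_string)
scores = np.array([np.prod([lexicon[w] for w in path]) for path in paths])
posterior = scores / scores.sum()

for path, p in zip(paths, posterior):
    print(" ".join(path), f"{p:.2f}")
```

In the full model the likelihoods of the individual phonemes (derived from perceptual confusion data) would also enter each path’s score, so segmentation, frequency and perceptual similarity all fall out of the same Bayesian computation.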
Computational modelling
All of my work is designed to enable me to build computer models of cognitive processes such as reading, memory and speech recognition. If you have a theory about how something works, it’s best to formulate it as a computer program because then you can be sure exactly what the model predicts and use the model to simulate your data. But there’s more to modelling than just being able to fit the data. The models have to help you understand the mental processes you are interested in. Most of my modelling is done in a Bayesian framework. The general idea has been to see how well we can explain aspects of perception or memory by assuming that people use the available perceptual resources in a near-optimal manner. This doesn’t mean that I believe that people really do behave optimally – clearly they don’t – but this is a good place to start, and turns out to give some very simple explanations for phenomena that have been difficult to explain in other frameworks.
Research team
Publications