Rehabilitation research topics and summaries
Executive/attentional function and rehabilitation.
A primary theme in our current work is how frontal and executive systems may facilitate recovery of function and how to support these processes if they themselves are damaged.
What do we mean by ‘frontal’, ‘executive’ or ‘attentional’ systems?
It has been known for a long time that damage to the frontal cortex of the brain can cause a range of problems for people. These have included difficulties in organizing behaviour towards achieving a goal (even if many basic abilities remain intact), poor insight into one's own errors, distractibility and problems in inhibiting inappropriate actions. People have been reported to be able to work along habitual, routine lines but to run into problems when circumstances change and new problems are encountered. Sometimes people become very concrete and literal in their thinking, or have difficulty initiating activity or sustaining their attention for very long. Others may make rash, hasty decisions, have difficulty empathizing with others and become indifferent to their relationships. It is rare that any particular person will show all of these characteristics and it should be said that sometimes damage to the frontal lobes seems to produce few if any problems. However, this is an example of where looking at what happens when an area of the brain is damaged tells us something about what it must be doing when it isn't damaged (i.e. coordinating actions towards a goal, preventing irrelevant distraction, maintaining attention over time, inhibiting inappropriate actions, planning and so on). Clearly, however, these regions of the brain are contributing in many ways to some quite complex processes and the race has been on to work out quite how they do it.
Why are frontal, executive and attentional systems relevant to rehabilitation?
Life is a complex, multifaceted puzzle and we need to achieve certain goals to cope and thrive. It is clear from the brief list above that many of the abilities that can be damaged by brain injury are those which are extremely useful if we are to make, follow and adapt plans and get along with others. It is not surprising that damage to these capacities would have a generally negative effect. But what if one of the things to which we have to adapt is a change in our abilities? Say that, as a result of brain injury, your left arm has stopped working very well. Ideally you need to notice that this is the case and try to adjust your behaviour to get around it or compensate for the problem. Immediately you can see how the type of skills attributed to the frontal lobes (detecting errors, flexibly adapting behaviour to a new situation, evaluating the success of your strategies) would be particularly useful in these circumstances and why damage to the frontal regions might slow recovery, adaptation and response to rehabilitation.
At another level of description, new ideas about how frontal (and other) regions work also stress their potential importance in rehabilitation. Theorists like John Duncan here at the CBU have suggested that regions of the frontal lobes are very good at flexibly encoding and holding onto goals and task structures. Within this view, these easily tuned, flexible, multipurpose regions exert an influence over other areas of the brain that are more fixed and specialised in particular types of processing, such as vision. In the example of a simple goal (“the target is hidden behind a red object”), the frontal representation and maintenance of this information influences regions of the brain that particularly respond to red objects and, as a result, in the battle between objects for attention, red objects become more and more likely to pop into awareness. In fact, although we continue to use ‘frontal’ as a shorthand, John and others’ work suggests that a network incorporating dorsal and ventral frontal cortex and regions of the parietal cortex tends to work together in these roles. To capture this distribution and the apparent flexibility in supporting effortful, attentionally demanding processes, he coined the phrase Multiple Demand (MD) areas – see John’s pages for a more coherent and complete account of these ideas. Dehaene and colleagues have made a somewhat similar argument, suggesting that frontal and parietal networks particularly contribute to a ‘global workspace’, associated with flexible, effortful, conscious processing.
So how is all that relevant to rehabilitation at a more general level? One intriguing possibility is that a flexible system, capable of holding on to a goal and which encourages other brain regions to cohere in processing relevant to that goal, could act as a form of ‘scaffolding’, providing temporary support to damaged brain systems and facilitating recovery. To return to our entirely hypothetical arm example, it is likely that, before the brain injury, you could use the arm skilfully and effortlessly with limited requirement for executive control (what you were doing may require executive control but not the production of the necessary movements). After your injury it may require all of your concentration and effort to make even a small movement. By ‘all of your concentration’ we mean that this goal must be actively maintained and that you must resist distraction or lapsing into habitual responses – the very type of capacities that have been linked with MD function. If a brain system was successful in doing this, it may provide the necessary sustained/repeated co-activation in surviving movement centres for them to begin to fire in sync. The ‘scaffolding’ that was needed to link them together would then no longer be (as) necessary. While the resulting level of movement ability may never return to its original efficiency, nevertheless some useful functional recovery would have occurred.
Now of course this is all very metaphorical but it does hopefully suggest at least three ways in which frontal/attentional/executive functions can contribute to recovery/rehabilitation – as very useful capacities in themselves, in allowing you to detect errors and flexibly adapt to your new situation and, potentially, as ‘neural scaffolding’ supporting residual capacity and the development of new functions.
Now – down to specifics.
Spatial neglect and spatial bias.
A patient with left spatial neglect’s drawing of a man from memory.
Quite a bit of our work has focused on a condition called unilateral spatial neglect. This is a striking (and oddly common) consequence of stroke in which people have difficulty noticing information on one side of space (see here for more information on neglect). We are interested in neglect both as a thing in itself and as a sort of ‘model condition’ because the influence of frontal capacities on recovery seems very extreme. In particular, it has long been noted that patients who show persistent neglect overwhelmingly have damage to the right side of the brain and miss information on the left. They also tend to be very drowsy and to have difficulty maintaining their concentration. Work at the CBU confirmed this link in showing that patients with chronic neglect had particular difficulty in performing a non-spatial sustained attention task that has previously been linked with the function of the right frontal lobe. Work here and in other centres has also shown that patients may show more extreme neglect if they are asked to perform attentionally demanding tasks. One possibility, therefore, is that patients who show spontaneous recovery (and most fortunately do) have sufficient residual general attentional/executive capacities to overcome the bias. Our recent work has suggested that many patients may have this residual capacity but tend not to use it reliably (this is a common theme across our work). For example, when patients were asked to perform a spatial test under the impression that they were under time-pressure, they showed a significant improvement in their performance. Believing that you are under time-pressure tends to cause modest increases in alertness (you probably have personal experience of this in exams or when you are very late for an appointment). Such results suggest that the internal mechanisms required to attain an alert state may be relatively intact but insufficiently used by patients. A key research aim is to investigate how patients can be best helped to use these residual capacities.
The line cancellation performance of a patient with left neglect. He was asked to cross out all of the lines on the page.
If spatial neglect is modulated by a patient’s level of arousal we might expect that the apparent severity of the disorder would vary from one moment to the next. In our recent work we have demonstrated exactly that link – showing, for example, that a patient would be classified as having no neglect at one time and quite a severe disorder at another and that this change is predictable when alertness (as measured by response times) is taken into account. This is a potentially important clinical issue as a brief, one-off assessment might show no neglect and yet the patient could have considerable difficulties in everyday life.
Theory of Visual Attention (TVA), spatial bias and top-down control.
The work of Polly Peers and John Duncan has particularly focused on the application of Bundesen’s Theory of Visual Attention (TVA) to spatial bias. This computational model allows us to look separately at different aspects that influence overall attentional functioning such as spatial bias (a preference to attend to one side of space), visual processing speed (how quickly visual information can be processed and consciously identified), attentional control (how efficient we are at directing our processing resources to things that are important to us, and reducing processing resources to things that are irrelevant) and visual short term memory capacity (how many items in the visual field can gain access to a limited capacity store, from which they become available for conscious report).
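To make these parameters a little more concrete, the sketch below expresses Bundesen’s standard TVA rate equation in code and shows how ‘visual processing speed’ and ‘spatial bias’ can be read off it. It is a minimal illustration only: the numbers, the two-object display and the helper name tva_rates are our own assumptions rather than the fitting procedure used in the patient studies; attentional control appears only as how strongly the pertinence values favour the target category; and the visual short term memory limit (K) is not modelled at all.

```python
# Minimal, illustrative sketch of TVA-style parameters (not the fitting
# procedure used in the studies above; values and names are assumptions).

def tva_rates(eta, beta, pertinence):
    """Compute TVA processing rates v(x, i) for each object x and category i.

    eta[x][i]    : sensory evidence that object x belongs to category i
    beta[i]      : perceptual decision bias for category i
    pertinence[j]: current importance of category j (set by task goals)
    """
    # Attentional weight of each object: w_x = sum_j eta(x, j) * pi_j
    weights = [sum(e_xj * pertinence[j] for j, e_xj in enumerate(eta_x))
               for eta_x in eta]
    total_w = sum(weights)
    # Rate equation: v(x, i) = eta(x, i) * beta_i * w_x / sum_z w_z
    rates = [[eta[x][i] * beta[i] * weights[x] / total_w
              for i in range(len(beta))]
             for x in range(len(eta))]
    return rates, weights

# Two objects (one left, one right of fixation), two categories (target, distractor).
eta = [[0.8, 0.2],   # left object: mostly target-like evidence
       [0.7, 0.3]]   # right object
beta = [1.0, 1.0]
pertinence = [1.0, 0.3]   # goals favour the target category (top-down control)

rates, weights = tva_rates(eta, beta, pertinence)
C = sum(sum(row) for row in rates)                     # overall visual processing speed
spatial_bias = weights[0] / (weights[0] + weights[1])  # 0.5 would mean no side preference
print(C, spatial_bias)
```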
As discussed, whilst the spatial deficits seen in patients with disorders such as neglect may be striking, there is an increasing awareness that patients’ spatial problems may be exacerbated by less ‘visible’ difficulties in non-spatial aspects of attention. We have been studying patients who have damage to key areas of the attentional network, namely the frontal cortex and the parietal lobes, and who are medically stable and back living in the community. This group of patients is of particular interest for a number of reasons. Firstly, because they are medically stable they show less day-to-day variation in their abilities, allowing us to have a more certain understanding of their residual difficulties. Related to this, medical issues such as generalised brain swelling (which occurs in the early stages post-injury) have resolved, making it easier for us to relate their difficulties to structural changes in their brains. Finally, these patients are more able to take part in longer test sessions and so allow us to examine the profile of their difficulties more systematically.
Our work appears to indicate that patients’ difficulties are not limited to spatial problems alone, but that non-spatial aspects of attention are also important (Duncan et al., 1999; Peers et al., 2005). We have seen that problems in attending equally to both sides of space, and in controlling attention (attending most to things that are important), are present to some degree in all our patients with damage to the frontal or parietal cortex and are associated with the amount of damage these patients have to this network. However, it is also clear that some patients, particularly those with damage in the lower parts of the parietal lobe, additionally have slowed visual processing and a reduction in the amount of information that they can store in visual short term memory and thus report.
Why is this important? Current strategies to improve spatial functioning in patients with neglect mainly attempt to remedy the obvious spatial difficulties by, for example, cueing the patient to the left side of space or using prism lenses to temporarily adjust the patient’s spatial experience of the world. Such strategies are not universally successful, leaving many neglect patients with debilitating impairments even following rehabilitation. As other work in our group has shown, improving non-spatial aspects of attention such as alertness can help improve spatial functioning in neglect patients. Additionally, our work suggests that many patients with damage to the frontal and parietal attentional network may have subtle spatial deficits and yet no problems with everyday functioning. It is possible that it is the presence of additional non-spatial attentional problems in neglect patients (as opposed to a purely spatial difficulty) that causes the problems these patients have. If that is the case, we may be more successful in treating neglect if we concentrate on improving non-spatial deficits. We are currently setting out to examine visual processing speed more closely in patients with damage to the frontal and parietal lobes, looking at the effects of alertness, attentional control and readiness on visual processing speed. It is hoped that we will gain a greater understanding of the processing problems patients have and then see whether training patients to improve non-spatial aspects of attention has knock-on benefits for their spatial functioning.
- Key references
- George, M. S., Mercer, J. S., Walker, R., & Manly, T. (2008). A demonstration of endogenous modulation of unilateral spatial neglect: The impact of apparent time-pressure on spatial bias. Journal of the International Neuropsychological Society, 14(1), 33-41.
- Peers, P., Cusack, R., & Duncan, J. (2006). Modulation of spatial bias in the dual task paradigm: Evidence from patients with unilateral parietal lesions and controls. Neuropsychologia, 44(8), 1325-1335.
- Manly, T., Woldt, K., Watson, P., & Warburton, E. (2001). Is motor perseveration in unilateral neglect ‘driven’ by the presence of neglected left-sided stimuli? Neuropsychologia, 40(11), 1794-1803.
- Manly, T. (2001). Developments in the rehabilitation of unilateral neglect. Advances in Clinical Neurosciences and Rehabilitation, 1(4), 18-19.
- Muggleton, N., Postma, P., Moutsopoulou, K., Nimmo-Smith, I., Marcel, A., & Walsh, V. (2006). TMS over right posterior parietal cortex induces neglect in a scene-based frame of reference. Neuropsychologia, 44(7), 1222-1229.
- Ho, A. K., Manly, T., Nestor, P. J., Sahakian, B. J., Robbins, T. W., Rosser, A. E., & Barker, R. A. (2003). A case of unilateral neglect in Huntington’s disease. Neurocase, 9(3), 261-273.
Key TVA references
- Duncan, J., Bundesen, C., Olson, A., Humphreys, G., Chavda, S., & Shibuya, H. (1999). Systematic analysis of deficits in visual attention. Journal of Experimental Psychology: General, 128, 450-478.
- Peers, P. V., Ludwig, C. J., Rorden, C., Cusack, R., Bonfiglioli, C., Bundesen, C., Driver, J., Antoun, N., & Duncan, J. (2005). Attentional functions of parietal and frontal cortex. Cerebral Cortex, 15, 1469-1484.
Extrapolation from spatial neglect studies to the healthy population.
An interesting question is whether the relationship between alertness and spatial bias is only apparent after serious brain injury or represents a somewhat amplified version of a normal pattern.
We have now examined this issue in adults and children and the results converge in suggesting that reduced alertness (induced by, for example, sleep deprivation or boringly repetitive tasks) is associated with a modest but reliable rightward shift in attention. It is important to stress that these changes are, for the most part, subtle and we can only detect them using many trials on sensitive paradigms. The main implications are theoretical: they suggest that the type of brain damage occasioned by right hemisphere stroke acts to grossly amplify these normal patterns – an issue of clear relevance to the disproportionate occurrence of spatial neglect in patients with right hemisphere damage and poorly maintained alertness, and to its rehabilitation.
Another important clinical aspect of this work has been looking at other groups in which low levels of alertness are more likely. While there are a number of arguments about its use, the Attention Deficit Hyperactivity Disorder (ADHD) diagnosis has been associated with poor performance on boring sustained attention tasks, with suggested differences in right hemisphere processing and with a paradoxical response to stimulant medication such as Ritalin (methylphenidate). As part of this work to date we (Melanie George, Veronika Dobler, Tom Manly and others) have worked with three boys who all met diagnostic criteria for ADHD, had very poor sustained attention (without obvious neurological damage or illness) and showed a considerable rightward bias (e.g. completely ignoring some targets on the left of a cancellation sheet). Important findings from this work, aside from highlighting bias as a clinical issue that may not be routinely assessed, have included that the bias may be reduced by external alerting (e.g. with loud tones) as well as by medication and, conversely, that the bias gets worse with lowered alertness (e.g. after long periods of performing a boring, repetitious task).
- Key references
- Dobler, V. B., Anker, S., Gilmore, J., Robertson, I. H., Atkinson, J., & Manly, T. (2005). Asymmetric deterioration of spatial awareness with diminishing levels of alertness in normal children and children with ADHD. Journal of Child Psychology and Psychiatry, 46(11), 1230-1248.
- Dodds, C. M., van Belle, J., Peers, P. V., Duncan, J., Cusack, R., & Manly, T. (in press). Rightward shift in spatial awareness with time-on-task and increased cognitive load. Neuropsychology.
- Manly, T., Dobler, V. B., Dodds, C. M., & George, M. A. (2005). Rightward shift in spatial awareness with declining alertness. Neuropsychologia, 43(12), 1721-1728.
Non-spatial attention and executive function.
As discussed above, damage to the brain can induce a range of problems including difficulty in sustaining attention and increased distractibility, problems in developing and following strategies and gross levels of disorganization in behaviour. A range of projects within the Rehabilitation Group at the CBU have addressed these questions (that are also at the core of the Attention and Volition Group more generally).
Assessment of non-spatial executive and attentional functions
The Test of Everyday Attention (TEA)
Note: Queries and correspondence about acquiring, administering or interpreting the Test of Everyday Attention should be directed to the test publishers in the first instance (this assists them in generating a series of frequently asked questions and they will pass on questions that they cannot answer to the authors). Correspondence of a more scientific nature should be directed to the first author, Professor Ian Robertson.
References
Robertson, I. H., Ward, A., Ridgeway, V., & Nimmo-Smith, I. (1994). Test of Everyday Attention. Bury St Edmunds: Thames Valley Test Company.
Robertson, I. H., Ward, A., Ridgeway, V., & Nimmo-Smith, I. (1996). The structure of normal human attention: The Test of Everyday Attention. Journal of the International Neuropsychological Society, 2, 523-534.
In comparison with the biases of unilateral neglect, in which performance on one side of space can be compared with the other in a single patient, assessing non-spatial attention is particularly challenging. Attention is thought to be a process or series of processes that contribute towards performance in many domains but which cannot be assessed directly. To assess attention you must inevitably ask people to do something (understand your instructions, look out for targets and press a button, make a verbal response, etc.) and variations in that something may be as or more important than ‘attention’ in explaining differences between people. So how do you get around this? One method – and that which underpins most experimental work in this area – is to use subtraction. You try to equate the non-attentional demands of two conditions while varying their attentional requirements. The ‘Posner Cueing Paradigm’ is a good example of this. In the task, participants are asked to respond as quickly as possible to the occurrence of a visual target which could appear in one of two on-screen boxes. Crucially, on some trials they may get a strong clue as to which box the target will appear in. When this happens, people are typically a little faster to respond than if they received no clue (or received an incorrect clue). The difference between response times under these conditions can therefore be attributed to the allocation of attention to one box or the other. The fact that people may vary in the general speed of their responses (e.g. due to poor vision or a damaged hand) is no longer as important because this will be true across all of the conditions.
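The subtraction logic can be made concrete with a toy calculation. In the sketch below the response times are invented purely for illustration; the point is simply that the cueing effect is a difference score, so a participant’s overall slowness largely cancels out.

```python
# Toy illustration (made-up response times) of the subtraction logic:
# the cueing effect is a difference between conditions, so overall speed
# differences between people largely cancel out.
from statistics import mean

# Hypothetical response times (ms) for one participant.
rt_valid_cue = [320, 335, 310, 342, 328]   # target appeared in the cued box
rt_no_cue    = [365, 372, 358, 380, 361]   # no clue about the target's location

cueing_effect = mean(rt_no_cue) - mean(rt_valid_cue)
print(f"Attentional benefit of the cue: {cueing_effect:.0f} ms")
```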
This type of logic works well in group studies. There are, however, some problems when it comes to individual assessment (i.e. the type of assessment that you use clinically), where measurement ‘noise’ within the practicable limits on testing time can make it rather unreliable. An alternative approach is to use very simple measures in which variations in attention are probably the major cause of differences between people. This was the basis of Ian Robertson and colleagues’ development of the Test of Everyday Attention (TEA). Here the attempt was to reduce requirements for memory, comprehension, general knowledge, strategy application etc. to a minimum while producing tasks with particular attentional demands. This work was influenced by the arguments set out by Posner and Petersen (1990) about different attention systems contributing to somewhat different demands. For example, the TEA included a sustained attention task in which participants were asked to keep a count of tones played at a slow and variable rate (the Elevator Counting subtest). It would be possible to fail this task if you could not hear the tones or count, but the most likely cause of failure would be a transient lapse in attention during this boring task (clinically, it is important to rule out other possible causes of poor performance). In this manner the TEA consisted of subtests that were designed to tap sustained attention and selective attention (argued by Posner and Petersen to reflect different underlying systems). To this was added the capacity to switch attention between two relatively easy activities (counting upwards and counting downwards) and to combine two activities (counting tones while searching for visual targets).

An advantage of developing a clinical measure such as the TEA is that it requires the collection of normative data from a large number of people in the healthy population. This allows patterns of covariance in performance to be examined (i.e. the degree to which performance on one test tends to vary in a similar or different manner to another test). If two tests tend to vary together across the normal population it is consistent with some common process – one that varies between individuals – contributing to both tasks. This provided a means of testing the ideas proposed by Posner and Petersen (and others) in the healthy population. The results were consistent with the idea that sustained attention and selective attention rely on somewhat different underlying systems and further suggested that switching attention from one thing to another similarly diverged. It is important to stress, however, that this separation is very unlikely to be absolute. Instead it is probable that sustained attention, selective attention, attentional switching and so on diverge at some points but also make demands on processes that they share. An example may be useful here: suppose there are three cars that begin their journey on the same motorway at the same time and then diverge to take three separate small roads, each car travelling the same distance. Differences between the journey times of the three cars are likely to be attributable to the conditions on the small roads, but the overall speed of the journeys may be as or more attributable to conditions on the shared motorway. This is important to keep in mind in clinical assessment.
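The motorway analogy can also be illustrated with a small simulation, sketched below under arbitrary assumptions about how much of each test score reflects the shared ‘motorway’ component: the more the two tests draw on the shared component, the more strongly they correlate across the simulated population, which is the logic applied to the TEA normative data.

```python
# Small simulation of the motorway analogy: each test score mixes a shared
# component (the motorway) with a test-specific component (each small road).
# The weightings are arbitrary assumptions, purely for illustration.
import random
random.seed(1)

def simulate(n=2000, shared_weight=0.6):
    sustained, selective = [], []
    for _ in range(n):
        motorway = random.gauss(0, 1)   # shared attentional capacity
        road_a = random.gauss(0, 1)     # specific to the sustained attention test
        road_b = random.gauss(0, 1)     # specific to the selective attention test
        sustained.append(shared_weight * motorway + (1 - shared_weight) * road_a)
        selective.append(shared_weight * motorway + (1 - shared_weight) * road_b)
    return sustained, selective

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

for w in (0.2, 0.6):
    s1, s2 = simulate(shared_weight=w)
    print(f"shared weight {w}: r = {correlation(s1, s2):.2f}")
```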
The Test of Everyday Attention for Children (TEA-Ch)
(Queries and correspondence about acquiring, administering or interpreting the Test of Everyday Attention for Children should be directed to the test publishers in the first instance as this assists them in generating a series of frequently asked questions and they will pass on questions that they cannot answer to the authors. Correspondence of a more scientific nature should be directed to the first author, Tom Manly [tom.manly@mrc-cbu.cam.ac.uk].)
Key references:
Manly, T., Anderson, V., Nimmo-Smith, I., Turner, A., Watson, P., & Robertson, I. H. (2001). The differential assessment of children’s attention: The Test of Everyday Attention for Children (TEA-Ch), normative sample and ADHD performance. Journal of Child Psychology and Psychiatry and Allied Disciplines, 42(8), 1065-1081.
Manly, T., Robertson, I. H., Anderson, V. A., & Nimmo-Smith, I. (1999). TEA-Ch – The Test of Everyday Attention for Children. Bury St Edmunds: Thames Valley Test Company.
One aspect of the work with the TEA in which we were particularly interested was the finding, in spatial neglect and stroke recovery more generally, that poor performance on the slow tone-counting sustained attention task was linked with slower recovery in general and with the persistence of rightward spatial bias in neglect. When we began working with children who appeared to show similar rightward spatial biases we wanted similar measures of attention with which to assess them. These were not available, so we had to produce them ourselves. This was a major impetus to the development of the Test of Everyday Attention for Children (TEA-Ch).
In essence the TEA-Ch consists of child-friendly versions of a subset of TEA subtests. Ambitiously, we attempted to create a test that was suitable for 6-year-olds through to 16-year-olds and, for this reason, chose a space/computer game theme that we thought would be common across this range. A challenge lay in setting difficulty levels such that the youngest children would be able to achieve some correct items while 16-year-olds would not be hopelessly at ceiling (i.e. all scoring top marks with no capacity to differentiate between individuals).
As with the adult TEA, the requirement to test large numbers of children from the healthy population provided the opportunity to statistically test patterns of covariance in the subtests (i.e. the degree to which performance on one test tends to vary in a similar or different manner to another test). If two tests tend to vary together across the normal population it is consistent with some common process – one that varies between individuals – contributing to both tasks. This provided a means of testing the ideas proposed by Posner and Petersen (and others) in the healthy population. The results were consistent with the idea that sustained attention and selective attention rely on somewhat different underlying systems and further suggested that there was a basis for separating higher level executive control (switching attention and the suppression of pre-potent responses) from these more basic capacities. As with the TEA, it is important to stress that the putative separations between the attention systems ‘tapped’ by the subtests are relative rather than absolute. To repeat the example from above: suppose there are three cars that begin their journey on the same motorway at the same time and then diverge to take three separate small roads, each car travelling the same distance. Differences between the journey times of the three cars are likely to be attributable to the conditions on the small roads, but the overall speed of the journeys may be as or more attributable to conditions on the shared motorway. This is important to keep in mind in clinical assessment.
The Sustained Attention to Response Test (SART)
The SART is not currently a commercially available test and, while data on the performance of 100 or so healthy adult volunteers are given in Manly et al. (2000), this sample is not necessarily representative of the population as a whole. The parameters are in the public domain, however, and the task can be fairly easily reproduced using various types of experimental delivery software. The possibilities of commercial or non-commercial distribution are actively being considered, as is offering a web-based assessment facility. Information on these developments will appear here as soon as it is available.
- Key references:
Robertson, I. H., Manly, T., Andrade, J., Baddeley, B. T., & Yiend, J. (1997). ‘Oops!’: Performance correlates of everyday attentional failures in traumatic brain injured and normal subjects. Neuropsychologia, 35(6), 747-758.
Manly, T., Robertson, I. H., Galloway, M., & Hawkins, K. (1999). The absent mind: Further investigations of sustained attention to response. Neuropsychologia, 37, 661-670.
Manly, T., Lewis, G. H., Robertson, I. H., Watson, P. C., & Datta, A. K. (2001). Coffee in the cornflakes: Time-of-day, routine response control and subjective sleepiness. Neuropsychologia, 40(1), 747-758.
Manly, T., Davison, B., Heutink, J., Galloway, M., & Robertson, I. (2000). Not enough time or not enough attention?: Speed, error and self-maintained control in the Sustained Attention to Response Test (SART). Clinical Neuropsychological Assessment, 3, 167-177.
Manly, T., Datta, A., Heutink, J., Hawkins, K., Cusack, R., Rorden, C., et al. (2000). An electrophysiological predictor of imminent action error in humans. Journal of Cognitive Neuroscience, 111.
Manly, T., Heutink, J., Davison, B., Gaynord, B., Greenfield, E., Parr, A., et al. (2004). An electronic knot in the handkerchief: ‘Content free cueing’ and the maintenance of attentive control. Neuropsychological Rehabilitation, 14(1-2), 89-116.
Manly, T., Owen, A. M., McAvinue, L., Datta, A., Lewis, G. H., K, S., et al. (2003). Enhancing the sensitivity of a sustained attention task to frontal damage. Convergent clinical and functional imaging evidence. Neurocase, 9(4), 340-349.
(See also Smallwood, J., Davies, J. B., Heim, D., Finnigan, F., Sudberry, M., O’Connor, R., et al. (2004). Subjective experience and the attentional lapse:Task engagement and disengagement during sustained attention. Consciousness and Cognition, 13, 657-690.)
A series of projects within the rehabilitation research group have focused on or used the Sustained Attention to Response Test (SART). Since the introspections of William James in the 19th Century, ideas about executive functions have been informed by the occurrence of ‘action lapses’. An action lapse, in this sense, is when you perform a routine or habitual action rather than the one you intended. For William James this was finding himself getting into bed when he had only gone to the bedroom to change for dinner. For later theorists Donald Norman and Tim Shallice the example was (unwisely) holding a watch in the hand during an evening walk by the ocean and thence hurling it into the waves instead of a stone. We have collated a number of examples that include removing the U-bend trap under a sink and very carefully pouring the water that it contained down the sink – despite moments before thinking how likely and foolish this would be. Perhaps the most common example is of knowing that a light bulb has blown and yet finding oneself incapable of not pressing the switch anyway when entering the darkened room. A current favourite example came from a caller to the Danny Baker phone-in radio programme who was often late for work and, from his doorstep, saw the bus that he needed to catch to get there on time. Noting that the bus was somewhat caught up in traffic, he estimated that he could run to the bus stop in time. He didn’t make it but, with the bus still making slow progress, he ran on to the next stop. This time he made it… just. Some moments later into his journey he reported looking down and being surprised to be clutching two rubbish bags that he had been in the process of putting into his bin.
What all of these examples have in common is that sometimes very complicated sequences of behaviour can be ‘absentmindedly’ produced despite one’s overt intentions at the time. Theoretically this led to the idea that human behaviour is controlled at different levels and that, for probably more time than many of us would care to admit, what we do is determined by routine and context rather than by (or despite) conscious thought. In turn, this idea of two levels of organization was mapped onto ideas about frontal lobe function. Within Tim Shallice’s refinement of the Norman and Shallice model, for example, routine behaviours are controlled relatively automatically by environmental triggers and reciprocal inhibitions between competing action plans – a level they termed contention scheduling. This system works well but encounters problems when the response which is in receipt of the greatest trigger is inappropriate (because conditions have changed) or contrary to one’s goals at the time. Here, it was argued, a second level of control is required to intercede – within this framework, the Supervisory Attentional System (SAS).
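A toy sketch of this two-level idea is given below. It is purely illustrative – the action schemas, trigger strengths and bias value are invented – but it captures the core claim: under contention scheduling alone the most strongly triggered (habitual) schema wins, and only an additional supervisory boost to the intended action changes the outcome.

```python
# Toy sketch of routine 'contention scheduling' plus an optional supervisory
# (SAS-like) bias toward the currently intended action. Schemas, trigger
# strengths and the bias value are illustrative assumptions.

def select_action(trigger_strengths, intended=None, supervisory_bias=0.0):
    """Pick the action schema with the highest activation.

    trigger_strengths: dict mapping action schema -> environmental trigger strength
    intended:          schema currently held as the goal (may be None)
    supervisory_bias:  extra activation supplied by supervisory attention
    """
    activations = dict(trigger_strengths)
    if intended is not None:
        activations[intended] = activations.get(intended, 0.0) + supervisory_bias
    return max(activations, key=activations.get)

# Entering a dark room where the light bulb is known to be broken:
triggers = {"press light switch": 0.9, "fetch a torch": 0.4}

print(select_action(triggers, intended="fetch a torch", supervisory_bias=0.0))  # habit wins
print(select_action(triggers, intended="fetch a torch", supervisory_bias=0.7))  # intention wins
```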
The Sustained Attention to Response Task (SART) set out to model the action lapse in a simple and highly reduced manner. In the test participants watch a computer monitor on which single digits are presented in a random sequence at a regular pacing of one every 1.15 seconds. They are asked to respond to each digit by pressing a button. Quite quickly, people begin to respond rather automatically to this boring task, pressing the button without a great deal of thought. The catch is that, every so often and unpredictably, a digit appears to which they should not respond (say, the digit 3). The trick is therefore to try to maintain attentive control over your responses to avoid ‘absentmindedly’ pressing the button on these no-go trials. This turns out not to be easy and most people make some mistakes. In an early study, the propensity to make mistakes was heightened in individuals from the healthy population who reported a higher frequency of everyday attentional slips and was increased in survivors of traumatic brain injury.
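For readers who want to simulate or reproduce the task, the sketch below sets out the SART trial structure and scoring. The 1.15-second pacing and the ‘withhold to a designated digit’ rule come from the description above; the digit range, run length, and the simulated participant are illustrative assumptions.

```python
# Minimal sketch of the SART trial structure and scoring. The 1.15 s pacing and
# the "withhold to digit 3" rule are described above; digit range (1-9), run
# length and the simulated responses are illustrative assumptions.
import random
random.seed(0)

SOA_SECONDS = 1.15   # one digit every 1.15 s
NO_GO_DIGIT = 3
N_TRIALS = 225       # assumption: roughly a four-minute run

trials = [random.randint(1, 9) for _ in range(N_TRIALS)]

def score(trials, responses):
    """responses[i] is True if the participant pressed on trial i."""
    commissions = sum(1 for d, r in zip(trials, responses) if d == NO_GO_DIGIT and r)
    omissions = sum(1 for d, r in zip(trials, responses) if d != NO_GO_DIGIT and not r)
    return commissions, omissions

# Simulate a participant who presses habitually on every go trial and only
# sometimes withholds in time on no-go trials (an 'action lapse' on the rest).
responses = [not (d == NO_GO_DIGIT and random.random() < 0.6) for d in trials]
print(score(trials, responses))
```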
An issue in interpreting SART performance is whether it best reflects some kind of inherent ‘response inhibition’ capacity of an individual or the capacity to sustain attention over actions. Our preferred view has been that the capacity to inhibit a response is in part determined by the degree to which you are actively attending to what you are doing: if you know that the light-bulb has broken you are less likely to habitually press the switch if this is very actively kept in mind during the interval between having the thought and entering the room. One way of examining this was to investigate the effects of ‘content-free cues’ on SART performance. The content-free cues were tones periodically played to participants as they were completing the test. Before the test we asked them to use the cues to remind them to think about what they should be doing. The cues carried no information other than by association with the participants’ own intentions and certainly did not provide information about when a no-go trial would occur. Despite this, significant reductions in error rates were observed – suggesting that, without the cues, participants had greater difficulty in keeping the goal actively in mind (the application of these ideas to assessment and rehabilitation is outlined further below).
Automated (content-free) cueing of executive reviews.
As highlighted above, content-free cueing refers to some sort of external reminder that doesn’t carry any information in itself but which (ideally) prompts you to consider your own goals and intentions. The idea is that there are various levels at which we can ‘forget’ something. Suppose that you think “I should take that book back to the library”. It is possible that you could absolutely forget this intention, to the point that if someone asked you about it you would say “what book?”. That could happen if you have very dense amnesia, or over a long period of time, but it is relatively unlikely over the scale of days and weeks at which we mainly operate. More likely is that the goal slips from mind but, if you are asked about it or get a cue such as seeing the library, comes rushing back with a jolt (“D’oh!”). It is, of course, a very interesting question as to how, in a limited capacity system in which goals may become completely submerged by the demands of current activity, they ever bob back into consciousness. Subjectively, it certainly feels as if there is a continuum running from goals being completely forgotten, through being retained in the sense that the information can be recalled but is not in current awareness, through to having the goal actively in mind (I must not switch on that dud lightbulb) but still not sufficiently so to prevent the habitual response being expressed. What is clear is that this goal management process is imperfect for most of us and decidedly imperfect for many patients with brain injury.
The classic content free cue is the knot in the handkerchief that people tie when they wish to remember something. The idea is that, at some point in the future when the need for handkerchief use arises, they will pull it out, see the knot, and think ‘now, why did I put that there?’: With luck, the original intention will spring back to mind, the task will be completed and the knot removed ready for the next goal. This doesn’t always happen of course but it should still form a moment when people are reminded to think about what they should be doing – in other words by tying the knot they have planted a cue in their own future to engage in an ‘executive review’.
In studies highlighted above, we examined the effect of automated content free cues (in this case, tones) as people performed the Sustained Attention to Response Test (SART). That their performance improved suggested that, even over a time-scale of seconds between the presentation of no-go trials, people had difficulty keeping this simple goal sufficiently active in mind to overcome the routine response.
The next step was to examine the possible effectiveness of content-free cues on a more complex, life-like task.
The Hotel Test
The Hotel Test, although using different subtasks, was derived entirely from the Six Elements Test developed by Paul Burgess and Tim Shallice (Shallice, T., & Burgess, P. (1991). Deficits in strategy application following frontal lobe damage in man. Brain, 114, 727-741). A version of the Six Elements Test is available commercially (and with norms) as part of the Behavioural Assessment of the Dysexecutive Syndrome. The Hotel Test can be relatively easily re-created for your own experimental studies using the everyday materials described in the Manly et al. (2002) paper. However, there are no normative data (other than the 24 neurologically healthy individuals described in Manly et al., 2002).
Manly, T., Hawkins, K., Evans, J. J., Woldt, K., & Robertson, I. H. (2002). Rehabilitation of Executive Function: Facilitation of effective goal management on complex tasks using periodic auditory alerts. Neuropsychologia, 40(3), 271-281.
The idea behind the Six Elements Test was to examine how people manage and maintain goals when these are set against current activity. In the Hotel Test version, participants were asked to imagine that they were the new deputy manager of a hotel and that their manager had asked them to sample 5 tasks ‘to get a feel for how long they would take to complete’. The tasks consisted of relatively easy activities: sorting UK from foreign coins, putting conference labels into alphabetical order, compiling individual bills from an annotated till roll, looking up phone numbers and proofreading a mistake-riddled leaflet. Crucially – and as repeatedly emphasised in the instructions – completing any one of the tasks would take longer than the 15 minutes available to try all 5. As a consequence, to achieve the main goal of trying each of the tasks, participants would need to keep this in mind and break off from current activity to switch, without any sort of external cue (other than the time; a clock was available throughout) as to when this should be done. The classic error in our study, as in Shallice and Burgess’ original paper, was a heightened tendency for patients to get caught up in an individual task and neglect this overall goal.
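One way of quantifying this classic error is to ask how far a participant’s time allocation departs from an even split of the 15 minutes across the 5 tasks. The sketch below illustrates that idea; the deviation measure and the example allocations are simplifications of our own and not necessarily the exact scoring reported in Manly et al. (2002).

```python
# Illustrative scoring sketch: how far does a participant's time allocation on
# the Hotel Test depart from an even split of 15 minutes across 5 tasks?
# A plausible simplification, not necessarily the exact published metric.

TOTAL_MINUTES = 15
N_TASKS = 5
OPTIMAL_PER_TASK = TOTAL_MINUTES / N_TASKS  # 3 minutes each

def allocation_deviation(minutes_per_task):
    """Sum of absolute deviations (in minutes) from an equal allocation.
    Tasks never attempted count as 0 minutes."""
    padded = list(minutes_per_task) + [0.0] * (N_TASKS - len(minutes_per_task))
    return sum(abs(m - OPTIMAL_PER_TASK) for m in padded)

# A participant who got 'caught up' in one task and never tried two others:
print(allocation_deviation([9.0, 4.0, 2.0]))             # large deviation (14 min)
# A participant who switched roughly on time:
print(allocation_deviation([3.5, 3.0, 2.5, 3.0, 3.0]))   # small deviation (1 min)
```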
As described in detail in Manly et al. (2002), when occasional tones were presented that had previously been linked with the instruction to ‘think about what you are doing’, the performance of the patient group, which had been impaired, was no longer distinguishable from that of healthy participants. Again, the tones carried no information other than by association with the patients’ own stored representation of the goal. The tones were timed so as not to occur near the time when a switch would be optimal and there was little evidence that the patients simply changed task when the tones occurred. Instead, it was argued, these additional moments of executive review were sufficient to enhance their goal management.
By their nature, complex multi-component tasks such as the Six Elements can be failed for many reasons. These range from poor understanding of the instructions, poor planning and frank amnesia for the goals through to indifference about whether one performs well or not. From an assessment perspective, the improvement of the patients with the tones therefore tells you a great deal: the tone carried no information so, to benefit, the patients must have understood the instructions, formed a reasonable plan and retained the goal. It is similarly hard to imagine how a few tones could overcome indifference or poor motivation. The most likely account of the poor performance without the tones was therefore the patients’ difficulty in keeping the retained goal sufficiently active.
The results of this study were useful in thinking about assessment. The observed improvements in performance also suggest that the technique may have value in rehabilitation although for this to be the case the effects would need to persist for longer than the 15 minutes examined by the Hotel Test. A subsequent study set out to investigate this.
Extrapolation of the automated cueing of executive reviews to goal management in everyday life.
In the Hotel Test study the ‘training’ consisted simply of asking people to think about what they were doing when they heard a tone. It seemed likely that, if you were exposed to such tones over a longer period you would rapidly habituate and their value would decline. The aim of a recent study run by Jessica Fish was to try and offset such habituation by associating them with a more elaborate training procedure, a modified form of Goal Management Training.
What is Goal Management Training (GMT)?
Goal Management Training was originally developed by Ian Robertson, Brian Levine and colleagues as a group or individual therapy approach for patients with acquired dysexecutive disorders (e.g. poor planning, forgetting to do things in the future, poor decision making and impulse control, etc.). The training had an educational/insight component. Here people learned about common problems that can arise following brain injury and, in this context, were encouraged to think about any difficulties that they might be experiencing. A likely advantage of a group approach in this respect is that people may feel more able to share experiences, observe and think about others’ problems and feel less defensive and judged: these are a set of problems to be solved or avoided rather than simply a list of shortcomings. The training also encouraged participants to engage collectively in exercises that highlighted particular difficulties and to try out new strategies. An example might be a logical puzzle on which patients may have a tendency to rush to a conclusion or feel so overwhelmed that they give up. Here they would be encouraged to adopt a step-by-step approach, breaking the problem down into relevant components, thinking of different solutions, weighing the pros and cons of each and so forth. There is growing evidence that this type of training can be effective in encouraging a more measured and effective approach. A revision of Goal Management Training as a therapeutic package has recently been developed by Brian Levine, Ian Robertson and Tom Manly and is currently being evaluated. It is not yet available. Further information on this will appear here as soon as possible.
Levine, B., Robertson, I. H., Clare, L., Carter, G., Hong, J., Wilson, B. A., et al. (2000). Rehabilitation of executive functioning: An experimental-clinical validation of Goal Management Training. Journal of the International Neuropsychological Society, 6, 299-312.
Levine, B., Stuss, D. T., Winocur, G., Binns, M. A., Fahy, L., Mandic, M., et al. (2007). Cognitive rehabilitation in the elderly: Effects on strategic behavior in relation to goal management. Journal of the International Neuropsychological Society, 13(1), 143-152.
An issue with this type of training approach is whether patients who improve on the kind of tasks practiced in the sessions generalize those gains from the training tasks and context to their everyday lives. Given the nature of some dysexecutive difficulties (poor abstraction, a tendency towards concrete, literal thinking, etc.) there are good reasons why this may be a problem (see von Cramon & Matthes-von Cramon, 1994) and it is certainly something that needs to be built in to the therapeutic package. One possibility, arising from our work discussed above, is to use some form of automated cueing to help patients ‘take’ the therapeutic context into their everyday lives.
Automated Cueing of Executive Reviews (continued).
In the study described in Fish et al. (2007), this is what we attempted. Specifically, patients were given a shortened version of Goal Management Training in which the process of stopping a task and thinking briefly about your overall goals was practiced and associated with the cue-phrase “STOP!”. Generalization of the training was then supported by patients receiving text (SMS) messages on their mobile (cell) phones as they went about their everyday lives. The effectiveness of this generalization was assessed by asking patients to remember to call our voicemail at set times of the working day – sometimes referred to as a ‘prospective memory’ (PM) task, i.e. remembering to perform an action at some point in the future. For each patient in the study, half of the 10 study days were randomly assigned as cued days and half as uncued days. On cued days, 8 STOP messages were sent at random times (although not within half an hour of when a call needed to be made). The results showed that patients were significantly better and more accurate at making the calls on cued days – a clear indication that spontaneous generalization from the training was, at the least, not optimal on uncued days.
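The cue schedule itself is straightforward to reproduce. The sketch below generates 8 STOP-message times at random across a day while keeping each at least half an hour away from a scheduled voicemail call, as in the design described above; the 9:00–17:00 working day and the example call times are illustrative assumptions.

```python
# Sketch of scheduling cues on a 'cued' day: 8 STOP messages at random times,
# none within half an hour of a scheduled voicemail call. The working-day
# boundaries and example call times are illustrative assumptions; only the
# 8-message and half-hour constraints come from the study description above.
import random
random.seed(42)

WORK_START, WORK_END = 9 * 60, 17 * 60      # minutes since midnight
CALL_TIMES = [10 * 60, 14 * 60 + 30]        # e.g. calls due at 10:00 and 14:30
MIN_GAP = 30                                # minutes
N_MESSAGES = 8

def schedule_cues():
    cues = []
    while len(cues) < N_MESSAGES:
        t = random.randint(WORK_START, WORK_END)
        if all(abs(t - call) >= MIN_GAP for call in CALL_TIMES):
            cues.append(t)
    return sorted(cues)

print([f"{t // 60:02d}:{t % 60:02d}" for t in schedule_cues()])
```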
There are clear applications of this technique in cognitive rehabilitation. That cueing facilitated patients’ performance on this simple PM task suggests that similar strategies may be helpful in promoting attainment of more functional goals in people with executive dysfunction following brain injury, as well as in other conditions where executive functions are thought to be compromised. One major current project, called the Automated Intention Monitoring (AIM) Study, seeks to address this question using Randomised Controlled Trial methodology.
The results of the Fish et al. (2007) study were also relevant to the assessment of Prospective Memory. Prospective Memory is notoriously difficult to assess. There are very few standardised measures available, and those that exist have yet to demonstrate good predictive validity (i.e. we do not know the extent to which performance on laboratory/clinical tests reflects everyday behaviour). Furthermore, experimental paradigms used in PM research are rarely used with clinical groups. The PM task given to participants for the purpose of this study has clear ecological validity, and the results suggest that it may be a useful method of assessing PM in future research studies, as well as in clinical practice. The assessment and rehabilitation of prospective memory (see Fish et al., 2008) is part of our ongoing research program.
New clues to understanding attention deficit hyperactivity disorder in children
Children suffering from Attention Deficit Hyperactivity Disorder (ADHD) may be ignoring visual information to their left and being diagnosed mistakenly as having dyslexia, according to new research by Dr Tom Manly and colleagues at the Medical Research Council (MRC) Cognition and Brain Sciences Unit in Cambridge, published in the Journal of Child Psychology and Psychiatry and in Brain and Cognition. The latest of three recent papers on the subject, published today, shows that this ‘left neglect’ phenomenon is more widespread in children than previously thought.
The research, conducted by Dr Manly and colleagues Dr Veronika Dobler and Melanie George, showed that children with ADHD might simply stop noticing things to their left, particularly when they are doing boring or unstimulating tasks. The phenomenon of ‘left neglect’ is well-known in adults who have suffered right-sided brain injury, who can act as if half the world has simply disappeared. Some children with ADHD, who had no brain damage and perfectly normal intelligence, showed ‘left neglect’ quite as severe as that seen in some adults with substantial damage to the right side of the brain. Remarkably, the studies show that most children’s awareness of things to their left – but not their right – significantly declines if they are asked to perform a boring task for about 40 minutes.
The research published today shows that even perfectly healthy children can begin to lose some awareness of information on the left with boredom. Dr Manly said, “The right side of our brain seems to be heavily involved in keeping us awake and alert, particularly when we are bored. Because the right side of the brain is interested in what is going on to our left and vice versa, as this alertness declines over time or with boredom, it takes some of our awareness of the left with it. All children lose information disproportionately from the left, but children with ADHD appear to reach this point more quickly and to a greater extent than other children unless they are given stimulant medication.”
Dr Manly highlighted the phenomenon in his earlier studies, “One boy with ADHD we worked with tended to ignore the first letters in words, reading ‘TRAIN’ as ‘RAIN’ and ‘FLOAT’ as ‘OAT’. Another boy would miss details from the left in his drawing and compress his writing or drawing only into the right hand side of the page.”
Dr Manly claims that this difficulty with noticing things on the left has often gone undiscovered because it is not routinely assessed. The problems may be attributed incorrectly to dyslexia or clumsiness.
He concludes, “We have no idea how many children are affected, or if they grow out of it or if it is permanent. However, there are some effective treatments for this problem in adults and our early studies suggest they may work for children, but more research is needed. Nevertheless, improving early assessment in children should be a priority.”