Spot the difference: Investigations of conceptual structure for living things and artifacts using speeded word-picture matching

M.H. Davis, H.E. Moss, P. de Mornay Davies and L.K. Tyler

Department of Experimental Psychology
University of Cambridge


Many cases of domain-specific semantic deficits for living things have now been reported. We have developed an account of this dissociation in which the differential vulnerability of living things and artifacts reflects the statistical structure of concepts within these domains. Living-things concepts are more affected by damage to the semantic system because their distinctive properties (e.g. has stripes, is yellow) are not correlated with other information within the category, and are therefore not supported by mutual activation. Shared properties, including biological functions and the perceptual properties that support them (e.g. walks-has legs; eats-has a mouth), are densely intercorrelated and therefore robust to damage. For artifact concepts, informative perceptual properties are strongly correlated with specific functions (e.g. has a blade-used for cutting) and so are relatively preserved following damage (Durrant-Peatfield et al., 1997; Moss et al., 1998).
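The statistical claim above can be illustrated with a toy computation. The feature vectors below are invented for illustration only (they are not the authors' property norms): shared properties of living things co-occur across many category members and so correlate strongly with each other, while a distinctive property such as has stripes correlates only weakly with the rest of the concept.

```python
# Toy sketch of the correlated-features account. All property vectors are
# hypothetical, invented for illustration; they are not the authors' data.

def correlation(xs, ys):
    """Pearson correlation of two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# Binary property vectors over five living-thing concepts
# (tiger, lion, rabbit, snake, fish) -- invented for illustration.
has_legs    = [1, 1, 1, 0, 0]   # shared property
walks       = [1, 1, 1, 0, 0]   # shared property, co-occurs with has_legs
has_stripes = [1, 0, 0, 0, 0]   # distinctive property of the tiger

# Form-function pairing over five artifacts (knife, spade, trowel, saw, truck).
has_blade = [1, 0, 0, 1, 0]
cuts      = [1, 0, 0, 1, 0]

r_shared      = correlation(has_legs, walks)     # high: mutually supporting
r_distinctive = correlation(has_stripes, walks)  # low: unsupported by the category
r_artifact    = correlation(has_blade, cuts)     # high: form-function correlation
```

On this toy matrix the shared living-thing properties and the artifact form-function pair correlate perfectly, while the distinctive property correlates only weakly with the rest, which is the asymmetry the account exploits: damage removes weakly supported features first.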

We tested key predictions of our account using a speeded word-picture matching task varying the relation between word and picture: (a) same (tiger-tiger); (b) close co-ordinate (lion-tiger); (c) distant co-ordinate (rabbit-tiger); (d) cross-domain (truck-tiger). Target pictures came from either the living or the non-living domain. Participants indicated with a button press whether the word and picture were the same or different. In the same, close and distant co-ordinate (i.e. within-domain) conditions, distinctive perceptual information must be retrieved to decide whether the word and picture match. In the cross-domain condition, however, properties shared by all members of a category are sufficient to decide that the word and picture differ. In the intact system, distinctive properties should be available for all concepts, supporting accurate decisions for both living and non-living things, although we expect the close co-ordinate condition to be more difficult due to the greater semantic overlap between word and picture.

In the impaired semantic system, however, we predict a greater loss of distinctive properties for living things than artifacts, leading to poorer word-picture matching for the living domain. Crucially, this difficulty should be observed only when distinctive information is needed to make a correct decision - the within-domain conditions. Cross-domain pairs should present little problem, since shared properties of living things are highly intercorrelated and robust to damage.

Method

Stimuli: 88 colour photographs of living things (animals/fruit/vegetables) and artifacts (vehicles/tools/household objects) were combined to form word-picture pairs in the four conditions and pre-tested for visual and semantic similarity. Words were matched on frequency and length; pictures were matched on familiarity, complexity and name agreement. Words were presented auditorily, with the picture presented briefly (200ms for controls, 500ms for JBR) at word offset. Items were rotated over four versions to avoid repetition, with fillers added so that 50% of trials elicited a ‘same’ response.
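The rotation over four versions can be sketched as a simple Latin square: each picture appears in exactly one condition per list, and across the four lists it appears once in every condition. The item names and the modular assignment below are hypothetical stand-ins for the actual counterbalancing scheme, which the abstract does not spell out.

```python
# Sketch of a Latin-square rotation over four stimulus lists. The pictures
# and the assignment rule are illustrative, not the authors' actual design.

CONDITIONS = ["same", "close", "distant", "cross-domain"]

def build_lists(pictures, n_lists=4):
    """Rotate each picture through the four conditions across n_lists versions."""
    lists = []
    for version in range(n_lists):
        trials = []
        for i, picture in enumerate(pictures):
            # Shift the condition assignment by one for each successive version.
            condition = CONDITIONS[(i + version) % n_lists]
            trials.append((picture, condition))
        lists.append(trials)
    return lists

pictures = ["tiger", "lion", "rabbit", "truck"]  # stand-ins for the 88 photographs
lists = build_lists(pictures)
```

Each participant then sees one list, so no picture is repeated within a session, while every picture contributes data to every condition across participants.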

Subjects: 48 normal young controls aged between 18 and 45 were tested. We also tested JBR, a 41-year-old man who contracted herpes simplex encephalitis at the age of 23. JBR has a documented domain-specific deficit for living things (Warrington & Shallice, 1984; Bunn et al., 1998). For example, he scored 14% and 59% correct for living things and artifacts respectively in a colour picture naming task (χ²(1)=27.9, p<.001).
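A comparison like the naming result above is a Pearson chi-square on a 2x2 contingency table (correct vs incorrect by domain). The sketch below uses hypothetical item counts chosen only to match the reported percentages roughly; the abstract gives percentages, not the underlying Ns, so the statistic computed here is not the reported one.

```python
# Pearson chi-square for a 2x2 contingency table, via the shortcut formula
# n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).

def chi_square_2x2(table):
    """table = [[a, b], [c, d]]; returns the Pearson chi-square statistic (df=1)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical [correct, incorrect] counts -- the true Ns are not reported.
living    = [7, 43]    # ~14% correct
artifacts = [30, 21]   # ~59% correct
stat = chi_square_2x2([living, artifacts])
```

With df=1, a statistic this large is far beyond the p<.001 criterion, so the domain difference in naming accuracy is reliable under these assumed counts.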

Results

Controls: As expected, the close condition was more difficult than the other conditions, producing significantly slower reaction times (F1[2,88]=111.27, p<.001; F2[2,138]=72.85, p<.001) and higher error rates (F1[2,88]=31.21, p<.001; F2[2,138]=30.95, p<.001). Of greater interest were the differences between the living and artifact domains across prime conditions. Responses in the conditions where distinctive information is needed to make a decision (the close and distant prime conditions) were slower for artifacts than for living things (close condition: F1[1,44]=18.11, p<.001; F2[1,69]=5.61, p<.05; distant condition: F1[1,44]=26.26, p<.001; F2[1,80]=7.53, p<.01). As shown in Figure 1, there were no significant differences between living things and artifacts in the same and cross-domain conditions (all F<1).

JBR: Unlike the control subjects, JBR found living things more difficult than artifacts (43% vs 17% errors respectively; χ²(1)=13.1, p<.01). Like the control subjects, JBR showed no difference between living things and artifacts in the same or cross-domain conditions, with relatively few errors in either. He found the close condition very difficult, with high error rates for both living things (68%) and artifacts (45%; χ²(1)=1.48, p>.1). The major difference across domains was in the distant within-category condition, where JBR’s responses were significantly slower and more error-prone for living things (1360ms/68% errors) than for artifacts (1114ms/0% errors; F1[1,27]=4.34, p<.05; χ²(1)=19.8, p<.01).

Figure 1: [image not preserved in this version of the page]

Conclusions

For normal subjects we found a living-things advantage in conditions where distinctive information is needed to discriminate between members of a category. Further investigation will test whether this is due to the greater accessibility of distinctive perceptual properties, such as colour and surface detail, in pictures of living things. It is precisely these properties that the conceptual structure account of Moss et al. (1998) predicts will be most vulnerable to damage.

JBR’s results support our predictions, showing a significant impairment for living things in the conditions where distinctive information is needed: the close and distant within-category conditions (e.g. lion-tiger, rabbit-tiger). This reflects the severe loss of distinctive perceptual properties of living things, which are vulnerable to damage because they are weakly correlated with other information in the semantic system. Shared properties of living things are densely intercorrelated and less affected by damage; hence JBR’s accurate performance on cross-domain decisions for the living-things domain. JBR has less difficulty with artifacts, but even in this "preserved" domain he makes many errors in the close condition (e.g. trowel-spade), indicating some erosion of distinctive semantic information. Nevertheless, the form-function correlations within artifact concepts mean that enough distinctive properties remain intact to support JBR’s excellent performance in the distant within-category condition (e.g. knife-spade). This pattern of data supports the hypothesis that the informativeness and correlation of features are crucial in accounting for domain-specific semantic deficits.

References

Bunn, E. M., Tyler, L.K. & Moss, H.E. (1998). Category-specificity and semantic deficits: The role of familiarity and property type re-examined. Neuropsychology, 12, 367-379.

Durrant-Peatfield, M. R., Tyler, L. K., Moss, H. E., & Levy, J. P. (1997). The distinctiveness of form and function in category structure: A connectionist model. In M. G. Shafto (Ed.), Proceedings of the Nineteenth Annual Conference of the Cognitive Science Society.

Moss, H. E., Tyler, L. K., Durrant-Peatfield, M., & Bunn, E. M. (1998). 'Two eyes of a see-through': impaired and intact semantic knowledge in a case of selective deficit for living things. Neurocase, 4, 291-310.

Warrington, E.K. & Shallice, T. (1984). Category specific semantic impairments. Brain, 107, 829-853.
