Action Perception Laboratory

Research funded by the ESRC (along with various other organisations)

Publications

Conference Abstracts
2014
Conference Proceedings
27. Barraclough, N.E. (2014) Other people's actions interact within our visual system. VSS Symposium, St. Pete Beach, Florida, USA
Perception of actions relies on the behaviour of neurons in the temporal cortex that respond selectively to the actions of other individuals. It is becoming increasingly clear that visual adaptation, well known for influencing early visual processing of simpler stimuli, also has an influence at later processing stages where actions are coded. In a series of studies we, and others, have been using visual adaptation techniques to characterize the mechanisms underlying our ability to recognize and interpret information from actions. Action adaptation generates action aftereffects, where perception of subsequent actions is biased; these show many of the characteristics of both low-level and high-level face aftereffects, increasing logarithmically with duration of action observation and declining logarithmically over time. I will discuss recent studies in which we have investigated the implications of action adaptation in naturalistic social environments. We used high-definition, orthostereoscopic presentation of life-sized photorealistic actors on a 5.3 x 2.4 m screen in order to maximize immersion in a Virtual Reality environment. We find that action recognition, and the judgments we make about the internal mental states of other individuals, are changed in a way that can be explained by action adaptation. Our ability to recognize and interpret the actions of an individual depends not only on what that individual is doing, but also on the effect that other individuals in the environment have on our current brain state. Whether or not two individuals are actually interacting in the environment, it seems they interact within our visual system.
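
The logarithmic build-up and decay described above can be written down as a simple descriptive model. Below is a minimal sketch of that relationship; the functional form follows the abstract, but all parameter values are hypothetical and only for illustration:

```python
import numpy as np

# Descriptive model of a high-level aftereffect: magnitude grows with the
# log of adaptation duration and declines with the log of elapsed test delay.
# Parameter values (a, b, c) are hypothetical, not fitted to the reported data.
def aftereffect_magnitude(t_adapt, t_delay, a=1.0, b=0.4, c=0.3):
    """Aftereffect size after adapting for t_adapt s, tested t_delay s later."""
    build_up = a + b * np.log(t_adapt)   # logarithmic growth with adaptation
    decay = c * np.log(1.0 + t_delay)    # logarithmic decline over time
    return np.maximum(build_up - decay, 0.0)

for t_adapt in (1.0, 4.0, 16.0):
    print(t_adapt, aftereffect_magnitude(t_adapt, t_delay=2.0))
```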

26. Yukiko Kikuchi, Jennifer Ip, James C. Mossom, Nick Barraclough, Christopher I. Petkov, Quoc C. Vuong (2014) Attentional modulation of repetition suppression effects in human face- and voice-sensitive cortex. Program No. XXX. 2014 Neuroscience Meeting Planner. Washington, DC: Society for Neuroscience, 2014. Online
An important property of the brain is that its neurons reduce their responses to the repetition of the same or similar environmental stimuli. There is considerable interest in understanding how repetition suppression is influenced by attention, such as when people focus their attention on properties of the repeating stimuli. However, whether comparable repetition effects operate in different sensory modalities, and whether attention modulates repetition effects in a similar way across modalities, has been unclear. We asked how attention modulates repetition effects in the auditory and visual modalities, either by changing the gain of repetition effects (affecting the intercept of the stimulus repetition function) or by selectively sharpening repetition effects (affecting the slope of the repetition function). Nine volunteers participated in separate auditory and visual fMRI experiments in which they directed their attention to voice or face identity changes or to changes in the spatial location of the respective stimuli. By morphing between pairs of different face or voice identities, we aimed to modulate the strength of repetition effects, which are stronger for repetition of more similar stimuli. The spatial difference was manipulated by systematically changing the screen position for face pairs or the virtual acoustic location for voice pairs. We also equated performance on the identity and spatial tasks across the two modalities. For each volunteer we functionally localised face- and voice-sensitive regions of interest (ROIs). For both face and voice ROIs there was a significant change to the slope of the stimulus repetition function when volunteers attended to identity differences but not to spatial differences (interaction between attention and identity differences, F(2,16)=4.9, p=.02, but no interaction between attention and spatial differences, F<1.0). Moreover, the attentional modulation seemed to be specific to face/voice-sensitive cortex because it was not evident in temporal lobe areas outside of these ROIs. Overall, the results suggest comparable repetition effects, and attentional modulations of these effects, in human face- and voice-sensitive cortex. CIP and QCV: joint senior authors; JI and JCM contributed equally.
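
The contrast between gain changes and selective sharpening can be made concrete: if suppression is approximated as a linear function of stimulus similarity, attention can move either the intercept of that function or its slope. A minimal illustration with made-up numbers (none of these values come from the study):

```python
import numpy as np

# Stimulus repetition function: suppression as a roughly linear function of
# similarity between repeated stimuli (0 = different identity, 1 = identical).
similarity = np.linspace(0.0, 1.0, 5)        # e.g. morph levels between identities

def repetition_function(similarity, intercept, slope):
    return intercept + slope * similarity    # suppression grows with similarity

baseline   = repetition_function(similarity, intercept=0.1, slope=0.5)
gain_shift = repetition_function(similarity, intercept=0.3, slope=0.5)  # gain change
sharpened  = repetition_function(similarity, intercept=0.1, slope=0.9)  # slope change

for name, y in [("baseline", baseline), ("gain", gain_shift), ("sharpened", sharpened)]:
    print(name, np.round(y, 2))
```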

Conference Talk
25. Yukiko Kikuchi, Jennifer Ip, James C. Mossom, Nick Barraclough, Christopher I. Petkov, Quoc C. Vuong (2014) Attentional modulation of repetition suppression effects in human face- and voice-sensitive cortex. Asia-Pacific Advanced Network Conference, Nantou, Taiwan
An important property of the brain is that its neurons reduce their responses to the repetition of the same or similar environmental stimuli. There is considerable interest in understanding how repetition suppression is influenced by attention, such as when people focus their attention on properties of the repeating stimuli. However, whether comparable repetition effects operate in different sensory modalities, and whether attention modulates repetition effects in a similar way across modalities, has been unclear. We asked how attention modulates repetition effects in the auditory and visual modalities, either by changing the gain of repetition effects (affecting the intercept of the stimulus repetition function) or by selectively sharpening repetition effects (affecting the slope of the repetition function). Nine volunteers participated in separate auditory and visual fMRI experiments in which they directed their attention to voice or face identity changes or to changes in the spatial location of the respective stimuli. By morphing between pairs of different face or voice identities, we aimed to modulate the strength of repetition effects, which are stronger for repetition of more similar stimuli. The spatial difference was manipulated by systematically changing the screen position for face pairs or the virtual acoustic location for voice pairs. We also equated performance on the identity and spatial tasks across the two modalities. For each volunteer we functionally localised face- and voice-sensitive regions of interest (ROIs). For both face and voice ROIs there was a significant change to the slope of the stimulus repetition function when volunteers attended to identity differences but not to spatial differences (interaction between attention and identity differences, F(2,16)=4.9, p=.02, but no interaction between attention and spatial differences, F<1.0). Moreover, the attentional modulation seemed to be specific to face/voice-sensitive cortex because it was not evident in temporal lobe areas outside of these ROIs. Overall, the results suggest comparable repetition effects, and attentional modulations of these effects, in human face- and voice-sensitive cortex. CIP and QCV: joint senior authors; JI and JCM contributed equally.

24. Wincenciak, J., Dzhelyova, M., Perrett, D.I., Barraclough, N.E. (2014) Adaptation to facial trustworthiness is different in female and male observers. 26th Annual Human Behaviour and Evolution Society Conference, Natal, Brazil
Face adaptation, the prolonged viewing of faces of a given type, typically decreases sensitivity to that facial characteristic in new faces, and has been used to investigate the perceptual mechanisms underlying the processing of various facial characteristics, including identity, sex, and emotional expression. To investigate how recent visual experience influences trustworthiness judgments, we examined how adaptation to perceived trustworthiness in faces influences perceptions of the trustworthiness of subsequent faces. Women showed typical repulsive aftereffects, where novel faces were more likely to be judged as untrustworthy after viewing trustworthy faces, and more likely to be judged as trustworthy after viewing untrustworthy faces. These aftereffects were unaffected by the sex of the adapting or test stimuli. In contrast, recent visual experience did not influence men's perceptions of the trustworthiness of new faces. This sex difference suggests that different mechanisms may underpin men's and women's perception of facial trustworthiness.

2013
Conference Poster
23. Wincenciak, J., Ingham, J., Jellema, T., Barraclough, N.E. (2013) Two systems for emotional action processing: with and without identity. European Society for Cognitive Psychology, Budapest, Hungary
Recent evidence suggests an interaction between the processing of facial identity and facial expression. Does a similar interaction exist for whole-body actions? We investigated the role of identity in the coding of emotional bodily actions using a visual adaptation paradigm. Adaptation to happy and sad actions substantially biased the perception of the emotion of subsequent actions away from the adapting stimuli. That is, when judging the test stimuli participants chose the emotion opposite to the adapted emotion significantly more than any other emotion. The magnitude of these aftereffects was dependent upon the similarity between the adapting and test stimuli: aftereffects were strongest when the identities of the actors in the adapting and test stimuli were the same. Both same-identity and different-identity aftereffects increased logarithmically with adaptation duration. However, the different-identity aftereffect declined significantly over 10 s, while the same-identity aftereffect did not. These results indicate the existence of two mechanisms for coding emotional actions: an identity-independent system, which shows typical adaptation dynamics, and an identity-dependent system, which does not, and instead seems to involve a long-term recalibration with exposure to emotional actions.

2012
Conference Proceedings
22. Wincenciak J, Ingham J S, Jellema T, Barraclough N. E. (2012) Emotional action aftereffects indicate dual emotion coding mechanisms. Perception 41 ECVP Abstract Supplement, page 53 doi:10.1068/v120092
Face aftereffects suggest partially independent coding of facial expressions and facial identity (Fox & Barton, 2007, Brain Research, 1127(1), 80-89). Bodily actions can also convey actor identity and emotional state. We investigated the mechanisms involved in recognising emotions from whole-body actions using a visual adaptation paradigm. Following adaptation to actions performed in either a happy or sad fashion, participants interpreted subsequent actions performed in a neutral fashion as portraying the opposite emotion. Emotional action aftereffects were stronger when the identity of the actor in the adapting and test stimuli was the same than when it was different. Both identity-dependent and identity-independent emotional action aftereffects increased with the duration of the adapting stimuli. However, the different-identity aftereffect quickly decayed over time, while the same-identity aftereffect had still not decayed after 10.8 s. These findings suggest that adapting to emotional actions influences two separate mechanisms. Following adaptation, an identity-independent emotional action coding mechanism shows visual aftereffects with dynamics similar to other high-level aftereffects. A second, identity-dependent emotional action coding mechanism, however, shows different adaptation dynamics, where adaptation results in a long-lasting recalibration of the perceived emotion derived from the actions of the observed individual.

21. Keefe B., Wincenciak J., Ward J., Jellema T., Barraclough N. E. (2012) Adaptation aftereffects when seeing full-body actions: Do findings from traditional 2D presentation apply to 'real-world' stereoscopic presentation? Perception 41 ECVP Abstract Supplement, page 72 doi:10.1068/v120513
Extended viewing of visual stimuli, including faces and actions, can result in adaptation, causing a bias (aftereffect) in the perception of subsequently viewed stimuli. Previously, aftereffects have typically been tested under highly controlled but unnaturalistic conditions. In this study, we investigated whether adaptation to whole-body actions occurs under naturalistic viewing conditions. Participants rated the weight of boxes lifted by test actors following adaptation to a different-identity actor lifting a heavy box, lifting a light box, or standing still. Stimuli were presented under three different conditions: (1) life-sized stereoscopic presentation on a 5.3 x 2.4 m screen, (2) life-sized presentation on a 5.3 x 2.4 m screen without stereoscopic depth information, (3) smaller-than-life presentation on a 22-inch monitor without stereoscopic depth information. After adapting to an actor lifting heavy or light boxes, subsequently viewed boxes lifted by different actors were perceived as significantly heavier, or lighter, respectively. Aftereffects appeared to show dynamics similar to other high-level face and action aftereffects, and were of similar size irrespective of viewing condition. These results suggest that when viewing people in our daily lives, their actions generate visual aftereffects, and this influences our perception of the behaviour of other people.

20. Barraclough N, Ingham J, Page S. (2012) Dynamics of walking adaptation aftereffects induced in static images of walking actors. Perception 41 ECVP Abstract Supplement, page 37 doi:10.1068/v120172
Visual adaptation to walking actions results in subsequent aftereffects that bias perception of static images of walkers in different postures, so that they are interpreted as walking in the opposite direction to the adapting actor. To test whether walking aftereffects are comparable to other well-studied low- and high-level visual aftereffects, we measured their dynamics in order to assess the characteristics of the adapting mechanism. We found that walking aftereffects showed characteristic dynamics similar to those of face aftereffects and some motion aftereffects. Walking aftereffects could be induced in a broad range of different static images of walking actors and were not restricted to images of actors in any particular posture. Walking aftereffects increased with adapting stimulus repetition and declined over time. The duration of the aftereffect was dependent upon time spent observing the adapting stimulus and could be well modelled by a power-law function that characterises this relationship in both face and motion aftereffects. Increasing the speed of the adapting stimulus by increasing actor walk speed increased aftereffect magnitude, as seen for some motion aftereffects. The nature of the aftereffects induced by observing walking actors indicates that they behave like traditional high-level visual aftereffects.
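
The power-law relationship between time spent adapting and aftereffect duration can be fitted directly with standard tools. A minimal sketch is below; the data points are invented for illustration and are not the measurements reported in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

# Power law relating adaptation duration to aftereffect duration, the form
# used to characterise face and motion aftereffects (illustrative data only).
def power_law(t_adapt, k, p):
    return k * t_adapt ** p

t_adapt  = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # s spent adapting
duration = np.array([0.9, 1.4, 2.1, 3.2, 4.8])    # s aftereffect lasted (made up)

(k, p), _ = curve_fit(power_law, t_adapt, duration, p0=(1.0, 0.5))
print(f"fitted scale k = {k:.2f}, exponent p = {p:.2f}")
```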

19. Page S, Barraclough N (2012) Crossmodal adaptation aftereffects following observation of human hand actions. Perception 41 ECVP Abstract Supplement, page 180 doi:10.1068/v120321
Repeated exposure (adaptation) to visual actions can induce adaptation aftereffects biasing subsequent perception of visual actions [Barraclough et al, 2009, Journal of Cognitive Neuroscience, 21, 1805-1819]. Crossmodal aftereffects have been observed with simpler stimuli; for example, adaptation to visual motion in depth causes auditory loudness aftereffects [Kitagawa and Ichihara, 2002, Nature, 416, 172-174]. In order to investigate multimodal action coding, we tested whether action sound perception was influenced by prior adaptation to different stimuli (auditory only, visual only, or audiovisual representations of actions). After adapting to auditory action sounds (hand knocking and hand slapping), subsequent test stimuli (blended 'knock' and 'slap' sounds) sounded less like the adapting stimulus, a repulsive auditory aftereffect. This auditory aftereffect showed a characteristic increase with repetition of the adapting stimulus. We also observed significant crossmodal aftereffects following audiovisual and visual-only adaptation. These high-level crossmodal aftereffects suggest multimodal coding of actions in humans, and may result from adaptation in multimodal neurons selective for actions, as have been found in the monkey [Barraclough et al, 2005, Journal of Cognitive Neuroscience, 17, 377-391].

Conference Talk
18. Nick E. Barraclough, Jennifer Ingham, Stephen A. Page (2012) Dynamics of walking adaptation aftereffects induced in static images of walking actors. Experimental Psychology Society, University of Hull, UK.
Visual adaptation to walking actions results in subsequent aftereffects that bias perception of static images of walkers in different postures, so that they appear to walk in the opposite direction to the adapting actor. It is not clear, however, whether the walking aftereffect is comparable to other well-studied low- and high-level visual aftereffects. We therefore measured the dynamics of the walking aftereffect in order to assess the characteristics of the adapting mechanism. We found that walking aftereffects showed characteristic dynamics similar to those of face aftereffects and some motion aftereffects. Walking aftereffects could be induced in a broad range of different static images of walking actors and were not restricted to images of actors in any particular posture. Walking aftereffects increased with adapting stimulus repetition and declined over time. The duration of the aftereffect was dependent upon time spent observing the adapting stimulus and could be well modelled by a power-law function that characterises this relationship in both face and motion aftereffects. Increasing the speed of the adapting stimulus by increasing actor walk speed increased aftereffect magnitude, as seen for some motion aftereffects. The nature of the aftereffects induced by observing walking actors indicates that they behave like traditional high-level visual aftereffects.

17. Joanna Wincenciak, Tjeerd Jellema and Nick E. Barraclough (2012) Two mechanisms for coding emotions from bodily actions. Experimental Psychology Society, University of Hull, UK.
Face aftereffects (AEs) have been widely applied to study face coding and have provided support for partially independent coding of facial expressions and facial identity (Ellamil, Susskind, & Anderson, 2008; Fox & Barton, 2007). We can, however, also recognize the identity of other individuals and infer their emotional states by observing their bodily actions. We investigated the mechanisms involved in coding emotions and identity from whole-body actions using a visual adaptation paradigm. Following adaptation to happy or sad actions, participants judged the subsequent action as having the opposite emotion. This bias was significantly stronger when the identity of the actor in the adapting and test stimuli was the same. For both conditions (same and different identity), the magnitude of emotional action aftereffects increased with the duration of the adapting stimuli. Only the different-identity AE decayed over time (absent by 10.8 seconds). These findings suggest that emotional action aftereffects for same- and different-identity actors rely on different mechanisms. For different-identity conditions, aftereffects showed dynamics similar to other high-level aftereffects, whereas for the same-identity condition, adaptation appeared to produce a long-lasting recalibration of the perceived emotion derived from the actions of the observed individual.

2011
Conference Proceedings
16. Wincenciak J, Ingham J S, Barraclough N E. (2011) Visual adaptation to emotional actions. Perception 40 ECVP Abstract Supplement, page 213 doi:10.1068/v120513
We are able to recognise the emotions of other individuals by observing characteristic body movements during their actions. In this study we investigated mechanisms involved in coding emotional actions using a visual adaptation paradigm. We found that after adapting to an action (e.g. walking, lifting a box, sitting) performed conveying one emotion (happy or sad), the subsequent action was more likely to be judged as having the opposite emotion. This aftereffect showed characteristic dynamics similar to other motion and face adaptation aftereffects, for example increasing in magnitude with repetition of the adapting action. These emotional action aftereffects cannot be explained by low-level adaptation, as they remain significant when actor identity and action differ between the adapting and test stimuli. We also found that emotional aftereffects transferred across faces and whole-body actions, indicating that emotions may be partially coded irrespective of body part. Our findings provide behavioural support for neuroimaging evidence for body-part-independent visual representations of emotions in high-level visual brain areas (Peelen et al, 2010, The Journal of Neuroscience 30, 10127-10134).

Conference Poster
15. Page, S., Barraclough, N.E. (2011) Adaptation to visual actions generates auditory action aftereffects. Experimental Psychology Society, Nottingham, UK
Repeated exposure (adaptation) to visual actions can induce adaptation aftereffects influencing subsequent perception of visual actions (Barraclough, Keith, Xiao, Oram, & Perrett, 2009). Cross-modal aftereffects have been found where adapting to motion in depth causes subsequent auditory amplitude aftereffects (Kitagawa & Ichihara, 2002). We tested whether hand action sound perception was influenced by prior adaptation to stimuli of different modalities (auditory, visual only, audiovisual, or orthographic representations of actions) in order to investigate whether actions are coded multimodally. After adapting to auditory action sounds (hand knocking and hand slapping), subsequent test stimuli (blended 'knock' and 'slap' sounds) sounded less like the adapting stimulus. This aftereffect showed a characteristic increase with repetition of the adapting stimulus. We also observed aftereffects following audiovisual and visual-only adaptation, but not following orthographic stimuli. These crossmodal adaptation aftereffects suggest multimodal coding of actions in humans, and may result from adaptation of multimodal neurons involved in action coding, similar to those observed in the Superior Temporal Sulcus of the macaque monkey (Barraclough, Xiao, Oram, & Perrett, 2005).

14. Joanna Wincenciak, Jennifer Ingham, Nick E. Barraclough (2011) Visual adaptation to emotional actions. Experimental Psychology Society, Nottingham, UK
We are able to recognise the emotions of other individuals by observing characteristic body movements during their actions. In this study we investigated mechanisms involved in coding emotional actions using a visual adaptation paradigm. We found that after adapting to an action (e.g. walking, lifting a box, sitting) performed conveying one emotion (happy or sad), the subsequent action was more likely to be judged as having the opposite emotion. This aftereffect showed characteristic dynamics similar to other motion and face adaptation aftereffects, for example increasing in magnitude with repetition of the adapting action. These emotional action aftereffects cannot be explained by low-level adaptation, as they remain significant when actor identity and action differ between the adapting and test stimuli. We also found that emotional aftereffects transferred across faces and whole-body actions, indicating that emotions may be partially coded irrespective of body part. Our findings provide behavioural support for neuroimaging evidence for body-part-independent visual representations of emotions in high-level visual brain areas (Peelen et al, 2010).

13. Keefe, B.D., Margerison, B., Dzhelyova, M., Perrett, D.I., Barraclough, N.E. (2011) Face adaptation improves trustworthiness discrimination. Applied Vision Association, Christmas Meeting. UCL, London.
Adaptation to a variety of facial characteristics, such as identity, gender, and age, has been shown to bias our percept of faces in the direction opposite to the adapted level. These results suggest heterogeneous populations of neurons that encode different facial attributes at a relatively high level of visual processing. Recently, face adaptation has been shown to improve sensitivity to the gender, race, and viewpoint of faces. In this study, we examined whether adapting to varying levels of facial trustworthiness improved sensitivity around the adapted level. In the first experiment, just noticeable differences (JNDs) were calculated around an untrustworthy face after participants adapted to an untrustworthy face, adapted to a trustworthy face, or did not adapt. In the second experiment, the three conditions were identical except that JNDs were calculated around a relatively trustworthy face. In both experiments, participants completed a two-alternative forced-choice adaptive staircase procedure, and JNDs were derived from the 76% point of a cumulative Gaussian fitted to the data. Compared to no adaptation, adapting to an untrustworthy or trustworthy face improved discrimination around untrustworthy and trustworthy faces respectively. When adapting to an untrustworthy face but discriminating around a trustworthy face (and vice versa), there was no improved sensitivity, and JNDs were equivalent to those in the no-adaptation condition. These findings suggest that distinct neuronal populations encode the level of facial trustworthiness, and that adaptation can alter the tuning of these neuronal populations to improve our sensitivity to the trustworthiness of faces.
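
The JND estimation described here follows a standard psychometric-fit recipe: fit a cumulative Gaussian to the response proportions across stimulus levels, then read the JND off at the 76% point. A minimal sketch with invented response data (the levels and proportions below are not from the study):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Fit a cumulative Gaussian psychometric function and derive the JND from
# the 76% point (illustrative stimulus levels and proportions, not real data).
def cum_gauss(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

levels = np.array([-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0])  # trustworthiness offsets
p_resp = np.array([0.05, 0.12, 0.30, 0.52, 0.73, 0.90, 0.97])

(mu, sigma), _ = curve_fit(cum_gauss, levels, p_resp, p0=(0.0, 1.0))

# Stimulus difference between the 50% and 76% points of the fitted curve:
jnd = norm.ppf(0.76) * sigma
print(f"mu = {mu:.2f}, sigma = {sigma:.2f}, JND (76%) = {jnd:.2f}")
```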

2010
Conference Proceedings
12. Barraclough N E, Jellema T. (2010) Visual aftereffects for walking reveal underlying neural mechanisms for action recognition. Perception 39 ECVP Abstract Supplement, page 77 doi:10.1068/v100094
We present results illustrating a new high-level visual after-effect: observing actors walking forward, without horizontal translation, makes subsequent actors appear to walk backward, while the opposite effect is obtained after observing backward walking. We used this after-effect, which cannot be explained by simple low-level adaptation to motion direction, to investigate the properties of the neural mechanisms underlying recognition of walking actions. Our results suggest that the perception of walking actions containing movement and the perception of static images of actors in walking postures rely on common brain mechanisms that are primarily object-centered, rather than viewer-centered, and are "blind" to the identity of the actor. These results, obtained with human psychophysical adaptation techniques, support previous evidence accumulated using single-unit recording in non-human primates, and should be incorporated into current models of human action recognition. We conclude that action adaptation is a powerful technique for determining the brain mechanisms in humans that underlie our perception of the behavior of other individuals.

2008
Conference Proceedings
11. Nick E. Barraclough, Rebecca H. Keith, Dengke Xiao, Mike W. Oram, David I. Perrett (2008) Human visual adaptation and monkey superior temporal sulcus cell responses to goal-directed hand actions. Program No. 615.3. 2008 Neuroscience Meeting Planner. Washington, DC: Society for Neuroscience, 2008. Online
Prolonged exposure to visual stimuli often results in an adaptation 'after-effect' which distorts our perception of subsequent visual stimuli. This technique has been commonly used to investigate mechanisms underlying our perception of simple visual stimuli, and more recently of static faces. We tested in humans the effects of adaptation to movies of hands grasping, or its 'opposite', placing, objects of different weight, and compared the results to single-cell recordings in the superior temporal sulcus of the monkey. Adapting to hands grasping light or heavy objects led to objects appearing relatively heavier, or lighter, respectively. The after-effects increased logarithmically with adaptation action repetition and decayed logarithmically with time. Adaptation after-effects also indicated that perception of actions relies predominantly on view-dependent mechanisms. Adapting to one action significantly influenced the perception of the opposite action, suggesting common processing mechanisms. These after-effects can only be explained by adaptation of mechanisms that take into account the presence/absence of the object in the hand. We tested whether evidence for action processing mechanisms obtained using visual adaptation techniques substantiates neural processing. We recorded monkey superior temporal sulcus (STS) single-cell responses to hand actions. Cells displaying selectivity for action type responded during particular action phases but were often additionally responsive to components of opposite actions. Cell responses were sensitive to the view of the action, and were dependent upon the presence of the object in the scene. We show that action processing mechanisms established using visual adaptation parallel the neural mechanisms revealed during recording from monkey STS. Visual adaptation techniques can thus be usefully employed to investigate brain mechanisms underlying action perception.

10. Nick E. Barraclough, Rebecca H. Keith, Dengke Xiao, Mike W. Oram, David I. Perrett (2008) Action Psychophysics and Neurophysiology. Federation of the Societies of Neuropsychology, Edinburgh, UK.
Background: Visual adaptation has been used to understand the brain mechanisms underlying visual processing of simple stimuli and static faces. We wanted to know if humans adapted to movies of actions, and if brain mechanisms elucidated using this technique confirmed neurophysiological accounts.
Methods: We tested human judgement of the weight of different objects being grasped and placed both before and after adaptation. In addition, we recorded the responses of single cells in the monkey superior temporal sulcus (STS) to similar actions.
Results: Human perception showed strong action 'after-effects'; these increased logarithmically with action repetition and decayed logarithmically with time. Action after-effects also indicated that perception of actions relies predominantly on view-dependent mechanisms; after-effects transferred across actions. STS cell responses confirmed these findings: most cells responded to both actions, responding during different phases of the actions. Cell responses were sensitive to the view of the action, and were dependent upon the presence of the object in the scene.
Discussion: Action adaptation appears to occur at a ‘high-level’ rather than a ‘low-level’. Mechanisms established using visual adaptation parallel neural mechanisms revealed during recording from monkey STS.  Visual adaptation techniques can thus be usefully employed to investigate brain mechanisms underlying action processing.

2007
Conference Proceedings
9. Richard J.A. van Wezel, Nick E. Barraclough, Tjeerd Jellema, Jacob Duijnhouwer, Dengke K. Xiao, Martin J.M. Lankheet, David I. Perrett, Mathijs Raemaekers and Jeannette A. M. Lorteije. (2007) No evidence for animate implied motion processing in cortical areas MT and MST. Society for Neuroscience Abstract, Vol. 33
We investigated whether cells in middle temporal (MT) and medial superior temporal (MST) areas in the macaque respond to implied motion. We recorded cell responses to static images of human or monkey figures running or walking, and compared these responses to the same human and monkey figures standing or sitting still. We also investigated whether the implied motion direction (facing left or right) that elicited the highest response was correlated with the preferred direction for moving random dot patterns. In the first experiment, figures were presented inside the cell's receptive field. In a second experiment, figures were presented at the fovea, while a dynamic noise pattern was presented at the cell's receptive field location. For both experiments, the results show that the responses of individual MT and MST units and neural populations do not discriminate between figures implying motion and figures standing still. Response preferences of individual cells for human implied motion correlated with preferences for low-level visual features such as orientation and stimulus location. Furthermore, no correlation was found between the preferred direction for implied motion and the preferred direction for moving random dot patterns. In a separate experiment we verified that cells in anterior regions of the superior temporal sulcus (STSa) respond differentially to the figures that we used in this study, confirming processing of implied motion in STSa. Furthermore, we performed a human fMRI study that controlled for low-level features in implied motion stimuli, and found that low-level stimulus features play an important role in activation to implied motion figures in human MT and its satellites. In contrast to previous human imaging studies, these results show no evidence for animate implied motion processing in human and monkey areas MT/MST.

8. Nick E. Barraclough, Jeannette Lorteije, Dengke Xiao, Mike W. Oram, David I. Perrett (2007) Primate Superior Temporal Sulcus (STS) cell responses and human adaptation to walking sequences and static human postures. Society for Neuroscience Abstract, Vol. 33
Static images of animate agents can convey whether they are moving or not.  Images of humans in postures with arms and legs outstretched (articulated) often appear to be implying motion.  Humans in postures with arms and legs near the body (standing) are not interpreted as moving.  In monkey STS cells, we tested the association between sensitivity to movies of humans walking forwards and backwards, the sensitivity to the posture of static images of humans and the sensitivity to moving random dot patterns.  There was a significant correlation between cell sensitivity to the posture of human figures and sensitivity to walking direction.  Those cells that selectively responded to images implying motion were more likely to respond selectively to humans walking forwards (and vice versa); cells that selectively responded to images not implying motion were more likely to respond selectively to humans walking backwards (and vice versa).  There was no significant correlation between cell sensitivity to the speed or direction of movement of random dot patterns and sensitivity to walking movies or sensitivity to static images of humans.  Furthermore, we tested human subjects’ perception of pairs of static images of humans in different postures, taken from a walking sequence, before and after adaptation to forward and backwards walking.  After adaptation to forward walking, subjects were more likely to judge the static image pairs as walking backwards; after adaptation to backwards walking, subjects were more likely to judge the static image pairs as walking forwards.  Together these results suggest that in monkey STS cells and humans forwards and backwards walking are opponently coded.  STS cells coding walking direction could generate an implied motion signal from static images to be used for subsequent motion processing.

2006
Conference Proceedings
7. Perrett D I, Xiao D, Jellema T, Barraclough N, Oram M W. (2006) Social perception from static and dynamic visual information. Perception 35 ECVP Abstract Supplement doi:10.1068/v060621
Social cognition relies on interpreting the 'state' of others. Shape, texture, and colour cues are available from the face to drive social cognition. Within the temporal cortex, colour modulates the responses of 70% of cells tuned to the form of faces. For human perception, colour influences perception of identity, attractiveness, and health. Face colour, coded by cells, may therefore shape social cognition. For an agent, the direction of attention and movement relative to objects allow the agent's behaviour to be coded as goal-directed. Extrapolation from visible body movements can support inferences when the action becomes occluded from sight. For example, one can infer the continued presence of a person hidden behind a screen if one sees the person walk there but not re-emerge. Moreover, intentions can be inferred if the person's reappearance does not occur when predicted from their trajectory before occlusion. Information about likely future or prior body movements can also be 'implied' from postures visible at specific moments. It is proposed that associative learning mechanisms relate available visual cues to action outcomes and social cognition. In this scheme, social cognition becomes a process of statistical inference about likely behaviour and the attributes of others from sensory cues.

2005
Conference Proceedings
6. Xiao, D., *Barraclough, N.E., Oram, M.W., Perrett, D.I. (2005) Forward masking of the responses of neurons in the superior temporal sulcus. Society for Neuroscience Abstract, Vol. 31
Perception of a visual stimulus can be disrupted, or masked, by another within close spatial or temporal proximity. Forward masking occurs when a mask is presented prior to a target stimulus, thereby disrupting its perception. We investigated responses of neurons sensitive to complex images in the superior temporal sulcus under forward masking conditions. We show that forward masking affects the initial transient of the response to an image more than the sustained component of the response, reducing the peak magnitude and delaying it. Forward masking is greatest at short mask-target onset asynchronies, declines with increasing mask-target onset asynchrony, and extends at least up to 458 ms after mask onset. The duration of influence of forward masking is twice that reported for backward masking. Forward masking is dependent upon the effectiveness of the mask image as a stimulus itself: the larger a cell's average response to the mask, the smaller the cell's average response to the target. The response of a cell to the mask stimulus on a given trial, however, does not determine the degree of masking of the cell's response to the target on that trial. This suggests that forward masking is a network property and does not depend on local fatigue of the individual cells. Under the same stimulus conditions, human observers are slower in detecting a target image when it is preceded by a mask image. Delays in human reaction time decline with increasing mask-target onset asynchrony and correlate well with delays in the peak responses of monkey STS neurons. We argue that forward masking of cell responses is a network property of the visual system and its appearance in temporal cortex neurons could underlie the phenomena of forward masking seen in human perception.

5. Perrett D I, Xiao D, Barraclough N E, Oram M W, Keysers C. (2005) Receptive fields as prediction devices: A comparison of cell tuning to single images and to natural image sequences in temporal cortex. Perception 34 ECVP Abstract Supplement doi:10.1068/v050034
We experience the world as a continuous stream of events where the previous scenes help us anticipate and interpret the current scene. Visual studies, however, typically focus on the processing of individual images presented without context. To understand how processing of isolated images relates to processing of continuously changing scenes, we compared cell responses in the macaque temporal cortex to single images (of faces and hands) with responses to the same images occurring in pairs or sequences during actions. We found two phenomena affecting the responses to image pairs and sequences: (a) temporal summation, whereby responses to inputs from successive images add together, and (b) 'forward masking', where the response to one image diminishes the response to subsequent images. Masking was maximal with visually similar images and decayed over 500 ms. Masking reflects interactions between cells rather than adaptation of individual cells following heightened activity. A cell's 'receptive field' can be defined by tuning to isolated stimuli that vary along one dimension (e.g. position or head view). Typically, this is a bell-shaped curve. When stimuli change continuously over time (e.g. head rotation through different views), summation and masking skew the tuning. The first detectable response to view sequences occurs 25 ms earlier than for corresponding isolated views. Moreover, the responses to sequences peak before the most effective solitary image: the peak shift is ~1/2 the bandwidth of tuning to isolated stimuli. These changes result in activity across cells tuned to different views of the head that 'anticipate' future views in the sequence: at any moment the maximum activity is found in those cells tuned to images that are about to occur. We note that, when sensory inputs change along any dimension, summation and masking transform classical receptive field properties of cells tuned to that dimension such that they predict imminent sensations.
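
The anticipatory peak shift described in this abstract can be illustrated with a toy simulation: a cell's response to a view sequence combines its static tuning with a suppression term that accumulates from recent responses and decays slowly, so the sequence response peaks before the cell's preferred view arrives. All parameters below are hypothetical; this is a sketch of the idea, not the authors' model:

```python
import numpy as np

# Toy model: bell-shaped view tuning driven by a head-rotation sequence.
# Forward masking is modelled as suppression accumulated from recent
# responses with a slow decay (all parameter values are hypothetical).
views = np.arange(-90.0, 95.0, 5.0)              # head view, degrees
drive = np.exp(-(views ** 2) / (2 * 30.0 ** 2))  # tuning to isolated views

decay, w = 0.6, 0.3                              # suppression persistence and weight
resp = np.zeros_like(drive)
supp = 0.0
for i, d in enumerate(drive):                    # sweep views in rotation order
    resp[i] = max(d - w * supp, 0.0)             # masked response to current view
    supp = decay * supp + resp[i]                # suppression builds and decays

print("isolated-image peak:", views[np.argmax(drive)], "deg")
print("sequence peak:", views[np.argmax(resp)], "deg (earlier in the sequence)")
```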

2003
Conference Proceedings
4. Barraclough, N.E., Xiao, D., Oram, M.W., Perrett, D.I. (2003) Primate superior temporal sulcus neurons integrate visual and auditory information for biological motions. Society for Neuroscience Abstract, Vol. 29
Neurons in the superior temporal sulcus (STS) of the macaque monkey respond to stimuli of different modalities (visual, auditory, somatosensory), or to more than one of these modalities. Many studies have previously shown that STS is involved in the processing of biological motion; effective stimuli include images of whole bodies walking and hands grasping objects. We recorded from 92 STS neurons sensitive to different biological motions in the rhesus macaque (Macaca mulatta). We measured neuronal responses to movies of actions with and without the auditory information, and also to the auditory information alone. In a smaller subset of neurons we tested the neuronal response to movies of actions with auditory information appropriate and inappropriate for the actions presented. The addition of auditory information augmented the visual response in 28% of neurons and attenuated the visual response in 19% of neurons. In 50% of neurons tested, sound failed to modulate visual responses; 3% of neurons tested responded only to auditory stimuli. In 40% of the neurons that showed augmentation of the visual response with auditory information, the type of auditory information was important: the addition of sounds inappropriate to the biological motion visually presented failed to produce the augmentation found with the appropriate sounds. These results show that a high proportion of STS neurons that respond to the sight of biological motion also integrate auditory information. The neuronal sensitivity to combined visual and auditory information is in many cases specific to the identity of the action.

2001
Conference Proceedings
3. Barraclough, N.E., Tinsley, C.J., Webb, B.S., Goodson, G.R., Easton, A., Parker, A., Derrington, A.M. (2001) Second-order motion in marmoset V1 and V2. Society for Neuroscience Abstract, Vol. 27
Analysis of second-order motion is hypothesised to occur in the cortex, in a different stream from first-order motion (Wilson, H.R., Ferrera, V.P., Yo, C. (1992) Vis. Neurosci. 9). Zhou and Baker (J. Neurophysiol. 72, 1994) found responses to second-order stimuli more predominantly in area 18 than in area 17 of the cat. We report here that, in the marmoset, more V2 cells than V1 cells respond to second-order motion, and their responses are larger. Single cells from V1 and V2 of the anaesthetised marmoset, Callithrix jacchus, were tested with moving 'beat' stimuli in both the preferred and the null direction of the cells. The two 'beat' patterns were made by adding a static and a moving sinusoidal grating. In one pattern the second-order 'beat' moved in the same direction as the moving sinusoidal component. In the other pattern the second-order 'beat' moved in the opposite direction to the moving sinusoidal component. Most direction-selective V1 cells showed responses that were selective for the direction of movement of the moving sinusoidal component of the stimulus, irrespective of the direction of movement of the second-order 'beat'. Only a few cells showed small responses that were selective for the direction of movement of the second-order 'beat'. Some V2 cells were like V1 cells; others showed responses that were selective for the direction of movement of the second-order 'beat' and not the moving sinusoidal component. On average, the V2 responses that were selective for the second-order motion were bigger than those of V1 cells. These results indicate that V2 is more predominantly involved than V1 in the processing of second-order motion.
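
The 'beat' construction used here is straightforward to reproduce: summing a static grating and a drifting grating of slightly different spatial frequency produces a contrast envelope (the beat) at the difference frequency, and the envelope drifts with or against the luminance component depending on which grating has the higher spatial frequency. A minimal sketch with arbitrary, illustrative frequencies:

```python
import numpy as np

# 1D 'beat' pattern: static grating + drifting grating of nearby spatial
# frequency. Their sum carries a second-order contrast envelope at the
# difference frequency (all frequency values here are illustrative).
x = np.linspace(0.0, 1.0, 2000)      # space, arbitrary units
f_static, f_moving = 10.0, 11.0      # spatial frequencies, cycles/unit
tf = 2.0                             # temporal frequency of the moving grating, Hz

def beat_pattern(t):
    static = np.sin(2 * np.pi * f_static * x)
    moving = np.sin(2 * np.pi * (f_moving * x - tf * t))
    return static + moving           # envelope at |f_moving - f_static|

frame = beat_pattern(0.0)            # one stimulus frame; animate t to drift it

# Envelope spatial frequency and velocity (sum-to-product identity): the
# velocity changes sign if f_moving is set below f_static, reversing the beat.
f_beat = abs(f_moving - f_static)
v_beat = tf / (f_moving - f_static)
print("beat frequency:", f_beat, "c/unit; beat velocity:", v_beat, "units/s")
```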

Conference poster
2. N.E. Barraclough, C.J. Tinsley, B.S. Webb, G.R. Goodson, A. Easton, A.E. Parker, A.M. Derrington. (2001) Second-order motion in marmoset V1, V2 and V3. The physiology of cognitive processes, The Royal Society, UK
Zhou and Baker (J. Neurophysiol. 72, 1994) found responses to second-order stimuli more predominantly in area 18 than in area 17 of the cat. We report here that more V2 and V3 cells than V1 cells respond to second-order motion. Direction-selective single cells from V1, V2 and V3 of the anaesthetised marmoset were tested with moving 'beat' stimuli. Most V1 cells showed responses that were selective for the direction of movement of the first-order component of the stimulus. Only a few cells showed responses that were selective for the direction of movement of the second-order component. Some V2 and V3 cells were like V1 cells, but most preferred second-order motion. These results indicate that V2 and V3 are more predominantly involved in the processing of second-order motion.

2000
Conference Proceedings
1. Barraclough, N.E., Derrington, A.M., Felisberti, F.M. (2000) Responses to second-order patterns in primate lateral geniculate nucleus. Society for Neuroscience Abstract, Vol. 26
The mammalian visual system is able to detect movement signalled by spatiotemporal changes in contrast. Cells in areas 17 and 18 of the cat cortex will respond to a stimulus in which the movement is defined only by a modulation in its contrast (Mareschal, I. and Baker, C.L. (1998) Nature Neuroscience, 1:2). In order to extract this 'second-order' motion, a non-linear transformation must be performed at some point in the visual pathway. Here we report that LGN cells in a non-human primate produce responses suitable for the processing of second-order motion. Single cells were recorded from the LGN of the anaesthetised marmoset, Callithrix jacchus. The stimulus presented was a 'beat pattern'; this consisted of two sinusoidal luminance gratings of slightly different spatial frequencies, moving in opposite directions at different temporal frequencies. The 'beat pattern' is a second-order, contrast-defined pattern whose spatial and temporal frequencies are given by the differences between the frequencies of the two luminance-defined gratings. Fourier analysis was used to extract the response of the cells at the sum and difference frequencies. The LGN cells showed strong responses at the temporal frequencies corresponding to the sum frequency (or beat) and the difference frequency. These response components were, on average, 80% of those to the luminance gratings. The variation in amplitude of these second-order components as the contrasts of the component gratings are varied can be accounted for by modelling the overall cell response as a quadratic.
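
The analysis step described here, extracting response components at the sum and difference of the two gratings' temporal frequencies with a Fourier transform, can be sketched briefly. A quadratic nonlinearity applied to the sum of two drifting components produces exactly those components, consistent with the quadratic model mentioned above; the rate trace and frequencies below are simulated for illustration only:

```python
import numpy as np

# Simulate a quadratic response to two drifting gratings and extract the
# components at the sum and difference temporal frequencies via an FFT
# (sampling rate, frequencies, and the rate trace are all illustrative).
fs = 1000.0                                  # samples per second
t = np.arange(0.0, 2.0, 1 / fs)              # 2 s of simulated response
tf1, tf2 = 4.0, 6.0                          # temporal frequencies of the gratings

stimulus = np.sin(2 * np.pi * tf1 * t) + np.sin(2 * np.pi * tf2 * t)
rate = stimulus ** 2                         # quadratic nonlinearity -> sum/difference terms

spectrum = np.abs(np.fft.rfft(rate)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

for f in (tf2 - tf1, tf2 + tf1):             # difference and sum frequencies
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:.0f} Hz component amplitude: {spectrum[idx]:.3f}")
```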
