
in press, Experimental Brain Research

 

Disentangling the contributions of grasp and action representations in the recognition of manipulable objects Nicolas A. McNair†, Irina M. Harris 

School of Psychology, The University of Sydney, NSW 2006, Australia 

Phone: +61 2 9351 3497 

Fax: +61 2 9351 2603 

Email: [email protected]

Abstract

There is increasing evidence that the action properties of manipulable objects can play a role in object recognition, as objects with similar action properties can facilitate each other's recognition [Helbig HB, Graf M, Kiefer M (2006) The role of action representations in visual object recognition. Exp Brain Res 174: 221-228. doi: 10.1007/s00221-006-0443-5]. However, it is unclear whether this modulation is driven by the actions involved in using the object or the grasps afforded by the objects, because these factors have been confounded in previous studies. Here we attempted to disentangle the relative contributions of the action and grasp properties by using a priming paradigm in which action and grasp similarity between two objects were varied orthogonally. We found that target tools with similar grasp properties to the prime tool were named more accurately than those with dissimilar grasps. However, naming accuracy was not affected by the similarity of action properties between the prime and target tools. This suggests that knowledge about how an object is used is not automatically accessed when identifying a manipulable object. What are automatically accessed are the transformations necessary to interact directly with the object – i.e., the manner in which one grasps the object.

Keywords: Action, Grasp, Object recognition, Priming, Representations 

 


Introduction

Much of our early understanding of the mental representation of tools has come from the disorder apraxia. Apraxia is characterised by difficulty in imitating and performing gestures and in using tools, and is typically associated with damage to left hemisphere structures, particularly the parietal lobe (Goldenberg, 2009). It is commonly subcategorised into ideational apraxia (IA), associated with impaired access to semantic information about object function (i.e., what an object is used for), and ideomotor apraxia (IM), associated with impaired knowledge about object manipulation (i.e., how to use an object). As a result, current models of tool-use point to a distributed representation incorporating two semantic systems: one based on conceptual knowledge about the function of tools, and the other based on action semantics related to the manipulation of the tool itself (Buxbaum, 2001). These stored representations of manipulation knowledge, referred to as 'gesture engrams', include the specific grasp and articulated movements necessary to achieve a particular action. The division between the representation of function and manipulation knowledge related to tools is supported by lesion (e.g., Buxbaum & Saffran, 2002) and neuroimaging (e.g., Boronat et al., 2005) studies. However, brain damage associated with impaired semantic knowledge about tool function and manipulation has been shown to occur without concomitant impairment in tool-use (Negri et al., 2007). Additionally, errors in the grasps associated with using a tool are either small (Randerath, Li, Goldenberg, & Hermsdörfer, 2009) or largely non-existent (Osiurak et al., 2008a) in apraxia, and show no correlation with the degree of impairment in tool-use, implying independent representations of functional action and grasp attributes.

More recently, research has focused on an idea, originally proposed by Gibson (1979), that tools automatically activate their associated use – or 'affordance'. This idea links neatly with feature-based hypotheses about the organisation of semantic knowledge, which suggest that information about a particular object feature (in this case, object 'use') is stored in the same areas that were activated during sensorimotor experience of it (Noppeney, 2009). In other words, the performance, or observation, of tool-use activates motor programs that then become linked with the visual and conceptual representation of that object. Indeed, neuroimaging studies have shown left-hemisphere fronto-parietal sensorimotor activation, including ventral premotor cortex (PMv), inferior parietal lobule (IPL), and intra-parietal sulcus (IPS), while participants view manipulable objects (Chao & Martin, 2000; Kellenbach, Brett, & Patterson, 2003; Grèzes & Decety, 2002) and during the planning and execution of actions with such objects (Kroliczak & Frey, 2009; Vingerhoets, Vandekerckhove, Honore, Vandemaele, & Achten, 2011). Behaviourally, it has been demonstrated that the motor affordance of manipulable objects can affect movement execution. For example, when particular responses are made while viewing graspable stimuli, reaction times are faster when those responses are compatible with the manner in which the object itself would be grasped – an affordance-based compatibility effect (Ellis & Tucker, 2000). A spatial compatibility effect has also been demonstrated with tools (Tucker & Ellis, 2004): responses made with the left or right hand are faster when the task-irrelevant handle of a viewed object points towards the left or right, respectively. This latter effect also seems to depend on the object being presented in 'reachable space' (Costantini et al., 2010). However, neither of these effects necessarily demonstrates motor activity associated with the use of the object. For example, the affordance compatibility effect is also found for novel objects (Vainio, Ellis, Tucker, & Symes, 2006) and is sensitive to the graspability of a tool but not the degree of familiarity with its use (Anderson, Yamagishi, & Karavia, 2002). Therefore, these effects may simply represent "embryonic reaching and grasping movements towards graspable stimuli" (Vingerhoets, Vandamme, & Vercammen, 2009, p. 488) in general. Furthermore, Anderson et al. (2002) showed that spatial compatibility effects may be the result of an attentional bias towards a particular part of the object (although see Pappas & Mack, 2008), and not a result of activated motor affordances. Based on these findings, it remains uncertain whether the actions, or even grasps, associated with a tool form part of its representation.

Some specific evidence purporting to support a role of motor representations in the recognition of manipulable objects comes from a recent study by Helbig, Graf, and Kiefer (2006). This study demonstrated that manipulable objects could prime the identification of other manipulable objects that have similar actions (e.g., a frying pan and a dustpan), compared to objects that are similarly shaped but have different actions associated with them (e.g., a frying pan and a banjo). We have also recently found that manipulable objects show repetition priming when they are repeated under rapid serial visual presentation (RSVP) conditions, in contrast to non-manipulable objects, which produce repetition blindness (i.e., a failure to detect the second occurrence of the object) under the same conditions (Harris, Murray, Hayward, O'Callaghan, & Andrews, submitted). However, in these previous studies, the object pairs used were also similar in the way they are grasped. This makes it difficult to determine whether the priming effect was mediated by the action or the grasp properties shared by the two objects.

In the present experiment we attempted to disentangle action and grasp contributions to object priming by using a priming paradigm similar to that of Helbig et al. (2006). We define grasp here as the specific positioning of the fingers and hand when grasping the tool as if to use it. The action associated with a tool encompasses the movements made by the fingers, wrist, and forearm, irrespective of the particular grasp employed during the movement. For example, a key and a screwdriver are grasped differently, but the actions associated with their use are similar (i.e., a twisting action). Conversely, a spanner and a screwdriver are grasped in a similar manner but are associated with very different actions. To achieve this, we employed sets of tools in which we systematically varied the similarity of the associated grasps versus actions between the target and prime objects.
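To make the resulting 2 × 2 design concrete, the sketch below codes a single stimulus set in this notation. Only the key/screwdriver/spanner pairings come from the example above; the other two targets are hypothetical placeholders, and the eight real sets appear in Fig. 1.

```python
# One stimulus set, keyed by grasp (G) and action (A) similarity to the prime.
# Only "spanner" and "key" follow from the example in the text; the G+A+ and
# G-A- entries are hypothetical placeholders for illustration.
stimulus_set = {
    "prime": "screwdriver",
    "G+A+": "corkscrew",    # hypothetical: similar grasp, similar twisting action
    "G+A-": "spanner",      # similar grasp, different action
    "G-A+": "key",          # different grasp, similar twisting action
    "G-A-": "paintbrush",   # hypothetical: different grasp, different action
}
```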

Method

Participants

Eighteen right-handed participants (10 female; mean age = 19.2 years) with normal or corrected-to-normal vision were recruited from amongst undergraduate psychology students at the University of Sydney in exchange for course credit. The experimental procedures were approved by the Human Research Ethics Committee of the University of Sydney, and all participants gave their informed consent to take part in the experiment.

Stimuli and Apparatus

Stimuli consisted of gray-scale pictures taken from either a photo-object database (Hemera Inc., Canada) or Google Image Search. Sets of tools were created around a base prime tool. Targets consisted of a tool with a similar grasp and action (G+A+), a tool with a similar grasp but dissimilar action (G+A-), a tool with a dissimilar grasp but similar action (G-A+), and finally a tool with a dissimilar grasp and action (G-A-). Eight such sets were created (see Figure 1). A separate pool of 40 tool pictures was used for practice trials and the target-duration adjustment procedure (outlined below), while a further 21 pictures of animals were used for the norming studies.

INSERT FIGURE 1 APPROXIMATELY HERE  

Picture stimuli were resized to 200 × 200 pixels. Displayed on a 17" CRT monitor at a resolution of 1024 × 768 (120 Hz refresh), the picture size was approximately 7.4 × 7.4 cm. In order to match the visual angle (6.5°) of the stimuli used by Helbig et al. (2006), participants sat at a viewing distance of approximately 65 cm. Participants familiarised themselves with the pictures before starting the experiment. Stimulus presentation was controlled via MATLAB, using the Psychophysics Toolbox (version 3; Brainard, 1997; Pelli, 1997; Kleiner et al., 2007), running on an Apple Mac Mini computer.
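As a sanity check on these viewing parameters, the stated picture size and viewing distance reproduce the target visual angle (a worked calculation using only the figures above):

```python
import math

size_cm = 7.4       # on-screen picture size (200 x 200 pixels)
distance_cm = 65.0  # approximate viewing distance

# Visual angle subtended by the picture: 2 * atan(size / (2 * distance))
angle_deg = 2 * math.degrees(math.atan(size_cm / (2 * distance_cm)))
print(f"{angle_deg:.1f} deg")  # ~6.5 deg, matching Helbig et al. (2006)
```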

We conducted a number of norming measures similar to those used by Helbig et al. (2006). First, 10 people named the tools without time constraint; the most common name produced was subsequently used to describe each picture. We then asked 10 new participants to identify the stimuli under the same procedure to be used in the experiment itself (see below for full details). Briefly, on each trial a masked prime picture was shown, followed by a masked target picture. Participants then had to select the names of the two pictures shown during the trial from a list of all the names of tools used in the experiment. In order to avoid any priming effects while establishing identification norms, the prime tool in each set was replaced with a random animal picture. Although this meant that there was no relationship between the prime and target stimuli, the G/A nomenclature is still used in describing the results, to distinguish the four target groups. Data were analysed using Friedman's test, which found no significant difference in base naming accuracy between the four groups of targets (G+A+: 87.8%; G+A-: 86.5%; G-A+: 87.5%; G-A-: 87.5%; p = .589). Another group of participants (n = 15) then rated, on a scale of 1 to 6, the grasp similarity, action similarity, shape similarity, and contextual similarity between each target and the prime for each of the separate tool sets. Wilcoxon signed-rank tests were performed to examine the same pre-planned comparisons as those we intended to conduct in the experiment itself. Specifically, we compared: 1) tools with grasps categorised as similar to the prime (G+: G+A+ & G+A-) to tools with grasps dissimilar to the prime (G-: G-A+ & G-A-); 2) tools with actions categorised as similar to the prime (A+: G+A+ & G-A+) to tools with actions dissimilar to the prime (A-: G+A- & G-A-); 3) the additive effect of grasp-similarity above that of action-similarity (G+A+ vs. G-A+); and 4) the additive effect of action-similarity above that of grasp-similarity (G+A+ vs. G+A-). The outcome of each similarity measure is outlined individually below.

Grasp Similarity Ratings

Participants were asked to rate the similarity of the hand grasp (i.e., the way they would shape their fingers and hand) when using each tool compared to the reference (prime) tool, ignoring the particular actions associated with the use of each object. As expected, similar-grasp tools (G+: 4.8) received higher ratings overall than dissimilar-grasp tools (G-: 2.2; p = .003). Furthermore, grasp ratings for action-similar (A+: 3.7) and action-dissimilar (A-: 3.3) tools did not differ significantly from each other (p = .874). The two similar-grasp conditions (G+A+: 5.0; G+A-: 4.5) also did not differ significantly from each other (p = .143), but the G+A+ condition had more similar grasps to the prime than the G-A+ condition (2.3; p = .004).

Action Similarity Ratings

Participants were asked to rate the similarity of the motor action performed when using each target tool compared to the prime tool, ignoring the particular hand grasps associated with each object. The similar-action tools (A+: 4.6) had significantly higher action-similarity ratings than the dissimilar-action tools (A-: 2.0; p = .003). However, there was no significant difference in action similarity between the primes and those tools with either similar (G+: 3.4) or dissimilar (G-: 3.1) grasps (p = .294). Furthermore, the two similar-action conditions (G+A+: 4.6; G-A+: 4.5) did not differ significantly from each other (p > .999), but the G+A+ condition had a significantly higher action rating than the G+A- condition (2.3; p = .003). Thus, our attempt to decouple grasp and action similarity appears to have been successful.

Shape Similarity Ratings

Here, participants were asked to rate how similar the visual shape of each tool was to that of the prime tool. Tools with similar grasps (G+: 2.8) were judged to have more similar shapes to the prime than tools with dissimilar grasps (G-: 2.0; p = .003). Tools with similar actions (A+: 3.0) were also rated as having more similar shapes than tools with dissimilar actions (A-: 1.9; p = .003). Furthermore, the G+A+ condition (3.4) had higher shape-similarity ratings than both the G+A- condition (2.3; p = .003) and the G-A+ condition (2.6; p = .004). Although not ideal, we did not consider this a major concern, given that Helbig et al. (2006) elicited priming of congruent action targets despite controlling for visual similarity. We also address this issue in the Discussion below.

Contextual Similarity Ratings

Finally, participants were asked to rate the similarity of the context in which each tool is used to that of the prime tool. Tools with similar grasps to the prime (G+: 2.1) had similar context ratings to those with dissimilar grasps (G-: 2.0; p = .470). Context ratings were also similar between the action-similar (A+: 2.2) and action-dissimilar (A-: 2.0) tools (p > .999), as were those between the G+A+ (2.2) and G-A+ (2.1) conditions (p > .999), and between the G+A+ (2.2) and G+A- (2.1) conditions (p > .999).

Procedure

The trial procedure was based on that used by Helbig et al. (2006; see Figure 2), with some modifications. After an initial fixation cross in the centre of the screen, trials began with a blank screen for 700 ms. Following this, the prime picture was presented for 167 ms and then replaced by a random mask for 42 ms. After an intermediary blank screen of 67 ms, the target picture was shown for a duration determined by a thresholding procedure conducted before commencing the experiment proper. The target picture was replaced by a second mask shown for 1000 ms. At the end of the trial, participants were shown a screen with a list of 40 object names and were instructed to use the arrow keys and spacebar on the keyboard to highlight and select the name of the first picture shown during the trial. The names were then reset, and participants selected the name of the second picture they had seen, before progressing to the next trial. Participants were given unlimited time to respond.
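For reference, the trial events and durations above can be summarised as follows. This is a descriptive sketch rather than the actual Psychtoolbox presentation code; the target duration is the per-participant value set by the thresholding procedure described next.

```python
def trial_timeline(target_ms):
    """Event sequence for one trial; durations in ms (None = self-paced)."""
    return [
        ("fixation cross", None),
        ("blank screen", 700),
        ("prime picture", 167),         # 20 frames at 120 Hz
        ("mask", 42),                   # 5 frames
        ("blank ISI", 67),              # 8 frames
        ("target picture", target_ms),  # staircase-controlled (mean ~217 ms)
        ("mask", 1000),
        ("name selection", None),       # unlimited time; prime name, then target name
    ]
```

Note that each fixed duration corresponds to an integer number of frames on the 120 Hz display (one frame ≈ 8.33 ms).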

INSERT FIGURE 2 APPROXIMATELY HERE  

The experiment was divided into three phases: 1) a practice phase consisting of 10 trials; 2) a thresholding phase, during which the duration of the target picture was adjusted using a 3-down-1-up staircase procedure in order to achieve an accuracy of approximately 80%; and 3) the experimental phase, using the target duration achieved during thresholding (mean duration across participants: 217 ms). During the practice and thresholding phases, prime and target stimuli were drawn randomly from a collection of pictures. Each phase had its own collection that did not overlap with the other or with the pictures in the sets used during the experimental phase. The experimental phase was divided into two blocks. Each block comprised 128 trials, consisting of 4 repeats of each condition (G+A+, G+A-, G-A+, and G-A-) for each of the 8 sets of tools. Across the two blocks, this resulted in each condition being presented 64 times.
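A minimal sketch of the 3-down-1-up rule as it might be applied to the target duration follows; the step size (assumed here to be one frame at 120 Hz) and the floor value are our assumptions, as they are not reported in the text. Three consecutive correct responses shorten the target and any error lengthens it, so the procedure converges on an accuracy of about 0.5^(1/3) ≈ 79%, consistent with the ~80% target.

```python
def staircase_update(duration_ms, run_correct, correct, step_ms=8.33):
    """3-down-1-up update of the target duration after one trial.

    duration_ms: current target duration
    run_correct: consecutive correct responses so far at this duration
    correct:     whether the current trial was answered correctly
    step_ms:     assumed step of one 120 Hz frame (not reported in the text)
    """
    if not correct:                # 1 up: any error makes the task easier
        return duration_ms + step_ms, 0
    run_correct += 1
    if run_correct == 3:           # 3 down: three correct in a row makes it harder
        return max(duration_ms - step_ms, step_ms), 0
    return duration_ms, run_correct
```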

Statistical Analyses

Because of negative skew in the data, target accuracy across the four conditions was analysed with Friedman's nonparametric test for related measures (using the chi-square approximation). Only trials in which the prime was accurately identified were included (mean across participants: 97%). A set of more specific comparisons, using Wilcoxon signed-rank tests, was planned in advance based on our hypotheses. First, we compared similar actions (G+A+/G-A+) to dissimilar actions (G+A-/G-A-). Based on Helbig et al. (2006), we would expect tools with similar actions to the prime to have higher accuracy than those with dissimilar actions. Second, we compared similar grasps (G+A+/G+A-) to dissimilar grasps (G-A+/G-A-) to investigate whether tools with more similar grasps to the prime would be identified more accurately than tools with dissimilar grasps. Finally, we compared the additive effect of similar action on similar grasp (G+A+ vs. G+A-) and the additive effect of similar grasp on similar action (G+A+ vs. G-A+). This was done to determine whether action similarity provides a benefit to accuracy over and above that from grasp similarity, and/or whether grasp similarity provides any benefit to accuracy over the effect of action similarity. A Bonferroni-style correction for the number of comparisons conducted was applied (N.B. corrected p-values are reported below, using an α of .05).
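In outline, the omnibus test and one of the planned comparisons could be computed as below. This is a sketch only: it assumes per-participant condition means in the column order G+A+, G+A-, G-A+, G-A-, and it averages the two relevant conditions per participant for the pooled comparisons, which may differ from how the original analysis pooled trials.

```python
import numpy as np
from scipy import stats

# 18 participants x 4 conditions, columns [G+A+, G+A-, G-A+, G-A-];
# random placeholder standing in for the real accuracy data.
acc = np.random.rand(18, 4)

# Omnibus Friedman test (chi-square approximation) across the four conditions
chi2, p_omnibus = stats.friedmanchisquare(acc[:, 0], acc[:, 1], acc[:, 2], acc[:, 3])

# Planned comparison: grasp-similar (G+) vs. grasp-dissimilar (G-) targets
g_plus = acc[:, [0, 1]].mean(axis=1)
g_minus = acc[:, [2, 3]].mean(axis=1)
w_stat, p_grasp = stats.wilcoxon(g_plus, g_minus)

# Bonferroni-style correction for the four planned comparisons
p_grasp_corrected = min(p_grasp * 4, 1.0)
```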

Results

Accuracy data are shown in Figure 3 (mean accuracy – G+A+: 91%, G+A-: 90%, G-A+: 84%, G-A-: 85%). Statistical analysis indicated a significant effect of condition on accuracy (χ²(3) = 16.28, p = .001). The planned comparisons revealed that tools with similar grasps were more accurately identified than those with dissimilar grasps (G+: 91%, G-: 85%; p = .003). In contrast, tools with similar actions were identified as readily as those with dissimilar actions (A+: 88%, A-: 87%; p > .999). Similarity in action added nothing to accuracy over that provided by similarity in grasp (G+A+: 91%, G+A-: 90%; p = .857), while similarity in grasp provided a significant improvement in accuracy above that from action similarity (G+A+: 91%, G-A+: 84%; p = .008).

INSERT FIGURE 3 APPROXIMATELY HERE

Discussion

The results from this experiment demonstrate that only grasp similarity to the prime object enhanced participants' ability to identify the target object; action similarity did not improve target identification. This implies that action representations are not automatically accessed when forming the representation of a manipulable object for the purposes of identifying it. Instead, what appear to be automatically accessed are the motor transformations necessary to interact directly with an object – i.e., the way it is grasped.

There are two potential confounds, related to the physical properties of the stimuli, which may provide an alternative explanation for the current results. The first is that, because the stimuli were presented in succession, stimuli with similar grasps may have had handles that fell on the same part of the screen. However, this was not borne out upon inspection of the stimuli: G- stimuli were as likely as G+ stimuli to have handles falling in the same part of the screen as the prime tool. The second potential confound relates to the significant differences in shape ratings we obtained during pre-screening of the experimental stimuli. Certain conditions were judged to be more similar in shape to the priming tool. However, the pattern of these ratings did not match the pattern of accuracy priming we observed during the experiment. While the G+A+ condition had the highest shape similarity (3.4) as well as the highest accuracy (91%), the condition with the next highest shape similarity (G-A+: 2.6) had the lowest accuracy (84%). This suggests that the shape similarity between the objects did not play a major role in facilitating identification.

One possible way that grasp representations could facilitate object recognition is through attentional processes. For example, processing the graspable features (i.e., the handle) of an object may improve the orienting of attention to similar features in subsequent objects, perhaps through generating an implicit motor response to the handle of the object. However, Pappas and Mack (2008) found that affordance-based compatibility effects did not rely on conscious awareness of the prime. In their study, tools rendered invisible through brief presentation and backward masking still facilitated responses with the appropriate hand. Similarly, Almeida and colleagues (Almeida, Mahon, Nakayama, & Caramazza, 2008; Almeida, Mahon, & Caramazza, 2010) found priming for tools presented under conditions of continuous flash suppression, which prevents conscious perception of the stimuli. These findings argue against attention being necessary for priming of the motor system by visual processing of objects.

Alternatively, as argued by Helbig et al. (2006), the effect may occur through direct interaction between the motor system and ventral areas involved in visual processing of objects. The present study is consistent with this view, but suggests that the particular information involved in these interactions is related to how the object is grasped, rather than how the object is used.

It remains possible that the action properties of an object may nonetheless become connected to the visual and semantic representations of that object. However, this would not be achieved automatically in the course of visual processing of the object. Osiurak and colleagues (Osiurak, Jarry, & Le Gall, 2010, 2011; Osiurak et al., 2008a, 2008b) have suggested that such interaction between motor and visual processing systems might rely on what they termed 'technical reasoning'. Technical reasoning is the ability to take abstract technical laws regarding actions (e.g., levering) and apply them to an object (such as a tool) in order to achieve a particular end in a given context. Through this application, the specific motor programs necessary to complete the action are computed. Because these specific programs are idiosyncratic to the context in which they are executed, it is argued that it is not feasible for them to be stored as part of the representation of manipulable objects. Rather, the motor system is specifically activated by appreciating a particular action afforded by the object. Using prime objects that are shown performing a particular action might allow this to be investigated. Recently, Helbig, Steinwender, Graf, and Kiefer (2010) conducted such an experiment, in which they showed participants a short video clip of empty hand(s) mimicking the use of a tool. They found that subsequent objects that had a similar action associated with their use were better recognised than objects that did not. This appears to provide more convincing evidence for functional action representations being involved in object recognition. Unfortunately, while some of the actions used were associated with the 'use' of objects, these were mixed with other actions associated with merely 'grasping' the objects. Thus, in this recent study, as in their previous experiment (Helbig et al., 2006), action congruency is still confounded with grasp congruency.

In conclusion, the present study showed that recognition of manipulable objects is improved when they are preceded by an object with similar action properties. Importantly, though, the specific action properties involved are those related to directly interacting with the object by grasping it, not those associated with using the object. However, as this study only used 2D pictures, further research may be needed to verify that these results translate to real tools. Nevertheless, these results highlight the need to consider a pervasive confound in investigations of the role of action representations in object processing: that action congruency is often correlated with grasp congruency.

Acknowledgments

This research was funded by Australian Research Council Discovery Grant DP0879206. Irina Harris was supported by an ARC Future Fellowship. We are grateful to Nicole De Fina for assistance with data collection.

References

Almeida J, Mahon BZ, Caramazza A (2010) The role of the dorsal visual processing stream in tool identification. Psychol Sci 21: 772-778. doi: 10.1177/0956797610371343

Almeida J, Mahon BZ, Nakayama K, Caramazza A (2008) Unconscious processing dissociates along categorical lines. Proc Natl Acad Sci USA 105: 15214-15218. doi: 10.1073/pnas.0805867105

Anderson SJ, Yamagishi N, Karavia V (2002) Attentional processes link perception and action. Proc R Soc Lond B 269: 1225-1232. doi: 10.1098/rspb.2002.1998

Boronat CB, Buxbaum LJ, Coslett HB, Tang K, Saffran EM, Kimberg DY, Detre JA (2005) Distinctions between manipulation and function knowledge of objects: evidence from functional magnetic resonance imaging. Cogn Brain Res 23: 361-373. doi: 10.1016/j.cogbrainres.2004.11.001

Brainard DH (1997) The psychophysics toolbox. Spatial Vision 10: 433-436. doi: 10.1163/156856897X00357

Buxbaum LJ (2001) Ideomotor apraxia: a call to action. Neurocase 7: 445-458. doi: 10.1093/neucas/7.6.445

Buxbaum LJ, Saffran EM (2002) Knowledge of object manipulation and object function: dissociations in apraxic and non-apraxic subjects. Brain Lang 82: 179-199. doi: 10.1016/S0093-934X(02)00014-7

Chao LL, Martin A (2000) Representation of manipulable man-made objects in the dorsal stream. NeuroImage 12: 478-484. doi: 10.1006/nimg.2000.0635

Costantini M, Ambrosini E, Tieri G, Sinigaglia C, Committeri G (2010) Where does an object trigger an action? An investigation about affordances in space. Exp Brain Res 207: 95-103. doi: 10.1007/s00221-010-2435-8

Ellis R, Tucker M (2000) Micro-affordance: the potentiation of components of action by seen objects. Brit J Psychol 91: 451-471. doi: 10.1348/000712600161934

Gibson JJ (1979) The ecological approach to visual perception. Houghton Mifflin, Boston

Goldenberg G (2009) Apraxia and the parietal lobes. Neuropsychologia 47: 1449-1459. doi: 10.1016/j.neuropsychologia.2008.07.014

Grèzes J, Decety J (2002) Does visual perception of object afford action? Evidence from a neuroimaging study. Neuropsychologia 40: 212-222. doi: 10.1016/S0028-3932(01)00089-6

Harris IM, Murray AM, Hayward WG, O'Callaghan C, Andrews S (submitted) Repetition blindness reveals differences between the representation of manipulable and non-manipulable objects. Manuscript submitted for publication

Helbig HB, Graf M, Kiefer M (2006) The role of action representations in visual object recognition. Exp Brain Res 174: 221-228. doi: 10.1007/s00221-006-0443-5

Helbig HB, Steinwender J, Graf M, Kiefer M (2010) Action observation can prime visual object recognition. Exp Brain Res 200: 251-258. doi: 10.1007/s00221-009-1953-8

Kellenbach ML, Brett M, Patterson K (2003) Actions speak louder than functions: the importance of manipulability and action in tool representation. J Cogn Neurosci 15(1): 30-46. doi: 10.1162/089892903321107800

Kleiner M, Brainard D, Pelli D (2007) What's new in Psychtoolbox-3? Perception 36, ECVP Abstract Supplement

Kroliczak G, Frey SH (2009) A common network in the left cerebral hemisphere represents planning of tool use pantomimes and familiar intransitive gestures at the hand-independent level. Cereb Cortex 19(10): 2396-2410. doi: 10.1093/cercor/bhn261

Negri GA, Lunardelli A, Reverberi C, Gigli GL, Rumiati RI (2007) Degraded semantic knowledge and accurate object use. Cortex 43: 376-388. doi: 10.1016/S0010-9452(08)70463-5

Noppeney U (2009) The neural systems of tool and action semantics: a perspective from functional imaging. J Physiol Paris 102: 40-49. doi: 10.1016/j.jphysparis.2008.03.009

Osiurak F, Aubin G, Allain P, Jarry C, Etcharry-Bouyx F, Richard I, Le Gall D (2008a) Different constraints on grip selection in brain-damaged patients: object use versus object transport. Neuropsychologia 46: 2431-2434. doi: 10.1016/j.neuropsychologia.2008.03.018

Osiurak F, Aubin G, Allain P, Jarry C, Richard I, Le Gall D (2008b) Object utilization and object usage: a single-case study. Neurocase 14(2): 169-183. doi: 10.1080/13554790802108372

Osiurak F, Jarry C, Le Gall D (2010) Grasping the affordances, understanding the reasoning: toward a dialectical theory of human tool use. Psychol Rev 117(2): 517-540. doi: 10.1037/a0019004

Osiurak F, Jarry C, Le Gall D (2011) Re-examining the gesture engram hypothesis. New perspectives on apraxia of tool use. Neuropsychologia 49: 299-312. doi: 10.1016/j.neuropsychologia.2010.12.041

Pappas Z, Mack A (2008) Potentiation of action by undetected affordant objects. Visual Cognition 16(7): 892-915. doi: 10.1080/13506280701542185

Pelli DG (1997) The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spatial Vision 10: 437-442. doi: 10.1163/156856897X00366

Randerath J, Li Y, Goldenberg G, Hermsdörfer J (2009) Grasping tools: effects of task and apraxia. Neuropsychologia 47: 497-505. doi: 10.1016/j.neuropsychologia.2008.10.005

Tucker M, Ellis R (2004) Action priming by briefly presented objects. Acta Psychol 116: 185-203. doi: 10.1016/j.actpsy.2004.01.004

Vainio L, Ellis R, Tucker M, Symes E (2006) Manual asymmetries in visually primed grasping. Exp Brain Res 175: 395-406. doi: 10.1007/s00221-006-0378-x

Vingerhoets G, Vandamme K, Vercammen A (2009) Conceptual and physical object qualities contribute differently to motor affordances. Brain Cogn 69: 481-489. doi: 10.1016/j.bandc.2008.10.003

Vingerhoets G, Vandekerckhove E, Honore P, Vandemaele P, Achten E (2011) Neural correlates of pantomiming familiar and unfamiliar tools: action semantics versus mechanical problem solving? Hum Brain Mapp 32(6): 905-918. doi: 10.1002/hbm.21078


Figure Captions

Fig. 1 The eight stimulus sets used in the experiment (Prime: priming tool; G+A+: similar-grasp/similar-action target; G+A-: similar-grasp/dissimilar-action target; G-A+: dissimilar-grasp/similar-action target; G-A-: dissimilar-grasp/dissimilar-action target)

Fig. 2 Trial sequence used in the experiment

Fig. 3 Mean accuracy (%) across the target conditions: G+A+ (similar grasp, similar action); G+A- (similar grasp, dissimilar action); G-A+ (dissimilar grasp, similar action); and G-A- (dissimilar grasp, dissimilar action). ** = p < .01
