Visual and linguistic cues to graspable objects.

Andriy Myachykov, Rob Ellis, Angelo Cangelosi, Martin Fischer

Research output: Contribution to journal › Article › peer-review



Two experiments investigated (1) how activation of manual affordances is triggered by visual and linguistic cues to manipulable objects and (2) whether graspable object parts play a special role in this process. Participants pressed a key to categorize manipulable target objects co-presented with manipulable distractor objects on a computer screen. Three factors were varied in Experiment 1: (1) the target’s and (2) the distractor’s handles’ orientation congruency with the lateral manual response and (3) the visual focus on one of the objects. In Experiment 2, a linguistic cue factor was added to these three factors – participants heard the name of one of the two objects prior to the target display onset. Analysis of participants’ motor and oculomotor behaviour confirmed that perceptual and linguistic cues potentiated activation of grasp affordances. Both target- and distractor-related affordance effects were modulated by the presence of visual and linguistic cues. However, a differential visual-attention mechanism subserved activation of compatibility effects associated with target and distractor objects. We also registered an independent implicit attention attraction effect from objects’ handles suggesting that graspable parts automatically attract attention during object viewing. This effect was further amplified by visual but not linguistic cues, thus providing initial evidence for a recent hypothesis about differential roles of visual and linguistic information in potentiating stable and variable affordances (Borghi, 2012).
Original language: English
Pages (from-to): 545-559
Journal: Experimental Brain Research
Issue number: 4
Publication status: Published - 3 Jul 2013


