TY - JOUR
T1 - ISign
T2 - An architecture for humanoid assisted sign language tutoring
AU - Kose, Hatice
AU - Akalin, Neziha
AU - Yorganci, Rabia
AU - Ertugrul, Bekir S.
AU - Kivrak, Hasan
AU - Kavak, Semih
AU - Ozkul, Ahmet
AU - Gurpinar, Cemal
AU - Uluer, Pinar
AU - Ince, Gökhan
N1 - Publisher Copyright:
© Springer International Publishing Switzerland 2015.
PY - 2015
Y1 - 2015
N2 - This paper investigates the role of interaction and communication kinesics in human-robot interaction. It is based on a project on Sign Language (SL) tutoring through interaction games with humanoid robots. The aim of the study is to design a computational framework that motivates children with communication impairments (e.g., ASD and hearing impairments) to understand and imitate the signs performed by the robot using basic upper-torso gestures and sound in a turn-taking manner. The framework consists of modular computational components that endow the robot with the capability of perceiving the children's actions, carrying out a game or storytelling task, and tutoring the children in any desired mode, i.e., supervised or semi-supervised. Visual (colored cards), vocal (storytelling, music), touch (tactile sensors on the robot used to communicate), and motion (recognition and performance of gestures, including signs) cues are proposed for multimodal communication between the robot, the child, and the therapist/parent. We present an empirical and exploratory study investigating the effect of basic non-verbal gestures, consisting of hand movements and body and face gestures, expressed by a humanoid robot; having comprehended the word, the child gives relevant feedback in SL or visually to the robot, according to the context of the game.
AB - This paper investigates the role of interaction and communication kinesics in human-robot interaction. It is based on a project on Sign Language (SL) tutoring through interaction games with humanoid robots. The aim of the study is to design a computational framework that motivates children with communication impairments (e.g., ASD and hearing impairments) to understand and imitate the signs performed by the robot using basic upper-torso gestures and sound in a turn-taking manner. The framework consists of modular computational components that endow the robot with the capability of perceiving the children's actions, carrying out a game or storytelling task, and tutoring the children in any desired mode, i.e., supervised or semi-supervised. Visual (colored cards), vocal (storytelling, music), touch (tactile sensors on the robot used to communicate), and motion (recognition and performance of gestures, including signs) cues are proposed for multimodal communication between the robot, the child, and the therapist/parent. We present an empirical and exploratory study investigating the effect of basic non-verbal gestures, consisting of hand movements and body and face gestures, expressed by a humanoid robot; having comprehended the word, the child gives relevant feedback in SL or visually to the robot, according to the context of the game.
KW - Humanoid Robots
KW - Interaction games
KW - Non-verbal communication
KW - Sign Language
UR - http://www.scopus.com/inward/record.url?scp=84926039758&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-12922-8_6
DO - 10.1007/978-3-319-12922-8_6
M3 - Article
AN - SCOPUS:84926039758
SN - 1610-7438
VL - 106
SP - 157
EP - 184
JO - Springer Tracts in Advanced Robotics
JF - Springer Tracts in Advanced Robotics
ER -