Automatic Sign Dance Synthesis from Gesture-based Sign Language

Naoya Iwamoto, Hubert P. H. Shum, Wakana Asahina, Shigeo Morishima

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review



Automatic dance synthesis has become increasingly popular due to growing demand in computer games and animation. Existing research generates dance motions with little consideration for the context of the music, whereas professional dancers choreograph according to the lyrics and musical features. In this research, we focus on a particular genre of dance known as sign dance, which combines gesture-based sign language with full-body dance motion. We propose a system that automatically generates sign dance from a piece of music and its corresponding sign gestures. The core of the system is a Sign Dance Model, trained by multiple regression analysis to represent the correlations between sign dance and sign gesture/music, together with a set of objective functions that evaluate the quality of the synthesized sign dance. Our system can be applied to music visualization, allowing people with hearing difficulties to understand and enjoy music.
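The abstract's core idea, mapping sign-gesture and music features to dance motion parameters via multiple regression, can be illustrated with a minimal sketch. All dimensions and feature names below are hypothetical (synthetic data standing in for per-frame gesture/music features and dance parameters), not the authors' actual model:

```python
import numpy as np

# Hypothetical setup: per-frame sign-gesture features (e.g. joint angles)
# concatenated with music features (e.g. beat strength, intensity), mapped
# by multiple linear regression to full-body dance motion parameters.
rng = np.random.default_rng(0)
n_frames, n_gesture, n_music, n_dance = 200, 12, 4, 30

gesture = rng.normal(size=(n_frames, n_gesture))   # stand-in gesture features
music = rng.normal(size=(n_frames, n_music))       # stand-in music features
X = np.hstack([gesture, music])                    # combined input features

# Synthetic "ground truth" mapping plus noise, standing in for training data.
true_W = rng.normal(size=(n_gesture + n_music, n_dance))
Y = X @ true_W + 0.01 * rng.normal(size=(n_frames, n_dance))

# Multiple regression: solve for weights W minimising ||X W - Y||^2.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Synthesis step: predict dance motion parameters from input features.
Y_pred = X @ W
```

In the paper's pipeline, candidate motions produced this way would additionally be scored by objective functions measuring sign-dance quality; this sketch covers only the regression mapping.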
Original language: English
Title of host publication: Motion, Interaction and Games (MIG ’19)
Subtitle of host publication: October 28–30, 2019, Newcastle upon Tyne, United Kingdom
Editors: Hubert P. H. Shum, Edmond S. L. Ho, Marie-Paule Cani, Tiberiu Popa, Daniel Holden, He Wang
Place of publication: New York, NY, USA
Number of pages: 9
ISBN (Electronic): 9781450369947
Publication status: Published - 28 Oct 2019
Event: MIG 2019: 12th annual ACM/SIGGRAPH conference on Motion, Interaction and Games - Northumbria University, Newcastle upon Tyne, United Kingdom
Duration: 28 Oct 2019 – 30 Oct 2019


Conference: MIG 2019
Country/Territory: United Kingdom
City: Newcastle upon Tyne


