Automatic Dance Generation System Considering Sign Language Information

Wakana Asahina, Naoya Iwamoto, Hubert P. H. Shum, Shigeo Morishima

Research output: Contribution to conference › Paper › peer-review

2 Citations (Scopus)
2 Downloads (Pure)

Abstract

In recent years, thanks to the development of 3DCG animation editing tools (e.g. MikuMikuDance), many 3D character dance animations have been created by amateur users. However, it is very difficult to create choreography from scratch without technical knowledge. Shiratori et al. [2006] developed an automatic dance generation system that considers the rhythm and intensity of dance motions. However, each segment is selected randomly from a database, so the generated dance motion carries no linguistic or emotional meaning. Takano et al. [2010] developed a human motion generation system that uses motion labels, but the labels are simple ones such as "running" or "jump", so the system cannot generate motions that express emotion. In reality, professional dancers create choreography based on musical features and lyrics, expressing emotion and how they feel about the music. In our work, we aim to generate more emotional dance motion easily by exploiting the linguistic information in lyrics. In this paper, we propose a system that generates sign dance motion from continuous sign language motion based on the lyrics of a song. This system could also help deaf people experience music as a visualized music application.
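The core idea of the pipeline can be illustrated with a minimal sketch: words extracted from the lyrics are looked up in a sign-language motion database, and the matched motion segments are concatenated in lyric order to form a continuous sequence. The function name, the toy database, and the clip identifiers below are all illustrative assumptions, not details from the paper.

```python
# Hypothetical lyrics-to-sign-motion mapping (illustrative only).
# SIGN_MOTION_DB stands in for a database of captured sign-language
# motion clips keyed by word.
SIGN_MOTION_DB = {
    "love": "clip_love",
    "sky": "clip_sky",
    "dance": "clip_dance",
}

def lyrics_to_sign_dance(lyrics: str) -> list:
    """Map each lyric word that has a known sign motion to its clip,
    keeping lyric order and skipping words with no database entry."""
    sequence = []
    for word in lyrics.lower().split():
        clip = SIGN_MOTION_DB.get(word.strip(",.!?"))
        if clip is not None:
            sequence.append(clip)
    return sequence

print(lyrics_to_sign_dance("Dance under the sky, love!"))
# ['clip_dance', 'clip_sky', 'clip_love']
```

A real system would additionally time-warp and blend the selected segments to match the rhythm of the music, as in the segment-based approaches the abstract cites.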
Original language: English
Publication status: Published - 24 Jul 2016
Event: SIGGRAPH 2016 - 43rd International Conference and Exhibition on Computer Graphics and Interactive Techniques - Anaheim, California
Duration: 24 Jul 2016 → …

