Prior-less 3D Human Shape Reconstruction with an Earth Mover’s Distance Informed CNN

Jingtian Zhang, Hubert P. H. Shum, Kevin McCay, Edmond S. L. Ho

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Citation (Scopus)
32 Downloads (Pure)

Abstract

We propose a novel end-to-end deep learning framework capable of reconstructing the 3D human shape from a single 2D image without the need for a prior 3D parametric model. We employ a “prior-less” representation of the human shape using unordered point clouds. Owing to this lack of prior information, comparing the generated and ground-truth point clouds to evaluate the reconstruction error is challenging. We solve this problem by proposing an Earth Mover’s Distance (EMD) function that finds the optimal mapping between point clouds. Our experimental results show that we can obtain a visually accurate estimation of the 3D human shape from a single 2D image, with some inaccuracy for heavily occluded parts.
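The abstract does not include an implementation, but the EMD comparison it describes amounts to finding an optimal one-to-one matching between two unordered point sets of equal size. The sketch below is an illustrative, non-differentiable version of that idea using SciPy's Hungarian solver; it is not the authors' code, and a CNN trained with this loss would in practice require a differentiable EMD approximation. The function name emd_point_clouds and the cloud size are assumptions made here for demonstration only.

    # Illustrative sketch (not the paper's implementation): exact EMD between two
    # equally sized, unordered 3D point clouds via an optimal one-to-one assignment.
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def emd_point_clouds(pred: np.ndarray, gt: np.ndarray) -> float:
        """Average matched point-to-point distance between pred and gt (both N x 3)."""
        cost = cdist(pred, gt)                    # pairwise Euclidean distances
        rows, cols = linear_sum_assignment(cost)  # optimal bijection minimising total cost
        return float(cost[rows, cols].mean())

    # Toy usage with two random 1024-point clouds (sizes chosen arbitrarily here).
    rng = np.random.default_rng(0)
    pred = rng.normal(size=(1024, 3))
    gt = rng.normal(size=(1024, 3))
    print(emd_point_clouds(pred, gt))

Because the assignment is re-solved for every pair of clouds, this exact formulation scales poorly with point count, which is why training pipelines typically substitute an approximate, differentiable EMD.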
Original language: English
Title of host publication: Proceedings - MIG 2019: ACM Conference on Motion, Interaction, and Games
Subtitle of host publication: Newcastle upon Tyne, England, October 28-30, 2019
Editors: Hubert P. H. Shum, Edmond S. L. Ho, Marie-Paule Cani, Tiberiu Popa, Daniel Holden, He Wang
Place of Publication: New York
Publisher: ACM
Pages: 1-2
Number of pages: 2
ISBN (Electronic): 9781450369947
DOIs
Publication status: Published - 28 Oct 2019
Event: MIG 2019: 12th annual ACM/SIGGRAPH conference on Motion, Interaction and Games - Northumbria University, Newcastle upon Tyne, United Kingdom
Duration: 28 Oct 2019 - 30 Oct 2019
http://www.mig2019.website/index.html

Conference

Conference: MIG 2019
Country/Territory: United Kingdom
City: Newcastle upon Tyne
Period: 28/10/19 - 30/10/19
Internet address: http://www.mig2019.website/index.html

Keywords

  • Human Surface Reconstruction
  • Deep Learning
  • CNN
  • Earth Mover’s Distance
