Deep neural network based multi-resolution face detection for smart cities

Gary Storey, Richard Jiang, Ahmed Bouridane, Ranjith Dinakaran, Chang-Tsun Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Face detection from unconstrained “in the wild” images, such as those obtained from CCTV and other image capture devices used within urban environments, can provide a rich source of information about citizens, benefiting tasks such as pedestrian counting and biometric security. In recent years Deep Convolutional Neural Networks have revolutionized the state of the art for face detection; however, for utilization within smart cities through leveraging existing CCTV networks, challenges still exist, such as the scale and resolution of the faces within an image. We present a single multi-resolution deep neural network, trained on publicly available image databases, that splits the face detection task into small and large face detection at the feature level. We show that our proposed network outperforms single-task Faster R-CNN face detection architectures across three challenging test sets (AFW, AFLW and Wider Face).
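To illustrate the idea described in the abstract, the sketch below shows a shared convolutional backbone whose feature maps feed two parallel detection branches, one tuned to small faces (early, high-resolution features) and one to large faces (deeper, down-sampled features). This is a minimal, hypothetical illustration only; layer sizes, anchor counts, and names are assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class MultiResolutionFaceDetector(nn.Module):
    """Toy sketch: split face detection into small- and large-face branches at the feature level."""

    def __init__(self, num_anchors: int = 3):
        super().__init__()
        # Shared backbone, stage 1: high-resolution features (for small faces).
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Shared backbone, stage 2: down-sampled features (for large faces).
        self.stage2 = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Each head predicts, per anchor, an objectness score (1) plus a box offset (4).
        self.small_face_head = nn.Conv2d(64, num_anchors * 5, 1)
        self.large_face_head = nn.Conv2d(128, num_anchors * 5, 1)

    def forward(self, images: torch.Tensor):
        f_hi = self.stage1(images)   # high-resolution feature map
        f_lo = self.stage2(f_hi)     # lower-resolution feature map
        return self.small_face_head(f_hi), self.large_face_head(f_lo)


if __name__ == "__main__":
    model = MultiResolutionFaceDetector()
    small_out, large_out = model(torch.randn(1, 3, 256, 256))
    print(small_out.shape, large_out.shape)  # detection maps at two resolutions
```

In this sketch both branches are trained jointly on top of shared features, so a single forward pass produces detections at both scales, which is the general multi-resolution principle the paper targets.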
Original language: English
Title of host publication: International Conference on Information Society and Smart Cities 2018
Number of pages: 7
Publication status: Published - 27 Jun 2018
Event: International Conference on Information Society and Smart Cities 2018 - Cambridge University, Cambridge, United Kingdom
Duration: 27 Jun 2018 - 28 Jun 2018
Conference number: 1

Conference

Conference: International Conference on Information Society and Smart Cities 2018
Abbreviated title: ISC
Country/Territory: United Kingdom
City: Cambridge
Period: 27/06/18 - 28/06/18

Keywords

  • Face Detection
  • Urban Computing
  • Biometric-as-a-service
  • Deep Neural Networks
