Pose-invariant face recognition with multitask cascade networks

Omar Elharrouss*, Noor Almaadeed, Somaya Al-Maadeed, Fouad Khelifi

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In this work, a face recognition method is proposed for faces under pose variations using a multi-task convolutional neural network (CNN). A pose estimation module and a face identification module are combined in a cascaded structure, and each can also be used separately. In the presence of varying facial poses and low illumination, datasets that separate face poses can enhance the robustness of face recognition. The proposed method first relies on a pose estimation module, a CNN model trained on three categories of face capture: left, frontal, and right. Second, three CNN models are used for face identification according to the estimated pose: the Left-CNN, Front-CNN, and Right-CNN models identify the face for the left, frontal, and right poses, respectively. Because face images may contain irrelevant information (e.g., background content), we also propose a skin-based face segmentation method using structure decomposition and the Color Invariant Descriptor. The proposed cascade-based face recognition system, which consists of the aforementioned steps (i.e., pose estimation, face segmentation, and face identification), is evaluated on four different datasets and shown to outperform related state-of-the-art techniques. Results reveal the contributions of the separate pose representation, skin segmentation, and pose estimation to recognition robustness.
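The cascade described in the abstract can be sketched as a two-stage dispatch: a pose estimator routes each face image to one of three pose-specific identification models. The sketch below is illustrative only; the stub functions, names, and the dict-based "image" stand in for the trained CNN models the paper uses for each stage.

```python
# Hypothetical sketch of the cascade: stage 1 estimates the pose,
# stage 2 runs the identification model matching that pose.
# All names and stub models are illustrative, not the paper's code.

POSES = ("left", "frontal", "right")

def estimate_pose(image):
    """Stub pose estimator: a real system would run the pose-estimation
    CNN here. We pretend the image dict carries its own pose label."""
    return image["pose"]

def make_identifier(pose):
    """Stub per-pose identifier (stands in for the Left-/Front-/Right-CNN)."""
    def identify(image):
        # A real model would return the identity of the segmented face.
        return f"{pose}-model:{image['subject']}"
    return identify

# One identification model per pose category, as in the cascade.
identifiers = {pose: make_identifier(pose) for pose in POSES}

def recognize(image):
    pose = estimate_pose(image)       # stage 1: pose estimation
    return identifiers[pose](image)   # stage 2: pose-specific identification
```

The design point is that each identifier only ever sees faces of its own pose category, which is what lets the pose-specific models specialize.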
Original language: English
Journal: Neural Computing and Applications
Publication status: Accepted/In press - 27 Oct 2021
