Face Identification Under Deformations


CALL: 2017

DOMAIN: IS - Information and Communication Technologies

HOST INSTITUTION: University of Luxembourg

KEYWORDS: Face recognition; RGB-D cameras; facial expressions; deformations; dynamic context; intrinsic deep learning; robust system; access control

START: 2018-05-01

END: 2021-04-30

WEBSITE: https://www.uni.lu

Submitted Abstract

Automatic recognition of faces is a non-intrusive technology that faces two main challenges: first, the large dynamics in the appearance of a face (pose, expression, occlusion), and second, the limitations of the acquisition system (sensor noise, resolution, illumination). Both aspects make face recognition a highly non-linear problem whose complexity can scale up quickly. Impressive face recognition software exists today and performs well; what it lacks is the dynamic aspect. Accurate face recognition technology can open the door to many innovative applications and revolutionize how humans interact with infrastructures and services. This revolution is only possible if users are allowed to move freely and their faces to express natural emotions. Being stoic and constrained to keep a straight face should no longer be a condition for a well-performing face recognition system. IDform proposes to robustly identify people from their faces in fully dynamic conditions. The idea is to build on the success of today's best-performing face recognition systems, which use deep learning; however, instead of chasing ever-larger datasets, the strategy is to use efficient facial models that can provide stable statistical information. The plan is to produce a robust dynamic facial recognition API using RGB-D cameras, to be integrated into Artec3D's products and commercialized as an enabling tool for smarter machines. Given this new technology, the spectrum of applications is vast, and a significant socio-economic impact is expected not only in Luxembourg but also in the international community.
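The abstract's mention of RGB-D cameras can be illustrated with a minimal preprocessing sketch: fusing a color frame and its registered depth map into a single 4-channel tensor, the kind of input a depth-aware face recognition model might consume. The function name `fuse_rgbd` and the min-max depth normalization are illustrative assumptions, not part of the IDform design.

```python
import numpy as np

def fuse_rgbd(rgb, depth):
    """Stack an RGB image (H, W, 3) and a depth map (H, W) into one
    4-channel float tensor, with color scaled and depth normalized to [0, 1].

    Hypothetical helper for illustration only; a real pipeline would also
    align (register) the depth map to the color frame.
    """
    d = depth.astype(np.float32)
    # Min-max normalize depth; the epsilon guards against a flat depth map.
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)
    color = rgb.astype(np.float32) / 255.0
    return np.concatenate([color, d[..., None]], axis=-1)

# Toy inputs standing in for one camera frame.
rgb = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
depth = np.random.rand(64, 64).astype(np.float32)

rgbd = fuse_rgbd(rgb, depth)
print(rgbd.shape)  # (64, 64, 4)
```

The depth channel gives the model geometric cues (the 3D shape of the face) that stay comparatively stable under expression changes and illumination shifts, which is one motivation for preferring RGB-D sensors over plain color cameras in this setting.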
