With current tools, it is very difficult to produce high-quality, believable 3D human faces. Creating an expressive character for a game or other computer media typically requires many hours of artists' time and an elaborate performance capture system.
The goal of this project is to explore statistical models as a method for performance capture and face synthesis. Similar algorithms have been used in the past for face detection and recognition. We propose to extend the idea by applying it to animated RGB+D data. This approach allows expression and facial feature data to be extracted in a form useful for building several kinds of artistic tools.
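As a minimal illustration of the statistical-model idea (in the spirit of eigenfaces), the sketch below builds a PCA basis over vectorized face data and projects a face into a compact coefficient vector. The data here is synthetic placeholder noise; in the project, each row would be a vectorized RGB+D frame.

```python
import numpy as np

# Placeholder data: 50 "face scans", each flattened to a 100-dim vector.
# (Synthetic noise stands in for real RGB+D frames.)
rng = np.random.default_rng(0)
faces = rng.normal(size=(50, 100))

# Center the data around the mean face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# PCA via SVD: the rows of Vt are the principal components ("eigenfaces").
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 10
eigenfaces = Vt[:k]  # top-k basis vectors, shape (k, 100)

# Project a face into the k-dim feature space and reconstruct it.
face = faces[0]
coeffs = eigenfaces @ (face - mean_face)        # compact feature vector
reconstruction = mean_face + eigenfaces.T @ coeffs
```

The coefficient vector `coeffs` is the kind of low-dimensional expression/feature representation the project aims to extract; for animated data, one such vector per frame yields a trajectory through the model's parameter space.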
Related Research
- Face Registration for RGB+D Cameras: Sofien Bouaziz and Mark Pauly (Notes, Slides)
- Tim Cootes' Active Appearance Models Overview
- Eigenfaces
- Realtime Performance-Based Facial Avatars
Other Related Links
- Portable Kinect: Structure
- Face Recognition Databases: http://www.face-rec.org/databases/
- Faceshift: http://www.faceshift.com/
Team
- Mark Sherlock, MSc student
- Faramarz Samavati, Research lead