Model Based Emotion Detection using Point Clouds
Problem Statement
 The goal is to use a model-based approach for facial emotion recognition of a driver in a real-time environment.
 The system should work on an embedded platform.
 The program is designed to exploit a pipelined architecture and parallel processing.
Why Model Based Approach?
 Illumination and pose variations are major concerns in facial emotion recognition; they can be overcome by a model-based approach.
State Of The Art
 Robert Niese, Ayoub Al-Hamadi, Axel Panning and Bernd Michaelis, “Emotion Recognition based on 2D-3D Facial Feature Extraction from Color Image Sequences”
 Narendra Patel, Mukesh Zaveri, “3D Facial Model Construction and Expression Synthesis using a Single Frontal Face Image”
State Of The Art
 Aitor Azcarate, Felix Hageloh, Koen van de Sande, Robert Valenti, “Automatic facial emotion recognition”
 Tie Yun, Ling Guan, “Human Emotion Recognition Using Real 3D Visual Features from Gabor Library”
Challenges
 Faces are non-rigid and vary widely in location, colour and pose, which makes expression-based emotion recognition complex.
 Occlusion, lighting distortions and changing illumination conditions alter the overall appearance of the face, further complicating emotion classification.
 Spontaneous emotion recognition.
 Background complexity: when there is more than one face in the image, the system should be able to distinguish which one is being tracked.
Emotion Recognition based on 2D-3D Facial
Feature Extraction from Color Image Sequences
Facial Feature Points in 2D
1. Detect the face
2. Define fiducial points
3. Detect eyes and mouth
The complete set of feature points is shown in the figure.
[Figure: Viola-Jones face detection framework — Overview | Integral Image | AdaBoost | Cascade]
Training phase: training set (sub-windows) → integral representation → feature computation → AdaBoost feature selection → cascade trainer.
Testing phase: the classifier cascade framework passes each sub-window through Strong Classifier 1 (cascade stage 1), Strong Classifier 2 (cascade stage 2), …, Strong Classifier N (cascade stage N); a window that survives all stages is reported as FACE IDENTIFIED.
Slide courtesy: Konstantina Palla, University of Edinburgh
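The "integral representation" step of this pipeline can be sketched in a few lines. This is a generic illustration of the Viola-Jones integral image, not code from the presentation:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, in O(1) via 4 lookups."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total
```

Constant-time rectangle sums are what make Haar-like feature computation cheap enough for a cascaded detector to scan every sub-window of an image.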
Camera Model
 A pin-hole camera model is used.
 Using the camera parameters, the transformation of 3D world points to image points is well defined.
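As a minimal sketch of the pin-hole transformation, assuming illustrative intrinsic parameters (fx, fy, cx, cy are placeholders, not the calibration used in the paper):

```python
def project(point_3d, fx, fy, cx, cy):
    """Project a 3D point (X, Y, Z) in camera coordinates to pixel
    coordinates (u, v) with the pin-hole model."""
    X, Y, Z = point_3d
    u = fx * X / Z + cx   # perspective division by depth Z
    v = fy * Y / Z + cy
    return (u, v)

# e.g. project((0.1, -0.05, 2.0), fx=800, fy=800, cx=320, cy=240)
# returns (360.0, 220.0)
```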
Geometric 3D model
 In a one-time initial registration step, the subject is captured in frontal pose with a neutral expression.
 The face is localized in the stereo point cloud using the observation that “surfaces are represented by more or less connected point clusters.”
 A similarity criterion h is used for clustering; it combines colour and the Euclidean distance of points.
 Surface reconstruction of the face cluster.
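A hedged sketch of a similarity criterion h combining colour difference and Euclidean distance — the weighting scheme below is an assumption; the slides do not reproduce the paper's exact formula:

```python
import math

def similarity(p, q, w_color=0.5, w_dist=0.5):
    """p, q: dicts with 'xyz' (3-tuple, metres) and 'rgb' (3-tuple, 0..1).
    Returns a non-negative dissimilarity; smaller means the two points
    are more likely to belong to the same cluster."""
    d_xyz = math.dist(p["xyz"], q["xyz"])   # Euclidean distance term
    d_rgb = math.dist(p["rgb"], q["rgb"])   # colour difference term
    return w_dist * d_xyz + w_color * d_rgb
```

A clustering pass would merge neighbouring points whose h falls below a threshold, which is how the face emerges as a connected point cluster.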
Estimation of face pose
 Correspondence between the model and the real world is established using fiducial points.
 According to the camera model, the image projection of each anchor point is determined.
 The goal of pose estimation is to minimise the error between the projected 3D anchor points and the image fiducial points.
 After the pose is determined, the image feature points are projected onto the surface model at its current pose.
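The objective being minimised can be sketched as a reprojection error; the intrinsics below are illustrative placeholders, and the slides do not say which optimiser the paper uses:

```python
def project(point_3d, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Pin-hole projection of a camera-frame 3D point to pixels."""
    X, Y, Z = point_3d
    return (fx * X / Z + cx, fy * Y / Z + cy)

def reprojection_error(anchor_points_3d, fiducials_2d):
    """Sum of squared pixel distances between the projected 3D anchor
    points (at the current pose) and the detected 2D fiducial points.
    Pose estimation searches for the pose that minimises this value."""
    err = 0.0
    for p3, (u_obs, v_obs) in zip(anchor_points_3d, fiducials_2d):
        u, v = project(p3)
        err += (u - u_obs) ** 2 + (v - v_obs) ** 2
    return err
```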
Feature Vector
 The feature vector consists of angles and distances
between a series of facial feature points in 3D.
 Feature vectors are normalized to increase classification robustness.
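A sketch of how such a vector might be assembled; the landmark names and the particular distances/angles are illustrative assumptions, since the slides do not list the paper's exact measurements:

```python
import math

def feature_vector(points):
    """points: dict landmark name -> (x, y, z). Returns an L2-normalised
    vector of a few pairwise distances plus one mouth-corner angle."""
    d = lambda a, b: math.dist(points[a], points[b])
    feats = [
        d("mouth_left", "mouth_right"),   # mouth width
        d("lip_top", "lip_bottom"),       # mouth openness
        d("brow_left", "eye_left"),       # eyebrow height
    ]
    # angle at lip_top subtended by the two mouth corners
    v1 = [a - b for a, b in zip(points["mouth_left"], points["lip_top"])]
    v2 = [a - b for a, b in zip(points["mouth_right"], points["lip_top"])]
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    feats.append(math.acos(dot / (n1 * n2)))
    # normalise so scale differences between subjects matter less
    norm = math.sqrt(sum(f * f for f in feats))
    return [f / norm for f in feats]
```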
Classification
 Classification is done using an artificial neural network (ANN).
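The slides do not give the network architecture, so as a hedged illustration here is the forward pass of a generic one-hidden-layer ANN classifier (the weights would come from training, which is omitted):

```python
import math

def ann_classify(x, W1, b1, W2, b2, labels):
    """One hidden layer with tanh activation; returns the emotion label
    with the highest output score."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    scores = [sum(w * hi for w, hi in zip(row, h)) + b
              for row, b in zip(W2, b2)]
    return labels[max(range(len(scores)), key=scores.__getitem__)]
```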
Cons
 Misclassification at transitions between facial expressions due to indistinct features.
 Performance is not optimised.
 The stereo-based initialization step can be inconvenient.
 Requires calibrated cameras.
References
 [1] H. D. Vankayalapati and K. Kyamakya, “Nonlinear Feature Extraction Approaches for Scalable Face Recognition Applications,” ISAST Transactions on Computers and Intelligent Systems, vol. 2, 2009.
 [2] Hang-Bong Kang, “Various Approaches for Driver and Driving Behavior Monitoring: A Review,” ICCV 2013 Workshops.
 [3] K. Sreenivasa Rao and Shashidhar G. Koolagudi, Emotion Recognition using Speech Features.
 [4] Yun Tie, “Human Emotional State Recognition Using 3D Facial Expression Features,” thesis, Ryerson University.
Thank You
Notes (see http://en.wikipedia.org/wiki/Perspective_%28graphical%29):
Camera calibration and the use of subject-specific surface model data reduce perspective foreshortening, as in the case of out-of-plane rotations.
Photogrammetry is the science, technology and art of obtaining reliable information about the Earth from non-contact imaging and other sensor systems.
Different Face Detection Techniques
 Two groups: holistic, where the face is treated as a whole unit, and analytic, where the co-occurrence of characteristic facial elements is studied.
Holistic face models:
 • Huang and Huang [7] used a Point Distribution Model (PDM), which represents the mean geometry of the human face. First, a Canny edge detector is applied to find two symmetrical vertical edges, which estimate the face position; then the PDM is fitted.
 • Pantic and Rothkrantz [8] proposed a system which processes images of frontal and profile face views. Vertical and horizontal histogram analysis is used to find face boundaries. Then, the face contour is obtained by thresholding the image with HSV color space values.
Analytic face models:
 • Kobayashi and Hara [9] used images captured in monochrome mode to find the face brightness distribution. The position of the face is estimated by iris localization.
 • Kimura and Yachida's [10] technique processes the input image with an integral projection algorithm to find the positions of the eye and mouth corners from color and edge information. The face is represented with a Potential Net model, which is fitted to the positions of the eyes and mouth.
All of the above-mentioned systems were designed to process facial images; however, they are not able to detect whether a face is present in the image. Systems which handle arbitrary images are listed below:
 • Essa and Pentland [11] created a “face space” by performing Principal Component Analysis on eigenfaces from 128 face images. A face is detected in the image if its distance from the face space is acceptable.
 • Rowley et al. [12] proposed neural-network-based face detection. The input image is scanned with a window, and a neural network decides whether a particular window contains a face or not.
 • Viola and Jones [13] introduced a very efficient algorithm for object detection using Haar-like features as the object representation and AdaBoost as the machine learning method. This algorithm is widely used in face detection.
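The "distance from face space" idea of Essa and Pentland [11] can be sketched as PCA reconstruction error; the data here is a toy stand-in for real aligned face images:

```python
import numpy as np

def build_face_space(faces, k):
    """faces: (n_samples, n_pixels) array of flattened face images.
    Returns the mean face and the top-k eigenfaces (rows of vt)."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]

def distance_from_face_space(x, mean, eigenfaces):
    """Reconstruction error of x within the face space; a small value
    means x looks face-like."""
    coeffs = eigenfaces @ (x - mean)        # project onto the eigenfaces
    recon = mean + eigenfaces.T @ coeffs    # reconstruct from k coefficients
    return float(np.linalg.norm(x - recon))
```

A detector would accept a window as a face when this distance falls below a threshold tuned on training data.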
 Three Dimensional Techniques
 Three-dimensional models inherently provide more information than 2D models due to the presence of depth information, and are more robust. However, many 3D model extraction solutions suffer from expensive computational complexity, or from oversimplified models that do not accurately represent the object.
 The acquisition of 3D data can also produce image artifacts that affect the rendered model [25]. The camera can receive light at intensities that saturate the detector, or light levels too low to produce high-quality images; this can occur in areas of specular reflection in stereo systems. Stereo-based systems also struggle to obtain truly dense sampling of the face surface, and sample sparsely in regions with too little natural texture (too smooth), leading to the exclusion of certain features. Multimodal analysis with 3D and 2D data may provide better data for classification (face recognition) than single modalities, but compared to multiple 2D images (without 3D rendering) it does not show significant improvement, leading to a possible optimization problem in determining the best way to use the acquired data [25].
 Chaumont et al. [26] break this problem into two steps: first an estimation of the 3D model, followed by model refinement. In the estimation step, a CANDIDE wireframe model (a 3D wireframe of an average face) is projected from 3D into 2D space under the assumption that all feature points are coplanar. This approximation is realistic because the differences in depth between features are very small compared to the distance to the camera. Making this assumption reduces the task to projecting a 2D image onto a 2D plane, which is a much easier problem to solve. Also, since few 2D-3D correspondence points are available, the matrix is very sparse and can be solved very quickly. After this approximation is determined, the wireframe is refined by perturbing the 3D points separately to match the 2D points. This is a fast method for face tracking and 3D face model extraction; it can predict feature positions under rotations and translations and recover the model in the presence of occlusion, because 3D information is known about the object.
 Soyel et al. [27] used 3D distance vectors between feature points to obtain 3D FAPs measuring quantities like openness of the eyes, height of the eyebrows, openness of the mouth, etc., producing distance vectors for test and training data for different expressions. They use only 23 facial features associated with the selected measurements and classify with a neural network. Tang et al. [28] use the same approach, but run an algorithm on the set of distances between the 83 points to determine the measurements that contain the most variation and are the most discriminatory, allowing better recognition than empirically determined measurements.
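A sketch of distance-vector features in the spirit of Soyel et al. [27]; the landmark names and the choice of distances are illustrative assumptions, not the paper's 23 features:

```python
import math

def distance_features(lm):
    """lm: dict landmark name -> (x, y, z) in 3D. Returns a few of the
    kinds of distances used as expression features."""
    d = lambda a, b: math.dist(lm[a], lm[b])
    return {
        "eye_openness": d("eye_top", "eye_bottom"),
        "eyebrow_height": d("brow", "eye_top"),
        "mouth_openness": d("lip_top", "lip_bottom"),
        "mouth_width": d("mouth_left", "mouth_right"),
    }
```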
 Shape information is located in geometric features like ridges, ravines, peaks, pits, saddles, etc. local surface fitting is done, by centering the
coordinate system at the vertex of interest (for ease of computation) . The patch can expressed in local coordinates and a cubic approximation (x^3,
x^2y, xy^2, etc) can used to fit the surface locally, yielding two principle vectors that describe the maximum and minimum curvature at that point,
and two corresponding eigenvalues. Along with the normal direction at that point, the surface properties can be classified into labels (flat, peak,
ridge, etc) and a Primitive Surface Feature Distribution (PSFD) [29] can be generated as feature.
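Local surface fitting can be sketched as follows; this uses a quadratic patch (a simplification of the cubic fit described above) and takes the principal curvatures from the eigenvalues of the Hessian at the origin:

```python
import numpy as np

def principal_curvatures(pts):
    """pts: (n, 3) array of neighbour points in local coordinates (vertex
    at the origin, normal along z). Fits z ~ a*x^2 + b*x*y + c*y^2 by
    least squares and returns the sorted eigenvalues of the Hessian
    [[2a, b], [b, 2c]] — the minimum and maximum curvature."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([x * x, x * y, y * y])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return np.linalg.eigvalsh(np.array([[2 * a, b], [b, 2 * c]]))
```

Thresholding the two curvatures (both ~0 → flat, both positive → peak, one ~0 and one positive → ridge, opposite signs → saddle) yields the surface labels that feed the PSFD.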
 Other methods fit surface models onto point clouds of 3D sensor data. Mpiperis et al. [30, 31] used a neutral face with an average identity and deformed it to the appropriate expression/identity. A triangular 3D mesh is placed on the face and subdivided into sub-triangles to increase the density. First, a set of landmarks is associated with vertices on the mesh, which remain unchanged during the fitting process. Fitting is posed as an energy minimization problem consisting of terms describing opposing forces between the landmarks and mesh points, the distance between the surface and the mesh, and a smoothness constraint; it is solved by setting partial derivatives to 0 and using SVD. Asymmetric bilinear models are used for facial expression recognition, modelling identity in one dimension and expression in another. 3D facial shapes obtained by finding the difference between neutral and expressive faces in 3D can also be used to classify facial expressions [32].
 Venkatesh et al. employed principal component analysis on 3D mesh datasets to classify facial expressions [10]. PCA is a popular mathematical technique that allows the dimensionality of the problem to be reduced, making it easier to solve. For the training set, 68 feature points, which are known to effectively represent facial expressions, were manually selected around the eyes, mouth and eyebrows. PCA is performed on the x, y and z locations of these feature points to determine eigenvalues that can be used to find matrix projections for a given matrix A. The method automatically extracts features after they are divided into bounding boxes using anthropomorphic properties. It achieves automatic selection of points; however, it is very computationally expensive.
