WHAT HAPPENS IN FACE DURING A FACIAL EXPRESSION?
USING DATA MINING TECHNIQUES TO ANALYZE FACIAL
EXPRESSION MOTION VECTORS
A SHORT ELICITATION OF THE PAPER (MY VERSION) – LINK
-Arshia Sali
INTRODUCTION
• Facial expression is one of the fastest and most efficient channels of communication; it often prevails over words in human communication.
• Although it is easy for humans to recognize facial expressions, automatic recognition remains difficult for machines.
• Human-computer interaction is probably the most important application of automatic facial expression recognition.
• Other fields such as data-driven animation, psychology research, medicine, security, education and distance learning, customer satisfaction, and video conferencing can also use the results of this research.
• The results can even be used in the (re)production of artificial emotions.
• Machines can analyze the changes that occur in the face while a facial expression is presented.
• The ideal automatic facial expression recognition system must be completely automatic, person-independent, and robust to any environmental condition.
• Achieving this requires a three-stage process:
• Face Detection
• Facial Feature Extraction
• Facial Expression Classification
• This research focused only on facial feature extraction and facial expression classification.
• Assumption: Face images are already available in suitable conditions.
DATA MINING TECHNIQUES
• Data mining is the process of exploring and extracting knowledge from data. It involves learning in a practical sense.
• Here, learning means that these techniques learn from the changes appearing in the data in a way that improves their future performance. Learning is thus tied to performance enhancement.
• Based on this learning process, the learning techniques can be employed to map data into a decision model that produces predictions for new data. This decision model is called a classifier.
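As a concrete illustration, the minimal sketch below fits a classifier on placeholder data and applies it to new samples. The data and the decision-tree choice are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of the "decision model" (classifier) idea using
# scikit-learn; the data and the decision-tree choice are illustrative
# assumptions, not taken from the paper.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 6))        # 100 samples, 6 placeholder features
y_train = rng.integers(0, 2, size=100)     # two hypothetical classes

clf = DecisionTreeClassifier().fit(X_train, y_train)  # learn the decision model
X_new = rng.normal(size=(5, 6))
print(clf.predict(X_new))                  # predicted outputs for new data
```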
I. MOTION VECTOR EXTRACTION
(DATA COLLECTION PHASE)
• A facial expression causes temporary shifts of facial features as a result of facial muscle movements.
• Motion vectors describing this facial deformation were extracted from image sequences of facial expressions.
• In this research, an optical flow algorithm was used to extract the motion vectors.
• Optical flow has some weaknesses. For example, luminance must not change while the image sequence is captured; otherwise the algorithm cannot extract motion vectors correctly. However, because the facial changes caused by an expression occur in a very short time, luminance changes are uncommon in practice, so this weakness is not a critical problem here.
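The sketch below computes a dense optical flow field between the first and last frames of a sequence. The paper uses the Gautama-Van Hulle method (introduced later); OpenCV's Farneback algorithm is used here only as a readily available stand-in, and the file names are hypothetical.

```python
# Hedged sketch: dense optical flow between the neutral frame and the
# apex frame. Farneback is a stand-in for the paper's Gautama-Van Hulle
# method; the file names are hypothetical.
import cv2

prev = cv2.imread("frame_neutral.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_apex.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)
# flow[y, x] = (dx, dy): the motion vector at every pixel
magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean displacement:", magnitude.mean())
```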
METHODOLOGY
• The method is based on tracking points across multiple images.
• In this research, sequences of at most eight images were used for each test: image sequences of faces in frontal view displaying various facial expressions of emotion.
Preprocessing
• The images were converted to grayscale, the face was segmented within a rectangular bounding box, and the extra parts of the image were cut away. The cropped face was resized to about 280 × 330 pixels.
• Assumption: facial expression sequences always started with a neutral expression and ended at the apex of the expression. The faces were without facial hair or glasses, and no rigid head movement was acceptable for the method to work properly.
Figure: (a) the image sequence of disgust; (b) the image sequence of happiness.
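One plausible implementation of the preprocessing step is sketched below. The paper specifies only the grayscale conversion, the rectangular face crop, and the ~280 × 330 output size; the Haar-cascade detector and the file names are assumptions.

```python
# Hedged preprocessing sketch: grayscale, rectangular face crop,
# resize to 280 x 330. Detector choice and file names are assumptions.
import cv2

img = cv2.imread("subject_frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)            # convert to grayscale

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
x, y, w, h = cascade.detectMultiScale(gray, 1.1, 5)[0]  # first detected face
face = gray[y:y + h, x:x + w]                           # cut the extra parts

face = cv2.resize(face, (280, 330))                     # (width, height) in pixels
cv2.imwrite("face_preprocessed.png", face)
```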
• However, in a few cases the subject shows rigid head movement while displaying an expression. This disturbs the extracted motion vectors and misleads the classification algorithms, so these motion vectors were eliminated.
• Various modifications of the optical flow algorithm exist. To minimize the effects of luminance variation and inaccuracies in facial point tracking, the Gautama-Van Hulle optical flow method was used.
• This method is claimed to be less sensitive to luminance variation and highly efficient at motion vector extraction.
Two examples of applying the optical flow algorithm to image sequences of facial expressions are presented.
Figure 1: Disgust motion vectors extracted from a disgust image sequence.
Figure 2: Happiness motion vectors extracted from a happiness image sequence.
FACE SEGMENTATION
• To classify the extracted motion vectors into the six basic emotions, the face was divided into six parts, as shown in Figure (a).
• First, the positions of the pupils and the mouth must be located.
• Their initial locations can be detected in the first frame of the input face image sequence. As shown in Figure (b), axis 1 connects the two pupils.
• Axis 3 is perpendicular to axis 1 and divides it into two equal parts.
• Axis 2 marks the mouth position. It is not necessary to specify the axis locations precisely; approximate locations are enough.
Figure: Face segmentation into six areas, shown in (a) a scheme and (b) a real face image.
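The sketch below derives the three axes from approximate pupil and mouth positions and builds six regions from them. The coordinates are hypothetical, and reading the six areas as a 2 × 3 grid (left/right of axis 3; above, between, and below axes 1 and 2) is an assumption consistent with the description above.

```python
# Sketch of the six-area split from the three axes. Coordinates are
# rough guesses; the paper notes approximate axis locations suffice.
left_pupil, right_pupil = (90, 120), (190, 120)   # (x, y), hypothetical
mouth_y = 250
W, H = 280, 330                                    # preprocessed image size

axis1_y = (left_pupil[1] + right_pupil[1]) // 2    # axis 1: line through the pupils
axis3_x = (left_pupil[0] + right_pupil[0]) // 2    # axis 3: perpendicular bisector
axis2_y = mouth_y                                  # axis 2: mouth line

rows = [(0, axis1_y), (axis1_y, axis2_y), (axis2_y, H)]
cols = [(0, axis3_x), (axis3_x, W)]
areas = [(y0, y1, x0, x1) for y0, y1 in rows for x0, x1 in cols]
print(len(areas), "areas:", areas)                 # six (y0, y1, x0, x1) boxes
```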
Figure 5 shows Bassili's facial expression deformations.
Figure 6 displays other types of deformations extracted experimentally from facial expression image sequences in the CK+ dataset.
• It is clear from these figures that the most important changes happen above the eyes, around the eyebrows, and around the mouth.
• These areas are therefore divided into smaller sections, as shown in Figure 7(a).
• Dividing these areas into smaller sections makes it possible to analyze the deformations in each section more precisely.
• Figure 7(b) shows nine different directions in which the segmentations are made.
• As the face is symmetric, the number and size of segments are the same in directions X1 and X4, X2 and X5, and X3 and X6. The segmentations can differ in width, length, and number.
• In each segment, the ratio of the number of vectors and their average lengths in the x and y directions are extracted and used for mining (see the sketch after Figure 7).
• So, for each image sequence, 3n features are extracted, where n is the number of small segments. For example, in Figure 7(a) the number of small segments is 120, so 360 features are extracted and used for mining. Using these 360 features, the system analyzes the face deformation and detects the facial expression.
Figure 7: (a) Face areas with the most important deformation during a facial expression, divided into smaller sections. (b) Nine different directions in which the segmentations are made.
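The sketch below computes the three per-segment features described above: the share of motion vectors falling inside a segment, and their mean lengths in the x and y directions. The dense `flow` field comes from the optical flow step; the moving-vector threshold is an assumption, since the exact formulas are given by Equations 1-3 of the paper.

```python
# Hedged sketch of the three per-segment features (vector-count ratio,
# mean |dx|, mean |dy|); min_mag is an assumed threshold, and the exact
# definitions follow Equations 1-3 of the paper.
import numpy as np

def segment_features(flow, y0, y1, x0, x1, min_mag=0.5):
    mag = np.hypot(flow[..., 0], flow[..., 1])
    moving = mag > min_mag                         # pixels with a real motion vector
    seg = moving[y0:y1, x0:x1]
    ratio = seg.sum() / max(moving.sum(), 1)       # feature 1: vector-count ratio
    dx = flow[y0:y1, x0:x1, 0][seg]
    dy = flow[y0:y1, x0:x1, 1][seg]
    mean_x = np.abs(dx).mean() if seg.any() else np.nan   # feature 2: mean |dx|
    mean_y = np.abs(dy).mean() if seg.any() else np.nan   # feature 3: mean |dy|
    return ratio, mean_x, mean_y                   # 3 features per segment -> 3n total

# e.g.: features = [f for s in segments for f in segment_features(flow, *s)]
```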
FACIAL EXPRESSION CLASSIFICATION
(DATA ANALYSIS STAGE)
Data Cleaning - Handling Missing Values and Outliers
• Outliers are replaced with the nearest value that would not be considered an outlier.
• For example, if an outlier is defined as anything more than three standard deviations above or below the mean, all outliers are replaced with the highest or lowest value within this range.
• Extreme values are discarded.
• Missing values (sections in which there was no motion vector) are replaced by zero.
• Then, 10-fold cross-validation was applied 50 times to these data.
• In this research, the C5.0, CRT, QUEST, CHAID, Neural Network, Deep Learning, SVM, and Discriminant algorithms were used to classify the extracted features, i.e., to extract knowledge from a dataset of motion vector features. The motion vectors formed feature vectors whose features were calculated according to Equations 1, 2 and 3.
As illustrated in Figure 8, each feature vector was composed of the feature values calculated by Equations 1, 2 and 3. The ratio of the number of vectors and the mean length of the motion vectors in each created section were used as characteristic features.
The location and size of the deformation often differ from one image sequence to another; they even differ between occurrences in the same person. However, within a specific facial expression these differences are small.
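The sketch below applies the cleaning rules and the repeated cross-validation to synthetic placeholder data (200 sequences, 360 features as in the Figure 7(a) example, six emotion labels). The SVM stands in for any of the listed classifiers.

```python
# Sketch of the cleaning rules (3-sigma clipping, missing -> 0) and the
# 50 x 10-fold cross-validation, on synthetic placeholder data.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 360))
X[rng.random(X.shape) < 0.05] = np.nan          # segments with no motion vector
y = rng.integers(0, 6, size=200)                # six basic emotions

mu, sigma = np.nanmean(X, axis=0), np.nanstd(X, axis=0)
X = np.clip(X, mu - 3 * sigma, mu + 3 * sigma)  # pull outliers to the 3-sigma boundary
X = np.nan_to_num(X, nan=0.0)                   # missing values -> zero

scores = []
for seed in range(50):                          # 10-fold cross-validation, 50 times
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    scores.extend(cross_val_score(SVC(), X, y, cv=cv))
print("mean accuracy:", np.mean(scores))
```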
PSEUDO-CODE OF PROPOSED ALGORITHM
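The pseudo-code itself appears as a figure in the original slides. The following Python-style sketch reconstructs the pipeline from the steps described in the preceding sections; every helper name is a placeholder, not the paper's actual code.

```python
# Python-style sketch of the proposed pipeline; every helper here is a
# placeholder for a step described in the earlier sections.
def classify_expression(image_sequence, classifier):
    frames = [preprocess(f) for f in image_sequence]   # grayscale, crop, 280 x 330
    flow = optical_flow(frames[0], frames[-1])         # neutral -> apex motion vectors
    segments = segment_face(frames[0])                 # six areas split into n segments
    features = []
    for seg in segments:
        features.extend(segment_features(flow, *seg))  # 3 features per segment
    features = clean(features)                         # clip outliers, fill missing with 0
    return classifier.predict([features])[0]           # one of the six basic emotions
```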
EXPERIMENTAL RESULTS
• Feature vectors were extracted from the motion vectors in the image sequences.
• For each image sequence, between 18 and 631 features were used.
• For each facial expression, 10-fold cross-validation was applied 50 times.
• To evaluate and compare the performance of the different algorithms, each was tested on 25 situations (different widths, heights, and numbers of segments in the different directions).
• To determine how many segments were most suitable and what size was best for them, these situations were compared.
• DL in situation 21, and SVM and C5.0 in situation 19, showed the best performance, with accuracy rates of 95.3%, 92.8% and 90.2%, respectively.
• The confusion matrices for these algorithms were examined after ignoring confusion between subtypes of the same facial expression; for example, happiness type 1 and happiness type 2 are both happiness, so it does not matter which type is recognized. The overall averages kept the same ranking.
MISIDENTIFICATIONS
• Most of the misidentifications produced by these methods arose from confusion between similar motion vector locations and directions.
• Only the motion vector directions in areas two and three could distinguish anger type 1 from disgust, so the two were misidentified when the motion vector directions in these areas were not recognized precisely.
• Other frequent misclassifications occurred between fear and happiness. Since both types of happiness and both types of fear produce the same motion vectors in the lower part of the face, fear was classified as happiness in about five percent of cases.
• The most important misclassifications are summarized in the table, sorted by misclassification rate.
ANALYSIS OF BEST ALGORITHM AND BEST SEGMENTATION
• What must be considered is which algorithm can classify the extracted motion vectors
• with higher accuracy
• with fewer features.
• The three best situations for each algorithm are shown in the table. For the CRT, SVM and C5.0 algorithms, situation 19 gave the best result.
• It ranked second for DL. Moreover, situation 19 had the best overall accuracy among all situations.
• In situation 19, the face is divided into four sections in each area: two subsections in each direction, each with an equal width and height of 30 pixels. This was therefore the best segmentation for facial expression recognition.
CONCLUSION
• In this research, some of the best-known classification algorithms were applied to changes in the positions of facial points tracked in image sequences of a frontal view of the face. The best methods were DL, SVM and C5.0, with accuracy rates of 95.3%, 92.8% and 90.2%, respectively.
• The most distinguishing changes were found to be deformations in the Y direction in the upper and lower areas of the face. In addition, further facial changes during expressions were investigated, showing that six more changes can be identified beyond the six basic changes that occur in the face during a facial expression.
• This paper not only provided a basic understanding of how facial points can change during a facial expression, but also attempted to classify these deformations.
• Future work aims at automatic facial expression recognition while the head makes rigid motions. The method should also be invariant to occlusions such as glasses and facial hair, and it should perform well regardless of changes in illumination intensity while the image sequence is captured.
MY VIEWS AND LEARNINGS
• Automating the analysis of facial changes, especially from a frontal view, is important for advancing studies on automatic facial expression recognition and for designing human-machine interfaces.
• Facial expression analysis would boost applications in areas such as security, medicine, animation, and education.
• Different changes in parts of the face were analyzed to address what exactly happens in the face when a person shows an emotion. Some of the changes occurring in the face during facial expressions were introduced in this research for the first time.
• I was introduced to several state-of-the-art classification algorithms, such as C5.0, CRT, QUEST, CHAID, Deep Learning (DL), SVM, and Discriminant algorithms, which were used to classify the extracted motion vectors.
