International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 06 Issue: 04 | Apr 2019 www.irjet.net p-ISSN: 2395-0072
Emotion Recognition using Facial Expression
Aman Agrawal1, Anubhav Bhardwaj2, Prof. Sanjeev Kumar3
1,2,3Computer Science and Engineering, ABES Institute of Technology, Ghaziabad, India
---------------------------------------------------------------------***----------------------------------------------------------------------
Abstract - Facial expressions are the most immediate mode of communication and interaction between individuals. Although facial emotion recognition can be performed with many kinds of sensors, this work focuses exclusively on facial images, because visual expression is one of the main information channels in interpersonal communication. Automatic emotion detection from facial expressions is now a major area of interest in various fields. People often do not disclose their feelings or mental views verbally at a particular period of time, and facial expressions can reveal them. This paper introduces the recognition of human emotion from the facial expression of a particular human being at a particular instant of time.
Key Words: Human Emotions, Facial Expressions, Inception v3 model, TensorFlow, CK+ dataset.
1. INTRODUCTION
Human facial expressions play a vital role in day-to-day communication between individuals. By recognising the facial expression of a particular individual, it becomes easier to determine the basic human emotions such as anger, fear, disgust, sadness, happiness and surprise. These expressions can vary from individual to individual. Facial expressions are produced by the movement of different parts of the face under different conditions. Early assessments of a person's mental state were based mainly on classical face-analysis pipelines that first detect the face and then distinguish its expression. In general, face detection is based on Haar features with AdaBoost or on neural networks, while SVM, LBP and Gabor features are used to distinguish and recognise the expression itself. For example, Kobayashi et al. performed early expression recognition with a neural network, and Caifeng Shan et al. used an SVM classifier trained on LBP features for expression recognition.
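A minimal sketch of such an LBP-plus-SVM pipeline is given below. It is not the implementation used by Shan et al.; the LBP radius, number of sampling points and SVM settings are illustrative assumptions.

```python
# Hedged sketch of a classical LBP + SVM expression classifier; parameters
# (radius, number of points, kernel) are illustrative assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_img: np.ndarray, p: int = 8, r: int = 1) -> np.ndarray:
    """Uniform LBP histogram for one grayscale face image."""
    lbp = local_binary_pattern(gray_img, P=p, R=r, method="uniform")
    # Uniform LBP with P points has P + 2 distinct codes.
    hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
    return hist

def train_lbp_svm(images, labels) -> SVC:
    features = np.stack([lbp_histogram(img) for img in images])
    clf = SVC(kernel="rbf", C=10.0)        # SVM classifier on LBP features
    clf.fit(features, labels)
    return clf
```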
In recent years the convolutional neural network, a recognition method that combines artificial neural networks with deep learning theory, has made great strides in image classification. A CNN uses local receptive fields and weight sharing, which significantly reduces the number of training parameters compared with a fully connected neural network, and it offers some degree of invariance to translation, rotation and distortion of the image. CNNs have been widely used in speech recognition and face recognition. TensorFlow is the second-generation machine-learning system developed by Google; it supports convolutional neural networks (CNNs), recurrent neural networks (RNNs) and other deep neural network models. The system is widely used in Google products and services and has been applied in more than a hundred machine-learning projects across more than a dozen fields, including speech recognition, computer vision and robotics.
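As a concrete illustration of these ideas (local receptive fields, weight sharing, pooling for small translation invariance), a minimal TensorFlow/Keras CNN for small grayscale face images is sketched below; the layer sizes and input resolution are assumptions, not the network used in this paper.

```python
# Minimal illustrative CNN for 48x48 grayscale expression images.
import tensorflow as tf

def build_small_cnn(num_classes: int = 7) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(48, 48, 1)),
        # Convolution = local receptive fields with shared weights,
        # which keeps the parameter count low.
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),            # small translation invariance
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```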
1.1. CATEGORIZING FACIAL EXPRESSIONS & ITS FEATURES:
Facial expression is a key mechanism for conveying human emotion. Over the course of a day a person passes through many emotions, which may be driven by mental or physical circumstances. Although human beings experience a wide range of emotions, modern psychology defines six basic facial expressions: happiness, sadness, surprise, fear, disgust and anger, as universal emotions [2]. The movements of the facial muscles help identify human emotions; the basic facial features involved are the eyebrows, the mouth, the nose and the eyes. We show that it is possible to differentiate faces carrying a prototypical expression from the neutral expression. Moreover, we can achieve this with data that has been massively reduced in size: in the best case the original images are reduced to just 5 components. We also investigate the effect size on face images, a concept which has not previously been reported for faces. This enables us to identify those areas of the face that are involved in producing a facial expression. Fear, disgust, anger, surprise, happiness and sadness are the different types of facial emotion that a human being expresses through movements of the face; these movements, i.e. the making of facial expressions, bring different emotions to the face of the human being.
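The paper does not name the dimensionality-reduction method behind the 5-component claim; purely as an illustration of reducing face images to a handful of components, a hypothetical PCA sketch follows.

```python
# Hypothetical illustration only: reducing flattened face images to a few
# components with PCA (the reduction method is not specified in the paper).
import numpy as np
from sklearn.decomposition import PCA

def reduce_faces(face_images: np.ndarray, n_components: int = 5) -> np.ndarray:
    """face_images: array of shape (n_samples, height, width), grayscale."""
    n_samples = face_images.shape[0]
    flat = face_images.reshape(n_samples, -1).astype(np.float64)
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(flat)       # shape: (n_samples, n_components)
    print("explained variance:", pca.explained_variance_ratio_.sum())
    return reduced
```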
2. RELATED WORK
In 1971, Paul Ekman identified, from a psychological point of view, six basic emotions (happiness, sadness, anger, disgust, surprise and fear) that hold across cultures. In 1978, Ekman et al. [22] developed the Facial Action Coding System (FACS) to describe facial expressions. Most facial expression work today builds on this foundation, and this paper likewise uses the six basic emotions plus a neutral class for facial expression classification. Lu Guanming et al. [2] proposed a convolutional neural network for facial expression recognition, adopting dropout and data-augmentation strategies to address insufficient training data and overfitting. C. Shan et al. applied the LBP and SVM algorithms to classify facial expressions with an accuracy of 95.10%. Andre Teixeira Lopes et al. [7] used a deep convolutional neural network to classify facial expressions and reached an accuracy of 97.81%.

Traditional machine learning, in order to ensure the accuracy and reliability of the trained classifier, must satisfy two basic assumptions: first, the training samples and test samples are drawn independently from the same distribution; second, sufficient training data is available. In many cases, however, these two conditions are hard to meet. The most likely scenario is that the training data is outdated. Training usually requires labelling a large amount of data, but labelling new data is very expensive and demands a great deal of manpower and material resources.
The purpose of transfer learning is to apply knowledge gained in one environment to a new environment. Compared with traditional machine learning, transfer learning relaxes the two basic assumptions above: it does not require a large amount of training data, and it can work when only a small amount of labelled data is available. With transfer learning, the training samples and test samples also need not be independently and identically distributed, as traditional machine learning requires. Likewise, compared with a network trained from random initialisation, a transferred network learns much faster.
3. PROPOSED SYSTEM
This experiment is based on the Inception v3 model [8], implemented on the TensorFlow platform [1]; the hardware is an Intel i7 at 2.9 GHz with 8 GB of DDR3 at 1600 MHz. The Extended Cohn-Kanade (CK+) dataset [15] is used as the experimental expression dataset. We selected 1004 facial images covering the seven basic expression classes: happiness (158), sadness (155), surprise (161), fear (137), neutral (144), and disgust and anger (103 and 146).
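One plausible way to organise and load these images for training is sketched below; the directory layout, paths and batch size are assumptions rather than details given in the paper.

```python
# Hypothetical loading sketch: assumes the 1004 CK+ face crops have been
# sorted into one sub-directory per expression class.
import tensorflow as tf

CLASSES = ["anger", "disgust", "fear", "happiness",
           "neutral", "sadness", "surprise"]

# Grayscale crops are loaded as 3-channel images so they can be fed to
# Inception v3, which expects RGB input at 299x299.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "ck_plus/train",                 # hypothetical path
    labels="inferred",
    class_names=CLASSES,
    color_mode="rgb",
    image_size=(299, 299),
    batch_size=32,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "ck_plus/val",                   # hypothetical path
    labels="inferred",
    class_names=CLASSES,
    color_mode="rgb",
    image_size=(299, 299),
    batch_size=32,
)
```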
The images are first pre-processed. First, the Inception v3 model [8] expects training images in JPG or JPEG format, while the images in the dataset are in PNG format, so the images are converted from PNG to JPG. Secondly, since the dataset images are facial expression photographs taken with a digital camera, some in colour and some in grayscale, all images are converted to grayscale. In order to eliminate interference from the background and hair and to improve classification accuracy, the face region is cropped from each image, and the cropped images are used for training, validation and testing.
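A rough sketch of this pre-processing (grayscale conversion, face cropping, saving as JPG) is shown below; the Haar-cascade face detector and the file layout are assumptions, since the paper does not specify which detector was used.

```python
# Hedged pre-processing sketch: convert dataset images to grayscale JPG and
# crop the face region so background and hair do not interfere with training.
import cv2
from pathlib import Path

# OpenCV's bundled frontal-face Haar cascade; an illustrative choice only.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(src_dir: str, dst_dir: str) -> None:
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for png_path in Path(src_dir).glob("*.png"):
        img = cv2.imread(str(png_path), cv2.IMREAD_GRAYSCALE)
        if img is None:
            continue
        faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue                        # skip images where no face is found
        x, y, w, h = faces[0]
        crop = img[y:y + h, x:x + w]
        out = Path(dst_dir) / (png_path.stem + ".jpg")
        cv2.imwrite(str(out), crop)         # saved as JPG for the retraining step
```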
The Inception v3 network model [8] is a very complex network. If the model is trained directly, and given that the CK+ dataset [12] is relatively small, training takes a long time and the training data is insufficient. Therefore, we use transfer learning to retrain the Inception v3 model. The last layer of the Inception v3 model [8] is a softmax classifier; because the ImageNet dataset contains 1000 classes, the classifier has 1000 output nodes in the original network. Here we remove the last layer of the network, set the number of output nodes to 7 (the number of facial expression classes), and then retrain the network model. The last layer of the model is trained with the back-propagation algorithm, and the cross-entropy cost function is used to adjust the weight parameters by computing the error between the output of the softmax layer and the label vector of the given example's category.
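The retraining step can be sketched with tf.keras as follows; this is not the original retraining script, and the optimiser and learning rate are assumptions.

```python
# Sketch of retraining Inception v3's final layer for 7 expression classes.
import tensorflow as tf

NUM_CLASSES = 7  # six basic expressions plus neutral

# Inception v3 pre-trained on ImageNet, with its 1000-way classifier removed.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet",
    input_shape=(299, 299, 3), pooling="avg")
base.trainable = False            # transfer learning: freeze the pre-trained layers

inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)   # scale to [-1, 1]
x = base(x, training=False)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)  # new 7-way head
model = tf.keras.Model(inputs, outputs)

# Cross-entropy loss; back-propagation updates only the new last layer.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical usage with datasets such as the ones loaded earlier:
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```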
4. RESULT
The data used in the experiment were collected from five adult volunteers (three men and two women) and used to construct the system, with each person recorded twice while watching a 210-second video. The video contained six different scenarios (relaxation, humour, sadness, fear and discussion), and the elicited feelings were annotated with FeelTrace; we noticed that every volunteer had a different reaction to each part of the video. The video was assembled from YouTube clips and also contained audio, which had a stronger effect on the participants. The face video of each participant was captured with a Xilinx Spartan-6 LX45 FPGA and a camera sensor connected to it; an HDMI output from the FPGA was then connected to a computer, and the OpenCV acquisition toolbox was used to receive and store the real-time video from the FPGA board for use on the machine. The videos were in RGB colour, with a separate value for each colour component of each pixel in a frame. Each participant was framed from the shoulders up, facing the camera, while sitting in a chair. The recordings were made to be natural and realistic; the videos were therefore recorded in an office with a natural background, and in some cases other people can be seen walking through the camera frame. The ten recorded videos contain no audio, since audio is not used in this work; this also reduced the volume of data, so the data could be processed faster. It was interesting that the volunteers had a different reaction to each part of the video. In total the data collection contained 63,000 labelled samples.
5. CONCLUSION
This article has discussed facial expression recognition systems and the associated research challenges. Such systems essentially comprise face detection, feature extraction and classification. Different approaches can be used to obtain better recognition rates, and the well-established methods are effective. These methods provide a practical solution to the expression recognition problem and can work well in a constrained environment. Recognising emotional states remains a complex problem, made difficult by the physical and psychological characteristics that vary between individuals. Research in this field will therefore continue for many years, since many problems must still be solved to build user interfaces with better recognition of the complex emotional states required.
REFERENCES
[1] Martín Abadi, Ashish Agarwal, et al: TensorFlow: Large-
Scale Machine Learning on Heterogeneous Distributed
Systems. CoRR abs/1603.04467 (2016)
[2] G. Donato, M.S. Bartlett, J.C. Hager, P. Ekman, T.J.
Sejnowski, “Classifying Facial Actions”, IEEE Trans.
Pattern Analysis and Machine Intelligence, Vol. 21, No.
10, pp. 974-989, 1999
[3] L. Torres, J. Reutter, and L. Lorente, “The importance of
the color information in face recognition,” in
Proceedings IEEE International Conference on Image
Processing, vol. 3, pp.627–631, 1999.
[4] Andre Teixeira Lopes, Edilson de Aguiar, Thiago
Oliveira-Santos: A Facial Expression Recognition System
Using Convolutional Networks. SIBGRAPI 2015: 273-280
[5] E. Friesen and P. Ekman: “Facial action coding system: a
technique for the measurement of facial movement,”
Palo Alto, 1978.
[6] Ioan Buciu, Constantine Kotropoulos, Ioannis Pitas: ICA
and Gabor representation for facial expression
recognition. ICIP (2) 2003: 855-858
[7] Caifeng Shan, Tommaso Gritti: Learning Discriminative
LBP-Histogram Bins for Facial Expression Recognition.
BMVC 2008: 1-10
[8] Martín Abadi, Ashish Agarwal, et al: TensorFlow: Large-
Scale Machine Learning on Heterogeneous Distributed
Systems. CoRR abs/1603.04467 (2016)
[9] Sung-Oh Lee, Yong-Guk Kim, Gwi-Tae Park: Facial
Expression Recognition Based upon Gabor-Wavelets
Based Enhanced Fisher Model. ISCIS 2003: 490-496
[10] C. Shan, S. Gong, and P. W. McOwan: “Facial expression
recognition based on local binary patterns: A
comprehensive study,” Image and Vision Computing, vol. 27,
no. 6, pp. 803-816, 2009.