International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 05 Issue: 12 | Dec 2018 www.irjet.net p-ISSN: 2395-0072
A Comprehensive Survey and Detailed Study on Various Face
Recognition Methods
Tanuj Nagaria1, Dr. Dharmendra Chourishi2
1M.Tech (CSE) Research Scholar, NRI Institute of Science and Technology, RGPV, Bhopal, M.P, India
2Head PG, MCA, NRI Institute of Science and Technology, RGPV, Bhopal, M.P, India
---------------------------------------------------------------------***----------------------------------------------------------------------
Abstract – Face Recognition, one of the most challenging topics in the fields of Computer Vision and Image Processing, remains an open problem, as recent advancements have not yet reached high recognition performance in real-world environments. With the growing use of Computer Vision and Image Processing technologies, Face Recognition has gained increasing interest due to its applications and to concerns about high security. The human face can be considered a key identifier in various fields, and computational models of face recognition can be applied to a wide variety of problems involving security systems, identification of criminals or suspects, image and film processing, and human-computer interaction. This field of computer vision and image processing involves recognizing a face from an image or a video source. Several algorithms and methodologies for face identification have been developed, each with its own pros and cons. In this paper, we provide a review and survey of the major face recognition algorithms and methodologies developed so far. The paper is organized into five parts: the first gives an introduction and a review of existing work, the second gives technical details of the methods and approaches developed so far, the third describes the benefits and applications of face recognition systems, the fourth discusses the limitations of face recognition algorithms, and the fifth presents the conclusion. It is our hope that by reviewing existing algorithms, even better methods will be developed to solve this fundamental problem.
Key Words: Face Detection, Face Recognition,
Eigenfaces, LDA, ICA, LBP, SNoW, Neural Network
1. INTRODUCTION
With the rapid increase in computational power and developments in sensing, analysis equipment, and related technologies, computers are becoming smarter. Several research projects and commercial products have demonstrated the capability of a computer to interact with humans in a natural way by looking at people through cameras. Identification of a human being using biometrics has proved to be one of the best methodologies developed so far and is a key area of research interest. Biometric-based techniques have emerged as the most promising option for recognizing individuals in recent times, since identification of an individual, entity, or object through biometrics provides better results as well as additional features.
Biometrics-based technologies include identification based on physiological characteristics (such as the face, iris, retina, fingerprints, finger geometry, hand veins, hand and palm geometry, voice, etc.) and behavioural traits (such as gait, signature, and keystroke dynamics). Among all the features of a human being used for identification, face recognition seems to have more advantages than other biometrics-based methods, and so it is one of the key research interests in computer vision and image processing. Many applications rely on the performance of digital image processing systems, such as biometric authentication, multimedia, human-computer interaction, and security applications.
Figure 1. Types of Biometrics Based Identification
Human Face Identification (HFI) among a set of images is an area of research which has many challenges but has received a great deal of attention over the last few years due to its many applications in various domains. As the human face is a complex multidimensional structure and a rich source of information about human behaviour, it requires good computing techniques for its recognition. The human face also displays emotion, indicates feelings, regulates social behaviour, reveals brain function, etc.
1.1 ADVANTAGES OF BIOMETRIC BASED RECOGNITION
METHODS
Biometric-based techniques have emerged as a promising option for identifying and authenticating people and granting access to physical and virtual domains, compared to alternatives such as passwords, PINs, and smart or plastic cards. The benefits of biometric face recognition methods are listed below:
A. Better Security and No More Time Fraud
B. Automated System with Easy Integration
C. High Success Rate, User Friendly Systems and
Convenient Security Solution
D. Beneficial for security and surveillance purposes
E. Face recognition can be done passively, without any explicit action or participation by the user; face images can be acquired from a distance by a camera.
F. Facial features, like other individual biological traits, cannot be misplaced, forgotten, stolen, or forged.
G. Iris and retina identification require expensive equipment, voice recognition is susceptible to background noise, and signatures can be tampered with or forged, but face recognition is totally non-intrusive and does not carry any such risks.
2. FACE RECOGNITION METHODS
There are various methods used in face recognition. Each method behaves differently under different conditions such as illumination, expression, and pose change. In this section, a classification and detailed study of the various methodologies developed is provided.
2.1 CLASSIFICATION OF FACE RECOGNITION METHODS
Face recognition methods (approaches) are classified into the following four categories [1][2][3][6].
A. Knowledge Based Methods: These methods use pre-defined rules, based on human knowledge, to determine whether a region is a face. They are rule-based methods which involve capturing knowledge about faces and converting it into a set of rules. It is simple to guess some easy rules: for example, a face usually has two symmetric eyes, and the eye area is darker than the cheeks. Facial features could be the distance between the eyes or the colour intensity difference between the eye area and the lower zone. A major disadvantage of these methods is the difficulty of building an appropriate set of rules: if the rules are too general, they produce many false positives; if they are too detailed, they produce many false negatives. The solution to these problems is to build hierarchical knowledge-based methods, which are efficient with simple inputs. These rule-based methods use human knowledge of what makes a typical human face and capture relationships between facial features. They are designed mainly for face localization. A limitation is that if a person is wearing glasses, it is almost impossible to find the face. There are also algorithms that detect face-like textures or skin colour, for which it is important to select the best colour model to detect faces.
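As a minimal illustration only (not an implementation of any surveyed system), the following Python sketch encodes one such rule, namely that the eye band of a candidate window should be darker than the cheek band. The band positions and the margin are hypothetical choices, and numpy is assumed to be available.

import numpy as np

def passes_eye_cheek_rule(window, margin=10):
    # Toy knowledge-based rule: in a face-sized grayscale window,
    # the eye band should be noticeably darker than the cheek band.
    h, w = window.shape
    eye_band = window[int(0.25 * h):int(0.45 * h), :]    # rough eye region (hypothetical)
    cheek_band = window[int(0.55 * h):int(0.75 * h), :]  # rough cheek region (hypothetical)
    return eye_band.mean() + margin < cheek_band.mean()

# Example: a synthetic window that satisfies the rule
win = np.full((100, 100), 180.0)
win[25:45, :] = 90.0                      # darker eye band
print(passes_eye_cheek_rule(win))         # True

A real knowledge-based detector would combine many such rules, typically in a coarse-to-fine hierarchy.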
B. Feature Based Methods: Also known as feature-invariant approaches, these methods aim to find structural features that exist even when the pose, viewpoint, or lighting conditions differ, and use them to locate faces. They look for facial structures that are robust to pose and lighting variations, such as the mouth, cheeks, eyes, ears, nose, chin, and lips. Measurements such as the distance between the eyes or ears, the locations of the eyes and nose, and the length of the nose are used to determine the face. Potential faces are normalized to a fixed size, position, and orientation; the face region in the image can then be verified, for example using a back-propagation neural network. These methods are designed mainly for face localization.
C. Template Matching Methods: These methods use pre-stored face templates to judge whether an image region is a face. An input image is compared with stored templates of whole faces or of facial features, where each feature can be defined independently. Template matching methods are used for both face localization and detection and are easy to implement, but on their own they are insufficient for face detection and do not give good results under variations in scale, shape, and pose. A minimal correlation-based sketch follows.
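The sketch below shows one common way such a comparison can be implemented, using normalized cross-correlation between a grayscale window and a stored template; it is a simplified illustration under stated assumptions (numpy available, window and template of equal size), not the specific matching scheme of any work cited here.

import numpy as np

def normalized_cross_correlation(window, template):
    # Similarity between an image window and a stored face template
    # (both grayscale, same size); values near 1 indicate a good match.
    w = window - window.mean()
    t = template - template.mean()
    denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
    return float((w * t).sum() / denom) if denom > 0 else 0.0

# Toy usage (hypothetical data): compare a near-match against the template
template = np.random.rand(24, 24)
window = template + 0.05 * np.random.rand(24, 24)
print(normalized_cross_correlation(window, template))   # close to 1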
D. Appearance Based Methods: The appearance-based methods, as the name suggests, use a set of training images for learning the models or templates, and show superior performance over the other categories. In general, they rely on techniques from statistical analysis and machine learning to find the relevant characteristics of face and non-face images. The learned characteristics are in the form of distribution models or discriminant functions that are subsequently used for face detection [2]. Algorithms used in these methods include PCA (Eigenfaces), distribution-based methods, neural networks, Support Vector Machines (SVM), the Hidden Markov Model, and others. The categorization of methods for face detection in a single image is summarized in Table 1 below.
Table-1: Face Detection Approaches and Representative Works

Approach                        Representative Works
Knowledge Based                 Multiresolution rule-based method [20]
Feature Invariant
  Facial Features               Grouping of edges [21]
  Texture                       Space Gray-Level Dependence (SGLD) matrix of face pattern [22]
  Skin Color                    Mixture of Gaussians [23]
  Multiple Features             Integration of skin color, size, and shape [24]
Template Matching
  Predefined Face Templates     Shape template [25]
  Deformable Templates          Active Shape Model [26]
Appearance Based
  Eigenface                     Eigenvector decomposition and clustering [6]
  Distribution Based            Gaussian distribution and multilayer perceptron [27]
  Neural Network                Ensemble of neural networks and arbitration schemes [28]
  SVM                           SVM with polynomial kernel [29]
  Naïve Bayes Classifier        Joint statistics of local appearance and position [30]
  Hidden Markov Model           Higher order statistics with HMM [31]
  Information-Theoretical       Kullback relative information [32]
2.2 CLASSIFICATION BASED ON APPROACH TO DETECT
THE FACE
Face recognition can be performed on both still images and video. In this study, we focus on face recognition in still images. Face recognition for still images can be classified into three main approaches, as described below:
Holistic based Approach:
In this approach, the whole face region is taken as input data to the face recognition system. This approach has proved to be an excellent technique for recognizing faces in terms of recognition rate.
Types of holistic methods are:
a. Principal Component Analysis (PCA)
b. Singular Value Decomposition (SVD)
c. Artificial Neural Network (ANN)
Feature based Approach:
In this approach, local features of the face, such as the eyes, nose, ears, lips, cheeks, and chin, together with their positions, locations, and lengths, are taken into consideration and used as input data for a structural classifier. The Hidden Markov Model method belongs to this category.
Hybrid based Approach:
This approach is a combination of the holistic and feature-based approaches. The idea comes from how the human vision system perceives both local features and the whole face. Modular Eigenfaces, hybrid local features, shape-normalized methods, and component-based methods are examples of the hybrid approach.
2.3 DETAILED STUDY OF METHODS AND ALGORITHMS
USED IN FACE RECOGNITION
In this section, a detailed study on various human
face recognition methodologies developed is provided.
Figure 2. Types of Face Recognition Methods
Appearance Based Methods:
Figure 3. Types of Appearance based Face Recognition
a. Eigenface Based Method
This method uses the Principal Component Analysis (PCA) scheme; a detailed description of PCA can be found in [1][2][4][6]. PCA is a powerful technique for extracting structure from potentially high-dimensional data sets, and corresponds to extracting the eigenvectors that are associated with the largest eigenvalues of the input distribution. This eigenvector analysis has been widely used in face processing.
Step 1: Prepare the data.
The faces constituting the training set (Γi, i = 1, ..., M) should be prepared for processing.
Step 2: Subtract the mean.
The average face Ψ = (1/M) Σi Γi has to be calculated and then subtracted from the original faces (Γi), with the result stored in the variable Φi:
Φi = Γi − Ψ
Step 3: Calculate the covariance matrix. In step three, the covariance matrix C is calculated according to
C = (1/M) Σi Φi Φi^T
Step 4: Calculate the eigenvectors and eigenvalues of the covariance matrix. The eigenvectors (Eigenfaces) and the corresponding eigenvalues should be calculated. The Eigenfaces must be normalized so that they are unit vectors, i.e. of length 1. The description of the exact algorithm for determining eigenvalues and eigenvectors is omitted here, as it belongs to the standard arsenal of most mathematical programming libraries.
Step 5: Select the principal components.
From the M eigenvectors (Eigenfaces) Ʋi, only the M0 with the highest eigenvalues should be chosen. The higher the eigenvalue, the more characteristic features of a face the particular eigenvector describes. Eigenfaces with low eigenvalues can be omitted, as they explain only a small part of the characteristic features of the faces. After the M0 Eigenfaces Ʋi are determined, the "training" phase of the algorithm is finished.
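A minimal numpy sketch of this training procedure is given below. It follows the five steps above (using the common M x M trick for the eigen-decomposition); the data shapes and variable names are hypothetical, and numpy is assumed to be available.

import numpy as np

def train_eigenfaces(faces, m0):
    # faces: (M, D) array, one flattened training face per row.
    # Returns the mean face and the top m0 eigenfaces as unit vectors.
    mean_face = faces.mean(axis=0)                     # Step 2: average face Psi
    phi = faces - mean_face                            # Step 2: Phi_i = Gamma_i - Psi
    small_cov = phi @ phi.T / phi.shape[0]             # Steps 3-4 via the small M x M matrix
    vals, vecs = np.linalg.eigh(small_cov)
    order = np.argsort(vals)[::-1][:m0]                # Step 5: keep the m0 largest eigenvalues
    eigenfaces = (phi.T @ vecs[:, order]).T            # map back to image space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)   # unit length
    return mean_face, eigenfaces

def project(face, mean_face, eigenfaces):
    # Weights of a face in the eigenface subspace; faces are then compared
    # by the distance between their weight vectors.
    return eigenfaces @ (face - mean_face)

# Usage with random stand-in data (hypothetical sizes)
faces = np.random.rand(20, 64 * 64)
mean_face, U = train_eigenfaces(faces, m0=5)
weights = project(faces[0], mean_face, U)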
b. Distribution based Methods – LDA Algorithm [8]
Linear Discriminant Analysis (LDA), also called Fisher's Discriminant Analysis or Fisherface analysis, is another dimensionality reduction technique and is an example of a class-specific method. In LDA, the goal is to find an efficient way to represent the face vector space. LDA finds the vectors in the underlying space that best discriminate among classes: it maximizes the between-class scatter measure while minimizing the within-class scatter measure, which makes it more stable for classification [8]. Lih-Heng Chan et al. [9] proposed a facial biometrics framework based on two subspace methods, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). First, PCA is used for dimensionality reduction, where the original face images are projected into lower-dimensional face representations; second, LDA is applied to provide better discrimination. Both PCA and LDA features were compared using the Euclidean distance measure, which is conveniently used as a benchmark, and LDA-based methods were found to outperform PCA for both face identification and verification. Fisherfaces are one of the most successful and widely used methods for face recognition and belong to the appearance-based category. Fisher introduced linear discriminant analysis in 1936, and its later application to face recognition (the Fisherface method) has shown successful results.
Linear Discriminant Analysis tries to differentiate between classes rather than simply representing the data; it therefore seeks feature vectors that are useful for class discrimination. We define the two scatter matrices below:

S_W = Σ_{j=1..R} Σ_{i=1..Mj} (x_i^j − μj)(x_i^j − μj)^T
S_B = Σ_{j=1..R} Mj (μj − μ)(μj − μ)^T

The first is called the within-class scatter matrix, while the second is called the between-class scatter matrix. Here j denotes the class, i denotes the image number, x_i^j is the i-th image of class j, μj is the mean of class j, μ is the mean of all classes, Mj is the number of images in class j, and R is the number of classes. The algorithm aims at maximizing the between-class scatter while minimizing the within-class scatter. A limitation of LDA is that the within-class scatter matrix is usually singular, since the number of pixels in the images is larger than the number of images; this can increase the error rate when there are variations in pose and lighting within the same class. To overcome this problem, algorithms such as the Fisherface technique exploit within-class information to minimize the variation within each class, so that problems such as lighting variations in images of the same person can be overcome [9][10].
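The scatter matrices and the discriminant directions can be computed directly as in the sketch below; this is an illustrative implementation under stated assumptions (numpy available, X holding PCA-reduced feature vectors), not the exact procedure of [8] or [9].

import numpy as np

def lda_scatter_matrices(X, y):
    # X: (N, d) feature vectors (e.g. PCA projections), y: class labels.
    # Returns the within-class scatter S_W and between-class scatter S_B.
    d = X.shape[1]
    mu = X.mean(axis=0)
    S_W = np.zeros((d, d))
    S_B = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        diff = Xc - mu_c
        S_W += diff.T @ diff                       # within-class scatter
        gap = (mu_c - mu).reshape(-1, 1)
        S_B += Xc.shape[0] * (gap @ gap.T)         # between-class scatter
    return S_W, S_B

def fisher_directions(S_W, S_B, k):
    # Top-k directions maximizing the Fisher criterion (between/within ratio).
    vals, vecs = np.linalg.eig(np.linalg.pinv(S_W) @ S_B)
    order = np.argsort(vals.real)[::-1][:k]
    return vecs[:, order].real

# Usage with stand-in PCA features (hypothetical sizes): 3 classes, 10 images each
X = np.random.rand(30, 10)
y = np.repeat(np.arange(3), 10)
S_W, S_B = lda_scatter_matrices(X, y)
W = fisher_directions(S_W, S_B, k=2)               # project new samples with X @ W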
c. Independent Component Analysis (ICA)
ICA [10] can be considered a generalization of PCA. PCA treats image elements as random variables and decorrelates them using only second-order statistics, whereas ICA, proposed in [11, 12], minimizes both second-order and higher-order dependencies in the input data and tries to find a basis along which the projected data are statistically independent. Here too, PCA is used to reduce dimensionality prior to performing ICA. Two different approaches or architectures are used by ICA for face recognition:
1. ICA Architecture 1: In this approach, according to [11], images are considered as random variables and pixels as trials, so we care about the independence of images or functions of images. It tries to find a set of statistically independent basis images.
2. ICA Architecture 2: Here pixels are considered as random variables and images as trials, so we care about the independence of pixels or functions of pixels. In other words, ICA Architecture 2 uses ICA to obtain a representation in which the coefficients used for coding images are statistically independent.
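A short sketch of the PCA-then-ICA pipeline is shown below using scikit-learn's FastICA; the package, data shapes, and component counts are assumptions for illustration, and only Architecture 2 is run explicitly.

import numpy as np
from sklearn.decomposition import PCA, FastICA   # scikit-learn assumed available

# X holds one flattened face image per row (stand-in data, hypothetical shape)
X = np.random.rand(50, 32 * 32)

# PCA first to reduce dimensionality, as described above
X_pca = PCA(n_components=20, whiten=True).fit_transform(X)

# Architecture 2: images as trials, coefficients as random variables, giving
# statistically independent coding coefficients per image
ica = FastICA(n_components=20, random_state=0)
codes = ica.fit_transform(X_pca)

# Architecture 1 would instead apply FastICA to the transposed data, so that
# images act as the random variables and the result is a set of independent basis images.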
d. Local Binary Patterns (LBP)
Local Binary Patterns (LBP) were first presented by Ojala et al. in [13] for use in texture description. The basic method labels each pixel with a decimal value, called the LBP or LBP code, that describes the local structure around the pixel. As illustrated in Figure 4, the value of the centre pixel is subtracted from the values of its 8 neighbouring pixels; if the result is negative the binary value is 0, otherwise it is 1. The calculation starts from the pixel at the top left corner of the 8-neighbourhood and continues in the clockwise direction. After all neighbours have been processed, an eight-digit binary value is produced. When this binary value is converted to decimal, the LBP code of the pixel is obtained and placed at the coordinates of the pixel in the output matrix [14].
Figure 4. Basic LBP Operator Display
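A direct numpy sketch of this basic operator, under the bit-ordering convention described above (most significant bit for the top-left neighbour), is given below; the example image is hypothetical.

import numpy as np

def lbp_code(img, r, c):
    # LBP code of pixel (r, c) in a grayscale image using its 3x3 neighbourhood,
    # read clockwise starting at the top-left neighbour.
    center = img[r, c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr, c + dc] >= center:     # 1 if neighbour >= centre, else 0
            code |= 1 << (7 - bit)
    return code                               # decimal LBP code in 0..255

# Example on a toy 3x3 patch
img = np.array([[6, 5, 2],
                [7, 6, 1],
                [9, 8, 7]])
print(lbp_code(img, 1, 1))                    # binary 10001111 -> 143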
A drawback of the basic LBP, which uses the 8-neighbourhood (3x3), is that it cannot capture large-scale structures. To take textures of different sizes into account, the method was generalized: in [15], Ojala et al. revised the method to be flexible for any radius and any number of sampling points, and named the new method Extended LBP (ELBP). The histograms of LBP codes are used for face recognition, since LBP histograms contain information about the distribution of local micro-patterns.
Figure 5 below shows different examples of the ELBP operator. 'P' represents the number of neighbours and 'R' represents the radius of the circle on which the neighbours are located. Because the face image is too big for a single LBP calculation, dividing the image into small regions is proposed in [16]. Some parts of the face (such as the eyes and mouth) contain more information for face recognition.
Figure 5. ELBP Operator Examples
Yang et al. [16] propose to train and allocate different weights to the face regions according to the information they carry, and then to concatenate the regional histograms end to end to build a global description of the face. This collects local pattern information together with the spatial layout of the whole image. To decide whether two face images belong to the same person, their histograms are compared. The weighted Chi-square statistic is used as the similarity measure for comparing histograms. It can be defined as follows:

χ²_w(S, M) = Σ_{i,j} w_j (S_{i,j} − M_{i,j})² / (S_{i,j} + M_{i,j})

where i = 0, 1, ..., n-1 indexes the histogram bins, j = 0, 1, ..., m-1 indexes the regions, w_j is the weight for region j, S is the target face image histogram, and M is the query face image histogram [16].
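A minimal numpy implementation of this weighted comparison is sketched below; the histogram layout (one row per region) and the toy sizes are assumptions for illustration.

import numpy as np

def weighted_chi_square(S, M, w):
    # Weighted Chi-square distance between two sets of regional LBP histograms.
    # S, M: (regions, bins) histograms of the target and query face; w: region weights.
    num = (S - M) ** 2
    den = S + M
    terms = np.divide(num, den, out=np.zeros_like(num), where=den != 0)   # skip empty bins
    return float((w[:, None] * terms).sum())

# Toy usage (hypothetical sizes): 4 regions with 8 bins each
rng = np.random.default_rng(0)
S = rng.random((4, 8))
M = rng.random((4, 8))
w = np.array([1.0, 2.0, 1.0, 0.5])            # e.g. eye regions weighted higher
print(weighted_chi_square(S, M, w))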
e. Support Vector Machine (SVM)
While most methods for training a classifier (e.g. Bayesian classifiers, neural networks, and RBF networks) are based on minimizing the training error, i.e. the empirical risk, SVMs operate on another induction principle, called structural risk minimization, which aims to minimize an upper bound on the expected generalization error. An SVM classifier is a linear classifier whose separating hyperplane is chosen to minimize the expected classification error on unseen test patterns. This optimal hyperplane is defined by a weighted combination of a small subset of the training vectors, called support vectors, hence the name Support Vector Machine [17]. Given a set of points belonging to two classes, an SVM finds the hyperplane that separates the largest possible fraction of points of the same class on the same side, while maximizing the distance of either class from the hyperplane. For face recognition, PCA is first used to extract features from the face images, and then discrimination functions between each pair of images are learned by SVMs. SVMs, as defined in [17], can be considered a new paradigm for training neural network, radial basis function (RBF), or polynomial function classifiers.
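The PCA-plus-SVM pipeline described here can be sketched with scikit-learn as below; the library, the polynomial degree, and the stand-in data are assumptions for illustration rather than the exact setup of [17] or [29].

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Stand-in data (hypothetical): 40 flattened face images from 4 identities
X = np.random.rand(40, 32 * 32)
y = np.repeat(np.arange(4), 10)

# PCA feature extraction followed by an SVM with a polynomial kernel
model = make_pipeline(PCA(n_components=15), SVC(kernel="poly", degree=2, C=1.0))
model.fit(X, y)
print(model.predict(X[:5]))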
f. Sparse Networks of Winnows (SNoW)
Yang et al. proposed a method that uses the SNoW learning architecture [18] to detect faces with different features and expressions, in different poses, and under different lighting conditions. SNoW is a sparse network of linear functions that uses the Winnow update rule [18]. It is mainly tailored for learning in domains in which the potential number of features taking part in decisions is very large but may not be known a priori.
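The sketch below shows only the Winnow update rule for a single linear unit operating on binary features (SNoW itself arranges many such sparsely connected units over a large feature space); the threshold, promotion factor, and toy data are hypothetical.

import numpy as np

def winnow_train(X, y, alpha=2.0, epochs=5):
    # Winnow update rule on binary feature vectors X (0/1) with labels y (0/1):
    # promote weights of active features on false negatives, demote on false positives.
    n = X.shape[1]
    w = np.ones(n)
    theta = n / 2.0                          # a common threshold choice
    for _ in range(epochs):
        for x, t in zip(X, y):
            pred = 1 if w @ x >= theta else 0
            if pred == 0 and t == 1:
                w[x == 1] *= alpha           # promotion
            elif pred == 1 and t == 0:
                w[x == 1] /= alpha           # demotion
    return w, theta

# Toy usage: four 3-feature examples with binary labels
X = np.array([[1, 0, 1], [1, 1, 0], [0, 1, 1], [0, 0, 1]])
y = np.array([1, 1, 0, 0])
w, theta = winnow_train(X, y)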
g. Neural Networks Based
Neural network based methods have been applied successfully in many pattern recognition problems, such as optical character recognition, object recognition, and autonomous robot driving. In the neural network architectures proposed for face detection, the task is treated as a two-class pattern recognition problem. The advantage of using neural networks for face recognition is the feasibility of training a system to capture the complex class-conditional density of face patterns. The drawback of neural network architectures is that they have to be extensively tuned to obtain good performance.
Table-2: Types of Neural Network Architectures
1. Hierarchical Neural Network
2. Auto-associative Neural Network
3. Probabilistic Decision Based Neural Network
4. Multilayer Neural Network
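As a minimal illustration of treating face detection as a two-class problem, the sketch below trains a small multilayer network with scikit-learn; the window size, labels, and network size are hypothetical stand-ins, not the tuned architectures referenced above.

import numpy as np
from sklearn.neural_network import MLPClassifier

# Stand-in data (hypothetical): flattened 19x19 windows, label 1 = face, 0 = non-face
X = np.random.rand(200, 19 * 19)
y = np.random.randint(0, 2, size=200)

# A small multilayer network for the two-class face / non-face decision
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]))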
h. Naïve Bayes Classifier
This appearance-based approach was developed by Schneiderman and Kanade, who used a Naïve Bayes classifier to estimate the joint probability of local appearance and position of face patterns (subregions of the face) at multiple resolutions. Some local patterns of an object are more distinctive than others; for example, the intensity patterns around the eyes are much more distinctive than those around the cheeks. The Naïve Bayes classifier provides better estimation of the conditional density functions of the subregions and also gives a functional form of the posterior probability that captures the joint statistics of local appearance and position on the object.
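The sketch below is a heavily simplified stand-in for this idea: each window is described by per-subregion statistics (so position is encoded by feature order) and a Gaussian Naive Bayes classifier is fitted. It is not the Schneiderman-Kanade estimator itself; scikit-learn, the grid size, and the toy data are assumptions.

import numpy as np
from sklearn.naive_bayes import GaussianNB

def subregion_features(window, grid=4):
    # Mean intensity of each subregion of the window; feature order encodes position.
    h, w = window.shape
    hs, ws = h // grid, w // grid
    return np.array([window[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws].mean()
                     for i in range(grid) for j in range(grid)])

# Stand-in training windows (hypothetical), label 1 = face, 0 = non-face
windows = np.random.rand(100, 24, 24)
y = np.random.randint(0, 2, size=100)
X = np.array([subregion_features(w) for w in windows])
clf = GaussianNB().fit(X, y)
print(clf.predict(X[:3]))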
i. Hidden Markov Model
The underlying assumption of the Hidden Markov Model (HMM) is that patterns can be characterized as a parametric random process and that the parameters of this process can be estimated in a precise, well-defined manner. In developing an HMM for a pattern recognition problem, a number of hidden states need to be decided first to form a model. Then, one can train the HMM to learn the transition probabilities between states from examples, where each example is represented as a sequence of observations. The goal of training an HMM is to maximize the probability of observing
the training data by adjusting the parameters of the HMM using the standard Viterbi segmentation method and the Baum-Welch algorithm. After the HMM has been trained, the output probability of an observation determines the class to which it belongs [19]. HMM-based methods usually treat a face pattern as a sequence of observation vectors, where each vector is a strip of pixels. HMMs have been applied to both face recognition and localization [19].
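A brief sketch of this strip-based formulation is given below using the hmmlearn package (an assumed external dependency); the strip height, number of states, and stand-in data are hypothetical. One HMM is trained per identity, and at test time the model giving the highest likelihood wins.

import numpy as np
from hmmlearn import hmm          # hmmlearn assumed to be installed

def face_to_strips(img, strip_height=4):
    # Treat a face image as a top-to-bottom sequence of pixel-strip observation vectors.
    h = img.shape[0] - img.shape[0] % strip_height
    return img[:h].reshape(-1, strip_height * img.shape[1])

# Stand-in training faces of one person (hypothetical data)
faces = [np.random.rand(32, 16) for _ in range(5)]
seqs = [face_to_strips(f) for f in faces]
X = np.vstack(seqs)
lengths = [len(s) for s in seqs]

model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
model.fit(X, lengths)
print(model.score(face_to_strips(np.random.rand(32, 16))))   # log-likelihood of a test face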
j. Information – Theoretical Approach
The spatial property of the face pattern can be modeled
through different aspects. The contextual constraint, among
others, is a powerful one and has often been applied to
texture segmentation. The contextual constraints in a face
pattern are usually specified by a small neighborhood of
pixels. Markov Random Field (MRF) theory provides a
convenient and consistent way to model context dependent
entities such as image pixels and correlated features. The
face and non-face distributions can be estimated using histograms. A probability function p(x) is defined for the event that a template is a face, and a probability function q(x) for the event that a template is a non-face. A training database containing faces of individuals is assembled. From the training sets, the most informative pixels (MIP) are selected so as to maximize the Kullback relative information between p(x) and q(x) (i.e. to give the maximum class separation). The MIP are then used to obtain linear features for classification and representation. The distance from face space (DFFS) is calculated; if the distance to the face subspace is lower than the distance to the non-face subspace, a face is assumed to exist within the window that is passed over the input image [19].
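The Kullback relative information (KL divergence) used for this pixel selection can be computed as in the short sketch below; the toy histograms are hypothetical and serve only to show the calculation.

import numpy as np

def kullback_relative_information(p, q, eps=1e-12):
    # KL divergence D(p || q) between a face histogram p and a non-face histogram q.
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy intensity histograms for one pixel position (hypothetical): the pixels with the
# largest divergence between the face and non-face distributions are the most informative
p_face = np.array([0.1, 0.2, 0.4, 0.3])
q_nonface = np.array([0.3, 0.3, 0.2, 0.2])
print(kullback_relative_information(p_face, q_nonface))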
3. APPLICATIONS OF FACE RECOGNITION SYSTEM
 Security and Surveillance:- Access control of buildings, airports/seaports, and computer or network security; CCTV surveillance to look for suspects or criminals.
 General Identity Verification:- Driving licenses, electoral registration, national IDs (UID AADHAR cards in India), passports, employee IDs, bank account IDs, etc.
 Criminal justice systems: - Mug-shot or booking
systems, post-event analysis, forensics.
 Access Control: - Face verification, matching a face
against a single enrolled exemplar.
 Image database investigations:- Searching image databases of licensed drivers, benefit recipients, missing children, immigrants, and police bookings.
 Video Indexing in Multimedia environments with
adaptive human computer interfaces
4. LIMITATIONS OF FACE RECOGNITION ALGORITHMS
There are various challenges and limitations associated with face detection and face recognition algorithms that affect the performance of a method; they are mentioned below:
1. Facial aging, which occurs due to hormonal and biological changes.
2. Pose variation, resulting from the camera-face pose, due to which facial features (eyes, nose) may become occluded.
3. Image orientation and imaging conditions: face images vary with rotation about the camera's optical axis, and imaging conditions such as lighting and camera characteristics (sensor, flash, lenses) affect the appearance of the face.
4. Occlusion: in an image of a group of people, some faces may be partially or fully occluded by other people or objects.
5. Presence or absence of structural components: facial attributes such as beards, moustaches, glasses, sunglasses, and nose rings cause a great deal of variability in the shape, size, colour, and texture of the face.
5. CONCLUSION
This paper attempts to provide a comprehensive study and to survey the important and influential face recognition algorithms in a simple and understandable manner, drawing on a significant number of papers to cover recent developments in the field. The present study suggests that face recognition algorithms can be enhanced using hybrid methods for better performance. A list of references is provided to allow a more detailed understanding of the approaches described. Where appropriate, we have reported on the relative performance of methods, but we are also cognizant that there is a lack of uniformity in how methods are evaluated, so it would be imprudent to declare explicitly which methods have the lowest error rates. A categorization of the algorithms has also been given, along with their pros and cons. We apologize to researchers whose important contributions may have been overlooked.
ACKNOWLEDGEMENT
The authors would like to thank the professors and management of NRI Institute of Science and Technology, Bhopal (Madhya Pradesh) for providing their valuable guidance and excellent facilities for carrying out this research work.
REFERENCES
[1] B. S. Khade et al., "Face Recognition Techniques: A Survey", International Journal of Computer Science and Mobile Computing, vol. 5, issue 11, November 2016, pp. 65-72.
[2] M.-H. Yang, D. J. Kriegman, and N. Ahuja, "Detecting Faces in Images: A Survey", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, pp. 34-58, 2002.
[3] Cha Zhang and Zhengyou Zhang, "A Survey of Recent Advances in Face Detection", Microsoft Research Technical Report MSR-TR-2010-66, June 2010.
[4] Hardik Kadiya, "Comparative Study on Face Recognition Using HGPP, PCA, LDA, ICA and SVM", Global Journal of Computer Science and Technology: Graphics & Vision, vol. 12, issue 15, version 1.0, 2012.
[5] A. Wagner, J. Wright, A. Ganesh, Z. Zhou, H. Mobahi, and Y. Ma, "Towards a Practical Face Recognition System: Robust Alignment and Illumination by Sparse Representation".
[6] M. Turk and A. Pentland, "Face Recognition using Eigenfaces", Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR'91), 1991.
[7] Deepika Dubey and G. S. Tomar, "Deep Perusal of Human Face Recognition Algorithms from Facial Snapshots", International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 9, no. 9, 2016, pp. 103-112.
[8] Jie Yang, Hua Yu, and William Kunz, "An Efficient LDA Algorithm for Face Recognition", School of Computer Science, Interactive Systems Laboratories, Carnegie Mellon University, Pittsburgh, PA 15213.
[9] Lih-Heng Chan, Sh-Hussain Salleh and Chee-Ming Ting.
“Face Biometrics Based on Principal Component
Analysis and Linear Discriminant Analysis.” J.Computer
Sci., 6 (7): 693-699, 2010.
[10] M. Sharkas, M. Abou Elenien “Eigenfaces vs. Fisherfaces
vs. ICA for Face Recognition; AComparativeStudy”,ICSP
2008 Proceedings, 978-1-4244-2179-4/08/ ©2008
IEEE.
[11] M.S. Bartlett, J.R. Movellan, and T.J. Sejnowski, Face
recognition by independent component analysis, IEEE
Trans Neural Networks 13 (2002), 1450–1464.
[12] B. Draper, K. Baek, M.S. Bartlett, and J.R. Beveridge,
Recognizing faces with PCA and ICA, Computer Vis
Image Understand (Special Issue on Face Recognition)
91 (2003).
[13] T. Ojala, M.Pietikäinen and D. Harwood (1994),
"Performance evaluation of texture measures with
classification based on Kullback discrimination of
distributions", Proceedings of the 12th IAPR
International Conference on Pattern Recognition (ICPR
1994), vol. 1, pp. 582 – 585
[14] D. Huang, C. Shan, M. Ardabilian, Y. Wang and L. Chen
"Local binary patterns and its applicationtofacial image
analysis: A survey", IEEE Transactions on Systems,Man,
and Cybernetics, Part C, vol. 41, pp.765 -781 2011.
[15] T. Ojala, M. Pietikäinen, and T. Maenpaa,
“Multiresolution grayscale and rotation invariant
texture classification with local binary patterns,” IEEE
Transaction on Pattern Analysis and Machine
Intelligence., vol. 24, no. 7, pp. 971–987, Jul. 2002.
[16] H. Yang and Y. Wang, “A LBP-based face recognition
method with Hamming distance constraint,”inProc.Int.
Conf. Image Graph., Aug., 2007, pp. 645–649.
[17] M.-H. Yang, D. J. Kriegman, and N. Ahuja, "Detecting Faces in Images: A Survey", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, 2002, p. 13.
[18] A. Carleson, C. Cumby, J. Rosen, and D. Roth, “The SNoW
Learning Architecture”, Technical Report UIUCDCS-R-
99-2101, Univ. of Illinois at Urbana-Champaign
Computer Science Dept., 1999.
[19] M.-H. Yang, D. J. Kriegman, and N. Ahuja. Detecting faces
in images: A survey. IEEE Trans. on PAMI, 24(1):34–58,
2002. pp 14-15.
[20] G. Yang and T. S. Huang, “Human Face Detection in
Complex Background,” Pattern Recognition, vol. 27, no.
1, pp. 53-63, 1994.
[21] T.K. Leung, M.C. Burl, and P. Perona, “Finding Faces in
Cluttered Scenes Using Random Labeled Graph
Matching,” Proc. Fifth IEEE Int’l Conf. Computer Vision,
pp. 637-644, 1995.
[22] Y. Dai and Y. Nakano, “Face-Texture Model Based on
SGLD and Its Application in Face Detection in a Color
Scene,” Pattern Recognition, vol. 29, no. 6, pp. 1007-
1017, 1996.
[23] S. McKenna, S. Gong, and Y. Raja, “Modelling Facial
Colour and Identity with Gaussian Mixtures,” Pattern
Recognition, vol. 31, no. 12, pp. 1883-1892, 1998.
[24] R. Kjeldsen and J. Kender, “Finding Skin in Color
Images,” Proc. Second Int’l Conf. Automatic Face and
Gesture Recognition, pp. 312- 317, 1996.
[25] I. Craw, D. Tock, and A. Bennett, “FindingFaceFeatures,”
Proc. Second European Conf. Computer Vision, pp. 92-
96, 1992.
[26] A. Lanitis, C.J. Taylor, and T.F. Cootes, “An Automatic
Face Identification System Using Flexible Appearance
Models,” Image and Vision Computing, vol. 13, no. 5, pp.
393-401, 1995.
[27] K.-K. Sung and T. Poggio, “Example-Based Learning for
View Based Human Face Detection,”IEEETrans.Pattern
Analysis and Machine Intelligence, vol. 20, no. 1, pp. 39-
51, Jan. 1998.
[28] H. Rowley, S. Baluja, and T. Kanade, “Neural Network-
Based Face Detection,” IEEE Trans. PatternAnalysisand
Machine Intelligence, vol. 20, no. 1, pp. 23-38, Jan. 1998.
[29] E. Osuna, R. Freund, and F. Girosi, “Training Support
Vector Machines: An Application to Face Detection,”
Proc. IEEE Conf. Computer Vision and Pattern
Recognition, pp. 130-136, 1997.
[30] H. Schneiderman and T. Kanade,“ProbabilisticModeling
of Local Appearance and Spatial RelationshipsforObject
Recognition,” Proc. IEEE Conf. Computer Vision and
Pattern Recognition, pp. 45-51, 1998.
[31] A. Rajagopalan, K. Kumar,J.Karlekar,R.Manivasakan,M.
Patil, U. Desai, P. Poonacha, and S. Chaudhuri, “Finding
Faces in Photographs,” Proc. Sixth IEEE Int’l Conf.
Computer Vision, pp. 640- 645, 1998.
[32] M.S. Lew, “Information Theoretic View-Based and
Modular Face Detection,” Proc. Second Int’l Conf.
Automatic Face and Gesture Recognition, pp. 198-203,
1996.

More Related Content

PDF
IRJET- Persons Identification Tool for Visually Impaired - Digital Eye
PDF
IRJET- Age Analysis using Face Recognition with Hybrid Algorithm
PDF
Progression in Large Age-Gap Face Verification
PDF
ADVANCED FACE RECOGNITION FOR CONTROLLING CRIME USING PCA
PDF
DETECTING FACIAL EXPRESSION IN IMAGES
PDF
Kh3418561861
PDF
IRJET- Facial Expression Recognition
PDF
IRJET- Library Management System with Facial Biometric Authentication
IRJET- Persons Identification Tool for Visually Impaired - Digital Eye
IRJET- Age Analysis using Face Recognition with Hybrid Algorithm
Progression in Large Age-Gap Face Verification
ADVANCED FACE RECOGNITION FOR CONTROLLING CRIME USING PCA
DETECTING FACIAL EXPRESSION IN IMAGES
Kh3418561861
IRJET- Facial Expression Recognition
IRJET- Library Management System with Facial Biometric Authentication

What's hot (17)

PDF
IRJET-A Survey on Face Recognition based Security System and its Applications
PDF
C017431730
PDF
Multi Local Feature Selection Using Genetic Algorithm For Face Identification
PDF
IRJET - Real Time Facial Analysis using Tensorflowand OpenCV
PDF
IRJET- An Innovative Approach for Interviewer to Judge State of Mind of an In...
PDF
International Journal of Image Processing (IJIP) Volume (1) Issue (2)
PDF
Fl33971979
PDF
IRJET- A Review on Various Approaches of Face Recognition
PDF
IRJET- Facial Expression Recognition using Deep Learning: A Review
PDF
Age Invariant Face Recognition using Convolutional Neural Network
PDF
Review of face detection systems based artificial neural networks algorithms
PDF
Human Face Detection and Tracking for Age Rank, Weight and Gender Estimation ...
PDF
EFFECT OF FACE TAMPERING ON FACE RECOGNITION
PDF
IRJET- Emotionalizer : Face Emotion Detection System
DOCX
PDF
Recent Advances in Face Analysis: database, methods, and software.
DOCX
IRJET-A Survey on Face Recognition based Security System and its Applications
C017431730
Multi Local Feature Selection Using Genetic Algorithm For Face Identification
IRJET - Real Time Facial Analysis using Tensorflowand OpenCV
IRJET- An Innovative Approach for Interviewer to Judge State of Mind of an In...
International Journal of Image Processing (IJIP) Volume (1) Issue (2)
Fl33971979
IRJET- A Review on Various Approaches of Face Recognition
IRJET- Facial Expression Recognition using Deep Learning: A Review
Age Invariant Face Recognition using Convolutional Neural Network
Review of face detection systems based artificial neural networks algorithms
Human Face Detection and Tracking for Age Rank, Weight and Gender Estimation ...
EFFECT OF FACE TAMPERING ON FACE RECOGNITION
IRJET- Emotionalizer : Face Emotion Detection System
Recent Advances in Face Analysis: database, methods, and software.
Ad

Similar to IRJET- A Comprehensive Survey and Detailed Study on Various Face Recognition Methods (20)

PDF
International Journal of Engineering and Science Invention (IJESI)
PDF
IRJET- Design, Test the Performance Evaluation of Automobile Security Tec...
PDF
HUMAN FACE RECOGNITION USING IMAGE PROCESSING PCA AND NEURAL NETWORK
PDF
Face and facial expressions recognition for blind people
PDF
IRJET - Emotionalizer : Face Emotion Detection System
PDF
Facial Expression Identification System
PDF
Cross Pose Facial Recognition Method for Tracking any Person's Location an Ap...
PDF
Paper id 25201496
PDF
Comparative Studies for the Human Facial Expressions Recognition Techniques
PDF
FACE SHAPE CLASSIFIER USING DEEP LEARNING
PDF
Development of Real Time Face Recognition System using OpenCV
PDF
Attendance System using Face Recognition
PDF
IRJET- Library Management System with Facial Biometric Authentication
PDF
Face Recognition Technology
PDF
Criminal Face Identification
PDF
Review of Face Detection Techniques
PDF
AN IMPROVED TECHNIQUE FOR HUMAN FACE RECOGNITION USING IMAGE PROCESSING
PDF
ATTENDANCE BY FACE RECOGNITION USING AI
DOCX
Innovative Analytic and Holistic Combined Face Recognition and Verification M...
PDF
Age and Gender Classification using Convolutional Neural Network
International Journal of Engineering and Science Invention (IJESI)
IRJET- Design, Test the Performance Evaluation of Automobile Security Tec...
HUMAN FACE RECOGNITION USING IMAGE PROCESSING PCA AND NEURAL NETWORK
Face and facial expressions recognition for blind people
IRJET - Emotionalizer : Face Emotion Detection System
Facial Expression Identification System
Cross Pose Facial Recognition Method for Tracking any Person's Location an Ap...
Paper id 25201496
Comparative Studies for the Human Facial Expressions Recognition Techniques
FACE SHAPE CLASSIFIER USING DEEP LEARNING
Development of Real Time Face Recognition System using OpenCV
Attendance System using Face Recognition
IRJET- Library Management System with Facial Biometric Authentication
Face Recognition Technology
Criminal Face Identification
Review of Face Detection Techniques
AN IMPROVED TECHNIQUE FOR HUMAN FACE RECOGNITION USING IMAGE PROCESSING
ATTENDANCE BY FACE RECOGNITION USING AI
Innovative Analytic and Holistic Combined Face Recognition and Verification M...
Age and Gender Classification using Convolutional Neural Network
Ad

More from IRJET Journal (20)

PDF
Enhanced heart disease prediction using SKNDGR ensemble Machine Learning Model
PDF
Utilizing Biomedical Waste for Sustainable Brick Manufacturing: A Novel Appro...
PDF
Kiona – A Smart Society Automation Project
PDF
DESIGN AND DEVELOPMENT OF BATTERY THERMAL MANAGEMENT SYSTEM USING PHASE CHANG...
PDF
Invest in Innovation: Empowering Ideas through Blockchain Based Crowdfunding
PDF
SPACE WATCH YOUR REAL-TIME SPACE INFORMATION HUB
PDF
A Review on Influence of Fluid Viscous Damper on The Behaviour of Multi-store...
PDF
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
PDF
Explainable AI(XAI) using LIME and Disease Detection in Mango Leaf by Transfe...
PDF
BRAIN TUMOUR DETECTION AND CLASSIFICATION
PDF
The Project Manager as an ambassador of the contract. The case of NEC4 ECC co...
PDF
"Enhanced Heat Transfer Performance in Shell and Tube Heat Exchangers: A CFD ...
PDF
Advancements in CFD Analysis of Shell and Tube Heat Exchangers with Nanofluid...
PDF
Breast Cancer Detection using Computer Vision
PDF
Auto-Charging E-Vehicle with its battery Management.
PDF
Analysis of high energy charge particle in the Heliosphere
PDF
A Novel System for Recommending Agricultural Crops Using Machine Learning App...
PDF
Auto-Charging E-Vehicle with its battery Management.
PDF
Analysis of high energy charge particle in the Heliosphere
PDF
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
Enhanced heart disease prediction using SKNDGR ensemble Machine Learning Model
Utilizing Biomedical Waste for Sustainable Brick Manufacturing: A Novel Appro...
Kiona – A Smart Society Automation Project
DESIGN AND DEVELOPMENT OF BATTERY THERMAL MANAGEMENT SYSTEM USING PHASE CHANG...
Invest in Innovation: Empowering Ideas through Blockchain Based Crowdfunding
SPACE WATCH YOUR REAL-TIME SPACE INFORMATION HUB
A Review on Influence of Fluid Viscous Damper on The Behaviour of Multi-store...
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
Explainable AI(XAI) using LIME and Disease Detection in Mango Leaf by Transfe...
BRAIN TUMOUR DETECTION AND CLASSIFICATION
The Project Manager as an ambassador of the contract. The case of NEC4 ECC co...
"Enhanced Heat Transfer Performance in Shell and Tube Heat Exchangers: A CFD ...
Advancements in CFD Analysis of Shell and Tube Heat Exchangers with Nanofluid...
Breast Cancer Detection using Computer Vision
Auto-Charging E-Vehicle with its battery Management.
Analysis of high energy charge particle in the Heliosphere
A Novel System for Recommending Agricultural Crops Using Machine Learning App...
Auto-Charging E-Vehicle with its battery Management.
Analysis of high energy charge particle in the Heliosphere
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...

Recently uploaded (20)

PPTX
Software Engineering and software moduleing
PDF
22EC502-MICROCONTROLLER AND INTERFACING-8051 MICROCONTROLLER.pdf
PPT
INTRODUCTION -Data Warehousing and Mining-M.Tech- VTU.ppt
PDF
distributed database system" (DDBS) is often used to refer to both the distri...
PDF
EXPLORING LEARNING ENGAGEMENT FACTORS INFLUENCING BEHAVIORAL, COGNITIVE, AND ...
PPTX
AUTOMOTIVE ENGINE MANAGEMENT (MECHATRONICS).pptx
PPTX
Current and future trends in Computer Vision.pptx
PDF
August -2025_Top10 Read_Articles_ijait.pdf
PPT
Total quality management ppt for engineering students
PDF
null (2) bgfbg bfgb bfgb fbfg bfbgf b.pdf
PDF
Accra-Kumasi Expressway - Prefeasibility Report Volume 1 of 7.11.2018.pdf
PDF
Exploratory_Data_Analysis_Fundamentals.pdf
PPTX
Fundamentals of safety and accident prevention -final (1).pptx
PDF
Level 2 – IBM Data and AI Fundamentals (1)_v1.1.PDF
PPTX
Graph Data Structures with Types, Traversals, Connectivity, and Real-Life App...
PPTX
Amdahl’s law is explained in the above power point presentations
PDF
Visual Aids for Exploratory Data Analysis.pdf
PDF
Abrasive, erosive and cavitation wear.pdf
PDF
ChapteR012372321DFGDSFGDFGDFSGDFGDFGDFGSDFGDFGFD
PPTX
Fundamentals of Mechanical Engineering.pptx
Software Engineering and software moduleing
22EC502-MICROCONTROLLER AND INTERFACING-8051 MICROCONTROLLER.pdf
INTRODUCTION -Data Warehousing and Mining-M.Tech- VTU.ppt
distributed database system" (DDBS) is often used to refer to both the distri...
EXPLORING LEARNING ENGAGEMENT FACTORS INFLUENCING BEHAVIORAL, COGNITIVE, AND ...
AUTOMOTIVE ENGINE MANAGEMENT (MECHATRONICS).pptx
Current and future trends in Computer Vision.pptx
August -2025_Top10 Read_Articles_ijait.pdf
Total quality management ppt for engineering students
null (2) bgfbg bfgb bfgb fbfg bfbgf b.pdf
Accra-Kumasi Expressway - Prefeasibility Report Volume 1 of 7.11.2018.pdf
Exploratory_Data_Analysis_Fundamentals.pdf
Fundamentals of safety and accident prevention -final (1).pptx
Level 2 – IBM Data and AI Fundamentals (1)_v1.1.PDF
Graph Data Structures with Types, Traversals, Connectivity, and Real-Life App...
Amdahl’s law is explained in the above power point presentations
Visual Aids for Exploratory Data Analysis.pdf
Abrasive, erosive and cavitation wear.pdf
ChapteR012372321DFGDSFGDFGDFSGDFGDFGDFGSDFGDFGFD
Fundamentals of Mechanical Engineering.pptx

IRJET- A Comprehensive Survey and Detailed Study on Various Face Recognition Methods

  • 1. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 05 Issue: 12 | Dec 2018 www.irjet.net p-ISSN: 2395-0072 © 2018, IRJET | Impact Factor value: 7.211 | ISO 9001:2008 Certified Journal | Page 655 A Comprehensive Survey and Detailed Study on various Face Recognition Methods Tanuj Nagaria1, Dr. Dharmendra Chourishi2 1M.Tech (CSE) Research Scholar, NRI Institute of Science and Technology, RGPV, Bhopal, M.P, India 2Head PG, MCA, NRI Institute of Science and Technology, RGPV, Bhopal, M.P, India ---------------------------------------------------------------------***---------------------------------------------------------------------- Abstract – Face Recognition which is still one of the challenging topic in Computer Vision and Image Processing field remains an open problem as the recent advancements has not yet reached high recognition performance in real world environment. With the usage of technologies for Computer Vision and Image Processing, Face Recognition has gained more interest due to it applications and concerns on high security. Human Face can be considered as a key identifier in various fields and Computational models of face recognition can be applied to a wide variety of problems involving security system, Identification of criminals or suspects, image and film processing, and human computer interaction. This field ofcomputervisionand imageprocessing involves recognition of face from image or a video source. Several algorithms and methodologies for face identification have been developed having their own pros and cons. In this paper, we will provide review and survey of some famous major face recognition algorithms, methodologies developed so far. This paper will study various face recognition algorithms developed and is categorized into 5 aspects, first involving introduction and review on existing history, second gives the technical details on methods, approaches developed so far, third gives benefits and applications of face recognition system, fourth one is the limitations involved in face recognition algorithms and fifth is the conclusion. It is our hope that by reviewing existing algorithms, we will see even better method developed to solve this fundamental problem. Key Words: Face Detection, Face Recognition, Eigenfaces, LDA, ICA, LBP, SNoW, Neural Network 1. INTRODUCTION With the rapid increase of computational powers and development in sensing, analysis, equipment and technologies, computers are becoming smarter. Several research projects and commercial products have demonstrated the capability for a computer to interact with human in a natural way by looking at people through cameras. Identification of a human being using biometrics has been proved to be one of the best methodologies yet developed and is one of the key area of research or say interest. Biometric based techniques have emerged as the most promising option for recognizing individuals in recent time. Identification of an individual or entity and object through biometrics provides better results and also various features. Biometricsbasedtechnologiesincludeidentification based on physiological characters (such as face, iris, retina, finger prints, finger geometry, hand veins, hand and palm geometry, voice etc.) and behavioural traits (such as gait, signature and keystroke dynamics). Among all the features of human being, used as identification, Face Recognition seems to have more advantages over otherbiometrics based methods. 
And so it is one of the key research interest in computer vision and image processing. Many applications rely on the performance of digital image processing systems like biometricsauthentication,multimedia,computerhuman interaction, security applications etc. Figure 1.Types of Biometrics Based Identification Human Face Identification (HFI) among a set of images, is an area of research which hasmanychallenges but has received great deal of attention over the last few years due to its many applications in various domains. As Human Face is a complex multidimensional structure and is a rich source of information about human behaviour, it requires good computing techniques for its recognition. Also Human Face displays emotion, indicate feelings, regulate social behaviour, reveal brain function etc. 1.1 ADVANTAGES OF BIOMETRIC BASED RECOGNITION METHODS Biometric based techniques have now emerged as promising option for identifying individuals for authenticating people and granting access to physical and virtual domains over others like passwords, PINs, smart or plastic cards etc. Benefits of Biometric face recognition methods are as below: A. Better Security and No More Time Fraud B. Automated System with Easy Integration C. High Success Rate, User Friendly Systems and Convenient Security Solution BIOMETRICS BASED TECHNOLOGIES PHYSIOLOGICAL CHARACTERS (Face, Iris, Retina, Fingerprints) BEHAVIOURAL TRAITS (Gait, Signature, Keystroke Dynamics)
  • 2. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 05 Issue: 12 | Dec 2018 www.irjet.net p-ISSN: 2395-0072 © 2018, IRJET | Impact Factor value: 7.211 | ISO 9001:2008 Certified Journal | Page 656 D. Beneficial for security and surveillance purposes E. Face recognition can be done passively without any explicit action or participation by user. Face images can be acquired from a distance by a camera. F. Facial features like individual biological traits cannot be misplaced forgotten, stolen or forged. G. Iris and Retina identification require expensive equipment, voice recognition is susceptible to background noises, signatures can be tampered or forged or modified but face recognition is totally non- intrusive and does not carry any such risks. 2. FACE RECOGNITION METHODS There are various methods used in face recognition. Each and every method has different features under different conditions like illumination, expression and pose change. In this section, classification and detailed study on various methodologies developed is provided. 2.1 CLASSIFICATIONOFFACERECOGNITIONMETHODS Face Recognition Methods (Approaches) are classified/divided into following four categories[1][2][3][6]. A. Knowledge Based Methods: This methodcanbedefined as a process which use pre-defined rules to determine a face based on human knowledge. It is a rule based method which involvescapturing the knowledge of face andconvertinginto set of rules. It is simple to guess some easy rules. For example, a face usually has two symmetric eyes, and the eye area is darker than the cheeks. Facial features could be the distance between eyes or the color intensity difference between the eye area and the lower zone. A major disadvantage with these methods is the difficulty in building an appropriate set of rules. If the rules are general then they are false positive. Furthermore, if the rules were too detailed then there false negatives. The solution to overcome these problems is to make hierarchicalknowledge-basedmethods. These are efficient with simple inputs. These rule-based methods uses human knowledge of what makes a typical human face and captures relationships between facial features. They are designed mainly for face localization. Limitation of this is, if a person is wearing glasses, itisalmost impossible to find the face. There are algorithms that detect face-like textures or the skin color in which it is important to select the best color model to detect faces. B. Feature based methods: AlsoknownasFeatureinvariant approaches, these methods aim to find structural features that exist even when the pose, viewpoint, or lighting conditions differ and uses these to locate faces.Itapproaches to find face structure features that are robust to pose and lighting variations like mouth, cheek, eyes, ears, nose, chin, lips etc. Distance between eyes, ears or location of eyes and nose, length of nose is used as to determine the face. Also potential faces are normalized to a fixed size, position and orientation. Then, the face area or region in an image is verified using a back propagation neural network. These are designed mainly for face localization. C. Template Matching Methods: It uses pre-stored face templates to judge if an image is a face. This method compares input image with stored template of faces or features. It defines a face as a function. Each features can be defined independently. 
These methods are mainly used for both face localization and detection and are easy to implement but incomplete for face detection and do not give good results for variations in scale, shape and pose. D. Appearance based methods: The appearance based methodsas the namesuggestuses a set oftrainingimagesfor learning of the models or templates. It shows superior performance over others. In general, they rely on techniques from statistical analysis and machine learning to find the relevant characteristics of face and non- face images. The learned characteristicsare in the form of distributionmodels or discriminant functions that are consequently usedforface detection [2]. Algorithms used in this methods are PCA (Eigenface), Distribution based methods, Neural Networks, Support Vector Machines (SVM), Hidden Markov model etc. The Categorization of Methods for Face Detection in a Single Image is mentioned in below table. Table-1 Face Detection Approach and Representative Works Approach Representative Works Knowledge Based Multiresolution rulebasedmethod[20]. Feature invariant Facial Features Grouping of Edges[21] Textures Space Gray-Level Dependence Matrix (SGLD) of face pattern[22] Skin Color Mixture of Gaussian[23] Multiple Features Integration of skin color,size,shape[24] Template Matching Predefined Face Shape Template[25] Deformable Templates Active Shape Model[26] Appearance Based Model Eigenface Eigenvector Decomposition and Clustering[6] Distribution Based Gaussian Distribution and Multilayer Perceptron[27] Neural Network Ensemble of Neural Networks and arbitration schemes[28] SVM SVM with polynomial kernel[29] Naïve Bayes Classifier Joint statistics of local appearance and position[30] Hidden Markov Model High order statistics with HMM[31] Information - Theoretical Kullback relative information[32]
  • 3. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 05 Issue: 12 | Dec 2018 www.irjet.net p-ISSN: 2395-0072 © 2018, IRJET | Impact Factor value: 7.211 | ISO 9001:2008 Certified Journal | Page 657 2.2 CLASSIFICATION BASED ON APPROACH TO DETECT THE FACE Recognition of face can be performed both in still image and in video based. In this study, we are performing face recognition in still image. Face recognition for still images can be classified in 3 main approaches as mentioned below: Holistic based Approach: In this approach, the whole face region is taken into consideration as input data into face detection system. This method has proved to be an excellent technique for recognizing face in terms of recognition rate. Types of holistic method are a. Principal Component Analysis (PCA) b. Single Value Decomposition (SVD) c. Artificial Neural Network (ANN) Feature based Approach: In this approach, local features on face such as eyes, nose, ears, lips, nose length, cheek, chin their position, location, length etc. are taken into consideration and are used as input data for structural classifier. Hidden Markov Model method belongs to this category. Hybrid based Approach: This originates as a combinationof bothholisticand feature based approach. This idea comes from how human vision system perceives both local feature and whole face. Modular Eigenfaces, hybrid local feature, shape normalized, component based methodsare examplesofhybridapproach. 2.3 DETAILED STUDY OF METHODS AND ALGORITHMS USED IN FACE RECOGNITION In this section, a detailed study on various human face recognition methodologies developed is provided. Figure 2. Types of Face Recognition Methods Appearance Based Methods: Figure 3. Types of Appearance based Face Recognition a. Eigenface Based Method This uses Principal Component Analysis (PCA) scheme. A detailed description of PCA can be found in [1][2][4][6]. Principal Component Analysis (PCA)isa powerful technique for extracting a structure from potentially high-dimensional data sets, which corresponds to extracting the eigenvectors that are associated with the largest eigenvalues from the input distribution. This eigenvector analysis has already been widely used in face processing. Step 1: Prepare the data The faces constituting the training set (Γi) should be prepared for processing. Step 2: Subtract the mean Average matrix Ψ has to be calculated, then subtracted from the original faces (Γi) and the result stored in the Variable Φi: Step 3: Calculate the covariance matrix. In step three, the covariance matrix C is calculated according to Step 4: Calculate the eigenvectors and eigenvalues of the co- variance matrix. The eigenvectors (Eigenfaces) and the corresponding eigenvalues should be calculated. The Eigen- faces must be normalized so that they are unit vectors, i.e. length 1. The description of the exact algorithm for determination of eigenvalues and eigenvectors is eliminating, as it belongs to the standard arsenal of most math programming libraries. Step 5: Select the Principal Components From M eigenvectors(Eigenfaces)Ʋi,onlyM0 should be chosen, which have the highest eigenvalues. The higher
b. Distribution based Methods – the LDA Algorithm [8]

Linear Discriminant Analysis (LDA), also called Fisher's Discriminant Analysis or Fisherface analysis, is another dimensionality reduction technique and an example of a class specific method. The goal of LDA is to find an efficient way to represent the face vector space: it finds the vectors in the underlying space that best discriminate among classes. LDA maximizes the between-class scatter measure while minimizing the within-class scatter measure, which makes it more stable for classification [8].

Lih-Heng Chan et al. [9] proposed a facial biometric framework based on two subspace methods, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). First, PCA is used for dimension reduction, projecting the original face images into lower-dimensional face representations. Second, LDA is applied to provide better discrimination. Both PCA and LDA features were evaluated with a Euclidean distance measure, which is conveniently used as a benchmark; the LDA-based method outperformed PCA for both face identification and verification. Fisherfaces are among the most successful and widely used appearance based methods for face recognition. Fisher introduced the linear discriminant in the 1930s, and it was later adapted to face recognition with good results. LDA tries to differentiate between classes rather than merely represent the data; it therefore seeks feature vectors suited to class discrimination. We define the two scatter matrices below:

S_W = Σ_{j=1..R} Σ_{i=1..Mj} (x_i^j − μj)(x_i^j − μj)^T
S_B = Σ_{j=1..R} Mj (μj − μ)(μj − μ)^T

The first is called the within-class scatter matrix and the second the between-class scatter matrix. Here j denotes the class, i the image number and x_i^j the i-th image of class j; μj is the mean of class j, μ is the mean over all classes, Mj is the number of images in class j and R is the number of classes. The algorithm aims at maximizing the between-class scatter while minimizing the within-class scatter.

A limitation of LDA is that the within-class scatter matrix is always singular, because the number of pixels in each image is larger than the number of training images; this can increase the error rate when there are variations in pose and lighting within the same class. To overcome this problem, the Fisherface technique exploits within-class information so as to minimize the variation within each class, allowing variations in the same images, such as lighting changes, to be handled [9][10].
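Below is a short sketch of the scatter matrices S_W and S_B defined above and of the resulting projection, assuming the data have already been reduced with PCA (as in [9]) so that S_W is invertible. `X` is an (n_samples, d) array of PCA coefficients and `y` holds integer class labels; both names are ours for illustration, and at most R − 1 discriminant directions are meaningful.

```python
import numpy as np

def lda_projection(X, y, num_components):
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))      # within-class scatter S_W
    Sb = np.zeros((d, d))      # between-class scatter S_B
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        Sw += (Xc - mu_c).T @ (Xc - mu_c)
        diff = (mu_c - mu).reshape(-1, 1)
        Sb += Xc.shape[0] * (diff @ diff.T)

    # Maximize between-class scatter relative to within-class scatter:
    # take the eigenvectors of Sw^{-1} Sb with the largest eigenvalues.
    eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1][:num_components]
    return eigvecs[:, order].real
```

Faces are then projected with `X @ W`, and classification again reduces to a nearest-neighbor comparison in the discriminant subspace.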
c. Independent Component Analysis (ICA)

ICA [10] can be considered a generalization of PCA. PCA treats image elements as random variables and minimizes only second-order statistics, whereas ICA, proposed in [11, 12], minimizes both second-order and higher-order dependencies in the input data and tries to find a basis on which the projected data are statistically independent. Here too, PCA is used to reduce dimensionality prior to performing ICA.

Two different architectures are used when applying ICA to face recognition:

1. ICA Architecture 1: According to [11], images are treated as random variables and pixels as trials, so the concern is the independence of images (or functions of images). This architecture tries to find a set of statistically independent basis images.

2. ICA Architecture 2: Pixels are treated as random variables and images as trials, so the concern is the independence of pixels (or functions of pixels). In other words, ICA Architecture 2 uses ICA to obtain a representation in which the coefficients used for coding images are statistically independent.

d. Local Binary Patterns (LBP)

Local Binary Patterns (LBP) were first presented by Ojala et al. in [13] for texture description. The basic method labels each pixel with a decimal value, called the LBP code, that describes the local structure around the pixel. As illustrated in Figure 4, the value of the center pixel is subtracted from the values of its 8 neighbor pixels; if the result is negative the binary value is 0, otherwise it is 1. The calculation starts from the pixel at the top-left corner of the 8-neighborhood and continues in clockwise direction. After visiting all neighbors, an eight digit binary value is produced; when this binary value is converted to decimal, the LBP code of the pixel is obtained and placed at the pixel's coordinates in the output matrix [14].

Figure 4. Basic LBP Operator Display

A drawback of the basic LBP, which uses an 8-neighborhood (3x3), is that it cannot capture large-scale structures. To take textures of different scales into account, the method was generalized: in [15] Ojala et al. revised it to allow any radius and any number of sampling points, naming the new method Extended LBP (ELBP). Histograms of LBP codes are used for face recognition, since they capture the distribution of local micro patterns.
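The following is a direct, unoptimized sketch of the basic 3x3 LBP operator described above and of the per-region histograms used for recognition. `image` is assumed to be a 2-D grayscale NumPy array; the function names and the 7x7 grid default are ours.

```python
import numpy as np

# 8-neighborhood visited clockwise starting at the top-left pixel
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_image(image):
    h, w = image.shape
    codes = np.zeros((h, w), dtype=np.uint8)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            center = image[r, c]
            code = 0
            for bit, (dr, dc) in enumerate(OFFSETS):
                # bit is 1 when neighbor >= center, 0 otherwise
                if image[r + dr, c + dc] >= center:
                    code |= 1 << (7 - bit)
            codes[r, c] = code
    return codes

def region_histograms(codes, grid=(7, 7)):
    """Split the LBP image into grid cells and concatenate 256-bin histograms."""
    h, w = codes.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = codes[i * h // grid[0]:(i + 1) * h // grid[0],
                         j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            hists.append(hist / max(hist.sum(), 1))
    return np.concatenate(hists)
```

Two faces are then compared by measuring the distance between their concatenated regional histograms, for example with the weighted Chi-square statistic discussed next.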
Figure 5 below shows different examples of the ELBP operator; 'P' represents the number of sampling points (neighbors) and 'R' the radius of the circle on which the neighbors are located. Because the face image is too big for a single LBP calculation, dividing the image into small regions is proposed in [16]; some parts of the face (such as the eyes and mouth) contain more information for face recognition than others.

Figure 5. ELBP Operator Examples

Yang et al. [16] propose to train and allocate different weights to the face regions according to how much information they carry, and then to concatenate the regional histograms end to end to build a global description of the face. This combines local pattern information with the spatial layout of the whole image. To decide whether two face images belong to the same person, their histograms are compared using the Chi-square similarity measure, which can be defined as

χ²_w(S, M) = Σ_{i,j} w_j (S_{i,j} − M_{i,j})² / (S_{i,j} + M_{i,j})

where i = 0, 1, ..., n−1 indexes the histogram bins, j = 0, 1, ..., m−1 indexes the regions, w_j is the weight for region j, S is the target face image histogram and M is the query face image histogram [16].

e. Support Vector Machine (SVM)

While most methods for training a classifier (e.g. Bayesian, neural networks, RBF) are based on minimizing the training error, i.e. the empirical risk, SVMs operate on a different induction principle, called structural risk minimization, which aims to minimize an upper bound on the expected generalization error. An SVM classifier is a linear classifier in which the separating hyperplane is chosen to minimize the expected classification error on unseen test patterns. This optimal hyperplane is defined by a weighted combination of a small subset of the training vectors, called support vectors, hence the name Support Vector Machine [17]. Given a set of points belonging to two classes, an SVM finds the hyperplane that separates the largest possible fraction of points of the same class on the same side while maximizing the distance of either class from the hyperplane. For face recognition, PCA is first used to extract features of the face images, and discrimination functions between each pair of images are then learned by SVMs. SVMs, as defined in [17], can also be viewed as a new paradigm for training neural network, radial basis function (RBF) or polynomial classifiers.
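In the spirit of [17] and [29], the sketch below trains a polynomial-kernel SVM on PCA-reduced face vectors. It assumes scikit-learn is available; `train_faces`, `train_labels` and the chosen hyperparameters (100 components, degree 3) are hypothetical and would need tuning on real data.

```python
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def build_face_svm(train_faces, train_labels, n_components=100):
    """train_faces: (n_samples, n_pixels) array of flattened face images."""
    model = make_pipeline(
        PCA(n_components=n_components, whiten=True),  # PCA feature extraction
        SVC(kernel="poly", degree=3, C=1.0),          # polynomial-kernel SVM
    )
    model.fit(train_faces, train_labels)
    return model

# Usage (hypothetical data):
# model = build_face_svm(train_faces, train_labels)
# predictions = model.predict(test_faces)
```

The pipeline form keeps the PCA projection learned on the training set and applies the same projection to probe faces at prediction time.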
f. Sparse Networks of Winnows (SNoW)

Yang et al. proposed a method that uses the SNoW learning architecture [18] to detect faces with different features and expressions, in different poses and under different lighting conditions. SNoW is a sparse network of linear functions that uses the Winnow update rule [18]. It is mainly tailored for learning in domains in which the potential number of features taking part in decisions is very large but may not be known a priori.

g. Neural Network Based Methods

Neural networks have been applied successfully to many pattern recognition problems, such as optical character recognition, object recognition and autonomous robot driving. Neural network architectures proposed for face recognition treat it as a two-class pattern recognition problem. The advantage of using neural networks for face recognition is the feasibility of training a system to capture the complex class-conditional density of face patterns. The drawback of neural network architectures is that they have to be extensively tuned to obtain exceptional performance.

Table-2: Types of Neural Network Architectures
1. Hierarchical Neural Network
2. Auto-associative Neural Network
3. Probabilistic Decision Based Neural Network
4. Multilayer Neural Network

h. Naïve Bayes Classifier

This appearance based approach was developed by Schneiderman and Kanade, who use a Naïve Bayes classifier to estimate the joint probability of local appearance and position of face patterns (subregions of the face) at multiple resolutions. Some local patterns of an object are more distinctive than others; for example, the intensity patterns around the eyes are much more distinctive than those around the cheeks. The Naïve Bayes classifier provides better estimation of the conditional density functions of these subregions and also gives a functional form of the posterior probability that captures the joint statistics of local appearance and position on the object.
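The actual Schneiderman-Kanade detector estimates joint statistics of quantized local appearance and position at multiple resolutions; the toy sketch below only conveys the flavour of the idea by training a Gaussian naive Bayes classifier on per-subregion intensity features. It assumes scikit-learn; `windows` (equally sized grayscale patches) and `labels` (face / non-face) are hypothetical.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def subregion_features(window, patch=4):
    """Split a window into patch x patch blocks and use block means, keeping a
    crude notion of 'local appearance at a position' (illustration only)."""
    h, w = window.shape
    feats = [window[i:i + patch, j:j + patch].mean()
             for i in range(0, h, patch)
             for j in range(0, w, patch)]
    return np.array(feats)

def train_face_nb(windows, labels):
    X = np.vstack([subregion_features(w) for w in windows])
    clf = GaussianNB()          # naive independence assumption across subregions
    clf.fit(X, labels)
    return clf
```

The "naive" independence assumption across subregions is what makes the joint model tractable, at the cost of ignoring correlations between neighbouring patches.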
i. Hidden Markov Model

The underlying assumption of the Hidden Markov Model (HMM) is that patterns can be characterized as a parametric random process whose parameters can be estimated in a precise, well defined manner. In developing an HMM for a pattern recognition problem, a number of hidden states must first be decided to form the model. One can then train the HMM to learn the transition probabilities between states from examples, where each example is represented as a sequence of observations. The goal of training an HMM is to maximize the probability of observing the training data by adjusting the parameters of the HMM with the standard Viterbi segmentation method and the Baum-Welch algorithm. After the HMM has been trained, the output probability of an observation determines the class to which it belongs [19]. HMM-based methods usually treat a face pattern as a sequence of observation vectors, where each vector is a strip of pixels. HMMs have been applied to both face recognition and face localization [19].

j. Information-Theoretical Approach

The spatial properties of the face pattern can be modeled in different ways. The contextual constraint, among others, is a powerful one and has often been applied to texture segmentation. The contextual constraints in a face pattern are usually specified over a small neighborhood of pixels, and Markov Random Field (MRF) theory provides a convenient and consistent way to model such context dependent entities as image pixels and correlated features. The face and non-face distributions can be estimated using histograms: a probability function p(x) is defined for the event that a template is a face and a function q(x) for the event that a template is a non-face. A training database containing faces of individuals is assembled, and from the training sets the most informative pixels (MIP) are selected so as to maximize the Kullback relative information between p(x) and q(x) (i.e. to give the maximum class separation). The MIP are then used to obtain linear features for classification and representation. Finally, the distance from face space (DFFS) is calculated; if the distance of a window to the face subspace is lower than its distance to the non-face subspace, a face is assumed to exist within the window as it is passed over the input image [19].
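As a rough sketch of the "most informative pixels" idea above, the code below estimates a per-pixel face histogram p and non-face histogram q and ranks pixels by the Kullback-Leibler divergence between the two. NumPy only; `face_windows` and `nonface_windows` are hypothetical stacks of equally sized grayscale patches, and the bin count is an arbitrary choice.

```python
import numpy as np

def pixel_histograms(windows, bins=32):
    """Per-pixel intensity histograms, shape (n_pixels, bins)."""
    n, h, w = windows.shape
    flat = windows.reshape(n, h * w)
    hists = np.stack([np.histogram(flat[:, i], bins=bins, range=(0, 256))[0]
                      for i in range(h * w)]).astype(float)
    hists += 1e-6                      # avoid zero probabilities
    return hists / hists.sum(axis=1, keepdims=True)

def most_informative_pixels(face_windows, nonface_windows, k=100):
    p = pixel_histograms(face_windows)
    q = pixel_histograms(nonface_windows)
    kl = np.sum(p * np.log(p / q), axis=1)      # Kullback divergence per pixel
    return np.argsort(kl)[::-1][:k]             # indices of the top-k pixels
```

The selected pixel indices would then feed the linear feature extraction and DFFS comparison described in [19]; that later stage is not shown here.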
3. APPLICATIONS OF FACE RECOGNITION SYSTEMS

• Security and Surveillance: access control for buildings, airports/seaports, computer or network security; CCTV surveillance to look for suspects or criminals.
• General Identity Verification: driving licenses, electoral registration, national IDs (UID AADHAR cards in India), passports, employee IDs, bank account IDs, etc.
• Criminal Justice Systems: mug-shot or booking systems, post-event analysis, forensics.
• Access Control: face verification, matching a face against a single enrolled exemplar.
• Image Database Investigations: searching image databases of licensed drivers, benefit recipients, missing children, immigrants and police bookings.
• Video Indexing: in multimedia environments with adaptive human-computer interfaces.

4. LIMITATIONS OF FACE RECOGNITION ALGORITHMS

Various challenges and limitations affect the performance of face detection and face recognition algorithms:

1. Facial aging, caused by hormonal and biological changes.
2. Pose variation, resulting from the camera-face pose, due to which facial features (eyes, nose) may become occluded.
3. Image orientation and imaging conditions: face images vary for different rotations about the camera's optical axis, and conditions such as lighting and camera characteristics (sensor, flash, lenses) affect the appearance of the face.
4. Occlusion: in an image of a group of people, some faces may be partially or fully occluded by other people or objects.
5. Presence or absence of structural components: facial features such as beards, moustaches, glasses, sunglasses and nose rings cause a great deal of variability in the shape, size, color and texture of the face.

5. CONCLUSION

This paper attempts to provide a comprehensive study and to survey the important and influential face recognition algorithms in a simple and understandable manner, covering a significant number of papers and the recent developments in the field. The present study suggests that face recognition can be enhanced using hybrid methods for better performance. A list of references is provided for a more detailed understanding of the approaches described. Where appropriate, we have reported on the relative performance of methods; we are also aware that there is a lack of uniformity in how methods are evaluated, so it would be imprudent to declare explicitly which methods have the lowest error rates. The algorithms have also been categorized and their pros and cons provided. We apologize to researchers whose important contributions may have been overlooked.

ACKNOWLEDGEMENT

The author would like to thank the professors and management of NRI Institute of Science and Technology, Bhopal (Madhya Pradesh) for providing their valuable guidance and excellent facilities for carrying out the research work.

REFERENCES

[1] B. S. Khade et al., "Face Recognition Techniques: A Survey," International Journal of Computer Science and Mobile Computing, vol. 5, issue 11, Nov. 2016, pp. 65-72.
[2] M.-H. Yang, D. J. Kriegman, and N. Ahuja, "Detecting Faces in Images: A Survey," IEEE Trans. on PAMI, vol. 24, no. 1, pp. 34-58, 2002.
[3] C. Zhang and Z. Zhang, "A Survey of Recent Advances in Face Detection," Technical Report MSR-TR-2010-66, June 2010.
[4] H. Kadiya, "Comparative Study on Face Recognition Using HGPP, PCA, LDA, ICA and SVM," Global Journal of Computer Science and Technology Graphics & Vision, vol. 12, issue 15, version 1.0, 2012, Global Journals Inc. (USA), Online ISSN: 0975-4172, Print ISSN: 0975-4350.
[5] A. Wagner, J. Wright, A. Ganesh, Z. Zhou, H. Mobahi, and Y. Ma, "Towards a Practical Face Recognition System: Robust Alignment and Illumination by Sparse Representation," IEEE.
[6] M. A. Turk and A. P. Pentland, "Face Recognition Using Eigenfaces," Proc. IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR '91), 1991.
[7] D. Dubey and G. S. Tomar, "Deep Perusal of Human Face Recognition Algorithms from Facial Snapshots," International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 9, no. 9, 2016, pp. 103-112.
[8] J. Yang, H. Yu, and W. Kunz, "An Efficient LDA Algorithm for Face Recognition," School of Computer Science, Interactive Systems Laboratories, Carnegie Mellon University, Pittsburgh, PA 15213.
[9] L.-H. Chan, S.-H. Salleh, and C.-M. Ting, "Face Biometrics Based on Principal Component Analysis and Linear Discriminant Analysis," J. Computer Sci., vol. 6, no. 7, pp. 693-699, 2010.
[10] M. Sharkas and M. Abou Elenien, "Eigenfaces vs. Fisherfaces vs. ICA for Face Recognition; A Comparative Study," ICSP 2008 Proceedings, IEEE, 2008.
[11] M. S. Bartlett, J. R. Movellan, and T. J. Sejnowski, "Face Recognition by Independent Component Analysis," IEEE Trans. Neural Networks, vol. 13, pp. 1450-1464, 2002.
[12] B. Draper, K. Baek, M. S. Bartlett, and J. R. Beveridge, "Recognizing Faces with PCA and ICA," Computer Vision and Image Understanding (Special Issue on Face Recognition), vol. 91, 2003.
[13] T. Ojala, M. Pietikäinen, and D. Harwood, "Performance Evaluation of Texture Measures with Classification Based on Kullback Discrimination of Distributions," Proc. 12th IAPR Int'l Conf. Pattern Recognition (ICPR 1994), vol. 1, pp. 582-585, 1994.
[14] D. Huang, C. Shan, M. Ardabilian, Y. Wang, and L. Chen, "Local Binary Patterns and Its Application to Facial Image Analysis: A Survey," IEEE Transactions on Systems, Man, and Cybernetics, Part C, vol. 41, pp. 765-781, 2011.
[15] T. Ojala, M. Pietikäinen, and T. Maenpaa, "Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, July 2002.
[16] H. Yang and Y. Wang, "A LBP-Based Face Recognition Method with Hamming Distance Constraint," Proc. Int'l Conf. Image and Graphics, Aug. 2007, pp. 645-649.
[17] M.-H. Yang, D. J. Kriegman, and N. Ahuja, "Detecting Faces in Images: A Survey," IEEE Trans. on PAMI, vol. 24, no. 1, 2002, p. 13.
[18] A. Carleson, C. Cumby, J. Rosen, and D. Roth, "The SNoW Learning Architecture," Technical Report UIUCDCS-R-99-2101, Univ. of Illinois at Urbana-Champaign, Computer Science Dept., 1999.
[19] M.-H. Yang, D. J. Kriegman, and N. Ahuja, "Detecting Faces in Images: A Survey," IEEE Trans. on PAMI, vol. 24, no. 1, pp. 34-58, 2002, pp. 14-15.
[20] G. Yang and T. S. Huang, "Human Face Detection in Complex Background," Pattern Recognition, vol. 27, no. 1, pp. 53-63, 1994.
[21] T. K. Leung, M. C. Burl, and P. Perona, "Finding Faces in Cluttered Scenes Using Random Labeled Graph Matching," Proc. Fifth IEEE Int'l Conf. Computer Vision, pp. 637-644, 1995.
[22] Y. Dai and Y. Nakano, "Face-Texture Model Based on SGLD and Its Application in Face Detection in a Color Scene," Pattern Recognition, vol. 29, no. 6, pp. 1007-1017, 1996.
[23] S. McKenna, S. Gong, and Y. Raja, "Modelling Facial Colour and Identity with Gaussian Mixtures," Pattern Recognition, vol. 31, no. 12, pp. 1883-1892, 1998.
[24] R. Kjeldsen and J. Kender, "Finding Skin in Color Images," Proc. Second Int'l Conf. Automatic Face and Gesture Recognition, pp. 312-317, 1996.
[25] I. Craw, D. Tock, and A. Bennett, "Finding Face Features," Proc. Second European Conf. Computer Vision, pp. 92-96, 1992.
[26] A. Lanitis, C. J. Taylor, and T. F. Cootes, "An Automatic Face Identification System Using Flexible Appearance Models," Image and Vision Computing, vol. 13, no. 5, pp. 393-401, 1995.
[27] K.-K. Sung and T. Poggio, "Example-Based Learning for View-Based Human Face Detection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 39-51, Jan. 1998.
[28] H. Rowley, S. Baluja, and T. Kanade, "Neural Network-Based Face Detection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 1, pp. 23-38, Jan. 1998.
[29] E. Osuna, R. Freund, and F. Girosi, "Training Support Vector Machines: An Application to Face Detection," Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 130-136, 1997.
[30] H. Schneiderman and T. Kanade, "Probabilistic Modeling of Local Appearance and Spatial Relationships for Object Recognition," Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 45-51, 1998.
[31] A. Rajagopalan, K. Kumar, J. Karlekar, R. Manivasakan, M. Patil, U. Desai, P. Poonacha, and S. Chaudhuri, "Finding Faces in Photographs," Proc. Sixth IEEE Int'l Conf. Computer Vision, pp. 640-645, 1998.
[32] M. S. Lew, "Information Theoretic View-Based and Modular Face Detection," Proc. Second Int'l Conf. Automatic Face and Gesture Recognition, pp. 198-203, 1996.