FACE RECOGNITION USING LAPLACIANFACES

(SYNOPSIS)
ABSTRACT
We propose an appearance-based face recognition method called the
Laplacianface approach. By using Locality Preserving Projections (LPP),
the face images are mapped into a face subspace for analysis.
Different from Principal Component Analysis (PCA) and Linear
Discriminant Analysis (LDA), which effectively see only the Euclidean
structure of face space, LPP finds an embedding that preserves local
information, and obtains a face subspace that best detects the
essential face manifold structure. The Laplacianfaces are the optimal
linear approximations to the eigenfunctions of the Laplace Beltrami
operator on the face manifold. In this way, the unwanted variations
resulting from changes in lighting, facial expression, and pose may be
eliminated or reduced.
Theoretical analysis shows that PCA, LDA, and LPP can be obtained
from different graph models. We compare the proposed Laplacianface
approach with Eigenface and Fisherface methods on three different
face data sets. Experimental results suggest that the proposed
Laplacianface approach provides a better representation and achieves
lower error rates in face recognition.

SOFTWARE REQUIREMENTS
Language         : J2SDK 1.4
Operating System : Windows 98

HARDWARE REQUIREMENTS
Processor        : Intel Pentium III
RAM              : 128 MB
Hard Disk        : 20 GB
Processor Speed  : 300 MHz (minimum)

EXISTING SYSTEM
Facial recognition systems are computer-based security systems that
are able to automatically detect and identify human faces. These
systems depend on a recognition algorithm. Principal Component
Analysis (PCA) is a statistical method under the broad title of factor
analysis. The purpose of PCA is to reduce the large dimensionality of
the data space (observed variables) to the smaller intrinsic
dimensionality of the feature space (independent variables), which is
needed to describe the data economically. This is the case when there
is a strong correlation between the observed variables. The jobs PCA
can do include prediction, redundancy removal, feature extraction, and
data compression. Because PCA is a well-known, powerful technique for
the linear domain, it suits applications with linear models, such as
signal processing, image processing, system and control theory, and
communications.

The main idea of using PCA for face recognition is to express the large
1-D vector of pixels constructed from a 2-D face image in terms of the
compact principal components of the feature space. This is called
eigenspace projection. The eigenspace is calculated by identifying the
eigenvectors of the covariance matrix derived from a set of face images
(vectors).
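
As a minimal NumPy illustration of eigenspace projection (not the
project's J2SDK 1.4 code), the sketch below computes the eigenvectors of
the covariance matrix; the function names are ours, and the small Gram
matrix is used as the standard shortcut when the number of images n is
much smaller than the number of pixels m:

```python
import numpy as np

def eigenspace(X, num_components):
    """Eigenspace (eigenface) basis from a data matrix.

    X: (n, m) array with one flattened face image per row.
    Returns the mean face and the top num_components eigenvectors
    of the covariance matrix as columns of U.
    """
    mean = X.mean(axis=0)
    A = X - mean                                  # center the data
    # Eigenface trick: for n << m, eigendecompose the small (n, n)
    # Gram matrix A A^T instead of the (m, m) covariance matrix.
    eigvals, V = np.linalg.eigh(A @ A.T)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:num_components]
    U = A.T @ V[:, order]                         # back to pixel space
    U /= np.linalg.norm(U, axis=0)                # unit-length eigenfaces
    return mean, U

def project(x, mean, U):
    """Eigenspace projection of one flattened face image."""
    return U.T @ (x - mean)
```

Projecting every training image with project yields the compact
principal-component representation described above.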
However, most such algorithms consider only global data patterns during
recognition, which does not yield an accurate recognition system.
• Less accurate
• Does not deal with manifold structure
• Does not deal with biometric characteristics

PROPOSED SYSTEM
PCA and LDA aim to preserve the global structure. However, in many
real-world applications, the local structure is more important. In this
section, we describe Locality Preserving Projection (LPP) [9], a new
algorithm for learning a locality preserving subspace. The complete
derivation and theoretical justifications of LPP can be found in [9].
LPP seeks to preserve the intrinsic geometry and local structure of the
data. The objective function of LPP is to minimize

    Σ_ij (y_i − y_j)² S_ij,

where y_i = w^T x_i is the one-dimensional projection of the face image
x_i and S_ij measures how close x_i and x_j are (see (34) below).
LPP is a general method for manifold learning. It is obtained by finding
the optimal linear approximations to the eigenfunctions of the Laplace
Beltrami operator on the manifold [9]. Therefore, though it is still a
linear technique, it seems to recover important aspects of the intrinsic
nonlinear manifold structure by preserving local structure. Based on
LPP, we describe our Laplacianfaces method for face representation in
a locality preserving subspace. In the face analysis and recognition
problem, one is confronted with the difficulty that the matrix XDX^T is
sometimes singular. This stems from the fact that the number of images
in the training set (n) is sometimes much smaller than the number of
pixels in each image (m). In such a case, the rank of XDX^T is at most
n, while XDX^T is an m × m matrix, which implies that XDX^T is
singular. To overcome this complication, we first project the image set
to a PCA subspace so that the resulting matrix XDX^T is nonsingular.
Another consideration for using PCA as preprocessing is noise
reduction. This method, which we call Laplacianfaces, can learn an
optimal subspace for face representation and recognition. The
algorithmic procedure of Laplacianfaces is formally stated below:
1. PCA projection. We project the image set {x_i} into the PCA subspace
by throwing away the smallest principal components. In our experiments,
we kept 98 percent of the information in the sense of reconstruction
error. For the sake of simplicity, we still use x to denote the images
in the PCA subspace in the following steps. We denote by W_PCA the
transformation matrix of PCA.
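
As a hedged illustration of this step, the sketch below (assuming
NumPy; pca_keep_energy is our own name) selects the smallest number of
components whose cumulative variance reaches the 98 percent
reconstruction energy mentioned above:

```python
import numpy as np

def pca_keep_energy(X, energy=0.98):
    """PCA transformation matrix keeping `energy` of the variance.

    Mirrors step 1: keep the smallest number of principal components
    whose cumulative variance reaches 98 percent.
    """
    mean = X.mean(axis=0)
    A = X - mean
    # Squared singular values are proportional to per-component variance.
    _, s, Vt = np.linalg.svd(A, full_matrices=False)
    ratio = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(ratio, energy)) + 1   # smallest k reaching 98%
    W_pca = Vt[:k].T                              # the (m, k) matrix W_PCA
    return mean, W_pca
```

An image x is then represented in the PCA subspace as
W_pca.T @ (x - mean).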

2. Constructing the nearest-neighbor graph. Let G denote a graph with
n nodes. The ith node corresponds to the face image x_i. We put an
edge between nodes i and j if x_i and x_j are “close,” i.e., x_j is
among the k nearest neighbors of x_i, or x_i is among the k nearest
neighbors of x_j. The constructed nearest-neighbor graph is an
approximation of the local manifold structure. Note that here we do not
use the ε-neighborhood to construct the graph, simply because it is
often difficult to choose an optimal ε in real-world applications,
while the k-nearest-neighbor graph can be constructed more stably. The
disadvantage is that the k-nearest-neighbor search increases the
computational complexity of our algorithm. When computational
complexity is a major concern, one can switch to the ε-neighborhood.
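
A minimal sketch of this graph construction (assuming NumPy; k is the
free neighborhood parameter, set here to an arbitrary 5):

```python
import numpy as np

def knn_graph(X, k=5):
    """Boolean adjacency matrix of the k-nearest-neighbor graph.

    Nodes i and j are connected if x_i is among the k nearest
    neighbors of x_j or x_j is among the k nearest neighbors of x_i.
    """
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)                     # pairwise squared
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T  # Euclidean distances
    np.fill_diagonal(d2, np.inf)                  # exclude self-edges
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        adj[i, np.argsort(d2[i])[:k]] = True      # k nearest neighbors of i
    return adj | adj.T                            # symmetrize ("or" rule)
```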

3. Choosing the weights. If nodes i and j are connected, put

    S_ij = exp(−‖x_i − x_j‖² / t),    (34)

where t is a suitable constant; otherwise, put S_ij = 0.

4. Eigenmap. Solve the generalized eigenvector problem

    X L X^T w = λ X D X^T w,    (35)

where D is a diagonal matrix whose entries are the column (or row,
since S is symmetric) sums of S, D_ii = Σ_j S_ji, and L = D − S is the
Laplacian matrix. The ith column of matrix X is x_i. Let
w_0, w_1, ..., w_{k−1} be the solutions of (35), ordered according to
their eigenvalues, 0 ≤ λ_0 ≤ λ_1 ≤ ... ≤ λ_{k−1}. These eigenvalues are
equal to or greater than zero because the matrices X L X^T and X D X^T
are both symmetric and positive semidefinite. Thus, the embedding is as
follows:

    x → y = W^T x,    (36)
    W = W_PCA W_LPP,    (37)
    W_LPP = [w_0, w_1, ..., w_{k−1}],    (38)

where y is a k-dimensional vector and W is the transformation matrix.
This linear mapping best preserves the manifold’s estimated intrinsic
geometry in a linear sense. The column vectors of W are the so-called
Laplacianfaces. This principle is implemented with an unsupervised
learning concept using training and test data.
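
The weights and the eigenmap can be sketched as follows, assuming
NumPy/SciPy and the adjacency matrix from the previous sketch; t and
dim are free parameters of this illustration, and scipy.linalg.eigh
solves the generalized eigenproblem (35):

```python
import numpy as np
from scipy.linalg import eigh

def laplacianfaces(X_pca, adj, t=1.0, dim=10):
    """Steps 3-4: heat-kernel weights (34) and the eigenmap (35)-(38).

    X_pca: (n, m') PCA-projected images, one per row, so the X of the
    formulas (whose columns are the x_i) is X_pca.T.
    """
    # S_ij = exp(-||x_i - x_j||^2 / t) on connected pairs, else 0   (34)
    sq = np.sum(X_pca**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X_pca @ X_pca.T
    S = np.where(adj, np.exp(-d2 / t), 0.0)
    D = np.diag(S.sum(axis=0))        # D_ii = sum_j S_ji
    L = D - S                         # Laplacian matrix
    X = X_pca.T                       # columns are the x_i
    # Generalized eigenproblem  X L X^T w = lambda X D X^T w        (35)
    vals, W = eigh(X @ L @ X.T, X @ D @ X.T)   # ascending eigenvalues
    return W[:, :dim]                 # W_LPP = [w_0, ..., w_{dim-1}]  (38)
```

The full embedding (36)-(37) is then y = (W_pca @ W_lpp).T @ (x - mean),
combining the PCA and LPP transformation matrices.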
Modules
1. Preprocessing
In preprocessing, take a single gray image in 10 different directions
and measure the points in 28 dimensions of each gray image.
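
A hedged sketch of the data-loading side of preprocessing, flattening
each gray image into the 1-D pixel vector used by the later steps; the
directory layout and file format are hypothetical, and the
project-specific "10 directions" and "28 dimensions" measurements are
not reproduced here:

```python
import numpy as np
from pathlib import Path
from PIL import Image

def load_gray_images(train_dir):
    """Stack flattened grayscale images into the data matrix X.

    train_dir and the *.png layout are hypothetical placeholders for
    the project's actual training data.
    """
    vectors = []
    for path in sorted(Path(train_dir).glob("*.png")):
        img = Image.open(path).convert("L")       # force 8-bit grayscale
        vectors.append(np.asarray(img, np.float64).ravel())
    return np.stack(vectors)                      # one image per row
```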

2. PCA projection (Principal Component Analysis)
We project the image set {x_i} into the PCA subspace by throwing away
the smallest principal components. In our experiments, we kept 98
percent of the information in the sense of reconstruction error. For
the sake of simplicity, we still use x to denote the images in the PCA
subspace in the following steps. We denote by W_PCA the transformation
matrix of PCA.

3. Constructing the nearest-neighbor graph
Let G denote a graph with n nodes. The ith node corresponds to the
face image x_i. We put an edge between nodes i and j if x_i and x_j
are “close,” i.e., x_j is among the k nearest neighbors of x_i, or x_i
is among the k nearest neighbors of x_j. The constructed
nearest-neighbor graph is an approximation of the local manifold
structure. Note that here we do not use the ε-neighborhood to construct
the graph, simply because it is often difficult to choose an optimal ε
in real-world applications, while the k-nearest-neighbor graph can be
constructed more stably. The disadvantage is that the
k-nearest-neighbor search increases the computational complexity of
our algorithm.

4. Choosing the weights of neighboring pixels
If nodes i and j are connected, put

    S_ij = exp(−‖x_i − x_j‖² / t),    (34)

where t is a suitable constant; otherwise, put S_ij = 0. Then solve the
generalized eigenvector problem

    X L X^T w = λ X D X^T w,    (35)

where D is a diagonal matrix whose entries are the column (or row,
since S is symmetric) sums of S, D_ii = Σ_j S_ji, and L = D − S is the
Laplacian matrix. The ith column of matrix X is x_i. Let
w_0, w_1, ..., w_{k−1} be the solutions of (35), ordered according to
their eigenvalues, 0 ≤ λ_0 ≤ λ_1 ≤ ... ≤ λ_{k−1}. These eigenvalues are
equal to or greater than zero because the matrices X L X^T and X D X^T
are both symmetric and positive semidefinite. Thus, the embedding is as
follows:

    x → y = W^T x,    (36)
    W = W_PCA W_LPP,    (37)
    W_LPP = [w_0, w_1, ..., w_{k−1}],    (38)

where y is a k-dimensional vector and W is the transformation matrix.
This linear mapping best preserves the manifold’s estimated intrinsic
geometry in a linear sense. The column vectors of W are the so-called
Laplacianfaces. This principle is implemented with an unsupervised
learning concept using training and test data.

5. Recognize the image
Measure the value of each image in the test directory, which contains
multiple gray images. If a test image matches any stored gray image,
the system recognizes it and displays the image; otherwise, the image
is not recognized.
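
A minimal sketch of this matching step in the Laplacian subspace,
assuming the embeddings produced by the earlier sketches; threshold is
an assumed rejection parameter, not part of the original synopsis:

```python
import numpy as np

def recognize(y_test, Y_train, labels, threshold=None):
    """Match a test embedding against the embedded training images.

    y_test: embedded test image; Y_train: (n, dim) embedded training
    set. If the best match is farther than `threshold`, report the face
    as not recognized (None).
    """
    dists = np.linalg.norm(Y_train - y_test, axis=1)
    best = int(np.argmin(dists))
    if threshold is not None and dists[best] > threshold:
        return None                               # no match found
    return labels[best]                           # recognized image's label
```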
