Illumination Invariant Hand Gesture Classification against Complex
Background using Combinational Features.
Anjali Patil
Electronics Engineering
DKTE TEI, Ichalkaranji
Maharashtra, India
anjalirpatil@gmail.com
Dr. Mrs. S. Subbaraman
Professor, Electronics Department
Walchand College of Engineering, Sangli.
Maharashtra, India
s.subbaraman@gmail.com
Abstract: Hand gesture classification is widely used in
applications such as Human-Machine Interface, Virtual
Reality, Sign Language Recognition, Animation, etc. The
classification accuracy of static gestures depends on the
technique used to extract the features as well as the classifier
used in the system. To achieve the invariance to illumination
against complex background, experimentation has been
carried out to generate a feature vector based on skin color
detection by fusing the Fourier descriptors of the image with
its geometrical features. Such feature vectors are then used in
Neural Network environment implementing Back
Propagation algorithm to classify the hand gestures. The set
of images for the hand gestures used in the proposed research
work are collected from the standard databases viz.
Sebastien Marcel Database, Cambridge Hand Gesture Data
set and NUS Hand Posture dataset. An average classification
accuracy of 95.25% has been observed which is on par with
that reported in the literature by the earlier researchers.
Index Terms: Back-propagation, Combinational Features,
Fourier Descriptor, Neural Network, Skin color, Static hand
gesture
I. INTRODUCTION
Hand gesture recognition plays an important role
in the areas covering the applications from virtual reality
to sign language recognition. The images captured for
hand gestures fall into two categories viz. glove based
images and non-glove based images. Hand gesture
recognition is correspondingly classified as glove based
recognition and non-glove based, i.e. vision based,
recognition.
Fig. 1. Steps involved in the proposed Hand Gesture Classification
In glove based approach, users have to wear
cumbersome wires which may hinder the ease and
naturalness with which the user interacts with computers
or machines. The awkwardness in using gloves and other
devices can be overcome by using vision based systems
that is, video based interactive systems. This technique
uses cameras and computer vision methods to recognize
the gestures in a much simpler way [1][2]. Vision based
approaches are further classified into 3-D model based
approaches (which give an exact representation of shape but are
computationally expensive) and appearance based 2-D
approaches, which project the 3-D object onto a 2-D plane
and are computationally economical. This paper focuses on
appearance based methods for the recognition of hand postures.
As shown in Figure 1, after capturing the image
of hand gesture, segmentation is done based on the skin
color. In the skin color detection process the RGB color
model is first transformed to an appropriate color space and a
skin classifier is used to decide whether a pixel is a skin pixel
or a non-skin pixel. Skin color is a low level feature
extraction technique which is robust to scale, geometric
transformations, occlusion, etc. Skin classification yields
the region of interest, which is then used to find
the boundary of the hand. After extracting the hand
contour, the Fourier Descriptors (FDs) are extracted and
combined with the geometrical features. The feature
vectors, thus formed, are given to artificial neural network
used as a classifier to classify the hand gestures.
[Fig. 1 block diagram: Image Input → Skin Color Detection → Fourier Descriptor and Geometrical Features → Combination of features → Classification using ANN → Gesture Output]
The detailed implementation is explained in the
subsequent sections. The main objective of this paper is to
present the contribution of the work done towards correctly
classifying hand gestures with the help of skin color
from images captured under different
illumination conditions. Such a system is robust
against variation in illumination and hence can be called
illumination invariant.
The rest of this paper is organized as follows:
Section 2 presents the literature review on illumination
normalization and skin color detection. Experimental work
is discussed in Section 3. Detailed results are presented in
Section 4 followed by conclusions and future scope in
Section 5.
II. RELATED WORK
Detection of skin color in an image is sensitive to
several factors such as illumination conditions, camera
characteristics, background, shadows, motions besides
person dependent characteristics such as gender, ethnicity,
makeup etc. A good skin color detector must be robust
against illumination variations and must be able to cope
with the great variability of skin color between ethnic
groups. Another challenge in detecting human skin color is
the fact that real-world objects in the background of the
image can have skin tone colors, for example leather, wood,
skin-colored clothing, hair etc. Systems not taking care of
this aspect may produce false detections. The purpose of the
present research work is to identify and classify hand gestures
in this type of uncontrolled environment.
An image can be represented in different color spaces
including RGB, normalized RGB, HSV, YCbCr, YUV,
YIQ, etc. Color spaces that efficiently separate the
chromaticity from the luminance component of color are
typically considered preferable (Luma-Chroma models).
This is due to the fact that by employing only the
chromaticity-dependent components of color, some degree of
robustness to illumination changes can be achieved.
Different skin color models with a comparison of their
performance have been presented by Terrillon et al. in [3].
The detection and segmentation of skin pixels
using HSV and YCbCr color space has been explained by
Khamar et al. in [4] wherein an approach to discriminate
color and intensity information under uneven illumination
conditions is highlighted. Thresholds based on
histograms of the Hue, Saturation and Value (HSV) have
been used to classify the pixels into skin or non-skin categories.
The typical threshold values applied to the chrominance
components followed the limits 150 < Cr < 200 and
100 < Cb < 150. Chromaticity clustering using k-means in the
YCbCr color space to segment the hand against uneven
illumination and complex background has been
implemented in [5] by Zhang Qiu-yu et al. The different
experiments performed on the Jochen Triesch Static Hand
Posture Database II were reported with comparison in
terms of time consumed.
Bahare Jalilian et.al. detected regions of face and
hands in complex background and non-uniform
illumination in [6]. The steps involved in their approach
were skin color detection based on YCbCr color space,
application of single Gaussian model followed by Bayes
rule and morphological operations. Recognition accuracy
for images with complex background reported was 95%.
YCbCr color space was used in [7] by Hsiang et.al. to
detect hand contour based on skin color against the
complex background. Convex hull was calculated and the
angle between finger spacing and the finger tip positions
were derived to classify the hand gesture. The accuracy of
the recognition rate reported was more than 95.1%.
HSV based skin color detection was implemented
by Nasser Dardas et al. in [8]. The method has been
reported to have real time performance and to be robust
against rotation, scaling and lighting conditions.
Additionally it can tolerate occlusion well. The
proposed thresholds were H between 0° and 20° and S
between 75 and 190. The segmentation yielded
the hand contour, which was subsequently compared with
templates of the contours of the hand postures. Four
gestures were tested by the authors, indicating an
average accuracy of more than 90%.
HSV based hand skin color segmentation was
used by Zhi-hua et al. in [9]. They presented an efficient
and effective method for hand gesture recognition. The
hand region is detected using the HSV color model wherein
they applied thresholds of 315, 94, and 37 on H, S and V
respectively along with a background subtraction method.
After hand detection, segmentation was carried out to
separate the palm and fingers. Fingers and thumb were
counted to recognize the gesture. The total classification
accuracy reported over the 1300 images they tested
was 96.69%. However the system failed to work
satisfactorily in case of complex background.
Wei Ren Tan et.al [10] proposed a novel human
skin detection approach that combined a smoothened 2-D
histogram and Gaussian model, for automatic human skin
detection in color image(s). In their approach, an eye
detector was used to refine the skin model for a specific
person. This approach drastically reduced the
computational costs as no training was required, and it
improved the accuracy of skin detection to 90.39% despite
wide variation in ethnicity and illumination.
Log Chromaticity Color Space (LCCS) was
proposed in [11] by Bishesh Khanal et al., giving an
illumination invariant representation of the image. LCCS
resulted in an overall classification rate (CR) of about
85%. A better CR (90.45%) was obtained when LCCS was
calculated as against luminance alone. In [12], Yong Luo
et al. removed the illumination component by subtracting the
mean estimation from the original image. To standardize
the overall gray values of the different
face images, a ratio matrix and modulus mean were
calculated and used as features. The reported recognition
rate on the Yale B+ face database was 92% using PCA and
94.28% using LDA. Hsu et al. addressed the issue of
illumination changes by first normalizing the image using
the geometric mean followed by a natural log of the
normalized image [13]. The false rejection and false
acceptance ratios reported by them were as low as 0.47%
and 0% respectively.
Mohamed Alsheakhali et al. in [14] proposed a
technique for detecting the hand and determining its
center, tracking the hand's trajectory, analyzing the
variations in the hand locations, and finally recognizing
the gesture. Their technique overcame the
background complexity and gave satisfactory results for
a camera located up to 2.5 meters from the object of
interest. Experimental results indicate that this technique
could recognize 12 gestures with more than 94%
recognition accuracy.
The extensive literature review reveals that a
Luminance-Chrominance color model can be used to
detect skin color and provides robustness against
illumination variation. Chroma (chrominance) sampling is
the key to color based segmentation in a real time
environment. YCbCr was found to be promising for complex
backgrounds while HSV shows robustness against
variation in the intensity of illumination while capturing
the images. In order to achieve the benefits of both YCbCr
and HSV, an approach based on the combination/fusion
of the two, viz. YUV (a variant of YCbCr) and HSV color spaces,
is proposed in this paper to detect the skin color. The YUV
color space, which was initially developed for PAL analog
video, is now also used in the CCIR 601 standard for
digital video. The detailed implementation of this fusion
and the results thereof are discussed in Section III.
III. EXPERIMENTAL WORK
As discussed in Section II, the first clue to
segment the hand from the image is skin color. For this
purpose a Luminance-Chrominance color model is used.
The pure color (chrominance) components are used to model the
skin color; for instance the UV space in YUV and the SV space in
HSV color space. But under varying illumination
conditions, the skin color of hands from different
databases, whether of different persons or even the same person,
may vary. Sample images of hand gestures captured
under varying illumination conditions used in this paper
are shown in Figure 2. These are available online for
research purposes and are from the Sebastien Marcel database
(Figure 2.a) [15], the Cambridge Hand Gesture database
(Figure 2.b) [16] and the NUS Hand Posture database II
(Figure 2.c) [17].
To reduce the effects of illumination variation,
a normalized color space is used. Normalization is
achieved by combining YUV and HSV color spaces. For
this firstly the RGB image is converted into the YUV and
HSV color spaces using (1) to (6). This separates the
luminance and chrominance components from the image.
Separation of the chrominance approximates the
“chromaticity” of skin (or, in essence, its absorption
spectrum) rather than its apparent color value thereby
increasing the robustness against variation in illumination.
In this process, typically the luminance component is
eliminated to remove the effect of shadows, variations in
illumination etc.
Fig. 2: Images of Hand Gestures with Variation in Illumination
a) 'Five' from the Sebastien Marcel database [15] b) 'Flat' from the
Cambridge Hand Gesture database [16] and c) 'B' from the NUS
Hand Posture database II [17].
YUV is an orthogonal color space in which the
color is represented with statistically independent
components. The luminance (Y) component is computed
as a weighted sum of RGB values while the chrominance
(U and V) components are computed by subtracting the
luminance component from B and R values respectively.
Mathematically this conversion is given by the following
equations:
Y = R + 2G + B    (1)
U = B − Y         (2)
V = R − Y         (3)
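For illustration, a direct NumPy implementation of equations (1)-(3) might look like the sketch below. It follows the equations as reconstructed above; the exact scaling of Y (some formulations additionally divide by 4) is an assumption, not the authors' exact code.

```python
import numpy as np

def rgb_to_yuv_approx(rgb):
    """Approximate YUV conversion per equations (1)-(3).

    rgb: array of shape (..., 3) with R, G, B channels.
    Note: the scaling of Y (and hence U, V) is an assumption; some
    formulations use Y = (R + 2G + B) / 4.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y = r + 2.0 * g + b      # eq. (1): weighted sum of R, G, B
    u = b - y                # eq. (2)
    v = r - y                # eq. (3)
    return np.stack([y, u, v], axis=-1)
```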
HSV is invariant to dull surfaces and lighting. HSV
approximates the way humans perceive and interpret color.
Research shows that the luminance may vary due to
ambient lighting conditions and is not a reliable measure for
detecting skin pixels. Saturation (S) and Value (V, brightness)
can be used in order to minimize the influence of shadow
and uneven lighting. Conversion from RGB to HSV color
space is done using the following equations:
H = \cos^{-1}\left\{ \frac{\frac{1}{2}\left[(R-G)+(R-B)\right]}{\sqrt{(R-G)^2+(R-B)(G-B)}} \right\}    (4)

S = 1 - \frac{3\,\min(R,G,B)}{R+G+B}    (5)

V = \frac{R+G+B}{3}    (6)
The same algorithm mentioned in [4] is used for human
skin detection from the YUV and HSV color spaces.
Histograms are used to decide the threshold levels for
discriminating skin and non-skin pixels. The output
is an image containing only the skin pixels. The largest blob is
detected as the hand. An arm removal algorithm is then applied
to segment the palm for further processing.
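The fused skin detection step described above can be sketched as follows. This is an illustrative outline only: the threshold ranges, the use of OpenCV's built-in YUV/HSV conversions, and the morphology kernel size are assumptions, since the paper derives its thresholds from histograms of the training images.

```python
import cv2
import numpy as np

def detect_skin(bgr_image):
    """Return a binary mask of the largest skin-colored blob."""
    # Convert to YUV and HSV; OpenCV loads images in BGR order.
    yuv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YUV)
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Threshold the chrominance (U, V) and the S, V channels.
    # These ranges are placeholders; the paper obtains them from histograms.
    u, v = yuv[:, :, 1], yuv[:, :, 2]
    s, val = hsv[:, :, 1], hsv[:, :, 2]
    mask_yuv = (u > 80) & (u < 130) & (v > 135) & (v < 180)
    mask_hsv = (s > 40) & (val > 60)
    mask = (mask_yuv & mask_hsv).astype(np.uint8) * 255

    # Morphological cleanup of the binary skin map.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Keep only the largest connected component (assumed to be the hand).
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if num <= 1:
        return mask  # nothing detected
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return np.where(labels == largest, 255, 0).astype(np.uint8)
```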
After segmenting the hand using skin color, the boundary
of the hand is detected. An object is generally described
in a meaningful manner by its boundary. Since each
boundary is a collection of connected curves, the focus is
on the description of these connected curves. In hand gesture
recognition, a technique is chosen that provides unique
features for shape representation and has low time
complexity, so that static hand gestures can be recognized
in real time. It is also expected that the technique used
should be invariant to translation, rotation, and scaling.
Different methods in the literature include the use
of eccentricity, scale space and Fourier descriptors for
shape detection. 2-D Fourier transformation is extensively
used for shape representation and analysis. The details of
the literature review describing the use of Fourier
descriptors for 2-D shape detection and hand shape
detection and its implementation can be found in [18]. The
coefficients calculated by applying the Fourier transform to
the shape boundary extracted from the input image form the
Fourier descriptors of the shape. These descriptors represent
the shape in the frequency domain. The global features of the
shape are given by the low frequency descriptors while the
finer details of the shape are given by the higher frequency
descriptors. The number of coefficients obtained after
transformation is generally large, but a small subset of them
is sufficient to properly define the overall features of the
shape. The high frequency descriptors, which mainly provide
the finer details of the shape, are not needed for
discriminating between shapes, so they can be ignored. By
doing this, the dimensionality of the Fourier descriptors used
for capturing shapes is significantly reduced and the size of
the feature vector is also reduced.
A shape is a connected object and is described
by a closed contour, which can be represented as a
collection of pixel coordinates in the x and y directions.
These coordinates can be considered as sampled values.
Suppose that the boundary of a particular shape has P
pixels numbered from 0 to P-1. The p-th pixel along the
boundary of the contour has position (x_p, y_p). The contour
can be described using two parametric equations:
x(p) = x_p
y(p) = y_p    (7)
The boundary pixel coordinates are not treated as Cartesian
pairs; instead they are converted to the complex plane by
using the following equation:
s(p) = x(p) + j y(p)    (8)
The above equation means that the x-axis is treated as the
real axis and the y-axis as the imaginary axis of a sequence of
complex numbers. Although the interpretation of the
sequence is recast, the nature of the boundary itself is
not changed. This representation has one great
advantage: it reduces a 2-D problem to a 1-D problem. The
Discrete Fourier Transform of this function is taken and the
frequency spectrum is obtained. The Discrete Fourier
Transform of s(p) is given by
a(u) = \frac{1}{P}\sum_{p=0}^{P-1} s(p)\, e^{-j2\pi up/P}    (9)

where u = 0, 1, 2, ..., P-1.
The complex coefficients a(u) are called the Fourier
descriptors of the boundary. The inverse Fourier transform
of these coefficients restores s(p) and is given by the
following equation:

s(p) = \sum_{u=0}^{P-1} a(u)\, e^{j2\pi up/P}    (10)

where p = 0, 1, 2, ..., P-1.
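A minimal NumPy sketch of equations (8)-(10): the boundary points are packed into a complex sequence and the descriptors are obtained with the FFT. The 1/P scaling follows equation (9) as written above.

```python
import numpy as np

def fourier_descriptors(boundary_xy):
    """Compute Fourier descriptors a(u) of a closed boundary.

    boundary_xy: (P, 2) array of (x, y) boundary pixel coordinates.
    """
    s = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]   # s(p) = x(p) + j y(p), eq. (8)
    return np.fft.fft(s) / len(s)                    # a(u), eq. (9)

def reconstruct_boundary(a):
    """Inverse transform, eq. (10): recovers the boundary sequence s(p)."""
    return np.fft.ifft(a * len(a))
```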
To increase the robustness of the system, geometrical
features such as eccentricity and the area-to-perimeter
aspect ratio of the closed contour are also calculated from
the properties of the hand contour region. The
feature vector is formed by combining the skin color based
shape features and the geometrical features. The complete
procedure for feature vector formation and classification is
given in the following algorithm.
Algorithm:
a) Read RGB image
b) Convert RGB to HSV and YUV
c) Apply skin detector algorithm based on the threshold
on S and U.
d) Perform the morphological operations.
e) Find the largest blob
f) Detect the palm by using arm removal algorithm.
g) Extract the boundary co-ordinates of the contour
h) Apply Fast Fourier Transform and calculate Fourier
descriptors
i) Calculate the geometrical properties of the blob
j) Combine the features (skin color + Fourier +
geometrical features) to form the feature vector (see the sketch after this algorithm).
k) Repeat the procedure for all the images in the training
and testing database.
l) Train the Backpropagation neural network to classify
the gestures.
m) Test the network and find out the accuracy.
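Steps (g)-(j) of the algorithm can be sketched as follows, assuming the skin mask and Fourier descriptor routines from the earlier sketches. The scikit-image regionprops properties (area, perimeter, eccentricity) are used here for illustration; the function names are not the authors'.

```python
import numpy as np
from skimage import measure

def geometric_features(mask):
    """Eccentricity and area-to-perimeter ratio of the largest blob in a binary mask."""
    labeled = measure.label(mask > 0)
    regions = measure.regionprops(labeled)
    props = max(regions, key=lambda r: r.area)          # largest blob = hand
    area_perimeter_ratio = props.area / max(props.perimeter, 1e-6)
    return np.array([area_perimeter_ratio, props.eccentricity])

def build_feature_vector(fd_invariant, mask, n_coeffs=20):
    """Combine 20 invariant Fourier coefficients with 2 geometric features (22 total)."""
    return np.concatenate([fd_invariant[:n_coeffs], geometric_features(mask)])
```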
IV. RESULTS AND DISCUSSION
As mentioned in Section III, the performance of the
system is tested using three different datasets with details
as given below.
1. Sebastien Marcel dataset: consists of a total of 6 postures,
viz. A, B, C, Point, Five and V, of 10 persons against 3
different backgrounds (light, dark and complex).
2. Cambridge Hand Gesture dataset: consists of 900 image
sequences of 9 gesture classes. Each class has 100
image sequences performed by 2 subjects, captured
under 5 different illuminations and 10 arbitrary
motions. The 9 classes are defined by three
primitive hand shapes and three primitive motions.
For the experimentation we focus on the
hand shapes under different illumination conditions.
3. NUS Hand Posture database II: consists of postures
by 40 subjects of different ethnicities against
different complex backgrounds. The subset used
in this experimentation consists of 4 hand postures
repeated five times by each of the subjects. Hand
posture images are of size 160x120 pixels; 100 images are
used for training and 100 for testing.
The skin detector is first applied to extract skin regions in
the images from the three databases using the fusion of the HSV
and YUV color spaces and applying the thresholds. The
results of the skin detector algorithm on images from the
three sets are presented in Figs. 4, 5 and 6. The hand postures
shown in these figures are 'Five' from the Sebastien
Marcel dataset II, 'B' from NUS and 'Flat' from the
Cambridge hand gesture dataset. The purpose of
presenting the same hand shape for all the databases is to
show that the proposed system works well for complex
backgrounds and illumination conditions. Fig. 4 shows
that the algorithm works quite well for the Sebastien
Marcel dataset. Fig. 5 presents empirical results showing
that the detection of the hand region is not up to the mark
for the Cambridge hand gesture database under the 5th
illumination condition, as can be seen from Fig. 2b.
Fig. 6 shows the result of the skin detection on the
NUS dataset. After detecting the skin, morphological
operations were performed to get the closed contour of the
hand. As explained in the algorithm in Section III, the
Fourier descriptors were chosen as features and hence
were calculated from the closed contour. The descriptors
were then normalized by nullifying the 0th Fourier
descriptor to obtain invariance to translation. Scale
invariance was obtained by dividing all Fourier descriptors
by the magnitude of the 1st Fourier descriptor. Rotation
invariance was achieved by considering only the magnitude
of the Fourier coefficients.
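In code, the invariance normalization described above reduces to the following sketch, applied to the descriptors a(u) computed earlier:

```python
import numpy as np

def invariant_descriptors(a, n_coeffs=20):
    """Translation-, scale- and rotation-invariant Fourier descriptor magnitudes."""
    a = np.asarray(a, dtype=complex).copy()
    a[0] = 0.0                         # nullify the 0th descriptor -> translation invariance
    scale = np.abs(a[1])
    if scale > 0:
        a = a / scale                  # divide by |a(1)| -> scale invariance
    return np.abs(a)[1:n_coeffs + 1]   # keep magnitudes only -> rotation invariance
```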
The feature vector was formed by considering 20
coefficients of Fourier descriptors (which are invariant to
scale, rotation and translation) and two geometrical
features viz. area to perimeter aspect ratio and eccentricity,
thus making a total of 22 features. Geometrical features
were calculated from the closed contour of the segmented
hand. The feature vectors thus formed were then used to
train and test the multilayer feed forward neural network
to classify the hand gesture. For learning the network,
Back propagation algorithm with Levenberg-Marquardt
algorithm has been used to train the network. The
activation function used is “Sigmoid”. Fig. 3 shows the
architecture of the NN used in this experiment.
Fig. 3. Neural Network Architecture
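For the classification stage, a hedged sketch using scikit-learn's MLPClassifier is given below. The paper trains with back propagation and Levenberg-Marquardt optimization (as in MATLAB's trainlm); scikit-learn does not provide that solver, so a standard gradient-based solver with a logistic (sigmoid) activation is substituted purely for illustration, and the hidden layer size is an assumption.

```python
from sklearn.neural_network import MLPClassifier

def train_gesture_classifier(X_train, y_train, X_test, y_test):
    """Train and evaluate a feed-forward network on 22-element feature vectors."""
    clf = MLPClassifier(hidden_layer_sizes=(20,),   # hidden layer size: assumed
                        activation='logistic',      # sigmoid activation
                        solver='adam',              # substitute for Levenberg-Marquardt
                        max_iter=2000,
                        random_state=0)
    clf.fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)
```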
The same experiment was also carried out on the NUS Hand
Posture dataset I, which consists of 10 classes of postures with
24 samples each. As this dataset has a uniform background, the
classification accuracy observed is 100%.
Fig. 4. Results of skin detector - Sebastien Marcel dataset II.
Fig. 5. Results of skin detector for posture 'Flat' from the Cambridge dataset.
Fig. 6. Result of skin detector for posture 'B' from NUS Hand Posture Dataset II.
The results of the proposed work are presented in the
following tables. For the experimentation, six postures 'A',
'B', 'C', 'Point', 'V' and 'Five' from the Sebastien Marcel
database have been used. Table 1 gives the individual
accuracy for each of these postures. The average accuracy
achieved is 96%.
TABLE 1: CLASSIFICATION ACCURACY FOR SEBASTIEN MARCEL DATABASE

Static gesture | No. of gesture samples | Correct | Incorrect | Classification Accuracy (%)
A     | 100 | 96 | 4 | 96
B     | 100 | 94 | 6 | 94
C     | 100 | 95 | 5 | 95
Point | 100 | 98 | 2 | 98
V     | 100 | 96 | 4 | 96
Five  | 100 | 97 | 3 | 97
Average Classification Accuracy: 96
The proposed work is compared with existing state-of-the-art
techniques on the same benchmark dataset in Table 2,
which shows that the results obtained in this paper
are comparable with those of existing techniques.
TABLE 2: COMPARISON WITH EXISTING STATE-OF-THE-ART TECHNIQUES FOR SEBASTIEN MARCEL DATABASE

Paper | Features | Classifier | Accuracy (%)
[19] | Modified Census Transform | AdaBoost | 81.25
[20] | Haar-like features | AdaBoost | 90.0
[21] | Haar wavelets | Penalty score | 94.89
[22] | Scale space features | AdaBoost | 93.8
[23] | Bag of features | Support Vector Machine | 96.23
[24] | Normalized Moment of Inertia (NMI) and Hu invariant moments | Support Vector Machine | 96.9
Proposed method | Skin color and Fourier Descriptor | Artificial Neural Network | 96
Four hand gestures 'A', 'B', 'C' and 'D' from the NUS Hand
Posture dataset are used for the experiments. For each
posture, 100 samples are used for training and 100 for
testing. The results of the experiment are presented in
Table 3. An average accuracy of 95.25% is achieved.
TABLE 3: CLASSIFICATION ACCURACY FOR NUS HAND POSTURE DATABASE II

Static gesture | No. of gesture samples | Correct | Incorrect | Classification Accuracy (%)
A | 100 | 96 | 4 | 94
B | 100 | 94 | 6 | 94
C | 100 | 97 | 3 | 97
D | 100 | 96 | 4 | 96
Average Classification Accuracy: 95.25
The results obtained through this experimentation are
compared with state-of-the-art techniques. The comparison
reveals that the proposed method performs better than the
existing methods. The details are given in Table 4.
TABLE 4: COMPARISON WITH EXISTING STATE-OF-THE-ART TECHNIQUES FOR NUS HAND POSTURE DATABASE II

Paper | Features | Classifier | Accuracy (%)
[25] | Shape based and texture based features | GentleBoost | 75.71
[26] | Viola-Jones | Real Time Deformable Detector | 90.66
[27] | NUS standard model features (SMFs) | Fuzzy Rule Classifier / Support Vector Machine | 93.33 / 92.50
[28] | Shape, texture and color | Support Vector Machine | 94.36
Proposed method | Skin color and Fourier Descriptor | Artificial Neural Network | 95.25
Three primitive hand shapes 'Flat', 'Spread' and
'V' from the Cambridge Hand Gesture database are used for
testing the proposed algorithm. The results of the proposed
work are presented in Table 5. The experiment is
carried out for each set of the database and reported in the
table. The average accuracy is 93.67%.
The results obtained through this
experimentation are compared with state-of-the-art
techniques and reported in Table 6.
V. CONCLUSION AND FUTURE SCOPE
This paper proposed a system for hand
segmentation and classification. The main component of
the system is the detection of the hand based on skin color under
different illumination conditions and against complex
backgrounds. Fusion of the HSV and YUV color spaces to
detect skin color gave invariance to
illumination even against complex backgrounds. The closed
contour of the segmented hand is used to detect the shape
of the hand gesture, and Fourier descriptors are calculated as
shape descriptors. To improve the robustness of shape
detection, geometrical features are added to the feature
vector. The feature vector thus obtained by combining the
shape features and geometrical features is given to an
artificial neural network for classification. An average
classification accuracy of 95.25% is achieved over the
three databases.
The hand postures in the databases have
different viewing angles, so the classification accuracy can
be further increased by extracting view invariant
features from the images. This lays a direction for further
research in this area.
TABLE 5: CLASSIFICATION ACCURACY FOR CAMBRIDGE HAND GESTURE DATABASE

Static gesture | No. of gesture samples | Set 1 | Set 2 | Set 3 | Set 4 | Set 5
Flat   | 100 | 93 | 96 | 96 | 92 | 94
Spread | 100 | 94 | 95 | 93 | 93 | 93
V      | 100 | 95 | 96 | 95 | 92 | 94
Average Classification Accuracy | | 94 | 95.67 | 94.67 | 92.33 | 93.67
TABLE 6: COMPARISON WITH EXISTING STATE-OF-THE-ART TECHNIQUES FOR CAMBRIDGE HAND GESTURE DATABASE

Paper | Features | Classifier | Accuracy (%)
[29] | PCA on Motion Gradient Orientation | Sparse Bayesian Classifier | 80
[30] | Canonical Correlation Analysis (CCA) + SIFT | Support Vector Machine | 85
[31] | Concatenated HOG | Kernel Discriminant Analysis with RBF kernel | 91.1
[32] | Fourier Descriptors (static postures, 4 shapes) | Support Vector Machine | 92.5
Proposed method | Skin color and Fourier Descriptor | ANN | 94.50
REFERENCES
[1]. Vladimir I. Pavlovic et al., 'Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review', IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, July 1997.
[2]. Sushmita Mitra, Tinku Acharya, 'Gesture Recognition: A Survey', IEEE Transactions on Systems, Man, and Cybernetics—Part C: Applications and Reviews, vol. 37, no. 3, May 2007.
[3]. Terrillon, J., Shirazi, M. N., Fukamachi, H., Akamatsu, S., 'Comparative performance of different skin chrominance models and chrominance spaces for the automatic detection of human faces in color images', IEEE International Conference on Face and Gesture Recognition, pp. 54-61, 2000.
[4]. Khamar Basha Shaik et al., 'Comparative Study of Skin Color Detection and Segmentation in HSV and YCbCr Color Space', 3rd International Elsevier Conference on Recent Trends in Computing, 2015.
[5]. Zhang Qiu-yu, Lu Jun-chi, et al., 'Hand Gesture Segmentation Method Based on YCbCr Color Space and K-Means Clustering', International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 8, no. 5, pp. 105-116, 2015.
[6]. Bahare Jalilian, Abdolah Chalechale, 'Face and Hand Shape Segmentation Using Statistical Skin Detection for Sign Language Recognition', Computer Science and Information Technology, DOI: 10.13189/csit.2013.010305.
[7]. Hsiang-Yueh Lai, Han-Jheng Lai, 'Real-Time Dynamic Hand Gesture Recognition', International Symposium on Computer, Consumer and Control (IS3C), 2014, DOI: 10.1109/IS3C.2014.177.
[8]. Nasser H. Dardas, Emil M. Petriu, 'Hand Gesture Detection and Recognition Using Principal Component Analysis', IEEE International Conference on Computational Intelligence for Measurement Systems and Applications (CIMSA), 2011.
[9]. Zhi-hua Chen, Jung-Tae Kim, Jianning Liang, et al., 'Real-Time Hand Gesture Recognition Using Finger Segmentation', The Scientific World Journal, vol. 2014, Article ID 267872, 2014. http://dx.doi.org/10.1155/2014/267872
[10]. Wei Ren Tan, et al., 'A Fusion Approach for Efficient Human Skin Detection', IEEE Transactions on Industrial Informatics, vol. 8, no. 1, pp. 138-147.
[11]. Bishesh Khanal, et al., 'Efficient Skin Detection Under Severe Illumination Changes and Shadows', ICIRA 2011 - 4th International Conference on Intelligent Robotics and Applications, Aachen, Germany, pp. 1-8, Dec. 2011.
[12]. Yong Luo et al., 'Illumination Normalization Method Based on Mean Estimation for A Robust Face Recognition', research article.
[13]. Hsu, R., Abdel-Mottaleb, M., Jain, A. K., 'Face detection in color images', IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, pp. 696-706, 2002.
[14]. Mohamed Alsheakhali, Ahmed Skaik, 'Hand Gesture Recognition System', International Conference on Information and Communication Systems (ICICS), 2011.
[15]. S. Marcel, 'Hand posture recognition in a body-face centered space', Proc. Conf. Human Factors in Computing Systems (CHI), 1999.
[16]. T.-K. Kim et al., 'Tensor Canonical Correlation Analysis for Action Classification', Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Minneapolis, MN, 2007.
[17]. Pramod Kumar Pisharady, Prahlad Vadakkepat, Ai Poh Loh, 'Attention Based Detection and Recognition of Hand Postures Against Complex Backgrounds', International Journal of Computer Vision, vol. 101, no. 3, pp. 403-419, February 2013.
[18]. A. R. Patil, S. Subbaraman, 'Static Hand Gesture Detection and Classification using Contour based Fourier Descriptor', 9th World Congress International Conference on Science and Engineering Technology, Goa, Jan. 2016. http://iferp.in/digital_object_identifier/WCASETG9718
[19]. Agnès Just et al., 'Hand Posture Classification and Recognition using the Modified Census Transform', 7th IEEE International Conference on Automatic Face and Gesture Recognition (FGR'06), 2006.
[20]. Q. Chen, N. Georganas, E. Petriu, 'Real-time vision-based hand gesture recognition using Haar-like features', Proc. IEEE IMTC, 2007, pp. 1-6.
[21]. Y. Fang, K. Wang, J. Cheng, H. Lu, 'A real-time hand gesture recognition method', Proc. IEEE Int. Conf. Multimedia Expo, 2007, pp. 995-998.
[22]. W. Chung, X. Wu, Y. Xu, 'A real time hand gesture recognition based on Haar wavelet representation', Proc. IEEE Int. Conf. Robot. Biomimetics, 2009, pp. 336-341.
[23]. Nasser H. Dardas, Nicolas D. Georganas, 'Real-Time Hand Gesture Detection and Recognition Using Bag-of-Features and Support Vector Machine Techniques', IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 11, November 2011.
[24]. Y. Ren, C. Gu, 'Real-time hand gesture recognition based on vision', Proc. Edutainment, 2010, pp. 468-475.
[25]. Serre, T., et al., 'Robust object recognition with cortex-like mechanisms', IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 3, pp. 411-426, 2007.
[26]. Uriel Haile Hernandez-Belmonte, Victor Ayala-Ramirez, 'Real-Time Hand Posture Recognition for Human-Robot Interaction Tasks', Sensors, vol. 16, no. 36, 2016, doi:10.3390/s16010036.
[27]. P. Pramod Kumar et al., 'Hand Posture And Face Recognition Using A Fuzzy-Rough Approach', International Journal of Humanoid Robotics, vol. 7, no. 3, pp. 331-356, 2010, DOI: 10.1142/S0219843610002180.
[28]. Pramod Kumar Pisharady et al., 'Attention Based Detection and Recognition of Hand Postures Against Complex Backgrounds', International Journal of Computer Vision, DOI: 10.1007/s11263-012-0560-5.
[29]. Shu-Fai Wong, Roberto Cipolla, 'Real-time Interpretation of Hand Motions using a Sparse Bayesian Classifier on Motion Gradient Orientation Images', BMVC 2005, doi:10.5244/C.19.41.
[30]. T.-K. Kim, R. Cipolla, 'Canonical correlation analysis of video volume tensors for action categorization and detection', IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 8, pp. 1415-1428, 2009.
[31]. Masoud Faraki, Mehrtash T. Harandi, Fatih Porikli, 'Image Set Classification by Symmetric Positive Semi-Definite Matrices'.
[32]. Heba M. Gamal et al., 'Hand Gesture Recognition using Fourier Descriptors', IEEE, 2013.

Authors' Profiles

Mrs. Anjali R. Patil completed her BE (2002) and ME (2009) from Shivaji University, Kolhapur. She is currently pursuing a PhD in Electronics Engineering from Shivaji University, Kolhapur. Her research interests include image processing, pattern recognition and soft computing.

Dr. Mrs. Shaila Subbaraman, Ph.D. from I.I.T. Bombay (1999) and M.Tech. from I.I.Sc. Bangalore (1975), has vast industry experience as an R&D engineer and Quality Assurance Manager in the field of manufacturing semiconductor devices and ICs. She also has more than 27 years of teaching experience at both UG and PG level for courses in Electronics Engineering. Her specialization is in Microelectronics and VLSI Design. She has more than fifty publications to her credit. She retired as Dean Academics of the autonomous Walchand College of Engineering, Sangli, and currently works as Professor (PG) in the same college. Additionally, she works as an NBA expert for evaluating engineering programs in India in accordance with the Washington Accord. Recently she was felicitated with the "Pillars of Hindustani Society" award instituted by the Chamber of Commerce, Mumbai, for her contribution to Higher Education in Western Maharashtra.

More Related Content

PDF
Detection of skin diasease using curvlets
PDF
Gesture Recognition Review: A Survey of Various Gesture Recognition Algorithms
PDF
C10162
PDF
50Combining Color Spaces for Human Skin Detection in Color Images using Skin ...
PDF
IRJET-Skin Lesion Classification using 3D Reconstruction
PDF
Ga3111671172
PDF
A Methodology for Extracting Standing Human Bodies from Single Images
PDF
HSV Brightness Factor Matching for Gesture Recognition System
Detection of skin diasease using curvlets
Gesture Recognition Review: A Survey of Various Gesture Recognition Algorithms
C10162
50Combining Color Spaces for Human Skin Detection in Color Images using Skin ...
IRJET-Skin Lesion Classification using 3D Reconstruction
Ga3111671172
A Methodology for Extracting Standing Human Bodies from Single Images
HSV Brightness Factor Matching for Gesture Recognition System

What's hot (19)

PDF
Hand and wrist localization approach: sign language recognition
PDF
ttA sign language recognition approach for
PDF
ICPR Workshop ETCHB 2010
PDF
A Deep Neural Framework for Continuous Sign Language Recognition by Iterative...
PDF
Automatic Isolated word sign language recognition
PDF
MultiModal Identification System in Monozygotic Twins
PDF
Face detection for video summary using enhancement based fusion strategy
PDF
Using Watershed Transform for Vision-based Two-Hand Occlusion in an Interacti...
PDF
Palmprint and Handgeometry Recognition using FAST features and Region properties
PDF
Region based elimination of noise pixels towards optimized classifier models ...
PDF
I045034956
PDF
PDF
Automatic Segmentation of scaling in 2-D psoriasis skin images using a semi ...
PDF
Extended Fuzzy Hyperline Segment Neural Network for Fingerprint Recognition
PDF
ENHANCED SKIN COLOUR CLASSIFIER USING RGB RATIO MODEL
PDF
Importance of Mean Shift in Remote Sensing Segmentation
PDF
WAVELET PACKET BASED IRIS TEXTURE ANALYSIS FOR PERSON AUTHENTICATION
PDF
HVDLP : HORIZONTAL VERTICAL DIAGONAL LOCAL PATTERN BASED FACE RECOGNITION
PDF
SELF-LEARNING AI FRAMEWORK FOR SKIN LESION IMAGE SEGMENTATION AND CLASSIFICATION
Hand and wrist localization approach: sign language recognition
ttA sign language recognition approach for
ICPR Workshop ETCHB 2010
A Deep Neural Framework for Continuous Sign Language Recognition by Iterative...
Automatic Isolated word sign language recognition
MultiModal Identification System in Monozygotic Twins
Face detection for video summary using enhancement based fusion strategy
Using Watershed Transform for Vision-based Two-Hand Occlusion in an Interacti...
Palmprint and Handgeometry Recognition using FAST features and Region properties
Region based elimination of noise pixels towards optimized classifier models ...
I045034956
Automatic Segmentation of scaling in 2-D psoriasis skin images using a semi ...
Extended Fuzzy Hyperline Segment Neural Network for Fingerprint Recognition
ENHANCED SKIN COLOUR CLASSIFIER USING RGB RATIO MODEL
Importance of Mean Shift in Remote Sensing Segmentation
WAVELET PACKET BASED IRIS TEXTURE ANALYSIS FOR PERSON AUTHENTICATION
HVDLP : HORIZONTAL VERTICAL DIAGONAL LOCAL PATTERN BASED FACE RECOGNITION
SELF-LEARNING AI FRAMEWORK FOR SKIN LESION IMAGE SEGMENTATION AND CLASSIFICATION
Ad

Similar to Illumination Invariant Hand Gesture Classification against Complex Background using Combinational Features (20)

PDF
Skin Detection Based on Color Model and Low Level Features Combined with Expl...
PDF
Color Constancy For Improving Skin Detection
PDF
Human Skin Cancer Recognition and Classification by Unified Skin Texture and ...
PDF
Hand Segmentation Techniques to Hand Gesture Recognition for Natural Human Co...
PDF
Analysis and Classification of Skin Lesions Using 3D Volume Reconstruction
PDF
IRJET- Detection and Classification of Skin Diseases using Different Colo...
PDF
A New Algorithm for Human Face Detection Using Skin Color Tone
DOC
OPTIMIZED FINGERPRINT COMPRESSION WITHOUT LOSS OF DATAProposed workblessy up...
PDF
A New Skin Color Based Face Detection Algorithm by Combining Three Color Mode...
PDF
B017310612
PDF
DETECTION OF LESION USING SVM
PDF
Segmentation and Classification of Skin Lesions Based on Texture Features
PDF
Gesture Recognition using Principle Component Analysis & Viola-Jones Algorithm
PDF
IRJET- Detection of Static Body Poses using Skin Detection Techniques to Asse...
PDF
Real time Myanmar Sign Language Recognition System using PCA and SVM
PDF
Human Re-identification with Global and Local Siamese Convolution Neural Network
PDF
A SIGN LANGUAGE RECOGNITION APPROACH FOR HUMAN-ROBOT SYMBIOSIS
PDF
A SIGN LANGUAGE RECOGNITION APPROACH FOR HUMAN-ROBOT SYMBIOSIS
PDF
Paper id 21201419
PDF
Gesture recognition system
Skin Detection Based on Color Model and Low Level Features Combined with Expl...
Color Constancy For Improving Skin Detection
Human Skin Cancer Recognition and Classification by Unified Skin Texture and ...
Hand Segmentation Techniques to Hand Gesture Recognition for Natural Human Co...
Analysis and Classification of Skin Lesions Using 3D Volume Reconstruction
IRJET- Detection and Classification of Skin Diseases using Different Colo...
A New Algorithm for Human Face Detection Using Skin Color Tone
OPTIMIZED FINGERPRINT COMPRESSION WITHOUT LOSS OF DATAProposed workblessy up...
A New Skin Color Based Face Detection Algorithm by Combining Three Color Mode...
B017310612
DETECTION OF LESION USING SVM
Segmentation and Classification of Skin Lesions Based on Texture Features
Gesture Recognition using Principle Component Analysis & Viola-Jones Algorithm
IRJET- Detection of Static Body Poses using Skin Detection Techniques to Asse...
Real time Myanmar Sign Language Recognition System using PCA and SVM
Human Re-identification with Global and Local Siamese Convolution Neural Network
A SIGN LANGUAGE RECOGNITION APPROACH FOR HUMAN-ROBOT SYMBIOSIS
A SIGN LANGUAGE RECOGNITION APPROACH FOR HUMAN-ROBOT SYMBIOSIS
Paper id 21201419
Gesture recognition system
Ad

Recently uploaded (20)

PPTX
Digital-Transformation-Roadmap-for-Companies.pptx
PPT
Teaching material agriculture food technology
PPTX
OMC Textile Division Presentation 2021.pptx
PPTX
A Presentation on Artificial Intelligence
PPTX
Machine Learning_overview_presentation.pptx
PDF
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
PDF
Agricultural_Statistics_at_a_Glance_2022_0.pdf
PPTX
TLE Review Electricity (Electricity).pptx
PPTX
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
PDF
A comparative study of natural language inference in Swahili using monolingua...
PPTX
1. Introduction to Computer Programming.pptx
PPTX
Spectroscopy.pptx food analysis technology
PDF
Mushroom cultivation and it's methods.pdf
PPTX
TechTalks-8-2019-Service-Management-ITIL-Refresh-ITIL-4-Framework-Supports-Ou...
PDF
Building Integrated photovoltaic BIPV_UPV.pdf
PDF
Unlocking AI with Model Context Protocol (MCP)
PDF
Spectral efficient network and resource selection model in 5G networks
PDF
Encapsulation_ Review paper, used for researhc scholars
PDF
Assigned Numbers - 2025 - Bluetooth® Document
PDF
Mobile App Security Testing_ A Comprehensive Guide.pdf
Digital-Transformation-Roadmap-for-Companies.pptx
Teaching material agriculture food technology
OMC Textile Division Presentation 2021.pptx
A Presentation on Artificial Intelligence
Machine Learning_overview_presentation.pptx
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
Agricultural_Statistics_at_a_Glance_2022_0.pdf
TLE Review Electricity (Electricity).pptx
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
A comparative study of natural language inference in Swahili using monolingua...
1. Introduction to Computer Programming.pptx
Spectroscopy.pptx food analysis technology
Mushroom cultivation and it's methods.pdf
TechTalks-8-2019-Service-Management-ITIL-Refresh-ITIL-4-Framework-Supports-Ou...
Building Integrated photovoltaic BIPV_UPV.pdf
Unlocking AI with Model Context Protocol (MCP)
Spectral efficient network and resource selection model in 5G networks
Encapsulation_ Review paper, used for researhc scholars
Assigned Numbers - 2025 - Bluetooth® Document
Mobile App Security Testing_ A Comprehensive Guide.pdf

Illumination Invariant Hand Gesture Classification against Complex Background using Combinational Features

  • 1. Illumination Invariant Hand Gesture Classification against Complex Background using Combinational Features. Anjali Patil Electronics Engineering DKTE TEI, Ichalkaranji Maharashtra, India anjalirpatil@gmail.com Dr. Mrs. S. Subbaraman Professor, Electronics Department Walchand College of Engineering, Sangli. Maharashtra, India s.subbaraman@gmail.com Abstract: Hand gesture classification is popularly used in wide applications like Human-Machine Interface, Virtual Reality, Sign Language Recognition, Animations etc. The classification accuracy of static gestures depends on the technique used to extract the features as well as the classifier used in the system. To achieve the invariance to illumination against complex background, experimentation has been carried out to generate a feature vector based on skin color detection by fusing the Fourier descriptors of the image with its geometrical features. Such feature vectors are then used in Neural Network environment implementing Back Propagation algorithm to classify the hand gestures. The set of images for the hand gestures used in the proposed research work are collected from the standard databases viz. Sebastien Marcel Database, Cambridge Hand Gesture Data set and NUS Hand Posture dataset. An average classification accuracy of 95.25% has been observed which is on par with that reported in the literature by the earlier researchers. Index Terms: Back-propagation, Combinational Features, Fourier Descriptor, Neural Network, Skin color, Static hand gesture I. INTRODUCTION Hand gesture recognition plays an important role in the areas covering the applications from virtual reality to sign language recognition. The images captured for hand gestures fall into two categories viz. glove based images and non-glove based images. Hand gestures recognition also is correspondingly classified as glove based recognition and non-glove based i.e. vision based recognition. Fig.1 Steps involved in proposed Hand Gesture Classification In glove based approach, users have to wear cumbersome wires which may hinder the ease and naturalness with which the user interacts with computers or machines. The awkwardness in using gloves and other devices can be overcome by using vision based systems that means video based interactive systems. This technique uses cameras and computer vision techniques to recognize the gestures in a much simpler way. [1] [2]. Vision based approaches are further classified as 3D model (which is exact representation of shape but is computationally expensive) and appearance based 2D model which is projection of 3-D object onto 2-D plane and is economical computationally. This paper focuses on appearance based methods for recognition of hand postures. As shown in Figure 1, after capturing the image of hand gesture, segmentation is done based on the skin color. In the skin color detection process the RGB color model is first transformed to appropriate color space and a skin classifier is used to find a pixel is skin pixel or non skin pixel. Skin color is the low level features extraction technique which is robust to scale, geometric transformations, occlusions etc. By the skin classification the region of interest is observed which then is used to find the boundary of the hand. After extracting the hand contour, the Fourier Descriptors (FDs) are extracted and combined with the geometrical features. The feature vectors, thus formed, are given to artificial neural network used as a classifier to classify the hand gestures. 
Skin Color Detection Fourier Descriptor Geometrical Features Combination of features Classification using ANN Image Input Gesture Output International Journal of Computer Science and Information Security (IJCSIS), Vol. 16, No. 3, March 2018 63 https://sites.google.com/site/ijcsis/ ISSN 1947-5500
  • 2. The detailed implementation is explained in the successive sessions. The main objective of this paper is to present the contribution of the work done in direction to classify the hand gestures with the help of skin color correctly from images captured under different illumination conditions. The system which will be robust against variation in illumination and hence can be called as illumination invariant. The rest of this paper is organized as follows: Section 2 presents the literature review on illumination normalization and skin color detection. Experimental work is discussed in Section 3. Detailed results are presented in Section 4 followed by conclusions and future scope in Section 5. II. RELATED WORK Detection of skin color in an image is sensitive to several factors such as illumination conditions, camera characteristics, background, shadows, motions besides person dependent characteristics such as gender, ethnicity, makeup etc. A good skin color detector must be robust against illumination variations and must be able to cope up with the great variability of skin color between ethnic groups. Another challenge in detecting human skin color is the fact that the objects in the real world which may be in the background of the image can have skin tone colors, for example, leather, wood, skin-colored clothing, hairs etc. The systems not taking care of this aspect may have false detections. The purpose of the research work is to identify and classify the hand gestures with this type of uncontrolled environment. Image is represented in different color spaces including RGB, normalized RGB, HSV, YCbCr, YUV, YIQ, etc. Color spaces efficiently separating the chromaticity from the luminance components of color are typically considered preferable (Luma-Chroma model). This is due to the fact that by employing chromaticity- dependent components of color only, some degree of robustness to illumination changes can be achieved. Different Skin color models with comparison of their performance have been presented by Terrillon et.al. in [3]. The detection and segmentation of skin pixels using HSV and YCbCr color space has been explained by Khamar et. al. in [4] wherein an approach to discriminate color and intensity information under uneven illumination conditions is highlighted. The threshold based on histograms of the Hue, Saturation and Value (HSV) has been to classify the pixels into skin or non-skin category. The typical values of threshold applied to the chrominance components followed the limits as 150 <Cr<200 && 100<Cb<150. Chromacity clustering using k means of YCbCr color space to segment the hand against the uneven illumination and complex background has been implemented in [5] by Zhang Qiu et.al . The different experiments performed on the Jochen Triesch Static Hand Posture Database II were reported with comparison in terms of time consumed. Bahare Jalilian et.al. detected regions of face and hands in complex background and non-uniform illumination in [6]. The steps involved in their approach were skin color detection based on YCbCr color space, application of single Gaussian model followed by Bayes rule and morphological operations. Recognition accuracy for images with complex background reported was 95%. YCbCr color space was used in [7] by Hsiang et.al. to detect hand contour based on skin color against the complex background. Convex hull was calculated and the angle between finger spacing and the finger tip positions were derived to classify the hand gesture. 
The accuracy of the recognition rate reported was more than 95.1%. HSV based skin color detection was implemented by Nasser Dardas et.al in [8], The method has been reported to have real time performance and is robust against rotations, scaling and lighting conditions. Additionally it can tolerate occlusion well. The thresholding proposed was H between 0o to 20o and S between 75 and 190. The segmenting resulted in giving the hand contour which was subsequently compared with the templates of the contours of the hand postures. Four gestures were tested by the authors which indicated an average accuracy of more than 90%. HSV based hand skin color segmentation was used by Zhi-hua et.al in [9]. They presented an efficient and effective method for hand gesture recognition. The hand region is detected using HSV color model whrein they applied the thresholds as 315, 94, and 37 on H, S, V respectively through the background subtraction method. After hand detection, segmentation was carried out to separate out palm and fingers. Fingers and thumb were counted to recognize the gesture. The total classification accuracy of 1300 images tested by them has been reported was 96.69%. However the system failed to work satisfactorily in case of complex background. Wei Ren Tan et.al [10] proposed a novel human skin detection approach that combined a smoothened 2-D histogram and Gaussian model, for automatic human skin detection in color image(s). In their approach, an eye detector was used to refine the skin model for a specific person. This approach drastically reduced the computational costs as no training was required, and it improved the accuracy of skin detection to 90.39% despite wide variation in ethnicity and illumination. Log Chromaticity Color Space (LCCS) was proposed in [11] by Bishesh Khanal et.al. which gave illumination invariant representation of image. LCCS resulted into an overall classification rate (CR) of about 85%. A better CR (90.45%) was obtained when LCCS was calculated as against only luminance. In [12] , Yong Luo et.al. removed illumination component by subtracting the International Journal of Computer Science and Information Security (IJCSIS), Vol. 16, No. 3, March 2018 64 https://sites.google.com/site/ijcsis/ ISSN 1947-5500
mean estimation from the original image. To standardize the overall gray values of the different face images, a ratio matrix and modulus mean were calculated and used as features. The reported recognition rates for the Yale B+ face database were 92% using PCA and 94.28% using LDA. Hsu et al. addressed the issue of illumination changes by first normalizing the image using the geometric mean followed by a natural log of the normalized image [13]. The false rejection and false acceptance ratios reported by them were as low as 0.47% and 0% respectively. Mohamed Alsheakhali et al. in [14] proposed a technique for detection of the hand and determination of its center, tracking of the hand's trajectory, analysis of the variations in the hand locations, and finally recognition of the gesture. Their technique overcame background complexity and gave satisfactory results for a camera located up to 2.5 meters from the object of interest. Experimental results indicate that this technique could recognize 12 gestures with more than 94% recognition accuracy.

The extensive literature review reveals that a Luminance-Chrominance color model can be used to detect skin color and provides robustness against illumination variation. Chroma (chrominance) sampling is the key for color based segmentation in a real time environment. YCbCr was found to be promising for complex backgrounds, while HSV shows robustness against variation in the intensity of illumination while capturing the images. In order to achieve the benefits of both YCbCr and HSV, an approach based on the combination/fusion of the two, viz. YUV (a variant of YCbCr) and HSV color spaces, is proposed in this paper to detect the skin color. The YUV color space, which was initially coded for PAL analog video, is now also used in the CCIR 601 standard for digital video. The detailed implementation of this fusion and the results thereof are discussed in Section III.

III. EXPERIMENTAL WORK

As discussed in Section II, the first clue used to segment the hand from the image is skin color. For this purpose a Luminance-Chrominance color model is used. The pure color (chrominance) components are used to model the skin color, for instance the UV plane in YUV and the SV plane in HSV color space. However, under varying illumination conditions, the skin color of the hands from different databases, whether of different persons or even the same person, may vary. Sample images of hand gestures captured under varying illumination conditions and used in this paper are shown in Figure 2. These are available online for research purposes and are taken from the Sebastien Marcel database (Figure 2.a) [15], the Cambridge Hand Gesture database (Figure 2.b) [16] and the NUS Hand Posture database II (Figure 2.c) [17]. To reduce the effect of illumination variation, a normalized color space is used. Normalization is achieved by combining the YUV and HSV color spaces. For this, the RGB image is first converted into the YUV and HSV color spaces using (1) to (6). This separates the luminance and chrominance components of the image. Separating the chrominance approximates the "chromaticity" of skin (or, in essence, its absorption spectrum) rather than its apparent color value, thereby increasing the robustness against variation in illumination. In this process the luminance component is typically eliminated to remove the effect of shadows, variations in illumination etc.
Fig. 2: Images of hand gestures with variation in illumination: a) 'Five' from the Sebastien Marcel database [15], b) 'Flat' from the Cambridge Hand Gesture database [16] and c) 'B' from the NUS Hand Posture database II [17].

YUV is an orthogonal color space in which color is represented with statistically independent components. The luminance (Y) component is computed as a weighted sum of the RGB values, while the chrominance (U and V) components are computed by subtracting the luminance component from the B and R values respectively. Mathematically this conversion is given by the following equations:

Y = (R + 2G + B) / 4    (1)
U = B − Y               (2)
V = R − Y               (3)

HSV is invariant to dull surfaces and lighting and approximates the way humans perceive and interpret color. Research shows that the luminance may vary due to ambient lighting conditions and is not a reliable measure for detecting skin pixels; Saturation (S) and Value (V, brightness) can be used instead in order to minimize the influence of shadows and uneven lighting. Conversion from RGB to HSV color space is done using the following equations:

H = cos⁻¹{ 0.5[(R − G) + (R − B)] / √[(R − G)² + (R − B)(G − B)] }    (4)
S = 1 − 3·min(R, G, B) / (R + G + B)                                  (5)
V = (R + G + B) / 3                                                   (6)
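As an illustration, the following NumPy sketch mirrors equations (1) to (6). The function names are illustrative, the input is assumed to be an RGB image scaled to [0, 1], and the divisor of 4 in the Y term follows the weighted-sum form given in (1).

import numpy as np

def rgb_to_yuv(rgb):
    """Y, U, V per equations (1)-(3); rgb is a float array in [0, 1] with shape (..., 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = (r + 2.0 * g + b) / 4.0   # luminance as a weighted sum of R, G and B
    u = b - y                     # chrominance: blue minus luminance
    v = r - y                     # chrominance: red minus luminance
    return y, u, v

def rgb_to_hsv(rgb):
    """H, S, V per equations (4)-(6); H is returned in radians."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8                    # guards against division by zero on black pixels
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    h = np.arccos(np.clip(num / den, -1.0, 1.0))
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)
    v = (r + g + b) / 3.0
    return h, s, v

In practice, OpenCV's built-in cv2.cvtColor conversions could be used instead; the explicit form above simply mirrors the equations.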
The same algorithm as mentioned in [4] is used for human skin detection from the YUV and HSV color spaces. Histograms are used for deciding the threshold levels that discriminate skin pixels from non-skin pixels, and the output is an image containing only the skin pixels. The largest blob is detected as the hand, and an arm removal algorithm is then applied to segment the palm for further processing.
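A minimal OpenCV/NumPy sketch of this skin detection step is given below. The threshold ranges on S and U are illustrative placeholders rather than the values obtained from the histogram analysis, the input is assumed to be a BGR image as returned by cv2.imread, and the arm removal step is omitted.

import cv2
import numpy as np

def skin_mask(bgr, s_range=(0.15, 0.70), u_range=(-0.15, 0.05)):
    """Skin segmentation by thresholding S (from HSV) and U (from YUV), followed by
    morphology and largest-blob selection. Threshold ranges are placeholders; in the
    paper they come from histogram analysis of the training images."""
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8
    s = 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / (r + g + b + eps)  # eq. (5)
    y = (r + 2.0 * g + b) / 4.0                                          # eq. (1)
    u = b - y                                                            # eq. (2)
    mask = ((s > s_range[0]) & (s < s_range[1]) &
            (u > u_range[0]) & (u < u_range[1])).astype(np.uint8) * 255

    # Morphological opening/closing to remove speckle and fill small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Keep only the largest connected component, assumed to be the hand.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n <= 1:
        return mask
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return np.where(labels == largest, 255, 0).astype(np.uint8)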
After segmenting the hand using skin color, the boundary of the hand is detected. An object is generally described by its boundary in a meaningful manner; since each boundary is a composition of connected curves, the focus is on the description of connected curves. For hand gesture recognition, a technique is chosen which provides unique features suitable for shape representation and whose time complexity is low, so that recognition of static hand gestures can be done in real time. The technique is also expected to be invariant to translation, rotation and scaling. Different methods in the literature include the use of eccentricity, scale space and Fourier descriptors for shape detection. The 2-D Fourier transform is extensively used for shape representation and analysis; a review of the use of Fourier descriptors for 2-D shape detection and hand shape detection, together with its implementation, can be found in [18].

The coefficients calculated by applying the Fourier transform to the boundary form the Fourier descriptors of the shape. These descriptors represent the shape in the frequency domain: the global features of the shape are given by the low frequency descriptors, while the finer details are given by the higher frequency descriptors. Although the number of coefficients obtained after the transformation is generally large, a subset of them is sufficient to describe the overall features of the shape. The high frequency descriptors, which provide the finer details, are not needed for discriminating between shapes and can be ignored. By doing this, the dimensionality of the Fourier descriptors used for capturing the shape is significantly reduced, and the size of the feature vector is reduced accordingly.

A shape is a connected object described by a closed contour, which can be represented as a collection of pixel coordinates in the x and y directions; these coordinates can be considered as sampled values. Suppose that the boundary of a particular shape has P pixels numbered from 0 to P − 1, and that the p-th pixel along the boundary has position (xp, yp). The contour can then be described using two parametric equations:

x(p) = xp,  y(p) = yp    (7)

The Cartesian coordinates of the boundary pixels are not used directly; instead they are converted to the complex plane using the following equation:

s(p) = x(p) + j·y(p)    (8)

This means that the x-axis is treated as the real axis and the y-axis as the imaginary axis of a sequence of complex numbers. Although the interpretation of the sequence is recast, the nature of the boundary itself is not changed, and this representation has one great advantage: it reduces a 2-D problem to a 1-D problem. The Discrete Fourier Transform of this sequence is then taken to obtain the frequency spectrum. The Discrete Fourier Transform of s(p) is given by

a(u) = (1/P) Σ_{p=0..P−1} s(p) e^(−j2πup/P),   u = 0, 1, 2, …, P − 1    (9)

The complex coefficients a(u) are called the Fourier descriptors of the boundary. The inverse Fourier transform of these coefficients restores s(p) and is given by

s(p) = Σ_{u=0..P−1} a(u) e^(j2πup/P),   p = 0, 1, 2, …, P − 1    (10)

To increase the robustness of the system, geometrical features such as the eccentricity and the aspect ratio of the area and perimeter of the closed contour are also calculated from the properties of the region of the hand contour. The feature vector is formed by combining the skin color based shape features and the geometrical features.
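As an illustration, a short sketch of computing these descriptors and geometrical features with OpenCV and NumPy is given below. It consumes the binary hand mask from the earlier sketch, assumes the OpenCV 4.x API, and the particular area-to-perimeter ratio and ellipse-based eccentricity are illustrative interpretations of the geometrical features named above.

import cv2
import numpy as np

def contour_fourier_descriptors(mask):
    """Fourier descriptors of the largest closed contour in a binary mask,
    following equations (8)-(9): boundary pixels become complex samples s(p),
    and their DFT (scaled by 1/P) gives the descriptors a(u)."""
    # OpenCV 4.x API: findContours returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea).squeeze(1)    # boundary pixels, shape (P, 2)
    s = contour[:, 0].astype(np.float64) + 1j * contour[:, 1]  # s(p) = x(p) + j*y(p), eq. (8)
    a = np.fft.fft(s) / len(s)                                 # a(u), eq. (9)
    return a, contour

def geometric_features(contour):
    """Two geometrical features of the hand region: area-to-perimeter ratio and
    eccentricity of the fitted ellipse (illustrative definitions)."""
    pts = contour.reshape(-1, 1, 2)
    area = cv2.contourArea(pts)
    perimeter = cv2.arcLength(pts, closed=True)
    (_, _), (d1, d2), _ = cv2.fitEllipse(pts)                  # full axis lengths of the fitted ellipse
    ecc = np.sqrt(1.0 - (min(d1, d2) / max(d1, d2)) ** 2)
    return np.array([area / perimeter, ecc])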
The complete algorithm for feature vector formation and classification is summarized below.

Algorithm:
a) Read the RGB image.
b) Convert RGB to HSV and YUV.
c) Apply the skin detector algorithm based on the thresholds on S and U.
d) Perform the morphological operations.
e) Find the largest blob.
f) Detect the palm by using the arm removal algorithm.
g) Extract the boundary coordinates of the contour.
h) Apply the Fast Fourier Transform and calculate the Fourier descriptors.
i) Calculate the geometrical properties of the blob.
j) Combine the features (skin color + Fourier + geometrical features) to form the feature vector.
k) Repeat the procedure for all the images in the training and testing databases.
l) Train the backpropagation neural network to classify the gestures.
m) Test the network and find the accuracy.

IV. RESULTS AND DISCUSSION

As mentioned in Section III, the performance of the system is tested using three different datasets, with details as given below.
1. The Sebastien Marcel dataset consists of a total of 6 postures, viz. A, B, C, Point, Five and V, of 10 persons against 3 different backgrounds (light, dark and complex).
2. The Cambridge Hand Gesture dataset consists of 900 image sequences of 9 gesture classes. Each class has 100 image sequences performed by 2 subjects, captured under 5 different illuminations and 10 arbitrary motions. The 9 classes are defined by three primitive hand shapes and three primitive motions; for the experimentation we focus on the hand shapes under different illumination conditions.
3. The NUS Hand Posture database consists of postures by 40 subjects of different ethnicities against different complex backgrounds. The subset used in this experimentation consists of 4 hand postures repeated five times by each of the subjects, with hand posture images of size 160x120 pixels. 100 images are used for training and 100 for testing.

The skin detector is first applied to extract skin regions in the images from the three databases using the fusion of the HSV and YUV color spaces and applying the thresholds. The results of the skin detector algorithm on images from the three sets are presented in Figs. 4, 5 and 6. The hand postures shown in these figures are 'Five' from the Sebastien Marcel dataset, 'B' from the NUS dataset and 'Flat' from the Cambridge hand gesture dataset; the purpose of presenting the same hand shape for all the databases is to show that the proposed system works well for complex backgrounds and varying illumination conditions. Fig. 4 shows that the algorithm works quite well for the Sebastien Marcel dataset. Fig. 5 presents empirical results showing that the detection of the hand region is not up to the mark for the Cambridge hand gesture database under the 5th illumination condition, as can be seen from Fig. 2b. Fig. 6 shows the result of skin detection on the NUS dataset.

After detecting the skin, morphological operations were performed to obtain the closed contour of the hand. As explained in the algorithm in Section III, the Fourier descriptors were chosen as features and were calculated from the closed contour. The descriptors were then normalized by nullifying the 0th Fourier descriptor to obtain invariance to translation. Scale invariance was obtained by dividing all Fourier descriptors by the magnitude of the 1st Fourier descriptor, and rotation invariance was achieved by considering only the magnitudes of the Fourier coefficients. The feature vector was formed from 20 Fourier descriptor coefficients (invariant to scale, rotation and translation) and two geometrical features, viz. the area-to-perimeter aspect ratio and the eccentricity, making a total of 22 features. The geometrical features were calculated from the closed contour of the segmented hand.
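A small NumPy sketch of this normalization and feature-vector assembly is shown below. Which 20 coefficients are retained (here the 20 lowest-frequency ones after the 0th) is an assumption made for illustration.

import numpy as np

def fd_feature_vector(a, geom, n_keep=20):
    """Build the 22-element feature vector from Fourier descriptors a(u) and the
    two geometrical features:
      - translation invariance: nullify the 0th descriptor,
      - scale invariance: divide by the magnitude of the 1st descriptor,
      - rotation invariance: keep only the magnitudes of the coefficients."""
    a = a.copy()
    a[0] = 0.0                           # discard translation information
    mags = np.abs(a)                     # magnitudes only -> rotation invariance
    mags = mags / (mags[1] + 1e-12)      # normalize by |a(1)| -> scale invariance
    fd = mags[1:n_keep + 1]              # 20 low-frequency descriptors (illustrative choice)
    return np.concatenate([fd, geom])    # 20 Fourier + 2 geometrical = 22 features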
The feature vectors thus formed were then used to train and test a multilayer feed-forward neural network to classify the hand gestures. The network was trained using the back propagation algorithm with the Levenberg-Marquardt update rule, and the activation function used is the sigmoid. Fig. 3 shows the architecture of the neural network used in this experiment.

Fig. 3. Neural Network Architecture

The same experiment was also run on the NUS Hand Posture dataset I, which consists of 10 posture classes with 24 samples of each. As the background in that dataset is uniform, the observed classification accuracy is 100%.
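For illustration, a minimal training sketch using scikit-learn is given below. It uses placeholder random arrays standing in for the 22-element feature vectors and their gesture labels, sigmoid ("logistic") hidden units as in the paper, and an L-BFGS solver as a stand-in, since Levenberg-Marquardt training is not available in scikit-learn's MLPClassifier.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: in the actual pipeline these would be the 22-element
# feature vectors and gesture labels produced by the steps above.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(100, 22)), rng.integers(0, 6, size=100)
X_test, y_test = rng.normal(size=(100, 22)), rng.integers(0, 6, size=100)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(20,),   # one hidden layer; size is illustrative
                  activation="logistic",       # sigmoid units, as in the paper
                  solver="lbfgs",              # stand-in for Levenberg-Marquardt
                  max_iter=2000, random_state=0),
)
clf.fit(X_train, y_train)
print("classification accuracy: %.2f%%" % (100.0 * clf.score(X_test, y_test)))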
Fig. 4. Results of the skin detector for posture 'Five' from the Sebastien Marcel dataset.
Fig. 5. Results of the skin detector for posture 'Flat' from the Cambridge dataset.
Fig. 6. Results of the skin detector for posture 'B' from the NUS Hand Posture Dataset II.

The results of the proposed work are presented in the following tables. For the experimentation, the six postures 'A', 'B', 'C', 'Point', 'V' and 'Five' from the Sebastien Marcel database have been used. Table 1 gives the individual accuracy for each of these postures; the average accuracy achieved is 96%.

TABLE 1: CLASSIFICATION ACCURACY FOR THE SEBASTIEN MARCEL DATABASE

Static gesture | No. of samples | Correct | Incorrect | Classification accuracy (%)
A     | 100 | 96 | 4 | 96
B     | 100 | 94 | 6 | 94
C     | 100 | 95 | 5 | 95
Point | 100 | 98 | 2 | 98
V     | 100 | 96 | 4 | 96
Five  | 100 | 97 | 3 | 97
Average classification accuracy: 96

The proposed work is compared with existing state-of-the-art techniques on the same benchmark dataset in Table 2, which shows that the results obtained in this paper are comparable with those of the existing techniques.

TABLE 2: COMPARISON WITH EXISTING STATE-OF-THE-ART TECHNIQUES FOR THE SEBASTIEN MARCEL DATABASE

Reference | Features | Classifier | Accuracy (%)
[19] | Modified Census Transform | AdaBoost | 81.25
[20] | Haar-like features | AdaBoost | 90.0
[21] | Haar wavelets | Penalty score | 94.89
[22] | Scale space features | AdaBoost | 93.8
[23] | Bag of features | Support Vector Machine | 96.23
[24] | Normalized Moment of Inertia (NMI) and Hu invariant moments | Support Vector Machine | 96.9
Proposed method | Skin color and Fourier Descriptor | Artificial Neural Network | 96

Four hand gestures, 'A', 'B', 'C' and 'D', from the NUS Hand Posture dataset are used for the experiments. 100 samples of each posture are used for training and 100 for testing. The results of the experiment are presented in Table 3; an average accuracy of 95.25% is achieved.
TABLE 3: CLASSIFICATION ACCURACY FOR THE NUS HAND POSTURE DATABASE II

Static gesture | No. of samples | Correct | Incorrect | Classification accuracy (%)
A | 100 | 96 | 4 | 94
B | 100 | 94 | 6 | 94
C | 100 | 97 | 3 | 97
D | 100 | 96 | 4 | 96
Average classification accuracy: 95.25

The results obtained through this experimentation are compared with state-of-the-art techniques in Table 4; the comparison shows that the proposed method performs better than the existing methods.

TABLE 4: COMPARISON WITH EXISTING STATE-OF-THE-ART TECHNIQUES FOR THE NUS HAND POSTURE DATABASE II

Reference | Features | Classifier | Accuracy (%)
[25] | Shape based and texture based features | GentleBoost | 75.71
[26] | Viola-Jones | Real Time Deformable Detector | 90.66
[27] | NUS standard model features (SMFs) | Fuzzy Rule Classifier / Support Vector Machine | 93.33 / 92.50
[28] | Shape, texture and color | Support Vector Machine | 94.36
Proposed method | Skin color and Fourier Descriptor | Artificial Neural Network | 95.25

Three primitive hand shapes, 'Flat', 'Spread' and 'V', from the Cambridge Hand Gesture database are used for testing the proposed algorithm. The experiment is carried out for each set of the database and the results are reported in Table 5; the average accuracy is 93.67%. The results obtained through this experimentation are also compared with state-of-the-art techniques in Table 6.

TABLE 5: CLASSIFICATION ACCURACY FOR THE CAMBRIDGE HAND GESTURE DATABASE

Static gesture | No. of samples | Set 1 | Set 2 | Set 3 | Set 4 | Set 5
Flat   | 100 | 93 | 96 | 96 | 92 | 94
Spread | 100 | 94 | 95 | 93 | 93 | 93
V      | 100 | 95 | 96 | 95 | 92 | 94
Average classification accuracy | | 94 | 95.67 | 94.67 | 92.33 | 93.67

TABLE 6: COMPARISON WITH EXISTING STATE-OF-THE-ART TECHNIQUES FOR THE CAMBRIDGE HAND GESTURE DATABASE

Reference | Features | Classifier | Accuracy (%)
[29] | PCA on Motion Gradient Orientation | Sparse Bayesian Classifier | 80
[30] | Canonical Correlation Analysis (CCA) + SIFT | Support Vector Machine | 85
[31] | Concatenated HOG | Kernel Discriminant Analysis with RBF kernel | 91.1
[32] | Fourier Descriptors (static postures, 4 shapes) | Support Vector Machine | 92.5
Proposed method | Skin color and Fourier Descriptor | ANN | 94.50

V. CONCLUSION AND FUTURE SCOPE

This paper proposed a system for hand segmentation and classification. The main component of the system is tracking the hand based on skin color under different illumination conditions and against complex backgrounds. Fusion of the HSV and YUV color spaces to detect the skin color gave invariance to illumination even with complex backgrounds. The closed contour of the segmented hand is used to detect the shape of the hand gesture, and Fourier descriptors are calculated as shape descriptors. To improve the robustness of the shape detection, geometrical features are added to the feature vector. The feature vector obtained by combining the shape features and the geometrical features is given to an artificial neural network for classification. An average classification accuracy of 95.25% is achieved over the three databases. The hand postures in the databases are captured from different viewing angles, so the classification accuracy could be increased further by extracting view invariant features from the images; this lays out a direction for further research in this area.

REFERENCES

[1]. Vladimir I. Pavlovic et al., 'Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review', IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, July 1997.
[2]. Sushmita Mitra, Tinku Acharya, 'Gesture Recognition: A Survey', IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 37, no. 3, May 2007.
[3]. Terrillon, J., Shirazi, M. N., Fukamachi, H., Akamatsu, S., 'Comparative performance of different skin chrominance models and chrominance spaces for the automatic detection of human faces in color images', IEEE International Conference on Face and Gesture Recognition, pp. 54-61, 2000.
[4]. Khamar Basha Shaik et al., 'Comparative Study of Skin Color Detection and Segmentation in HSV and YCbCr Color Space', 3rd International Elsevier Conference on Recent Trends in Computing, 2015.
[5]. Zhang Qiu-yu, Lu Jun-chi, et al., 'Hand Gesture Segmentation Method Based on YCbCr Color Space and K-Means Clustering', International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 8, no. 5, pp. 105-116, 2015.
[6]. Bahare Jalilian, Abdolah Chalechale, 'Face and Hand Shape Segmentation Using Statistical Skin Detection for Sign Language Recognition', Computer Science and Information Technology, DOI: 10.13189/csit.2013.010305.
[7]. Hsiang-Yueh Lai, Han-Jheng Lai, 'Real-Time Dynamic Hand Gesture Recognition', International Symposium on Computer, Consumer and Control (IS3C), 2014, DOI: 10.1109/IS3C.2014.177.
[8]. Nasser H. Dardas, Emil M. Petriu, 'Hand Gesture Detection and Recognition Using Principal Component Analysis', IEEE International Conference on Computational Intelligence for Measurement Systems and Applications (CIMSA), 2011.
[9]. Zhi-hua Chen, Jung-Tae Kim, Jianning Lian, et al., 'Real-Time Hand Gesture Recognition Using Finger Segmentation', The Scientific World Journal, vol. 2014, Article ID 267872, http://dx.doi.org/10.1155/2014/267872.
[10]. Wei Ren Tan, et al., 'A Fusion Approach for Efficient Human Skin Detection', IEEE Transactions on Industrial Informatics, vol. 8, no. 1, pp. 138-147.
[11]. Bishesh Khanal, et al., 'Efficient Skin Detection Under Severe Illumination Changes and Shadows', ICIRA 2011 - 4th International Conference on Intelligent Robotics and Applications, Aachen, Germany, pp. 1-8, Dec. 2011.
[12]. Yong Luo, et al., 'Illumination Normalization Method Based on Mean Estimation for a Robust Face Recognition', research article.
[13]. Hsu, R., Abdel-Mottaleb, M., Jain, A. K., 'Face detection in color images', IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, pp. 696-706, 2002.
[14]. Mohamed Alsheakhali, Ahmed Skaik, 'Hand Gesture Recognition System', International Conference on Information and Communication Systems (ICICS), 2011.
[15]. S. Marcel, 'Hand posture recognition in a body-face centered space', in Proc. Conf. Human Factors in Computing Systems (CHI), 1999.
[16]. T.-K. Kim et al., 'Tensor Canonical Correlation Analysis for Action Classification', in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Minneapolis, MN, 2007.
[17]. Pramod Kumar Pisharady, Prahlad Vadakkepat, Ai Poh Loh, 'Attention Based Detection and Recognition of Hand Postures Against Complex Backgrounds', International Journal of Computer Vision, vol. 101, no. 3, pp. 403-419, February 2013.
[18]. A. R. Patil, S. Subbaraman, 'Static Hand Gesture Detection and Classification using Contour based Fourier Descriptor', 9th World Congress International Conference on Science and Engineering Technology, Goa, Jan. 2016, http://iferp.in/digital_object_identifier/WCASETG9718.
[19]. Agnès Just et al., 'Hand Posture Classification and Recognition using the Modified Census Transform', 7th IEEE International Conference on Automatic Face and Gesture Recognition (FGR'06), 2006.
[20]. Q. Chen, N. Georganas, E. Petriu, 'Real-time vision-based hand gesture recognition using Haar-like features', in Proc. IEEE IMTC, 2007, pp. 1-6.
[21]. Y. Fang, K. Wang, J. Cheng, H. Lu, 'A real-time hand gesture recognition method', in Proc. IEEE Int. Conf. Multimedia and Expo, 2007, pp. 995-998.
[22]. W. Chung, X. Wu, Y. Xu, 'A real time hand gesture recognition based on Haar wavelet representation', in Proc. IEEE Int. Conf. Robotics and Biomimetics, 2009, pp. 336-341.
[23]. Nasser H. Dardas, Nicolas D. Georganas, 'Real-Time Hand Gesture Detection and Recognition Using Bag-of-Features and Support Vector Machine Techniques', IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 11, November 2011.
[24]. Y. Ren, C. Gu, 'Real-time hand gesture recognition based on vision', in Proc. Edutainment, 2010, pp. 468-475.
[25]. Serre, T., et al., 'Robust object recognition with cortex-like mechanisms', IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(3), pp. 411-426, 2007.
[26]. Uriel Haile Hernandez-Belmonte, Victor Ayala-Ramirez, 'Real-Time Hand Posture Recognition for Human-Robot Interaction Tasks', Sensors 2016, 16, 36, doi:10.3390/s16010036.
[27]. P. Pramod Kumar et al., 'Hand Posture and Face Recognition Using a Fuzzy-Rough Approach', International Journal of Humanoid Robotics, vol. 7, no. 3, pp. 331-356, 2010, World Scientific Publishing Company, DOI: 10.1142/S0219843610002180.
[28]. Pramod Kumar Pisharady et al., 'Attention Based Detection and Recognition of Hand Postures Against Complex Backgrounds', International Journal of Computer Vision, DOI: 10.1007/s11263-012-0560-5, Springer.
[29]. Shu-Fai Wong, Roberto Cipolla, 'Real-time Interpretation of Hand Motions using a Sparse Bayesian Classifier on Motion Gradient Orientation Images', BMVC 2005, doi:10.5244/C.19.41.
[30]. T.-K. Kim, R. Cipolla, 'Canonical correlation analysis of video volume tensors for action categorization and detection', IEEE Trans. Pattern Analysis and Machine Intelligence, 31(8):1415-1428, 2009.
[31]. Masoud Faraki, Mehrtash T. Harandi, Fatih Porikli, 'Image Set Classification by Symmetric Positive Semi-Definite Matrices'.
[32]. Heba M. Gamal et al., 'Hand Gesture Recognition using Fourier Descriptors', ©2013 IEEE.

Authors' Profiles

Mrs. Anjali R. Patil completed her BE (2002) and ME (2009) from Shivaji University, Kolhapur, and is currently pursuing a PhD in Electronics Engineering from Shivaji University, Kolhapur. Her research interests include image processing, pattern recognition and soft computing.

Dr. Mrs. Shaila Subbaraman, Ph.D. from I.I.T. Bombay (1999) and M.Tech. from I.I.Sc. Bangalore (1975), has vast industry experience as an R&D engineer and Quality Assurance Manager in the field of manufacturing semiconductor devices and ICs. She also has more than 27 years of teaching experience at both UG and PG level for courses in Electronics Engineering. Her specialization is in Microelectronics and VLSI Design, and she has more than fifty publications to her credit. She retired as Dean Academics of the autonomous Walchand College of Engineering, Sangli, and currently works as Professor (PG) in the same college. Additionally, she works as an NBA expert for evaluating engineering programs in India in accordance with the Washington Accord. Recently she was felicitated with the "Pillars of Hindustani Society" award, instituted by the Chamber of Commerce, Mumbai, for her contribution to Higher Education in Western Maharashtra.