International Journal of Computer Sciences and Engineering (Open Access)
Research Paper, Volume-2, Issue-4, pp. 185-189, April 2014, E-ISSN: 2347-2693
© 2014, IJCSE All Rights Reserved

Gesture Recognition Using Artificial Neural Network
Khushboo Arora, Shrutika Suri, Divya Arora and Vaishali Pandey
A.P., CSE, MRIU, India
www.ijcaonline.org
Published: 30/04/2014
Abstract- Information can be communicated between two people through various media, which may be linguistic or gestural. Gesture recognition means the identification and recognition of gestures originating from any type of body motion, most commonly from the face or hands. It is the process by which gestures made by users are used to convey information, and it supports important aspects of human interaction, both interpersonal and in the context of human-computer interfaces. Several approaches are available for recognizing gestures, among them MATLAB-based implementations and Artificial Neural Networks. This paper is a comprehensive evaluation of how gestures can be recognized in a more natural way using neural networks. The process consists of three stages: image acquisition, feature extraction and recognition. In the first stage the image is captured using a webcam or digital camera at an appropriate frame rate. In the second stage features are extracted from the input image; these may include the angles between fingers, the number of fingers that are open, closed or semi-closed, and the identification of each finger. Finally, a neural network is used to recognize the image.
I. INTRODUCTION
In the present-day framework, communication is defined as the exchange of thoughts and messages through speech, visuals, signals or behavior. People with hearing and speech impairments, however, have difficulty communicating with others, so they use their hands to express their ideas, making different gestures to communicate. The gestures include the formation of English alphabet letters; this is called sign language. When such users communicate through a computer, the gestures may not be easy for the person on the other side to follow, so for easier understanding these gestures can be converted to text messages and also to spoken words. Gesture recognition means the identification and recognition of gestures originating from any type of body motion, most commonly from the face or hands. Gesture language identification is one of the areas being explored to help the deaf integrate into the community, and it has high applicability.
Gesture recognition is the process by which gestures made by the user are used to convey information or to control devices. In everyday life, physical gestures are a powerful means of communication. A set of physical gestures may constitute an entire language, as in sign languages, and they can economically convey a rich set of facts and feelings. This paper makes the modest suggestion that gesture-based input is a beneficial technique for conveying information or for device control through the identification of specific human gestures.
Various approaches for hand gesture recognition have been proposed, ranging from mathematical models such as hidden Markov chains to approaches based on soft computing. Sebastien Marcel, Olivier Bernier, Jean-Emmanuel Viallet and Daniel Collobert proposed gesture recognition using input-output hidden Markov models. Jianjie Zhang et al. proposed a new complexion model to extract hand regions under a variety of lighting conditions; for hand detection, many approaches use color or motion information. Attila Licsar and Tamas Sziranyi developed a hand gesture recognition system based on shape analysis of the static gesture. Another method, proposed by E. Stergiopoulou and N. Papamarkos, achieves detection of the hand region through color segmentation. Byung-Woo Min, Ho-Sub Yoon, Jung Soh, Yun-Mo Yang and Toshiaki Ejima suggested hand gesture recognition using hidden Markov models. Huang et al. used a 3D neural network method to develop a Taiwanese Sign Language (TSL) recognition system that recognizes 15 different gestures.
In contrast to these approaches, this paper introduces hand gesture recognition through a neural network that captures gestures from a webcam and produces text and speech output.
A. TYPES OF GESTURE RECOGNITION
Gesture recognition is the act of interpreting motions to determine the intent behind them. Specific human gestures can be identified using gesture recognition technology and used to convey information or to control devices in various applications. It is the mathematical interpretation of a human motion by a computing device. Gesture recognition, along with facial recognition, voice recognition, eye tracking and lip movement recognition, is a component of what developers refer to as a perceptual user interface. Gestures can be static (the user assumes a certain pose or configuration) or dynamic (with pre-stroke, stroke, and post-stroke phases). Gesture recognition can broadly be of the following types:
Hand and Arm Gesture Recognition: Hand gesture recognition covers hand poses and sign languages. It acts as a highly adaptive interface between machines and their users; an example is waving a hand in front of a camera. Hand gesture technology allows complex machines to be operated using only a series of finger and hand movements, eliminating the need for physical contact between operator and machine.
Head and Face Gesture Recognition: Face gesture recognition creates an effective non-contact interface between users and their machines. Facial gestures are a direct, naturally preeminent means for humans to communicate their emotions. The goal of face gesture recognition is to make machines effectively understand human emotion, regardless of the countless physical differences between individuals. Some examples are a) nodding or shaking of the head, b) raising of the eyebrows, c) lip movement, and d) emotions such as happiness, sadness, shock, anger and fear. Recognizing facial expressions involves extracting sensitive features from facial landmarks such as the regions surrounding the mouth, nose and eyes of a normalized image. Like hand gesture recognition, this technology faces its own set of unique problems caused by physical differences in human faces.
Fig 1. Types of Face gesture
Body Gesture Recognition: Body gesture recognition involves full-body motion: recognizing body gestures and recognizing human activity, for example a) tracking the movement of two people interacting outdoors, or b) recognizing human gaits for medical rehabilitation and athletic training.
Hand gesture recognition uses two techniques:
a) Glove-based hand gesture recognition
b) Vision-based hand gesture recognition
A glove-based system requires the user to be connected to the computer: the user must wear a cumbersome device and carry a load of cables connecting it to the computer, then wear the glove and make gestures in front of the camera. This hinders the ease and naturalness of the user's interaction with the computer.
A vision-based system uses one or more cameras to record images of human hand gestures under lighting conditions that enhance gesture classification accuracy. It is fast and can easily detect movements of the fingers while the user's hand is moving. A vision-based device can handle properties such as the texture and color of gestures. Its performance depends upon:
a) the number of cameras used;
b) their speed and latency;
c) whether it operates in 2-D or 3-D;
d) user requirements;
e) time constraints.
A vision-based system will at best get a general sense of the type of finger motion. To create the database for a gesture system, the gestures should be selected together with their relevant meaning, and each gesture may contain multiple samples to increase the accuracy of the system.
Fig 2. Examples of a) Data Glove and b) Vision-based systems
II. ARTIFICIAL NEURAL NETWORK
The concept of the ANN is borrowed from biology, where neural networks play a key role in the human body. All the work in our body is done through neural networks, which consist of billions of neurons connected to each other and working in parallel. Each neuron receives inputs from other neurons in the form of tiny electrical signals and, likewise, outputs electrical signals to other neurons. These outputs are weighted in the sense that the neuron does not 'fire' any output unless a certain threshold/bias is reached, and the weights can be altered through learning experiences.
Similar to the human brain, artificial neural networks consist of artificial neurons, called perceptrons, that receive numerical values; after the inputs are weighted and summed, the result is transformed by a transfer function into the output. The transfer function may be, for example, the sigmoid or the hyperbolic tangent function.
Fig 3. Representation of an Artificial Neuron
net = Σ (xi · wi) + bias,  output = f(net)
Today neural networks can be trained to solve problems that are difficult for conventional computers or human beings. Supervised training methods are most commonly used, but networks can also be obtained from unsupervised training techniques or from direct design methods.
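As a minimal sketch of the artificial neuron described above (the sigmoid transfer function and all input, weight and bias values here are illustrative assumptions, not values from the paper):

```python
import numpy as np

def sigmoid(z):
    """Sigmoid transfer function, one of the options mentioned above."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x, w, bias):
    """Weight and sum the inputs, then transform the result with the transfer function."""
    net = np.dot(x, w) + bias
    return sigmoid(net)

# Example: three numerical inputs with arbitrary weights and bias
x = np.array([0.5, 0.2, 0.9])
w = np.array([0.4, -0.6, 0.3])
print(neuron_output(x, w, bias=0.1))
```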
A. BACKPROPAGATION LEARNING ALGORITHM
Backpropagation, first described by Paul Werbos in 1974 and further developed by David E. Rumelhart, Geoffrey E. Hinton and Ronald J. Williams in 1986, is a supervised learning technique. It requires a differentiable activation function. Backpropagation neural networks use the delta rule for learning. With the delta rule, 'learning' is a supervised process that occurs with each cycle or 'epoch', through a forward activation flow of outputs and a backward propagation of error that drives the weight adjustments. More simply, when a neural network is initially presented with a pattern it makes a random 'guess' as to what it might be; it then sees how far its answer was from the actual one and makes an appropriate adjustment to its connection weights. Backpropagation works well in a multilayer feed-forward network.
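A minimal sketch of one delta-rule update for a single sigmoid neuron, as described above (the learning rate and the use of the sigmoid-derivative error term are standard choices assumed here, not specified in the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def delta_rule_step(x, w, bias, target, lr=0.1):
    """One forward pass followed by one delta-rule adjustment of the weights."""
    net = np.dot(x, w) + bias
    y = sigmoid(net)                 # the network's current 'guess'
    error = target - y               # how far the guess is from the actual answer
    grad = error * y * (1.0 - y)     # error scaled by the sigmoid's derivative
    w = w + lr * grad * x            # adjust connection weights
    bias = bias + lr * grad
    return w, bias, error
```

Repeating this step over many epochs, and propagating the error term backwards layer by layer, gives the full backpropagation procedure for a multilayer network.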
B. FEED-FORWARD MULTILAYER NEURAL NETWORK
Feed-forward ANNs allow signals to travel one way only, from input to output. There is no feedback (no loops); that is, the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs, and they are extensively used in pattern recognition. In computing, feed-forward normally refers to a multi-layer perceptron network in which the outputs from all neurons go to following but not preceding layers, so there are no feedback loops. [2]
Fig 5. Feedforward Multilayer Network
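A minimal sketch of a forward pass through such a multilayer feed-forward network (the 3-4-2 layer sizes and the random weights are purely illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_pass(x, weights, biases):
    """Propagate an input vector through each layer in turn; signals travel one way only."""
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)   # the output of one layer feeds the next
    return a

# Illustrative 3-4-2 network: 3 inputs, one hidden layer of 4 neurons, 2 outputs
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases = [np.zeros(4), np.zeros(2)]
print(forward_pass(np.array([0.5, 0.2, 0.9]), weights, biases))
```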
III. METHODOLOGY
In this paper, a new method for gesture recognition is defined. The presented system is based on one powerful hand feature in combination with a multi-layer neural network classifier. The flow diagram below explains the phases involved in the process. A webcam is used to capture the gesture made by the person in front of the computer. The input video is converted into frames and segmentation is applied to each frame. After segmentation, a contour of the hand image is used as the feature that describes the hand shape. After the extraction phase comes the recognition phase, where the extracted features are fed into the neural network to recognize the particular character.
Generally speaking, this method contains four main steps (a high-level sketch of the pipeline follows the list):
1. Gesture modeling
2. Segmentation
3. Feature extraction
4. Classification
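A minimal sketch of how these steps could be wired together (the helper functions passed in are placeholders that mirror the Segmentation and Feature Extraction sketches later in the paper, and `classify` stands for the trained feed-forward network; none of these names come from the paper):

```python
import cv2

def recognize_from_webcam(segment, extract, classify, labels):
    """Capture frames from the webcam, segment the hand, extract the contour
    feature, and classify it into a character with the neural network."""
    cap = cv2.VideoCapture(0)                  # webcam in front of the user
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = segment(frame)              # skin-color segmentation
            feature = extract(mask)            # hand-shape descriptor
            if feature is not None:
                print("Recognized character:", labels[classify(feature)])
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
    finally:
        cap.release()
```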
Fig 4. Backpropagation network
Fig 6. Flow Diagram of Recognition System
A. GESTURE MODELLING
By gesture modeling, one means the selection and formation of proper gestures. In Human-Computer Interaction this is an essential aspect of designing an appropriate gesture vocabulary. One purpose of Human-Computer Interaction is to make computer tasks controllable by a set of commands in the form of hand gestures.
B. SEGMENTATION
Segmentation is based on skin color and is used to separate the skin area from the background. [11] The effect of luminosity should be segregated from the color components, which makes the HSI color model a better choice than RGB. The input RGB gesture image is therefore converted to HSI form to reduce the burden on the network and to improve accuracy. After segmentation, the hand region is assigned white and the other areas are assigned black.
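A minimal sketch of such skin-color segmentation (OpenCV offers the closely related HSV space rather than HSI, so HSV is used as a stand-in here, and the threshold values are illustrative assumptions, not values from the paper):

```python
import cv2
import numpy as np

def segment_hand(frame_bgr):
    """Skin-color segmentation: the hand region becomes white, everything else black."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)       # separate chroma from intensity
    lower_skin = np.array([0, 40, 60], dtype=np.uint8)     # assumed skin-tone range
    upper_skin = np.array([25, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower_skin, upper_skin)        # 255 where skin-colored, 0 elsewhere
    return cv2.medianBlur(mask, 5)                         # remove small speckles
```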
C. FEATURE EXTRACTION
The feature extraction aspect of image analysis seeks to identify inherent characteristics, or features, of the objects found within an image. These characteristics are used to describe the object, or attributes of the object. Feature extraction produces a list of descriptions, or a 'feature vector'. For static hand gestures, features such as fingertips, finger directions and the hand's contours can be extracted. [12] Feature extraction is a complex problem, and often the whole image or a transformed image is taken as input; features are thus selected implicitly and automatically.
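A minimal sketch of extracting a contour-based feature vector from the segmented mask (assuming OpenCV 4's findContours signature; the fixed length of 64 contour points is an illustrative assumption, since the paper does not specify its feature-vector size):

```python
import cv2
import numpy as np

def contour_feature(mask, length=64):
    """Take the largest contour in the binary mask as the hand shape and
    resample it to a fixed-length, normalized feature vector."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)            # largest contour = hand region
    points = hand.reshape(-1, 2).astype(np.float32)
    idx = np.linspace(0, len(points) - 1, length).astype(int)
    points = points[idx]
    points -= points.mean(axis=0)                        # translation invariance
    points /= (np.abs(points).max() + 1e-8)              # scale invariance
    return points.flatten()                              # (x, y) pairs as the feature vector
```

This vector would then be fed to the feed-forward network described in Section II for classification.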
IV. CONCLUSION AND FUTURE SCOPE
Gesture recognition provides one of the most important means of non-verbal interaction among people, especially for people with hearing and speech impairments. In this paper we have presented an idea of hand gesture recognition using neural networks, one of the most effective soft-computing techniques for the hand gesture recognition problem.
A neural network is efficient as long as the data sets are small and no further improvement is expected. Another advantage of using neural networks in our research is that conclusions can be drawn from the network output. In this paper we have also used the backpropagation algorithm and a feed-forward network.
Gestures could be identified from the input hand gesture video by identifying the fingers and their postures; the segmentation of the hand and the fingers plays a crucial role in this process. Accuracy was increased when neural networks were used. In our paper we have proposed neural networks with a sigmoid transfer function, which gave better results compared to other architectures. The detection capability of the system could be expanded to body gestures as well.