Sign Language Detection using Deep Learning
Content
➔ Introduction
➔ Literature Review
➔ Objectives
➔ Methodology
➔ Results
➔ Working Model
➔ Conclusion
Introduction
● Sign language is a manual communication method used by people who are deaf or mute.
● Hand gestures are one of the main methods of non-verbal communication in this language.
● In this project, we use the ISL (Indian Sign Language) dataset from Kaggle as well as ISLTranslate, a dataset of frames and videos covering sign language, visual language, fingerspelling, and facial expressions in Indian Sign Language.
● The model uses a deep learning architecture that is efficient at image recognition (a Convolutional Neural Network).
● We train this model on the acquired dataset to recognize hand gestures and hand movements.
● Once the model can classify and recognize the images in real time, it will generate English text corresponding to the sign language, making communication with deaf and mute people easier.
Motivation
➔ Sign language is a manual type of communication commonly used by deaf and mute
people.
➔ Our goal is to improve communication between deaf/mute people from different regions and those who cannot understand sign language.
Technical Concepts (Algorithms) used
Using a CNN (Convolutional Neural Network) for Indian Sign Language classification involves several technical concepts and algorithms. A CNN is a deep learning architecture known for its ability to process grid-like data such as images and videos effectively. When applying a CNN architecture to Indian Sign Language classification, we encountered the following technical concepts and algorithms:
● CNN Architecture: A Convolutional Neural Network (CNN) is designed for computer vision tasks. It stacks several kinds of layers so that very deep networks can be trained effectively on image and video datasets.
● Convolutional Layers: These layers automatically extract features from hand gesture images. They are crucial for recognizing patterns in images of the hand, helping the model identify sign language gestures and classify them correctly.
● Pooling Layers: These layers reduce the spatial dimensions of their input. This lowers the computational cost and focuses the network on the most important features while retaining the essential information.
● Fully Connected Layers: These layers connect every neuron in one layer to every neuron in the next, enabling the network to classify and predict based on the learned weights and features.
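As a toy illustration of these three layer types, here is a minimal numpy forward pass (the image, kernel, and weights below are made up for illustration; they are not the project's actual model):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid convolution: slide the kernel over the image to extract local features."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Max pooling: shrinks each spatial dimension, keeping the strongest activation."""
    h, w = feature_map.shape
    out = np.zeros((h // size, w // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = feature_map[i * size:(i + 1) * size, j * size:(j + 1) * size].max()
    return out

def fully_connected(x, weights, bias):
    """Dense layer: every input value connects to every output neuron."""
    return weights @ x + bias

# A toy 6x6 "image" run through conv -> pool -> fully connected.
img = np.arange(36, dtype=float).reshape(6, 6)
feat = conv2d(img, np.ones((3, 3)) / 9.0)       # (4, 4) feature map
pooled = max_pool(feat)                         # (2, 2) after pooling
logits = fully_connected(pooled.flatten(), np.ones((3, 4)), np.zeros(3))  # 3 "classes"
print(feat.shape, pooled.shape, logits.shape)
```

In the real project these operations come from a deep learning framework; the loop-based versions here only show what each layer computes.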
Problem Statement
Sign language is a manual form of communication commonly used by deaf and mute people. It is not universal, so deaf/mute people from different regions use different sign languages. This project therefore aims to improve communication between deaf/mute people from different areas and those who cannot understand sign language. We use deep learning methods, which can improve the classification accuracy of sign language gestures.
Area of Applications
● Human-Computer Interaction (HCI): By enabling people to interact with computers and other electronic devices using natural hand gestures, hand gesture recognition systems transform conventional input techniques and improve user experiences.
● Assistive Technology: Hand gesture recognition systems can help people with mobility limitations use devices, traverse interfaces, and communicate more successfully.
● Sign Language Recognition: Hand gesture recognition systems are essential for deciphering sign language motions, enabling smooth communication with deaf and hard-of-hearing people.
● VR and Gaming: Hand gesture recognition technologies let users interact with virtual worlds and play games in a natural way, controlling avatars or characters with their hands.
● Home Automation: Hand gesture recognition technologies can be incorporated into smart home systems, letting users operate lights, appliances, and other IoT devices with simple hand gestures.
● Medical Applications: To improve efficiency and accuracy in medical procedures, surgeons and other medical professionals use hand gesture recognition systems in operating rooms and diagnostic settings.
Dataset and Input Format
➔ Input Format:
Hand Gesture Image Data:
High-resolution hand gesture images should make up the majority of the dataset. These images must cover a variety of alphabets and numerals, each labeled with its hand gesture, averaging about 1,200 images.
Image Labels:
For the supervised learning approach to work, accurate alphabetic and numeric labels are essential. A clear label identifies the precise gesture that each hand gesture image depicts. To enable model training, every image must be paired with its correct label.
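Assuming a folder-per-label layout (hypothetical here; the actual Kaggle ISL dataset layout may differ), the image-to-label pairing needed for supervised training can be sketched as:

```python
import tempfile
from pathlib import Path

# Build a tiny fake dataset: one folder per label, placeholder files standing in for photos.
root = Path(tempfile.mkdtemp())
for label in ["A", "B", "1"]:
    (root / label).mkdir()
    for i in range(3):
        (root / label / f"img_{i}.jpg").touch()

# Pair each file path with the label implied by its parent folder.
samples = [(str(p), p.parent.name) for p in sorted(root.glob("*/*.jpg"))]
classes = sorted({lab for _, lab in samples})
label_to_index = {lab: i for i, lab in enumerate(classes)}
y = [label_to_index[lab] for _, lab in samples]
print(len(samples), classes)
```

The integer targets in `y` are what a CNN's final softmax layer is trained against.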
LITERATURE SURVEY - 1
Title: Hand Gesture Recognition Based on Computer Vision: A Review of Techniques
Journal: Journal of Imaging
Methodology:
● The study reviews research that investigates hand gestures as a nonverbal communication method in a variety of domains, including medical applications, robot control, human-computer interaction (HCI), home automation, and communication for the deaf and mute.
● It groups the literature by approach, such as computer vision and instrumented sensor technology.
● Additionally, the article classifies hand gestures by posture, dynamic/static nature, or hybrid forms.
Research Gap:
● The surveyed work largely ignores real-world healthcare applications in favour of computer applications, sign language, and virtual environment engagement.
● The majority of studies place more emphasis on developing algorithms and improving frameworks than on actually implementing healthcare practices, which indicates a large research gap in this area.
LITERATURE SURVEY - 2
Title: Hand Gesture Recognition: A Literature Review
Journal: International Journal of Artificial Intelligence & Applications
Methodology:
● The literature discusses techniques including orientation histograms for feature representation, fuzzy c-means clustering, neural networks (NN), and hidden Markov models (HMM).
● In particular, HMM techniques perform well on dynamic gestures, especially in robot control scenarios.
● Neural networks play a crucial role as classifiers in hand shape recognition. In gesture recognition systems, feature extraction methods, such as algorithms for capturing hand shape, are essential.
Research Gap:
● Although their uses have been well documented in the literature, there are still plenty of unanswered questions regarding the real-world applications of these technologies, particularly in healthcare settings.
● Moreover, although the article addresses current recognition methods, a thorough assessment and comparison of these systems in practical healthcare settings is lacking.
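The orientation histogram mentioned above can be sketched in a few lines of numpy (a toy version for intuition, not the surveyed papers' exact feature):

```python
import numpy as np

def orientation_histogram(img, bins=8):
    """Magnitude-weighted histogram of gradient orientations, a classic hand-shape feature."""
    gy, gx = np.gradient(img.astype(float))   # row and column gradients
    angles = np.arctan2(gy, gx)               # orientation of each pixel's gradient
    mags = np.hypot(gx, gy)                   # gradient strength used as the weight
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi), weights=mags)
    return hist / (hist.sum() + 1e-9)         # normalise so the feature is scale-invariant

# A vertical edge produces horizontal gradients (angle 0), so one bin dominates.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
h = orientation_histogram(img)
print(h.shape, int(np.argmax(h)))
```

Because the histogram summarises edge directions rather than raw pixels, it tolerates small shifts of the hand within the frame.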
LITERATURE SURVEY - 3
Title: An Exploration into Human–Computer Interaction: Hand Gesture Recognition Management in a Challenging Environment
Journal: SN Computer Science (Springer)
Methodology:
● The methodology entails a systematic examination of relevant literature to pinpoint important developments, strategies, and obstacles in hand gesture detection and human-computer interaction.
● An image dataset is then chosen for analysis, and image enhancement and segmentation procedures are carried out.
● By isolating the main subject from its backdrop, converting colour spaces, and lowering background noise, these methods improve the quality of the raw images.
● The next phase uses machine learning methods, namely Convolutional Neural Networks (CNNs), to recognize hand gestures and learn features.
Research Gap:
● To provide a fair and impartial model, the study emphasises the importance of contrasting analytical and discriminatory biases.
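The segmentation step described above, separating the hand from its backdrop, can be illustrated with simple intensity thresholding (a toy numpy sketch; the surveyed pipeline is more involved):

```python
import numpy as np

# Hypothetical grayscale frame: a bright "hand" region on a darker backdrop.
frame = np.full((8, 8), 40, dtype=np.uint8)
frame[2:6, 2:6] = 200

def segment(img, threshold=128):
    """Keep pixels above the threshold (foreground), zero out the rest (background)."""
    mask = img > threshold
    out = np.zeros_like(img)
    out[mask] = img[mask]
    return out, mask

segmented, mask = segment(frame)
print(int(mask.sum()))  # number of foreground pixels
```

Real pipelines add colour-space conversion and noise reduction before thresholding, but the goal is the same: hand in the foreground, everything else suppressed.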
LITERATURE SURVEY - 4
Title: Real-Time Hand Gesture Recognition Using Fine-Tuned Convolutional Neural Network
Journal: Sensors
Methodology:
● The paper highlights the benefits and drawbacks of different sensors for the development of HGR systems through a methodological comparison.
● Hand regions are identified and resized using image enhancement and segmentation algorithms to match the input sizes of pre-trained Convolutional Neural Networks (CNNs).
● Hand region segmentation is achieved using maximum-area-based filtering algorithms and depth thresholding.
● Additionally, the study uses a score-level fusion strategy that combines the normalised score vectors from two fine-tuned CNNs using a sum-rule-based fusion procedure.
Research Gap:
● Important concerns such as user comfort, real-time performance, and adaptation to changing contexts are not fully covered.
● Furthermore, because it focuses exclusively on technical techniques, the research leaves gaps in understanding the larger ramifications and possible social repercussions of HGR systems. More study is needed to close these gaps and create comprehensive HGR systems that take into account both real-world usability issues and technological constraints.
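The sum-rule score-level fusion described above amounts to normalising each CNN's score vector and adding them element-wise (the scores below are invented for illustration, not taken from the paper):

```python
import numpy as np

def sum_rule_fusion(s1, s2):
    """Min-max normalise each score vector, then combine them by element-wise sum."""
    def norm(s):
        return (s - s.min()) / (s.max() - s.min())
    return norm(s1) + norm(s2)

# Hypothetical softmax scores from two fine-tuned CNNs for one gesture (5 classes).
scores_cnn1 = np.array([0.10, 0.60, 0.10, 0.10, 0.10])
scores_cnn2 = np.array([0.05, 0.40, 0.45, 0.05, 0.05])

fused = sum_rule_fusion(scores_cnn1, scores_cnn2)
pred = int(np.argmax(fused))   # the two CNNs disagree; fusion picks class 1
print(pred)
```

Normalising before summing keeps one network's larger score range from dominating the decision.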
Objective
Main Objective
❖ To detect and classify the hand gestures used for sign language with high accuracy and precision.
Sub Objective
❖ To use the trained model to detect and classify gestures in real time.
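The real-time sub-objective boils down to a frame loop: capture, preprocess, predict. A sketch with a simulated frame stream and a stand-in classifier (no webcam, no trained model; both are placeholders for the real components):

```python
import numpy as np

def preprocess(frame, size=4):
    """Block-average the frame down to size x size and scale to [0, 1] (stand-in for resize)."""
    h, w = frame.shape
    return frame.reshape(size, h // size, size, w // size).mean(axis=(1, 3)) / 255.0

def predict(x):
    """Placeholder for the trained CNN: a fixed brightness rule instead of real inference."""
    return "A" if x.mean() > 0.5 else "B"

# Simulated stream of frames instead of a live webcam capture.
frames = [np.full((8, 8), 255.0), np.zeros((8, 8))]
labels = [predict(preprocess(f)) for f in frames]
print(labels)
```

In the working system, the frame source would be a camera capture loop and `predict` would call the trained CNN, with each predicted gesture appended to the output text.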
Methodology
Reference Software Model
Steps:
Results
References
● https://www.google.com/search?q=hand+gesture+recognition+cnn+image&tbm=isch&hl=en-GB&tbs=rimg%3ACdJN_1KyjlLpaYcaramXggfJWsgIRCgIIABAAOgQIARAAVfCqAD_1AAgDYAgDgAgA#imgrc=AUONS3HNoRhd3M
● Oudah, Munir, Ali Al-Naji, and Javaan Chahl. 2020. "Hand Gesture Recognition Based on Computer Vision: A Review of Techniques." Journal of Imaging 6, no. 8: 73. Available at: https://www.mdpi.com/2313-433X/6/8/73 [1]
● Chang, V., Eniola, R.O., Golightly, L. et al. An Exploration into Human–Computer Interaction: Hand Gesture Recognition Management in a Challenging Environment. SN Computer Science 4, 441 (2023). https://doi.org/10.1007/s42979-023-01751-y [3]
● Sahoo JP, Prakash AJ, Pławiak P, Samantray S. Real-Time Hand Gesture Recognition Using Fine-Tuned Convolutional Neural Network. Sensors (Basel). 2022 Jan 18;22(3):706. doi:10.3390/s22030706. PMID: 35161453; PMCID: PMC8840381. [4]
● Oudah M, Al-Naji A, Chahl J. Hand Gesture Recognition Based on Computer Vision: A Review of Techniques. J Imaging. 2020 Jul 23;6(8):73. doi:10.3390/jimaging6080073. PMID: 34460688; PMCID: PMC8321080. [6]
Thank You.