Methodist College of Engineering and Technology
King Koti, Abids, Hyderabad-500001
Department of Computer Science and Engineering
A Major Project on
Sign Language Detection Using CNN
Submitted By
Zuhair M A Hadi (160720747044)
Syed Rashwan (160720747307)
Sandeep (160720747309)
Under the Guidance of
Mrs. J. Sowmya
Assistant Professor
Dept of CSE
Abstract
Our project aims to create a computer application and
train a model which, when shown a real-time video of
Sign Language hand gestures, displays the corresponding
sign as text on the screen.
Introduction
● Sign Language is a form of communication used primarily by people who are deaf or
hard of hearing. This gesture-based language allows them to convey ideas and thoughts
easily, overcoming the barriers caused by hearing difficulties.
● A major issue with this convenient form of communication is that the vast majority of
the global population does not know the language. As with any other language, learning
Sign Language takes much time and effort, which discourages the larger population
from learning it.
The Need for a Computer System
● The development of a computer application for sign language translation using deep
learning can bridge the communication gap and empower individuals with hearing
impairments to communicate effectively with the wider community.
● This application aims to provide accurate, real-time translation of sign language
gestures into text, enabling seamless communication between deaf and hearing
individuals.
Literature survey
No. | Paper Title | Authors | Year | Key Contributions
1 | Deep Learning for Hand Gesture Recognition: A Review | Muhammad Ahmad, Soonja Yeom | 2017 | Comprehensive review of deep learning techniques for hand gesture recognition, covering CNNs, RNNs, datasets, benchmarks, challenges, and future directions.
2 | Recent Advances in Deep Learning-Based SLR | Ahmed Selim, Mohamed A. El-Bendary, et al. | 2020 | Overview of recent advances in SLR using deep learning, including dataset collection, preprocessing, model architectures, evaluation metrics, and challenges.
3 | SLR with Deep Learning: A Comparative Review | Jui Pandhare, Supriya Patil | 2021 | Comparative review of deep learning models for SLR, comparing CNNs, RNNs, and combinations on various datasets, discussing performance, advantages, limitations, and future directions.
Literature survey (contd.)
No. | Paper Title | Authors | Year | Key Contributions
4 | Sign Language Recognition Using CNNs | Daniel McDuff, Dan R. Jurafsky | 2009 | Proposal of a CNN-based approach for SLR, including architecture design, preprocessing steps, and experimental results demonstrating effectiveness.
5 | Real-Time American Sign Language Recognition Using CNNs | Thad Starner, Alex Pentland | 1998 | Introduction of a real-time SLR system, covering architecture, dataset, preprocessing, and experimental results, laying a foundation for future research.
6 | Hand Gesture Recognition Using Deep Learning: A Review | Yanan Li, Shuang Wu, et al. | 2020 | Review of hand gesture recognition using deep learning, discussing CNN architectures, datasets, preprocessing methods, evaluation metrics, applications, and challenges.
Problem Statement
Deaf and hard-of-hearing people communicate
using hand signs, but most hearing people
cannot recognize these signs, creating a
communication barrier.
Existing System
■ Hand Gesture Recognition Systems:
■ Limitations: These systems may struggle to accurately distinguish
between different sign language gestures, especially for complex
or subtle movements. Poor lighting conditions or occlusions can also
reduce recognition accuracy.
Objectives
● In this project, we use image processing and neural networks to
develop a robust classification model.
● Sign language images are used to train our neural network model
for classification.
● Image processing is used to take images as input and make
them suitable for classification.
Deep Learning Model
● Convolutional Neural Networks (CNNs)
● CNNs consist of multiple convolutional layers, each layer
containing numerous “filters” which perform feature extraction.
● Initially these “filters” are random; through training, the feature
extraction progressively improves.
● CNNs are primarily used for image classification.
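To make the “filter” idea above concrete, here is a minimal NumPy sketch (illustrative only, not part of the project code) of a single convolution filter responding to a vertical edge in a toy image:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (as CNNs compute it): slide the kernel over
    the image and take the elementwise product-sum at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter: responds where intensity changes left-to-right.
edge_filter = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

# Toy image: bright left columns, dark right -> one vertical edge.
img = np.zeros((5, 5))
img[:, :2] = 1.0

response = conv2d(img, edge_filter)
print(response.shape)  # (3, 3)
```

In a CNN these kernel values start random and are learned during training, which is what “the feature extraction progressively improves” means.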
Proposed architecture
Simple Project Flow: Architecture
UML Diagrams
Use Case Diagram:
Activity Diagram:
Sequence Diagram:
Class Diagram:
Implementation
1. Creating the Dataset:
To create our sign language gesture dataset, we used a webcam and
computer vision techniques. We captured a live video feed of hand gestures
for each alphabet plus a blank gesture, triggered manually via keyboard
input. Each gesture's frame was saved as an image, yielding a dataset
of approximately 900 images per alphabet, each of size 48x48
pixels. This dataset was tailored specifically for training and testing our sign
language recognition model.
2. Pre-processing the Dataset:
Our dataset undergoes several pre-processing steps before model training.
We employ Keras' ImageDataGenerator for real-time data augmentation,
normalize pixel values to a range of [0, 1], split data into training and
validation sets, resize images to a standard size of 48x48 pixels, and convert
them to grayscale format. These procedures enhance model robustness, aid
in faster convergence, and ensure consistency in input dimensions,
facilitating effective machine learning model training.
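A sketch of this pipeline using Keras' ImageDataGenerator; the augmentation parameters and the 0.2 validation split here are illustrative assumptions (the slides quote a 0.5 split at training time):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = 48

datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # normalize pixel values to [0, 1]
    rotation_range=10,        # mild augmentation (assumed values)
    zoom_range=0.1,
    width_shift_range=0.1,
    height_shift_range=0.1,
    validation_split=0.2,     # hold out part of the data for validation
)

def make_generators(data_dir="dataset", batch_size=128):
    """Build train/validation generators over a folder-per-class dataset."""
    common = dict(
        target_size=(IMG_SIZE, IMG_SIZE),   # resize every image to 48x48
        color_mode="grayscale",             # single-channel input
        class_mode="categorical",           # one-hot labels
        batch_size=batch_size,
    )
    train = datagen.flow_from_directory(data_dir, subset="training", **common)
    val = datagen.flow_from_directory(data_dir, subset="validation", **common)
    return train, val
```

`flow_from_directory` infers one class per subfolder, so the capture layout from the previous step feeds directly into training.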
3. Creating the CNN Model:
Our Convolutional Neural Network (CNN) built with Keras comprises
several layers: starting with convolutional layers employing 64, 128, 256,
and 512 filters successively, each followed by batch normalization, max
pooling, and dropout for regularization. After flattening the output, we have
densely connected layers with 512, 128, and 64 neurons, each followed by
ReLU activation and dropout for regularization. The final layer is a softmax-
activated dense layer with 27 neurons, one per classification class.
This architecture aims to extract hierarchical features from
input images and classify them effectively while preventing overfitting
through regularization techniques like dropout.
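One plausible way to write the layer stack described above in Keras; the kernel sizes and dropout rates are assumptions, while the filter counts (64 to 512), dense widths (512/128/64), and the 27-way softmax follow the text:

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 27  # 26 letters + blank

def build_model(input_shape=(48, 48, 1)):
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # Convolutional blocks: conv -> batch norm -> max pool -> dropout
    for filters in (64, 128, 256, 512):
        model.add(layers.Conv2D(filters, (3, 3), padding="same",
                                activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.MaxPooling2D((2, 2)))
        model.add(layers.Dropout(0.25))
    model.add(layers.Flatten())
    # Dense head with dropout for regularization
    for units in (512, 128, 64):
        model.add(layers.Dense(units, activation="relu"))
        model.add(layers.Dropout(0.3))
    model.add(layers.Dense(NUM_CLASSES, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Four 2x2 poolings reduce the 48x48 input to 3x3 before flattening, keeping the dense head small.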
4. Training the CNN Model:
● After the model is created, we fit our training data to it so that the
model learns from the data through training; the trained model is then
used to classify real-time data. We use the “fit” method to supply our
training data. The batch_size is set to 128, which means images are
fed to the CNN model in batches of 128. The number of epochs is set to 100. The
validation split is set to 0.5, which means the dataset is divided
into two halves, one for training and the other for validation.
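The fit call might look like the sketch below; a tiny stand-in model and random data are used so the snippet runs end-to-end, with the project's quoted hyper-parameters noted in comments:

```python
import numpy as np
from tensorflow.keras import layers, models

# Minimal stand-in model (the project uses its 48x48 CNN instead).
model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Flatten(),
    layers.Dense(27, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data: 64 grayscale images with one-hot labels.
x_train = np.random.rand(64, 48, 48, 1).astype("float32")
y_train = np.eye(27)[np.random.randint(0, 27, size=64)]

history = model.fit(
    x_train, y_train,
    batch_size=16,          # project value: 128
    epochs=1,               # project value: 100
    validation_split=0.5,   # half the data held out for validation
    verbose=0,
)
# history.history records per-epoch training and validation metrics.
```

With validation_split=0.5, Keras silently reserves the last half of the arrays for validation, which matches the "two halves" description above.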
5. Implementing Live Classification:
We use OpenCV to capture images from the live feed. After
receiving the images, we pre-process them as described
above. The pre-processed image is then given to the model to classify. The
model returns a numeric output corresponding to the alphabet that the
sign portrays, and that alphabet is printed on the screen for the
user.
Results
Output screenshots
Conclusion and Future Scope
Our project has presented a model for translating sign language to text. The model developed in this
project achieves reasonable accuracy and can serve as a basis for further improvement.
For better live classification, an object detection module could be developed, or an existing one
could be used, leading to better translation results. Support for translating more
complex sentences could be investigated as future work for this project.
References
● 1. Jungong Han et al., "Enhanced Computer Vision with Microsoft Kinect Sensor: A
Review," IEEE Transactions on Cybernetics.
● 2. Microsoft Kinect SDK, http://www.microsoft.com/en-us/kinectforwindows/.
● 3. Gunasekaran K., Manikandan R., "Sign Language to Speech Translation System Using PIC
Microcontroller," International Journal of Engineering and Technology (IJET).
Project Timeline
Thank you!