
A PROJECT REPORT ON

SIGN LANGUAGE RECOGNITION USING CNN ALGORITHM


AND HORIZONTAL VOTING ENSEMBLE

SUBMITTED TO SAVITRIBAI PHULE PUNE UNIVERSITY, PUNE


IN THE PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE AWARD OF THE DEGREE

OF

BACHELOR OF ENGINEERING
(COMPUTER ENGINEERING)

SUBMITTED BY

PARTH NITIN JAISWAL Exam No : B190554260


SANIYA YOGESH GAPCHUP Exam No : B190554243
RUSHIKESH RAJENDRA DHAWALE Exam No : B190554231

DEPARTMENT OF COMPUTER ENGINEERING

NUTAN MAHARASHTRA INSTITUTE OF ENGINEERING& TECHNOLOGY

TALEGAON DABHADE TAL.MAVAL, PUNE, 410507


SAVITRIBAI PHULE PUNE UNIVERSITY

2023 -2024

NMIET, Department of Computer Engineering 2023-2024


CERTIFICATE

This is to certify that the project report entitled

“SIGN LANGUAGE RECOGNITION USING CNN ALGORITHM


AND HORIZONTAL VOTING ENSEMBLE”
Submitted by

PARTH NITIN JAISWAL Exam No : 72031405E


SANIYA YOGESH GAPCHUP Exam No : 72153076B
RUSHIKESH RAJENDRA DHAWALE Exam No : 72153063L

are bonafide students of this institute, and the work has been carried out by
them under the supervision of Prof. Rohini Hanchate. It is approved for the
partial fulfillment of the requirement of Savitribai Phule Pune University for the
award of the degree of Bachelor of Engineering (Computer Engineering).

Prof. Rohini Hanchate      Prof. Pritam Ahire        Dr. Saurabh Saoji        Dr. Vilas Deotare
Internal Guide             Project Coordinator       H.O.D.                   Principal
Dept. of Computer Engg.    Dept. of Computer Engg.   Dept. of Computer Engg.  NMIET Pune

External Examiner
Sign

Nutan Maharashtra Institute of Engineering and Technology, Pune – 07


Place : Pune
Date :

ACKNOWLEDGEMENT

It gives us great pleasure in presenting the preliminary project report on


“Sign Language Recognition Using CNN Algorithm and Horizontal
Voting Ensemble”.

We would like to take this opportunity to thank our internal guide,


Prof. Rohini Hanchate, for giving us all the help and guidance needed,
especially for the useful suggestions given during the course of the
project.
We would also like to thank our project coordinator, Prof. Pritam
Ahire, for his assistance and support. We would also like to thank our
Head of the Computer Department, Dr. Saurabh Saoji, for his
unwavering support for this project work.
We are grateful to our Principal, Dr. Vilas Deotare, for providing
us with an environment to complete our project successfully. We also
thank all the staff members and technicians of our college for their
help.
We also thank all the machine learning committee members for enriching
us with their immense knowledge. Finally, we take this opportunity to
extend our deep appreciation to our family and friends, for all that
they meant to us during the crucial times of the completion of our
project.

PARTH NITIN JAISWAL Exam No : 72031405E


SANIYA YOGESH GAPCHUP Exam No : 72153076B
RUSHIKESH RAJENDRA DHAWALE Exam No : 72153063L
(B.E. Computer Engg.)

ABSTRACT

The study of sign language intersects numerous fields and disciplines.


Data gloves and visual sign language recognition are now the two main
study areas in sign language recognition. The former employs data
collected by sensors for sign language detection and translation, while
the latter uses a camera to capture the user's hand features for the
same purpose.

Deaf and hard of hearing individuals primarily use sign language to


communicate both inside and outside of their own community. Since they
cannot speak or hear, they communicate with hand gestures in this
language. The field of Sign Language Recognition (SLR) focuses on the
identification of hand gestures and progresses until the related hand
gestures are converted into text or speech.

Sign language hand motions can be categorized as either static or


dynamic in this context. Though the human community values both
types of recognition, static hand gesture recognition is more
straightforward than dynamic hand gesture recognition. With Deep
Neural Network designs, specifically Convolution Neural Networks, we
may utilize deep learning computer vision to recognize hand gestures.
The model learns to recognize the hand gesture photos over the
training epochs.

TABLE OF CONTENTS

CHAPTER TITLE

1 Introduction
1.1 Overview
1.2 Motivation
1.3 Problem Definition and Objectives

2 Literature Survey

3 Software Requirements Specification


3.1 Introduction
3.1.1 Project Scope
3.1.2 Project Objective
3.1.3 Problem Identification
3.2 Existing System
3.2.1 Description
3.2.2 Disadvantages of System
3.3 Proposed System

3.4 Methodology
3.4.1 Dataset
3.4.2 Preprocessing of Data
3.4.3 Neural Network using CNN
3.5 System Requirements
3.5.1 Requirements
3.5.2 Software Requirements
3.5.3 Hardware Requirements
3.6 Analysis Models: SDLC Model to be Applied
3.7 System Implementation Plan

4 System Design
4.1 System Architecture
4.2 Data Flow Diagrams
4.3 Entity Relationship Diagrams
4.4 UML Diagrams

5 Other Specifications
5.1 Advantages
5.2 Applications

6 Implementation
6.1 GUI Design
6.2 Database

7 Conclusion & Future Work


7.1 Conclusion
7.2 Future Work

References

LIST OF ABBREVIATIONS

ABBREVIATION ILLUSTRATION

3.1 REQUIREMENT GATHERING


3.2 SDLC
4.1 System Architecture
4.5 UML Diagram
4.5.1 Use Case Diagram
4.5.2 Class Diagram
4.5.3 Sequence Diagram
4.5.4 State Chart

LIST OF FIGURES

FIGURE ILLUSTRATION

1 System Design
2 Data Flow Diagram
3 ER Diagram
4 Use Case Diagram
5 Class Diagram
6 Sequence Diagram
7 State Chart

CHAPTER 1

INTRODUCTION

1.1 Overview

Sign Language (SL) is the principal technique by which deaf and dumb individuals
communicate with one another and with their own community through hand and body
gestures. Its vocabulary, meaning, and grammar are all distinct from those of spoken
or written language. Spoken language is made up of articulate sounds that are
mapped onto particular words and grammatical combinations to express meaning;
sign language is a visual language that communicates meaning through hand and
body gestures, and it serves an estimated 7 million deaf people. There are few sign
language interpreters working today, which makes communication difficult for the
deaf and dumb. The goal of sign language recognition is to translate these hand
motions into the appropriate spoken or written language. These days, there is a lot
of interest in Computer Vision and Deep Learning, and it is possible to create several
State of the Art (SOTA) models. These hand motions can be classified, and matching
text can be generated, with the use of Deep Learning algorithms and Image
Processing; for example, a specific static hand gesture represents the English letter
"A" in speech or writing.
Convolution neural networks, or CNNs, are the most widely used neural network
technique in deep learning and are frequently employed for image and video
tasks. We can use cutting-edge CNN designs, such as LeNet-5 and MobileNetV2, to
reach the State of the Art (SOTA). All of these architectures can be used, and we can
use neural network ensemble techniques to integrate them. By doing this, we are
able to create a model that can recognize hand motions with nearly 100% accuracy.
This approach will be implemented in standalone apps, embedded devices, and web
frameworks like Django, where hand movements captured by a live camera will be
translated into text. This technology will facilitate easy communication for deaf and
dumb people.

1.2 Motivation of the Project:

• Our main goal is to empower the communities of the deaf and hard of
hearing by giving them access to effective communication tools.
• Due to the subtleties and changes in hand positions, illumination, and
signing gestures, sign language detection is a challenging undertaking.
• Our goal is to close the communication gap between the hearing
community and the hearing-impaired by creating real-time sign
language recognition.
• Sign language hand-gesture-to-text/speech translation systems and
dialog systems can be used in specific public domains such as
airports, post offices, or hospitals.
• Sign Language Recognition (SLR) can help translate video to text or
speech, enabling communication between hearing and deaf
people.
• Additionally, we hope that our findings will stimulate more research in
the area of sign language recognition.
• We want to make a difference in creating a more inclusive society in a
world where technology is used more and more. Our research aims to
develop solutions that allow hard of hearing people to participate
in several spheres of life, such as social relationships, work, and
education.

1.3 Problem Definition and Objectives:

Sign language uses many gestures, giving the impression of a movement
language made up of various hand and arm motions. Different nations
have different sign languages and hand gestures. It should be noted
that certain terms that are not well known can be translated by merely
making the motion for each letter in the word. Additionally, each letter
of the English alphabet and every number from 0 to 9 has a specific
gesture in sign language.

CHAPTER 2

LITERATURE SURVEY

LITERATURE SURVEY

Paper-1: Indian Sign Language Recognition Using Neural Networks and KNN
Classifiers
• Publication Year: 8 August 2017
• Author: Madhuri Sharma, Ranjna Pal and Ashok Kumar Sahoo.
• Journal Name: ARPN Journal of Engineering and Applied Sciences.
• Summary: In this paper, using KNN classifiers, the gesture recognition system is
capable of recognizing only numerical ISL static signs, with 97.10% accuracy. The
experimental result shows that the system can be used as a "working system" for
Indian Sign Language numerical recognition.

Paper-2: Sign language Recognition Using Machine Learning Algorithm.


• Publication Year: 3 March 2020
• Author: Prof. Radha S. Shirbhate, Mr. Vedant D. Shinde, Ms. Sanam A. Metkari,
Ms. Pooja U. Borkar, Ms. Mayuri A. Khandge.
• Journal Name: International Research Journal of Engineering and Technology
(IRJET).
• Summary: In this work, we have gone through an automatic real-time sign
language gesture recognition system, using different tools. Although our proposed
work is expected to recognize the sign language and convert it into text, there is
still a lot of scope for possible future work.

Paper-3: Sign Language Action Recognition System Based on Deep Learning.
• Publication Year: 10 November 2021
• Author: Chaoqin Chu, Qinkun Xiao, Jielei Xiao, Chuanhai Gao.
• Journal Name: 5th International Conference on Automation, Control and Robots
(ICACR).
• Summary: In this paper, a sign language action recognition system based on deep
learning is implemented. The system has been successfully trained in all 10 types of
sign language action data sets. The training accuracy is 99%, and the test verification
accuracy is 98%. Sign language action recognition is an extensive research field,
which includes finger spelling, dynamic letters, dynamic words, isolated words and so
on. The proposed system framework can extend additional modules and
technologies, and form a fully automated sign language action recognition system in
the future.
Paper-4: Sign Language Recognition Based on Computer Vision
• Publication Year: 10 November 2021
• Author: Wanbo Li, Hang Pu, Ruijuan Wang
• Journal Name: IEEE International Conference on Artificial Intelligence and
Computer Applications (ICAICA)
• Summary: In this paper, a sign language recognition system based on computer
vision is designed, which uses The CNN neural network to extract the characteristics
of the ASL data corpus, and then puts it into the LSTM classifier to realize the
recognition of character-level sign language (American sign language, Arabic
numerals), and also enables the translation of sign language through the system, and
can convert user text or speech input into the corresponding American sign language

or Arabic numeral sign language. Experiments show a sign language recognition
accuracy of 95.52%.

Paper-5: Sign Language Recognition


• Publication Year: August 2021
• Author: Satwik Ram Kodandaram, N Pavan Kumar, Sunil G L
• Journal Name: Turkish Journal of Computer and Mathematics Education.
• Summary: In conclusion, we were successfully able to develop a practical and
meaningful system that can understand sign language and translate it to the
corresponding text. There are still many shortcomings in our system: it can detect
0-9 digit and A-Z alphabet hand gestures but does not cover body gestures and
other dynamic gestures. We are sure it can be improved and optimized in the
future.

Paper-6: A Review Paper on Sign Language Recognition for The Deaf and Dumb
• Publication Year: 10 October 2021
• Author: R Rumana, Reddygari Sandhya Rani, Mrs. R. Prema.
• Journal Name: International Journal of Engineering Research & Technology (IJERT)
• Summary: In this report, a functional real-time vision-based American Sign
Language recognition system for deaf and dumb people has been developed for ASL
alphabets. We achieved a final accuracy of 92.0% on our dataset. We are able to
improve our prediction after implementing two layers of algorithms in which we
verify and predict symbols that are more similar to each other. This way we are
able to detect almost all the symbols, provided that they are shown properly, there
is no noise in the background, and the lighting is adequate.

Paper-7: Sign Language Recognition Based on Computer Vision
• Publication Year: 05 May 2022
• Author: Ravindra Bula, Dipalee Golekar, Rutuja Hole, Sidheshwar Katare, S. R.
Bhujbal
• Journal Name: International Journal of Creative Research Thoughts
• Summary: A sign language recognition system is a powerful tool for preparing
expert knowledge, edge detection, and the combination of inaccurate information
from different sources. The intent of the convolution neural network is to obtain the
appropriate classification.


CHAPTER 3
SOFTWARE REQUIREMENTS SPECIFICATION

SOFTWARE REQUIREMENTS SPECIFICATION

3.1 Introduction:
• Millions of people with hearing loss worldwide communicate vitally
and expressively with one another through sign language. It
is an intricate visual language with a large vocabulary of
expressions and gestures that let people communicate with one
another and transmit meaning.
• Although sign language is an amazing form of communication,
both those who use it and those who try to comprehend and
interpret it face particular difficulties.
• The advancement of sign language recognition systems, namely
those utilizing Convolution Neural Networks (CNNs), has
become a crucial measure in improving communication
accessibility and inclusion for the deaf and hard of hearing
populations.

3.1.1 Project Scope:


Our project's main objective is to create a CNN-based system for
recognizing sign language that satisfies the needs and obstacles listed
in the problem statement.
The following are the main components of the project:
• Data Collection and Preparation
• Model Development
• Evaluation and Testing
• Inclusivity and User-Friendliness
• Documentation and Reporting

3.1.2 Project Objective:

• This project's primary goals are to advance the fields of automatic sign
language recognition and voice or text translation. We are
concentrating on hand movements in static sign language for our
project.
• This work focused on using Deep Neural Networks (DNN) to recognize
hand motions covering the 10 numerals (0-9) and 26 English alphabets
(A-Z).
• We developed a convolution neural network classifier that can identify
English numerals and alphabets from the hand motions.
• The neural network has been trained using a variety of setups and
designs, including LeNet-5, MobileNetV2, and our own design.
• To get the highest level of model accuracy possible, we employed the
horizontal voting ensemble technique.
• Additionally, we used the Django REST Framework to build a web
application to test the outcomes on a live camera feed.
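The horizontal voting ensemble mentioned above can be sketched numerically. This is a minimal illustration under our own naming, assuming each model snapshot (in practice, a CNN saved at one of the last training epochs) is represented by its per-class probability outputs:

```python
import numpy as np

def horizontal_voting(per_model_probs):
    """Average the class-probability outputs of several model snapshots
    (e.g. CNNs saved at consecutive late epochs) and return the
    ensemble's predicted class for each sample."""
    # per_model_probs: array of shape (n_models, n_samples, n_classes)
    stacked = np.asarray(per_model_probs)
    mean_probs = stacked.mean(axis=0)    # average the snapshots' "votes"
    return mean_probs.argmax(axis=1)     # ensemble prediction per sample

# Three hypothetical snapshots scoring one sample over 3 classes:
probs = [
    [[0.6, 0.3, 0.1]],   # snapshot saved at an earlier epoch
    [[0.4, 0.5, 0.1]],   # next epoch's snapshot
    [[0.5, 0.4, 0.1]],   # final epoch's snapshot
]
print(horizontal_voting(probs))  # class 0 wins on the averaged vote
```

Averaging probabilities smooths out the epoch-to-epoch fluctuation of a single model near the end of training, which is the point of the technique.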

3.1.3 Problem Identification:


It is difficult to develop a system that can reliably identify a variety of
signing styles and variations, since sign language involves a broad
spectrum of motions and expressions. It can be difficult to recognize sign
language under a variety of lighting and ambient conditions, such as
strong sunlight, noisy backgrounds, and low light levels. Sign languages
differ between and even within geographical areas, and it is difficult to
create a system that is cognizant of regional variations and
dialects. Real-time recognition is essential for useful applications such as
mobile apps and assistive devices; effective communication can be
hampered by recognition delays.

3.2 Existing System:

3.2.1 Description:

INDIAN SIGN LANGUAGE RECOGNITION USING NEURAL NETWORKS

AND KNN CLASSIFIERS

The study presented here details a method for automatically


recognizing numerical signs in Indian Sign Language that take the
form of isolated photographs. The signs were captured using a standard
camera. In order to use the project in a real-world setting, we had to
first construct a database of numerical signs with 5000 signs total:
500 photos for each numeral sign. To extract desirable features from
the sign photos, hierarchical centroid and direct pixel value
approaches are applied. Neural network and KNN classification
algorithms were employed to classify the signs following the
extraction of features from the photos. These studies yielded results
with an accuracy of up to 93.10%.

3.2.2 Disadvantages of Existing System:


• High-dimensional areas might cause KNN's performance to
deteriorate, which makes it less effective for certain applications.

• Sensitivity to Noise: When dealing with noisy data points, KNN is
quite sensitive. KNN can yield unreliable results in sign language
recognition since there might be fluctuations and noise in the
recorded signs.
• Absence of Feature Learning: Neither representation optimization
nor feature learning are carried out by KNN. CNNs, on the other
hand, are better at extracting hierarchical features from
unprocessed picture data.

3.3 Proposed System:

Dataset

Since the datasets we identified were only available as RGB values, we were unable
to find any pre-existing raw image datasets that would meet our needs for project
implementation. As a result, we made the decision to produce our own dataset.
These are the steps we took: to create our dataset, we used the OpenCV (Open
Computer Vision) library. For training purposes, we first took about 800 pictures of
every American Sign Language (ASL) symbol, and for testing, we took about 200
pictures of each symbol. We captured every frame that our computer's webcam
showed, and identified a region of interest (ROI) denoted by a blue square within
each frame.[1]
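The ROI-based capture step can be sketched as follows. The coordinates and frame size here are illustrative, not the exact values we used, and a synthetic array stands in for the webcam frame that cv2.VideoCapture would supply:

```python
import numpy as np

# Region of interest (ROI) marked by the on-screen square; these
# coordinates are illustrative, not the exact ones from our capture tool.
ROI_X, ROI_Y, ROI_SIZE = 100, 100, 200

def crop_roi(frame, x=ROI_X, y=ROI_Y, size=ROI_SIZE):
    """Cut the ROI square out of a captured frame (H x W x 3 array)."""
    return frame[y:y + size, x:x + size]

# In the real capture loop each frame comes from cv2.VideoCapture(0);
# here a synthetic black frame stands in for the webcam image.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
roi = crop_roi(frame)
print(roi.shape)  # (200, 200, 3): only the hand region is kept
```

Saving only the ROI rather than the full frame keeps the dataset focused on the hand and greatly reduces background clutter in the training images.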

Alphabet in ASL

The data consists of a set of photographs depicting the alphabet in American Sign
Language, arranged into 29 folders, each of which represents a different class. Of
these, 26 classes cover the letters A-Z; the remaining 3 classes are DELETE, SPACE,
and NOTHING. These three classes are highly significant and practical for real-time
applications.[10]

Images of Sign Language Gestures Collection

There are a total of 37 hand gestures in the dataset, including the A-Z alphabet,
0-9 number sign gestures, and a gesture for the space, which indicates how deaf or
hard of hearing individuals convey the space between two letters or words in
communication. This dataset is ideal for training convolutional neural networks
(CNNs), which are used for gesture identification and model training.[8]

Pre-processing of Data

An image is nothing more than a two-dimensional array of pixels, i.e., numbers


ranging from 0 to 255. Generally, white is represented by 255 and black by 0. An
image is defined by a mathematical function f(x, y), and the value of f(x, y) at any
given position is the pixel value at that point.[9]
In image pre-processing, algorithms are used to perform a variety of operations on
images. Pre-processing the images is essential before sending them for training the
model. For example, all images ought to have the same dimensions, 200 by 200
pixels; if they do not, the model cannot be trained.

1. Read Images.
2. Reshape or resize images to the similar size.
3. Eliminate noise.

All pixel arrays of the images are scaled to the range 0 to 1 by dividing
each image array by 255.
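The pre-processing steps above can be sketched with plain numpy (nearest-neighbour resizing stands in for the library resize call; the 200 x 200 target follows the text):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize, standing in for cv2.resize."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h   # source row for each output row
    cols = np.arange(out_w) * in_w // out_w   # source col for each output col
    return img[rows][:, cols]

def preprocess(img):
    """Resize to the common 200x200 shape and scale pixels to [0, 1]."""
    img = resize_nearest(img, 200, 200)
    return img.astype(np.float32) / 255.0     # 0..255 -> 0..1

sample = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
out = preprocess(sample)
print(out.shape, out.min() >= 0.0, out.max() <= 1.0)
```

Scaling to [0, 1] keeps the input magnitudes small, which helps gradient descent converge when the network is trained.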

Neural Networks using Convolution (CNN)

Within the topic of artificial intelligence, computer vision addresses image and video-
related issues. CNN can handle complicated tasks when paired with computer vision.
• Feature extraction and classification are the two main stages of the convolution
neural network.
• A number of convolution and pooling operations are carried out to extract the
features from the images.
• The resulting matrix gets smaller as we apply more filters over it.
• (Size of old matrix – filter size) + 1 is the size of the new matrix.
• The classifier in convolution neural networks is the fully connected layer.

• The class probability is predicted in the last layer.


The following are the primary steps of convolution neural networks:
1. First layer: Convolution
2. Second layer: Pooling
3. Third layer: Flatten
4. Fourth layer: Full connection
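The output-size formula above, (size of old matrix – filter size) + 1, can be checked with a small "valid" convolution written from scratch; the edge-detecting filter is illustrative:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """'Valid' 2-D convolution (no padding): the output side length is
    (image size - filter size) + 1, matching the formula above."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.empty((oh, ow))
    for r in range(oh):
        for c in range(ow):
            # element-wise multiply the window by the filter and sum
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

image = np.arange(36, dtype=float).reshape(6, 6)
edge_filter = np.array([[1., 0., -1.]] * 3)   # a simple 3x3 edge detector
fmap = conv2d_valid(image, edge_filter)
print(fmap.shape)  # (6 - 3) + 1 = 4 on each side -> (4, 4)
```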

Convolution

Convolution is essentially a filter that is applied to an image to extract specific


properties. We employ a number of filters to extract information from an image,
such as edges and emphasized patterns. The filters are initialized at random.
The default size of the filter created by this convolution is 3 x 3. Once the filter is
generated, element-wise multiplication is carried out, commencing at the top-left
corner and sliding towards the bottom-right corner of the image. The result is a
feature map of the extracted properties.

Pooling

Following the convolution process, this layer is introduced to decrease the


dimensions of the feature maps. There are two primary forms of pooling:
1. Max Pooling (maximum pooling)
2. Average Pooling
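Both pooling forms can be sketched on a small feature map (2 x 2 windows with stride 2, a common choice; the values are made up):

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Pooling with a square window and stride equal to the window size."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]          # trim edges that don't fit
    blocks = x.reshape(h // size, size, w // size, size)
    if mode == "max":
        return blocks.max(axis=(1, 3))           # keep strongest activation
    return blocks.mean(axis=(1, 3))              # average pooling

fmap = np.array([[1., 2., 5., 6.],
                 [3., 4., 7., 8.],
                 [9., 8., 3., 2.],
                 [7., 6., 1., 0.]])
print(pool2d(fmap, mode="max"))   # each 2x2 block reduced to its maximum
print(pool2d(fmap, mode="avg"))   # each 2x2 block reduced to its mean
```

Either form halves each spatial dimension here, which is how pooling reduces computation and gives a small amount of translation invariance.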

Flatten

The final matrix that is produced has multiple dimensions. The data must
be flattened into a 1-dimensional array in order to be entered into the
layer that comes after it. Only one feature vector is created when the
convolution layers are flattened.

Full Connection

The final layer is a feed-forward neural network. A prediction is made
once each procedure is completed. In order to calculate the loss and update
the weights in accordance with the ground truth, the gradient descent
backpropagation approach is employed.
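The fully connected layer with gradient descent backpropagation can be sketched end to end on made-up data. This one-layer illustration uses our own shapes and learning rate, not our actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)       # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# One fully connected layer: flattened features -> class scores.
features = rng.normal(size=(4, 16))            # 4 samples, 16 features
labels = np.eye(3)[[0, 1, 2, 0]]               # one-hot ground truth
W = rng.normal(scale=0.1, size=(16, 3))
b = np.zeros(3)

lr = 0.5
for _ in range(200):                           # plain gradient descent
    probs = softmax(features @ W + b)
    grad = (probs - labels) / len(features)    # d(cross-entropy)/d(scores)
    W -= lr * features.T @ grad
    b -= lr * grad.sum(axis=0)

loss = -np.mean(np.sum(labels * np.log(softmax(features @ W + b)), axis=1))
print(round(loss, 4))  # loss driven towards zero
```

The gradient `probs - labels` is exactly the derivative of the softmax cross-entropy loss with respect to the class scores, which is why no separate loss-derivative code is needed.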

Autocorrect Functionality:

The Hunspell suggest function, accessed through a Python binding, is utilized to
propose suitable corrections for each incorrectly spelled input word. It displays a
list of words that resemble the incorrect word, allowing the user to choose the most
appropriate option to include in their sentence. This feature aids in minimizing
spelling errors and facilitates the prediction of difficult words.[11]
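Our implementation relies on Hunspell's suggest; the same idea can be illustrated with only the Python standard library, ranking words from a (here, tiny and purely illustrative) vocabulary by similarity to the misspelled input:

```python
import difflib

# A tiny stand-in vocabulary; the real system consults a full
# Hunspell dictionary instead.
VOCAB = ["hello", "help", "world", "would", "word", "sign", "language"]

def suggest(word, n=3):
    """Return up to n dictionary words resembling the misspelled input,
    best match first."""
    return difflib.get_close_matches(word.lower(), VOCAB, n=n, cutoff=0.6)

print(suggest("worl"))  # candidates the user can pick from
```

As in the Hunspell-based version, the user picks the intended word from the ranked candidate list rather than the system choosing automatically.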

Training and Testing:

For our process, we initially convert RGB images to grayscale and apply a Gaussian
blur to eliminate any superfluous noise or disturbance. We then use an adaptive
thresholding technique that separates the hand portion from the background, and
resize the images to 128 x 128 pixels. These preprocessed images are afterwards
fed to our model for both training and testing, as described. We measure
performance using cross-entropy, a loss function that is minimized when the
predicted value matches the actual label. Our goal is to drive this function towards
zero by optimizing the neural network's weights. TensorFlow offers a built-in
function to compute cross-entropy, and we further refine our model's performance
using gradient descent, specifically with the Adam optimizer.[12]
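The pipeline above uses OpenCV (cv2.cvtColor, cv2.GaussianBlur, cv2.adaptiveThreshold); a numpy-only approximation shows the flow of data. The box blur and global-mean threshold here are simplified stand-ins for the Gaussian blur and the local adaptive threshold:

```python
import numpy as np

def to_gray(rgb):
    # Standard luminance weights for RGB -> grayscale conversion.
    return rgb @ np.array([0.299, 0.587, 0.114])

def blur3(img):
    # 3x3 box blur standing in for cv2.GaussianBlur.
    padded = np.pad(img, 1, mode="edge")
    out = sum(padded[r:r + img.shape[0], c:c + img.shape[1]]
              for r in range(3) for c in range(3))
    return out / 9.0

def threshold(img, offset=10):
    # Crude stand-in for adaptive thresholding: compare each pixel to the
    # global mean minus an offset; OpenCV uses a local neighbourhood mean.
    return (img > img.mean() - offset).astype(np.uint8) * 255

frame = np.random.randint(0, 256, size=(128, 128, 3)).astype(float)
binary = threshold(blur3(to_gray(frame)))      # hand/background mask
print(binary.shape, sorted(np.unique(binary)))  # only 0/255 values remain
```

The binarized 128 x 128 output is what the model consumes, so the network learns the hand's shape rather than its color or the background texture.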

Methodology:

Dataset

We have used multiple datasets and trained multiple models to achieve good
accuracy.

Alphabet in ASL

The data consists of a set of photographs depicting the alphabet in American Sign
Language, arranged into 29 folders, each of which represents a different class.
There are 87,000 200x200 pixel images in the training dataset. There are 29 classes
in total; 26 of them contain the English alphabet from A to Z. The remaining 3
classes are DELETE, SPACE, and NOTHING. These three classes are highly significant
and practical for real-time applications.

Images of Sign Language Gestures Collection

The 37 hand sign gestures in the dataset include the A-Z alphabet, 0–9 number
gestures, and a gesture for space, which indicates how deaf or hard of hearing
individuals convey space between two letters or words in communication. There are
37 gestures in all, and each gesture has 1500 50x50 pixel images, for a total of 55,500
images throughout all gestures. This dataset is ideal for training convolutional neural
networks (CNNs), which are used for gesture recognition and model training.

Pre-processing of Data

An image is nothing more than a 2-dimensional array of numbers or pixels


ranging from 0 to 255. Typically, 0 means black, and 255 means white. An
image is defined by a mathematical function f(x, y), where ‘x’ represents the
horizontal and ‘y’ the vertical axis in a coordinate plane. The value of f(x, y)
at any point gives the pixel value at that point of the image.
Algorithms are used in image pre-processing to carry out various operations
on images. Prior to delivering the photos for model training, it is crucial to
pre-process them. For instance, every image should be the same size, 200 by
200 pixels. The model cannot be trained if they are not.
The steps we have taken for image Pre-processing are:
• Read images.
• Resize or reshape all the images to the same size.
• Remove noise.
All the image pixel arrays are scaled to the range 0 to 1 by dividing the
image array by 255.

Neural Networks using Convolution (CNN)

Within the topic of artificial intelligence, computer vision addresses image


and video-related issues. CNN can handle complicated tasks when paired with
computer vision.
• The two primary stages of a convolution neural network are feature
extraction and classification.
• To extract the image's features, a number of convolution and pooling
procedures are carried out.
• As we apply more filters, the resulting matrix's size gets smaller.
• (Size of old matrix – filter size) +1 is the size of the new matrix.
• The classifier in the convolution neural networks will be a fully
connected layer.
• The class probability will be forecast in the final layer.
The following are the primary steps of convolution neural networks:
1. Convolution
2. Pooling
3. Flatten
4. Full connection.

Convolution

Convolution is essentially a filter used to extract characteristics from a


picture. To extract features from an image, such as edges and highlighted
patterns, we will use several filters. The filters are going to be produced at
random.
The filter that is produced by this convolution has a default size of 3 by 3.
Following the creation of the filter, the element-wise multiplication is carried
out, moving from the upper left corner of the image to the lower right corner.
The result is a feature map of the extracted features.

Pooling

After the convolution operation, the pooling layer will be applied. The pooling
layer is used
to reduce the size of the image. There are two types of pooling:
1. Max Pooling
2. Average Pooling

Flatten

The resulting matrix will have several dimensions. The data must be
flattened into a 1-dimensional array before it can be fed into the following
layer. The convolution layers are flattened to produce a single feature
vector.

Full Connection

All that's needed for a fully connected layer is a feed-forward neural


network. Every procedure will be carried out, and a prediction will be made.
The gradient descent backpropagation technique is used to determine the
loss and update the weights based on the ground truth.

CHAPTER 2

LITERATURE SURVEY

NMIET, Department of Computer Engineering 2023-2024


28
2.1 Study of Research Paper:

Paper-1: Indian Sign Language Recognition Using Neural Networks


and KNN Classifiers
• Publication Year: 8 August 2017
• Author: Madhuri Sharma, Ranjna Pal and Ashok Kumar Sahoo.
• Journal Name: ARPN Journal of Engineering and Applied Sciences.
• Summary: In this paper using KNN classifiers, the gesture
recognition system is capable of recognizing only numerical ISL static
signs with 97.10% accuracy. The experimental result shows that
system can be used as a "working system" for Indian Sign Language
numerical recognition.

Paper-2: Sign language Recognition Using Machine Learning


Algorithm.
• Publication Year: 3 March 2020
• Author: Prof. Radha S. Shirbhate1, Mr. Vedant D. Shinde, Ms. Sanam
A. Metkari, Ms. Pooja U. Borkar, Ms. Mayuri A. Khandge.
• Journal Name: International Research Journal of Engineering and
Technology (IRJET).
• Summary: In this work, we have gone through an automatic sign
language gesture recognition system in real-time, using different
NMIET, Department of Computer Engineering 2023-2024
29
tools. Although our proposed work expected to recognized the sign
language and convert it into the text, there’s still a lot of scope for
possible future work.

Paper-3: Sign Language Action Recognition System Based on Deep


Learning.
• Publication Year: 10 November 2021
• Author: Chaoqin Chu, Qinkun Xiao, Jielei Xiao, Chuanhai Gao.
• Journal Name: 5th International Conference on Automation, Control
and Robots (ICACR).

• Summary: In this paper, a sign language action recognition system based on deep learning is implemented. The system was successfully trained on all 10 classes of the sign language action data set, reaching a training accuracy of 99% and a test accuracy of 98%. Sign language action recognition is a broad research field that includes finger spelling, dynamic letters, dynamic words, isolated words, and so on. The proposed framework can be extended with additional modules and technologies to form a fully automated sign language action recognition system in the future.

Paper-4: Sign Language Recognition Based on Computer Vision


• Publication Year: 10 November 2021
• Author: Wanbo Li, Hang Pu, Ruijuan Wang
• Journal Name: IEEE International Conference on Artificial
Intelligence and Computer Applications (ICAICA)
• Summary: In this paper, a sign language recognition system based on computer vision is designed. It uses a CNN to extract features from the ASL data corpus and feeds them into an LSTM classifier to recognize character-level sign language (American Sign Language letters and Arabic numerals). The system also translates sign language, converting user text or speech input into the corresponding American Sign Language or Arabic-numeral signs. Experiments measured a sign language recognition accuracy of 95.52%.

Paper-5: Sign Language Recognition


• Publication Year: August 2021
• Author: Satwik Ram Kodandaram, N Pavan Kumar, Sunil G L
• Journal Name: Turkish Journal of Computer and Mathematics
Education.
• Summary: In conclusion, the authors successfully developed a practical and meaningful system that can understand sign language and translate it into the corresponding text. The system still has shortcomings: it can detect hand gestures for the digits 0-9 and the letters A-Z, but does not cover body gestures or other dynamic gestures. The authors are confident it can be improved and optimized in the future.

Paper-6: A Review Paper on Sign Language Recognition for the Deaf and Dumb
• Publication Year: 10 October 2021
• Author: R Rumana , Reddygari Sandhya Rani , Mrs. R. Prema.
• Journal Name: International Journal of Engineering Research &
Technology (IJERT)

• Summary: In this report, a functional real-time vision-based American Sign Language recognition system for Deaf and Dumb people was developed for the ASL alphabet, achieving a final accuracy of 92.0% on the dataset. Prediction improved after implementing a second layer of algorithms that verifies and re-predicts symbols that closely resemble each other. In this way almost all symbols can be detected, provided they are shown properly, there is no noise in the background, and lighting is adequate.

Paper-7: Sign Language Recognition Based on Computer Vision


• Publication Year: 05 May 2022
• Author: Ravindra Bula, Dipalee Golekar, Rutuja Hole, Sidheshwar
Katare, S. R. Bhujbal
• Journal Name: International Journal of Creative Research Thoughts
• Summary: A sign language recognition system is a powerful tool for preparing expert knowledge, performing edge detection, and combining imprecise information from different sources. The intent of the convolutional neural network is to obtain the appropriate classification.


3.5.1 Requirements:

A software requirements specification (SRS) is a description of a software system to be developed. It is defined after the business requirements specification (CONOPS), also called the stakeholder requirements specification (StRS); another related document is the system requirements specification (SyRS).

3.5.2 Hardware And Software Requirements

All computer software needs certain hardware components or other software resources to be present on a computer. These prerequisites are known as (computer) system requirements and are often used as a guideline rather than an absolute rule. Most software defines two sets of system requirements: minimum and recommended. With increasing demand for higher processing power and resources in newer versions of software, system requirements tend to increase over time. Industry analysts suggest that this trend plays a bigger part in driving upgrades to existing computer systems than technological advancements. A second meaning of the term system requirements is a generalization of this first definition, giving the requirements to be met in the design of a system or subsystem.

3.5.3 Software Requirements:

• Operating System: Windows 10
• Coding Language: Python 3.9
• Front End: Streamlit 3.7, Python
• Back End: Python 3.9
• Python Modules: Pickle 1.2.3

3.5.4 Hardware Requirements:

Server:

• Operating System: 64-bit Windows 7 or above
• System Processor: Intel i3 processor or higher
• RAM: 8 GB or higher
• Browser: Google Chrome (recommended), Mozilla Firefox
• Hard Disk: 500 GB

Client:

• RAM: 128 MB
• Processor: Intel i3 processor or higher
• Hard Disk: 10 GB

3.6 Analysis Model: SDLC Model to Be Applied:

• The Software Development Life Cycle (SDLC) is a methodology for designing, developing, and testing high-quality software.
• The primary goal of the SDLC is to produce high-quality software that meets the customer's requirements on time and within the estimated cost.
• The Agile Software Development Life Cycle (SDLC) is a combination of iterative and incremental process models.
• It focuses on process adaptability and customer satisfaction by quickly delivering a functional software product. Agile SDLC breaks the product down into small incremental builds, which are delivered in iterations.

Phases:
• Requirement gathering and analysis
• Design the requirements
• Construction/iteration
• Deployment
• Testing
• Feedback

1. Requirement gathering and analysis:

At this stage you need to define the requirements, explain the business opportunity, and plan the time and effort required to build the project. Based on this information, you can evaluate the technical and economic feasibility.

2. Design the requirements:

After project identification, work with stakeholders to define


requirements. You can use a user flowchart or a high-level UML
diagram to show how new features work and how they will apply to
your existing system.

3. Construction/iteration:

Once the team has defined the requirements, the work begins: designers and developers start working on the project. Their goal is to deploy a working product within the estimated time. The product will go through various stages of improvement, so it initially contains simple, minimal functionality.

4. Deployment:

In this phase, the team releases the product into the user's work environment.

5. Testing:

In this phase, the quality assurance team examines the performance of the product and looks for defects.

6. Feedback:

After releasing the product, the last step is feedback. In this step, the
team receives feedback about the product and processes the
feedback.

Agile SDLC Process Flow:

Concept: Projects are thought out and prioritized.

Start: The team is formed, funding is secured, and the basic environment and requirements are discussed.
Iteration/Construction: The software development team works to deliver working software, based on requirements and feedback.
Release: Perform quality assurance (QA) testing, provide internal and external training, develop documentation, and release the final iteration of the product.

Production: This is ongoing software support.

3.7 System Implementation Plan:

Stages of the implementation phase

• Coding:

The implementation phase begins with coding the application. When coding, some fields must be adapted to match the interfaces in the system. This phase involves converting the design into a language that the system understands. It is usually very complicated and requires professionals from the various fields involved in the system to come up with appropriate code for controlling the proposed system. Different programming languages can be used to code the system depending on the required system functions. Once the coding of the system is complete, we can move to the next phase, which is the testing phase.

• Testing:

The main objective of the testing phase is to bring together all the programs that the system contains and confirm that they work as required. For the purposes of this project, and to make testing more successful, the testing phase is divided into two parts. The first part involves testing the entire system as a whole; a report is then generated and errors found in the system are corrected as necessary. Once this is done, the tests are run again to check whether any errors remain. In the second part, every component of the system is tested separately, and then all the components are integrated and tested together.

This is repeated until all errors are cleared from the system. The testing phase usually takes a lot of time, but it is a critical part of implementation, as it ensures that the system is working

as per the specified requirements. After completing the testing phase,
we can proceed to the installation phase.

• Installation:

Once testing is complete, all test components used in the system are removed from the server and the system is rebuilt completely from scratch. At the end of the installation of each system component, a test is run to ensure that the system works as required; if an error occurs, it is removed as soon as it appears. This helps ensure that any problem is detected early enough and fixed before the system moves to a new phase.

Installation is very important and must be done with utmost care to


ensure that the system is completed as specified in the requirements
phase. After successfully completing the installation and testing of the
system, an overall system validation should be performed. This
ensures that each component of the system functions as required and
that only the required outputs are produced. The next phase is to
document the activities performed in the system during the
implementation phase.

• Documentation:

This stage is very important and helps to define each stage and the
type of action performed on it. It also includes documentation of the
testing and installation phase. If the system is well documented, it will
help in future maintenance of the system as well as help in
determining the
possible cause of the error. System documentation also helps provide
users with a user guide on how the system works and how future
modifications can be obtained. Once the system is documented, training sessions can be organized to familiarize users of the system with how it works. During the documentation phase, the following documents are generated: user manuals, operation manuals, maintenance manuals, system manuals, and control documents.

• Training:

Training should be conducted by people who are familiar with the system; it is necessary to involve professionals from the different stages of the implementation phase, as this helps impart a good understanding of the system. It makes more sense if the training is done from within, especially by the staff who participated in the implementation phase. The training is primarily aimed at equipping system users with operational and troubleshooting information for the newly designed system. The system can then be put into operation, but its operation should overlap with that of the old system for a time. This helps users become familiar with the new system while the various files are transferred to it.

CHAPTER 4
SYSTEM DESIGN


4.1 System Architecture:

• Design is the first step in the development stage. It applies techniques and principles to define a device, process, or system in enough detail to enable its physical realization.
• After the software requirements have been analyzed and specified, software design includes the technical activities required to build and verify the software: design, coding, implementation, and testing.
• Design activities are of central importance at this stage. This activity makes decisions that ultimately affect the success and maintainability of the software implementation.
• These decisions ultimately affect system reliability and maintainability. Design is the only way to accurately translate a customer's requirements into a finished software system.
• Design is where quality is fostered in development. Software design is the process of transforming requirements into a software representation. It is done in two steps: preliminary design transforms the requirements into a data and software architecture.

Fig 1. System Design

To design a system for sign language recognition using machine learning, we can follow these steps:

1. Data Collection: The first component of the system involves collecting a large dataset of sign language gesture images and videos covering the signs to be recognized. This dataset will be used to train the machine learning models.

2. Data Preprocessing: The collected data will be preprocessed to handle noisy or unusable samples and to perform feature scaling, such as normalizing pixel values. This component of the system involves cleaning and preparing the data for model training.
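As a sketch of this step (with made-up shapes, since the actual dataset details vary), pixel scaling and a train/test split might look like:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend dataset: 100 grayscale gesture images of 64x64 pixels, 10 classes.
images = rng.integers(0, 256, size=(100, 64, 64)).astype("float32")
labels = rng.integers(0, 10, size=100)

images /= 255.0                     # feature scaling: pixel values into [0, 1]

# Shuffle the samples, then hold out 20% as a test split.
idx = rng.permutation(len(images))
split = int(0.8 * len(images))
x_train, x_test = images[idx[:split]], images[idx[split:]]
y_train, y_test = labels[idx[:split]], labels[idx[split:]]
```

Shuffling before splitting prevents the test set from being biased toward whichever gestures were captured last.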

NMIET, Department of Computer Engineering 2023-2024


44
3. Model Training: This component involves training machine learning models, in particular convolutional neural networks (CNNs), on the preprocessed data. The trained models will be used for gesture prediction.

4. Model Selection: The performance of the trained models will be compared using metrics such as accuracy, precision, and recall, and the best-performing model will be selected for gesture prediction.

5. Model Evaluation: The selected model will be evaluated on a separate test dataset to measure its accuracy and reliability in recognizing signs. This component of the system involves testing the model and measuring its performance.

6. User Interface Development: The final component of the system involves developing a user-friendly interface that allows users to present gestures and receive the recognized sign as text. The interface will be designed to provide an easy-to-use tool for sign recognition.
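One way to strengthen model selection is the horizontal voting ensemble named in the project title: instead of keeping only the single best checkpoint, the softmax outputs of models saved over the final training epochs are averaged. A minimal NumPy sketch with hypothetical prediction arrays:

```python
import numpy as np

def horizontal_vote(prob_list):
    """Average the softmax outputs of models saved at different epochs
    and take the argmax as the ensemble prediction."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Hypothetical softmax outputs of 3 epoch snapshots on 4 samples, 3 classes.
p1 = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.2, 0.7], [0.4, 0.4, 0.2]])
p2 = np.array([[0.5, 0.4, 0.1], [0.1, 0.6, 0.3], [0.2, 0.1, 0.7], [0.3, 0.5, 0.2]])
p3 = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3], [0.1, 0.3, 0.6], [0.2, 0.6, 0.2]])

preds = horizontal_vote([p1, p2, p3])   # one class label per sample
```

Averaging over several late-epoch snapshots smooths out the epoch-to-epoch variance of a single checkpoint, which is the motivation for this ensemble.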

4.2 Data Flow Diagram:

• A Data Flow Diagram (DFD) maps the flow of information through a process or system.
• Defined symbols such as rectangles, circles, arrows, and short text labels are used to indicate data inputs, outputs, storage points, and the routes between them.
• Data flow diagrams range from simple hand-drawn process overviews to detailed multi-level DFDs that delve progressively deeper into how the data is handled.
• They can be used to analyze existing systems or to model new ones. Like the best charts and diagrams, DFDs can visually "tell" things that are difficult to explain in words, and they are useful to both technical and non-technical people, from developers to CEOs. This is why DFDs remain so popular after so many years.

While they work well for data-flow-oriented software and systems, they are less well suited to visualizing interactive, real-time, or database-oriented software and systems.

Fig 2 Data Flow Diagram

4.3 Entity Relationship Diagram:

An ER diagram (Entity Relationship Diagram, or ERD) shows the relationships among a set of entities stored in a database. In other words, ER diagrams help explain the logical structure of a database.
• ER diagrams are built on three basic concepts: entities, attributes, and relationships.
• ER diagram notation mainly uses three basic symbols, the rectangle, the oval, and the diamond, to represent entities, attributes, and the relationships between them.
• Based on these main elements, there are several sub-elements.
• An ER diagram is a visual representation of data that shows how pieces of data relate to each other, using the various ERD symbols and notations.

The main components of an ER diagram and their symbols are:

Rectangle: represents an entity type.
Oval: represents an attribute.
Diamond: represents a relationship type.
Line: links an attribute to an entity.
Primary key: the attribute is underlined.
Double ellipse: represents a multi-valued attribute.

4.4 Convolution Neural Network (CNN) Architectures

4.4.1 LeNet-5

The LeNet-5 architecture consists of two pairs of convolutional and average pooling layers, followed by a flattening convolutional layer, then two fully connected layers, and finally a softmax classifier.

4.4.2 MobileNetV2

MobileNetV2 [3] is a convolutional neural network architecture that performs well on mobile devices. The architecture of MobileNetV2 contains an initial fully convolutional layer with 32 filters, followed by 19 residual bottleneck layers. The network is lightweight and efficient.
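Much of this efficiency comes from depthwise separable convolutions. A back-of-the-envelope comparison of weight counts against a standard convolution (channel sizes here are illustrative, not taken from the actual network):

```python
def standard_conv_params(k, c_in, c_out):
    # A standard convolution uses one k x k x c_in kernel per output channel.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise (one k x k filter per input channel) + pointwise (1x1) conv.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 32, 64)    # 18432 weights
sep = separable_conv_params(3, 32, 64)   # 2336 weights
ratio = std / sep                        # roughly 8x fewer parameters
```

This parameter saving is why MobileNet-style networks run comfortably on phones and other constrained devices.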

4.5 UML Diagram:

• UML is a method of visualizing software programs using a series of diagrams. The notation evolved from the work of Grady Booch, James Rumbaugh, Ivar Jacobson, and the Rational Software Corporation. It was originally used for object-oriented design, but has since been extended to cover a wide variety of software development projects.
• Today, UML is accepted by the Object Management Group (OMG) as the standard for modeling software development.
• UML stands for Unified Modeling Language. UML 2.0 extended the original UML specification to cover the majority of software development work, including Agile practices.
• It improved integration between structural models such as class diagrams and behavioral models such as activity diagrams.
• It added the ability to define hierarchies and decompose software systems into components and subcomponents.

4.5.1 Use Case Diagram
Use cases sit between requirements analysis and design, representing system behavior. A use case describes a function of the system that produces a visible result for an actor. Identifying actors and their use cases involves drawing the boundaries of the system, separating the work done by the system from that of its environment. Actors are outside the system, while use cases are within it. A use case describes the system as seen from an actor's point of view: it describes the function provided by the system as a set of events that yield a visible result for the actor.

Fig 6. Use Case Diagram

4.5.2 Class Diagram

A class diagram describes the structure and contents of classes using design elements such as classes, packages, and objects. Classes are made up of three parts: a name, attributes, and operations. Class diagrams also display relationships such as inheritance, association, and so on. Association is the most common relationship in a class diagram; it refers to a relationship between instances of classes.

Fig 7. Class Diagram

4.5.3 Sequence Diagram
A sequence diagram displays the time ordering of the objects participating in an interaction. It has a vertical dimension (time) and a horizontal dimension (the different objects). An object can be viewed as an entity that has a specific value at a specific time and is the holder of an identity. The sequence diagram shows the interactions of objects presented chronologically: the order of the messages exchanged between the objects in the view, and the objects needed to realize the functionality of the scenario. Sequence diagrams are commonly used to capture the logical view of the system under development.

Fig 8 . Sequence Diagram

4.5.4 State Chart
When an object calls a method, a message is used and a new activation box is added on the receiver's lifeline to indicate the nesting level of processing. If an object is destroyed (removed from memory), an X is drawn at the end of its lifeline. Every message should be the result of an action, either from an object or from outside the diagram. Messages from outside the scope can be shown as originating from a circle (a found message in UML) or from the boundary of the diagram (a gate in UML). Combined fragments can be nested within each other and are used to model parallelism, conditional branching, and alternative interactions.

Fig 9. State Chart

CHAPTER 5
OTHER SPECIFICATION

5.1 Advantages:
• Real-time Sign Language Translation: Sign language recognition can
convert sign language gestures into text or speech, enabling real-time
communication between individuals who use sign language and those who
don't.
• Sign language recognition can be integrated into smartphones and tablets,
allowing Deaf and Hard of Hearing individuals to use sign language for
voice commands and text messages.
• Websites and online platforms can use sign language recognition to
provide sign language interpretation for multimedia content.
• Researchers can use sign language recognition to analyze sign language
communication patterns for various purposes, including sociolinguistic
studies.

5.2 Applications and Limitations

Applications

• Communication Assistance: It can assist sign language interpreters by transcribing sign


language into spoken or written language, reducing interpreter fatigue and improving
accessibility during events, conferences, and meetings.
• Educational Support: Sign language recognition can be integrated into educational
materials, making them more accessible for Deaf or Hard of Hearing students.
• Customer Support and Service: Sign language recognition can be employed in call centers
to provide support for Deaf or Hard of Hearing customers, facilitating communication via
video calls.
• Medical and Healthcare: Sign language recognition can improve communication between
healthcare providers and Deaf or Hard of Hearing patients, ensuring accurate medical
information exchange.

Limitations

• Limited Dataset Diversity: Acquiring a diverse dataset representing various sign languages,
dialects, and gestures may be challenging. This limitation could affect the model's ability to
generalize across different sign language variations accurately.
• Complexity of Gesture Interpretation: Sign language gestures can be intricate and context-
dependent, leading to challenges in accurately interpreting their meaning. Variability in
hand movements, facial expressions, and body language adds complexity to the detection
process, potentially affecting the model's performance.

CHAPTER 6
IMPLEMENTATION

GUI Design:

• Graphical user interface design principles conform to the model–view–controller


software pattern, which separates internal representations of information from the
manner in which information is presented to the user, resulting in a platform
where users are shown which functions are possible rather than requiring the input
of command codes.
• Users interact with information by manipulating visual widgets, which are designed
to respond in accordance with the type of data they hold and support the actions
necessary to complete the user’s task.
• The appearance, or “skin,” of an operating system or application software may be
redesigned at will due to the nature of graphical user interfaces being independent
from application functions.
• Applications typically implement their own unique graphical user interface display
elements in addition to graphical user interface elements already present on the
existing operating system.
• A typical graphical user interface also includes standard formats for representing
graphics and text, making it possible to share data between applications running
under common graphical user interface design software.

6.1 Database:

• Gesture Data Storage:

Image and Video Data: Sign language gestures are typically captured as images or videos. We can use a file storage system, or a combination of a database and a file system, to store the gesture data.

• Metadata and Annotations:

Relational Databases (e.g., PostgreSQL, MySQL): Use a relational database to store metadata and annotations associated with sign language gestures. This can include information such as gesture labels, timestamps, user IDs, and any other relevant details.

• User Profiles and Preferences:

User Profile Databases: Store user profiles, preferences, and interaction history in a dedicated database for user management. Consider databases such as Firebase for mobile applications, or NoSQL databases designed for user data.

• Data Security and Privacy:

Compliance with Data Protection Regulations: Ensure that the chosen database complies with data protection regulations and provides mechanisms for securing user data, especially if personal information is stored.

• Scalability and Performance:

Cloud-based Databases (e.g., Amazon DynamoDB, Google Cloud Firestore): If a large user base is anticipated and scalability and high availability are needed, consider cloud-based databases. They offer scalability options and are well suited to applications with varying workloads.
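As a sketch of the metadata store described above, Python's built-in sqlite3 module is enough for a prototype (the table and column names here are illustrative, not part of the actual system):

```python
import sqlite3

conn = sqlite3.connect(":memory:")      # use a file path in a real deployment
conn.execute("""
    CREATE TABLE gestures (
        id          INTEGER PRIMARY KEY,
        label       TEXT NOT NULL,      -- e.g. the letter 'A'
        file_path   TEXT NOT NULL,      -- location of the image/video frame
        user_id     TEXT,
        captured_at TEXT
    )
""")
conn.execute(
    "INSERT INTO gestures (label, file_path, user_id, captured_at) "
    "VALUES (?, ?, ?, ?)",
    ("A", "data/a/0001.png", "user42", "2023-10-01T10:00:00"),
)
rows = conn.execute("SELECT label, file_path FROM gestures").fetchall()
```

Keeping only metadata in the database and the image files on disk (referenced by `file_path`) matches the storage split described above.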

CHAPTER 7
CONCLUSION& FUTURE WORK

CONCLUSION

The development and implementation of sign language recognition technology


represent a significant advancement in the pursuit of a more inclusive and
accessible society. This report has delved into the principles, challenges, and
applications of sign language recognition using machine learning.
Sign language recognition has the potential to break down communication
barriers for individuals who are Deaf or Hard of Hearing, enhancing their ability to
interact with the broader community. From real-time translation to educational
support and accessibility features, the applications of this technology are diverse
and far-reaching.
In the years to come, the collaboration of researchers, developers, educators, and
the Deaf community will continue to drive innovation in this field, ultimately
fostering a more inclusive and equitable world for all. Sign language recognition
stands as a testament to the power of technology to bridge gaps and promote
understanding among individuals with diverse communication needs.

FUTURE WORK

The field of sign language recognition is continually evolving, and there is still much work to be done to improve the technology and expand its applications:

• Enhance the accuracy and robustness of recognition systems by developing more sophisticated machine learning models, including deep learning architectures.
• Develop models that can recognize sign language gestures from different signers, accounting for individual variations in signing styles.
• Investigate sign language generation technology, which can convert spoken or written language into sign language gestures, further enhancing accessibility for Deaf individuals.
• Research and implement advanced privacy and security mechanisms to protect the sensitive data involved in sign language recognition, especially in applications involving personal or medical information.
• Continue collaborating with Deaf and Hard of Hearing individuals in the development process to ensure that the technology meets their specific needs and cultural considerations.
• Investigate the ethical implications of sign language recognition technology, including issues related to consent, data protection, and potential misuse.

The future of sign language recognition holds great promise for bridging communication gaps and making the world more inclusive. As technology advances and more research is conducted, sign language recognition systems will become more accurate, versatile, and integrated into various aspects of daily life.

REFERENCES

1. Prof. Radha S. Shirbhate, Mr. Vedant D. Shinde, Ms. Sanam A. Metkari, Ms. Pooja U. Borkar, Ms. Mayuri A. Khandge, "Sign Language Recognition Using Machine Learning Algorithm," International Research Journal of Engineering and Technology (IRJET).
2. Madhuri Sharma, Ranjna Pal and Ashok Kumar Sahoo, "Indian Sign Language Recognition Using Neural Networks and KNN Classifiers," ARPN Journal of Engineering and Applied Sciences, 2014.
3. Le, Trong T. et al., "Deep Learning for Hand Sign Language Recognition," IEEE Transactions on Image Processing, 2023.
4. Starner, Thad et al., "Real-time American Sign Language Recognition from Video Using Hidden Markov Models," ACM, 2022.
5. Athitsos, Vassilis et al., "The Challenging CLEVR-KiDS Database: Tools and Methodology for Hand Gesture Recognition," Proceedings of the 10th ACM International Conference on Multimodal Interfaces, 2020.
6. Li, Ying et al., "Sign Language Recognition Using LSTM and CNN," IEEE Access, 2020.
7. Sunitha K. A., Anitha Saraswathi P., Aarthi M., Jayapriya K., Lingam Sunny, "Deaf Mute Communication Interpreter - A Review," International Journal of Applied Engineering Research, Volume 11, pp. 290-296, 2020.
8. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov and L. Chen, "MobileNetV2: Inverted Residuals and Linear Bottlenecks," IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4510-4520, doi: 10.1109/CVPR.2018.00474.
9. Pfister, Thomas et al., "Recognizing American Sign Language Using Kinect," International Conference on Computer Vision, 2011.
10. R. S. Shirbhate, V. D. Shinde, S. A. Metkari, P. U. Borkar, M. A. Khandge, "Sign-Language Recognition System," IRJET, Vol. 3, March 2020.

