Publication Date: 2021/06/27
Abstract: People with hearing and speech disabilities face difficulty communicating with others, which creates a gap between them and the rest of society. To bridge this gap, they use a set of gestures, each with its own meaning, collectively known as sign language. Sign language is essential for deaf and mute people because it is their primary means of communication, both with hearing people and among themselves. In this application, we present a simple framework for sign language recognition: a system that detects predefined signs from hand gestures. Gesture detection requires only basic hardware, such as a camera, and simple interfacing. The result is a user-friendly application built on a Convolutional Neural Network (CNN). Hand gestures are recognized in three main steps. First, a dataset is created by capturing images, which are preprocessed by resizing, masking, and converting from RGB to grayscale. Second, the system is trained on this dataset using the CNN, and the trained classifier model recognizes a given sign. Finally, the recognized sign is displayed. We expect this application to offer technophiles a broad introduction to automated gesture and sign language recognition and to support future work in these fields; a minimal sketch of the pipeline is given after the listing below.
Keywords: Convolutional Neural Network; Preprocessing; Sign Language; ReLU
DOI: No DOI Available
PDF: https://ijirst.demo4.arinfotech.co/assets/upload/files/IJISRT21JUN525.pdf
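The following is a minimal sketch of the three-step pipeline described in the abstract, assuming OpenCV for preprocessing and Keras for the CNN. The input resolution (64x64), number of sign classes, layer sizes, and the Otsu-threshold masking step are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the described pipeline: preprocess captured frames, then classify with a small CNN.
import cv2
import numpy as np
from tensorflow.keras import layers, models

NUM_CLASSES = 10        # assumed number of predefined signs
IMG_SIZE = 64           # assumed input resolution

def preprocess(frame_bgr):
    """Convert an RGB/BGR frame to grayscale, resize it, and mask the hand region."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (IMG_SIZE, IMG_SIZE))
    # Illustrative mask: keep brighter (hand) pixels via Otsu thresholding, zero out the rest.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    masked = cv2.bitwise_and(gray, gray, mask=mask)
    return masked.astype("float32") / 255.0   # normalize to [0, 1]

def build_cnn():
    """Small CNN classifier with ReLU activations (as in the keywords)."""
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (hypothetical data): stack preprocessed frames into X with shape (N, 64, 64, 1)
# and integer labels y, then train and predict the sign for a new frame:
#   model = build_cnn()
#   model.fit(X, y, epochs=10, validation_split=0.2)
#   sign_id = model.predict(preprocess(frame)[None, ..., None]).argmax()
```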