Publication Date: 2023/04/05
Abstract: Individuals with hearing and speech disabilities use sign language as their primary mode of communication to express their thoughts, ideas, feelings, and opinions to the rest of the world. As visual languages, sign languages convey information over multiple complementary channels: manual features such as hand shape, movement, and pose, together with non-manual cues like facial expression and lip movement. For someone who has never learned the language, these gestures are easy to mix up and confuse. Our project aims to bridge this gap by recognizing hand gestures and converting them into readable text and audio speech using machine learning algorithms; it also converts written text into hand gestures. Sign language recognition and translation let the system learn spatial representations, the underlying language model, and the mapping between sign and spoken language in real time (a minimal sketch of this pipeline appears below).
DOI: https://doi.org/10.5281/zenodo.7800718
PDF: https://ijirst.demo4.arinfotech.co/assets/upload/files/IJISRT23MAR1660.pdf
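As a rough illustration of the recognition-to-speech pipeline the abstract describes, the sketch below classifies a flattened hand-landmark feature vector into a sign label and then vocalizes it. The paper does not name its model or libraries; the k-nearest-neighbors classifier, the pyttsx3 text-to-speech engine, the 21-landmark feature layout, and the placeholder training data and labels are all assumptions made here for illustration only.

```python
# Minimal illustrative sketch: hand-landmark vector -> sign label -> speech.
# The classifier, labels, and data are placeholders, not the paper's system.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
import pyttsx3  # offline text-to-speech engine

# Hypothetical training data: each row is a flattened (x, y) landmark vector
# for one hand pose; in a real system these would come from a hand tracker.
X_train = np.random.rand(30, 42)  # 30 samples, 21 landmarks * 2 coordinates
y_train = np.random.choice(["hello", "thanks", "yes"], size=30)

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

def gesture_to_speech(landmarks: np.ndarray) -> str:
    """Map one landmark vector to a sign label and speak it aloud."""
    label = clf.predict(landmarks.reshape(1, -1))[0]
    engine = pyttsx3.init()
    engine.say(label)
    engine.runAndWait()
    return label

print(gesture_to_speech(np.random.rand(42)))
```

In a complete system, the landmark vectors would be extracted in real time from camera frames (for example, by a hand-tracking library), and the placeholder classifier would be replaced by whatever model the paper actually trains.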