Survey on Real-Time Hand Gesture Recognition for Speech-Impaired People Using Convolutional Neural Networks
Sahil A. Mujawar1, Dr. Sangram T. Patil2, Dr. Jaydeep B. Patil3
1 Student, Department of CSE (DS), D Y Patil Agriculture and Technical University, Talsande, Kolhapur
2 Dean, School of Engineering & Technology, D Y Patil Agriculture and Technical University, Talsande
3 Associate Professor & HoD, CSE, D Y Patil Agriculture and Technical University, Talsande, Kolhapur
1 Sahilmujawar1555@gmail.com, 2 sangrampatil@dyp-atu.org, 3 jaydeeppatil@dyp-atu.org
--------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - Effective communication is essential to human interaction, yet speech-impaired individuals often face significant barriers in expressing themselves, particularly when engaging with people unfamiliar with sign language. Real-time gesture recognition systems offer a transformative solution by interpreting hand gestures and converting them into audible speech or text, enabling seamless communication. This review explores state-of-the-art advancements in gesture recognition systems, emphasizing the integration of Convolutional Neural Networks (CNNs) for precise and efficient gesture classification. CNNs have advanced gesture recognition through their ability to learn complex features directly from input data, improving accuracy and real-time adaptability. The paper also highlights the role of Internet of Things (IoT) integration in extending these systems: IoT enables multilingual support and control of smart devices, promoting autonomy and accessibility for speech-impaired users. By analyzing existing techniques, datasets, and performance metrics, the review identifies critical gaps, including the lack of dataset standardization, the difficulty of maintaining high accuracy across diverse environments, and the need for scalable real-time solutions. It concludes with a discussion of future directions, including multimodal systems that incorporate facial expressions and body movements, as well as personalized gesture recognition models. These advances have the potential to enhance inclusivity and empower speech-impaired individuals, fostering a more accessible and connected society.
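To make the CNN-based classification pipeline described in the abstract concrete, the following is a minimal, illustrative forward pass in pure NumPy: one convolution layer extracts feature maps from a gesture frame, ReLU and max-pooling condense them, and a dense softmax layer produces class probabilities. All sizes and names here (a 28x28 grayscale frame, 8 kernels, 26 gesture classes, untrained random weights) are assumptions for illustration only; a deployed system would use a trained deep network from a framework such as TensorFlow or PyTorch.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernels):
    # Valid-mode 2D convolution: one feature map per kernel.
    k = kernels.shape[1]
    h, w = image.shape[0] - k + 1, image.shape[1] - k + 1
    out = np.empty((kernels.shape[0], h, w))
    for f, ker in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                out[f, i, j] = np.sum(image[i:i+k, j:j+k] * ker)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(maps, size=2):
    # Non-overlapping max-pooling over each feature map.
    f, h, w = maps.shape
    h2, w2 = h // size, w // size
    return maps[:, :h2 * size, :w2 * size].reshape(f, h2, size, w2, size).max(axis=(2, 4))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Dummy 28x28 grayscale "gesture" frame and untrained weights (illustrative only).
image = rng.random((28, 28))
kernels = rng.standard_normal((8, 3, 3)) * 0.1          # 8 learnable feature detectors
features = max_pool(relu(conv2d(image, kernels)))       # shape (8, 13, 13)
weights = rng.standard_normal((26, features.size)) * 0.01  # dense layer: 26 gesture classes
probs = softmax(weights @ features.ravel())             # class probability distribution
print(features.shape, probs.shape)
```

In a real-time system this forward pass would run per video frame, with trained weights replacing the random ones and the argmax of `probs` mapped to a sign-language symbol for speech or text output.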
Key Words: Gesture Recognition, Convolutional Neural Networks (CNNs), Speech-Impaired Individuals, Real-Time Systems, Internet of Things (IoT), Multilingual Support, Accessibility, Human-Computer Interaction.