Sign-Prac: Real-Time Sign Language Recognition and Practice System
Dr. N. Venkateswara Rao
Professor, Department of Computer Science and Engineering (AI&ML), R.V.R. & J.C. College of Engineering, Chowdavaram, Guntur, Andhra Pradesh.
J. Prasanna Chandrika
B. Harsha Vardhan
G. Rajani
UG Students, Department of Computer Science and Engineering (AI&ML), R.V.R. & J.C. College of Engineering, Chowdavaram, Guntur, Andhra Pradesh.
Abstract:
In today’s technologically advanced world, bridging the communication gap between hearing-impaired individuals and the rest of society is a critical challenge. Sign language serves as a primary mode of communication for the deaf and hard-of-hearing community. However, because relatively few people are proficient in sign language, a significant communication barrier remains. This project aims to address this gap by developing an intelligent, real-time sign language recognition system using deep learning techniques. The proposed system combines computer vision and deep learning algorithms to accurately recognize hand gestures representing sign language. Leveraging tools such as MediaPipe for hand tracking and a Convolutional Neural Network (CNN) or keypoint-based classifier for gesture classification, the system processes live video input or uploaded images to identify signs and convert them into readable text. The model is trained on a custom or publicly available sign language dataset to ensure accuracy and robustness across varied lighting conditions and hand orientations. Key modules of the system include data preprocessing, feature extraction, model training on gesture sequences, and real-time inference. The model demonstrates high classification accuracy and low latency, making it suitable for real-world applications such as education, customer service, and accessibility platforms. This project not only highlights the potential of artificial intelligence in assistive technologies but also contributes to fostering inclusivity and equal communication opportunities for all individuals, regardless of physical ability.
Keywords: Sign Language Recognition (SLR), Real-Time Gesture Recognition, Deaf and Hard-of-Hearing Communication, MediaPipe, Deep Learning, Computer Vision, DualNet-SLR, Point History Network, Keypoint History Network, Streamlit Interface.
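As a minimal sketch of the pipeline described in the abstract (MediaPipe hand tracking feeding a keypoint-based classifier for real-time inference), the Python snippet below extracts 21 hand landmarks per webcam frame, normalises them relative to the wrist, and passes them to a pre-trained classifier. The model file name (gesture_classifier.h5), the label set, and the normalisation scheme are illustrative assumptions, not the authors' exact implementation.

# Minimal sketch of the keypoint-based recognition flow: MediaPipe hand
# tracking -> keypoint normalisation -> classifier -> on-screen text.
# "gesture_classifier.h5" and the label list are hypothetical placeholders.
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

mp_hands = mp.solutions.hands

def extract_keypoints(frame_bgr, hands):
    """Return a flattened, wrist-relative 21x(x, y) keypoint vector, or None."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    pts = np.array([[p.x, p.y] for p in lm], dtype=np.float32)
    pts -= pts[0]                       # translate so the wrist is the origin
    scale = np.abs(pts).max() or 1.0    # normalise for hand size / distance
    return (pts / scale).flatten()      # shape: (42,)

def run_realtime(model_path="gesture_classifier.h5", labels=("A", "B", "C")):
    model = tf.keras.models.load_model(model_path)
    cap = cv2.VideoCapture(0)
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            keypoints = extract_keypoints(frame, hands)
            if keypoints is not None:
                probs = model.predict(keypoints[np.newaxis, :], verbose=0)[0]
                label = labels[int(np.argmax(probs))]
                cv2.putText(frame, label, (10, 40),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
            cv2.imshow("Sign-Prac", frame)
            if cv2.waitKey(1) & 0xFF == 27:   # press Esc to quit
                break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run_realtime()

This sketch mirrors the preprocessing, feature-extraction, and real-time-inference modules listed in the abstract; in the full system a Streamlit front end would take the place of the OpenCV display window.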