Lip Reading using Deep Learning
Robin Anburaj B
Department of Artificial Intelligence and Machine Learning
Sri Shakthi Institute of Engineering and Technology Coimbatore, India
robinanburajb22aml@srishakthi.ac.in
Dinesh J
Department of Artificial Intelligence and Machine Learning
Sri Shakthi Institute of Engineering and Technology Coimbatore, India
dineshj22aml@srishakthi.ac.in
Deepak T
Department of Artificial Intelligence and Machine Learning
Sri Shakthi Institute of Engineering and Technology Coimbatore, India
deepakt22aml@srishakthi.ac.in
Mrs. R. Hemavathi
Department of Artificial Intelligence and Machine Learning
Sri Shakthi Institute of Engineering and Technology Coimbatore, India
hemavathiaiml@siet.ac.in
Abstract— Lip reading, the process of interpreting speech by visually observing the movements of the lips, has emerged as a critical area of research with applications spanning communication aids for the hearing impaired, silent speech interfaces, and enhanced human-computer interaction. This paper reviews recent advancements in lip reading technologies, focusing on the integration of machine learning and computer vision techniques. We explore state-of-the-art methods, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformer-based models, that have significantly improved the accuracy and robustness of lip reading systems. The study highlights the importance of large annotated datasets, such as GRID and LRW, which have facilitated the training of deep learning models.
Additionally, we examine multimodal approaches that combine visual information with audio signals to enhance performance, especially in noisy environments. Despite substantial progress, challenges remain in addressing speaker variability, low-resolution video, and real-time processing. Future research directions are discussed, emphasizing the need for more diverse datasets, improved model generalization, and real-world application testing. This comprehensive review underscores the potential of advanced lip reading technologies to revolutionize communication accessibility and human-computer interaction.
This paper presents a method for a vision-based lip reading system that uses a convolutional neural network (CNN) with an attention-based Long Short-Term Memory (LSTM) network. The dataset consists of video clips of speakers pronouncing words and sentences. A pretrained CNN extracts features from the preprocessed video frames, which are then passed to the LSTM to learn temporal characteristics. A final softmax layer produces the lip reading prediction. In the present work, experiments are performed with two pretrained models. The system achieves 80% accuracy using TensorFlow and ensemble learning.
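The CNN-feature-extraction → LSTM → softmax pipeline described above can be sketched in Keras. This is a minimal illustration, not the paper's implementation: the clip length (20 frames), frame size (64×64 grayscale), 10-word vocabulary, and the small stand-in CNN (in place of the unspecified pretrained backbone) are all assumptions, and the attention mechanism is omitted for brevity.

```python
# Minimal sketch of a CNN + LSTM lip-reading classifier (hypothetical
# hyperparameters; the paper's pretrained backbone and attention are omitted).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES, H, W, C = 20, 64, 64, 1   # frames per clip, frame size, channels
VOCAB_SIZE = 10                        # illustrative word vocabulary

# Per-frame CNN feature extractor (stand-in for a pretrained model).
frame_cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(H, W, C)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),   # one feature vector per frame
])

model = models.Sequential([
    # Apply the same CNN to every frame of the clip independently.
    layers.TimeDistributed(frame_cnn, input_shape=(NUM_FRAMES, H, W, C)),
    # LSTM learns the temporal dynamics of the lip movements.
    layers.LSTM(128),
    # Softmax layer yields the word-class probabilities.
    layers.Dense(VOCAB_SIZE, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# One dummy clip to confirm the tensor shapes line up end to end.
dummy = np.zeros((1, NUM_FRAMES, H, W, C), dtype="float32")
probs = model.predict(dummy, verbose=0)
print(probs.shape)  # (1, 10): one probability per vocabulary word
```

In practice the `frame_cnn` would be replaced by a pretrained backbone with frozen or fine-tuned weights, and the LSTM output would feed an attention layer before classification, as the paper describes.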
Keywords— CNN; RNN; LSTM; TensorFlow; lip reading; deep learning