Navigation Using AI for Visually Impaired People
1Dr. Poornima Raikar, 2Khushi R Borkar, 3Kirti Katavakar, 4Simaran M Ansari, 5Simran Y Khan, 6Pandurang Parwatikar
1Head of Department, CSE (AI & ML), KLS Vishwanathrao Deshpande Institute of Technology, Haliyal, India
2Student, Department of Computer Science & Engineering, KLS Vishwanathrao Deshpande Institute of Technology, Haliyal, India
3Student, Department of Computer Science & Engineering, KLS Vishwanathrao Deshpande Institute of Technology, Haliyal, India
4Student, Department of Computer Science & Engineering, KLS Vishwanathrao Deshpande Institute of Technology, Haliyal, India
5Student, Department of Computer Science & Engineering, KLS Vishwanathrao Deshpande Institute of Technology, Haliyal, India
6Business Architecture Associate Manager, Accenture, Bangalore, India
Abstract— Navigation remains a significant challenge for visually impaired individuals due to the lack of effective, real-time assistance systems. Current solutions rely primarily on traditional aids such as guide dogs and canes, which do not provide dynamic, adaptive navigation support. This paper proposes an AI-based navigation system that enhances mobility for visually impaired individuals by integrating real-time object detection with text-to-speech (TTS) feedback. The system employs the YOLOv5 object detection model to recognize obstacles and hazards, while Google Text-to-Speech (gTTS) delivers immediate auditory instructions to the user. Additionally, natural language processing (NLP) enables seamless interaction with the system. Our results demonstrate the system's efficacy in real-time obstacle detection and navigation, offering an innovative solution for enhancing the independence and safety of visually impaired users. The impact of this system could be transformative, paving the way for more accessible and intelligent assistive technologies.
Keywords— AI Navigation, Visually Impaired, Object Detection, YOLOv5, gTTS (Google Text-to-Speech), Assistive Technology
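To make the detection-to-speech pipeline summarized above concrete, the following is a minimal sketch of such a loop, assuming the publicly available pretrained yolov5s checkpoint loaded via torch.hub, a webcam as the video source, and a 0.5 confidence threshold; these choices are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: YOLOv5 detection feeding gTTS auditory alerts.
# Model size, confidence threshold, and camera source are assumptions.
import cv2
import torch
from gtts import gTTS

# Load a pretrained YOLOv5 model from the Ultralytics hub.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.conf = 0.5  # discard low-confidence detections

def announce(message, path='alert.mp3'):
    """Convert a short instruction to speech with gTTS and save it to disk.
    Audio playback is platform dependent and omitted here."""
    gTTS(text=message, lang='en').save(path)

cap = cv2.VideoCapture(0)  # default camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # YOLOv5 expects RGB images; OpenCV captures in BGR order.
    results = model(frame[..., ::-1])
    detections = results.pandas().xyxy[0]  # one row per detected object

    if not detections.empty:
        # Announce the most confident detection in the current frame.
        top = detections.sort_values('confidence', ascending=False).iloc[0]
        announce(f"{top['name']} ahead")

cap.release()
```

In practice the announcement step would need rate limiting (for example, repeating an alert only after a cooldown interval) so the user is not overwhelmed by per-frame speech output.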