Automated Glaucoma Detection and Severity Classification from Retinal Fundus Images Using CNN and U-Net Deep Learning Architecture
Kanta Rohitha¹, M. Joshithe Maha Lakshmi², M. Satyanarayane Reddy³,
M. Surya Sar Abhilash Reddy⁴
¹²³⁴ Department of Computer Science and Engineering (AI & ML)
Aditya College of Engineering and Technology (Autonomous)
Surampalem, Kakinada District, Andhra Pradesh – 533437, India
Guide: Mr. K. Govindaraju, M.Tech., (Ph.D.), Assistant Professor
Abstract
Glaucoma, a leading cause of irreversible blindness affecting over 80 million individuals worldwide, is characterized by progressive optic nerve damage often associated with elevated intraocular pressure. Early detection is critical because the disease progresses asymptomatically until significant vision loss has occurred. This paper presents an automated glaucoma detection and severity classification system using a dual deep learning architecture: a Convolutional Neural Network (CNN) for binary classification (glaucomatous vs. non-glaucomatous) and a U-Net for semantic segmentation of the optic cup and disc regions in retinal fundus images. The system computes the Cup-to-Disc Ratio (CDR) from the U-Net segmentation masks and classifies severity into three levels: Mild (CDR 0.3–0.5), Moderate (CDR 0.5–0.7), and Severe (CDR > 0.7). Trained on a Kaggle dataset of 3,600 fundus images, the CNN classifier achieves 96.2% accuracy, 95.8% precision, 96.5% recall, and an F1-score of 96.1%. The U-Net segmentation achieves a Dice coefficient of 0.912 for the optic disc and 0.874 for the optic cup. A web-based interface built with a React.js frontend and a Flask backend enables users to upload fundus images and receive an immediate diagnosis with severity classification and preventive guidance. The system addresses the global shortage of ophthalmologists by enabling accessible, automated glaucoma screening.
Keywords: Glaucoma Detection, Deep Learning, Convolutional Neural Networks, U-Net, Fundus Image, Cup-to-Disc Ratio, Optic Nerve, Semantic Segmentation
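The CDR computation and severity banding described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes binary cup and disc masks as NumPy arrays, uses the vertical CDR (ratio of vertical diameters, a common convention in glaucoma screening), and labels CDR below 0.3 as "Normal", a band the abstract does not name explicitly.

```python
import numpy as np

def vertical_diameter(mask):
    """Vertical extent of a binary mask: number of rows containing
    at least one foreground pixel."""
    rows = np.any(mask > 0, axis=1)
    return int(rows.sum())

def compute_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks."""
    disc_d = vertical_diameter(disc_mask)
    if disc_d == 0:
        raise ValueError("Empty disc mask: cannot compute CDR")
    return vertical_diameter(cup_mask) / disc_d

def classify_severity(cdr):
    """Map a CDR value to the severity bands stated in the abstract.
    The 'Normal' label for CDR < 0.3 is an assumption."""
    if cdr > 0.7:
        return "Severe"
    if cdr > 0.5:
        return "Moderate"
    if cdr >= 0.3:
        return "Mild"
    return "Normal"
```

For example, a disc mask spanning 10 rows with a cup mask spanning 6 rows yields a CDR of 0.6, which falls in the Moderate band. In practice the masks would come from thresholding the U-Net's per-pixel predictions.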