A Hybrid Deep Learning Framework for Early Diagnosis of Neurological Disorders Using Multimodal Medical Imaging Data
Deepak Kumar Patel
Research Scholar
Department of Artificial Intelligence, Kalinga University, Naya Raipur
Abstract
Early and accurate diagnosis of neurodegenerative disorders such as Alzheimer’s disease (AD) and Parkinson’s disease (PD) is critical for effective clinical intervention. However, conventional diagnostic approaches, which rely on single-modality imaging or clinical assessment, often lack the sensitivity and specificity needed for early-stage detection. In this study, we propose a novel hybrid deep learning framework that integrates multimodal neuroimaging data (MRI, PET, and fMRI) using a combination of convolutional neural networks (CNNs), transformer-based encoders, and attention-driven fusion. The model is designed to capture both local anatomical patterns and global inter-modality dependencies for robust classification. We evaluate the framework on the publicly available ADNI and PPMI datasets, focusing on the classification of cognitive states: cognitively normal (CN), mild cognitive impairment (MCI), and AD. The proposed framework achieves strong performance, with an accuracy of 88.5%, an AUC of 0.915, and an F1-score of 0.88, outperforming several state-of-the-art baselines. Ablation studies confirm the contribution of the transformer and attention components, and a modality contribution analysis shows substantial diagnostic gains from multimodal integration. Interpretability is provided via Grad-CAM and attention heatmaps, which highlight clinically relevant brain regions such as the hippocampus and temporal lobes. These results demonstrate the promise of multimodal, interpretable AI for early neurological diagnostics. Future work will focus on prospective clinical validation, longitudinal modeling, and real-time clinical deployment.
Keywords: Multimodal neuroimaging, Deep learning, Alzheimer’s disease, Parkinson’s disease, Transformer networks
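
To make the fusion design concrete, Listing 1 gives a minimal PyTorch sketch of the architecture outlined in the abstract: per-modality CNN encoders, a transformer encoder applied over the resulting modality tokens, and attention-weighted fusion feeding a three-class (CN/MCI/AD) classifier. All layer sizes, module names, and the single-channel 2D-slice input format are illustrative assumptions, not the exact configuration reported in the paper.

Listing 1 (illustrative sketch):

import torch
import torch.nn as nn


class ModalityCNN(nn.Module):
    """Small CNN mapping one imaging modality (2D slices assumed) to an embedding."""

    def __init__(self, in_channels: int = 1, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 64, 1, 1)
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.features(x).flatten(1))  # (B, embed_dim)


class HybridMultimodalNet(nn.Module):
    """CNN encoders + transformer over modality tokens + attention-driven fusion."""

    def __init__(self, n_modalities: int = 3, embed_dim: int = 128, n_classes: int = 3):
        super().__init__()
        # One CNN encoder per modality (e.g., MRI, PET, fMRI).
        self.encoders = nn.ModuleList(
            [ModalityCNN(embed_dim=embed_dim) for _ in range(n_modalities)]
        )
        # Transformer encoder captures global inter-modality dependencies.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Attention-driven fusion: a learned relevance score per modality token.
        self.attn_score = nn.Linear(embed_dim, 1)
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, modalities: list) -> torch.Tensor:
        # Stack one token per modality: (B, n_modalities, embed_dim).
        tokens = torch.stack(
            [enc(x) for enc, x in zip(self.encoders, modalities)], dim=1
        )
        tokens = self.transformer(tokens)
        weights = torch.softmax(self.attn_score(tokens), dim=1)  # (B, n_mod, 1)
        fused = (weights * tokens).sum(dim=1)  # attention-weighted fusion
        return self.classifier(fused)  # logits for CN / MCI / AD


if __name__ == "__main__":
    model = HybridMultimodalNet()
    # Dummy batch: three modalities as single-channel 64x64 slices.
    batch = [torch.randn(4, 1, 64, 64) for _ in range(3)]
    print(model(batch).shape)  # torch.Size([4, 3])

In this sketch the per-modality softmax weights also serve as a simple modality-contribution readout, which is one plausible way the abstract’s attention heatmaps and contribution analysis could be realized.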