An Intelligent AI-Powered Framework for Detecting Fraudulent Transactions in Online Banking Systems
1st Md Shabrin
Dept. of Computer Applications, Aditya University, Surampalem, India
shabrinmhd@gmail.com
2nd Gadi Manikanta Sai Ram
Dept. of Computer Applications, Aditya University, Surampalem, India
gsairam.gadi@gmail.com
3rd Nulakani Mohith Akhi
Dept. of Computer Applications, Aditya University, Surampalem, India
mohithnulakani@gmail.com
4th P. Ramesh
Dept. of Computer Applications, Aditya University, Surampalem, India
potupureddiramesh9581@gmail.com
Abstract—Artificial Intelligence (AI) has significantly transformed healthcare by enabling automated diagnosis, predictive analytics, and personalized treatment planning. However, most state-of-the-art AI models, particularly deep learning systems, operate as black boxes, limiting their adoption in critical medical applications where transparency and trust are essential. Explainable Artificial Intelligence (XAI) addresses this challenge by providing interpretable insights into model decisions, allowing clinicians to understand, validate, and trust AI-driven outcomes. This paper presents a comprehensive study of XAI techniques applied to healthcare diagnosis, including model-specific and model-agnostic approaches such as Grad-CAM, LIME, and SHAP. A hybrid deep learning framework integrated with explainability modules is proposed for medical image-based diagnosis. The performance of the system is evaluated using standard metrics, along with qualitative interpretability analysis. The results demonstrate that incorporating explainability not only enhances model transparency but also improves clinical reliability and decision-making. The study highlights the importance of XAI in bridging the gap between advanced AI systems and real-world healthcare applications.
Keywords: Explainable AI, Healthcare Diagnosis, Deep Learning, Grad-CAM, LIME, SHAP, Medical Imaging, Interpretability