Challenges in Standardizing Explainable AI Metrics for Ambiguous Learning Models in Cloud Infrastructure
Anant Manish Singh
anantsingh1302@gmail.com
Department of Computer Engineering
Thakur College of Engineering and Technology (TCET), Mumbai, Maharashtra, India
Krishna Jitendra Jaiswal
krishnajaiswal2512@gmail.com
Department of Computer Engineering
Thakur College of Engineering and Technology (TCET), Mumbai, Maharashtra, India
Devesh Amlesh Rai
deveshrai162@gmail.com
Department of Computer Engineering
Thakur College of Engineering and Technology (TCET), Mumbai, Maharashtra, India
Arya Brijesh Tiwari
aryabbrijeshtiwari@gmail.com
Department of Computer Engineering
Thakur College of Engineering and Technology (TCET), Mumbai, Maharashtra, India
Shifa Siraj Khan
shifakhan.work@gmail.com
Department of Information Technology
Thakur College of Engineering and Technology (TCET), Mumbai, Maharashtra, India
Sanika Satish Lad
ladsanika01@gmail.com
Department of Computer Engineering
Thakur College of Engineering and Technology (TCET), Mumbai, Maharashtra, India
Sanika Rajan Shete
sanika.shetee@gmail.com
Department of Computer Engineering
Thakur College of Engineering and Technology (TCET), Mumbai, Maharashtra, India
Disha Satyan Dahanukar
dishadahanukar@gmail.com
Department of Computer Engineering
Thakur College of Engineering and Technology (TCET), Mumbai, Maharashtra, India
Darshit Sandeep Raut
darshitraut@gmail.com
Department of Electronics and Telecommunication Engineering
Thakur College of Engineering and Technology (TCET), Mumbai, Maharashtra, India
Kaif Qureshi
kaif0829@gmail.com
Department of Computer Engineering
Thakur College of Engineering and Technology (TCET), Mumbai, Maharashtra, India
Abstract: As AI systems increasingly influence high-stakes decision-making, the need for transparent and interpretable models has become paramount. Despite significant advances in Explainable Artificial Intelligence (XAI) techniques, however, there is still no standardized evaluation framework for assessing explanation quality, particularly for ambiguous learning models deployed in distributed cloud environments. This paper examines the critical challenges in developing standardized XAI evaluation metrics for cloud-based infrastructures. We identify the key barriers to standardization, including the multidimensional nature of explanations, stakeholder diversity, operational constraints in cloud settings, and the inherent complexity of modern AI systems. Through a systematic analysis of current evaluation approaches, we propose a novel, comprehensive evaluation framework built on four core metrics: Fidelity Index (measuring truthfulness to the underlying model), Complexity Quotient (assessing cognitive accessibility), Operational Efficiency (quantifying resource requirements), and Stakeholder Alignment (evaluating relevance to different users). We validate the framework through empirical testing on classification and natural language processing tasks across three cloud platforms, demonstrating significant improvements over existing approaches in explanation consistency (27.8%), stakeholder satisfaction (34.2%), and cross-platform standardization (41.3%). Our findings show that effective standardization requires balancing technical rigor with contextual adaptability, addressing both objective computational measures and subjective human-centered assessments. This research contributes to the evolving discourse on XAI accountability by establishing a foundation for consistent, comparable evaluation standards that accommodate the complexity of modern AI systems while meeting the varying needs of developers, users, regulators, and cloud infrastructure providers.
Keywords: Explainable AI, Standardization, Evaluation Metrics, Cloud Infrastructure, Ambiguous Learning Models, XAI Frameworks, Stakeholder Requirements, Regulatory Compliance
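The four metrics named in the abstract (Fidelity Index, Complexity Quotient, Operational Efficiency, and Stakeholder Alignment) are defined formally later in the paper; the sketch below is only a minimal illustration of how such a composite evaluation could be wired together in practice. The formulas, weights, and the fidelity proxy used here are assumptions made for illustration, not the framework's actual definitions.

    # Illustrative sketch only: the formulas and weights below are assumptions,
    # not the paper's definitions of the four metrics.
    from dataclasses import dataclass

    import numpy as np


    @dataclass
    class XAIEvaluation:
        fidelity_index: float          # agreement between explanation surrogate and model
        complexity_quotient: float     # higher = cognitively harder to follow
        operational_efficiency: float  # normalized cost of producing the explanation
        stakeholder_alignment: float   # mean relevance rating across stakeholder groups

        def composite_score(self, weights=(0.4, 0.2, 0.2, 0.2)) -> float:
            """Weighted aggregate; the weights are placeholders, not the paper's values."""
            parts = (self.fidelity_index,
                     1.0 - self.complexity_quotient,  # invert so higher is better
                     self.operational_efficiency,
                     self.stakeholder_alignment)
            return float(np.dot(weights, parts))


    def fidelity_proxy(model_preds: np.ndarray, surrogate_preds: np.ndarray) -> float:
        """One common proxy for fidelity: the fraction of inputs where the
        explanation's surrogate reproduces the model's prediction (an assumed
        stand-in, not the Fidelity Index defined later in this paper)."""
        return float(np.mean(model_preds == surrogate_preds))

Such a weighted-aggregate structure is only one possible design; the paper's framework may combine or normalize the metrics differently, and the per-metric computations depend on the deployment platform and stakeholder group being evaluated.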