Explainable Deep Reinforcement Learning for Efficient Security and Stability of Smart Grid
Kailash Pati Dutta1*
1Associate Professor, Department of Computer Science Engineering and Information Technology, Jharkhand Rai University Ranchi, Jharkhand-834010, India
*Corresponding author e-mail: kpdutta.ece@yahoo.com
Abstract: The modernization of electric power systems into intelligent cyber–physical infrastructures has intensified the need for adaptive control mechanisms capable of ensuring both operational stability and cybersecurity. Conventional machine learning approaches, although effective in prediction, lack transparency and dynamic adaptability under uncertain grid conditions. This paper proposes an Explainable Deep Reinforcement Learning framework for enhancing security and stability in smart grids by integrating interpretable policy learning with dynamic state optimization. The framework combines deep Q-learning with attention-driven explanation modules to provide actionable insights into decision policies governing voltage regulation, load balancing, and anomaly mitigation. A hybrid IoT-enabled smart grid environment is simulated to evaluate performance against established machine learning baselines. Analytical evaluation demonstrates improved stability margins, faster convergence, and enhanced anomaly detection accuracy with interpretable decision mapping. Comparative analysis reveals that the proposed approach outperforms prior AI-based grid management models in resilience and operational transparency. The integration of explainability addresses regulatory compliance and trust requirements, making the framework suitable for deployment in critical infrastructures. The findings establish that explainable reinforcement learning offers a viable pathway toward secure, self-adaptive, and stable smart grid ecosystems.
Keywords: anomaly detection, deep reinforcement learning, explainable artificial intelligence, smart grid security, stability analysis, voltage regulation
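The mechanism summarized in the abstract, a Q-network whose attention weights over state features also serve as a per-decision explanation, can be sketched minimally as follows. This is an illustrative toy implementation, not the paper's actual model: the class name `AttentiveQNetwork`, the feature names, the single-vector attention parameterization, and all dimensions are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

class AttentiveQNetwork:
    """Toy Q-network with softmax attention over state features.

    The attention vector doubles as a per-decision explanation:
    each weight indicates how strongly a grid feature influenced
    the chosen action. (Hypothetical design for illustration.)
    """

    def __init__(self, n_features, n_actions, hidden=16):
        self.Wa = rng.normal(0, 0.1, size=(n_features,))        # attention scores
        self.W1 = rng.normal(0, 0.1, size=(n_features, hidden))  # hidden layer
        self.W2 = rng.normal(0, 0.1, size=(hidden, n_actions))   # Q-value head

    def forward(self, state):
        scores = state * self.Wa
        attn = np.exp(scores - scores.max())
        attn /= attn.sum()              # softmax attention over features
        weighted = state * attn         # feature-wise gating of the state
        h = np.tanh(weighted @ self.W1)
        q = h @ self.W2
        return q, attn                  # Q-values plus explanation weights

# Usage: select an action for a 4-feature grid state; the feature names
# (voltage deviation, load imbalance, anomaly score, frequency deviation)
# are illustrative placeholders, not the paper's state definition.
state = np.array([0.9, 0.2, 0.05, 0.1])
q_values, explanation = AttentiveQNetwork(n_features=4, n_actions=3).forward(state)
action = int(np.argmax(q_values))
```

In a full deep Q-learning loop, `q_values` would feed the standard temporal-difference update, while `explanation` would be logged alongside each control action to give operators an interpretable record of which grid signals drove the decision.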