Secure Interpretable Deep Convolutional Network (SIDCN) for Malware Detection
S. Akash, Dr. N. Mahendiran (M.Sc., M.Phil., Ph.D.)
Department of Computer Science
Sri Ramakrishna College of Arts and Science
Coimbatore, India
{23106004, mahendiran}@srcas.ac.in
Abstract
Machine learning (ML) and deep learning (DL) approaches have become integral to modern malware detection systems because of their ability to analyze large volumes of complex data. However, while many models achieve strong detection accuracy in laboratory settings, they exhibit significant limitations when deployed in operational, security-critical environments, including a lack of interpretability, susceptibility to adversarial evasion, high false-positive rates, and performance degradation over time. This paper proposes a Secure Interpretable Deep Convolutional Network (SIDCN) that incorporates interpretability directly into the learning process. In contrast to conventional black-box models and post-hoc explanation methods, SIDCN jointly optimizes malware detection accuracy and the stability of its explanatory outputs. The proposed approach employs explanation-consistency regularization, which yields stable, robust explanations under adversarial perturbations. In addition, instability in the explanatory outputs is used as an auxiliary signal to flag abnormal or evasive behavior. Results from experimental analysis and real-world attack case studies demonstrate that SIDCN offers improved trustworthiness, robustness, and operational effectiveness over conventional ML/DL-based malware detection systems, making it applicable in real-time security scenarios.
Keywords
Malware Detection, Interpretable Deep Learning, Cybersecurity, Adversarial Attacks, Explainable AI
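The explanation-consistency regularization summarized in the abstract can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's actual architecture: a toy logistic scorer stands in for the convolutional network, an input-gradient saliency map stands in for the model's explanation, and `lam` is a hypothetical regularization weight. The idea shown is that the training loss combines a standard classification loss with a penalty on how much the saliency map changes under a small input perturbation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def saliency(w, x):
    # Input-gradient explanation for a logistic scorer p = sigmoid(w . x):
    # d p / d x = p * (1 - p) * w
    p = sigmoid(w @ x)
    return p * (1.0 - p) * w

def combined_loss(w, x, y, delta, lam=1.0):
    """Binary cross-entropy plus an explanation-consistency penalty.

    The penalty is the squared difference between the saliency map on the
    clean input x and on the perturbed input x + delta, so explanations
    that shift under perturbation increase the training loss.
    """
    p = sigmoid(w @ x)
    bce = -(y * np.log(p) + (1 - y) * np.log(1.0 - p))
    consistency = np.sum((saliency(w, x) - saliency(w, x + delta)) ** 2)
    return bce + lam * consistency
```

With a zero perturbation the penalty vanishes and the loss reduces to plain cross-entropy; a nonzero perturbation that shifts the saliency map strictly increases the loss, which is the property the regularizer exploits. The same consistency term, evaluated at inference time, is what the abstract describes as an auxiliary signal for flagging evasive inputs.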