- Create Date 02/04/2025
- Last Updated 02/04/2025
Energy-Aware Optimization of Neural Networks for Sustainable AI
Abhinav Dileep1, Abhinav P V2, Nivitha Vijesh3, Vinayak K M4, Vanimol Sajan5
1,2,3,4,5 Dept. of CSE (Artificial Intelligence and Data Science), Vimal Jyothi Engineering College, Chemperi, Kannur, Kerala
---------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - As artificial intelligence (AI) models are deployed at ever larger scale, energy-efficient neural networks have become essential for sustainable AI development. Traditional deep learning models require significant computational resources, leading to high energy consumption and a large carbon footprint. This study explores energy-aware optimization techniques that improve the efficiency of neural networks while maintaining high accuracy. We employ a multi-faceted approach combining pruning, quantization, and knowledge distillation to minimize resource usage and improve sustainability. A monitoring framework tracks energy and memory consumption using NVIDIA-SMI, PyTorch Profiler, and psutil for GPU, CPU, and RAM analysis. Pruning removes redundant network parameters, reducing computational overhead while preserving model performance. Quantization compresses model weights by converting high-precision values to lower-bit representations, improving inference speed and energy efficiency. Knowledge distillation transfers knowledge from a larger teacher model to a compact student network, maintaining accuracy at reduced complexity. Statistical analysis evaluates the trade-offs between energy consumption and model accuracy, and an efficiency score metric (Efficiency = Accuracy / Energy Consumed) is introduced to quantify improvements. Scatter and line plots visualize the energy-accuracy relationship across the different optimization strategies. Experiments are conducted on diverse datasets, including CIFAR-10, Fashion-MNIST, MedMNIST, and a Brain Tumor MRI dataset. Lightweight CNN architectures, including ResNet-18, ResNet-50, MobileNet, and EfficientNet, are assessed on GPUs of varying capacity (4 GB, 8 GB, 16 GB, and 32 GB).
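As an illustrative sketch of the efficiency metric and the low-bit quantization step described above (hypothetical helper names, not the paper's actual code), the score and a symmetric int8 weight mapping can be written as:

```python
def efficiency_score(accuracy: float, energy_joules: float) -> float:
    """Efficiency = Accuracy / Energy Consumed (higher is better)."""
    if energy_joules <= 0:
        raise ValueError("energy consumed must be positive")
    return accuracy / energy_joules


def quantize_int8(weights):
    """Symmetric int8 quantization: map floats to [-127, 127] with a single scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    return [round(w / scale) for w in weights], scale


# A model reaching 92% accuracy while consuming 46 J scores 92 / 46 = 2.0
print(efficiency_score(92.0, 46.0))
```

Dividing accuracy by measured energy lets configurations with very different hardware budgets be compared on a single axis, which is how the abstract's scatter and line plots can rank optimization strategies.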
The study systematically evaluates hyperparameters, exploring batch sizes (32, 64, 128), learning rates (0.001, 0.01, 0.1), and training epochs (10, 20, 30) to balance accuracy against energy consumption. The results demonstrate that energy-aware optimization techniques significantly improve computational efficiency without compromising model performance. This work contributes to the growing field of sustainable AI by bridging the gap between high-performance deep learning and energy-efficient computation, paving the way for more eco-friendly AI applications in real-world scenarios.
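The hyperparameter sweep described above amounts to a 3 × 3 × 3 grid of 27 configurations; a minimal sketch (the `train_and_measure` routine is a hypothetical placeholder, not from the paper) enumerates it as:

```python
from itertools import product

batch_sizes = [32, 64, 128]
learning_rates = [0.001, 0.01, 0.1]
epoch_counts = [10, 20, 30]

# Cartesian product: 3 * 3 * 3 = 27 configurations to train and measure
grid = list(product(batch_sizes, learning_rates, epoch_counts))

for batch, lr, epochs in grid:
    # accuracy, energy = train_and_measure(batch, lr, epochs)  # placeholder
    pass

print(len(grid))  # 27
```

Enumerating the full grid keeps the accuracy-energy trade-off comparable across every setting, at the cost of training 27 runs per model-dataset pair.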
Key Words: Artificial intelligence, Sustainable AI, Deep learning