DIAA: Distillation with Illumination-aware Adaptive Attention for Low-Light Image Enhancement
Keywords:
Low-light, Image Enhancement, Knowledge Distillation, Deep Learning, Adaptive Exposure Correction
Abstract
Low-light image enhancement plays a vital role in improving images captured under the poor illumination conditions encountered in photography, surveillance, and autonomous systems. Such images often exhibit low contrast, amplified noise, and color distortion, all of which degrade downstream tasks. In this paper, we transfer knowledge from a high-performing teacher network to a smaller student network designed for low-light image enhancement. The student is a hybrid deep-learning architecture that integrates adaptive exposure correction, trainable gamma modulation, multi-scale feature extraction, spatial and frequency attention, noise-aware residual blocks, and transformer-CNN fusion. A key innovation of our approach is a hybrid loss function that balances pixel-level accuracy, perceptual quality, structural consistency, and illumination stability; it combines MSE, L1, SSIM, VGG19-based perceptual loss, illumination smoothness, color constancy, and exposure-control terms. Furthermore, when a teacher model is available, a temperature-scaled knowledge distillation loss transfers soft supervision to the student network. This multi-term objective enables the model to restore natural brightness, preserve details, maintain color balance, and suppress noise. Comprehensive experiments on benchmark datasets demonstrate that our method outperforms state-of-the-art techniques in PSNR, SSIM, and perceptual quality.
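The multi-term objective described above can be sketched in miniature. The snippet below is an illustrative assumption, not the paper's implementation: it uses the classic Hinton-style temperature-scaled soft-target formulation for the distillation term, and combines only two of the pixel-level terms (MSE and L1) with hypothetical weights; the full objective also includes SSIM, VGG19 perceptual, illumination-smoothness, color-constancy, and exposure-control terms.

```python
import math

def _softmax(logits, T):
    # Temperature-scaled softmax; a larger T produces a softer distribution.
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Temperature-scaled knowledge distillation loss (assumed Hinton-style).

    Computes KL(teacher || student) on temperature-softened outputs and
    scales by T^2 so gradient magnitudes stay comparable across temperatures.
    """
    p_t = _softmax(teacher_logits, T)
    p_s = _softmax(student_logits, T)
    kl = sum(pt * (math.log(pt + 1e-12) - math.log(ps + 1e-12))
             for pt, ps in zip(p_t, p_s))
    return kl * T ** 2

def hybrid_pixel_loss(pred, target, w_mse=1.0, w_l1=0.5):
    # Illustrative weighted sum of two pixel-level terms; the weights
    # w_mse and w_l1 are placeholder values, not the paper's settings.
    n = len(pred)
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / n
    l1 = sum(abs(p - t) for p, t in zip(pred, target)) / n
    return w_mse * mse + w_l1 * l1
```

When student and teacher outputs agree, the distillation term vanishes; as they diverge, the KL term grows, pulling the student toward the teacher's softened output distribution.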
License
Copyright (c) 2025 International Journal of Computers and Informatics (Zagazig University)

This work is licensed under a Creative Commons Attribution 4.0 International License.