Open Access
ARTICLE
You KAN See through the Sand in the Dark: Uncertainty-Aware Meets KAN in Joint Low-Light Image Enhancement and Sand-Dust Removal
1 School of Computer Science, Wuhan University, Wuhan, 430072, China
2 Intelligent Transport Systems Research Center, Wuhan University of Technology, Wuhan, 430062, China
3 GNSS Research Center, Wuhan University, Wuhan, 430072, China
* Corresponding Author: Hui Liu. Email:
(This article belongs to the Special Issue: Computer Vision and Image Processing: Feature Selection, Image Enhancement and Recognition)
Computers, Materials & Continua 2025, 84(3), 5095-5109. https://doi.org/10.32604/cmc.2025.065812
Received 21 March 2025; Accepted 11 June 2025; Issue published 30 July 2025
Abstract
Within the domain of low-level vision, enhancing low-light images and removing sand-dust from single images are both critical tasks. These challenges are particularly pronounced in real-world applications such as autonomous driving, surveillance systems, and remote sensing, where adverse lighting and environmental conditions often degrade image quality. Various neural network models, including MLPs, CNNs, GANs, and Transformers, have been proposed to tackle these challenges, with Vision KAN models showing particular promise. However, existing models, including the Vision KAN models, are deterministic neural networks that do not address the uncertainties inherent in these processes. To overcome this, we introduce the Uncertainty-Aware Kolmogorov-Arnold Network (UAKAN), a novel structure that integrates KAN with uncertainty estimation. Our approach uniquely employs Tokenized KANs for sampling within the encoder and decoder layers of a U-Net architecture, enhancing the network's ability to learn complex representations. Furthermore, for aleatoric uncertainty, we propose an uncertainty coupling certainty module that couples uncertainty distribution learning and residual learning in a feature-fusion manner. For epistemic uncertainty, we propose a feature selection mechanism for uncertainty modeling in the spatial and pixel dimensions, which models the uncertainty contained between feature maps. Notably, our uncertainty-aware framework enables the model to produce both high-quality enhanced images and reliable uncertainty maps, which are crucial for downstream applications requiring confidence estimation. Through comparative and ablation studies on our synthetic SLLIE6K dataset, designed for low-light enhancement and sand-dust removal, we validate the effectiveness and theoretical robustness of our methodology.
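To make the abstract's two main ingredients concrete, the following is a minimal PyTorch sketch, not the authors' implementation: SimpleKANLayer, UncertaintyHead, and heteroscedastic_nll are illustrative names, the radial-basis grid stands in for the spline bases of a Tokenized KAN layer, and only the aleatoric (per-pixel log-variance) branch of the uncertainty estimation is shown.

```python
# Hedged sketch of a KAN-style layer and an aleatoric uncertainty head.
import torch
import torch.nn as nn

class SimpleKANLayer(nn.Module):
    """Illustrative KAN-style layer: each input feature is passed through a
    learnable 1-D function built from a fixed radial-basis grid (a common
    simplification of B-spline KAN layers), then features are mixed linearly."""
    def __init__(self, in_dim, out_dim, num_basis=8, grid_min=-2.0, grid_max=2.0):
        super().__init__()
        self.register_buffer("centers", torch.linspace(grid_min, grid_max, num_basis))
        self.log_width = nn.Parameter(torch.zeros(1))
        # One coefficient per (input feature, basis function), mixed down to out_dim.
        self.coeffs = nn.Parameter(torch.randn(in_dim * num_basis, out_dim) * 0.01)
        self.base = nn.Linear(in_dim, out_dim)  # residual linear path

    def forward(self, x):                        # x: (B, in_dim)
        width = self.log_width.exp()
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) / width) ** 2)
        phi = phi.flatten(start_dim=1)           # (B, in_dim * num_basis)
        return self.base(x) + phi @ self.coeffs

class UncertaintyHead(nn.Module):
    """Predicts the enhanced image plus a per-pixel log-variance map
    (aleatoric uncertainty), trained with a heteroscedastic Gaussian NLL."""
    def __init__(self, channels):
        super().__init__()
        self.mean = nn.Conv2d(channels, 3, kernel_size=3, padding=1)
        self.log_var = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, feats):
        return self.mean(feats), self.log_var(feats)

def heteroscedastic_nll(pred_mean, pred_log_var, target):
    # 0.5 * [ (y - mu)^2 / sigma^2 + log sigma^2 ], averaged over pixels.
    return 0.5 * ((target - pred_mean) ** 2 * torch.exp(-pred_log_var)
                  + pred_log_var).mean()
```

Under this sketch, high predicted variance down-weights the reconstruction error at unreliable pixels, which is one standard way an uncertainty map and an enhanced image can be produced jointly; the paper's uncertainty coupling certainty module and epistemic feature-selection mechanism are more elaborate than what is shown here.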
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

