Open Access
ARTICLE
DeblurTomo: Self-Supervised Computed Tomography Reconstruction from Blurry Images
1 College of Computer Science and Technology, National University of Defense Technology, Changsha, 410073, China
2 921 Hospital of Joint Logistics Support Force People’s Liberation Army of China, Changsha, 410073, China
3 School of Design, Hunan University, Changsha, 410082, China
* Corresponding Author: Yunfan Ye. Email:
Computers, Materials & Continua 2025, 84(2), 2411-2427. https://doi.org/10.32604/cmc.2025.066810
Received 17 April 2025; Accepted 23 May 2025; Issue published 03 July 2025
Abstract
Computed Tomography (CT) reconstruction is essential in medical imaging and other engineering fields. However, blurring of the projections during CT imaging can introduce artifacts into the reconstructed images. Projection blur arises from a combination of factors, such as extended ray sources, scattering, and imaging-system vibration. To address this problem, we propose DeblurTomo, a novel self-supervised learning-based deblurring and reconstruction algorithm that efficiently reconstructs sharp CT images from blurry input without requiring external data or blur measurements. Specifically, we construct a coordinate-based implicit neural representation reconstruction network that maps coordinates to attenuation coefficients in the reconstructed space, enabling a more convenient ray representation. We then model the blur as a weighted sum of offset rays and design a Ray Correction Network (RCN) and a Weight Proposal Network (WPN) to fit these rays and their weights using multi-view consistency and geometric information, thereby extending 2D deblurring to 3D space. During training, we use the blurry input as the supervision signal to optimize the reconstruction network, the RCN, and the WPN simultaneously. Extensive experiments on a widely used synthetic dataset show that DeblurTomo performs superiorly on limited-angle and sparse-view reconstruction under simulated blur. Further experiments on real datasets demonstrate the superiority of our method in practical scenarios.
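As a rough illustration of the pipeline described above, the PyTorch sketch below shows a coordinate-based attenuation field, stand-ins for the RCN and WPN, and a blurry detector value rendered as a weighted sum of line integrals over offset rays, trained self-supervised against the blurry measurement itself. All module names, network sizes, and hyperparameters here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class AttenuationField(nn.Module):
    """Coordinate-based implicit representation: 3D point -> attenuation coefficient."""
    def __init__(self, hidden=256, depth=4):
        super().__init__()
        layers, d = [], 3
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers += [nn.Linear(hidden, 1), nn.Softplus()]  # attenuation is non-negative
        self.net = nn.Sequential(*layers)

    def forward(self, pts):               # pts: (..., 3)
        return self.net(pts).squeeze(-1)  # (...)

class RayCorrection(nn.Module):
    """Predicts K small 3D direction offsets per ray (stand-in for the paper's RCN)."""
    def __init__(self, k=4, hidden=64):
        super().__init__()
        self.k = k
        self.net = nn.Sequential(nn.Linear(6, hidden), nn.ReLU(),
                                 nn.Linear(hidden, k * 3))

    def forward(self, ray):               # ray: (6,) = origin + direction
        return 0.01 * torch.tanh(self.net(ray)).view(self.k, 3)  # keep offsets small

class WeightProposal(nn.Module):
    """Predicts the K blur-kernel weights per ray (stand-in for the paper's WPN)."""
    def __init__(self, k=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(6, hidden), nn.ReLU(),
                                 nn.Linear(hidden, k))

    def forward(self, ray):
        return torch.softmax(self.net(ray), dim=-1)  # weights sum to 1

def render_blurry_pixel(field, rcn, wpn, origin, direction, n_samples=64):
    """Blurry value = weighted sum of line integrals along the offset rays."""
    t = torch.linspace(0.0, 1.0, n_samples).unsqueeze(-1)  # (n, 1) sample depths
    ray = torch.cat([origin, direction])
    offsets = rcn(ray)                                     # (K, 3)
    weights = wpn(ray)                                     # (K,)
    vals = []
    for k in range(offsets.shape[0]):
        pts = origin + t * (direction + offsets[k])        # (n, 3) points on offset ray
        vals.append(field(pts).mean())                     # crude line-integral estimate
    return (weights * torch.stack(vals)).sum()

# One self-supervised step: the blurry measurement itself is the training target.
field, rcn, wpn = AttenuationField(), RayCorrection(), WeightProposal()
opt = torch.optim.Adam([*field.parameters(), *rcn.parameters(), *wpn.parameters()],
                       lr=1e-4)
origin = torch.tensor([0.0, 0.0, -1.0])
direction = torch.tensor([0.0, 0.0, 1.0])
measured = torch.tensor(0.5)                               # one blurry pixel value
opt.zero_grad()
loss = (render_blurry_pixel(field, rcn, wpn, origin, direction) - measured) ** 2
loss.backward()
opt.step()
```

In a full reconstruction, this loss would be summed over all detector pixels and views, so multi-view consistency jointly constrains the field, the ray offsets, and their weights.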
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

