Open Access
ARTICLE
LR-Net: Lossless Feature Fusion and Revised SIoU for Small Object Detection
1 School of Artificial Intelligence, Chongqing University of Technology, Chongqing, 401135, China
2 School of Computer and Information Science, Chongqing Normal University, Chongqing, 401331, China
* Corresponding Author: Yang Zhang. Email:
# These authors contributed equally to this work
(This article belongs to the Special Issue: Advances in Object Detection: Methods and Applications)
Computers, Materials & Continua 2025, 85(2), 3267-3288. https://doi.org/10.32604/cmc.2025.067763
Received 12 May 2025; Accepted 10 July 2025; Issue published 23 September 2025
Abstract
Currently, challenges such as small object size and occlusion lead to a lack of accuracy and robustness in small object detection. Since small objects occupy only a few pixels in an image, the extracted features are limited, and mainstream downsampling convolution operations further exacerbate feature loss. Additionally, because small objects are prone to occlusion and highly sensitive to localization deviations, conventional Intersection over Union (IoU) loss functions struggle to converge stably. To address these limitations, LR-Net is proposed for small object detection. Specifically, the proposed Lossless Feature Fusion (LFF) method transfers spatial features into the channel domain while leveraging a hybrid attention mechanism to focus on critical features, mitigating the feature loss caused by downsampling. Furthermore, RSIoU is proposed to enhance the convergence performance of IoU-based losses for small objects. RSIoU corrects the inherent convergence-direction issues in SIoU and introduces a penalty term as the parameter of a Dynamic Focusing Mechanism, enabling it to dynamically emphasize the loss contribution of small object samples. Ultimately, RSIoU significantly improves the convergence of the loss function for small objects, particularly under occlusion. Experiments demonstrate that LR-Net achieves significant improvements across various metrics on multiple datasets compared with YOLOv8n, achieving a 3.7% increase in mean Average Precision (mAP) on the VisDrone2019 dataset, along with improvements of 3.3% on the AI-TOD dataset and 1.2% on the COCO dataset.
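The core idea behind LFF's "transfer spatial features into the channel domain" is a space-to-depth rearrangement (as in PyTorch's PixelUnshuffle): a 2x downsampling that moves every pixel into the channel dimension instead of discarding any. The sketch below illustrates only this rearrangement in plain Python; the hybrid attention step that LFF applies afterwards, and any learned fusion, are omitted, so this is an illustration of the general principle rather than the paper's exact module.

```python
def space_to_depth(x, block=2):
    """Rearrange spatial blocks into channels: (C, H, W) -> (C*block*block, H/block, W/block).

    Unlike strided convolution or pooling, no values are dropped: every input
    element reappears in the output, so the downsampling is lossless.
    x is a nested list of shape [C][H][W].
    """
    c, h, w = len(x), len(x[0]), len(x[0][0])
    assert h % block == 0 and w % block == 0, "spatial dims must divide the block size"
    out = []
    for dy in range(block):          # row offset inside each block
        for dx in range(block):      # column offset inside each block
            for ch in range(c):
                # collect the (dy, dx) element of every block into one new channel
                out.append([[x[ch][i * block + dy][j * block + dx]
                             for j in range(w // block)]
                            for i in range(h // block)])
    return out
```

For a 4x4 single-channel input, the result is a 4-channel 2x2 tensor that contains exactly the same sixteen values, which is why no feature information is lost before the subsequent attention-based fusion.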
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

