Open Access
ARTICLE
Hierarchical Shape Pruning for 3D Sparse Convolution Networks
1 School of Information Engineering, Liaodong University, Dandong, 118003, China
2 School of Computer Science and Technology, Anhui University, Hefei, 230601, China
3 Institute of Artificial Intelligence, Beihang University, Beijing, 100191, China
4 School of Aerospace Engineering, Xiamen University, Xiamen, 361005, China
* Corresponding Authors: Hai Chen; Gang Chen
Computers, Materials & Continua 2025, 84(2), 2975-2988. https://doi.org/10.32604/cmc.2025.065047
Received 02 March 2025; Accepted 09 May 2025; Issue published 03 July 2025
Abstract
3D sparse convolution has emerged as a pivotal technique for efficient voxel-based perception in autonomous systems, enabling selective feature extraction from non-empty voxels while suppressing computational waste. Despite its theoretical efficiency advantages, practical implementations face an under-explored limitation: the fixed geometric patterns of conventional sparse convolutional kernels inevitably process non-contributory positions during sliding-window operations, particularly in regions with uneven point cloud density. To address this, we propose Hierarchical Shape Pruning for 3D Sparse Convolution (HSP-S), which dynamically eliminates redundant kernel stripes through layer-adaptive thresholding. Unlike static soft pruning methods, HSP-S maintains trainable sparsity patterns by progressively adjusting pruning thresholds during optimization, enlarging the original parameter search space while removing redundant operations. Extensive experiments validate the effectiveness of HSP-S across major autonomous driving benchmarks. On KITTI's 3D object detection task, our method removes 93.47% of redundant kernel computations while maintaining comparable accuracy (a 1.56% mAP drop). Remarkably, on the more complex NuScenes benchmark, HSP-S achieves simultaneous computation reduction (21.94% sparsity) and accuracy gains (+1.02% mAP (mean Average Precision) and +0.47% NDS (nuScenes detection score)), demonstrating its scalability to diverse perception scenarios. This work establishes the first learnable shape pruning framework that simultaneously enhances computational efficiency and preserves detection accuracy in 3D perception systems.
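To make the pruning idea described above concrete, the following is a minimal, illustrative sketch of layer-adaptive shape pruning applied to a dense nn.Conv3d weight. It is not the authors' released implementation: it uses a plain dense 3D convolution rather than a sparse-convolution library, and all function names, the importance measure (per-position L2 norm), and the threshold schedule are assumptions chosen only to show how kernel positions ("stripes") below a per-layer threshold could be masked while the remaining weights stay trainable.

```python
# Illustrative sketch only (not the paper's code): soft shape pruning of a
# dense 3D convolution kernel with a layer-specific, progressively tightened
# threshold. Names and the keep-ratio schedule are assumptions.

import torch
import torch.nn as nn


def stripe_importance(weight: torch.Tensor) -> torch.Tensor:
    """L2 norm of each kernel position, aggregated over output/input channels.

    weight: (out_ch, in_ch, kD, kH, kW) -> returns one score per kernel position.
    """
    out_ch, in_ch, _, _, _ = weight.shape
    flat = weight.reshape(out_ch, in_ch, -1)      # (out, in, positions)
    return flat.pow(2).sum(dim=(0, 1)).sqrt()     # (kD*kH*kW,)


def shape_prune_(conv: nn.Conv3d, keep_ratio: float) -> torch.Tensor:
    """Zero out the least important kernel positions in place (soft pruning).

    keep_ratio in (0, 1]; lowering it over training mimics a progressively
    raised, layer-adaptive threshold. Returns the boolean keep-mask.
    """
    with torch.no_grad():
        scores = stripe_importance(conv.weight)
        k = max(1, int(keep_ratio * scores.numel()))
        threshold = scores.topk(k).values.min()    # layer-specific threshold
        keep = scores >= threshold
        mask = keep.reshape(1, 1, *conv.kernel_size).to(conv.weight.dtype)
        conv.weight.mul_(mask)                     # pruned weights remain trainable
    return keep


if __name__ == "__main__":
    conv = nn.Conv3d(16, 32, kernel_size=3, padding=1)
    # Toy schedule: tighten the keep ratio step by step during training.
    for step, keep_ratio in enumerate([1.0, 0.8, 0.6, 0.4]):
        keep = shape_prune_(conv, keep_ratio)
        print(f"step {step}: kept {int(keep.sum())}/{keep.numel()} kernel positions")
```

Because the mask is reapplied rather than baked into a reparameterization, previously pruned positions can recover if the schedule is relaxed, which is what distinguishes this soft, trainable sparsity pattern from a static hard prune.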
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.