Open Access
ARTICLE
CMACF-Net: Cross-Multiscale Adaptive Collaborative and Fusion Grasp Detection Network
1 School of Electrical and Information Engineering, Wuhan Institute of Technology, Wuhan, 430205, China
2 College of Information and Artificial Intelligence, Nanchang Institute of Science and Technology, Nanchang, 330108, China
* Corresponding Author: Runpu Nie. Email:
Computers, Materials & Continua 2025, 85(2), 2959-2984. https://doi.org/10.32604/cmc.2025.066740
Received 16 April 2025; Accepted 03 July 2025; Issue published 23 September 2025
Abstract
With the rapid development of robotics, grasp prediction has become fundamental to achieving intelligent physical interactions. To enhance grasp detection accuracy in unstructured environments, we propose a novel Cross-Multiscale Adaptive Collaborative and Fusion Grasp Detection Network (CMACF-Net). Addressing the limitations of conventional methods in capturing multi-scale spatial features, CMACF-Net introduces the Quantized Multi-scale Global Attention Module (QMGAM), which enables precise multi-scale spatial calibration and adaptive spatial-channel interaction, ultimately yielding a more robust and discriminative feature representation. To reduce the degradation of local features and the loss of high-frequency information, the Cross-scale Context Integration Module (CCI) is employed to facilitate the effective fusion and alignment of global context and local details. Furthermore, an Efficient Up-Convolution Block (EUCB) is integrated into a U-Net architecture to effectively restore spatial details lost during the downsampling process, while simultaneously preserving computational efficiency. Extensive evaluations demonstrate that CMACF-Net achieves state-of-the-art detection accuracies of 98.9% and 95.9% on the Cornell and Jacquard datasets, respectively. Additionally, real-time grasping experiments on the RM65-B robotic platform validate the framework's robustness and generalization capability, underscoring its applicability to real-world robotic manipulation scenarios.
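For readers unfamiliar with decoder blocks of this kind, the sketch below illustrates how a lightweight up-convolution block for a U-Net-style decoder might be written in PyTorch. The class name `EfficientUpConvBlock`, the bilinear upsampling, and the depthwise-separable layer choices are illustrative assumptions for exposition only, not the paper's actual EUCB implementation.

```python
import torch
import torch.nn as nn


class EfficientUpConvBlock(nn.Module):
    """Hypothetical sketch of an efficient up-convolution decoder block.

    Upsamples the feature map, then refines it with a depthwise-separable
    convolution to keep the parameter count low. The EUCB described in the
    paper may differ in its exact design.
    """

    def __init__(self, in_channels: int, out_channels: int, scale: int = 2):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=scale, mode="bilinear",
                                    align_corners=False)
        # Depthwise convolution refines spatial detail channel by channel.
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   padding=1, groups=in_channels, bias=False)
        # Pointwise convolution mixes channels and adjusts their number.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.upsample(x)
        x = self.depthwise(x)
        x = self.pointwise(x)
        return self.act(self.bn(x))


if __name__ == "__main__":
    # Example: double the spatial resolution of a 128-channel feature map.
    block = EfficientUpConvBlock(in_channels=128, out_channels=64)
    features = torch.randn(1, 128, 28, 28)
    print(block(features).shape)  # torch.Size([1, 64, 56, 56])
```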
Copyright © 2025 The Author(s). Published by Tech Science Press. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

