Vol.73, No.2, 2022, pp.2529-2540, doi:10.32604/cmc.2022.029542
OPEN ACCESS
ARTICLE
Multi-Level Feature Aggregation-Based Joint Keypoint Detection and Description
Jun Li1, Xiang Li1, Yifei Wei1,*, Mei Song1, Xiaojun Wang2
1 Beijing Key Laboratory of Work Safety Intelligent Monitoring, School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing, 100876, China
2 Dublin City University, Dublin, 9, Ireland
* Corresponding Author: Yifei Wei. Email:
Received 06 March 2022; Accepted 26 April 2022; Issue published 16 June 2022
Abstract
Image keypoint detection and description is a popular approach for establishing pixel-level correspondences between images, a basic and critical step in many computer vision tasks. Existing methods remain far from optimal in keypoint localization accuracy and in the generation of robust, discriminative descriptors. This paper proposes a new end-to-end, self-supervised deep learning network. The network uses a backbone feature encoder to extract multi-level feature maps, then performs joint keypoint detection and description in a single forward pass. On the one hand, to improve the localization accuracy of keypoints and preserve local shape structure, the detector operates on feature maps with the same resolution as the original image. On the other hand, to better perceive local shape details, the network aggregates multi-level features to generate robust descriptors rich in local shape information. A detailed comparison on HPatches with the traditional feature-based methods Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF), as well as with deep learning methods, demonstrates the effectiveness and robustness of the method proposed in this paper.
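The core idea described above (upsample multi-level backbone features to full image resolution, concatenate them into per-pixel descriptors, and detect keypoints on a full-resolution score map) can be sketched as follows. This is a simplified NumPy illustration under our own assumptions, not the authors' implementation; all function names here are hypothetical.

```python
import numpy as np

def upsample_nearest(feat, scale):
    # feat: (C, H, W) -> (C, H*scale, W*scale) by nearest-neighbor repetition,
    # standing in for the learned upsampling a real network would use
    return feat.repeat(scale, axis=1).repeat(scale, axis=2)

def aggregate_descriptors(pyramid):
    # pyramid: list of (C_i, H/2^i, W/2^i) feature maps from a backbone encoder,
    # pyramid[0] being at full image resolution
    full_h = pyramid[0].shape[1]
    upsampled = []
    for feat in pyramid:
        scale = full_h // feat.shape[1]
        upsampled.append(upsample_nearest(feat, scale))
    # concatenate along channels: one multi-level descriptor per pixel
    desc = np.concatenate(upsampled, axis=0)           # (sum C_i, H, W)
    # L2-normalize each pixel's descriptor for matching
    desc = desc / (np.linalg.norm(desc, axis=0, keepdims=True) + 1e-8)
    return desc

def detect_keypoints(score, k=5):
    # score: (H, W) keypoint score map at the ORIGINAL image resolution,
    # so keypoint positions are not quantized by downsampling;
    # keep the k highest-scoring pixels as keypoints
    flat = np.argsort(score.ravel())[::-1][:k]
    return np.stack(np.unravel_index(flat, score.shape), axis=1)  # (k, 2) as (y, x)
```

For example, a three-level pyramid with 8, 16, and 32 channels over a 16x16 image yields a (56, 16, 16) descriptor tensor, and `detect_keypoints` returns pixel-accurate coordinates because the score map was never downsampled.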
Keywords
Multi-scale information; keypoint detection and description; artificial intelligence
Cite This Article
J. Li, X. Li, Y. Wei, M. Song and X. Wang, "Multi-level feature aggregation-based joint keypoint detection and description," Computers, Materials & Continua, vol. 73, no. 2, pp. 2529–2540, 2022.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.