Open Access


Multi-Level Feature Aggregation-Based Joint Keypoint Detection and Description

Jun Li1, Xiang Li1, Yifei Wei1,*, Mei Song1, Xiaojun Wang2

1 Beijing Key Laboratory of Work Safety Intelligent Monitoring, School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing, 100876, China
2 Dublin City University, Dublin, 9, Ireland

* Corresponding Author: Yifei Wei.

Computers, Materials & Continua 2022, 73(2), 2529-2540.


Image keypoint detection and description is a widely used technique for establishing pixel-level correspondences between images, a basic and critical step in many computer vision tasks. Existing methods fall short in both keypoint localization accuracy and the generation of robust, discriminative descriptors. This paper proposes a new end-to-end, self-supervised deep learning network. The network uses a backbone feature encoder to extract multi-level feature maps, then performs joint keypoint detection and description in a single forward pass. On the one hand, to improve the localization accuracy of keypoints and preserve local shape structure, the detector operates on feature maps at the same resolution as the original image. On the other hand, to strengthen the network's ability to perceive local shape details, it aggregates multi-level features to generate robust descriptors rich in local shape information. A detailed comparison on HPatches against the traditional feature-based methods Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF), as well as deep learning methods, demonstrates the effectiveness and robustness of the proposed method.
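The aggregation-then-detect-and-describe flow summarized above can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's learned network: nearest-neighbor upsampling stands in for the decoder, a channel-wise max response stands in for the learned detector head, and all function names (`upsample_nearest`, `aggregate_multilevel`, `detect_and_describe`) are hypothetical.

```python
import numpy as np

def upsample_nearest(feat, out_h, out_w):
    # feat: (C, H, W) -> nearest-neighbor upsample to (C, out_h, out_w).
    # Stand-in for the learned upsampling used to reach input resolution.
    _, h, w = feat.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return feat[:, rows][:, :, cols]

def aggregate_multilevel(features, out_h, out_w):
    # Bring every feature level to full resolution and concatenate along
    # channels, so each pixel carries both coarse and fine-level context.
    ups = [upsample_nearest(f, out_h, out_w) for f in features]
    return np.concatenate(ups, axis=0)

def detect_and_describe(agg, k=4):
    # Detection score: max channel response per pixel (a simple proxy for
    # the learned detector head operating at original image resolution).
    score = agg.max(axis=0)
    # Keypoints: the k highest-scoring pixels.
    idx = np.argsort(score.ravel())[-k:][::-1]
    ys, xs = np.unravel_index(idx, score.shape)
    # Descriptors: L2-normalized multi-level feature vectors at those pixels.
    desc = agg[:, ys, xs].T
    desc = desc / (np.linalg.norm(desc, axis=1, keepdims=True) + 1e-8)
    return list(zip(ys, xs)), desc
```

Because the feature levels are concatenated rather than summed, each descriptor mixes fine-grained detail from shallow levels with semantic context from deep levels, which is the intuition behind the multi-level aggregation described in the abstract.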


Cite This Article

J. Li, X. Li, Y. Wei, M. Song and X. Wang, "Multi-level feature aggregation-based joint keypoint detection and description," Computers, Materials & Continua, vol. 73, no. 2, pp. 2529–2540, 2022.

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.