Open Access

ARTICLE

CerfeVPR: Cross-Environment Robust Feature Enhancement for Visual Place Recognition

Lingyun Xiang1, Hang Fu1, Chunfang Yang2,*

1 School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China
2 Key Laboratory of Cyberspace Situation Awareness of Henan Province, Information Engineering University, Zhengzhou, 450001, China

* Corresponding Author: Chunfang Yang

Computers, Materials & Continua 2025, 84(1), 325-345. https://doi.org/10.32604/cmc.2025.062834

Abstract

In the Visual Place Recognition (VPR) task, existing research has leveraged large-scale pre-trained models to improve place recognition performance. However, when there are significant environmental differences between query and reference images, a large number of ineffective local features interfere with the extraction of key landmark features, leading to the retrieval of visually similar but geographically different images. To address this perceptual aliasing problem caused by changing environmental conditions, we propose a novel visual place recognition method with Cross-Environment Robust Feature Enhancement (CerfeVPR). The method uses a generative adversarial network (GAN) to generate counterparts of the original images under different environmental conditions, thereby enhancing the learning of robust features from the originals. This enables the global descriptor to effectively ignore appearance changes caused by environmental factors such as season and lighting, yielding better place recognition accuracy than competing methods. Meanwhile, we introduce a large-kernel convolution adapter to fine-tune the pre-trained model, obtaining a better image feature representation for subsequent robust feature learning. A 3-layer pyramid scene parsing network then processes the information of different local regions in the general features and fuses it with a token that retains global information, constructing a multi-dimensional image feature representation. On this basis, we use the fused features of the generated images to drive robust feature learning for the original images and perform feature matching between query and retrieved images. Experiments on multiple commonly used datasets show that our method performs excellently: on average, CerfeVPR achieves the highest results, with all Recall@N values exceeding 90%. In particular, on the highly challenging Nordland dataset, the R@1 metric improves by 4.6%, significantly outperforming other methods and verifying the superiority of CerfeVPR for visual place recognition in complex environments.
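The multi-scale fusion step sketched in the abstract (pooling local regions at several grid sizes, then concatenating with a global token) can be illustrated roughly as below. This is a minimal NumPy sketch under stated assumptions: the grid levels (1, 2, 3), the channel count, and the `fuse_descriptor` helper are illustrative choices, not the authors' implementation.

```python
import numpy as np

def pyramid_pool(feat, levels=(1, 2, 3)):
    """Average-pool a C x H x W feature map over coarse grids.

    Hypothetical sketch of a 3-layer pyramid scene parsing step:
    each level splits the map into an n x n grid and averages each
    cell, capturing local regions at several spatial scales.
    """
    C, H, W = feat.shape
    pooled = []
    for n in levels:
        row_groups = np.array_split(np.arange(H), n)
        col_groups = np.array_split(np.arange(W), n)
        for rows in row_groups:
            for cols in col_groups:
                cell = feat[:, rows][:, :, cols]       # C x |rows| x |cols|
                pooled.append(cell.mean(axis=(1, 2)))  # C-dim vector per cell
    return np.concatenate(pooled)                      # C * (1 + 4 + 9) dims

def fuse_descriptor(feat, global_token):
    """Concatenate the global token with pyramid-pooled local features."""
    return np.concatenate([global_token, pyramid_pool(feat)])

# Example: a 256-channel 12x12 feature map and a 256-dim global token.
feat = np.random.rand(256, 12, 12)
token = np.random.rand(256)
desc = fuse_descriptor(feat, token)
# levels (1,2,3) give 1 + 4 + 9 = 14 pooled vectors; with the token,
# the fused descriptor has (14 + 1) * 256 = 3840 dimensions.
print(desc.shape)  # (3840,)
```

In this sketch the pyramid levels contribute progressively finer local summaries, while the prepended token preserves the image-level information, mirroring the multi-dimensional representation described above.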

Keywords

Visual place recognition; cross-environment robustness; pre-trained model; feature learning

Cite This Article

APA Style
Xiang, L., Fu, H., & Yang, C. (2025). CerfeVPR: Cross-Environment Robust Feature Enhancement for Visual Place Recognition. Computers, Materials & Continua, 84(1), 325–345. https://doi.org/10.32604/cmc.2025.062834
Vancouver Style
Xiang L, Fu H, Yang C. CerfeVPR: Cross-Environment Robust Feature Enhancement for Visual Place Recognition. Comput Mater Contin. 2025;84(1):325–345. https://doi.org/10.32604/cmc.2025.062834
IEEE Style
L. Xiang, H. Fu, and C. Yang, “CerfeVPR: Cross-Environment Robust Feature Enhancement for Visual Place Recognition,” Comput. Mater. Contin., vol. 84, no. 1, pp. 325–345, 2025. https://doi.org/10.32604/cmc.2025.062834



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.