Open Access

ARTICLE

Augmented Deep Multi-Granularity Pose-Aware Feature Fusion Network for Visible-Infrared Person Re-Identification

Zheng Shi, Wanru Song*, Junhao Shan, Feng Liu

School of Educational Science and Technology, Nanjing University of Posts and Telecommunications, Nanjing, 210013, China

* Corresponding Author: Wanru Song. Email: email

Computers, Materials & Continua 2023, 77(3), 3467-3488. https://doi.org/10.32604/cmc.2023.045849

Abstract

Visible-infrared Cross-modality Person Re-identification (VI-ReID) is a critical technology for smart public environments such as cities, campuses, and libraries. It aims to match pedestrians across visible-light and infrared images for video surveillance, which makes it challenging to explore cross-modal shared information accurately and efficiently. Multi-granularity feature learning methods have therefore been applied in VI-ReID to extract latent multi-granularity semantic information related to pedestrian body-structure attributes. However, existing research mainly relies on traditional dual-stream fusion networks and overlooks the fusion module, the core of cross-modal learning networks. This paper introduces a novel network called the Augmented Deep Multi-Granularity Pose-Aware Feature Fusion Network (ADMPFF-Net), which incorporates the Multi-Granularity Pose-Aware Feature Fusion (MPFF) module to generate discriminative representations. MPFF efficiently explores and learns global and local features with multi-level semantic information by inserting disentangling and duplicating blocks into the fusion module of the backbone network. ADMPFF-Net also offers a new perspective on designing multi-granularity learning networks. By incorporating the multi-granularity feature disentanglement (mGFD) and posture information segmentation (pIS) strategies, it extracts more representative features concerning body-structure information. The Local Information Enhancement (LIE) module augments high-performance features in VI-ReID, and the multi-granularity joint loss supervises model training for objective feature learning. Experimental results on two public datasets show that ADMPFF-Net efficiently constructs pedestrian feature representations and enhances the accuracy of VI-ReID.
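The multi-granularity idea described above (disentangling a pedestrian feature map into global and increasingly fine-grained local parts) can be illustrated with a minimal sketch. This is not the authors' implementation: the stripe-based horizontal partitioning, the granularity levels `(1, 2, 3)`, and the plain average pooling are illustrative assumptions borrowed from common part-based Re-ID designs.

```python
# Conceptual sketch of multi-granularity feature disentanglement:
# split a feature map into horizontal stripes at several granularities
# and average-pool each stripe, yielding one global and several local
# feature vectors. All shapes and granularities are assumptions.

def avg_pool(rows):
    """Average a list of equal-length row vectors into one vector."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def multi_granularity_features(feature_map, granularities=(1, 2, 3)):
    """feature_map: H x C nested list (H spatial rows, C channels).

    For each granularity g, the H rows are split into g horizontal
    stripes and each stripe is pooled: g=1 gives a global feature,
    g>1 gives body-part-level local features.
    """
    h = len(feature_map)
    feats = []
    for g in granularities:
        stripe_h = h // g
        for i in range(g):
            stripe = feature_map[i * stripe_h:(i + 1) * stripe_h]
            feats.append(avg_pool(stripe))
    return feats

# Toy 6x4 "feature map": row r holds [r, r+1, r+2, r+3].
fmap = [[float(r + c) for c in range(4)] for r in range(6)]
feats = multi_granularity_features(fmap)
# Granularities (1, 2, 3) yield 1 + 2 + 3 = 6 feature vectors.
```

In a full model these pooled vectors would each pass through their own embedding and be supervised jointly, analogous to the multi-granularity joint loss described in the abstract.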

Cite This Article

APA Style
Shi, Z., Song, W., Shan, J., & Liu, F. (2023). Augmented deep multi-granularity pose-aware feature fusion network for visible-infrared person re-identification. Computers, Materials & Continua, 77(3), 3467-3488. https://doi.org/10.32604/cmc.2023.045849
Vancouver Style
Shi Z, Song W, Shan J, Liu F. Augmented deep multi-granularity pose-aware feature fusion network for visible-infrared person re-identification. Comput Mater Contin. 2023;77(3):3467-3488. https://doi.org/10.32604/cmc.2023.045849
IEEE Style
Z. Shi, W. Song, J. Shan, and F. Liu, "Augmented Deep Multi-Granularity Pose-Aware Feature Fusion Network for Visible-Infrared Person Re-Identification," Comput. Mater. Contin., vol. 77, no. 3, pp. 3467-3488, 2023. https://doi.org/10.32604/cmc.2023.045849



This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.