Open Access

ARTICLE

Multi-Modal Pre-Synergistic Fusion Entity Alignment Based on Mutual Information Strategy Optimization

Huayu Li1,2, Xinxin Chen1,2, Lizhuang Tan3,4,*, Konstantin I. Kostromitin5,6, Athanasios V. Vasilakos7, Peiying Zhang1,2

1 Qingdao Institute of Software, College of Computer Science and Technology, China University of Petroleum (East China), Qingdao, 266580, China
2 Shandong Key Laboratory of Intelligent Oil & Gas Industrial Software, China University of Petroleum (East China), Qingdao, 266580, China
3 Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan, 250014, China
4 Shandong Provincial Key Laboratory of Computing Power Internet and Service Computing, Shandong Fundamental Research Center for Computer Science, Jinan, 250014, China
5 Department of Physics of Nanoscale Systems, South Ural State University, Chelyabinsk, 454080, Russia
6 Institute of Radioelectronics and Information Technologies, Ural Federal University, Yekaterinburg, 620002, Russia
7 Department of ICT and Center for AI Research, University of Agder (UiA), Grimstad, 4879, Norway

* Corresponding Author: Lizhuang Tan

Computers, Materials & Continua 2025, 85(2), 4133-4153. https://doi.org/10.32604/cmc.2025.069690

Abstract

To address the challenge of missing modal information in entity alignment, to mitigate the information loss and bias that arise from modal heterogeneity during fusion, and to capture shared information across modalities, this paper proposes a Multi-modal Pre-synergistic Entity Alignment model based on Cross-modal Mutual Information Strategy Optimization (MPSEA). The model first employs independent encoders to process multi-modal features, including text, images, and numerical values. Next, a multi-modal pre-synergistic fusion mechanism integrates graph-structural and visual features into the textual modality as preparatory information. This pre-fusion strategy enables unified perception of heterogeneous modalities at the model's initial stage, reducing discrepancies during the subsequent fusion process. Finally, through cross-modal deep-perception reinforcement learning, the model performs adaptive multi-level feature fusion across modalities, enabling it to learn more effective alignment strategies. Extensive experiments on multiple public datasets show that MPSEA achieves gains of up to 7% in Hits@1 and 8.2% in MRR on the FBDB15K dataset, and up to 9.1% in Hits@1 and 7.7% in MRR on the FBYG15K dataset, compared to existing state-of-the-art methods. These results confirm the effectiveness of the proposed model.
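To give a concrete sense of the pre-synergistic fusion step described above, the following is a minimal, illustrative sketch of the general idea: auxiliary modalities (graph structure and images) are projected into the text embedding space and blended into the textual features before the main fusion stage. This is not the authors' implementation; all function names, dimensions, the blending weight `alpha`, and the use of random projections in place of learned linear layers are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def pre_synergistic_fuse(text, graph, image, alpha=0.3):
    """Inject graph-structural and visual features into the textual
    modality as preparatory information before the main fusion stage.
    All dimensions and weights here are illustrative assumptions."""
    # Project auxiliary modalities into the text embedding space.
    # Random projections stand in for learned linear layers.
    w_g = rng.standard_normal((graph.shape[-1], text.shape[-1])) / np.sqrt(graph.shape[-1])
    w_v = rng.standard_normal((image.shape[-1], text.shape[-1])) / np.sqrt(image.shape[-1])
    prep = graph @ w_g + image @ w_v
    # Blend the preparatory signal into the text features, then L2-normalize,
    # so later fusion stages see a unified representation.
    fused = (1 - alpha) * text + alpha * prep
    return fused / np.linalg.norm(fused, axis=-1, keepdims=True)

# Toy entity: 64-d text, 32-d graph, 128-d image features.
text = rng.standard_normal(64)
graph = rng.standard_normal(32)
image = rng.standard_normal(128)
out = pre_synergistic_fuse(text, graph, image)
print(out.shape)  # (64,)
```

In this sketch the pre-fused vector stays in the text embedding space, which is what lets the downstream fusion treat heterogeneous modalities uniformly; the paper's actual model additionally optimizes the fusion policy with mutual-information-based reinforcement learning.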

Keywords

Knowledge graph; multi-modal; entity alignment; feature fusion; pre-synergistic fusion

Cite This Article

APA Style
Li, H., Chen, X., Tan, L., Kostromitin, K. I., Vasilakos, A. V., & Zhang, P. (2025). Multi-Modal Pre-Synergistic Fusion Entity Alignment Based on Mutual Information Strategy Optimization. Computers, Materials & Continua, 85(2), 4133–4153. https://doi.org/10.32604/cmc.2025.069690
Vancouver Style
Li H, Chen X, Tan L, Kostromitin KI, Vasilakos AV, Zhang P. Multi-Modal Pre-Synergistic Fusion Entity Alignment Based on Mutual Information Strategy Optimization. Comput Mater Contin. 2025;85(2):4133–4153. https://doi.org/10.32604/cmc.2025.069690
IEEE Style
H. Li, X. Chen, L. Tan, K. I. Kostromitin, A. V. Vasilakos, and P. Zhang, “Multi-Modal Pre-Synergistic Fusion Entity Alignment Based on Mutual Information Strategy Optimization,” Comput. Mater. Contin., vol. 85, no. 2, pp. 4133–4153, 2025. https://doi.org/10.32604/cmc.2025.069690



Copyright © 2025 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.