Open Access

ARTICLE

Multi-Modal Military Event Extraction Based on Knowledge Fusion

Yuyuan Xiang, Yangli Jia*, Xiangliang Zhang, Zhenling Zhang

School of Computer Science, Liaocheng University, Liaocheng, 252059, China

* Corresponding Author: Yangli Jia.

Computers, Materials & Continua 2023, 77(1), 97-114. https://doi.org/10.32604/cmc.2023.040751

Abstract

Event extraction is a significant task within information extraction, aiming to automatically extract structured event information from vast volumes of unstructured text. Extracting event elements from multi-modal data remains challenging because the data contain a large number of images and many overlapping event elements. Although researchers have proposed various methods for this task, most existing event extraction models cannot address these challenges because they are applicable only to text scenarios. To solve these issues, this paper proposes a multi-modal event extraction method based on knowledge fusion. Specifically, for event-type recognition, we use a pipeline approach that integrates multiple pre-trained models. This enables more comprehensive capture of the multi-dimensional event semantic features in military texts and strengthens the connection between trigger words and events. For event element extraction, we propose a method for constructing a priori templates that combine event types with their corresponding trigger words. This yields fine-grained input samples containing event trigger words, enabling the model to understand the semantic relationships among elements in greater depth. Furthermore, a fusion method that spatially maps textual event elements to image elements is proposed to reduce category overload and effectively achieve multi-modal knowledge fusion. Experimental results on the CCKS 2022 dataset show that our method achieves competitive results, with an overall F1-score of 53.4%. These results validate the effectiveness of our method in extracting event elements from multi-modal data.
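The a priori template idea described above can be illustrated with a short sketch: given a recognized event type and its trigger word, a template naming the expected argument roles is prepended to the source sentence before element extraction. The following is a minimal, hypothetical Python sketch of that idea; the schema entries and the names EVENT_SCHEMA, build_template, and make_model_input are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the prior-template idea from the abstract:
# combine a recognized event type with its trigger word into a
# fine-grained prompt that is prepended to the source sentence.
# The schema contents below are invented examples, not the CCKS 2022 schema.
EVENT_SCHEMA = {
    "Military Exercise": ["subject", "location", "date", "equipment"],
    "Missile Launch": ["launcher", "missile", "target", "date"],
}

def build_template(event_type: str, trigger: str) -> str:
    """Build a prior template naming the event type, trigger word, and roles."""
    roles = ", ".join(EVENT_SCHEMA[event_type])
    return (f"Event type: {event_type}. Trigger word: {trigger}. "
            f"Extract the following roles: {roles}.")

def make_model_input(sentence: str, event_type: str, trigger: str) -> str:
    """Prepend the template so the element extractor sees type/trigger context."""
    return build_template(event_type, trigger) + " [SEP] " + sentence

if __name__ == "__main__":
    text = "The fleet conducted a joint exercise in the South China Sea on 3 May."
    print(make_model_input(text, "Military Exercise", "exercise"))
```

In such a setup, the template tokens give the element extractor explicit type and trigger context, which is one way fine-grained, trigger-aware input samples of the kind described in the abstract could be produced.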

Keywords


Cite This Article

APA Style
Xiang, Y., Jia, Y., Zhang, X., & Zhang, Z. (2023). Multi-modal military event extraction based on knowledge fusion. Computers, Materials & Continua, 77(1), 97-114. https://doi.org/10.32604/cmc.2023.040751
Vancouver Style
Xiang Y, Jia Y, Zhang X, Zhang Z. Multi-modal military event extraction based on knowledge fusion. Comput Mater Contin. 2023;77(1):97-114. https://doi.org/10.32604/cmc.2023.040751
IEEE Style
Y. Xiang, Y. Jia, X. Zhang, and Z. Zhang, "Multi-Modal Military Event Extraction Based on Knowledge Fusion," Comput. Mater. Contin., vol. 77, no. 1, pp. 97-114, 2023. https://doi.org/10.32604/cmc.2023.040751



This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.