Search Results (8)
  • Open Access

    ARTICLE

    PNMT: Zero-Resource Machine Translation with Pivot-Based Feature Converter

Lingfang Li, Weijian Hu, Mingxing Luo*

    CMC-Computers, Materials & Continua, Vol.84, No.3, pp. 5915-5935, 2025, DOI:10.32604/cmc.2025.064349 - 30 July 2025

Abstract Neural machine translation (NMT) has been widely applied to high-resource language pairs, but its dependence on large-scale data results in poor performance in low-resource scenarios. In this paper, we propose a transfer-learning-based approach called shared space transfer for zero-resource NMT. Our method leverages a pivot pre-trained language model (PLM) to create a shared representation space, which is used in both auxiliary source→pivot (Ms2p) and pivot→target (Mp2t) translation models. Specifically, we exploit pivot PLM to initialize the Ms2p decoder and Mp2t encoder, while adopting a freezing strategy during the training process. We further propose a feature…
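The freezing strategy the abstract describes, where pivot-PLM weights initialize the Ms2p decoder and are then held fixed, can be sketched as below. All names (`pivot_plm`, `ms2p`, the parameter keys) are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the parameter-freezing strategy: pivot PLM weights are
# copied into the source->pivot decoder and excluded from gradient updates,
# while the rest of the model remains trainable.

def init_from_pivot_plm(plm_weights):
    """Copy pivot-PLM weights and mark them as frozen."""
    return {name: {"value": w, "trainable": False}
            for name, w in plm_weights.items()}

def trainable_parameters(model):
    """Return only the parameter names the optimizer should update."""
    return [name for name, p in model.items() if p["trainable"]]

# Hypothetical pivot PLM with two weight tensors (scalars for brevity).
pivot_plm = {"layer1": 0.5, "layer2": -1.2}

# Ms2p: trainable encoder; decoder initialized from the pivot PLM and frozen.
ms2p = {"encoder.w": {"value": 0.0, "trainable": True}}
ms2p.update({f"decoder.{k}": v for k, v in init_from_pivot_plm(pivot_plm).items()})

print(trainable_parameters(ms2p))  # only the encoder weight is updated
```

In a deep-learning framework the same effect is achieved by disabling gradients on the copied modules (e.g. `requires_grad = False` in PyTorch).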

  • Open Access

    ARTICLE

    Leveraging Unlabeled Corpus for Arabic Dialect Identification

Mohammed Abdelmajeed*, Jiangbin Zheng, Ahmed Murtadha, Youcef Nafa, Mohammed Abaker, Muhammad Pervez Akhter

    CMC-Computers, Materials & Continua, Vol.83, No.2, pp. 3471-3491, 2025, DOI:10.32604/cmc.2025.059870 - 16 April 2025

Abstract Arabic Dialect Identification (DID) is a task in Natural Language Processing (NLP) that involves determining the dialect of a given piece of text in Arabic. The state-of-the-art solutions for DID are built on various deep neural networks that commonly learn the representation of sentences in response to a given dialect. Despite the effectiveness of these solutions, the performance heavily relies on the amount of labeled examples, which is labor-intensive to attain and may not be readily available in real-world scenarios. To alleviate the burden of labeling data, this paper introduces a novel solution that leverages…
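The abstract is truncated before the method details, so as a generic illustration of leveraging an unlabeled corpus, the sketch below shows self-training with confidence filtering, one common semi-supervised scheme. The classifier, dialect labels, and threshold are fabricated for illustration and are not the paper's method.

```python
# Generic self-training sketch: a model trained on labeled data assigns
# pseudo-labels to unlabeled sentences, and only high-confidence predictions
# are kept for the next training round.

def pseudo_label(classify, unlabeled, threshold=0.9):
    """Keep (sentence, label) pairs whose confidence passes the threshold."""
    kept = []
    for sentence in unlabeled:
        label, confidence = classify(sentence)
        if confidence >= threshold:
            kept.append((sentence, label))
    return kept

# Hypothetical toy classifier: keyword-based, for illustration only.
def toy_classifier(sentence):
    if "shlonak" in sentence:   # a Gulf-dialect greeting
        return "Gulf", 0.95
    return "MSA", 0.6           # low confidence -> filtered out

corpus = ["shlonak ya akhi", "hadha kitab"]
print(pseudo_label(toy_classifier, corpus))
```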

  • Open Access

    ARTICLE

    AdaptForever: Elastic and Mutual Learning for Continuous NLP Task Mastery

Ke Chen, Cheng Peng, Xinyang He, Jiakang Sun, Xu Liu, Xiaolin Qin*, Yong Zhong*

    CMC-Computers, Materials & Continua, Vol.82, No.3, pp. 4003-4019, 2025, DOI:10.32604/cmc.2025.057443 - 06 March 2025

Abstract In natural language processing (NLP), managing multiple downstream tasks through fine-tuning pre-trained models often requires maintaining separate task-specific models, leading to practical inefficiencies. To address this challenge, we introduce AdaptForever, a novel approach that enables continuous mastery of NLP tasks through the integration of elastic and mutual learning strategies with a stochastic expert mechanism. Our method freezes the pre-trained model weights while incorporating adapters enhanced with mutual learning capabilities, facilitating effective knowledge transfer from previous tasks to new ones. By combining Elastic Weight Consolidation (EWC) for knowledge preservation with specialized regularization terms, AdaptForever successfully maintains…
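Elastic Weight Consolidation, mentioned in the abstract, penalizes changes to parameters that were important for previous tasks. The scalar sketch below follows the standard EWC penalty (λ/2 · Σᵢ Fᵢ(θᵢ − θᵢ*)²); the parameter values and Fisher estimates are fabricated, and this is not code from the paper.

```python
# Standard EWC regularizer: a quadratic penalty, weighted by per-parameter
# Fisher-information estimates, that pulls parameters back toward the values
# they had after the previous task.

def ewc_penalty(params, old_params, fisher, lam):
    """lam/2 * sum_i F_i * (theta_i - theta_old_i)^2"""
    return 0.5 * lam * sum(
        fisher[k] * (params[k] - old_params[k]) ** 2 for k in params
    )

old = {"w1": 1.0, "w2": -2.0}          # values after the previous task
new = {"w1": 1.5, "w2": -2.0}          # current values on the new task
importance = {"w1": 4.0, "w2": 0.1}    # hypothetical Fisher estimates

# Only w1 moved, and it is important: penalty = 1/2 * 1.0 * 4.0 * 0.5^2 = 0.5
print(ewc_penalty(new, old, importance, lam=1.0))
```

This penalty is added to the new task's loss, so important weights resist drifting while unimportant ones remain free to adapt.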

  • Open Access

    ARTICLE

    Multi-Head Encoder Shared Model Integrating Intent and Emotion for Dialogue Summarization

    Xinlai Xing, Junliang Chen*, Xiaochuan Zhang, Shuran Zhou, Runqing Zhang

    CMC-Computers, Materials & Continua, Vol.82, No.2, pp. 2275-2292, 2025, DOI:10.32604/cmc.2024.056877 - 17 February 2025

Abstract In task-oriented dialogue systems, intent, emotion, and actions are crucial elements of user activity. Analyzing the relationships among these elements to control and manage task-oriented dialogue systems is a challenging task. However, previous work has primarily focused on the independent recognition of user intent and emotion, making it difficult to simultaneously track both aspects in the dialogue tracking module and to effectively utilize user emotions in subsequent dialogue strategies. We propose a Multi-Head Encoder Shared Model (MESM) that dynamically integrates features from emotion and intent encoders through a feature fusioner. Addressing the scarcity of datasets…
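The abstract describes a "feature fusioner" that dynamically combines the emotion and intent encoder outputs. A gated weighted sum is one standard fusion choice; the gate value and vector contents below are illustrative assumptions, not the MESM implementation.

```python
# Gated feature fusion sketch: combine two encoder outputs element-wise,
# with a gate deciding how much each source contributes.

def fuse(intent_vec, emotion_vec, gate):
    """Element-wise gated combination: g*intent + (1-g)*emotion."""
    assert len(intent_vec) == len(emotion_vec)
    return [gate * i + (1.0 - gate) * e
            for i, e in zip(intent_vec, emotion_vec)]

intent = [1.0, 0.0, 2.0]    # toy intent-encoder output
emotion = [0.0, 1.0, 2.0]   # toy emotion-encoder output
print(fuse(intent, emotion, gate=0.75))  # leans toward the intent features
```

In a learned fusioner the gate would itself be predicted from the two inputs (e.g. a sigmoid over their concatenation) rather than fixed.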

  • Open Access

    ARTICLE

    Enhancing Relational Triple Extraction in Specific Domains: Semantic Enhancement and Synergy of Large Language Models and Small Pre-Trained Language Models

Jiakai Li, Jianpeng Hu*, Geng Zhang

    CMC-Computers, Materials & Continua, Vol.79, No.2, pp. 2481-2503, 2024, DOI:10.32604/cmc.2024.050005 - 15 May 2024

Abstract In the process of constructing domain-specific knowledge graphs, the task of relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of semantic interaction information between entities and relations, difficulties in handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three innovative components: Relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve the model’s performance in tackling domain-specific relational triple extraction tasks. We first…
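Of the three components the abstract names, the voting strategy lends itself to a minimal sketch: keep only the (subject, relation, object) triples that enough extraction runs agree on. The extractor outputs below are fabricated for illustration, and this is a generic majority-voting scheme rather than the paper's exact procedure.

```python
# Majority voting over relational triples proposed by several extractors:
# a triple survives only if at least min_votes extractors produced it.

from collections import Counter

def vote_triples(candidate_lists, min_votes=2):
    """Keep triples proposed by at least min_votes extractors."""
    counts = Counter(t for triples in candidate_lists for t in set(triples))
    return sorted(t for t, c in counts.items() if c >= min_votes)

run_a = [("aspirin", "treats", "fever"), ("aspirin", "causes", "nausea")]
run_b = [("aspirin", "treats", "fever")]
run_c = [("aspirin", "treats", "fever"), ("ibuprofen", "treats", "pain")]

print(vote_triples([run_a, run_b, run_c]))  # only the 3-vote triple survives
```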

  • Open Access

    ARTICLE

    Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter

    R. Sujatha, K. Nimala*

    CMC-Computers, Materials & Continua, Vol.78, No.2, pp. 1669-1686, 2024, DOI:10.32604/cmc.2023.046963 - 27 February 2024

Abstract Sentence classification is the process of categorizing a sentence based on the context of the sentence. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model was taken up here to classify the conversation sentences from the conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are for…
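One standard way to combine an ensemble of classifiers over the four categories named in the abstract is soft voting, averaging each member's class probabilities before taking the argmax. The member probabilities below are fabricated for illustration; the paper's exact ensembling scheme is not specified in the truncated abstract.

```python
# Soft-voting ensemble sketch: average per-class probabilities across
# ensemble members, then return the highest-scoring class.

CLASSES = ["information", "question", "directive", "commission"]

def soft_vote(member_probs):
    """Average class probabilities over members and return the argmax class."""
    n = len(member_probs)
    avg = [sum(p[i] for p in member_probs) / n for i in range(len(CLASSES))]
    return CLASSES[avg.index(max(avg))]

model_a = [0.1, 0.7, 0.1, 0.1]   # leans "question"
model_b = [0.2, 0.5, 0.2, 0.1]   # leans "question"
model_c = [0.6, 0.2, 0.1, 0.1]   # disagrees: "information"

print(soft_vote([model_a, model_b, model_c]))
```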

  • Open Access

    ARTICLE

    Personality Trait Detection via Transfer Learning

Bashar Alshouha, Jesus Serrano-Guerrero*, Francisco Chiclana, Francisco P. Romero, Jose A. Olivas

    CMC-Computers, Materials & Continua, Vol.78, No.2, pp. 1933-1956, 2024, DOI:10.32604/cmc.2023.046711 - 27 February 2024

Abstract Personality recognition plays a pivotal role when developing user-centric solutions such as recommender systems or decision support systems across various domains, including education, e-commerce, or human resources. Traditional machine learning techniques have been broadly employed for personality trait identification; nevertheless, the development of new technologies based on deep learning has led to new opportunities to improve their performance. This study focuses on the capabilities of pre-trained language models such as BERT, RoBERTa, ALBERT, ELECTRA, ERNIE, or XLNet, to deal with the task of personality recognition. These models are able to capture structural features from textual…

  • Open Access

    ARTICLE

    Vulnerability Detection of Ethereum Smart Contract Based on SolBERT-BiGRU-Attention Hybrid Neural Model

Guangxia Xu*, Lei Liu, Jingnan Dong

    CMES-Computer Modeling in Engineering & Sciences, Vol.137, No.1, pp. 903-922, 2023, DOI:10.32604/cmes.2023.026627 - 23 April 2023

Abstract In recent years, with the great success of pre-trained language models, the pre-trained BERT model has been gradually applied to the field of source code understanding. However, the time cost of training a language model from scratch is very high, and how to transfer the pre-trained language model to the field of smart contract vulnerability detection is a hot research direction at present. In this paper, we propose a hybrid model to detect common vulnerabilities in smart contracts based on a lightweight pre-trained language model BERT and connected to a bidirectional gated recurrent unit model.
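The hybrid pipeline named in the title (BERT-style embeddings → BiGRU → attention) can be sketched end-to-end as below. The BiGRU is stubbed with running means in both directions purely to show the data flow; a real model uses learned recurrent cells, and all numbers are illustrative, not the paper's architecture details.

```python
# Pipeline sketch of the hybrid model: per-token embeddings pass through a
# bidirectional recurrent layer, then an attention layer pools the per-token
# states into a single vector for vulnerability classification.

import math

def bigru_stub(embeddings):
    """Stand-in for a BiGRU: forward and backward running means, concatenated."""
    def running_means(seq):
        out, total = [], 0.0
        for i, x in enumerate(seq, start=1):
            total += x
            out.append(total / i)
        return out
    fwd = running_means(embeddings)
    bwd = list(reversed(running_means(list(reversed(embeddings)))))
    return list(zip(fwd, bwd))  # one (forward, backward) state per token

def attention_pool(states, scores):
    """Softmax the scores, then take the weighted sum of the states."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(states[0])
    return [sum(w * h[d] for w, h in zip(weights, states)) for d in range(dim)]

tokens = [0.0, 1.0, 2.0]                 # toy 1-d "embeddings" per token
states = bigru_stub(tokens)
pooled = attention_pool(states, scores=[0.0, 0.0, 0.0])  # uniform attention
print(pooled)
```

In the full model the attention scores would be learned functions of the BiGRU states, letting the classifier focus on vulnerability-relevant tokens.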
