Search Results (12)
  • Open Access

    ARTICLE

    Multimodal Neural Machine Translation Based on Knowledge Distillation and Anti-Noise Interaction

    Erlin Tian1, Zengchao Zhu2,*, Fangmei Liu2, Zuhe Li2

    CMC-Computers, Materials & Continua, Vol.83, No.2, pp. 2305-2322, 2025, DOI:10.32604/cmc.2025.061145 - 16 April 2025

    Abstract Within the realm of multimodal neural machine translation (MNMT), seamlessly integrating textual data with corresponding image data to enhance translation accuracy has become a pressing issue. We observe that discrepancies between textual content and associated images can introduce visual noise, potentially diverting the model’s focus away from the textual data and thereby degrading overall translation quality. To solve this visual noise problem, we propose an innovative KDNR-MNMT model. The model combines the knowledge distillation technique with an anti-noise interaction mechanism, which makes full use of the synthesized graphic knowledge…
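
    The distillation half of such a model is typically the standard teacher-student objective; the anti-noise interaction mechanism is specific to the paper and is not reproduced here. A minimal PyTorch sketch of the generic distillation ingredient, with the temperature T and mixing weight alpha as illustrative assumptions rather than the paper’s settings:

    ```python
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits: torch.Tensor,
                          teacher_logits: torch.Tensor,
                          targets: torch.Tensor,
                          T: float = 2.0,       # assumed temperature
                          alpha: float = 0.5) -> torch.Tensor:  # assumed mixing weight
        """Hard-label cross-entropy plus a temperature-softened KL term
        toward the teacher distribution -- the standard distillation recipe."""
        ce = F.cross_entropy(student_logits, targets)
        kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                      F.softmax(teacher_logits / T, dim=-1),
                      reduction="batchmean") * (T * T)
        return alpha * ce + (1.0 - alpha) * kd
    ```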

  • Open Access

    ARTICLE

    Improving Machine Translation Formality with Large Language Models

    Murun Yang1,*, Fuxue Li2

    CMC-Computers, Materials & Continua, Vol.82, No.2, pp. 2061-2075, 2025, DOI:10.32604/cmc.2024.058248 - 17 February 2025

    Abstract Preserving formal style in neural machine translation (NMT) is essential, yet it is often overlooked as an optimization objective during training. This oversight can lead to translations that, though accurate, lack formality. In this paper, we propose a method to improve NMT formality with large language models (LLMs), combining the style-transfer and evaluation capabilities of an LLM with the high-quality translation generation ability of NMT models. The proposed method (namely INMTF) encompasses two approaches. The first involves a revision approach using an LLM to revise the NMT-generated translation, ensuring a…
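
    The revision approach suggests a simple two-stage pipeline. The sketch below is a hypothetical illustration rather than the paper’s implementation; nmt_translate and llm_complete are placeholder callables, and the prompt wording is an assumption:

    ```python
    def formalize(source: str, nmt_translate, llm_complete) -> str:
        """Two-stage formality pipeline: draft with an NMT model, then
        ask an LLM to rewrite the draft in a formal register."""
        draft = nmt_translate(source)  # placeholder NMT system
        prompt = ("Rewrite the following translation in a formal register, "
                  "preserving its meaning:\n" + draft)  # assumed prompt wording
        return llm_complete(prompt)  # placeholder LLM call
    ```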

  • Open Access

    ARTICLE

    LKMT: Linguistics Knowledge-Driven Multi-Task Neural Machine Translation for Urdu and English

    Muhammad Naeem Ul Hassan1,2, Zhengtao Yu1,2,*, Jian Wang1,2, Ying Li1,2, Shengxiang Gao1,2, Shuwan Yang1,2, Cunli Mao1,2

    CMC-Computers, Materials & Continua, Vol.81, No.1, pp. 951-969, 2024, DOI:10.32604/cmc.2024.054673 - 15 October 2024

    Abstract Thanks to the strong representation capability of pre-trained language models, supervised machine translation models have achieved outstanding performance. However, the performance of these models drops sharply when the scale of the parallel training corpus is limited. Since pre-trained language models have a strong ability for monolingual representation, the key challenge for machine translation is to construct an in-depth relationship between the source and target languages by injecting lexical and syntactic information into pre-trained language models. To alleviate the dependence on the parallel corpus, we propose a Linguistics Knowledge-Driven Multi-Task (LKMT) approach to…

  • Open Access

    ARTICLE

    Improving Low-Resource Machine Translation Using Reinforcement Learning from Human Feedback

    Liqing Wang*, Yiheng Xiao

    Intelligent Automation & Soft Computing, Vol.39, No.4, pp. 619-631, 2024, DOI:10.32604/iasc.2024.052971 - 06 September 2024

    Abstract Neural Machine Translation is one of the key research directions in Natural Language Processing. However, limited by the scale and quality of parallel corpora, the translation quality of low-resource Neural Machine Translation has always been unsatisfactory. When Reinforcement Learning from Human Feedback (RLHF) is applied to low-resource machine translation, it commonly runs into substandard preference data quality and the high cost of manual feedback data. Therefore, a more cost-effective method for obtaining feedback data is proposed. First, the quality of preference data is optimized through prompt engineering of the Large Language Model (LLM), …
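
    The feedback data ultimately trains a preference model; the abstract is cut off before the specifics, so the sketch below shows only the standard Bradley-Terry reward-model loss that RLHF pipelines commonly rely on (an assumption, not the authors’ formulation):

    ```python
    import torch
    import torch.nn.functional as F

    def preference_loss(score_chosen: torch.Tensor,
                        score_rejected: torch.Tensor) -> torch.Tensor:
        """Bradley-Terry objective: push the reward assigned to the
        preferred translation above that of the rejected one."""
        return -F.logsigmoid(score_chosen - score_rejected).mean()
    ```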

  • Open Access

    ARTICLE

    Neural Machine Translation Models with Attention-Based Dropout Layer

    Huma Israr1,*, Safdar Abbas Khan1, Muhammad Ali Tahir1, Muhammad Khuram Shahzad1, Muneer Ahmad1, Jasni Mohamad Zain2,*

    CMC-Computers, Materials & Continua, Vol.75, No.2, pp. 2981-3009, 2023, DOI:10.32604/cmc.2023.035814 - 31 March 2023

    Abstract In bilingual translation, attention-based Neural Machine Translation (NMT) models are used to achieve synchrony between input and output sequences and the notion of alignment. NMT models have obtained state-of-the-art performance for several language pairs. However, there has been little work exploring useful architectures for Urdu-to-English machine translation. We conducted extensive Urdu-to-English translation experiments using Long Short-Term Memory (LSTM), Bidirectional Recurrent Neural Networks (Bi-RNN), the Statistical Recurrent Unit (SRU), the Gated Recurrent Unit (GRU), Convolutional Neural Networks (CNN), and the Transformer. Experimental results show that Bi-RNN and LSTM with an attention mechanism, trained iteratively with a scalable data set, make precise predictions on unseen…
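
    The visible abstract does not specify where the attention-based dropout layer sits; the usual reading is dropout applied to the attention weight matrix before it aggregates the values. A minimal PyTorch sketch of that generic placement (tensor shapes and the 0.1 rate are assumptions):

    ```python
    import torch
    import torch.nn.functional as F

    def attention_with_dropout(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                               p_drop: float = 0.1, training: bool = True) -> torch.Tensor:
        """Scaled dot-product attention with dropout applied to the
        attention weights rather than to the layer outputs."""
        d_k = q.size(-1)
        scores = q @ k.transpose(-2, -1) / d_k ** 0.5
        weights = F.dropout(torch.softmax(scores, dim=-1), p=p_drop, training=training)
        return weights @ v
    ```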

  • Open Access

    ARTICLE

    Text Simplification Using Transformer and BERT

    Sarah Alissa1,*, Mike Wald2

    CMC-Computers, Materials & Continua, Vol.75, No.2, pp. 3479-3495, 2023, DOI:10.32604/cmc.2023.033647 - 31 March 2023

    Abstract Reading and writing are the main methods of interacting with web content. Text simplification tools are helpful for people with cognitive impairments, new language learners, and children, who may find it difficult to understand complex web content. Text simplification is the process of changing complex text into more readable and understandable text. Recent approaches to text simplification have adopted the machine translation concept to learn simplification rules from a parallel corpus of complex and simple sentences. In this paper, we propose two models based on the transformer, an encoder-decoder structure that achieves state-of-the-art…

  • Open Access

    ARTICLE

    Neural Machine Translation by Fusing Key Information of Text

    Shijie Hu1, Xiaoyu Li1,*, Jiayu Bai1, Hang Lei1, Weizhong Qian1, Sunqiang Hu1, Cong Zhang2, Akpatsa Samuel Kofi1, Qian Qiu2,3, Yong Zhou4, Shan Yang5

    CMC-Computers, Materials & Continua, Vol.74, No.2, pp. 2803-2815, 2023, DOI:10.32604/cmc.2023.032732 - 31 October 2022

    Abstract When the Transformer was proposed by Google in 2017, it was first used for machine translation tasks and achieved the state of the art at that time. Although current neural machine translation models can generate high-quality translation results, there are still mistranslations and omissions in the translation of key information in long sentences. On the other hand, the most important part of traditional translation tasks is the translation of key information. In the translation results, as long as the key information is translated accurately and completely, even if other parts of the results are…

  • Open Access

    ARTICLE

    DLBT: Deep Learning-Based Transformer to Generate Pseudo-Code from Source Code

    Walaa Gad1,*, Anas Alokla1, Waleed Nazih2, Mustafa Aref1, Abdel-badeeh Salem1

    CMC-Computers, Materials & Continua, Vol.70, No.2, pp. 3117-3132, 2022, DOI:10.32604/cmc.2022.019884 - 27 September 2021

    Abstract Understanding the content of source code and its regular expressions is very difficult when they are written in an unfamiliar language. Pseudo-code explains and describes the content of the code without using the syntax or technologies of a programming language. However, writing pseudo-code for each code instruction is laborious. Recently, neural machine translation has been used to generate textual descriptions for source code. In this paper, a novel deep learning-based transformer (DLBT) model is proposed for automatic pseudo-code generation from source code. The proposed model uses deep learning based on Neural Machine Translation (NMT)…

  • Open Access

    ARTICLE

    A Novel Beam Search to Improve Neural Machine Translation for English-Chinese

    Xinyue Lin1, Jin Liu1,*, Jianming Zhang2, Se-Jung Lim3

    CMC-Computers, Materials & Continua, Vol.65, No.1, pp. 387-404, 2020, DOI:10.32604/cmc.2020.010984 - 23 July 2020

    Abstract Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation that overcomes the weaknesses of conventional phrase-based translation systems. Although NMT-based systems have gained popularity in commercial translation applications, there is still plenty of room for improvement. As the most popular search algorithm in NMT, beam search is vital to the translation result. However, traditional beam search can produce duplicate or missing translations due to its target sequence selection strategy. Aiming to alleviate this problem, this paper proposes improvements to neural machine translation based on a novel beam search evaluation function. And we…
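
    The paper’s evaluation function lies behind the truncation, so the sketch below only illustrates the general shape of such a fix: rescoring beam hypotheses with a length-normalized score plus a duplicate-token penalty. Both the scoring rule and dup_penalty are stand-ins, not the authors’ formula:

    ```python
    def rescore(log_prob: float, tokens: list, dup_penalty: float = 0.5) -> float:
        """Length-normalized log-probability minus a penalty on repeated
        tokens -- a stand-in for the paper's evaluation function."""
        duplicates = len(tokens) - len(set(tokens))
        return log_prob / max(len(tokens), 1) - dup_penalty * duplicates

    def beam_search(step, bos: int, eos: int, beam_size: int = 4, max_len: int = 50):
        """Generic beam search; step(prefix) must return a list of
        (token_id, log_prob) continuations for the given prefix."""
        beams, finished = [([bos], 0.0)], []
        for _ in range(max_len):
            candidates = []
            for tokens, lp in beams:
                for tok, tok_lp in step(tokens):
                    candidates.append((tokens + [tok], lp + tok_lp))
            # rank expansions with the duplicate-aware score, keep the best
            candidates.sort(key=lambda c: rescore(c[1], c[0]), reverse=True)
            beams = []
            for tokens, lp in candidates[:beam_size]:
                (finished if tokens[-1] == eos else beams).append((tokens, lp))
            if not beams:
                break
        return max(finished or beams, key=lambda c: rescore(c[1], c[0]))[0]
    ```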

  • Open Access

    ARTICLE

    Improve Neural Machine Translation by Building Word Vector with Part of Speech

    Jinyingming Zhang1, Jin Liu1,*, Xinyue Lin1

    Journal on Artificial Intelligence, Vol.2, No.2, pp. 79-88, 2020, DOI:10.32604/jai.2020.010476 - 15 July 2020

    Abstract Systems based on Neural Machine Translation (NMT) are an important technology for translation applications. However, there is still plenty of room for improvement in NMT. In the process of NMT, traditional word vectors cannot distinguish the same word under different parts of speech (POS). Aiming to alleviate this problem, this paper proposes a new word vector training method based on POS features. It can efficiently improve the quality of translation by adding POS features to the training process of word vectors. We conducted extensive experiments to evaluate our method. The experimental results show…
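
    A common way to realize what the abstract describes is to build each token’s input vector from both its surface form and its POS tag, so that the same word under different tags receives different vectors. A minimal PyTorch sketch of that construction (dimensions and the concatenation choice are assumptions, not taken from the paper):

    ```python
    import torch
    import torch.nn as nn

    class POSAwareEmbedding(nn.Module):
        """Embed (word, POS-tag) pairs so that e.g. 'record' as a noun
        and 'record' as a verb get different input vectors."""
        def __init__(self, vocab_size: int, tag_size: int,
                     word_dim: int = 256, tag_dim: int = 32):  # assumed dims
            super().__init__()
            self.word = nn.Embedding(vocab_size, word_dim)
            self.tag = nn.Embedding(tag_size, tag_dim)

        def forward(self, word_ids: torch.Tensor, tag_ids: torch.Tensor) -> torch.Tensor:
            # concatenate word and tag embeddings along the feature axis
            return torch.cat([self.word(word_ids), self.tag(tag_ids)], dim=-1)
    ```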

Displaying results 1-10 of 12.