Search Results (8)
  • Open Access

    ARTICLE

    Neural Machine Translation Models with Attention-Based Dropout Layer

    Huma Israr1,*, Safdar Abbas Khan1, Muhammad Ali Tahir1, Muhammad Khuram Shahzad1, Muneer Ahmad1, Jasni Mohamad Zain2,*

    CMC-Computers, Materials & Continua, Vol.75, No.2, pp. 2981-3009, 2023, DOI:10.32604/cmc.2023.035814

    Abstract In bilingual translation, attention-based Neural Machine Translation (NMT) models are used to achieve synchrony between input and output sequences and to capture the notion of alignment. NMT models have obtained state-of-the-art performance for several language pairs. However, there has been little work exploring useful architectures for Urdu-to-English machine translation. We conducted extensive Urdu-to-English translation experiments using Long Short-Term Memory (LSTM), Bidirectional Recurrent Neural Network (Bi-RNN), Statistical Recurrent Unit (SRU), Gated Recurrent Unit (GRU), and Convolutional Neural Network (CNN) models, as well as the Transformer. Experimental results show that Bi-RNN and LSTM with an attention mechanism, trained iteratively on a scalable data set, make precise predictions on unseen data. The trained models yielded…
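
    The attention-plus-dropout idea can be made concrete in a few lines. Below is a minimal PyTorch sketch of dot-product attention with a dropout layer applied to the attention weights; the class name, dimensions, and toy batch are illustrative assumptions, not the authors' code.

```python
# Minimal sketch: dot-product attention with dropout applied to the
# attention weights, in the spirit of an attention-based dropout layer.
# Names and dimensions are illustrative, not the authors' implementation.
import torch
import torch.nn as nn

class AttentionWithDropout(nn.Module):
    def __init__(self, hidden_size: int, p_drop: float = 0.1):
        super().__init__()
        self.dropout = nn.Dropout(p_drop)   # regularizes the attention weights
        self.scale = hidden_size ** -0.5

    def forward(self, decoder_state, encoder_outputs):
        # decoder_state: (batch, hidden); encoder_outputs: (batch, src_len, hidden)
        scores = torch.bmm(encoder_outputs, decoder_state.unsqueeze(2)).squeeze(2)
        weights = torch.softmax(scores * self.scale, dim=-1)  # (batch, src_len)
        weights = self.dropout(weights)                       # attention-based dropout
        context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)
        return context, weights

# One decoding step over a toy batch of 2 with 5 source positions.
attn = AttentionWithDropout(hidden_size=8)
context, weights = attn(torch.randn(2, 8), torch.randn(2, 5, 8))
```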

  • Open Access

    ARTICLE

    Text Simplification Using Transformer and BERT

    Sarah Alissa1,*, Mike Wald2

    CMC-Computers, Materials & Continua, Vol.75, No.2, pp. 3479-3495, 2023, DOI:10.32604/cmc.2023.033647

    Abstract Reading and writing are the main methods of interacting with web content. Text simplification tools are helpful for people with cognitive impairments, new language learners, and children, who may find it difficult to understand complex web content. Text simplification is the process of changing complex text into more readable and understandable text. Recent approaches to text simplification adopt the machine translation paradigm to learn simplification rules from a parallel corpus of complex and simple sentences. In this paper, we propose two models based on the Transformer, an encoder-decoder architecture that achieves state-of-the-art (SOTA) results in machine translation…
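
    To make the simplification-as-translation framing concrete, here is a minimal PyTorch sketch that encodes complex sentences and decodes simple ones with an encoder-decoder Transformer; the vocabulary size, dimensions, and toy batch are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch: text simplification as encoder-decoder translation with
# nn.Transformer. VOCAB, dimensions, and the toy batch are assumptions.
import torch
import torch.nn as nn

VOCAB, D_MODEL = 1000, 64
embed = nn.Embedding(VOCAB, D_MODEL)              # shared complex/simple vocab
model = nn.Transformer(d_model=D_MODEL, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2,
                       batch_first=True)
project = nn.Linear(D_MODEL, VOCAB)               # decoder states -> vocabulary

complex_ids = torch.randint(0, VOCAB, (2, 12))    # complex sentences (token ids)
simple_ids = torch.randint(0, VOCAB, (2, 9))      # shifted-right simple targets
tgt_mask = model.generate_square_subsequent_mask(simple_ids.size(1))

out = model(embed(complex_ids), embed(simple_ids), tgt_mask=tgt_mask)
logits = project(out)   # (batch, tgt_len, VOCAB); train with cross-entropy loss
```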

  • Open Access

    ARTICLE

    Neural Machine Translation by Fusing Key Information of Text

    Shijie Hu1, Xiaoyu Li1,*, Jiayu Bai1, Hang Lei1, Weizhong Qian1, Sunqiang Hu1, Cong Zhang2, Akpatsa Samuel Kofi1, Qian Qiu2,3, Yong Zhou4, Shan Yang5

    CMC-Computers, Materials & Continua, Vol.74, No.2, pp. 2803-2815, 2023, DOI:10.32604/cmc.2023.032732

    Abstract When the Transformer was proposed by Google in 2017, it was first used for machine translation tasks and achieved the state of the art at that time. Although current neural machine translation models can generate high-quality translations, mistranslations and omissions still occur for the key information of long sentences. On the other hand, the most important part of a traditional translation task is the translation of key information: as long as the key information is translated accurately and completely, even if other parts of the result are translated incorrectly, the final translation…
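
    The abstract does not spell out the fusion mechanism, so the sketch below shows one plausible scheme under assumed details: a gated combination of per-token encoder states with an encoded key-information vector. It is a stand-in for illustration, not the paper's exact model.

```python
# Minimal sketch: gated fusion of per-token encoder states with an encoded
# key-information vector. The fusion scheme is an assumed illustration,
# not the paper's exact model.
import torch
import torch.nn as nn

class KeyInfoFusion(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, src_states, key_state):
        # src_states: (batch, src_len, d); key_state: (batch, d), e.g. an
        # encoding of extracted key words or entities.
        key = key_state.unsqueeze(1).expand_as(src_states)
        g = torch.sigmoid(self.gate(torch.cat([src_states, key], dim=-1)))
        return g * src_states + (1 - g) * key  # fused encoder states

fusion = KeyInfoFusion(d_model=16)
fused = fusion(torch.randn(2, 7, 16), torch.randn(2, 16))  # (2, 7, 16)
```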

  • Open Access

    ARTICLE

    DLBT: Deep Learning-Based Transformer to Generate Pseudo-Code from Source Code

    Walaa Gad1,*, Anas Alokla1, Waleed Nazih2, Mustafa Aref1, Abdel-badeeh Salem1

    CMC-Computers, Materials & Continua, Vol.70, No.2, pp. 3117-3132, 2022, DOI:10.32604/cmc.2022.019884

    Abstract Understanding the content of source code and its regular expressions is very difficult when they are written in an unfamiliar language. Pseudo-code explains and describes the content of code without using the syntax of a programming language. However, writing pseudo-code for each code instruction is laborious. Recently, neural machine translation has been used to generate textual descriptions for source code. In this paper, a novel deep learning-based transformer (DLBT) model is proposed for automatic pseudo-code generation from source code. The proposed model uses deep learning based on Neural Machine Translation (NMT) to work as a language…
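
    Framing code-to-pseudo-code generation as sequence-to-sequence translation can be sketched directly. The PyTorch example below treats code tokens as the source sequence and pseudo-code tokens as the target; the toy vocabularies and dimensions are assumptions, not DLBT's actual tokenization or architecture.

```python
# Minimal sketch: code-to-pseudo-code generation framed as NMT with a
# Transformer. The toy vocabularies are assumptions, not DLBT's tokenizer.
import torch
import torch.nn as nn

code_vocab = {"for": 0, "i": 1, "in": 2, "range": 3, "(": 4, "n": 5, ")": 6, ":": 7}
pseudo_vocab = {"<bos>": 0, "repeat": 1, "n": 2, "times": 3}

src = torch.tensor([[code_vocab[t] for t in
                     ["for", "i", "in", "range", "(", "n", ")", ":"]]])
tgt = torch.tensor([[pseudo_vocab["<bos>"]]])     # decoding starts from <bos>

d_model = 32
src_emb = nn.Embedding(len(code_vocab), d_model)
tgt_emb = nn.Embedding(len(pseudo_vocab), d_model)
model = nn.Transformer(d_model=d_model, nhead=4, num_encoder_layers=2,
                       num_decoder_layers=2, batch_first=True)
project = nn.Linear(d_model, len(pseudo_vocab))

out = model(src_emb(src), tgt_emb(tgt))           # one decoding step
logits = project(out)                             # next pseudo-code token scores
```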

  • Open Access

    ARTICLE

    A Novel Beam Search to Improve Neural Machine Translation for English-Chinese

    Xinyue Lin1, Jin Liu1,*, Jianming Zhang2, Se-Jung Lim3

    CMC-Computers, Materials & Continua, Vol.65, No.1, pp. 387-404, 2020, DOI:10.32604/cmc.2020.010984

    Abstract Neural Machine Translation (NMT) is an end-to-end learning approach to automated translation that overcomes the weaknesses of conventional phrase-based translation systems. Although NMT-based systems have gained popularity in commercial translation applications, there is still plenty of room for improvement. As the most popular search algorithm in NMT, beam search is vital to the translation result. However, traditional beam search can produce duplicate or missing translations due to its target-sequence selection strategy. To alleviate this problem, this paper proposes improvements to neural machine translation based on a novel beam search evaluation function. We also use reinforcement learning to train…
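
    Beam search with a pluggable evaluation function is easy to sketch in plain Python. The length-normalized log-probability below is a common stand-in; the paper's novel evaluation function (and its reinforcement-learning training) would replace it.

```python
# Minimal sketch: beam search with a pluggable evaluation function. The
# length-normalized log-probability below is a common stand-in for the
# paper's novel evaluation function.
import math

def beam_search(step_fn, bos, eos, beam_size=3, max_len=10, alpha=0.6):
    """step_fn(prefix) -> list of (next_token, probability) pairs."""
    beams = [([bos], 0.0)]                     # (sequence, total log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, logp in beams:
            if seq[-1] == eos:                 # finished hypotheses carry over
                candidates.append((seq, logp))
                continue
            for tok, p in step_fn(seq):
                candidates.append((seq + [tok], logp + math.log(p)))
        # Evaluation function: length normalization counters the bias of raw
        # log-probability toward short (missing) or repetitive translations.
        candidates.sort(key=lambda c: c[1] / (len(c[0]) ** alpha), reverse=True)
        beams = candidates[:beam_size]
    return beams[0][0]

# Toy model: prefers token 1 for three steps, then emits eos (token 0).
best = beam_search(lambda seq: [(1, 0.6), (2, 0.3), (3, 0.1)]
                   if len(seq) < 4 else [(0, 1.0)], bos=-1, eos=0)
print(best)   # [-1, 1, 1, 1, 0]
```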

  • Open Access

    ARTICLE

    Improve Neural Machine Translation by Building Word Vector with Part of Speech

    Jinyingming Zhang1, Jin Liu1,*, Xinyue Lin1

    Journal on Artificial Intelligence, Vol.2, No.2, pp. 79-88, 2020, DOI:10.32604/jai.2020.010476

    Abstract Systems based on Neural Machine Translation (NMT) are an important technology for translation applications, but there is still plenty of room to improve NMT. In the process of NMT, traditional word vectors cannot distinguish the same word under different parts of speech (POS). To alleviate this problem, this paper proposes a new word-vector training method based on POS features. It efficiently improves translation quality by adding POS features to the training process of the word vectors. We conducted extensive experiments to evaluate our method, and the experimental results show that the proposed method is…
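
    The idea can be illustrated by pairing each word embedding with a part-of-speech embedding, so homographs under different POS tags receive distinct vectors. The concatenation scheme below is an assumed illustration, not necessarily the paper's exact training method.

```python
# Minimal sketch: concatenating a part-of-speech embedding to each word
# embedding so homographs with different POS tags get distinct vectors.
# Dimensions and the tagset size are illustrative assumptions.
import torch
import torch.nn as nn

class PosAwareEmbedding(nn.Module):
    def __init__(self, vocab_size, num_pos_tags, word_dim=60, pos_dim=4):
        super().__init__()
        self.word = nn.Embedding(vocab_size, word_dim)
        self.pos = nn.Embedding(num_pos_tags, pos_dim)

    def forward(self, word_ids, pos_ids):
        # The same word id with different POS ids yields different vectors,
        # e.g. "record" as a noun vs. "record" as a verb.
        return torch.cat([self.word(word_ids), self.pos(pos_ids)], dim=-1)

embed = PosAwareEmbedding(vocab_size=100, num_pos_tags=17)
vectors = embed(torch.tensor([[5, 5]]), torch.tensor([[0, 1]]))  # (1, 2, 64)
```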

  • Open Access

    ARTICLE

    Corpus Augmentation for Improving Neural Machine Translation

    Zijian Li1, Chengying Chi1,*, Yunyun Zhan2,*

    CMC-Computers, Materials & Continua, Vol.64, No.1, pp. 637-650, 2020, DOI:10.32604/cmc.2020.010265

    Abstract The translation quality of neural machine translation (NMT) systems depends largely on the quality of the large-scale bilingual parallel corpora available. Research shows that NMT performance is greatly reduced under limited-resource conditions, and a large amount of high-quality bilingual parallel data is needed to train a competitive translation model. However, not all languages have large-scale, high-quality bilingual corpus resources available. In these cases, improving the quality of the corpora becomes the main focus for increasing the accuracy of NMT results. This paper proposes a new method to improve the quality of data by…
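
    As a generic example of the corpus-quality theme, the sketch below filters a bilingual parallel corpus by sentence-length ratio before training. This is a common cleaning heuristic used as a stand-in; the paper's specific augmentation method is not shown in the abstract.

```python
# Minimal sketch: filtering a bilingual parallel corpus by sentence-length
# ratio, a generic quality heuristic (not the paper's specific method).
def filter_parallel(pairs, max_ratio=2.0, min_len=1, max_len=100):
    kept = []
    for src, tgt in pairs:
        s, t = src.split(), tgt.split()
        if not (min_len <= len(s) <= max_len and min_len <= len(t) <= max_len):
            continue
        if max(len(s), len(t)) / min(len(s), len(t)) <= max_ratio:
            kept.append((src, tgt))           # drop badly misaligned pairs
    return kept

corpus = [("a small test sentence", "une petite phrase de test"),
          ("hello", "une traduction beaucoup trop longue pour sa source")]
print(filter_parallel(corpus))  # keeps the first pair, drops the second
```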

  • Open Access

    ARTICLE

    Dependency-Based Local Attention Approach to Neural Machine Translation

    Jing Qiu1, Yan Liu2, Yuhan Chai2, Yaqi Si2, Shen Su1,*, Le Wang1,*, Yue Wu3

    CMC-Computers, Materials & Continua, Vol.59, No.2, pp. 547-562, 2019, DOI:10.32604/cmc.2019.05892

    Abstract Recently, dependency information has been used in various ways to improve neural machine translation: for example, dependency labels can be added to the hidden states of source words, or the contiguous information of a source word can be extracted from the dependency tree, learned independently, and added to the Neural Machine Translation (NMT) model as a unit. However, these works are all limited to using dependency information to enrich the hidden states of source words. Since many works in Statistical Machine Translation (SMT) and NMT have proven the validity and potential of using…
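
    Dependency-based local attention can be sketched by masking attention scores so that each position attends only to itself and its dependency neighbours. The toy parse and masking scheme below are illustrative assumptions, not the paper's exact approach.

```python
# Minimal sketch: dependency-based local attention via masking, so each
# position attends only to itself and its dependency neighbours. The toy
# parse below is an illustrative assumption.
import torch

def dependency_mask(heads, src_len):
    # heads[i] = index of token i's dependency head (-1 for the root).
    mask = torch.full((src_len, src_len), float("-inf"))
    for i, h in enumerate(heads):
        mask[i, i] = 0.0                       # every token attends to itself
        if h >= 0:
            mask[i, h] = mask[h, i] = 0.0      # head and dependent see each other
    return mask

heads = [1, -1, 1, 2]                          # toy parse over 4 source tokens
scores = torch.randn(4, 4)                     # raw attention scores
local_attn = torch.softmax(scores + dependency_mask(heads, 4), dim=-1)
```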
