Search Results (3)
  • Open Access


    An Efficient Long Short-Term Memory Model for Digital Cross-Language Summarization

    Y. C. A. Padmanabha Reddy1, Shyam Sunder Reddy Kasireddy2, Nageswara Rao Sirisala3, Ramu Kuchipudi4, Purnachand Kollapudi5,*

    CMC-Computers, Materials & Continua, Vol.74, No.3, pp. 6389-6409, 2023, DOI:10.32604/cmc.2023.034072

    Abstract The rise of social networking has enabled multilingual, Internet-accessible digital documents in several languages. Such documents must be evaluated through Cross-Language Text Summarization (CLTS), which generates target documents from disparate language sources. The digital documents need to be processed with contextual semantic data and a decoding scheme. This paper presents multilingual cross-language processing of documents with abstractive summarization. The proposed model is represented as the Hidden Markov… More >
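The paper above builds on an LSTM-based encoder for summarization. As a minimal sketch of the core building block only (the cell equations are the standard LSTM, not the paper's specific architecture; all names and dimensions are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step: x is the input vector, (h_prev, c_prev) the previous
    hidden and cell states. W, U, b hold the stacked gate parameters
    (input, forget, candidate, output), each block of hidden size H."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b           # stacked pre-activations, shape (4H,)
    i = sigmoid(z[0:H])                  # input gate
    f = sigmoid(z[H:2*H])                # forget gate
    g = np.tanh(z[2*H:3*H])              # candidate cell state
    o = sigmoid(z[3*H:4*H])              # output gate
    c = f * c_prev + i * g               # new cell state
    h = o * np.tanh(c)                   # new hidden state
    return h, c

# Toy usage: encode a 3-token "source document" into a final hidden state.
rng = np.random.default_rng(0)
D, H = 4, 5                              # input and hidden sizes (illustrative)
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.standard_normal((3, D)):    # three token embeddings
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (5,)
```

In a full summarizer this final hidden state would condition a decoder that emits the target-language summary token by token.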

  • Open Access


    Cross-Language Transfer Learning-based Lhasa-Tibetan Speech Recognition

    Zhijie Wang1, Yue Zhao1,*, Licheng Wu1, Xiaojun Bi1, Zhuoma Dawa2, Qiang Ji3

    CMC-Computers, Materials & Continua, Vol.73, No.1, pp. 629-639, 2022, DOI:10.32604/cmc.2022.027092

    Abstract As one of the Chinese minority languages, Tibetan was, until recently, not researched as extensively in speech recognition as Chinese and English. This, along with the relatively small Tibetan corpus, has resulted in unsatisfactory performance of Tibetan speech recognition based on an end-to-end model. This paper aims to achieve accurate Tibetan speech recognition using a small amount of Tibetan training data. We demonstrate effective methods of Tibetan end-to-end speech recognition via cross-language transfer learning from three aspects: modeling unit selection, transfer learning method, and source language selection. Experimental results show that the Chinese-Tibetan multi-language learning method using… More >
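The transfer-learning step described above amounts to initializing a target-language model with source-language weights and fine-tuning only part of it. A minimal sketch under assumed layer names (`enc1`, `enc2`, `out_zh`, `out_bo` are hypothetical, not from the paper):

```python
def transfer(source_params, target_params, shared_layers, frozen_layers):
    """Copy shared-layer weights from a source-language model into a
    target-language model; return the names of parameters that remain
    trainable during fine-tuning."""
    for name in shared_layers:
        target_params[name] = list(source_params[name])  # copy weights over
    return {name for name in target_params if name not in frozen_layers}

# Toy models: a Chinese-trained source and a Tibetan target sharing encoders.
source = {"enc1": [0.1, 0.2], "enc2": [0.3], "out_zh": [0.9]}
target = {"enc1": [0.0, 0.0], "enc2": [0.0], "out_bo": [0.0]}

trainable = transfer(source, target,
                     shared_layers=["enc1", "enc2"],
                     frozen_layers=["enc1"])
print(sorted(trainable))  # ['enc2', 'out_bo']
```

Freezing the lower encoder keeps source-language acoustic features intact while the small Tibetan corpus fine-tunes only the upper layers and the output head.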

  • Open Access


    Improve Representation for Cross-Language Clone Detection by Pretrain Using Tree Autoencoder

    Huading Ling1, Aiping Zhang1, Changchun Yin1, Dafang Li2,*, Mengyu Chang3

    Intelligent Automation & Soft Computing, Vol.33, No.3, pp. 1561-1577, 2022, DOI:10.32604/iasc.2022.027349

    Abstract With the rise of deep learning in recent years, many code clone detection (CCD) methods use deep learning techniques and achieve promising results, as does cross-language CCD. However, deep learning techniques require a dataset to train the models. Because datasets for cross-language CCD are difficult to collect, they are typically small and exhibit a gap from real-world clones. This creates a data bottleneck: issues of data scale and quality mean that even a better-designed model cannot reach its full potential. To mitigate this, we propose a tree autoencoder (TAE) architecture. It uses unsupervised learning… More >
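The tree-autoencoder idea above encodes a syntax tree bottom-up into a code vector that can then be reconstructed by a decoder. A minimal sketch of the encoder half only (the toy embedding and mean-then-tanh combiner are illustrative assumptions, not the paper's architecture):

```python
import math

def encode(node, embed, combine):
    """Recursively encode a (label, children) tree into a fixed-size vector:
    each node's vector combines its own embedding with its children's."""
    label, children = node
    vecs = [embed(label)] + [encode(c, embed, combine) for c in children]
    return combine(vecs)

def embed(label):
    # Toy 2-dimensional token embedding from the first two characters.
    return [float(ord(ch)) / 100 for ch in label[:2].ljust(2)]

def combine(vecs):
    # Per-dimension mean followed by tanh, a stand-in for a learned layer.
    n = len(vecs)
    return [math.tanh(sum(v[d] for v in vecs) / n) for d in range(2)]

# Toy AST for the expression add(x, y).
tree = ("add", [("x", []), ("y", [])])
code_vec = encode(tree, embed, combine)
print(len(code_vec))  # 2
```

Pretraining the autoencoder on unlabeled parse trees from both languages lets the shared code vectors be learned without scarce cross-language clone labels.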
