Search Results (3)
  • Open Access

    ARTICLE

    A Multi-Feature Learning Model with Enhanced Local Attention for Vehicle Re-Identification

    Wei Sun1,2,*, Xuan Chen3, Xiaorui Zhang1,3, Guangzhao Dai2, Pengshuai Chang2, Xiaozheng He4

    CMC-Computers, Materials & Continua, Vol.69, No.3, pp. 3549-3561, 2021, DOI:10.32604/cmc.2021.021627

    Abstract Vehicle re-identification (ReID) aims to retrieve the target vehicle in an extensive image gallery through its appearances from various views in the cross-camera scenario. It has gradually become a core technology of intelligent transportation systems. Most existing vehicle re-identification models adopt the joint learning of global and local features. However, they directly use the extracted global features, resulting in insufficient feature expression. Moreover, local features are primarily obtained through advanced annotation and complex attention mechanisms, which require additional costs. To solve this issue, a multi-feature learning model with enhanced local attention for vehicle re-identification (MFELA) is proposed in this paper.…
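
    The abstract above describes joint learning of global and local features, with an enhanced local attention module standing in for costly annotation. Purely as an illustration of that general idea, and not of the authors' MFELA architecture, the following is a minimal NumPy sketch: a global descriptor is average-pooled from a convolutional feature map, a local descriptor is attention-pooled over spatial positions, and the two are concatenated. The feature map, projection weights, and fusion by concatenation are assumptions for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_global_local(feature_map, w_attn):
    """Fuse a global and a locally attended descriptor from one CNN feature map.

    feature_map: (H, W, C) activations from a backbone (hypothetical input).
    w_attn:      (C,) projection that scores each spatial position.
    """
    h, w, c = feature_map.shape
    flat = feature_map.reshape(h * w, c)          # (HW, C) spatial positions

    # Global branch: plain average pooling over all positions.
    global_feat = flat.mean(axis=0)               # (C,)

    # Local branch: score positions, normalize, and pool with the weights,
    # so salient local regions dominate the descriptor.
    scores = flat @ w_attn                        # (HW,)
    attn = softmax(scores)                        # (HW,), sums to 1
    local_feat = attn @ flat                      # (C,)

    # Joint embedding used for retrieval by cosine/Euclidean distance.
    return np.concatenate([global_feat, local_feat])   # (2C,)

# Toy usage with random numbers standing in for real backbone activations.
rng = np.random.default_rng(0)
emb = fuse_global_local(rng.standard_normal((16, 16, 64)),
                        rng.standard_normal(64))
print(emb.shape)  # (128,)
```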

  • Open Access

    ARTICLE

    Dependency-Based Local Attention Approach to Neural Machine Translation

    Jing Qiu1, Yan Liu2, Yuhan Chai2, Yaqi Si2, Shen Su1,*, Le Wang1,*, Yue Wu3

    CMC-Computers, Materials & Continua, Vol.59, No.2, pp. 547-562, 2019, DOI:10.32604/cmc.2019.05892

    Abstract Recently, dependency information has been used in different ways to improve neural machine translation. For example, dependency labels can be added to the hidden states of source words, or the contiguous information of a source word can be found according to the dependency tree, learned independently, and then added into the Neural Machine Translation (NMT) model as a unit in various ways. However, these works are all limited to the use of dependency information to enrich the hidden states of source words. Since many works in Statistical Machine Translation (SMT) and NMT have proven the validity and potential of using…
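
    The abstract above concerns guiding attention with source-side dependency structure. As a sketch of the general idea only, under assumed inputs and not the authors' model, the snippet below builds a dependency neighborhood from head indices and applies it as a local attention mask over source hidden states: the decoder attends only to a pivot source position, its head, and its children.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def dependency_local_attention(query, src_states, heads, pivot):
    """Attend only over the dependency neighborhood of a pivot source word.

    query:      (d,) decoder state (hypothetical).
    src_states: (n, d) encoder hidden states for n source words.
    heads:      length-n list; heads[i] is the head index of word i (-1 = root).
    pivot:      index of the source word the decoder currently focuses on.
    """
    n, _ = src_states.shape
    # Neighborhood = the pivot itself, its head, and its children in the tree.
    allowed = {pivot}
    if heads[pivot] >= 0:
        allowed.add(heads[pivot])
    allowed.update(i for i in range(n) if heads[i] == pivot)

    scores = src_states @ query                      # (n,) dot-product scores
    mask = np.array([0.0 if i in allowed else -1e9 for i in range(n)])
    weights = softmax(scores + mask)                 # ~zero weight outside the neighborhood
    return weights @ src_states                      # (d,) context vector

# Toy usage: 5 source words; word 2 is headed by word 1, words 3 and 4 by word 2.
rng = np.random.default_rng(1)
states = rng.standard_normal((5, 8))
ctx = dependency_local_attention(rng.standard_normal(8), states,
                                 heads=[-1, 0, 1, 2, 2], pivot=2)
print(ctx.shape)  # (8,)
```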

  • Open Access

    ARTICLE

    An Improved End-to-End Memory Network for QA Tasks

    Aziguli Wulamu1,2, Zhenqi Sun1,2, Yonghong Xie1,2,*, Cong Xu1,2, Alan Yang3

    CMC-Computers, Materials & Continua, Vol.60, No.3, pp. 1283-1295, 2019, DOI:10.32604/cmc.2019.07722

    Abstract At present, End-to-End trainable Memory Networks (MemN2N) have proven to be promising in many deep learning fields, especially on simple natural language-based reasoning question and answer (QA) tasks. However, when solving some subtasks such as basic induction, path finding or time reasoning tasks, it remains challenging because of the limited ability to learn useful information between memory and query. In this paper, we propose a novel gated linear units (GLU) and local-attention based end-to-end memory network (MemN2N-GL), motivated by the success of attention mechanism theory in the field of neural machine translation. It shows an improved possibility to develop the ability…
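
    The abstract above combines gated linear units with attention between memory and query in a MemN2N-style hop. The sketch below shows one such hop in NumPy under simple assumptions: dot-product attention over memory slots produces a response vector, and a GLU (a linear transform multiplied elementwise by a sigmoid gate) decides how much of that response flows into the next state. The weight matrices and single-hop setup are illustrative, not the MemN2N-GL architecture from the paper.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def memory_hop_glu(query, memory, w, v):
    """One MemN2N-style hop with a GLU gate on the retrieved response.

    query:  (d,) encoded question.
    memory: (m, d) encoded memory slots (e.g., story sentences).
    w, v:   (d, d) linear maps for the GLU value and gate branches (hypothetical).
    """
    # Attention between the query and every memory slot.
    p = softmax(memory @ query)          # (m,) match probabilities
    o = p @ memory                       # (d,) weighted memory response

    # Gated linear unit: GLU(o) = (o W) * sigmoid(o V); the gate controls
    # how much of the retrieved information passes on.
    gated = (o @ w) * sigmoid(o @ v)     # (d,)

    # Next query/state, fed to another hop or to the answer layer.
    return query + gated

# Toy usage with random stand-ins for learned embeddings and weights.
rng = np.random.default_rng(2)
state = memory_hop_glu(rng.standard_normal(16),
                       rng.standard_normal((10, 16)),
                       rng.standard_normal((16, 16)),
                       rng.standard_normal((16, 16)))
print(state.shape)  # (16,)
```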
