Search Results (1)
  • Open Access

    ARTICLE

    TEAM: Transformer Encoder Attention Module for Video Classification

Hae Sung Park¹, Yong Suk Choi²,*

    Computer Systems Science and Engineering, Vol.48, No.2, pp. 451-477, 2024, DOI:10.32604/csse.2023.043245

Abstract Much like humans focus solely on object movement to understand actions, directing a deep learning model’s attention to the core contexts within videos is crucial for improving video comprehension. A recent study, Video Masked Auto-Encoder (VideoMAE), employs a pre-training approach with a high ratio of tube masking and reconstruction, effectively mitigating the spatial bias caused by temporal redundancy in full video frames. This steers the model’s focus toward detailed temporal contexts. However, since VideoMAE still relies on full video frames during the action recognition stage, it may exhibit a progressive shift in attention toward spatial contexts, deteriorating its ability…
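The tube masking the abstract mentions masks the same spatial patch positions in every frame, so the model cannot reconstruct a masked patch by copying it from a neighboring frame. Below is a minimal sketch of that idea; the function name, parameters, and the 90% ratio are illustrative assumptions, not VideoMAE’s actual API.

```python
import numpy as np

def tube_mask(num_frames, num_patches, mask_ratio=0.9, seed=0):
    """Sketch of tube masking: sample one spatial mask, then repeat it
    across all frames so masked positions form temporal "tubes".
    (Illustrative only; not the VideoMAE implementation.)"""
    rng = np.random.default_rng(seed)
    num_masked = int(num_patches * mask_ratio)
    # Choose which spatial patches to mask, once for the whole clip
    masked_idx = rng.choice(num_patches, size=num_masked, replace=False)
    frame_mask = np.zeros(num_patches, dtype=bool)
    frame_mask[masked_idx] = True
    # Tile the same spatial mask along the temporal axis -> tubes
    return np.tile(frame_mask, (num_frames, 1))

mask = tube_mask(num_frames=16, num_patches=196, mask_ratio=0.9)
print(mask.shape)   # (16, 196)
print(mask.mean())  # ~0.9 of patches are masked
```

Because every frame shares one spatial mask, temporal redundancy cannot leak the masked content, which is what pushes the encoder toward temporal context during pre-training.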
