Vol.63, No.3, 2020, pp.1545-1561, doi:10.32604/cmc.2020.09867
OPEN ACCESS
ARTICLE
Hidden Two-Stream Collaborative Learning Network for Action Recognition
  • Shuren Zhou1, *, Le Chen1, Vijayan Sugumaran2
1 School of Computer and Communication Engineering, Changsha University of Science and Technology, Changsha, 410114, China.
2 Department of Decision and Information Sciences, School of Business Administration, Oakland University, Rochester, 48309, USA.
* Corresponding Author: Shuren Zhou.
Received 22 January 2020; Accepted 28 February 2020; Issue published 30 April 2020
Abstract
The two-stream convolutional neural network exhibits excellent performance in video action recognition. The crux of the method is to train one model on frames clipped from the videos and another on optical flow images pre-extracted from those frames, and finally to fuse the outputs of the two models. Nevertheless, the reliance on pre-extracted optical flow impedes the efficiency of action recognition, and the temporal and spatial streams are simply fused at the end, so one stream may fail while the other succeeds without either informing the other. We propose a novel hidden two-stream collaborative (HTSC) learning network that hides the optical flow extraction step inside the network and greatly speeds up action recognition. Building on the two-stream method, the collaborative learning model captures the interaction between temporal and spatial features to substantially enhance recognition accuracy. Our proposed method achieves a balance of efficiency and precision on large-scale video action recognition datasets.
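To make the baseline the abstract contrasts with concrete, the following is a minimal NumPy sketch of conventional two-stream late fusion: each stream independently produces class scores, and the probabilities are averaged only at the very end. The class scores, fusion weight, and function names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a vector of class scores.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def late_fuse(spatial_logits, temporal_logits, w_spatial=0.5):
    # Weighted average of per-stream class probabilities -- the simple
    # end-of-pipeline fusion that lets one stream fail while the other
    # succeeds, since the streams never interact during training.
    p_s = softmax(np.asarray(spatial_logits, dtype=float))
    p_t = softmax(np.asarray(temporal_logits, dtype=float))
    return w_spatial * p_s + (1.0 - w_spatial) * p_t

# Toy scores for 3 hypothetical action classes.
spatial = [2.0, 0.5, 0.1]    # appearance stream favors class 0
temporal = [0.3, 1.8, 0.2]   # motion stream favors class 1
fused = late_fuse(spatial, temporal)
predicted_class = int(np.argmax(fused))
```

Because the fusion happens only on the final probabilities, no gradient or feature information ever flows between the streams; the collaborative learning proposed in the paper is motivated by exactly this limitation.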
Keywords
Action recognition, collaborative learning, optical flow.
Cite This Article
S. Zhou, L. Chen and V. Sugumaran, "Hidden two-stream collaborative learning network for action recognition," Computers, Materials & Continua, vol. 63, no. 3, pp. 1545–1561, 2020.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.