Open Access



Deepfake Video Detection Based on Improved CapsNet and Temporal–Spatial Features

Tianliang Lu*, Yuxuan Bao, Lanting Li

People’s Public Security University of China, Beijing, 100038, China

* Corresponding Author: Tianliang Lu.

Computers, Materials & Continua 2023, 75(1), 715-740.


The rapid development of deepfake technology has led to the spread of forged audio and video across network platforms, presenting risks for numerous countries, societies, and individuals, and posing a serious threat to cyberspace security. To address the insufficient extraction of spatial features and the neglect of temporal features in deepfake video detection, we propose a detection method based on an improved CapsNet and temporal–spatial features (iCapsNet–TSF). First, the dynamic routing algorithm of CapsNet is improved through weight initialization and updating. Then, the optical flow algorithm is used to extract inter-frame temporal features of the videos to form a dataset of temporal–spatial features. Finally, the iCapsNet model is employed to fully learn the temporal–spatial features of facial videos, and the results are fused. Experimental results show that the detection accuracy of iCapsNet–TSF reaches 94.07%, 98.83%, and 98.50% on the Celeb-DF, FaceSwap, and Deepfakes datasets, respectively, outperforming most existing mainstream algorithms. By combining the capsule network with the optical flow algorithm, iCapsNet–TSF provides a novel strategy for deepfake detection, which is of great significance to the prevention of deepfake attacks and the preservation of cyberspace security.
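The abstract's iCapsNet builds on CapsNet's routing-by-agreement. The paper's specific weight initialization and update scheme is not given here, so the following is only a minimal NumPy sketch of the *standard* dynamic routing procedure (Sabour et al.'s routing-by-agreement with the squash nonlinearity) that such an improvement would modify; all function names and shapes are illustrative assumptions.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squash nonlinearity: keeps vector direction, maps length into [0, 1)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    """Standard routing-by-agreement (not the paper's improved variant).

    u_hat: prediction vectors from lower capsules, shape (num_in, num_out, dim_out).
    Returns the output capsule vectors, shape (num_out, dim_out).
    """
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))  # routing logits; the paper improves this
                                     # zero initialization and the update rule
    for _ in range(num_iters):
        # coupling coefficients: softmax over output capsules
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        # weighted sum of predictions per output capsule, then squash
        s = (c[..., None] * u_hat).sum(axis=0)
        v = squash(s)
        # agreement update: dot product between predictions and outputs
        b = b + np.einsum('iod,od->io', u_hat, v)
    return v

# Toy example: route 6 input capsules to 2 output capsules of dimension 4.
rng = np.random.default_rng(0)
u_hat = rng.normal(size=(6, 2, 4))
v = dynamic_routing(u_hat)
print(v.shape)  # (2, 4); each row has norm < 1 due to squash
```

The squash function guarantees every output capsule's length lies below 1, so the vector norm can be read directly as an existence probability, which is the property the detector's real/fake decision relies on.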


Cite This Article

T. Lu, Y. Bao and L. Li, "Deepfake video detection based on improved capsnet and temporal–spatial features," Computers, Materials & Continua, vol. 75, no. 1, pp. 715–740, 2023.

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.