Open Access



Convolutional Neural Networks Based Video Reconstruction and Computation in Digital Twins

M. Kavitha1, B. Sankara Babu2, B. Sumathy3, T. Jackulin4, N. Ramkumar5, A. Manimaran6, Ranjan Walia7, S. Neelakandan8,*

1 Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, 600 062, India
2 Department of Computer Science and Engineering, Gokaraju Rangaraju Institute of Engineering and Technology, Hyderabad, 500 090, India
3 Department of Instrumentation and Control Engineering, Sri Sairam Engineering College, Chennai, 602 109, India
4 Department of Computer Science and Engineering, Panimalar Engineering College, Chennai, 600 123, India
5 Department of Statistics, Vishwakarma University, Pune, 411 048, India
6 Department of Computer Applications, Madanapalle Institute of Technology & Science, Madanapalle, 517 325, India
7 Department of Electrical Engineering, Model Institute of Engineering and Technology, Jammu, 181 122, India
8 Department of Computer Science and Engineering, R.M.K Engineering College, Kavaraipettai, 601 206, India

* Corresponding Author: S. Neelakandan. Email: email

Intelligent Automation & Soft Computing 2022, 34(3), 1571-1586.


With the advancement of communication and computing technologies, multimedia applications involving video and images have become an integral part of the information society and are inextricably linked to people's daily productivity and lives. At the same time, interest in super-resolution (SR) video reconstruction techniques continues to grow. The design of digital twins for video computation and video reconstruction currently faces a number of difficult issues. Although several SR reconstruction techniques are available in the literature, most do not consider the spatio-temporal relationship between video frames. Motivated by this gap, this paper presents VDCNN-SS, a novel very deep convolutional neural network (VDCNN) with a spatiotemporal similarity (SS) model for video reconstruction in digital twins. The proposed VDCNN-SS technique maps the relationship between corresponding low-resolution (LR) and high-resolution (HR) image blocks, and exploits the spatiotemporal non-local complementary and redundant information among neighboring low-resolution video frames. Furthermore, the VDCNN is used to learn the LR-to-HR correlation mapping. A series of simulations was run to evaluate the performance of the VDCNN-SS model, and the experimental results demonstrate the superiority of the VDCNN-SS technique over recent techniques.
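The core idea of learning an LR-to-HR correlation mapping with a very deep CNN can be illustrated with a minimal residual-learning sketch in the style of VDSR-type networks: a stack of convolutional layers predicts the high-frequency residual, which is added back to the upscaled low-resolution input via a global skip connection. This is not the authors' implementation; the layer depth, filter counts, and function names below are illustrative assumptions, and the weights are random rather than trained.

```python
import numpy as np

def conv2d_same(x, w, b):
    """Naive 'same'-padded 2-D convolution.
    x: (H, W, Cin), w: (k, k, Cin, Cout), b: (Cout,)"""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)))
    H, W, _ = x.shape
    out = np.zeros((H, W, w.shape[3]))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]
            out[i, j, :] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2])) + b
    return out

def relu(x):
    return np.maximum(x, 0.0)

def vdcnn_sr_sketch(lr_upscaled, depth=5, feat=8, k=3, seed=0):
    """Residual SR sketch: a deep conv stack predicts the HR residual,
    which is added to the (already upscaled) LR frame.
    Weights are random placeholders standing in for trained parameters."""
    rng = np.random.default_rng(seed)
    cin = lr_upscaled.shape[2]
    # input layer: lift the image into a feature space
    h = relu(conv2d_same(lr_upscaled,
                         rng.standard_normal((k, k, cin, feat)) * 0.01,
                         np.zeros(feat)))
    # hidden layers: repeated conv + ReLU (the "very deep" part)
    for _ in range(depth - 2):
        h = relu(conv2d_same(h,
                             rng.standard_normal((k, k, feat, feat)) * 0.01,
                             np.zeros(feat)))
    # output layer: project back to image channels (the predicted residual)
    residual = conv2d_same(h,
                           rng.standard_normal((k, k, feat, cin)) * 0.01,
                           np.zeros(cin))
    # global skip connection: HR estimate = upscaled LR + learned residual
    return lr_upscaled + residual
```

Because only the residual is learned, the network output defaults to the upscaled input when the predicted residual is small, which is what makes training very deep stacks of this kind tractable in practice.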


Cite This Article

M. Kavitha, B. Sankara Babu, B. Sumathy, T. Jackulin, N. Ramkumar et al., "Convolutional neural networks based video reconstruction and computation in digital twins," Intelligent Automation & Soft Computing, vol. 34, no. 3, pp. 1571–1586, 2022.

This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.