Open Access

ARTICLE

A Learning-Driven Visual Servoing Framework for Latency Compensation in Image-Guided Teleoperation

Junmin Lyu1, Feng Bao2,*, Guangyu Xu3, Siyu Lu4,*, Bo Yang5, Yuxin Liu5, Wenfeng Zheng5

1 School of Artificial Intelligence, Guangzhou Huashang University, Guangzhou, 511300, China
2 School of Biological and Environmental Engineering, Xi’an University, Xi’an, 710065, China
3 School of the Environment, The University of Queensland, St Lucia, Brisbane, 4072, Australia
4 Department of Geography, Texas A&M University, College Station, TX 77843, USA
5 School of Automation, University of Electronic Science and Technology of China, Chengdu, 611731, China

* Corresponding Authors: Feng Bao. Email: email; Siyu Lu. Email: email

Computer Modeling in Engineering & Sciences 2026, 146(2), 28. https://doi.org/10.32604/cmes.2025.075178

Abstract

Robust teleoperation in image-guided interventions faces critical challenges from latency, deformation, and the quasi-periodic nature of physiological motion. This paper presents a fully integrated, latency-aware visual servoing system that leverages stereo vision, hand–eye calibration, and learning-based prediction for motion-compensated teleoperation. The system combines a calibrated binocular camera setup, dual robotic arms, and a predictive control loop incorporating Long Short-Term Memory (LSTM) and Temporal Convolutional Network (TCN) models. Through experiments on both in vivo and phantom datasets, we quantitatively assess the prediction accuracy and motion-compensation performance of both models. Results show that TCNs deliver more stable and precise tracking, especially on regular trajectories, while LSTMs are more robust under quasi-periodic dynamics. By matching prediction horizons to system latency, the approach significantly reduces peak and steady-state tracking errors, demonstrating the practical feasibility of deploying prediction-augmented servoing in teleoperated surgical workflows.
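The horizon-matching idea described above — predicting the target's motion exactly as far ahead as the round-trip latency — can be sketched in a few lines. The snippet below is an illustrative stand-in, not the paper's method: it replaces the LSTM/TCN predictors with a simple second-order autoregressive model fit by least squares, and the latency value, sampling rate, and synthetic respiratory-like trace are all assumptions chosen for the example.

```python
import math

def horizon_steps(latency_s, sample_rate_hz):
    """Number of future samples the predictor must cover to cancel latency."""
    return max(1, round(latency_s * sample_rate_hz))

def fit_ar2(signal):
    """Fit x[t] ~ a*x[t-1] + b*x[t-2] by solving the 2x2 normal equations.

    A lightweight stand-in for the learned LSTM/TCN predictors; an AR(2)
    model is exact for a pure sinusoid, so it suffices for this sketch.
    """
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(2, len(signal)):
        x1, x2, y = signal[t - 1], signal[t - 2], signal[t]
        s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
        r1 += x1 * y; r2 += x2 * y
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (r2 * s11 - r1 * s12) / det
    return a, b

def predict_ahead(signal, steps):
    """Roll the fitted model forward `steps` samples past the end of `signal`."""
    a, b = fit_ar2(signal)
    window = [signal[-2], signal[-1]]
    for _ in range(steps):
        window.append(a * window[-1] + b * window[-2])
    return window[2:]

# Synthetic quasi-periodic trace: 0.25 Hz "breathing", sampled at 20 Hz.
trace = [math.sin(2 * math.pi * 0.25 * 0.05 * k) for k in range(200)]

# Assumed 150 ms system latency at 20 Hz -> predict 3 samples ahead.
steps = horizon_steps(latency_s=0.15, sample_rate_hz=20)
future = predict_ahead(trace, steps)
```

The compensated controller would then servo toward `future[-1]` rather than the last observed position, so the command arrives roughly when the target reaches the predicted point.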

Keywords

Teleoperation system; motion prediction; surgical robot; visual servoing; learning-based control

Cite This Article

APA Style
Lyu, J., Bao, F., Xu, G., Lu, S., Yang, B. et al. (2026). A Learning-Driven Visual Servoing Framework for Latency Compensation in Image-Guided Teleoperation. Computer Modeling in Engineering & Sciences, 146(2), 28. https://doi.org/10.32604/cmes.2025.075178
Vancouver Style
Lyu J, Bao F, Xu G, Lu S, Yang B, Liu Y, et al. A Learning-Driven Visual Servoing Framework for Latency Compensation in Image-Guided Teleoperation. Comput Model Eng Sci. 2026;146(2):28. https://doi.org/10.32604/cmes.2025.075178
IEEE Style
J. Lyu et al., “A Learning-Driven Visual Servoing Framework for Latency Compensation in Image-Guided Teleoperation,” Comput. Model. Eng. Sci., vol. 146, no. 2, p. 28, 2026. https://doi.org/10.32604/cmes.2025.075178



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.