TY - EJOUR
AU - Lyu, Junmin
AU - Bao, Feng
AU - Xu, Guangyu
AU - Lu, Siyu
AU - Yang, Bo
AU - Liu, Yuxin
AU - Zheng, Wenfeng
TI - A Learning-Driven Visual Servoing Framework for Latency Compensation in Image-Guided Teleoperation
T2 - Computer Modeling in Engineering & Sciences
PY - 2026
VL - 146
IS - 2
SN - 1526-1506
AB - Robust teleoperation in image-guided interventions faces critical challenges from latency, deformation, and the quasi-periodic nature of physiological motion. This paper presents a fully integrated, latency-aware visual servoing system leveraging stereo vision, hand–eye calibration, and learning-based prediction for motion-compensated teleoperation. The system combines a calibrated binocular camera setup, dual robotic arms, and a predictive control loop incorporating Long Short-Term Memory (LSTM) and Temporal Convolutional Network (TCN) models. Through experiments using both in vivo and phantom datasets, we quantitatively assess the prediction accuracy and motion-compensation performance of both models. Results show that TCNs deliver more stable and precise tracking, especially on regular trajectories, while LSTMs exhibit robustness under quasi-periodic dynamics. By matching prediction horizons to system latency, the approach significantly reduces peak and steady-state tracking errors, demonstrating practical feasibility for deploying prediction-augmented servoing in teleoperated surgical systems.
KW - Teleoperation system
KW - Motion prediction
KW - Surgical robot
KW - Visual servoing
KW - Learning-based control
DO - 10.32604/cmes.2025.075178
ER - 