Open Access

ARTICLE


Motion In-Betweening via Frequency-Domain Diffusion Model

Qiang Zhang1, Shuo Feng1, Shanxiong Chen2, Teng Wan1, Ying Qi1,*

1 Department of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, China
2 Department of Computer and Information Science, Southwest University, Chongqing, 400715, China

* Corresponding Author: Ying Qi.

Computers, Materials & Continua 2026, 86(1), 1-22. https://doi.org/10.32604/cmc.2025.068247

Abstract

Human motion modeling is a core technology in computer animation, game development, and human-computer interaction. In particular, generating natural and coherent in-between motion using only the initial and terminal frames remains a fundamental yet unresolved challenge. Existing methods typically rely on dense keyframe inputs or complex prior structures, making it difficult to balance motion quality and plausibility under conditions such as sparse constraints, long-term dependencies, and diverse motion styles. To address this, we propose a motion generation framework based on a frequency-domain diffusion model, which aims to better model complex motion distributions and enhance generation stability under sparse conditions. Our method maps motion sequences to the frequency domain via the Discrete Cosine Transform (DCT), enabling more effective modeling of low-frequency motion structures while suppressing high-frequency noise. A denoising network based on self-attention is introduced to capture long-range temporal dependencies and improve global structural awareness. Additionally, a multi-objective loss function is employed to jointly optimize motion smoothness, pose diversity, and anatomical consistency, enhancing the realism and physical plausibility of the generated sequences. Comparative experiments on the Human3.6M and LaFAN1 datasets demonstrate that our method outperforms state-of-the-art approaches across multiple performance metrics, showing stronger capabilities in generating intermediate motion frames. This research offers a new perspective and methodology for human motion generation and holds promise for applications in character animation, game development, and virtual interaction.
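The abstract describes mapping motion sequences to the frequency domain with the Discrete Cosine Transform so that low-frequency motion structure is preserved while high-frequency noise is suppressed. The following is a minimal illustrative sketch of that idea (not the authors' implementation): a DCT along the time axis of a pose-feature array, with high-frequency coefficients zeroed before inverting. The array shape `(T, D)` and the `keep` cutoff are assumptions for illustration.

```python
import numpy as np
from scipy.fft import dct, idct

def to_frequency(motion, keep):
    """DCT along the time axis; zero out coefficients above `keep`."""
    coeffs = dct(motion, axis=0, norm='ortho')
    coeffs[keep:] = 0.0
    return coeffs

def to_time(coeffs):
    """Inverse DCT back to a time-domain motion sequence."""
    return idct(coeffs, axis=0, norm='ortho')

# Toy example: T frames, D pose features per frame.
T, D = 30, 6
rng = np.random.default_rng(0)
motion = np.cumsum(rng.normal(size=(T, D)) * 0.1, axis=0)  # smooth-ish trajectory

# Keeping only the first 8 DCT coefficients acts as a low-pass filter,
# retaining the coarse motion trajectory and discarding frame-level jitter.
recon = to_time(to_frequency(motion, keep=8))
print(recon.shape)  # (30, 6)
```

In the paper's framework the diffusion model denoises in this coefficient space; the sketch above only shows why the representation is attractive: truncating the DCT basis concentrates the signal in a few low-frequency terms, so sparse keyframe constraints act on a compact, smooth parameterization.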

Keywords

Motion generation; diffusion model; frequency domain; human motion synthesis; self-attention network; 3D motion interpolation

Cite This Article

APA Style
Zhang, Q., Feng, S., Chen, S., Wan, T., & Qi, Y. (2026). Motion In-Betweening via Frequency-Domain Diffusion Model. Computers, Materials & Continua, 86(1), 1–22. https://doi.org/10.32604/cmc.2025.068247
Vancouver Style
Zhang Q, Feng S, Chen S, Wan T, Qi Y. Motion In-Betweening via Frequency-Domain Diffusion Model. Comput Mater Contin. 2026;86(1):1–22. https://doi.org/10.32604/cmc.2025.068247
IEEE Style
Q. Zhang, S. Feng, S. Chen, T. Wan, and Y. Qi, “Motion In-Betweening via Frequency-Domain Diffusion Model,” Comput. Mater. Contin., vol. 86, no. 1, pp. 1–22, 2026. https://doi.org/10.32604/cmc.2025.068247



Copyright © 2026 The Author(s). Published by Tech Science Press.
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.