TriC-Motion: Tri-Domain Causal Modeling Grounded Text-to-Motion Generation

ICLR 2026
¹Huazhong University of Science and Technology, Wuhan, China · ²Whiting School of Engineering, Johns Hopkins University, Baltimore, Maryland, USA · ³Ant Group, Beijing, China

Abstract

Text-to-motion generation, a rapidly evolving area of computer vision, aims to produce realistic, text-aligned motion sequences. Existing methods focus primarily on spatial-temporal modeling or on frequency-domain analysis in isolation, and lack a unified framework that jointly optimizes across the spatial, temporal, and frequency domains. This prevents the model from exploiting information in all three domains simultaneously and degrades generation quality. In addition, motion-irrelevant cues introduced by noise are often entangled with the features that genuinely drive generation, which leads to motion distortion.
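To make the distinction concrete, the snippet below is a minimal, illustrative sketch (NumPy/SciPy, not the paper's implementation) of what frequency-domain analysis of a motion clip looks like: a DCT along the time axis separates low-frequency coefficients, which capture the overall motion trend, from high-frequency coefficients, which capture fine dynamics.

```python
import numpy as np
from scipy.fft import dct, idct

# Toy motion clip: T frames x J joints x 3 coordinates (random placeholder values).
T, J = 60, 22
motion = np.random.randn(T, J, 3)

# Type-II DCT along the time axis gives a per-joint frequency decomposition.
coeffs = dct(motion, axis=0, norm="ortho")       # (T, J, 3) spectral coefficients

# Keep only the k lowest frequencies: the smooth motion trend.
k = 8
low = np.zeros_like(coeffs)
low[:k] = coeffs[:k]
trend = idct(low, axis=0, norm="ortho")          # smooth trajectory reconstruction
detail = motion - trend                          # residual fine dynamics / jitter
```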

To address these issues, we propose Tri-Domain Causal Text-to-Motion Generation (TriC-Motion), a novel diffusion-based framework that integrates spatial-temporal-frequency modeling with causal intervention. TriC-Motion comprises three core modules for domain-specific modeling: Temporal Motion Encoding, Spatial Topology Modeling, and Hybrid Frequency Analysis. A Score-guided Tri-domain Fusion module then integrates the valuable information from the three domains, jointly preserving temporal consistency, spatial topology, motion trends, and dynamics. Finally, the Causality-based Counterfactual Motion Disentangler exposes and suppresses motion-irrelevant cues introduced by noise, disentangling each domain's genuine contribution to generation for superior results.
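For readers who prefer code, the following is a minimal sketch of how such a tri-domain design could be wired together. The module names follow the paper, but every layer choice, tensor shape, and the learned per-frame fusion scores are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriDomainFusionSketch(nn.Module):
    """Illustrative stand-in for the three domain modules and the
    Score-guided Tri-domain Fusion; layer choices and shapes are assumptions."""

    def __init__(self, d_motion: int = 263, d_model: int = 256):
        super().__init__()
        # Temporal Motion Encoding: a sequence model over frames (GRU assumed here).
        self.temporal = nn.GRU(d_motion, d_model, batch_first=True)
        # Spatial Topology Modeling: per-frame mixing of joint features (MLP assumed here).
        self.spatial = nn.Sequential(
            nn.Linear(d_motion, d_model), nn.GELU(), nn.Linear(d_model, d_model))
        # Hybrid Frequency Analysis: projection of the temporal FFT magnitude (assumed).
        self.freq = nn.Linear(d_motion, d_model)
        # Score head: one fusion weight per domain, per frame.
        self.score = nn.Linear(3 * d_model, 3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, d_motion) motion features inside the diffusion denoiser.
        h_t, _ = self.temporal(x)                         # temporal features (B, T, D)
        h_s = self.spatial(x)                             # spatial features  (B, T, D)
        spec = torch.fft.rfft(x, dim=1).abs()             # spectrum (B, T//2+1, d_motion)
        h_f = self.freq(spec)                             # frequency features
        h_f = F.interpolate(h_f.transpose(1, 2), size=x.shape[1],
                            mode="linear", align_corners=False).transpose(1, 2)
        w = torch.softmax(self.score(torch.cat([h_t, h_s, h_f], dim=-1)), dim=-1)
        # Score-guided fusion: convex combination of the three domain features.
        return w[..., 0:1] * h_t + w[..., 1:2] * h_s + w[..., 2:3] * h_f
```

In the full method the fused features would feed the diffusion denoiser, and the Causality-based Counterfactual Motion Disentangler would additionally intervene on motion-irrelevant cues; that causal component is omitted from this sketch.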

Extensive experimental results validate that TriC-Motion achieves superior performance compared to state-of-the-art methods, attaining an outstanding R@1 of 0.612 on the HumanML3D dataset. These results demonstrate its capability to generate high-fidelity, coherent, diverse, and text-aligned motion sequences.

Overview

[Figure: framework overview of TriC-Motion.]

Comparison with other methods

A person stretched out his arms and then leaned forward slightly as if looking at something.

Ours | MARDM (CVPR 2025) | MotionLCM-v2 (ECCV 2024) | MotionStreamer (ICCV 2025) | SALAD (CVPR 2025)

A person squats to lift something up then struggles to carry and put it down hardly.

Ours | MARDM (CVPR 2025) | MotionLCM-v2 (ECCV 2024) | MotionStreamer (ICCV 2025) | SALAD (CVPR 2025)

A person jogs in place at a steady rhythm, slowly turns to the right and crouches down.

Ours | MARDM (CVPR 2025) | MotionLCM-v2 (ECCV 2024) | MotionStreamer (ICCV 2025) | SALAD (CVPR 2025)

A person walks to the right, bends the upper body forward, straightens up, and then walks back to the left.

Ours | MARDM (CVPR 2025) | MotionLCM-v2 (ECCV 2024) | MotionStreamer (ICCV 2025) | SALAD (CVPR 2025)