Project Page

Not All Frames Are Equal: Complexity-Aware Masked Motion Generation via Motion Spectral Descriptors

1Beihang University  ·   2Wuhan University

* co-first authors; † corresponding author

DynMask teaser
Left: Standard masked motion generation allocates uniform modeling budget across all frames, leading to degraded quality on dynamically complex motion. Right: DynMask uses the Motion Spectral Descriptor (MSD) to make generation complexity-aware: more supervision, stronger attention exchange, and broader decoding exploration are directed toward dynamically difficult frames.

📋 Abstract

Masked generative models have become a strong paradigm for text-to-motion synthesis, but they still treat motion frames too uniformly during masking, attention, and decoding. This is a poor match for motion, where local dynamic complexity varies sharply over time. We show that current masked motion generators degrade disproportionately on dynamically complex motions, and that frame-wise generation error is strongly correlated with motion dynamics. Motivated by this mismatch, we introduce the Motion Spectral Descriptor (MSD), a simple and parameter-free measure of local dynamic complexity computed from the short-time spectrum of motion velocity. Unlike learned difficulty predictors, MSD is deterministic, interpretable, and derived directly from the motion signal itself. We use MSD to make masked motion generation complexity-aware. In particular, MSD guides content-focused masking during training, provides a spectral similarity prior for self-attention, and can additionally modulate token-level sampling during iterative decoding. Built on top of masked motion generators, our method, DynMask, improves motion generation most clearly on dynamically complex motions while also yielding stronger overall FID on HumanML3D and KIT-ML. The code is available at https://github.com/Xiangyue-Zhang/DynMask.

🎬 Video

🔬 MSD Spectral Fingerprints

Representative MSD heatmaps for different motion types reveal distinct time-frequency patterns.

MSD spectral fingerprints
Walking and running exhibit concentrated low-frequency energy, while dance and jump produce richer temporal spectra reflecting their higher dynamic complexity. This confirms that MSD captures meaningful local motion structure rather than acting as an arbitrary training heuristic.

🏗️ Framework

Overview of DynMask: MSD-guided complexity-aware masked motion generation.

DynMask framework
Overview of DynMask. (a) Core and full framework. Given motion tokens from a VQ-VAE and a text condition from a frozen CLIP encoder, DynMask computes one motion-grounded complexity signal, the Motion Spectral Descriptor (MSD), and reuses it throughout masked generation. In the core model, MSD guides content-focused mask selection and motion-aware attention inside the masked transformer. In the full model, the same signal is further used by complexity-aware decoding at inference time. (b) MSD computation: we compute token-embedding velocity, apply a sliding-window Type-II DCT, and obtain both a frame-level spectral descriptor \(\boldsymbol{\phi}_t\) and a scalar complexity summary \(\Omega(t)\). (c) Component details: Motion-aware attention blends learned attention logits with MSD-derived spectral similarity using a layer-decayed coefficient, while complexity-aware decoding assigns higher temperature and noise to dynamically harder frames.
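The MSD computation in panel (b) can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it uses the scalar speed of token embeddings (the paper may apply the DCT per embedding dimension), and the window size, padding mode, energy-based definition of \(\Omega(t)\), and the temperature range in `complexity_aware_temperature` are all assumptions made here for concreteness.

```python
import numpy as np
from scipy.fft import dct

def motion_spectral_descriptor(tokens, win=8, eps=1e-8):
    """Illustrative MSD sketch. tokens: (T, D) motion token embeddings.
    Returns frame-level spectral descriptors phi (T, win) and a scalar
    complexity summary omega (T,). Window size and padding are assumptions."""
    T, _ = tokens.shape
    # Frame-to-frame velocity of the embeddings; prepend keeps length T.
    vel = np.diff(tokens, axis=0, prepend=tokens[:1])
    speed = np.linalg.norm(vel, axis=1)  # (T,) scalar speed per frame
    # Sliding window over the speed signal, reflect-padded at the ends.
    half = win // 2
    padded = np.pad(speed, (half, win - half - 1), mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, win)  # (T, win)
    # Type-II DCT of each local window -> frame-level descriptor phi_t.
    phi = dct(windows, type=2, axis=1, norm="ortho")
    # Scalar Omega(t): fraction of spectral energy outside the DC bin,
    # so steady motion scores low and bursty motion scores high.
    energy = phi ** 2
    omega = energy[:, 1:].sum(axis=1) / (energy.sum(axis=1) + eps)
    return phi, omega

def complexity_aware_temperature(omega, t_min=0.8, t_max=1.2):
    """Map Omega(t) to a per-frame sampling temperature, as in panel (c):
    dynamically harder frames get higher temperature (range is an assumption)."""
    return t_min + (t_max - t_min) * omega
```

Because `omega` lies in [0, 1] by construction, the same signal can directly modulate mask-selection probabilities during training and sampling temperature during iterative decoding.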

🎨 Qualitative Comparisons

Qualitative comparison on challenging dynamic prompts.

Qualitative comparison
Compared with representative masked baselines (MoMask, BAMM), DynMask better preserves dynamic timing, side-specific limb actions, kick extension, running gait, jump takeoff, and airborne phases. Green circles highlight locally correct dynamic details, while red circles mark failure cases in the baselines.

👁️ Attention Visualization

Attention visualization on a multi-phase turning-and-spinning sequence.

Attention visualization
(a) The MoMask baseline produces comparatively diffuse attention and weak phase separation. (b) DynMask yields clearer phase-structured attention aligned with the underlying motion segments. (c) The MSD spectral similarity matrix shows strong within-phase consistency, explaining why motion-aware attention connects dynamically compatible regions more effectively.

📊 Quantitative Results

DynMask achieves the best FID within the masked motion generation family on both HumanML3D and KIT-ML benchmarks.

Quantitative results
On HumanML3D, it reduces FID from 0.045 (MoMask) to 0.028, a 38% improvement, while simultaneously achieving the best MM-Dist (2.879) and Diversity (9.73). On KIT-ML, it achieves the lowest FID (0.141) across all masked methods and competitive retrieval metrics.

📄 BibTeX

@misc{zhou2026framesequalcomplexityawaremasked,
      title={Not All Frames Are Equal: Complexity-Aware Masked Motion Generation via Motion Spectral Descriptors},
      author={Pengfei Zhou and Xiangyue Zhang and Xukun Shen and Yong Hu},
      year={2026},
      eprint={2603.29655},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.29655},
}