ACM MM 2025

EchoMask: Speech-Queried Attention-based Mask Modeling for Holistic Co-Speech Motion Generation

¹Wuhan University  ·  ²Tongyi Lab, Alibaba Group
EchoMask teaser

📋 Abstract

Masked modeling frameworks have shown promise in co-speech motion generation, but they struggle to identify semantically significant frames for effective motion masking. In this work, we propose a speech-queried attention-based mask modeling framework for co-speech motion generation. Our key insight is to leverage motion-aligned speech features to guide the masked motion modeling process, selectively masking rhythm-related and semantically expressive motion frames. Specifically, we first propose a motion-audio alignment module (MAM) that constructs a latent motion-audio joint space. Both low-level and high-level speech features are projected into this space, allowing learnable speech queries to extract motion-aligned speech representations. We then introduce a speech-queried attention mechanism (SQA) that computes frame-level attention scores through interactions between motion keys and speech queries, steering selective masking toward motion frames with high scores. Finally, the motion-aligned speech features are also injected into the generation network to facilitate co-speech motion generation. Qualitative and quantitative evaluations confirm that our method outperforms existing state-of-the-art approaches, producing high-quality co-speech motion. The code will be released at https://github.com/Xiangyue-Zhang/EchoMask.
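
To make the MAM idea concrete, here is a minimal sketch of how learnable speech queries might be refined over projected low- and high-level speech features via cross-attention to yield motion-aligned speech representations. The module names, dimensions, and use of nn.MultiheadAttention are illustrative assumptions, not the released implementation:

```python
# Sketch of learnable speech queries attending over projected low-/high-level
# speech features (assumed shapes and layer choices; not the official code).
import torch
import torch.nn as nn

class SpeechQueryEncoder(nn.Module):
    def __init__(self, dim=256, num_queries=32, heads=4):
        super().__init__()
        # Learnable speech queries, shared across all inputs.
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.attn_low = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_high = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feat_low, feat_high):
        """feat_low / feat_high: (B, T, dim) projected speech features."""
        q = self.queries.unsqueeze(0).expand(feat_low.size(0), -1, -1)
        # Refine queries hierarchically: low-level features first, then high-level.
        q, _ = self.attn_low(q, feat_low, feat_low)
        q, _ = self.attn_high(q, feat_high, feat_high)
        return q  # (B, num_queries, dim) motion-aligned speech queries
```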

🎬 Video

🔬 Comparison of Masked Motion Modeling Concepts

Can speech be used as a query to identify semantically important motion frames worth focusing on during masked modeling?

Comparison of masking strategies
Existing methods predominantly adopt random (a) or loss-based (b) masking strategies. Random masking often fails to target semantically meaningful regions. Loss-based masking prioritizes frames with high reconstruction error, but high loss may simply reflect abrupt yet uninformative transitions. Our EchoMask (c) uses speech-queried attention to identify semantically important frames.
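
As a concrete (hypothetical) reading of strategy (c), the sketch below scores each motion frame by its cross-attention to speech queries and masks the highest-scoring fraction. The scaled dot-product scoring, mean pooling over queries, and fixed top-k ratio are assumptions for illustration:

```python
# Minimal sketch of speech-queried attention masking (concept (c) above).
import torch

def speech_queried_mask(motion_keys, speech_queries, mask_ratio=0.5):
    """Select motion frames to mask by their attention to speech queries.

    motion_keys:    (B, T, D) per-frame motion features (keys)
    speech_queries: (B, Nq, D) motion-aligned speech features (queries)
    Returns a boolean mask of shape (B, T); True marks frames to mask.
    """
    d = motion_keys.size(-1)
    # Scaled dot-product attention between every speech query and motion frame.
    attn = torch.einsum('bqd,btd->bqt', speech_queries, motion_keys) / d ** 0.5
    attn = attn.softmax(dim=-1)                  # (B, Nq, T)
    frame_scores = attn.mean(dim=1)              # (B, T) per-frame importance
    k = max(1, int(mask_ratio * motion_keys.size(1)))
    top = frame_scores.topk(k, dim=-1).indices   # highest-attention frames
    mask = torch.zeros_like(frame_scores, dtype=torch.bool)
    mask.scatter_(1, top, True)
    return mask
```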

🏗️ Framework

Architecture of EchoMask: speech-queried attention-based mask modeling for co-speech motion generation.

EchoMask framework
Architecture of EchoMask. (a) MAM projects motion and audio into a shared latent space. Learnable speech queries \(Q'\) are refined through hierarchical cross-attention with low- and high-level HuBERT features (\( \gamma_l, \gamma_h \)) and jointly processed with the quantized latent motion \( \tilde{z}_m \) via a shared transformer, optimized with a contrastive loss. (b) Given a motion sequence \( m \), the mask transformer teacher computes a cross-attention map \( \mathcal{M} \) between latent poses \(p\) and the motion-aligned speech features \( Q \), identifying semantically important frames. These frames are masked via a Soft2Hard strategy to produce \( \tilde{m} \), which the student transformer uses to generate motion tokens.
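
For intuition on the alignment objective in (a), below is a minimal sketch of a symmetric InfoNCE-style contrastive loss between pooled speech-query features and pooled motion latents; the mean pooling, normalization, and temperature are assumptions, and the paper's exact formulation may differ:

```python
# Sketch of a contrastive loss pulling each clip's speech queries Q toward its
# quantized motion latents z_m and pushing apart mismatched pairs (assumed form).
import torch
import torch.nn.functional as F

def mam_contrastive_loss(q_feats, z_m, temperature=0.07):
    """q_feats: (B, Nq, D) speech-query features; z_m: (B, T, D) motion latents."""
    # Pool each sequence to one embedding per clip and L2-normalize.
    q = F.normalize(q_feats.mean(dim=1), dim=-1)   # (B, D)
    m = F.normalize(z_m.mean(dim=1), dim=-1)       # (B, D)
    logits = q @ m.t() / temperature               # (B, B) similarity matrix
    targets = torch.arange(q.size(0), device=q.device)
    # Symmetric cross-entropy: speech-to-motion and motion-to-speech.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```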

🎨 Qualitative Comparisons

Visual comparisons of body and facial motion generation.

Body comparison on BEAT2
Comparison on the BEAT2 dataset. Red boxes highlight implausible or uncoordinated motions, while green boxes indicate coherent and semantically appropriate results. Our EchoMask consistently generates co-speech motions that are semantically aligned with the ground truth.
Facial comparison on BEAT2
Facial comparison on BEAT2. Our approach tightly synchronizes facial expressions with both phonetic and semantic cues in speech, producing natural and articulate lip movements.

📊 Quantitative Comparisons

Comparison with state-of-the-art methods on standard benchmarks.

Quantitative results
Lower values indicate better performance for FMD, FGD, MSE, and LVD, while higher values are better for BC and DIV. Best results are shown in bold.
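
For reference, DIV (diversity) is commonly computed as the mean pairwise distance between motion samples generated for the same speech input; the sketch below shows one such formulation, though the feature space and distance actually used in the paper may differ:

```python
# One common formulation of the DIV metric (illustrative only).
import torch

def diversity(motions):
    """motions: (N, T, D) generated motion samples. Returns mean pairwise L2."""
    flat = motions.reshape(motions.size(0), -1)   # (N, T*D)
    dists = torch.cdist(flat, flat, p=2)          # (N, N) pairwise distances
    n = flat.size(0)
    # Average over the off-diagonal (distinct) pairs.
    return dists.sum() / (n * (n - 1))
```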

📄 BibTeX

@inproceedings{zhang2025echomask,
  title={EchoMask: Speech-Queried Attention-based Mask Modeling for Holistic Co-Speech Motion Generation},
  author={Zhang, Xiangyue and Li, Jianfang and Zhang, Jiaxu and Ren, Jianqiang and Bo, Liefeng and Tu, Zhigang},
  booktitle={Proceedings of the 33rd ACM International Conference on Multimedia},
  pages={10827--10836},
  year={2025}
}