The human meshes shown below are recovered at a throughput of 103 frames per second on an RTX 3090 GPU.
Recent transformer-based models for 3D Human Mesh Recovery (HMR) have achieved strong performance but often suffer from high computational cost and complexity due to deep transformer architectures and redundant tokens. In this paper, we introduce two HMR-specific merging strategies: Error-Constrained Layer Merging (ECLM) and Mask-guided Token Merging (Mask-ToMe). ECLM selectively merges transformer layers that have minimal impact on the Mean Per Joint Position Error (MPJPE), while Mask-ToMe focuses on merging background tokens that contribute little to the final prediction. To further address the potential performance drop caused by merging, we propose a diffusion-based decoder that incorporates temporal context and leverages pose priors learned from large-scale motion capture datasets. Experiments across multiple benchmarks demonstrate that our method achieves up to a 2.3x speed-up while slightly improving performance over the baseline.
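To make the ECLM idea concrete, below is a minimal sketch of one plausible greedy selection procedure: repeatedly merge the adjacent layer pair that hurts validation MPJPE the least, and stop once the error budget is exceeded. The helpers `eval_mpjpe` and `merge_pair`, the `budget_mm` threshold, and the ViT-style `model.blocks` attribute are all illustrative assumptions, not the paper's exact procedure.

```python
import copy

def eclm_select(model, val_loader, eval_mpjpe, merge_pair, budget_mm=1.0):
    """Greedy error-constrained layer merging (illustrative sketch).

    eval_mpjpe(model, val_loader) -> float  # validation MPJPE in mm (assumed helper)
    merge_pair(model, i) -> model           # merges blocks i and i+1 (assumed helper)
    """
    base_err = eval_mpjpe(model, val_loader)
    improved = True
    while improved and len(model.blocks) >= 2:
        improved = False
        # Score every adjacent pair; `model.blocks` assumes a ViT-style ModuleList.
        candidates = [
            merge_pair(copy.deepcopy(model), i)
            for i in range(len(model.blocks) - 1)
        ]
        scored = [(eval_mpjpe(c, val_loader), c) for c in candidates]
        err, best = min(scored, key=lambda t: t[0])
        # Accept the merge only if the MPJPE increase stays within the budget.
        if err - base_err <= budget_mm:
            model, improved = best, True
    return model
```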
Left: Overview of the Mask-ToMe strategy. Tokens are split into sets A and B, and the most similar background token pairs are merged according to their similarity scores while person tokens are masked out. The bold and underlined numbers denote the highest and second-highest similarity scores, respectively; the values shown are illustrative examples only. Right: Overview of the ECLM method, which merges layers without affecting the Mean Per Joint Position Error (MPJPE).
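To ground the left panel in code, here is a minimal PyTorch sketch of mask-guided bipartite token merging in the spirit of ToMe. The function name, the (N, C) single-image layout, the alternating A/B split, and the averaging rule are assumptions for illustration; the paper's exact merging schedule may differ.

```python
import torch
import torch.nn.functional as F

def mask_tome(tokens: torch.Tensor, person_mask: torch.Tensor, r: int) -> torch.Tensor:
    """Merge r background token pairs via masked bipartite soft matching.

    tokens:      (N, C) features from one transformer block.
    person_mask: (N,) bool, True where a token overlaps the person.
    r:           number of token pairs to merge in this block.
    """
    # Split tokens into alternating sets A and B, as in ToMe.
    a, b = tokens[0::2], tokens[1::2]
    mask_a, mask_b = person_mask[0::2], person_mask[1::2]

    # Cosine similarity between every A token and every B token.
    sim = F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).T

    # Mask out person tokens so only background pairs are merge candidates.
    sim[mask_a, :] = float("-inf")
    sim[:, mask_b] = float("-inf")

    # Each A token's best B partner; merge the r most similar pairs.
    best_sim, best_idx = sim.max(dim=-1)
    src = best_sim.topk(min(r, best_sim.numel())).indices  # A tokens to merge away
    dst = best_idx[src]                                    # their B partners

    # Average merged tokens into their partners (duplicate destinations
    # simply keep the last write in this simplified sketch).
    b = b.index_copy(0, dst, (a[src] + b[dst]) / 2)
    keep = torch.ones(a.size(0), dtype=torch.bool, device=tokens.device)
    keep[src] = False
    return torch.cat([a[keep], b], dim=0)
```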
Diffusion Decoder Overview. (a) In the first stage of training, a VAE is trained to learn human motion priors. (b) In the second stage, a denoiser is trained to recover pose latents conditioned on per-frame encodings extracted from the encoder.
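A compressed sketch of the two training stages might look like the following. The VAE interface, the diffusers-style noise scheduler (`add_noise`, `config.num_train_timesteps`), and the denoiser signature are assumptions for illustration rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def stage1_vae_step(vae, motion, beta=1e-3):
    """Stage (a): fit a motion VAE so its latents encode human pose priors."""
    mu, logvar = vae.encode(motion)                       # assumed VAE interface
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
    recon = vae.decode(z)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    return F.mse_loss(recon, motion) + beta * kl

def stage2_denoiser_step(denoiser, vae, motion, frame_enc, scheduler):
    """Stage (b): train a denoiser to recover clean pose latents conditioned
    on per-frame encodings from the (frozen) HMR encoder."""
    with torch.no_grad():
        z0, _ = vae.encode(motion)                        # clean pose latents
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (z0.size(0),), device=z0.device)
    noise = torch.randn_like(z0)
    zt = scheduler.add_noise(z0, noise, t)                # forward diffusion
    pred = denoiser(zt, t, frame_enc)                     # predict the added noise
    return F.mse_loss(pred, noise)
```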