STARS: Self-supervised Tuning for 3D Action Recognition in Skeleton Sequences

1KITE Research Institute, 2Institute of Biomedical Engineering, University of Toronto, 3Department of Computer Science, University of Toronto
[Figure: Overview of the STARS method]

Abstract

Self-supervised pretraining methods with masked prediction demonstrate remarkable within-dataset performance in skeleton-based action recognition. However, we show that, unlike contrastive learning approaches, they do not produce well-separated clusters. Additionally, these methods struggle with generalization in few-shot settings. To address these issues, we propose Self-supervised Tuning for 3D Action Recognition in Skeleton sequences (STARS). Specifically, STARS first performs masked prediction with an encoder-decoder architecture. It then employs nearest-neighbor contrastive learning to partially tune the encoder's weights, enhancing the formation of semantic clusters for different actions. By tuning the encoder for only a few epochs, and without using hand-crafted data augmentations, STARS achieves state-of-the-art self-supervised results on several benchmarks, including NTU-60, NTU-120, and PKU-MMD. In addition, STARS significantly outperforms masked prediction models in few-shot settings, where the model has not seen the target actions during pretraining.
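To make the second (contrastive tuning) stage concrete, below is a minimal PyTorch sketch of nearest-neighbor contrastive learning with a FIFO support queue, in the spirit of NNCLR. It assumes an `encoder` already pretrained with masked prediction; the class name, feature dimensions, queue size, and the way the two views are produced are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NNContrastiveTuner(nn.Module):
    """Sketch of nearest-neighbor contrastive tuning (NNCLR-style).

    Assumes `encoder` maps a skeleton sequence to a (B, feat_dim) feature
    and was pretrained with masked prediction. In the paper only part of
    the encoder is tuned; freezing the remaining layers is left to the
    caller. All names and hyperparameters here are illustrative.
    """

    def __init__(self, encoder, feat_dim=256, proj_dim=128,
                 queue_size=32768, temperature=0.1):
        super().__init__()
        self.encoder = encoder
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, proj_dim))
        self.temperature = temperature
        # FIFO support set of past projected embeddings (L2-normalized).
        self.register_buffer(
            "queue", F.normalize(torch.randn(queue_size, proj_dim), dim=1))
        self.register_buffer("ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _dequeue_enqueue(self, z):
        # Overwrite the oldest entries with the newest batch.
        n = z.shape[0]
        ptr = int(self.ptr)
        idx = torch.arange(ptr, ptr + n, device=z.device) % self.queue.shape[0]
        self.queue[idx] = z
        self.ptr[0] = (ptr + n) % self.queue.shape[0]

    def forward(self, view1, view2):
        # Two embeddings of the same sequence (e.g., from different
        # maskings); the paper avoids hand-crafted data augmentations.
        z1 = F.normalize(self.projector(self.encoder(view1)), dim=1)
        z2 = F.normalize(self.projector(self.encoder(view2)), dim=1)
        # Swap z1 for its nearest neighbor in the support set: the NN
        # serves as the positive for z2, pulling semantically similar
        # actions into the same cluster.
        with torch.no_grad():
            nn_idx = (z1 @ self.queue.t()).argmax(dim=1)
            nn1 = self.queue[nn_idx]
        logits = nn1 @ z2.t() / self.temperature          # (B, B)
        labels = torch.arange(z1.shape[0], device=z1.device)
        loss = F.cross_entropy(logits, labels)
        self._dequeue_enqueue(z1.detach())
        return loss

The actual method symmetrizes the loss over both views and tunes the encoder for only a few epochs; this sketch shows one direction to keep the core idea visible.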

Linear Evaluation Results

k-NN Evaluation Results (k=1)

Few-shot Results

In this evaluation, the model is pretrained on the NTU-60 XSub dataset and evaluated on 60 novel actions from NTU-120 XSub.
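As an illustration of this protocol, the sketch below scores a frozen pretrained encoder with a 1-nearest-neighbor classifier on embeddings of unseen actions. Function and variable names (`few_shot_accuracy`, `support_x`, etc.) are placeholders for the idea, not the released evaluation code.

import torch
import torch.nn.functional as F

@torch.no_grad()
def few_shot_accuracy(encoder, support_x, support_y, query_x, query_y):
    """1-NN few-shot evaluation on frozen features (illustrative).

    support_x: (N_s, ...) labeled examples of unseen action classes.
    query_x:   (N_q, ...) held-out sequences of the same classes.
    """
    encoder.eval()
    z_s = F.normalize(encoder(support_x), dim=1)   # support embeddings
    z_q = F.normalize(encoder(query_x), dim=1)     # query embeddings
    # Cosine similarity; each query takes the label of its nearest support.
    pred = support_y[(z_q @ z_s.t()).argmax(dim=1)]
    return (pred == query_y).float().mean().item()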

t-SNE Visualization

BibTeX

@article{mehraban2024stars,
  author    = {Mehraban, Soroush and Rajabi, Mohammad Javad and Taati, Babak},
  title     = {STARS: Self-supervised Tuning for 3D Action Recognition in Skeleton Sequences},
  journal   = {arXiv preprint arXiv:2407.10935},
  year      = {2024},
}