WebVision 2020 VIRTUAL

Paper Session - CVPR Main Conference Invited Speakers: Paper #2 (10:41–10:45)

Paper Title: Action Segmentation with Joint Self-Supervised Temporal Domain Adaptation

Authors: Min-Hung Chen, Baopu Li, Yingze Bao, Ghassan AlRegib, Zsolt Kira

Emails: cmhungsteve@gatech.edu, baopuli@baidu.com, baoyingze@baidu.com, alregib@gatech.edu, zkira@gatech.edu

Short Description: A method for cross-domain action segmentation that aligns feature spaces across multiple temporal scales to reduce spatio-temporal variability.

Keywords: domain adaptation, action segmentation, self-supervised learning, video understanding, temporal dynamics, domain discrepancy, temporal variations, multi-scale


Links: Talk | Slides | Paper


Abstract: Despite the recent progress of fully-supervised action segmentation techniques, the performance is still not fully satisfactory. One main challenge is the problem of spatio-temporal variations (e.g., different people may perform the same activity in various ways). Therefore, we exploit unlabeled videos to address this problem by reformulating the action segmentation task as a cross-domain problem with domain discrepancy caused by spatio-temporal variations. To reduce the discrepancy, we propose Self-Supervised Temporal Domain Adaptation (SSTDA), which contains two self-supervised auxiliary tasks (binary and sequential domain prediction) to jointly align cross-domain feature spaces embedded with local and global temporal dynamics, achieving better performance than other Domain Adaptation (DA) approaches. On three challenging benchmark datasets (GTEA, 50Salads, and Breakfast), SSTDA outperforms the current state-of-the-art method by large margins (e.g., for the F1@25 score, from 59.6% to 69.1% on Breakfast, from 73.4% to 81.5% on 50Salads, and from 83.6% to 89.1% on GTEA), and requires only 65% of the labeled training data for comparable performance, demonstrating the usefulness of adapting to unlabeled target videos across variations. The source code is available at https://github.com/cmhungsteve/SSTDA.
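
To make the two auxiliary tasks in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation (see the linked repository for that). The module names (SSTDAHeads, GradReverse), the feature shapes, and the simplified two-segment version of sequential domain prediction are all illustrative assumptions; the paper's full method additionally uses domain attentive pooling and applies these losses at multiple stages of the segmentation model.

```python
# Illustrative sketch of SSTDA's two self-supervised auxiliary tasks.
# Assumption: frame features of shape (T, feat_dim) come from a temporal
# segmentation backbone; all names here are hypothetical, not the paper's code.
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses and scales gradients backward
    (DANN-style), so the backbone learns domain-invariant features while the
    domain classifiers are trained normally."""

    @staticmethod
    def forward(ctx, x, beta):
        ctx.beta = beta
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.beta * grad_out, None


def grad_reverse(x, beta=1.0):
    return GradReverse.apply(x, beta)


class SSTDAHeads(nn.Module):
    def __init__(self, feat_dim=64, n_segments=2):
        super().__init__()
        # Binary task: classify each frame feature as source (0) or target (1),
        # aligning local temporal dynamics.
        self.frame_clf = nn.Linear(feat_dim, 2)
        # Sequential task: given a shuffled sequence of pooled video segments
        # from both domains, predict which domain permutation was used,
        # aligning global temporal dynamics.
        self.seq_clf = nn.Linear(feat_dim * n_segments, math.factorial(n_segments))

    def binary_domain_loss(self, feat_s, feat_t, beta):
        # feat_s, feat_t: (T, feat_dim) frame-level features per domain.
        feats = torch.cat([feat_s, feat_t], dim=0)
        logits = self.frame_clf(grad_reverse(feats, beta))
        labels = torch.cat(
            [torch.zeros(len(feat_s)), torch.ones(len(feat_t))]
        ).long()
        return F.cross_entropy(logits, labels)

    def sequential_domain_loss(self, feat_s, feat_t, beta):
        # Pool each video into one segment-level feature (mean pooling stands
        # in for the paper's attentive pooling).
        seg_s, seg_t = feat_s.mean(dim=0), feat_t.mean(dim=0)
        # Shuffle the two segments; the label is the chosen domain permutation.
        if torch.rand(1).item() < 0.5:
            seq, label = torch.cat([seg_s, seg_t]), 0  # (source, target)
        else:
            seq, label = torch.cat([seg_t, seg_s]), 1  # (target, source)
        logits = self.seq_clf(grad_reverse(seq.unsqueeze(0), beta))
        return F.cross_entropy(logits, torch.tensor([label]))
```

Both losses pass through the gradient reversal layer, so adding them (weighted) to the supervised segmentation loss on source videos adversarially aligns source and target features at both the frame (local) and segment (global) temporal scales, which is the joint alignment the abstract describes.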