
Repository: MichiganCOG/A2CL-PT

URL: https://github.com/MichiganCOG/A2CL-PT

Language: Python 100.0%

A2CL-PT

Adversarial Background-Aware Loss for Weakly-supervised Temporal Activity Localization (ECCV 2020)
paper | poster | presentation

Overview

We argue that existing methods for weakly-supervised temporal activity localization cannot sufficiently distinguish background information from the activities of interest in each video, even though this ability is critical to strong temporal activity localization. To this end, we propose a novel method named Adversarial and Angular Center Loss with a Pair of Triplets (A2CL-PT). Our method outperforms all previous state-of-the-art approaches: the average mAP over IoU thresholds from 0.1 to 0.9 on the THUMOS14 dataset improves significantly, from 27.9% to 30.0%. A toy sketch of the angular-triplet idea follows the table below.

Method \ mAP(%)   @0.1  @0.2  @0.3  @0.4  @0.5  @0.6  @0.7  @0.8  @0.9  AVG
UntrimmedNet      44.4  37.7  28.2  21.1  13.7    -     -     -     -     -
STPN              52.0  44.7  35.5  25.8  16.9   9.9   4.3   1.2   0.1  21.2
W-TALC            55.2  49.6  40.1  31.1  22.8    -    7.6    -     -     -
AutoLoc             -     -   35.8  29.0  21.2  13.4   5.8    -     -     -
CleanNet            -     -   37.0  30.9  23.9  13.9   7.1    -     -     -
MAAN              59.8  50.8  41.1  30.6  20.3  12.0   6.9   2.6   0.2  24.9
BaS-Net           58.2  52.3  44.6  36.0  27.0  18.6  10.4   3.9   0.5  27.9
A2CL-PT (Ours)    61.2  56.1  48.1  39.0  30.1  19.2  10.6   4.8   1.0  30.0
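
As a toy illustration of the angular-triplet idea (a minimal sketch, not the authors' exact formulation: A2CL-PT uses a pair of such triplets, one of them adversarial and background-aware, and the function name, margin value, and dimensions here are all hypothetical):

import torch
import torch.nn.functional as F

def angular_triplet_loss(features, labels, centers, margin=0.5):
    # Work on the unit hypersphere so similarity is cosine (angular) similarity.
    f = F.normalize(features, dim=1)   # (N, D) unit-norm features
    c = F.normalize(centers, dim=1)    # (C, D) unit-norm class centers
    sims = f @ c.t()                   # (N, C) cosine similarities
    # Similarity to each feature's own class center (the positive).
    pos = sims.gather(1, labels.unsqueeze(1)).squeeze(1)
    # Hardest other-class center (the negative): mask out the own class first.
    mask = F.one_hot(labels, num_classes=sims.size(1)).bool()
    neg = sims.masked_fill(mask, float('-inf')).max(dim=1).values
    # Hinge: the positive similarity should beat the negative by at least the margin.
    return F.relu(neg - pos + margin).mean()

# Toy usage: 8 segment features, 20 classes, 2048-D I3D-like features.
features = torch.randn(8, 2048)
labels = torch.randint(0, 20, (8,))
centers = torch.randn(20, 2048, requires_grad=True)  # learnable class centers
print(angular_triplet_loss(features, labels, centers))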

Weakly-supervised Temporal Activity Localization

The main goal of temporal activity localization is to find the start and end times of activities in untrimmed videos. A weakly-supervised version of the task has recently taken hold in the community: here, one assumes that only video-level ground-truth activity labels are available. These video-level annotations are easy to collect and already exist across many datasets, so weakly-supervised methods can be applied to a broader range of situations.

Example

A full example video clip is included in the examples folder. You can reproduce the detection results by running run_example.py, as shown below.
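
A plausible invocation (assuming the script takes no required arguments; check its options if this fails):

$ python run_example.py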

Code Usage

First, clone this repository and download these pre-extracted I3D features of the THUMOS14 dataset: feature_train.npy and feature_val.npy. Put both files in the dataset/THUMOS14 folder.
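
The expected layout (a sketch based only on the file names and folder above):

dataset/
└── THUMOS14/
    ├── feature_train.npy
    └── feature_val.npy

Then run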

$ python main.py --mode val

This will reproduce the results reported in the paper. You can also train the model from scratch by running

$ python main.py --mode train

You can refer to the main.py file to play with the hyperparameters (margins, alpha, beta, gamma, omega, etc.).

Notes

  • We performed all the experiments with Python 3.6 and PyTorch 1.3.1 on a single GPU (TITAN Xp).

  • We also provide pre-extracted features for the ActivityNet-1.3 dataset: link. As described in our paper, you also need to add a 1D grouped convolutional layer (k=13, p=12, d=2); a minimal sketch of such a layer follows this list. Please refer to this discussion.
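
A minimal PyTorch sketch of such a layer (the k/p/d values come from the note above; the 2048-D feature size, i.e. 1024-D RGB plus 1024-D flow I3D streams, and groups=2 are our assumptions, not values confirmed by the repository):

import torch
import torch.nn as nn

feat_dim = 2048  # assumed I3D feature size: 1024 (RGB) + 1024 (flow)
# kernel_size=13, padding=12, dilation=2 per the note; groups=2 assumed (one group per stream)
conv = nn.Conv1d(feat_dim, feat_dim, kernel_size=13, padding=12, dilation=2, groups=2)

x = torch.randn(1, feat_dim, 400)  # (batch, channels, temporal length)
print(conv(x).shape)  # torch.Size([1, 2048, 400]): padding=12 with dilation=2 preserves length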

Citation

@inproceedings{min2020adversarial,
  title={Adversarial Background-Aware Loss for Weakly-supervised Temporal Activity Localization},
  author={Min, Kyle and Corso, Jason J},
  booktitle={European Conference on Computer Vision},
  pages={283--299},
  year={2020},
  organization={Springer}
}


