# Graph Convolutional Networks for Temporal Action Localization

Open-source project: Alvin-Zeng/PGCN — https://github.com/Alvin-Zeng/PGCN (primary language: Jupyter Notebook, 99.3%)

This repo holds the code and models for the PGCN framework presented at ICCV 2019:

**Graph Convolutional Networks for Temporal Action Localization**
Runhao Zeng\*, Wenbing Huang\*, Mingkui Tan, Yu Rong, Peilin Zhao, Junzhou Huang, Chuang Gan. ICCV 2019, Seoul, Korea.

## Updates

**20/12/2019** We have uploaded the RGB features, trained models, and evaluation results! We found that increasing the number of proposals to 800 at test time further boosts performance on THUMOS14. We have also updated the proposal list.

**04/07/2020** We have uploaded the I3D features for ActivityNet, the training configuration file `data/dataset_cfg.yaml`, and the proposal lists for ActivityNet.

## Usage Guide

### Prerequisites

The training and testing in PGCN are reimplemented in PyTorch for ease of use. Other minor Python modules can be installed by running

```bash
pip install -r requirements.txt
```

### Code and Data Preparation

#### Get the code

Clone this repo with git; please remember to use `--recursive`:

```bash
git clone --recursive https://github.com/Alvin-Zeng/PGCN
```

#### Download Datasets

We support experimenting with two publicly available datasets for temporal action detection: THUMOS14 and ActivityNet v1.3. Here are the steps to download these two datasets.
#### Download Features

Here, we provide the I3D features (RGB + Flow) for training and testing.

- **THUMOS14**: You can download the features from Google Cloud or Baidu Cloud.
- **ActivityNet**: You can download the I3D Flow features from Baidu Cloud (password: jbsa) and the I3D RGB features from Google Cloud. (Note: set the interval to 16 in `ops/I3D_Pooling_Anet.py` when training with RGB features.)

#### Download Proposal Lists (ActivityNet)

Here, we provide the proposal lists for ActivityNet v1.3. You can download them from Google Cloud.

## Training PGCN

Please first set the feature paths in `data/dataset_cfg.yaml`:

```yaml
train_ft_path: $PATH_OF_TRAINING_FEATURES
test_ft_path: $PATH_OF_TESTING_FEATURES
```

Then, you can use the following command to train PGCN:

```bash
python pgcn_train.py thumos14 --snapshot_pre $PATH_TO_SAVE_MODEL
```

After training, there will be a checkpoint file whose name contains information about the dataset and the number of training epochs. This checkpoint file contains the trained model weights and can be used for testing.

## Testing Trained Models

You can obtain the detection scores by running

```bash
sh test.sh TRAINING_CHECKPOINT
```

The trained models and evaluation results are provided in the `results` folder. You can obtain the two-stream results on THUMOS14 by running

```bash
sh test_two_stream.sh THUMOS14
```
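The two-stream script combines the per-proposal detection scores of the Flow and RGB streams with fixed weights and then applies temporal NMS. The sketch below illustrates that post-processing on toy proposals; the function names, data, and shapes are illustrative assumptions rather than the repo's actual code, while the default weights and threshold mirror the Flow:RGB = 1.2:1 and 0.32 settings reported for THUMOS14.

```python
def fuse_scores(flow_scores, rgb_scores, w_flow=1.2, w_rgb=1.0):
    """Weighted sum of per-proposal scores from the Flow and RGB streams."""
    return [w_flow * f + w_rgb * r for f, r in zip(flow_scores, rgb_scores)]

def temporal_nms(proposals, scores, thresh=0.32):
    """Greedy temporal NMS over 1-D segments.

    proposals: list of (start, end) times; scores: one score per proposal.
    Returns indices of surviving proposals, highest score first; a proposal
    whose temporal IoU with an already-kept one exceeds `thresh` is dropped.
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        s1, e1 = proposals[i]
        suppressed = False
        for j in keep:
            s2, e2 = proposals[j]
            inter = max(0.0, min(e1, e2) - max(s1, s2))
            union = (e1 - s1) + (e2 - s2) - inter
            if union > 0 and inter / union > thresh:
                suppressed = True
                break
        if not suppressed:
            keep.append(i)
    return keep

# Toy proposals (start/end in seconds) with hypothetical per-stream scores
proposals = [(0.0, 2.0), (0.5, 2.5), (5.0, 7.0)]
fused = fuse_scores([0.9, 0.6, 0.7], [0.8, 0.9, 0.5])
kept = temporal_nms(proposals, fused, thresh=0.32)
print(kept)  # [0, 2]: the near-duplicate of proposal 0 is suppressed
```

Raising the Flow weight above 1.0 simply biases ties toward the motion stream; the NMS threshold trades recall of overlapping actions against duplicate detections.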
Here, 49.64% is obtained by setting the combination weights to Flow:RGB = 1.2:1 and the NMS threshold to 0.32.

## Other Info

### Citation

Please cite the following paper if you find PGCN useful in your research.
### Contact

For any questions, please file an issue or contact the authors.