Audio-Visual Event Localization in Unconstrained Videos (To appear in ECCV 2018)

Project | arXiv | Demo Video

AVE Dataset & Features

The AVE dataset can be downloaded from https://drive.google.com/open?id=1FjKwe79e0u96vdjIVwfRQ1V6SoDHe7kK.

Audio features and visual features (7.7 GB) are also released. Please put the videos of the AVE dataset into the /data/AVE folder and the features into the /data folder before running the code.
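
The released features are stored as HDF5 files. Below is a minimal sketch for inspecting them with h5py; the file name and dataset key used here are assumptions, so list the keys of the actual files first.

```python
# Minimal sketch for inspecting a released feature file with h5py.
# NOTE: the file name "audio_feature.h5" and the dataset key are
# assumptions for illustration; list the keys to find the real names.
import h5py

with h5py.File("data/audio_feature.h5", "r") as f:
    print(list(f.keys()))                # discover the dataset key(s)
    key = list(f.keys())[0]
    feats = f[key][:]                    # load the dataset as a NumPy array
    print(feats.shape, feats.dtype)      # e.g. (num_videos, segments, dim)
```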

Scripts for generating audio and visual features: https://drive.google.com/file/d/1TJL3cIpZsPHGVAdMgyr43u_vlsxcghKY/view?usp=sharing (feel free to modify and use them to process your own audio and visual data).
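
As a rough idea of what this preprocessing involves, here is a minimal sketch that extracts frames from a video by calling ffmpeg; the 16 fps rate and the paths are illustrative assumptions, not the settings of the released scripts.

```python
# Minimal frame-extraction sketch using ffmpeg via subprocess.
# NOTE: the 16-fps rate and the paths are illustrative assumptions,
# not the settings of the released feature-extraction scripts.
import subprocess

def extract_frames(video_path, out_dir, fps=16):
    # Dump frames at a fixed rate; ffmpeg must be available on the PATH.
    subprocess.run(
        ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}",
         f"{out_dir}/frame_%04d.jpg"],
        check=True,
    )

extract_frames("data/AVE/example.mp4", "data/frames")
```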

Requirements

Python 3.6, PyTorch 0.3.0, Keras, and ffmpeg.

Visualize attention maps

Run python attention_visualization.py to generate audio-guided visual attention maps.

(Figure: examples of audio-guided visual attention maps)
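
For intuition, here is a minimal sketch of audio-guided visual attention: an audio feature scores the spatial locations of a visual feature map, and the softmax-normalized scores weight the visual features. The layer sizes are illustrative assumptions, not the repository's exact ones.

```python
# A minimal sketch of audio-guided visual attention, assuming VGG-style
# 512-d visual features over a 7x7 grid and 128-d vggish audio features.
# Layer sizes are illustrative assumptions, not the repository's exact ones.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioGuidedAttention(nn.Module):
    def __init__(self, v_dim=512, a_dim=128, hid=256):
        super().__init__()
        self.v_proj = nn.Linear(v_dim, hid)
        self.a_proj = nn.Linear(a_dim, hid)
        self.score = nn.Linear(hid, 1)

    def forward(self, v, a):
        # v: (batch, locations, v_dim) flattened spatial grid (e.g. 49 cells)
        # a: (batch, a_dim) audio feature for the same one-second segment
        joint = torch.tanh(self.v_proj(v) + self.a_proj(a).unsqueeze(1))
        att = F.softmax(self.score(joint), dim=1)    # weights over locations
        pooled = (att * v).sum(dim=1)                # attended visual vector
        return pooled, att.squeeze(-1)

att_net = AudioGuidedAttention()
v = torch.randn(2, 49, 512)   # 7x7 conv feature map, flattened
a = torch.randn(2, 128)       # vggish audio embedding
pooled, weights = att_net(v, a)
print(pooled.shape, weights.shape)   # (2, 512) and (2, 49)
```

The per-location weights can be reshaped back to the 7x7 grid to render attention maps like the one above.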

Supervised audio-visual event localization

Testing:

A+V-att model in the paper: python supervised_main.py --model_name AV_att

DMRN model in the paper: python supervised_main.py --model_name DMRN

Training:

python supervised_main.py --model_name AV_att --train

Weakly-supervised audio-visual event localization

We add some videos that contain no audio-visual events to the training data; the labels of these videos are background. The processed visual features can be found in visual_feature_noisy.h5. Put this feature file into the data folder.
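
Since only video-level labels are available in this setting, one common way to train (a generic MIL-style sketch, not necessarily the exact module used in the paper) is to pool per-segment predictions into a single video-level prediction:

```python
# A minimal MIL-style sketch: pool per-segment event scores into a
# video-level prediction so that only video-level labels are needed.
# NOTE: this generic mean pooling is an assumption for illustration,
# not necessarily the paper's exact weakly-supervised module.
import torch

def video_level_scores(segment_logits):
    # segment_logits: (batch, segments, num_classes)
    return segment_logits.mean(dim=1)

logits = torch.randn(2, 10, 29)           # 10 one-second segments; 28 events + background
print(video_level_scores(logits).shape)   # torch.Size([2, 29])
```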

Testing:

W-A+V-att model in the paper: python weak_supervised_main.py

Training:

python weak_supervised_main.py --train

Cross-modality localization

For this task, we developed a cross-modal matching network. Here, we use visual feature vectors obtained via global average pooling, which you can find here. Please put the feature file into the data folder. Note that this code was implemented in Keras 2.0 with TensorFlow as the backend.
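
Global average pooling simply averages a convolutional feature map over its spatial grid to obtain one vector per frame. A minimal sketch follows (written in PyTorch for consistency with the other snippets, while the CMM code itself uses Keras):

```python
# A minimal sketch of global average pooling: average a conv feature
# map over its spatial locations to get one feature vector per frame.
import torch

fmap = torch.randn(2, 512, 7, 7)   # (batch, channels, H, W), e.g. a VGG conv map
vec = fmap.mean(dim=(2, 3))        # (batch, 512) globally averaged vector
print(vec.shape)
```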

Testing:

python cmm_test.py

Training:

python cmm_train.py

Other Related or Follow-up works

[1] Rouditchenko, Andrew, et al. "Self-supervised Audio-visual Co-segmentation." ICASSP, 2019. [Paper]

[2] Lin, Yan-Bo, Yu-Jhe Li, and Yu-Chiang Frank Wang. "Dual-modality Seq2seq Network for Audio-visual Event Localization." ICASSP, 2019. [Paper]

[3] Rana, Aakanksha, Cagri Ozcinar, and Aljosa Smolic. "Towards Generating Ambisonics Using Audio-visual Cue for Virtual Reality." ICASSP, 2019. [Paper]

[4] Wu, Yu, Linchao Zhu, Yan Yan, and Yi Yang. "Dual Attention Matching for Audio-Visual Event Localization." ICCV, 2019 (oral). [Website]

[5] Zhou, Jinxing, Liang Zheng, Yiran Zhong, Shijie Hao, and Meng Wang. "Positive Sample Propagation along the Audio-Visual Event Line." CVPR, 2021. [Paper] [Code]

Citation

If you find this work useful, please consider citing it.

@InProceedings{tian2018ave,
  author={Tian, Yapeng and Shi, Jing and Li, Bochen and Duan, Zhiyao and Xu, Chenliang},
  title={Audio-Visual Event Localization in Unconstrained Videos},
  booktitle = {ECCV},
  year = {2018}
}

Acknowledgements

Audio features are extracted using vggish, and the audio-guided visual attention model is implemented largely based on adaptive attention. We thank the authors for sharing their code. If you use our code, please also cite their nice works.



