Project name: takumayagi/fpl
Project URL: https://github.com/takumayagi/fpl
Language: Python 99.7%

# Future Person Localization in First-Person Videos (CVPR 2018)

This repository contains the code and data (caution: no raw images are provided) for the paper "Future Person Localization in First-Person Videos" by Takuma Yagi, Karttikeya Mangalam, Ryo Yonetani and Yoichi Sato.

## Prediction examples

## Requirements

We confirmed that the code works correctly with the versions listed below.
## Installation

### Download data

You can download our dataset from the link below. If you wish to download it via the terminal, consider using a custom script. Extract the downloaded tar.gz file at the root directory.
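As a minimal sketch of a terminal-based download and extraction (the URL and archive name below are placeholders, not the actual dataset link from this README):

```python
import tarfile
import urllib.request

# Placeholder URL and file name: substitute the actual dataset link given in this README.
DATASET_URL = "https://example.com/fpl_dataset.tar.gz"
ARCHIVE = "fpl_dataset.tar.gz"

# Download the archive, then extract it at the repository root.
urllib.request.urlretrieve(DATASET_URL, ARCHIVE)
with tarfile.open(ARCHIVE, "r:gz") as tar:
    tar.extractall(path=".")
```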
### Pseudo-video

Since we cannot release the raw images, we prepared a sample pseudo-video, linked below.

### Create dataset

Run the dataset generation script to preprocess the raw locations, poses, and egomotions.
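The actual preprocessing is done by the repository's generation script; the following is only an illustrative sketch, with invented array shapes and window lengths, of how per-frame locations, poses, and egomotions could be sliced into observation/prediction windows:

```python
import numpy as np

def make_windows(locations, poses, egomotions, t_obs=10, t_pred=10):
    """Slice one person's track into (observation, prediction-target) pairs.

    Hypothetical shapes, for illustration only (not the repository's actual format):
    locations: (T, 2) 2D person centers, poses: (T, K, 2) keypoints, egomotions: (T, D).
    """
    samples = []
    total = t_obs + t_pred
    for start in range(len(locations) - total + 1):
        obs = slice(start, start + t_obs)
        pred = slice(start + t_obs, start + total)
        samples.append({
            "obs_locations": locations[obs],
            "obs_poses": poses[obs],
            "obs_egomotions": egomotions[obs],
            "target_locations": locations[pred],
        })
    return samples
```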
### Prepare training script

Modify the `in_data` arguments in `scripts/5fold.json` so that they point at the generated dataset.
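A minimal sketch of this edit; the README only states that `scripts/5fold.json` contains `in_data` arguments, so the key layout and the path below are assumptions to adapt to the actual file:

```python
import json

CONFIG = "scripts/5fold.json"
NEW_IN_DATA = "datasets/generated_dataset.json"  # placeholder path, not the real one

with open(CONFIG) as f:
    config = json.load(f)

# Assumed to be a top-level key; adjust the lookup if the file nests it differently.
config["in_data"] = NEW_IN_DATA

with open(CONFIG, "w") as f:
    json.dump(config, f, indent=2)
```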
## Running the code

### Directory structure

### Training

In our environment (a single TITAN X Pascal with CUDA 8 and cuDNN 5.1), training took approximately 40 minutes per split.
### Evaluation
### Prediction visualization using pseudo-video

We provide visualization code that uses the pseudo-video.
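The repository ships its own visualization code; the sketch below is only an illustrative stand-in showing how observed and predicted future locations might be overlaid on a pseudo-video frame with matplotlib (all names and data here are invented):

```python
import matplotlib.pyplot as plt
import numpy as np

def draw_prediction(frame, obs_locs, pred_locs, gt_locs=None):
    """Overlay observed, predicted, and (optionally) ground-truth 2D locations
    of the target person on a single pseudo-video frame."""
    plt.figure(figsize=(8, 5))
    plt.imshow(frame)
    plt.plot(obs_locs[:, 0], obs_locs[:, 1], "o-", color="tab:blue", label="observed")
    plt.plot(pred_locs[:, 0], pred_locs[:, 1], "o-", color="tab:red", label="predicted")
    if gt_locs is not None:
        plt.plot(gt_locs[:, 0], gt_locs[:, 1], "o--", color="tab:green", label="ground truth")
    plt.legend()
    plt.axis("off")
    plt.show()

# Example call with dummy data (a gray frame and fabricated coordinates):
frame = np.full((720, 1280, 3), 128, dtype=np.uint8)
obs = np.cumsum(np.random.randn(10, 2) * 5, axis=0) + [400, 400]
pred = obs[-1] + np.cumsum(np.random.randn(10, 2) * 5, axis=0)
draw_prediction(frame, obs, pred)
```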
## License and Citation

The dataset provided in this repository is to be used only for non-commercial scientific purposes. If you use this dataset in a scientific publication, cite the following paper:

Takuma Yagi, Karttikeya Mangalam, Ryo Yonetani and Yoichi Sato. Future Person Localization in First-Person Videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.