Unzip the *.zip files in turn and place the extracted images_part* contents into the same folder (Root/ProcessedData/NWPU/images).
Download the processed labels and the validation ground-truth file from this link. Place them in Root/ProcessedData/NWPU/masks and Root/ProcessedData/NWPU, respectively.
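The two steps above can be sketched as a short shell snippet. The zip filenames (images_part*.zip) and the download step are placeholders; substitute the actual archive names and the link from this page.

```shell
#!/bin/sh
# Assumed layout from the instructions above; archive names are illustrative.
ROOT=Root/ProcessedData/NWPU
mkdir -p "$ROOT/images" "$ROOT/masks"

# Unzip each image archive in turn, merging everything into images/
for z in images_part*.zip; do
  [ -e "$z" ] || continue          # skip if no archives are present yet
  unzip -q "$z" -d "$ROOT/images"
done

# The downloaded masks and the val ground-truth file then go to:
#   $ROOT/masks  and  $ROOT  respectively (move them manually or with mv).
```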
If you want to reproduce the results on the ShanghaiTech Part A/B, UCF-QNRF, and JHU datasets, follow the instructions in DATA.md to set up the datasets.
We test the pretrained HRNet model (trained on the NWPU dataset) in a real-world subway scene. Please visit bilibili or YouTube to watch the video demonstration.
Citation
If you find this project useful for your research, please cite:
@article{gao2020learning,
  title={Learning Independent Instance Maps for Crowd Localization},
  author={Gao, Junyu and Han, Tao and Yuan, Yuan and Wang, Qi},
  journal={arXiv preprint arXiv:2012.04164},
  year={2020}
}
Our code borrows heavily from the C^3 Framework; please also cite:
@article{gao2019c,
  title={C$^3$ Framework: An Open-source PyTorch Code for Crowd Counting},
  author={Gao, Junyu and Lin, Wei and Zhao, Bin and Wang, Dong and Gao, Chenyu and Wen, Jun},
  journal={arXiv preprint arXiv:1907.02724},
  year={2019}
}
If you use the pre-trained models in this repo (HRNet, VGG, and FPN), please cite them accordingly.