Open-source project name (OpenSource Name): vasgaowei/TS-CAM
Open-source project URL (OpenSource Url): https://github.com/vasgaowei/TS-CAM
Open-source language (OpenSource Language): Jupyter Notebook 99.7%
Open-source introduction (OpenSource Introduction):

# TS-CAM: Token Semantic Coupled Attention Map for Weakly Supervised Object Localization

This is the official implementation of the paper TS-CAM: Token Semantic Coupled Attention Map for Weakly Supervised Object Localization, accepted as an ICCV 2021 poster. This repository contains PyTorch training code, evaluation code, pretrained models, and a Jupyter notebook for more visualization.

## Illustration

Based on DeiT, TS-CAM couples attention maps from the vision transformer with semantic-aware maps to obtain accurate localization maps (Token Semantic Coupled Attention Map, ts-cam).

## Updates
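The coupling described in the Illustration above can be sketched in a few lines. This is a hypothetical NumPy illustration of the general idea (an elementwise product of a class-agnostic attention map and a class-specific semantic-aware map, then normalization), not the repository's actual implementation; the function name and toy maps are invented for the example.

```python
# Hypothetical sketch of the TS-CAM coupling idea (not the repository's code):
# a class-agnostic attention map from the transformer is combined elementwise
# with a class-specific semantic-aware map to localize an object.
import numpy as np

def couple_maps(attention_map: np.ndarray, semantic_map: np.ndarray) -> np.ndarray:
    """Elementwise-couple an (H, W) attention map with an (H, W) semantic map,
    then normalize the result to [0, 1]."""
    ts_cam = attention_map * semantic_map
    ts_cam = ts_cam - ts_cam.min()
    denom = ts_cam.max()
    return ts_cam / denom if denom > 0 else ts_cam

# Toy 2x2 example: attention peaks at the bottom-right, semantics vary.
attn = np.array([[0.2, 0.4], [0.4, 1.0]])
sem = np.array([[1.0, 0.3], [0.3, 0.5]])
cam = couple_maps(attn, sem)
```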
## Model Zoo

We provide pretrained TS-CAM models trained on the CUB-200-2011 and ImageNet ILSVRC2012 datasets.

### CUB-200-2011 dataset
### ILSVRC2012 dataset
Note: the extraction code for Baidu Drive is gwg7.
## Usage

First clone the repository locally:
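The original command block was lost in extraction; a minimal sketch, using the repository URL given above:

```shell
# Clone the repository and enter its directory
git clone https://github.com/vasgaowei/TS-CAM.git
cd TS-CAM
```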
Then install PyTorch 1.7.0+, torchvision 0.8.1+, and pytorch-image-models 0.3.2:
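The install command itself was lost in extraction; one plausible form, using the versions stated above (pytorch-image-models is published as the `timm` package on PyPI):

```shell
# Versions as stated in the README; adjust the torch/torchvision builds
# to match your CUDA version if needed.
pip install torch==1.7.0 torchvision==0.8.1 timm==0.3.2
```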
## Data preparation

### CUB-200-2011 dataset

Please download and extract the CUB-200-2011 dataset. The directory structure is the following:
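The directory tree was lost in extraction; the sketch below assumes the standard CUB-200-2011 distribution layout, which may differ from the exact paths this repository expects:

```
CUB-200-2011/
├── images/
│   ├── 001.Black_footed_Albatross/
│   └── ...
├── images.txt
├── image_class_labels.txt
├── train_test_split.txt
└── bounding_boxes.txt
```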
### ImageNet1k

Download the ILSVRC2012 dataset and extract the train and val images. The directory structure is organized as follows:
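The tree block was lost in extraction; the sketch below assumes the conventional ImageNet folder-per-class layout (one WordNet-ID directory per class), which may differ from the exact paths this repository expects:

```
ILSVRC2012/
├── train/
│   ├── n01440764/
│   │   ├── n01440764_10026.JPEG
│   │   └── ...
│   └── ...
└── val/
    ├── n01440764/
    └── ...
```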
The training and validation data are expected to be in the train and val directories, respectively.

## For training

On CUB-200-2011 dataset:
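The actual training command was lost in extraction; the script and flag names below are hypothetical, so consult the repository for the real entry point:

```shell
# Hypothetical invocation (script name and flags are assumptions,
# not taken from the repository):
python train_cam.py --dataset CUB --arch deit_small_patch16_224
```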
On the ImageNet1k dataset:
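As above, the original command block is missing; a hypothetical sketch with assumed script and flag names:

```shell
# Hypothetical invocation (script name and flags are assumptions,
# not taken from the repository):
python train_cam.py --dataset ImageNet --arch deit_small_patch16_224
```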
Please note that the ImageNet-1k pretrained weights of DeiT-tiny, DeiT-small, and DeiT-base will be downloaded the first time you train a model, so an Internet connection is required.

## For evaluation

On CUB-200-2011 dataset:
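The evaluation command was lost in extraction; a hypothetical sketch with assumed script and flag names, loading one of the pretrained checkpoints from the Model Zoo:

```shell
# Hypothetical invocation (script name and flags are assumptions,
# not taken from the repository):
python test_cam.py --dataset CUB --arch deit_small_patch16_224 \
    --resume path/to/ts_cam_cub_checkpoint.pth
```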
On the ImageNet1k dataset:
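Again the original command is missing; a hypothetical sketch with assumed script and flag names:

```shell
# Hypothetical invocation (script name and flags are assumptions,
# not taken from the repository):
python test_cam.py --dataset ImageNet --arch deit_small_patch16_224 \
    --resume path/to/ts_cam_ilsvrc_checkpoint.pth
```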
## Visualization

We provide a Jupyter notebook for visualization.
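One of the baselines compared in the visualizations below is Attention Rollout (Abnar and Zuidema, 2020), which aggregates attention across transformer layers. A minimal NumPy sketch of that technique (not this repository's code; the function name and toy inputs are invented for the example):

```python
# Sketch of Attention Rollout: recursively multiply per-layer, head-averaged
# attention matrices, mixing in the identity to account for residual
# connections and re-normalizing rows to keep them stochastic.
import numpy as np

def attention_rollout(attentions):
    """attentions: list of (tokens, tokens) head-averaged attention matrices."""
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for attn in attentions:
        attn = 0.5 * attn + 0.5 * np.eye(n)             # residual connection
        attn = attn / attn.sum(axis=-1, keepdims=True)  # row-normalize
        rollout = attn @ rollout
    return rollout

# Toy example: 3 tokens, 2 layers of uniform attention.
layers = [np.full((3, 3), 1.0 / 3) for _ in range(2)]
r = attention_rollout(layers)
```

Because each layer's matrix is row-stochastic after normalization, the rolled-out matrix is too, so each row can be read as a distribution over input tokens.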
Please download the pretrained TS-CAM model weights and try more visualization results (attention maps using our method and the Attention Rollout method). You can also try other interesting images to show their localization maps (ts-cams).

### Visualize localization results

We provide some visualization results as follows.

### Visualize attention maps

We can also visualize attention maps from different transformer layers.

## Contacts

If you have any question about our work or this repository, please don't hesitate to contact us by email. You can also open an issue under this project.

## Citation

If you use this code for a paper, please cite: