daviswer/fewshotlocal: Few-Shot Learning with Localization in Realistic Settings


Repository: daviswer/fewshotlocal

URL: https://github.com/daviswer/fewshotlocal

Language: Jupyter Notebook 86.3%

Introduction:

Few-Shot Learning with Localization in Realistic Settings

Code for the CVPR 2019 paper Few-Shot Learning with Localization in Realistic Settings. Due to the sheer number of independent moving parts and user-defined parameters, we are providing our code as a series of interactive Jupyter notebooks rather than automated Python scripts.

If you find this code or paper useful to your research work, please consider citing it using the following bibtex:

@InProceedings{Wertheimer_2019_CVPR,
  author = {Wertheimer, Davis and Hariharan, Bharath},
  title = {Few-Shot Learning With Localization in Realistic Settings},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2019}
}

Setup

This code requires PyTorch and torchvision 1.0.0 or higher with CUDA support, and Jupyter.

It has been tested on Ubuntu 16.04.
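As a quick sanity check before opening the notebooks, the following minimal snippet (ours, not part of the repository) verifies the installed versions and that CUDA is visible to PyTorch:

import torch
import torchvision

print("PyTorch:", torch.__version__)                  # should be 1.0.0 or higher
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())   # must report True for GPU training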

The meta-iNat and tiered meta-iNat ("Supercategory meta-iNat") datasets can be downloaded from here, or constructed manually.

To construct meta-iNat from scratch, you must download the iNat2017 dataset. Download and unpack the iNat2017 training/validation images, and the training bounding box annotations, to a directory of your choice. The images and bounding box annotations can be found here.
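As a rough sketch (not from the repository), assuming the bounding box annotations come as a COCO-style JSON file, you can inspect the unpacked download as shown below; the file name is hypothetical and should be replaced with the one you actually unpacked:

import json

# Hypothetical annotation file name; substitute the file you downloaded.
with open("train_2017_bboxes.json") as f:
    ann = json.load(f)

print(list(ann.keys()))                                # COCO-style files usually hold images/annotations/categories
print("images listed:", len(ann.get("images", [])))
print("bounding boxes:", len(ann.get("annotations", [])))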

Running the Scripts

If you are constructing meta-iNat from scratch, begin by running the Setup notebook, which constructs the meta-iNat dataset or a variant according to user-defined parameters. The default parameters reproduce the meta-iNat dataset used in the paper. If you downloaded meta-iNat directly, you can skip this step.
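If you want to sanity-check the constructed dataset, a minimal sketch like the following counts classes and images; it assumes the Setup notebook writes one folder of images per species under a single root directory, and the path is hypothetical:

import os

root = "meta_iNat"                             # hypothetical output directory
classes = [d for d in os.listdir(root) if os.path.isdir(os.path.join(root, d))]
counts = {c: len(os.listdir(os.path.join(root, c))) for c in classes}
print("species folders:", len(classes))
print("total images:", sum(counts.values()))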

The Train notebook trains an ensemble of learners in parallel, according to user-defined parameters. The default parameters reproduce the best-performing model in the paper (batch folding, covariance pooling, and few-shot localization).
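To give a sense of what covariance pooling refers to, here is a generic second-order pooling sketch in PyTorch; it illustrates the idea only and is not the implementation used in the Train notebook:

import torch

def covariance_pool(feat):
    # feat: (B, C, H, W) convolutional feature map
    B, C, H, W = feat.shape
    x = feat.reshape(B, C, H * W)              # treat each spatial location as a sample
    x = x - x.mean(dim=2, keepdim=True)        # center per channel
    return x @ x.transpose(1, 2) / (H * W - 1) # (B, C, C) covariance across locations

pooled = covariance_pool(torch.randn(2, 64, 10, 10))
print(pooled.shape)                            # torch.Size([2, 64, 64])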

The Evaluate notebook tests your trained models on the reference/query images, according to user-defined parameters. It is highly recommended that your parameters for evaluating a given model match the ones used to train it. The default parameters for the evaluation code match those for the training code.
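For intuition, evaluation in the prototypical-network baseline (the '000' model described under Results below) amounts to nearest-class-mean classification in embedding space. A minimal sketch, with illustrative shapes rather than the notebook's exact interface:

import torch

def prototype_accuracy(ref_emb, ref_labels, qry_emb, qry_labels):
    # ref_emb: (Nr, D) reference embeddings; qry_emb: (Nq, D) query embeddings
    classes = ref_labels.unique()
    protos = torch.stack([ref_emb[ref_labels == c].mean(0) for c in classes])  # (K, D) class means
    dists = torch.cdist(qry_emb, protos)       # (Nq, K) Euclidean distances to each prototype
    preds = classes[dists.argmin(dim=1)]       # nearest prototype wins
    return (preds == qry_labels).float().mean().item()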

Results

Three-digit model names indicate the presence or absence of batch folding, localization, and covariance pooling, in that order. For example, '101' indicates a model with batch folding and covariance pooling, but no localization. '000' is a standard prototypical network. Because two versions of localization exist, we use '0' to indicate no localization, '1' for few-shot localization, and '2' for unsupervised localization. A '*' indicates a model presented in the main paper.
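As a purely illustrative helper (not part of the repository), the naming convention above can be decoded like this:

def decode_model_name(name):
    # name: three characters, e.g. "101" -> batch folding, no localization, covariance pooling
    folding, localization, covariance = name
    return {
        "batch_folding": folding == "1",
        "localization": {"0": "none", "1": "few-shot", "2": "unsupervised"}[localization],
        "covariance_pooling": covariance == "1",
    }

print(decode_model_name("101"))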



