Open-source name: d-li14/mobilenetv2.pytorch
Open-source URL: https://github.com/d-li14/mobilenetv2.pytorch
Open-source language: Python 100.0%

# PyTorch Implementation of MobileNet V2
+ Release of next generation of MobileNet in my repo *mobilenetv3.pytorch*
+ Release of advanced design of MobileNetV2 in my repo *HBONet* [ICCV 2019]
+ Release of better pre-trained models. See below for details.

Reproduction of the MobileNetV2 architecture as described in *MobileNetV2: Inverted Residuals and Linear Bottlenecks* by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov and Liang-Chieh Chen, on the ILSVRC2012 benchmark with the PyTorch framework. This implementation provides an example procedure for training and validating any prevalent deep neural network architecture, with modular data processing, training, logging and visualization integrated.

## Requirements

### Dependencies
## Dataset

Download the ImageNet dataset and move the validation images into labeled subfolders. To do this, you can use the following script: https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh

## Pretrained models

The pretrained MobileNetV2 1.0 achieves 72.834% top-1 accuracy and 91.060% top-5 accuracy on the ImageNet validation set, which is higher than the statistics reported in the original paper and the official TensorFlow implementation.

### MobileNetV2 with a spectrum of width multipliers
### MobileNetV2 1.0 with a spectrum of input resolutions
Taking MobileNetV2 1.0 as an example, pretrained models can be easily imported using the following lines and then finetuned for other vision tasks or utilized in resource-aware platforms.

```python
import torch

from models.imagenet import mobilenetv2

net = mobilenetv2()
net.load_state_dict(torch.load('pretrained/mobilenetv2-c5e733a8.pth'))
```

## Usage

### Training

Configuration to reproduce our strong results efficiently, consuming around 2 days on 4x Titan XP GPUs with non-distributed DataParallel and the PyTorch dataloader.
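Before launching a full ImageNet run, note that the finetuning mentioned above usually amounts to swapping the final classifier and optionally freezing the backbone. A hedged sketch, assuming the model exposes its final linear layer as `net.classifier` (check `models/imagenet/mobilenetv2.py` for the actual attribute name):

```python
import torch.nn as nn

def replace_classifier(net: nn.Module, num_classes: int) -> nn.Module:
    """Swap the final linear layer for a new task with num_classes outputs.
    The attribute name `classifier` is assumed from this repo's model code."""
    in_features = net.classifier.in_features
    net.classifier = nn.Linear(in_features, num_classes)
    return net

def head_only_params(net: nn.Module):
    """Freeze the backbone and return only the new head's parameters,
    e.g. for a quick linear-probe style finetuning run."""
    for p in net.parameters():
        p.requires_grad = False
    for p in net.classifier.parameters():
        p.requires_grad = True
    return [p for p in net.parameters() if p.requires_grad]
```

An optimizer would then be built over `head_only_params(net)` alone, leaving the pretrained features untouched.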
The newly released model achieves even higher accuracy, with a larger batch size (1024) on 8 GPUs, a higher initial learning rate (0.4) and longer training (250 epochs). In addition, a dropout layer with a dropout rate of 0.2 is inserted before the final FC layer, no weight decay is imposed on biases and BN layers, and the learning rate ramps up from 0.1 to 0.4 over the first five training epochs.

```shell
python imagenet.py \
    -a mobilenetv2 \
    -d <path-to-ILSVRC2012-data> \
    --epochs 150 \
    --lr-decay cos \
    --lr 0.05 \
    --wd 4e-5 \
    -c <path-to-save-checkpoints> \
    --width-mult <width-multiplier> \
    --input-size <input-resolution> \
    -j <num-workers>
```

### Test

```shell
python imagenet.py \
    -a mobilenetv2 \
    -d <path-to-ILSVRC2012-data> \
    --weight <pretrained-pth-file> \
    --width-mult <width-multiplier> \
    --input-size <input-resolution> \
    -e
```

## Citations

The following is a BibTeX entry for the MobileNetV2 paper that you should cite if you use this model.
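The entry itself did not survive extraction; the sketch below is reconstructed from the paper named above (published at CVPR 2018, with the citation key as an assumption), so verify it against the repository before citing:

```bibtex
@inproceedings{sandler2018mobilenetv2,
  title     = {MobileNetV2: Inverted Residuals and Linear Bottlenecks},
  author    = {Sandler, Mark and Howard, Andrew and Zhu, Menglong and Zhmoginov, Andrey and Chen, Liang-Chieh},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2018}
}
```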
If you find this implementation helpful in your research, please also consider citing:
## License

This repository is licensed under the Apache License 2.0.