OpenSource Name: implus/GFocalV2
OpenSource URL: https://github.com/implus/GFocalV2
OpenSource Language: Python 99.9%
OpenSource Introduction:

# Generalized Focal Loss V2: Learning Reliable Localization Quality Estimation for Dense Object Detection

GFocalV2 (GFLV2) is the next generation of GFocalV1 (GFLV1); it utilizes the statistics of learned bounding box distributions to guide reliable localization quality estimation. Again, GFLV2 improves over GFLV1 by about 1 AP with (almost) no extra computational cost!

An analysis of GFocalV2 on ZhiHu (in Chinese): 大白话 Generalized Focal Loss V2. More comments on GFocalV1 can be found in 大白话 Generalized Focal Loss (知乎).

More news:

- [2021.3] GFocalV2 has been accepted by CVPR 2021 (pre-review score: 113).
- [2020.11] GFocalV1 has been adopted in NanoDet, a super-efficient object detector for mobile devices, achieving the same performance as YOLOv4-Tiny while running 2x faster! More details are in the (Chinese) introduction to NanoDet: YOLO之外的另一选择，手机端97FPS的Anchor-Free目标检测模型NanoDet现已开源.
- [2020.10] Good news! GFocalV1 has been accepted at NeurIPS 2020, and GFocalV2 is on the way.
- [2020.9] The first-place winner of the GigaVision challenge (object detection and tracking) in the ECCV 2020 workshop, the DeepBlueAI team, adopted GFocalV1 in their solution.
- [2020.7] GFocalV1 is officially included in MMDetection V2; many thanks to @ZwwWayne and @hellock for helping migrate the code.

## Introduction

Localization Quality Estimation (LQE) is crucial and popular in the recent advancement of dense object detectors, since it provides accurate ranking scores that benefit Non-Maximum Suppression (NMS) and improve detection performance. As a common practice, most existing methods predict LQE scores from vanilla convolutional features shared with object classification or bounding box regression. In this paper, we explore a completely novel and different perspective for LQE: basing it on the learned distributions of the four parameters of the bounding box. These bounding box distributions were introduced as the "General Distribution" in GFLV1 and describe the uncertainty of the predicted bounding boxes well. This property makes the distribution statistics of a bounding box highly correlated with its real localization quality: a distribution with a sharp peak usually corresponds to high localization quality, and vice versa. By leveraging the close correlation between distribution statistics and real localization quality, we develop a considerably lightweight Distribution-Guided Quality Predictor (DGQP) for reliable LQE on top of GFLV1, thus producing GFLV2. To the best of our knowledge, this is the first attempt in object detection to use a highly relevant statistical representation to facilitate LQE. Extensive experiments demonstrate the effectiveness of our method. Notably, GFLV2 (ResNet-101) achieves 46.2 AP at 14.6 FPS, surpassing the previous state-of-the-art ATSS baseline (43.6 AP at 14.6 FPS) by an absolute 2.6 AP on COCO test-dev, without sacrificing efficiency in either training or inference. For details, see GFocalV2.

(Figure: speed-accuracy trade-off.)
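The paragraph above describes the core idea of GFLV2: statistics of the learned General Distribution are fed into a very small subnetwork that predicts a localization-quality scalar. Below is a minimal, self-contained sketch of such a Distribution-Guided Quality Predictor; the choice of statistics (top-k values plus their mean), the bin count, and the hidden width are illustrative assumptions and may differ from the repository's exact implementation.

```python
# Minimal sketch of a Distribution-Guided Quality Predictor (DGQP).
# Illustrative only: top-k size, bin count, and hidden width are assumptions,
# not the exact settings used in this repository.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DGQP(nn.Module):
    """Predict a localization-quality scalar from box-distribution statistics."""

    def __init__(self, reg_bins=17, topk=4, hidden=64):
        super().__init__()
        self.reg_bins = reg_bins  # bins of the General Distribution per box side
        self.topk = topk          # number of largest probabilities kept per side
        # Input: 4 sides x (top-k values + their mean) -> tiny 2-layer MLP.
        self.fc = nn.Sequential(
            nn.Linear(4 * (topk + 1), hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, reg_logits):
        # reg_logits: (N, 4 * reg_bins) raw regression outputs for N locations.
        prob = F.softmax(reg_logits.view(-1, 4, self.reg_bins), dim=-1)
        topk_vals, _ = prob.topk(self.topk, dim=-1)                       # (N, 4, topk)
        stats = torch.cat([topk_vals, topk_vals.mean(-1, keepdim=True)], dim=-1)
        return self.fc(stats.flatten(1))                                  # (N, 1) in [0, 1]


# Usage: scale classification scores by the predicted quality to obtain a
# joint classification-IoU score used for NMS ranking.
cls_score = torch.rand(8, 80)            # dummy classification scores
reg_logits = torch.randn(8, 4 * 17)      # dummy distribution logits
quality = DGQP()(reg_logits)             # (8, 1)
joint_score = cls_score * quality        # reliable ranking score
```

A sharply peaked distribution concentrates its mass in a few bins, so its top-k values are large and the predicted quality is high; a flat, uncertain distribution yields small top-k values and a low quality score.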
## Get Started

Please see GETTING_STARTED.md for the basic usage of MMDetection.

### Train

```shell
# Assume that you are under the root directory of this project,
# that you have activated your virtual environment if needed,
# and that the COCO dataset is placed in 'data/coco/'.
./tools/dist_train.sh configs/gfocal/gfocal_r50_fpn_ms2x.py 8 --validate
```

### Inference

```shell
./tools/dist_test.sh configs/gfocal/gfocal_r50_fpn_ms2x.py work_dirs/gfocal_r50_fpn_ms2x/epoch_24.pth 8 --eval bbox
```

### Speed Test (FPS)

```shell
CUDA_VISIBLE_DEVICES=0 python3 ./tools/benchmark.py configs/gfocal/gfocal_r50_fpn_ms2x.py work_dirs/gfocal_r50_fpn_ms2x/epoch_24.pth
```
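Besides the distributed scripts above, a single image can also be run through a trained model with MMDetection V2's high-level Python API. This is a sketch assuming an MMDetection V2-style environment; the config and checkpoint paths reuse those from the commands above, and `demo.jpg` is a placeholder image path.

```python
# Single-image inference with MMDetection V2's high-level API (sketch).
from mmdet.apis import init_detector, inference_detector, show_result_pyplot

config_file = 'configs/gfocal/gfocal_r50_fpn_ms2x.py'
checkpoint_file = 'work_dirs/gfocal_r50_fpn_ms2x/epoch_24.pth'

# Build the model from the config and load the trained weights.
model = init_detector(config_file, checkpoint_file, device='cuda:0')

# Run detection on one image; the result is a list of per-class bbox arrays.
result = inference_detector(model, 'demo.jpg')

# Visualize detections above a score threshold.
show_result_pyplot(model, 'demo.jpg', result, score_thr=0.3)
```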
## Models

For your convenience, we provide trained GFocalV2 models (see the repository for the full table and download links). All models are trained with a mini-batch of 16 images on 8 GPUs.

[0] The numbers reported here are from new experimental trials (in the cleaned repo) and may differ slightly from those in the original paper.

## Acknowledgement

Thanks to the MMDetection team for the wonderful open-source project!

## Citation

If you find GFocal useful in your research, please consider citing:
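For reference, minimal BibTeX entries assembled from the paper titles and venues mentioned above might look like the following; the author lists are abbreviated and all fields should be verified against the upstream repository before use.

```bibtex
% Illustrative entries only -- verify all fields against the upstream repository.
@inproceedings{li2021gflv2,
  title     = {Generalized Focal Loss V2: Learning Reliable Localization Quality Estimation for Dense Object Detection},
  author    = {Li, Xiang and others},
  booktitle = {CVPR},
  year      = {2021}
}

@inproceedings{li2020gfl,
  title     = {Generalized Focal Loss: Learning Qualified and Distributed Bounding Boxes for Dense Object Detection},
  author    = {Li, Xiang and others},
  booktitle = {NeurIPS},
  year      = {2020}
}
```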