I create a JSON file with annotations and run the following code:
import matplotlib.pyplot as plt
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
import numpy as np
import skimage.io as io
import pylab
import json
pylab.rcParams['figure.figsize'] = (10.0, 8.0)
annType = ['segm','bbox','keypoints']
annType = annType[1] #specify type here
prefix = 'person_keypoints' if annType=='keypoints' else 'instances'
print('Running demo for *%s* results.' % annType)
# ground truth: the validation file converted from LabelMe
annFile = '/content/json_moded.json'
cocoGt = COCO(annFile)
# initialize the COCO detections API with the generated results
resFile = '/content/test_data_normal.json'
cocoDt = cocoGt.loadRes(resFile)
'''
# (disabled) derive the image ids from the results file instead:
dts = json.load(open(resFile, 'r'))
imgIds = [imid['image_id'] for imid in dts]
imgIds = sorted(list(set(imgIds)))
'''
imgIds = sorted(cocoGt.getImgIds())
'''
# (disabled) evaluate only a random subset of 24 images:
imgIds = imgIds[0:24]
imgId = imgIds[np.random.randint(24)]
'''
# run the bounding-box evaluation
cocoEval = COCOeval(cocoGt, cocoDt, annType)
cocoEval.params.imgIds = imgIds
'''
# (disabled) restrict evaluation to specific category ids; in the official
# COCO categories, 1 is 'person', so adjust the ids to your own dataset
cocoEval.params.catIds = [3]
'''
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
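As a sanity check I can compare the ids used by the two files (a sketch reusing the two paths above, and assuming the results file is a plain list of detections, which is what loadRes expects for box results):

import json
from pycocotools.coco import COCO

cocoGt = COCO('/content/json_moded.json')
dts = json.load(open('/content/test_data_normal.json', 'r'))

gt_img_ids = set(cocoGt.getImgIds())
gt_cat_ids = set(cocoGt.getCatIds())
dt_img_ids = {d['image_id'] for d in dts}
dt_cat_ids = {d['category_id'] for d in dts}

# ids that appear in the detections but not in the ground truth
print('image ids missing from GT:   ', dt_img_ids - gt_img_ids)
print('category ids missing from GT:', dt_cat_ids - gt_cat_ids)
print('number of GT annotations:    ', len(cocoGt.getAnnIds()))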
But the results are the following:
Running demo for *bbox* results.
loading annotations into memory...
Done (t=0.01s)
creating index...
index created!
Loading and preparing results...
DONE (t=0.00s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=0.11s).
Accumulating evaluation results...
DONE (t=0.04s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Why does every mAP return -1?
I don't know why it returns these results; I think both JSON files are correct.
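For reference, this is the structure pycocotools expects from the two files (a minimal sketch; the ids, the file name 'img1.jpg', and the category name 'my_class' are made up, and bbox is [x, y, width, height] in absolute pixels):

# minimal ground-truth file (annFile) skeleton
gt = {
    "images":      [{"id": 1, "file_name": "img1.jpg", "width": 640, "height": 480}],
    "categories":  [{"id": 3, "name": "my_class"}],
    "annotations": [{"id": 1, "image_id": 1, "category_id": 3,
                     "bbox": [100, 100, 50, 80], "area": 4000, "iscrowd": 0}],
}

# results file (resFile): a plain list of detections with confidence scores
dt = [{"image_id": 1, "category_id": 3,
       "bbox": [102, 98, 49, 83], "score": 0.9}]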
question from:
https://stackoverflow.com/questions/66051351/cocoeval-summarize-return-all-map-1