I have a Python script that uses cv2.calibrateCamera to calibrate a camera from a few views of a checkerboard. After a successful calibration I take all the original points, make some plots, and recompute the re-projection error myself. To my surprise, the reprojection error reported by OpenCV and the one I compute are slightly different, which I find strange. Am I computing it the wrong way?
import cv2
import numpy as np

obj_points = []  # 3d points in real world space (list of arrays)
img_points = []  # 2d points in image plane (list of arrays)
...
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, (w, h), camera_matrix, dist_coeffs,
    rvecs, tvecs, calib_flags + cv2.CALIB_USE_INTRINSIC_GUESS, criteria)
print "Final reprojection error opencv: ", ret

# Compute the mean of the reprojection error myself
tot_mean_error = 0
for i in xrange(len(obj_points)):
    # Project the object points back into the image with the estimated pose
    reprojected_points, _ = cv2.projectPoints(
        obj_points[i], rvecs[i], tvecs[i], camera_matrix, dist_coeffs)
    reprojected_points = reprojected_points.reshape(-1, 2)
    # Mean Euclidean distance between detected and reprojected points
    mean_error_image = np.sum(
        np.sqrt(np.sum((img_points[i].reshape(-1, 2) - reprojected_points) ** 2,
                       axis=-1))) / len(reprojected_points)
    tot_mean_error += mean_error_image

mean_error = tot_mean_error / len(obj_points)
print "Mean reprojection error: ", mean_error
Final reprojection error opencv: 0.571030279037
Mean reprojection error: 0.438696960449
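For reference, the OpenCV documentation describes the value returned by cv2.calibrateCamera as the overall RMS re-projection error, i.e. the squared distances are pooled over all points of all views before averaging, rather than averaging per-image means as my loop does. Below is a minimal sketch of that RMS formulation; rms_reprojection_error is a hypothetical helper name, and the arguments are the variables from the script above.

import cv2
import numpy as np

def rms_reprojection_error(obj_points, img_points, rvecs, tvecs,
                           camera_matrix, dist_coeffs):
    # Pool the squared point-to-point distances over every view,
    # then take the square root of the global mean (RMS).
    total_sq_error = 0.0
    total_points = 0
    for i in range(len(obj_points)):
        reprojected, _ = cv2.projectPoints(obj_points[i], rvecs[i], tvecs[i],
                                           camera_matrix, dist_coeffs)
        reprojected = reprojected.reshape(-1, 2)
        diff = img_points[i].reshape(-1, 2) - reprojected
        total_sq_error += np.sum(diff ** 2)
        total_points += len(reprojected)
    return np.sqrt(total_sq_error / total_points)

Since an RMS weights large residuals more heavily than a plain mean of Euclidean distances, the two numbers would not be expected to coincide exactly even if both computations are correct.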