If you remove the alpha channel, you will see that the background of the template is dark green, so it will only match the dark background squares. You are reading the template with alpha, but the alpha channel is not used in the template matching. You need to extract the alpha channel of the template as a mask and use the mask option in matchTemplate. That should fix the issue.
You also seem to be converting the input to grayscale while trying to match with a colored template. Note that you can do template matching directly on color images.
Here is the template without alpha:
Here is the alpha channel from the template:
See https://docs.opencv.org/4.1.1/df/dfb/group__imgproc__object.html#ga586ebfb0a7fb604b35a23d85391329be
mask -- Mask of searched template. It must have the same datatype and size with templ. It is not set by default. Currently, only the TM_SQDIFF and TM_CCORR_NORMED methods are supported.
In case of a color image, template summation in the numerator and each sum in the denominator is done over all of the channels and separate mean values are used for each channel. That is, the function can take a color template and a color image. The result will still be a single-channel image, which is easier to analyze.
So here is your example in Python/OpenCV with color images and masked template matching.
Input:
Template:
import cv2
import numpy as np
# read chessboard image
img = cv2.imread('chessboard.png')
# read pawn image template
template = cv2.imread('pawn.png', cv2.IMREAD_UNCHANGED)
hh, ww = template.shape[:2]
# extract pawn base image and alpha channel and make alpha 3 channels
pawn = template[:,:,0:3]
alpha = template[:,:,3]
alpha = cv2.merge([alpha,alpha,alpha])
# do masked template matching and save correlation image
correlation = cv2.matchTemplate(img, pawn, cv2.TM_CCORR_NORMED, mask=alpha)
# set threshold and get all matches
threshold = 0.89
loc = np.where(correlation >= threshold)
# draw matches
result = img.copy()
for pt in zip(*loc[::-1]):
    cv2.rectangle(result, pt, (pt[0]+ww, pt[1]+hh), (0,0,255), 1)
    print(pt)
# save results
cv2.imwrite('chessboard_pawn.png', pawn)
cv2.imwrite('chessboard_alpha.png', alpha)
cv2.imwrite('chessboard_matches.jpg', result)
cv2.imshow('pawn',pawn)
cv2.imshow('alpha',alpha)
cv2.imshow('result',result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Template without alpha channel:
Extracted alpha channel as mask:
Resulting match locations on input:
But note that each location is really several nearby matches, so there are actually too many matches.
(83, 1052)
(252, 1052)
(253, 1052)
(254, 1052)
(423, 1052)
(592, 1052)
(593, 1052)
(594, 1052)
(763, 1052)
(932, 1052)
(933, 1052)
(934, 1052)
(1103, 1052)
(1272, 1052)
(1273, 1052)
(1274, 1052)
(82, 1053)
(83, 1053)
(84, 1053)
(252, 1053)
(253, 1053)
(254, 1053)
(422, 1053)
(423, 1053)
(424, 1053)
(592, 1053)
(593, 1053)
(594, 1053)
(762, 1053)
(763, 1053)
(764, 1053)
(932, 1053)
(933, 1053)
(934, 1053)
(1102, 1053)
(1103, 1053)
(1104, 1053)
(1272, 1053)
(1273, 1053)
(1274, 1053)
(82, 1054)
(83, 1054)
(84, 1054)
(252, 1054)
(253, 1054)
(254, 1054)
(422, 1054)
(423, 1054)
(424, 1054)
(592, 1054)
(593, 1054)
(594, 1054)
(762, 1054)
(763, 1054)
(764, 1054)
(932, 1054)
(933, 1054)
(934, 1054)
(1102, 1054)
(1103, 1054)
(1104, 1054)
(1272, 1054)
(1273, 1054)
(1274, 1054)
(82, 1055)
(83, 1055)
(84, 1055)
(252, 1055)
(253, 1055)
(254, 1055)
(422, 1055)
(423, 1055)
(424, 1055)
(592, 1055)
(593, 1055)
(594, 1055)
(762, 1055)
(763, 1055)
(764, 1055)
(932, 1055)
(933, 1055)
(934, 1055)
(1102, 1055)
(1103, 1055)
(1104, 1055)
(1272, 1055)
(1273, 1055)
(1274, 1055)
The proper way to deal with multiple matches would be to mask out each match region in the correlation image in a loop, so that nearby non-peak matches that are above the threshold are avoided.
Here is one way to do that.
import cv2
import numpy as np
import math
# read chessboard image
img = cv2.imread('chessboard.png')
# read pawn image template
template = cv2.imread('pawn.png', cv2.IMREAD_UNCHANGED)
hh, ww = template.shape[:2]
# extract pawn base image and alpha channel and make alpha 3 channels
pawn = template[:,:,0:3]
alpha = template[:,:,3]
alpha = cv2.merge([alpha,alpha,alpha])
# set threshold
threshold = 0.89
# do masked template matching and save correlation image
corr_img = cv2.matchTemplate(img, pawn, cv2.TM_CCORR_NORMED, mask=alpha)
# search for max score
result = img.copy()
max_val = 1
rad = int(math.sqrt(hh*hh+ww*ww)/4)
while max_val > threshold:
    # find max value of correlation image
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(corr_img)
    print(max_val, max_loc)
    if max_val > threshold:
        # draw match on copy of input
        cv2.rectangle(result, max_loc, (max_loc[0]+ww, max_loc[1]+hh), (0,0,255), 2)
        # write black circle at max_loc in corr_img
        cv2.circle(corr_img, max_loc, radius=rad, color=0, thickness=cv2.FILLED)
    else:
        break
# save results
cv2.imwrite('chessboard_pawn.png', pawn)
cv2.imwrite('chessboard_alpha.png', alpha)
cv2.imwrite('chessboard_correlation.png', (255*corr_img).clip(0,255).astype(np.uint8))
cv2.imwrite('chessboard_matches2.jpg', result)
cv2.imshow('pawn',pawn)
cv2.imshow('alpha',alpha)
cv2.imshow('result',result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Resulting Matches:
And here are the actual matches with their scores:
0.8956151008605957 (253, 1053)
0.8956151008605957 (593, 1053)
0.8956151008605957 (933, 1053)
0.8956151008605957 (1273, 1053)
0.89393150806427 (83, 1054)
0.89393150806427 (423, 1054)
0.89393150806427 (763, 1054)
0.89393150806427 (1103, 1054)
0.886812150478363 (1128, 1232)
Correlation image with circular masked-out regions: