I'd take advantage of the labels array and use it for segmentation.
First, reshape it back to the same width/height as the input image.
labels = labels.reshape((img.shape[:-1]))
Now, let's say you want to grab all the pixels with label 2.
mask = cv2.inRange(labels, 2, 2)
And simply use it with cv2.bitwise_and to mask out the rest of the image.
mask = np.dstack([mask]*3) # Make it 3 channel
ex_img = cv2.bitwise_and(img, mask)
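As a side note, instead of stacking the mask into 3 channels, you could pass the single-channel output of cv2.inRange through the optional mask parameter of cv2.bitwise_and; that should be equivalent here:
ex_img = cv2.bitwise_and(img, img, mask=mask)  # mask stays single channel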
The nice thing about this approach is that you don't need to hardcode any colour ranges, so the same algorithm will work on many different images.
Sample Code:
Note: Written for OpenCV 3.x. Users of OpenCV 2.4.x need to change the call to cv2.kmeans appropriately (see the docs for the difference).
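For reference, the 2.4.x binding takes no bestLabels argument, so the call would look roughly like this (a sketch; check the 2.4.x docs for your exact version):
ret, labels, centers = cv2.kmeans(Z, K, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)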
import numpy as np
import cv2

img = cv2.imread('watermelon.jpg')

# Flatten the image into a list of float32 BGR samples for k-means
Z = np.float32(img.reshape((-1,3)))

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 4
_, labels, centers = cv2.kmeans(Z, K, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

# Reshape the labels back to image dimensions and build the colour-quantized image
labels = labels.reshape((img.shape[:-1]))
reduced = np.uint8(centers)[labels]

result = [np.hstack([img, reduced])]
for i, c in enumerate(centers):
    mask = cv2.inRange(labels, i, i)
    mask = np.dstack([mask] * 3)  # Make it 3 channel
    ex_img = cv2.bitwise_and(img, mask)
    ex_reduced = cv2.bitwise_and(reduced, mask)
    result.append(np.hstack([ex_img, ex_reduced]))

cv2.imwrite('watermelon_out.jpg', np.vstack(result))
Sample Output:
Sample Output with different colours: