The following example is adapted from the scikit-learn docs, modified slightly to make what is happening clearer; for more details see:
https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html
Say we have 6 points, each belonging to either class 0 or class 1:
# Import libraries
from sklearn.neighbors import KNeighborsClassifier

# Data
X = [[5, 5], [3, 3], [4, 4.5], [8, 9.3], [8, 9.1], [10, 15]]
y = [0, 0, 0, 1, 1, 1]

# Define the kNN classifier and fit the model
neigh = KNeighborsClassifier(n_neighbors=4)
neigh.fit(X, y)

# Print the probabilities of class 0 and class 1, respectively
print('Predicted probabilities of 0 and 1:', neigh.predict_proba([[4, 4]]))    # [[0.75 0.25]]
print('Predicted probabilities of 0 and 1:', neigh.predict_proba([[10, 12]]))  # [[0.25 0.75]]
What you are basically doing is saying: OK, we have n_neighbors=4, so the four closest points to [4, 4] are 3 of class 0 and 1 of class 1. Therefore you have 3/4 = 0.75 probability of being class 0 and 1/4 = 0.25 of being class 1 (this is the first print). Note that if you set n_neighbors=3 and refit the model, the three closest points to [4, 4] are all class 0, so the probabilities become [1, 0] (3/3 = 1 of being class 0 and 0/3 = 0 of being class 1).
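The neighbor-counting logic above can be sketched in plain Python. This is only an illustration of the idea, not sklearn's actual implementation; the helper name `knn_proba` is made up for this sketch:

```python
import math

X = [[5, 5], [3, 3], [4, 4.5], [8, 9.3], [8, 9.1], [10, 15]]
y = [0, 0, 0, 1, 1, 1]

def knn_proba(query, X, y, k):
    # Sort training points by Euclidean distance to the query point
    order = sorted(range(len(X)), key=lambda i: math.dist(query, X[i]))
    # Labels of the k nearest neighbors
    nearest = [y[i] for i in order[:k]]
    # Probability of each class = fraction of the k neighbors in that class
    return [nearest.count(c) / k for c in (0, 1)]

print(knn_proba([4, 4], X, y, 4))   # -> [0.75, 0.25]
print(knn_proba([4, 4], X, y, 3))   # -> [1.0, 0.0]
```

Running it on the same data reproduces the fractions discussed above: with k=4 the result is [0.75, 0.25], and with k=3 it is [1.0, 0.0].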
One question that may arise is: how do they decide that one point is close to another? The documentation says that by default the Minkowski distance with p=2 is used, which is equivalent to computing the Euclidean distance between [4, 4] and each of the points in your training data (see https://en.wikipedia.org/wiki/Minkowski_distance).
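To see that equivalence concretely, here is a small sketch that implements the Minkowski distance directly (the function name `minkowski` is just for illustration) and checks that with p=2 it matches the standard library's Euclidean `math.dist`:

```python
import math

def minkowski(a, b, p=2):
    # Minkowski distance: (sum |a_i - b_i|^p)^(1/p); p=2 gives Euclidean
    return sum(abs(ai - bi) ** p for ai, bi in zip(a, b)) ** (1 / p)

a, b = [4, 4], [4, 4.5]
print(minkowski(a, b, p=2))  # -> 0.5
print(math.dist(a, b))       # -> 0.5, the same value
```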