Let us assume we have sets $X_1, X_2, \dots, X_R$, which represent $R$ classes, and a test sample $T$. We want to classify $T$ as a member of one of the classes $X_1, \dots, X_R$. k-NN does this very simply (Figure 3.13):
- find the $k$ vectors closest to the test vector $T$; let these be $x_1, x_2, \dots, x_k$,
- make a hypersphere around $T$ with radius $r = \max_{i=1,\dots,k} d(T, x_i)$, i.e. the distance to the farthest of these $k$ vectors,
- classify $T$ as a member of the class $X_j$ with
$$ j = \arg\max_{i=1,\dots,R} \bigl|\{\, x \in X_i : d(T, x) \le r \,\}\bigr|. $$
The last step says: classify the input vector $T$ as a member of the class which has the majority inside the hypersphere around $T$.
Figure 3.13: Classification using k-NN, $k = 3$. The test sample $T$ is classified as a member of the first class, because the hypersphere surrounding $T$ contains two elements of that class and only one element of the other.
A special case of k-NN is 1-NN, where we classify the sample $T$ as a member of the same class as the single vector closest to $T$.
A very important point here is the metric used to find the closest vectors; this was discussed in the section Error Function.
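As a small, made-up illustration of why the metric matters, the snippet below shows that the Euclidean and Manhattan distances can disagree about which of two vectors is closer to $T$; all vectors and numbers are invented for the example.

```python
import numpy as np

T = np.array([0.0, 0.0])
a = np.array([3.0, 0.0])
b = np.array([2.0, 2.0])

# Euclidean (L2) distances: a -> 3.0, b -> ~2.83, so b is the nearest neighbour.
print(np.linalg.norm(a - T), np.linalg.norm(b - T))

# Manhattan (L1) distances: a -> 3.0, b -> 4.0, so a is the nearest neighbour.
print(np.abs(a - T).sum(), np.abs(b - T).sum())
```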