k-NN -- The k-Nearest Neighbors

Let us assume we have sets $D_i$ representing $c$ classes, a parameter $k>0$, and a test sample $\mathbf{x}$. We want to classify $\mathbf{x}$ as a member of one of the classes $D_i$. k-NN does this very simply (figure 3.13 on page [*]):

1. Compute the distance from $\mathbf{x}$ to every training vector in the sets $D_i$.
2. Find the $k$ training vectors nearest to $\mathbf{x}$; they lie inside a hypersphere $C_n$ centred at $\mathbf{x}$.
3. Take a majority vote among the classes of these $k$ neighbors.

The last step says: classify the input vector $\mathbf{x}$ as a member of the class that has the majority inside the hypersphere $C_n$.
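The procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the author's implementation; the function name `knn_classify` and the example points are assumptions, and Euclidean distance is used for concreteness.

```python
from collections import Counter
import math

def knn_classify(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest training vectors.

    train  : list of training feature vectors
    labels : class label of each training vector
    x      : the test sample to classify
    """
    # Step 1: distance from x to every training vector (Euclidean here)
    dists = [(math.dist(v, x), y) for v, y in zip(train, labels)]
    # Step 2: the k nearest neighbours (those inside the hypersphere C_n)
    dists.sort(key=lambda d: d[0])
    k_nearest = [y for _, y in dists[:k]]
    # Step 3: majority vote among their classes
    return Counter(k_nearest).most_common(1)[0][0]

# Toy data in the spirit of figure 3.13: near T there are two 'x' and one 'o'
train = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.3), (3.0, 3.0)]
labels = ['x', 'x', 'o', 'o']
print(knn_classify(train, labels, (1.0, 1.1), k=3))  # -> 'x'
```

Ties in the vote (possible when $k$ is even) are broken here by whichever class `Counter` counted first; choosing an odd $k$, as in the figure, avoids the issue for two classes.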

Figure 3.13: Classification using k-NN, $k=3$. The test sample T is classified as $\times $, because the hypersphere surrounding T contains two elements from $\times $ and only one from $\circ $.
\includegraphics[width=40mm,height=40mm]{kNN.eps}

A special case of k-NN is 1-NN, which classifies the sample $\mathbf{x}$ into the class of the single training vector closest to $\mathbf{x}$.
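With $k=1$ no vote is needed, so the rule collapses to a single minimum. A minimal sketch (the name `nn_classify` and the sample points are assumptions for illustration):

```python
import math

def nn_classify(train, labels, x):
    """1-NN: assign x the label of the single closest training vector."""
    i = min(range(len(train)), key=lambda j: math.dist(train[j], x))
    return labels[i]

train = [(0.0, 0.0), (5.0, 5.0)]
labels = ['o', 'x']
print(nn_classify(train, labels, (1.0, 1.0)))  # -> 'o'
```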

A very important choice here is the metric used to find the nearest vectors. This was discussed in the section Error Function.
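To see why the metric matters, consider a case where the nearest neighbor changes with the distance function. The points and helper below are illustrative assumptions; with the query at the origin, the Euclidean (L2) metric picks one training vector while the Manhattan (L1) metric picks the other:

```python
import math

def nearest(train, x, metric):
    """Index of the training vector closest to x under the given metric."""
    return min(range(len(train)), key=lambda i: metric(train[i], x))

euclidean = math.dist                                            # L2 norm
manhattan = lambda u, v: sum(abs(a - b) for a, b in zip(u, v))   # L1 norm

train = [(3.0, 0.0), (2.0, 2.0)]
x = (0.0, 0.0)
print(nearest(train, x, euclidean))  # -> 1  (sqrt(8) ~ 2.83 < 3.0)
print(nearest(train, x, manhattan))  # -> 0  (3.0 < 4.0)
```

Since k-NN has no training phase, the metric is effectively the only modelling decision besides $k$ itself.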

Kocurek 2007-12-17