**Average reading time: 4 minutes, level: Advanced**

In this blog, we will continue to talk about **computer vision** in robotics and introduce a simple supervised-learning classification algorithm called K-nearest neighbours, or the KNN algorithm.

**Supervised learning in robotics lets the robot use labelled training data as a reference to predict the label of a new observation.**

The training data consists of a set of training examples. In supervised learning, each example is a pair consisting of an input object (typically a feature vector) and the desired output value (also called the supervisory signal).

KNN is a classification algorithm. Classification is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations (or instances) whose category membership is known.

Take an example: you have moved to a new city for a job, and the city has two kinds of sports lovers: cricket fans and football fans. Based on the flat allocated to you, the algorithm looks at who your nearest neighbours are and predicts whether you are a cricket or a football lover. Suppose you are in flat 1 on the second floor and your neighbour in flat 2 is a football lover: looking at that single nearest neighbour, the algorithm would initially classify you as a football lover. But the algorithm also checks the other nearby flats, say on the first floor, for more football followers. Ideally, you should not be labelled a football lover just because your single nearest neighbour likes football while all the other flats on floor 1 like cricket. The vote over your k nearest neighbours decides whether you are labelled a football or a cricket lover.
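The flat example above can be sketched in a few lines of plain Python. This is a minimal, hypothetical illustration (the flat coordinates and sports labels are made up for this post): each flat is a (floor, flat number) point, and the new resident is labelled by a majority vote over the k nearest flats.

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Label `query` by a majority vote of its k nearest training points."""
    # Sort labelled points by squared Euclidean distance to the query.
    by_distance = sorted(
        train,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], query)),
    )
    # Take the labels of the k closest points and vote.
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Hypothetical flats: (floor, flat number) -> favourite sport.
flats = [
    ((2, 2), "football"),   # your neighbour on the second floor
    ((1, 1), "cricket"),    # the first floor mostly likes cricket
    ((1, 2), "cricket"),
    ((1, 3), "cricket"),
]

# The new resident in flat (2, 1): the single nearest neighbour likes
# football, but with k=3 the cricket lovers on floor 1 outvote them.
print(knn_classify(flats, (2, 1), k=1))  # football
print(knn_classify(flats, (2, 1), k=3))  # cricket
```

Notice how the answer flips as k grows: that is exactly the "keep checking the other flats" behaviour described above.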

Did this example confuse you more? We get that. Let us try a simpler one with shapes. Given an image with multiple shapes, the algorithm can label the shapes as circles, triangles and squares.

Or take the real-life case from the previous **blog**, where we looked at a cat's image and used contours to draw the curves; we could then apply the KNN algorithm to label the image as a cat or a dog.

As you may have understood by now, KNN does no real training up front: it simply stores the entire data set and compares each new feature vector against it at prediction time. This heavy reliance on the stored data set is the major drawback of the algorithm, and it is why KNN is sometimes called a 'lazy algorithm'. The method is also sometimes referred to as "learning by example", because for prediction it looks for the feature vector with a known response that is closest to the given vector.

Now let us look at the OpenCV Python functions used to implement KNN.

**cv2.KNearest.train() - The method trains the K-Nearest model.**

**cv2.KNearest.find_nearest() - For each input vector (a row of the matrix samples), the method finds the k nearest neighbours.**

The details of the parameters can be found at this **link** on the OpenCV site.
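To see what these two calls do without needing OpenCV installed, here is a rough NumPy mirror of the same two-step workflow: `train()` just stores the labelled samples (KNN is lazy), and `find_nearest()` measures distances and votes. The class `KNearestSketch` and its internals are our own hypothetical stand-in, not the OpenCV implementation; refer to the OpenCV documentation linked above for the real parameters.

```python
import numpy as np

class KNearestSketch:
    """Hypothetical stand-in mirroring cv2.KNearest's two-step workflow."""

    def train(self, samples, responses):
        # Lazy learner: "training" only stores the labelled feature vectors.
        self.samples = np.asarray(samples, dtype=np.float32)
        self.responses = np.asarray(responses)

    def find_nearest(self, queries, k):
        # Euclidean distance from every query row to every stored sample.
        queries = np.asarray(queries, dtype=np.float32)
        dists = np.linalg.norm(
            queries[:, None, :] - self.samples[None, :, :], axis=2
        )
        # Indices of the k closest samples per query, then a majority vote.
        nearest = np.argsort(dists, axis=1)[:, :k]
        results = []
        for row in nearest:
            labels, counts = np.unique(self.responses[row], return_counts=True)
            results.append(labels[np.argmax(counts)])
        return np.array(results)

knn = KNearestSketch()
# Two tight clusters of 2-D feature vectors, labelled 0 and 1.
knn.train([[0, 0], [0, 1], [5, 5], [5, 6]], [0, 0, 1, 1])
print(knn.find_nearest([[0.2, 0.4], [5.1, 5.2]], k=3))  # [0 1]
```

The real OpenCV calls follow the same shape: you hand `train()` a matrix of sample rows with their responses, then hand `find_nearest()` new rows and a value of k.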

In the next blog, we will use sample Python code to call these functions to detect a number written on an image, and also with a video source as input. Robots such as SID2 use OpenCV to detect a red ball and chase it in real time. Have a look at SID2 **here**. Please like, share and comment.

*Source : http://docs.opencv.org/2.4/modules/ml/doc/k_nearest_neighbors.html*