K-Nearest Neighbors for Machine Learning

In this post you will discover the k-Nearest Neighbors (KNN) algorithm for classification and regression. After reading this post you will know:

  • The model representation used by KNN.
  • How a model is learned using KNN (hint, it’s not).
  • How to make predictions using KNN.
  • The many names for KNN including how different fields refer to it.
  • How to prepare your data to get the most from KNN.
  • Where to look to learn more about the KNN algorithm.

This post was written for developers and assumes no background in statistics or mathematics. The focus is on how the algorithm works and how to use it for predictive modeling problems. If you have any questions, leave a comment and I will do my best to answer.

Let’s get started.

Photo by Valentin Ottone, some rights reserved.

KNN Model Representation

The model representation for KNN is the entire training dataset.

It is as simple as that.

KNN has no model other than storing the entire dataset, so there is no learning required.

Efficient implementations can store the data using complex data structures, such as k-d trees, so that looking up and matching new patterns during prediction is fast.

Because the entire training dataset is stored, you may want to think carefully about the consistency of your training data. It might be a good idea to curate it, update it often as new data becomes available and remove erroneous and outlier data.
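
To make this concrete, here is a minimal sketch that stores a tiny made-up training set in a k-d tree and looks up the nearest neighbors of a new point using SciPy:

# Sketch: the "model" is just the stored training data; a k-d tree makes
# neighbor look-up fast at prediction time (the data below is made up).
import numpy as np
from scipy.spatial import cKDTree

X_train = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 1.0], [6.0, 5.0]])
tree = cKDTree(X_train)

x_new = np.array([2.5, 2.5])
distances, indices = tree.query(x_new, k=3)  # the 3 most similar training rows
print(indices, distances)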

Making Predictions with KNN

KNN makes predictions using the training dataset directly.

Predictions are made for a new instance (x) by searching through the entire training set for the K most similar instances (the neighbors) and summarizing the output variable for those K instances. For regression, this might be the mean output value; for classification, it might be the mode (most common) class value.

To determine which of the K instances in the training dataset are most similar to a new input, a distance measure is used. For real-valued input variables, the most popular distance measure is Euclidean distance.

Euclidean distance is calculated as the square root of the sum of the squared differences between a new point (x) and an existing point (xi) across all input attributes j.

EuclideanDistance(x, xi) = sqrt( sum_j( (x_j - xi_j)^2 ) )
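
In Python, that formula might look like the following sketch:

from math import sqrt

# Euclidean distance between a new point x and an existing point xi,
# summed over all input attributes j.
def euclidean_distance(x, xi):
    return sqrt(sum((x[j] - xi[j]) ** 2 for j in range(len(x))))

# For example: euclidean_distance([1.0, 2.0], [4.0, 6.0]) gives 5.0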

Other popular distance measures include:

  • Hamming Distance: Calculates the distance between binary vectors.
  • Manhattan Distance: Calculates the distance between real-valued vectors using the sum of their absolute differences. Also called City Block Distance.
  • Minkowski Distance: A generalization of Euclidean and Manhattan distance.

There are many other distance measures that can be used, such as Tanimoto, Jaccard, Mahalanobis and cosine distance. You can choose the best distance metric based on the properties of your data. If you are unsure, you can experiment with different distance metrics and different values of K together and see which mix results in the most accurate models.

Euclidean is a good distance measure to use if the input variables are similar in type (e.g. all measured widths and heights). Manhattan distance is a good measure to use if the input variables are not similar in type (such as age, gender, height, etc.).

The value for K can be found by algorithm tuning. It is a good idea to try many different values for K (e.g. values from 1 to 21) and see what works best for your problem.
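
As a sketch of what this tuning might look like with scikit-learn (assuming your inputs and outputs are already loaded into X and y, which are placeholder names here):

# Sketch: tune K (and the distance metric at the same time) with cross-validation.
# Assumes scikit-learn is installed and X, y already hold your inputs and outputs.
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

param_grid = {
    "n_neighbors": list(range(1, 22)),  # K values from 1 to 21
    "metric": ["euclidean", "manhattan", "minkowski"],
}
search = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)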

The computational complexity of KNN increases with the size of the training dataset. For very large training sets, KNN can be made stochastic by taking a sample from the training dataset from which to calculate the K-most similar instances.

KNN has been around for a long time and has been very well studied. As such, different disciplines have different names for it, for example:

  • Instance-Based Learning: The raw training instances are used to make predictions. As such, KNN is often referred to as instance-based learning or case-based learning (where each training instance is a case from the problem domain).
  • Lazy Learning: No learning of the model is required and all of the work happens at the time a prediction is requested. As such, KNN is often referred to as a lazy learning algorithm.
  • Non-Parametric: KNN makes no assumptions about the functional form of the problem being solved. As such KNN is referred to as a non-parametric machine learning algorithm.

KNN can be used for regression and classification problems.

KNN for Regression

When KNN is used for regression problems the prediction is based on the mean or the median of the K-most similar instances.
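
A minimal from-scratch sketch of this idea, reusing the euclidean_distance() function above (X_train, y_train and x_new are placeholder names):

# Sketch: KNN regression as the mean of the outputs of the K most similar instances.
def knn_regress(X_train, y_train, x_new, k=3):
    distances = [euclidean_distance(x_new, xi) for xi in X_train]
    neighbors = sorted(range(len(X_train)), key=lambda i: distances[i])[:k]
    return sum(y_train[i] for i in neighbors) / k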

KNN for Classification

When KNN is used for classification, the output can be calculated as the class with the highest frequency from the K-most similar instances. Each instance in essence votes for their class and the class with the most votes is taken as the prediction.

Class probabilities can be calculated as the normalized frequency of samples that belong to each class in the set of K most similar instances for a new data instance. For example, in a binary classification problem (class is 0 or 1):

p(class=0) = count(class=0) / (count(class=0)+count(class=1))
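
A minimal sketch of the vote and the normalized class frequencies, again reusing euclidean_distance() (names are placeholders):

# Sketch: KNN classification by majority vote, plus normalized class frequencies.
from collections import Counter

def knn_classify(X_train, y_train, x_new, k=5):
    distances = [euclidean_distance(x_new, xi) for xi in X_train]
    neighbors = sorted(range(len(X_train)), key=lambda i: distances[i])[:k]
    votes = Counter(y_train[i] for i in neighbors)
    probabilities = {cls: count / k for cls, count in votes.items()}
    prediction = votes.most_common(1)[0][0]
    return prediction, probabilities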

If you have an even number of classes (e.g. 2), it is a good idea to choose an odd value for K to avoid a tie. The inverse also holds: use an even value for K when you have an odd number of classes.

Ties can be broken consistently by expanding K by 1 and looking at the class of the next most similar instance in the training dataset.

Curse of Dimensionality

KNN works well with a small number of input variables (p), but struggles when the number of inputs is very large.

Each input variable can be considered a dimension of a p-dimensional input space. For example, if you had two input variables x1 and x2, the input space would be 2-dimensional.

As the number of dimensions increases the volume of the input space increases at an exponential rate.

In high dimensions, points that are similar can still be separated by very large distances. All points end up far away from each other, and our intuition for distances in simple 2- and 3-dimensional spaces breaks down. This might feel unintuitive at first, but this general problem is called the “Curse of Dimensionality”.
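
You can see this effect for yourself with a quick experiment (a sketch using random points purely for illustration):

# Sketch: as the number of dimensions grows, the nearest and farthest points
# from a given point end up at almost the same distance.
import numpy as np

rng = np.random.default_rng(1)
for p in [2, 10, 100, 1000]:
    X = rng.random((500, p))                  # 500 random points in p dimensions
    d = np.linalg.norm(X - X[0], axis=1)[1:]  # distances from the first point
    print(p, round(d.min() / d.max(), 3))     # ratio creeps towards 1.0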

How to Best Prepare Data for KNN

  • Rescale Data: KNN performs much better if all of the data has the same scale. Normalizing your data to the range [0, 1] is a good idea (a sketch of this is shown after this list). It may also be a good idea to standardize your data if it has a Gaussian distribution.
  • Address Missing Data: Missing data will mean that the distance between samples cannot be calculated. These samples could be excluded or the missing values could be imputed.
  • Lower Dimensionality: KNN is suited for lower dimensional data. You can try it on high dimensional data (hundreds or thousands of input variables) but be aware that it may not perform as well as other techniques. KNN can benefit from feature selection that reduces the dimensionality of the input feature space.
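
Here is a minimal sketch of the rescaling recommendation using scikit-learn (again assuming X and y are placeholder names for your inputs and outputs):

# Sketch: normalize inputs to the range [0, 1] before KNN.
# Assumes scikit-learn is installed and X, y already hold your data.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

model = make_pipeline(MinMaxScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X, y)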

Further Reading

If you are interested in implementing KNN from scratch in Python, check out the tutorial post on this blog.

Below are some good machine learning texts that cover the KNN algorithm from a predictive modeling perspective.

  1. Applied Predictive Modeling, Chapter 7 for regression, Chapter 13 for classification.
  2. Data Mining: Practical Machine Learning Tools and Techniques, pages 76 and 128.
  3. Doing Data Science: Straight Talk from the Frontline, page 71
  4. Machine Learning, Chapter 8

Also check out the K-Nearest Neighbors article on Wikipedia.

Summary

In this post you discovered the KNN machine learning algorithm. You learned that:

  • KNN stores the entire training dataset which it uses as its representation.
  • KNN does not learn any model.
  • KNN makes predictions just-in-time by calculating the similarity between an input sample and each training instance.
  • There are many distance measures to choose from to match the structure of your input data.
  • It is a good idea to rescale your data, such as by normalizing it, when using KNN.

If you have any questions about this post or the KNN algorithm ask in the comments and I will do my best to answer.


24 Responses to K-Nearest Neighbors for Machine Learning

  1. Roberto July 23, 2016 at 4:37 am #

    Is KNN good for finding the nearest date between two sets of data, excluding nearest neighbors that have already been used? If not, what algorithm would you suggest to solve this issue?

    My question is this:

    A = [‘2016-05-01 00:17:10’, ‘2016-05-01 00:49:06’]

    B = [ ‘2016-05-01 01:49:02’, ‘2016-05-01 01:17:06’]

    With A[0] the nearest neighbor is B[1], so B[1] is not used again in the next step, and the nearest neighbor for A[1] is then B[0].

    Thanks for your advice.

  2. Bhavani Shanker K October 30, 2016 at 11:06 pm #

    Hi Jason,
    Kindly accept my compliments for your illustrative article on kNN.
    I am extremely eager to know how the range of 1 to 21 for k was arrived at, since the rationale for the value k = 21 is not explicit.
    I would be very obliged if you could kindly throw light on the methodology for arriving at the lower and upper bounds of the k value.

    • Jason Brownlee October 31, 2016 at 5:30 am #

      Hi Bhavani, great question.

      There is no formula to calculate the best k value for a problem.

      My advice is to use trial and error. Experiment with different k values on your problem and see which results in the model with the best performance.

  3. Georgina November 1, 2016 at 2:18 am #

    Hi Jason!

    I was wondering if there’s any way for me to retrieve the neighbour IDs?
    ie: If n = 10, I would like to specify data point x, and have the algorithm reply with the IDs of the 10 nearest neighbours of x.

    Is that a thing that can be done? I’m working in R with Caret. I’d like to stick with that if possible.

    • Jason Brownlee November 1, 2016 at 8:01 am #

      I’m not sure what you mean by ID, Georgina. Perhaps the specific row number or a column value.

      You may need to implement something custom for your requirements.

  4. Anand February 1, 2017 at 10:25 pm #

    Hello Jason

    I have a data set of the time required for each state to complete. For example, state 1 - 5.2 sec, state 2 - 5.5 sec, state 3 - 5.2 sec, etc. Can I use KNN to match an input and say which state it belongs to if the input is not exactly close to the training data set? And how complex should my data set be to solve this kind of multi-class matching?

    • Jason Brownlee February 2, 2017 at 1:57 pm #

      Sorry Anand, I’m not sure I understand the question.

  5. Shameema binth Umer February 2, 2017 at 8:08 pm #

    Thank you very much for your effort. May the God bless.

  6. Burak May 18, 2017 at 9:59 pm #

    Hello,
    I wonder, how should the input be formatted?

    I have a CSV file that contains “Price” and “Score” for games. When I try to label the graph, Python gives an error for my input.

    thanks

  7. Bogdan June 1, 2017 at 1:38 am #

    Hello, and thanks for your useful article.

    Is it true that KNN leads to overfitting for a small number of neighbors (k=1, for example)?

    • Jason Brownlee June 2, 2017 at 12:51 pm #

      KNN will give different results on different problems.

  8. Shweta Tamrakar July 4, 2017 at 4:07 pm #

    Does the KNN algorithm work on categorical variables too?

  9. Melwin Jose July 6, 2017 at 3:23 am #

    Is there a way to select features for/using KNN? Like the way LASSO regression can be used for feature selection. I am assuming that LASSO cannot be used prior to running KNN, as the former is a parametric model and the latter a non-parametric one.

  10. isaranja July 12, 2017 at 4:28 pm #

    How can I find the most influential features for the KNN implementation?
    I want to select a few features from the list of features.

  11. bhagyashree August 27, 2017 at 12:48 am #

    hi jason,
    Please clarify my doubt: when I went through the kNN algorithm while implementing it in R, we did not use lazy or eager learning techniques. When should we use them?
    Regards,
    Bhagyashree

    • Jason Brownlee August 27, 2017 at 5:49 am #

      KNN is a lazy learning algorithm.

      Practically all other common ML algorithms could be considered eager learning (e.g. preparing the model from training data).

  12. Stonehead Park September 24, 2017 at 3:03 pm #

    Awesome post. Thank you for your effort. 🙂

  13. Shrobon Biswas October 13, 2017 at 5:18 am #

    I am unable to download the mindmap.
