A Tour of Machine Learning Algorithms

In this post, we take a tour of the most popular machine learning algorithms. It is useful to tour the main algorithms in the field to get a feel for what methods are available.

There are so many algorithms available and it can feel overwhelming when algorithm names are thrown around and you are expected to just know what they are and where they fit.

In this post I want to give you two ways to think about and categorize the algorithms you may come across in the field.

  • The first is a grouping of algorithms by the learning style.
  • The second is a grouping of algorithms by similarity in form or function (like grouping similar animals together).

Both approaches are useful, but we will focus on the grouping of algorithms by similarity and go on a tour of a variety of different algorithm types.

After reading this post, you will have a much better understanding of the most popular machine learning algorithms for supervised learning and how they are related.

A cool example of an ensemble of lines of best fit: weak members are grey, the combined prediction is red. Plot from Wikipedia, licensed under public domain.

Algorithms Grouped by Learning Style

There are different ways an algorithm can model a problem based on its interaction with the experience or environment, or whatever we want to call the input data.

It is popular in machine learning and artificial intelligence textbooks to first consider the learning styles that an algorithm can adopt.

There are only a few main learning styles or learning models that an algorithm can have and we’ll go through them here with a few examples of algorithms and problem types that they suit.

This taxonomy or way of organizing machine learning algorithms is useful because it forces you to think about the roles of the input data and the model preparation process, and to select the approach that is most appropriate for your problem in order to get the best result.

Let’s take a look at three different learning styles in machine learning algorithms:

Supervised Learning

Input data is called training data and has a known label or result, such as spam/not-spam or a stock price at a time.

A model is prepared through a training process where it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data.

Example problems are classification and regression.

Example algorithms include Logistic Regression and the Back Propagation Neural Network.
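
To make this concrete, below is a minimal sketch of supervised learning in Python with scikit-learn (the library choice and synthetic data are mine for illustration; the post itself does not prescribe a tool):

    # Hedged sketch: supervised learning with labelled training data (scikit-learn assumed).
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Training data: features X with known labels y (e.g. spam/not-spam).
    X, y = make_classification(n_samples=200, n_features=5, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    model = LogisticRegression()
    model.fit(X_train, y_train)         # predictions corrected against known labels
    print(model.score(X_test, y_test))  # accuracy on held-out examples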

Unsupervised Learning

Input data is not labelled and does not have a known result.

A model is prepared by deducing structures present in the input data. This may be to extract general rules, it may be through a mathematical process to systematically reduce redundancy, or it may be to organize the data by similarity.

Example problems are clustering, dimensionality reduction and association rule learning.

Example algorithms include the Apriori algorithm and k-Means.
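
For contrast with the supervised sketch above, here is k-Means in scikit-learn; note that no labels are passed to the algorithm (the data and cluster count are illustrative):

    # Hedged sketch: unsupervised learning deduces structure from unlabelled inputs.
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=150, centers=3, random_state=42)  # labels discarded

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
    labels = kmeans.fit_predict(X)  # groups found by similarity alone
    print(labels[:10])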

Semi-Supervised Learning

Input data is a mixture of labelled and unlabelled examples.

There is a desired prediction problem but the model must learn the structures to organize the data as well as make predictions.

Example problems are classification and regression.

Example algorithms are extensions to other flexible methods that make assumptions about how to model the unlabelled data.
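
As one concrete, assumed choice, scikit-learn's LabelPropagation spreads the few known labels through the structure of the data; by that library's convention, unlabelled examples are marked with -1:

    # Hedged sketch: semi-supervised learning with mostly unlabelled data.
    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.semi_supervised import LabelPropagation

    X, y = make_blobs(n_samples=100, centers=2, random_state=42)
    y_partial = np.copy(y)
    y_partial[10:] = -1  # keep labels for only the first 10 examples

    model = LabelPropagation().fit(X, y_partial)
    print((model.transduction_ == y).mean())  # how well structure filled the gaps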


When crunching data to model business decisions, you are most typically using supervised and unsupervised learning methods.

A hot topic at the moment is semi-supervised learning methods in areas such as image classification where there are large datasets with very few labelled examples.

Algorithms Grouped By Similarity

Algorithms are often grouped by similarity in terms of their function (how they work). For example, tree-based methods and neural-network-inspired methods.

I think this is the most useful way to group algorithms and it is the approach we will use here.

This is a useful grouping method, but it is not perfect. There are still algorithms that could just as easily fit into multiple categories, like Learning Vector Quantization, which is both a neural-network-inspired method and an instance-based method. There are also categories that share a name describing both the problem and the class of algorithm, such as Regression and Clustering.

We could handle these cases by listing algorithms twice or by selecting the group that subjectively is the “best” fit. I like this latter approach of not duplicating algorithms to keep things simple.

In this section I list many of the popular machine learning algorithms, grouped in the way I think is most intuitive. It is not exhaustive in either the groups or the algorithms, but I think it is representative and will be useful to you to get an idea of the lay of the land.

Please Note: There is a strong bias towards algorithms used for classification and regression, the two most prevalent supervised machine learning problems you will encounter.

If you know of an algorithm or a group of algorithms not listed, put it in the comments and share it with us. Let’s dive in.

Regression Algorithms

Regression is concerned with modelling the relationship between variables, iteratively refining that model using a measure of error in the predictions it makes.

Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. This may be confusing because we can use regression to refer to both the class of problem and the class of algorithm. Really, regression is a process.

The most popular regression algorithms are:

  • Ordinary Least Squares Regression (OLSR)
  • Linear Regression
  • Logistic Regression
  • Stepwise Regression
  • Multivariate Adaptive Regression Splines (MARS)
  • Locally Estimated Scatterplot Smoothing (LOESS)
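
To show the "iteratively refined using a measure of error" idea directly, here is a small gradient-descent fit of a line in plain NumPy (the data, learning rate, and iteration count are arbitrary illustrations):

    # Hedged sketch: a least-squares line fit refined step by step from its error.
    import numpy as np

    rng = np.random.default_rng(42)
    X = rng.normal(size=100)
    y = 3.0 * X + 2.0 + rng.normal(scale=0.1, size=100)  # true slope 3, intercept 2

    w, b = 0.0, 0.0
    for _ in range(500):
        err = (w * X + b) - y        # the measure of error in the predictions
        w -= 0.1 * (err * X).mean()  # refine the model to reduce that error
        b -= 0.1 * err.mean()
    print(w, b)  # should approach 3.0 and 2.0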

Instance-based Algorithms

Instance-based learning models a decision problem with instances or examples of training data that are deemed important or required by the model.

Such methods typically build up a database of example data and compare new data to the database using a similarity measure in order to find the best match and make a prediction. For this reason, instance-based methods are also called winner-take-all methods and memory-based learning. Focus is put on representation of the stored instances and similarity measures used between instances.

The most popular instance-based algorithms are:

  • k-Nearest Neighbour (kNN)
  • Learning Vector Quantization (LVQ)
  • Self-Organizing Map (SOM)
  • Locally Weighted Learning (LWL)
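
A small from-scratch sketch of k-Nearest Neighbour shows the stored-database-plus-similarity idea; the distance measure and the value of k are illustrative choices:

    # Hedged sketch: predict by majority vote of the k most similar stored examples.
    import numpy as np

    def knn_predict(X_train, y_train, x, k=3):
        dists = np.linalg.norm(X_train - x, axis=1)    # similarity via Euclidean distance
        nearest = np.argsort(dists)[:k]                # the k best matches in the database
        return np.bincount(y_train[nearest]).argmax()  # winner-take-all vote

    X_train = np.array([[0, 0], [0, 1], [5, 5], [6, 5]])
    y_train = np.array([0, 0, 1, 1])
    print(knn_predict(X_train, y_train, np.array([5.5, 5.0])))  # -> 1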

Regularization Algorithms

Regularization methods are extensions made to another method (typically regression methods) that penalize models based on their complexity, favoring simpler models that are also better at generalizing.

I have listed regularization algorithms separately here because they are popular, powerful and generally simple modifications made to other methods.

The most popular regularization algorithms are:

  • Ridge Regression
  • Least Absolute Shrinkage and Selection Operator (LASSO)
  • Elastic Net
  • Least-Angle Regression (LARS)
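
A quick comparison in scikit-learn shows the characteristic difference between the L2 (Ridge) and L1 (LASSO) penalties; the dataset and alpha values are arbitrary:

    # Hedged sketch: complexity penalties shrink (Ridge) or zero out (LASSO) coefficients.
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Lasso, Ridge

    X, y = make_regression(n_samples=100, n_features=10, n_informative=3,
                           noise=5, random_state=42)

    ridge = Ridge(alpha=1.0).fit(X, y)
    lasso = Lasso(alpha=1.0).fit(X, y)
    print(ridge.coef_.round(2))  # all coefficients shrunk toward zero
    print(lasso.coef_.round(2))  # exact zeros on the uninformative features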

Decision Tree Algorithms

Decision tree methods construct a model of decisions made based on actual values of attributes in the data.

Decisions fork in tree structures until a prediction decision is made for a given record. Decision trees are trained on data for classification and regression problems. Decision trees are often fast and accurate and a big favorite in machine learning.

The most popular decision tree algorithms are:

  • Classification and Regression Tree (CART)
  • Iterative Dichotomiser 3 (ID3)
  • C4.5 and C5.0 (different versions of a powerful approach)
  • Chi-squared Automatic Interaction Detection (CHAID)
  • Decision Stump
  • M5
  • Conditional Decision Trees
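
For a feel of how decisions fork on attribute values, here is a small CART-style sketch using scikit-learn's DecisionTreeClassifier (the depth cap is purely for readability):

    # Hedged sketch: a shallow classification tree printed as its decision rules.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)
    tree = DecisionTreeClassifier(max_depth=2, random_state=42).fit(X, y)
    print(export_text(tree))  # decisions fork on attribute values until a leaf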

Bayesian Algorithms

Bayesian methods are those that explicitly apply Bayes’ Theorem to problems such as classification and regression.

The most popular Bayesian algorithms are:

  • Naive Bayes
  • Gaussian Naive Bayes
  • Multinomial Naive Bayes
  • Averaged One-Dependence Estimators (AODE)
  • Bayesian Belief Network (BBN)
  • Bayesian Network (BN)
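
As a minimal illustration of applying Bayes’ Theorem with a naive feature-independence assumption, here is Gaussian Naive Bayes in scikit-learn (the dataset is chosen purely for convenience):

    # Hedged sketch: posterior class probabilities from Bayes' Theorem.
    from sklearn.datasets import load_iris
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)
    model = GaussianNB().fit(X, y)     # per-class Gaussians over each feature
    print(model.predict_proba(X[:2]))  # posterior probability of each class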

Clustering Algorithms

Clustering, like regression, describes both the class of problem and the class of methods.

Clustering methods are typically organized by their modelling approach, such as centroid-based or hierarchical. All methods are concerned with using the inherent structures in the data to best organize it into groups of maximum commonality.

The most popular clustering algorithms are:

  • k-Means
  • k-Medians
  • Expectation Maximisation (EM)
  • Hierarchical Clustering
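
Since k-Means was sketched earlier, here is the hierarchical style instead, via scikit-learn's AgglomerativeClustering (the data and cluster count are illustrative):

    # Hedged sketch: hierarchical clustering merges the closest groups bottom-up.
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.datasets import make_blobs

    X, _ = make_blobs(n_samples=60, centers=3, random_state=42)
    labels = AgglomerativeClustering(n_clusters=3).fit_predict(X)
    print(labels)  # groups of maximum commonality, no labels used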

Association Rule Learning Algorithms

Association rule learning methods extract the rules that best explain observed relationships between variables in data.

These rules can discover important and commercially useful associations in large multidimensional datasets that can be exploited by an organisation.

The most popular association rule learning algorithms are:

  • Apriori algorithm
  • Eclat algorithm
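
The key Apriori property, that an itemset can only be frequent if all of its subsets are frequent, can be sketched in a few lines of plain Python (the transactions and support threshold are toy values):

    # Hedged sketch: frequent pairs built only from frequent single items.
    from collections import Counter
    from itertools import combinations

    transactions = [{"milk", "bread"}, {"milk", "eggs"},
                    {"milk", "bread", "eggs"}, {"bread", "eggs"}]
    min_support = 2

    # Pass 1: count single items and keep the frequent ones.
    items = Counter(i for t in transactions for i in t)
    frequent = {i for i, c in items.items() if c >= min_support}

    # Pass 2: only pairs of frequent items can themselves be frequent.
    pairs = Counter(p for t in transactions
                    for p in combinations(sorted(t & frequent), 2))
    print({p: c for p, c in pairs.items() if c >= min_support})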

Artificial Neural Network Algorithms

Artificial Neural Networks are models that are inspired by the structure and/or function of biological neural networks.

They are a class of pattern-matching methods commonly used for regression and classification problems, but they are really an enormous subfield comprising hundreds of algorithms and variations for all manner of problem types.

Note that I have separated out Deep Learning from neural networks because of the massive growth and popularity of the field. Here we are concerned with the more classical methods.

The most popular artificial neural network algorithms are:

  • Perceptron
  • Back-Propagation
  • Hopfield Network
  • Radial Basis Function Network (RBFN)
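
The Perceptron captures the "corrected when wrong" training loop in its simplest form; below is a from-scratch sketch learning logical AND (the learning rate and pass count are arbitrary):

    # Hedged sketch: weights nudged only when a prediction is wrong.
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])  # logical AND, which is linearly separable

    w, b = np.zeros(2), 0.0
    for _ in range(10):  # a few passes over the training data
        for xi, target in zip(X, y):
            pred = int(w @ xi + b > 0)
            update = 0.1 * (target - pred)  # zero when the prediction is correct
            w += update * xi
            b += update
    print([int(w @ xi + b > 0) for xi in X])  # -> [0, 0, 0, 1]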

Deep Learning Algorithms

Deep Learning methods are a modern update to Artificial Neural Networks that exploit abundant cheap computation.

They are concerned with building much larger and more complex neural networks, and as commented above, many methods are concerned with semi-supervised learning problems where large datasets contain very little labelled data.

The most popular deep learning algorithms are:

  • Deep Boltzmann Machine (DBM)
  • Deep Belief Networks (DBN)
  • Convolutional Neural Network (CNN)
  • Stacked Auto-Encoders
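
A minimal CNN definition in Keras (assuming TensorFlow is installed) gives a feel for how these larger networks are assembled; the layer sizes are illustrative choices, not recommendations:

    # Hedged sketch: a tiny convolutional network for 28x28 grayscale images.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(16, 3, activation="relu"),  # learn local visual features
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),   # class probabilities
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()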

Dimensionality Reduction Algorithms

Like clustering methods, dimensionality reduction methods seek and exploit the inherent structure in the data, but in this case in an unsupervised manner, in order to summarize or describe the data using less information.

This can be useful to visualize high-dimensional data or to simplify data which can then be used in a supervised learning method. Many of these methods can be adapted for use in classification and regression.

The most popular dimensionality reduction algorithms are:

  • Principal Component Analysis (PCA)
  • Principal Component Regression (PCR)
  • Partial Least Squares Regression (PLSR)
  • Sammon Mapping
  • Multidimensional Scaling (MDS)
  • Projection Pursuit
  • Linear Discriminant Analysis (LDA)
  • Mixture Discriminant Analysis (MDA)
  • Quadratic Discriminant Analysis (QDA)
  • Flexible Discriminant Analysis (FDA)
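
As a minimal sketch, PCA in scikit-learn summarizes the four iris features with two components, and reports how much of the information each component keeps:

    # Hedged sketch: unsupervised projection onto the top principal components.
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA

    X, _ = load_iris(return_X_y=True)  # labels ignored: this is unsupervised
    pca = PCA(n_components=2)
    X_2d = pca.fit_transform(X)
    print(X_2d.shape)                      # (150, 2)
    print(pca.explained_variance_ratio_)   # information kept per component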

Ensemble Algorithms

Ensemble methods are models composed of multiple weaker models that are independently trained and whose predictions are combined in some way to make the overall prediction.

Much effort is put into what types of weak learners to combine and the ways in which to combine them. This is a very powerful class of techniques and as such is very popular.

The most popular ensemble algorithms are:

  • Boosting
  • Bootstrapped Aggregation (Bagging)
  • AdaBoost
  • Stacked Generalization (blending)
  • Gradient Boosting Machines (GBM)
  • Gradient Boosted Regression Trees (GBRT)
  • Random Forest
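
A small comparison shows the ensemble effect: many shallow trees combined by voting usually beat a single shallow tree (all parameters here are illustrative):

    # Hedged sketch: one weak learner versus a voting ensemble of 100 of them.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=300, random_state=42)

    single = DecisionTreeClassifier(max_depth=2, random_state=42)
    forest = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=42)

    print(cross_val_score(single, X, y).mean())  # one weak model
    print(cross_val_score(forest, X, y).mean())  # combined prediction, usually stronger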

Other Algorithms

Many algorithms were not covered.

For example, what group would Support Vector Machines go into? Its own?

I did not cover algorithms from speciality tasks in the process of machine learning, such as:

  • Feature selection algorithms
  • Algorithm accuracy evaluation
  • Performance measures

I also did not cover algorithms from speciality sub-fields of machine learning, such as:

  • Computational intelligence (evolutionary algorithms, etc.)
  • Computer Vision (CV)
  • Natural Language Processing (NLP)
  • Recommender Systems
  • Reinforcement Learning
  • Graphical Models
  • And more…

These may feature in future posts.

Get your FREE Algorithms Mind Map

Sample of the handy machine learning algorithms mind map.

I've created a handy mind map of 60+ algorithms organized by type.

Download it, print it and use it to jump-start your next machine learning project.



Further Reading

This tour of machine learning algorithms was intended to give you an overview of what is out there and some ideas on how to relate algorithms to each other.

I’ve collected together some resources for you to continue your reading on algorithms. If you have a specific question, please leave a comment.

Other Lists of Algorithms

There are other great lists of algorithms out there if you’re interested. Below are a few hand-selected examples.

How to Study Machine Learning Algorithms

Algorithms are a big part of machine learning. It’s a topic I am passionate about and write about a lot on this blog. Below are a few hand-selected posts that might interest you for further reading.

How to Run Machine Learning Algorithms

Sometimes you just want to dive into code. Below are some links you can use to run machine learning algorithms, code them up using standard libraries or implement them from scratch.

Final Word

I hope you have found this tour useful.

Please, leave a comment if you have any questions or ideas on how to improve the algorithm tour.

Update #1: Continue the discussion on HackerNews and reddit.

Update #2: I’ve added a bunch more resources and more algorithms. I’ve also added a handy mind map that you can download (see above).

57 Responses to A Tour of Machine Learning Algorithms

  1. Bruce December 20, 2013 at 5:10 pm #

    What about reinforcement learning algorithms in algorithm similarity classification?
There is also one called the Gibbs algorithm under Bayesian Learning.

    • jasonb December 26, 2013 at 8:34 pm #

Good point Bruce, I left out those methods. Would you like me to write a post about reinforcement learning methods?

      • Jason's fan August 22, 2015 at 6:39 am #


        P.S. Please :0

  2. qnaguru February 17, 2014 at 5:46 pm #

Where do newbies (with no analytics/stats background) start learning about these algorithms? And more so, how does one use them with Big Data tools like Hadoop?

    • jasonb February 19, 2014 at 8:44 am #

      Hi qnaguru, I’d recommend starting small and experimenting with algorithms on small datasets using a tool like Weka. It’s a GUI tool and provides a bunch of standard datasets and algorithms out of the box.

      I’d suggest you build up some skill on small datasets before moving onto big data tools like Hadoop and Mahout.

    • swainjo June 9, 2014 at 6:24 pm #


      I would recommend the Coursera courses.

      I would also read a couple of books to give you some background into the possibilities and limitations. Nate Silver; The Signal and The Noise & Danial Kahneman; Thinking Fast and Slow.

  3. Ismo May 20, 2014 at 2:50 am #

    The best written one I have found is: “The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition”. However you probably need to have some background on maths/stats/computing before reading that (especially if you are planning to implement them too). For general algorithms implementation I recommend reading also “Numerical Recipes 3rd Edition: The Art of Scientific Computing”.

    • jasonb May 23, 2014 at 8:01 am #

      I’m a huge fan of Numerical Recipes, thanks for the book refs.

  4. William May 23, 2014 at 1:37 am #

    Not a single one for recommender systems?

    • jasonb May 23, 2014 at 8:02 am #

      I would call recommender a higher-order system that internally is solving regression or classification problems. Do you agree?

  5. Jon May 23, 2014 at 2:47 am #

    genetic algorithms seem to be dying a slow death these days (discussed previously https://news.ycombinator.com/item?id=7712824 )

  6. Vinícius May 23, 2014 at 6:29 am #

    Hi guys, this is great! What about recommendation systems? I’m fascinated about, how netflix, amazon and others websites can recommend items based on my taste.

    • jasonb May 23, 2014 at 8:00 am #

      Good point.
You can break a recommender down into a classification or regression problem.

      • Rixi July 12, 2014 at 10:52 am #

        True, or even use rule induction like Apriori…

  7. mycall May 26, 2014 at 3:50 pm #

Where does imagination lie? Would it be an Unsupervised Feedback Learning? Maybe it’s Neural Deep Ensemble Networks. I presume dreaming = imagination while sleeping, hence daydreaming is imagining of new learning algorithms :-)

  8. vas May 27, 2014 at 5:28 am #

A lot of people swear by this chart for helping you narrow down which machine learning approach to take: http://scikit-learn.org/stable/_static/ml_map.png. It doesn’t seem to cover all the types you list in your article. Perhaps a more thorough chart would be useful.

  9. Nevil Nayak May 27, 2014 at 7:22 am #

This is great. I had always been looking for “all types” of ML algorithms available. I enjoyed reading this and look forward to further reading.

  10. UD May 30, 2014 at 12:42 am #

    This is nice and useful…I have been feeling heady with too much data and this kinda gives me a menu from which to choose what all is on offer to help me make sense of stuff :) Thanks

    • Jason Brownlee August 22, 2015 at 4:43 pm #

That is a great way to think about it @UD, a menu of algorithms.

  11. Tim Browning May 30, 2014 at 4:15 am #

    You might want to include entropy-based methods in your summary. I use relative-entropy based monitoring in my work to identify anomalies in time series data. This approach has a better recall rate and lower false positive rates when tested with synthetic data using injected outliers. Just an idea, your summary is excellent for such a high level conceptual overview.

    • Bhaskar January 9, 2015 at 7:27 am #

      HI Tim
      Can you give me some reference from which I can learn about relative-entropy based monitoring ?

    • Jason Brownlee August 22, 2015 at 4:44 pm #

      Thanks @Tim, I’ll add a section on time series algorithms I think.

  12. Vincent June 9, 2014 at 7:50 pm #


Thanks for this tour, it is very useful! But I disagree with you about the LDA method, which is in the Kernel Methods. First of all, by LDA, do you mean Linear Discriminant Analysis? Because if not, the next parts of my comment are useless :p

If you are talking about this method, then you should put KLDA (which stands for Kernel LDA) and not simply LDA, because LDA is more a dimension reduction method than a kernel method (it finds the best hyperplane that optimizes the Fisher discriminant in order to project data onto it).

Next, I don’t know if we can view the RBF as a real machine learning method; it’s more a mapping function I think, but it is clearly used for mapping to a higher dimension.

Except for these two points, the post is awesome! Thanks again.

    • Jason Brownlee August 22, 2015 at 4:45 pm #

      Thanks @Vincent, I’ll look into moving the algorithms around a bit in their groupings.

  13. Rémi June 10, 2014 at 8:50 pm #

Great post, but I agree with Vincent. Kernel Methods are not machine learning methods by themselves, but more an extension that allows one to overcome some difficulties encountered when input data are not linearly separable. SVM and LDA are not kernel-based, but their definitions can be adapted to make use of the famous kernel trick, giving birth to KSVM and KLDA, which are able to separate data linearly in a higher-dimensional space. The kernel trick can be applied to a wide variety of machine learning methods:
    – LDA
    – SVM
    – PCA
    – KMeans
    and the list goes on…

Moreover, I don’t think that RBF can be considered a machine learning method. It is a kernel function used alongside the kernel trick to project the data into a high-dimensional space. So the listing in “Kernel Methods” seems to have a typing error :p

Last point: don’t you think LDA could be added to the “Dimensionality Reduction” category? In fact, it’s more an open question, but mixture methods (clustering) and factor analysis could be considered dimensionality reduction methods, since data can be labeled either by its cluster id or its factors.

Thanks again for this post; giving an overview of machine learning methods is a great thing.

    • Jason Brownlee August 22, 2015 at 4:47 pm #

      Great comments @Rémi I’ll move things around a bit.

  14. Pranav Waila June 10, 2014 at 9:24 pm #

    Hi qnaguru, I have collected some nice reference books to start digging Machine learning. I would suggest you to start with “Introduction to statistical learning” and after that you can look into “The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition”, “Probabilistic Machine Learning by David Barber”.

  15. Dean Abbott July 3, 2014 at 9:48 am #

    Very nice taxonomy of methods. Two small quibbles, both in the Decision Tree section.
    1) MARS isn’t a tree method, it’s a spline method. You list it already in the regression group, though could even go in the regularization group. (not a natural fit in any, IMHO).
    2) Random Forests is an ensemble method and sticks out a bit in the trees group. Yes, they are trees, but so is the MART (TreeNet) and some flavors of Adaboost. Since you already have an ensembles and RF is already there, I think you can safely remove it from the Trees.

    Again, you’ve done a great job with this list. Congrats!


    • Jason Brownlee August 22, 2015 at 4:49 pm #

      Thanks Dean, I’ll take your comments on board.

  16. sravan August 6, 2014 at 8:41 pm #

Great article. My knowledge of machine learning is improving in breadth, not in depth. How should I improve my learning? I have done some real-time implementations with regression analysis and Random Forest, and I am also attending Coursera courses. How would I get real-time experience with ML in R with Hadoop?

  17. lale November 23, 2014 at 9:16 pm #

Thanks Mr. Brownlee for your useful guide. Where can we find implementations of all of these algorithms? I’ve installed Weka but it does not have some of these algorithms.

    • Jason Brownlee November 24, 2014 at 5:50 am #

      You may have to make use of other platforms like R and scikit-learn.

      Were you looking for an implementation of a specific algorithm?

  18. SHI XUDONG November 25, 2014 at 2:42 pm #

    Great Post!
I am currently learning Sparse Coding, and I have difficulty putting it into the categories you created.
    –What is your idea about Sparse Coding?
    –Which category should it belong to?

Can you provide some suggestions for learning sparse coding?
    — what mathematical foundations should I have?
    — any good tutorial resources?
    — can you suggest a learning roadmap

    I am now taking convex optimization course. Is it a correct roadmap?

  19. Lee January 13, 2015 at 8:48 pm #

Where does ranking fit into the machine learning algorithms? Is it by any chance under some of the categories mentioned in the article? The only time I find ranking mentioned in relation to machine learning is when I specifically search for ranking; none of the machine learning articles discuss it.

  20. Amelie February 3, 2015 at 10:41 am #

Which of the similarity algorithms is the most efficient?

    • Jason Brownlee February 19, 2015 at 8:42 am #

Assess similarity algorithms using computational complexity, and empirically test them and see, Amelie.

  21. Gudi February 13, 2015 at 3:31 pm #

What methods/algorithms are suitable for applying to trading pattern analysis? I mean looking at the trading graphs of the last 6 months (e.g. SPY). Currently, I am looking at the graphs visually. Can an algorithm come to my aid (I am currently enrolled in an online data mining course)?

    • Jason Brownlee February 19, 2015 at 8:42 am #

Sounds like a time series problem; consider starting out with an auto-regression.

  22. saima May 25, 2015 at 4:23 pm #

    Hi Jason,
It’s a great article. I wish you could give a list of machine learning algorithms popular in the medical research domain.

    Saima Safdar

  23. Vicc May 27, 2015 at 7:22 pm #

    Great list. Definitely cleared things up for me, Jason! I do have a question concerning Batch Gradient Descent and the Normal Equation. Are these considered Estimators?

I would love to see a post that addresses the different types of estimators / optimizers that could be used for each of these algorithms that is simple to understand. Also, where do feature scaling (min-max scaling & standardization) and other things fall into all of this? Are they also optimizers? So many things!

    Thanks so much for spreading your knowledge!

  24. Henry Thornton June 6, 2015 at 10:49 pm #

    Hi Jason

    Intrigued by your comments above about recommendation systems ie.

    “I would call recommender a higher-order system that internally is solving regression or classification problems.” and,

    “You can break a recommender down into a classification or a regression problem.”

Could you please expand on your thought process? In general, I find that people talk about building or wanting a “classifier” since it is the buzzword du jour (and related to deep learning) when in fact a recommender or something else will do the job. Anyway, great discussion.

  25. Aharon Robinson June 11, 2015 at 8:53 am #

    Great stuff here Jason! Regarding your comments on 12/26, I’ll vote yes to seeing a post on reinforcement learning methods

  26. Vijay Lingesh June 11, 2015 at 4:13 pm #

    Hi Jason,
I’m trying to implement object detection in computer vision through machine learning, but I’m hitting a wall trying to find a suitable approach. Can you suggest which kind of algorithm will help me? I’d like to research more on it.

  27. Rajmohan July 16, 2015 at 3:44 pm #

Hi, I am working on finding missing values by using machine learning approaches.
Can anybody suggest new methods to be used?
I am a research scholar.

  28. Oren August 5, 2015 at 7:04 pm #

    Hi Jason,

just a small question: in my opinion k-NN, SVM, Naive Bayes, Decision Trees, MaxEnt (even if it’s not mentioned here) are all considered to be instance-based, isn’t that right?

  29. Vaibhav Agarwal September 10, 2015 at 3:20 am #

    Awesome post now I know where I stand.

  30. shani September 10, 2015 at 10:07 pm #

I started reading and I feel I don’t succeed in understanding it.
I don’t understand which algorithm is good for which type of problem.
I think a little example for each algorithm would be useful.

  31. Gian September 22, 2015 at 11:30 pm #


How can I classify support vector machines and their extensions in your list?

  32. Stephen Thompson October 7, 2015 at 1:25 am #

Jason: Nice addition of the simple graphic to each of the “families” of machine learning algorithms. This is a change from what I recall was a previous version of this post. The diagram helps visualize the activity of the family and thus aids in developing an internal model of how the members of the family operate.

    A simple but powerful effect.

  33. Kevin Keane October 28, 2015 at 5:45 am #

    The Bayesian Algorithms graphic should be reworked. In particular,
    1) the area under both density functions should integrate to one. While no scale is provided, the prior appears to integrate to a much smaller number than the posterior.
    2) in general, a posterior is narrower / more concentrated than a prior given an observation.
    3) (interpreting the baseline as zero density) a posterior typically concentrates the probability of the prior in a smaller range; it never “moves” probability to a range where the prior density was zero.

  34. Alvin November 11, 2015 at 8:11 pm #

    Hi jason,

    Can you recommend any algorithm to my problem below please?
I need one that does time series analysis and Bayesian analysis too.

    For test set,
I’m given data for hourly price movements for half a day, and tasked to predict the second half of the day. Clearly a time series (TS) problem.

    But on top of that I’m also given information on 10 discrete factors for each day in the training and testing set.

    Do you know of any algo that creates multiple TS models conditional upon the values (or bands) of the various discrete factors at the onset?
