A Tour of Machine Learning Algorithms

Once we understand the type of machine learning problem we are working with, we can think about the type of data to collect and the kinds of machine learning algorithms we can try. In this post we take a tour of the most popular machine learning algorithms; touring the main algorithms is a useful way to get a general idea of what methods are available.

There are a great many algorithms available. The difficulty is that there are classes of methods and there are extensions to methods, and it quickly becomes hard to determine what constitutes a canonical algorithm. In this post I want to give you two ways to think about and categorize the algorithms you may come across in the field.

The first is a grouping of algorithms by the learning style. The second is a grouping of algorithms by similarity in form or function (like grouping similar animals together). Both approaches are useful.

Learning Style

There are different ways an algorithm can model a problem based on its interaction with the experience or environment, or whatever we want to call the input data. It is popular in machine learning and artificial intelligence textbooks to first consider the learning styles that an algorithm can adopt.

There are only a few main learning styles or learning models that an algorithm can have, and we’ll go through them here with a few examples of algorithms and problem types that they suit. This taxonomy or way of organizing machine learning algorithms is useful because it forces you to think about the roles of the input data and the model preparation process, and to select the approach that is the most appropriate for your problem in order to get the best result.

  • Supervised Learning: Input data is called training data and has a known label or result such as spam/not-spam or a stock price at a time. A model is prepared through a training process where it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data. Example problems are classification and regression. Example algorithms are Logistic Regression and the Back Propagation Neural Network.
  • Unsupervised Learning: Input data is not labelled and does not have a known result. A model is prepared by deducing structures present in the input data. Example problems are association rule learning and clustering. Example algorithms are the Apriori algorithm and k-means.
  • Semi-Supervised Learning: Input data is a mixture of labelled and unlabelled examples. There is a desired prediction problem but the model must learn the structures to organize the data as well as make predictions. Example problems are classification and regression. Example algorithms are extensions to other flexible methods that make assumptions about how to model the unlabelled data.
  • Reinforcement Learning: Input data is provided as stimulus to a model from an environment to which the model must respond and react. Feedback is provided not from a teaching process as in supervised learning, but as punishments and rewards in the environment. Example problems are systems control and robot control. Example algorithms are Q-learning and Temporal Difference learning.

When crunching data to model business decisions, you are most typically using supervised and unsupervised learning methods. A hot topic at the moment is semi-supervised learning methods in areas such as image classification where there are large datasets with very few labelled examples. Reinforcement learning is more likely to turn up in robotic control and other control systems development.
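
To make the first two styles concrete, here is a minimal sketch contrasting them, assuming scikit-learn is installed; the tiny dataset is invented for illustration:

    # Supervised: a classifier learns from (input, label) pairs.
    # Unsupervised: a clusterer deduces structure from the inputs alone.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    X = np.array([[1.0, 1.1], [1.2, 0.9], [8.0, 8.2], [7.9, 8.1]])
    y = np.array([0, 0, 1, 1])                    # known labels: supervised

    clf = LogisticRegression().fit(X, y)          # corrected against the labels
    print(clf.predict([[1.1, 1.0]]))              # -> [0]

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # labels never seen
    print(km.labels_)                             # structure deduced from X alone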

Algorithm Similarity

Algorithms are universally presented in groups by similarity in terms of function or form, for example tree-based methods and neural network inspired methods. This is a useful grouping method, but it is not perfect. There are still algorithms that could just as easily fit into multiple categories, like Learning Vector Quantization, which is both a neural network inspired method and an instance-based method. There are also category names that describe both the problem and the class of algorithm, such as Regression and Clustering. As such, you will see variations on the way algorithms are grouped depending on the source you check. Like machine learning algorithms themselves, there is no perfect model, just a good enough model.

In this section I list many of the popular machine learning algorithms grouped the way I think is the most intuitive. It is not exhaustive in either the groups or the algorithms, but I think it is representative and will be useful to you to get an idea of the lay of the land. If you know of an algorithm or a group of algorithms not listed, put it in the comments and share it with us. Let’s dive in.


Regression

Regression is concerned with modelling the relationship between variables, iteratively refined using a measure of error in the predictions made by the model. Regression methods are a workhorse of statistics and have been co-opted into statistical machine learning. This may be confusing because we can use regression to refer to the class of problem and the class of algorithm. Really, regression is a process. Some example algorithms are listed below, followed by a small worked example:

  • Ordinary Least Squares
  • Logistic Regression
  • Stepwise Regression
  • Multivariate Adaptive Regression Splines (MARS)
  • Locally Estimated Scatterplot Smoothing (LOESS)
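
Here is a minimal Ordinary Least Squares sketch using numpy's least-squares solver; the roughly linear data is invented for illustration:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])

    A = np.column_stack([x, np.ones_like(x)])     # design matrix for y = m*x + c
    m, c = np.linalg.lstsq(A, y, rcond=None)[0]   # coefficients minimizing squared error
    print(m, c)                                   # slope and intercept
    print(m * 6.0 + c)                            # prediction for a new input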

Instance-based Methods

Instance-based learning models a decision problem with instances or examples of training data that are deemed important or required by the model. Such methods typically build up a database of example data and compare new data to the database using a similarity measure in order to find the best match and make a prediction. For this reason, instance-based methods are also called winner-take-all methods and memory-based learning. Focus is put on the representation of the stored instances and the similarity measures used between instances.

  • k-Nearest Neighbour (kNN)
  • Learning Vector Quantization (LVQ)
  • Self-Organizing Map (SOM)
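
A minimal k-Nearest Neighbour sketch, assuming scikit-learn is installed; "training" largely amounts to storing the instances, and prediction is a vote over the k most similar stored examples:

    from sklearn.neighbors import KNeighborsClassifier

    X = [[0.0], [1.0], [2.0], [9.0], [10.0], [11.0]]     # stored instances
    y = [0, 0, 0, 1, 1, 1]

    knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)  # mostly just stores the data
    print(knn.predict([[1.5], [9.5]]))                   # -> [0 1], by vote of 3 nearest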

Regularization Methods

Regularization methods are extensions made to another method (typically regression methods) that penalize models based on their complexity, favoring simpler models that are also better at generalizing. I have listed regularization methods here because they are popular, powerful and generally simple modifications made to other methods.

  • Ridge Regression
  • Least Absolute Shrinkage and Selection Operator (LASSO)
  • Elastic Net
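
A minimal regularization sketch, assuming scikit-learn is installed. Ridge (an L2 penalty) and LASSO (an L1 penalty) both shrink coefficients; the L1 penalty can drive irrelevant coefficients exactly to zero. The data is synthetic, with signal only in the first of five features:

    import numpy as np
    from sklearn.linear_model import Lasso, Ridge

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 5))
    y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=50)   # only feature 0 matters

    print(Ridge(alpha=1.0).fit(X, y).coef_)   # all coefficients shrunk toward zero
    print(Lasso(alpha=0.1).fit(X, y).coef_)   # irrelevant coefficients set exactly to zero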

Decision Tree Learning

Decision tree methods construct a model of decisions made based on actual values of attributes in the data. Decisions fork in tree structures until a prediction decision is made for a given record. Decision trees are trained on data for classification and regression problems.

  • Classification and Regression Tree (CART)
  • Iterative Dichotomiser 3 (ID3)
  • C4.5
  • Chi-squared Automatic Interaction Detection (CHAID)
  • Decision Stump
  • Random Forest
  • Multivariate Adaptive Regression Splines (MARS)
  • Gradient Boosting Machines (GBM)
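
A minimal decision tree sketch, assuming scikit-learn is installed (its tree learner is a CART-style implementation); the tiny [age, owns_home] dataset is invented for illustration:

    from sklearn.tree import DecisionTreeClassifier, export_text

    X = [[25, 0], [35, 1], [45, 1], [20, 0], [55, 1], [30, 0]]  # [age, owns_home]
    y = [0, 1, 1, 0, 1, 0]

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=["age", "owns_home"]))  # the learned forks
    print(tree.predict([[40, 1]]))                                # follow the tree down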


Bayesian Methods

Bayesian methods are those that explicitly apply Bayes’ Theorem to problems such as classification and regression.

  • Naive Bayes
  • Averaged One-Dependence Estimators (AODE)
  • Bayesian Belief Network (BBN)
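
A minimal Naive Bayes sketch, assuming scikit-learn is installed. GaussianNB applies Bayes’ Theorem under the "naive" assumption that features are conditionally independent given the class; the data is invented:

    from sklearn.naive_bayes import GaussianNB

    X = [[1.0, 2.1], [1.2, 1.9], [6.8, 8.0], [7.1, 7.9]]
    y = [0, 0, 1, 1]

    nb = GaussianNB().fit(X, y)
    print(nb.predict([[1.1, 2.0]]))        # most probable class under Bayes' rule
    print(nb.predict_proba([[1.1, 2.0]]))  # posterior probabilities P(class | features)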

Kernel Methods

Kernel Methods are best known for the popular method Support Vector Machines which is really a constellation of methods in and of itself. Kernel Methods are concerned with mapping input data into a higher dimensional vector space where some classification or regression problems are easier to model.

  • Support Vector Machines (SVM)
  • Radial Basis Function (RBF)
  • Linear Discriminant Analysis (LDA)
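
A minimal kernel-method sketch, assuming scikit-learn is installed. An SVM with an RBF kernel implicitly maps inputs into a higher-dimensional space where the XOR problem, not separable by any line in 2-D, becomes separable; the gamma and C values are arbitrary choices for this toy data:

    from sklearn.svm import SVC

    X = [[0, 0], [1, 1], [0, 1], [1, 0]]   # XOR: no single line separates the classes
    y = [0, 0, 1, 1]

    svm = SVC(kernel="rbf", gamma=2.0, C=10.0).fit(X, y)
    print(svm.predict(X))                  # recovers the XOR labels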

Clustering Methods

Clustering, like regression, describes both a class of problem and a class of methods. Clustering methods are typically organized by modelling approach, such as centroid-based and hierarchical. All methods are concerned with using the inherent structures in the data to best organize the data into groups of maximum commonality.

  • k-Means
  • Expectation Maximisation (EM)
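
A minimal k-Means sketch, assuming scikit-learn is installed: unlabelled points drawn around two invented centres are organized into two groups:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 0.5, (20, 2)),   # blob near (0, 0)
                   rng.normal(5.0, 0.5, (20, 2))])  # blob near (5, 5)

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.cluster_centers_)                      # approximately the blob centres
    print(km.labels_[:5], km.labels_[-5:])          # group assignments, no labels used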

Association Rule Learning

Association rule learning methods extract rules that best explain observed relationships between variables in data. These rules can discover important and commercially useful associations in large multidimensional datasets that can be exploited by an organisation.

  • Apriori algorithm
  • Eclat algorithm
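
Below is a from-scratch sketch of the frequent-itemset core of Apriori (the rule-extraction step is omitted for brevity). Itemsets that meet a minimum support are extended one item at a time, exploiting the fact that any superset of an infrequent itemset must also be infrequent; the toy transactions are invented:

    transactions = [{"milk", "bread"}, {"milk", "bread", "eggs"},
                    {"bread", "eggs"}, {"milk", "bread", "eggs"}]
    min_support = 3  # an itemset must appear in at least 3 transactions

    def support(itemset):
        return sum(itemset <= t for t in transactions)

    items = sorted({item for t in transactions for item in t})
    current = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
    frequent = []
    while current:
        frequent.extend(current)
        # build candidate (k+1)-itemsets from pairs of frequent k-itemsets
        candidates = {a | b for a in current for b in current if len(a | b) == len(a) + 1}
        current = [c for c in candidates if support(c) >= min_support]

    print([(sorted(s), support(s)) for s in frequent])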

Artificial Neural Networks

Artificial Neural Networks are models inspired by the structure and/or function of biological neural networks. They are a class of pattern-matching methods commonly used for regression and classification problems, but are really an enormous subfield comprising hundreds of algorithms and variations for all manner of problem types. Some of the classically popular methods include (I have separated Deep Learning from this category):

  • Perceptron
  • Back-Propagation
  • Hopfield Network
  • Self-Organizing Map (SOM)
  • Learning Vector Quantization (LVQ)
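
Here is a from-scratch sketch of the Perceptron: a single artificial neuron whose weights are nudged whenever it misclassifies a training example. The data is the logical AND function, which is linearly separable:

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1])                 # logical AND

    w, b, lr = np.zeros(2), 0.0, 0.1
    for _ in range(20):                        # a few passes over the training data
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            w += lr * (target - pred) * xi     # no change when the prediction is right
            b += lr * (target - pred)

    print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]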

Deep Learning

Deep Learning methods are a modern update to Artificial Neural Networks that exploit abundant cheap computation. They are concerned with building much larger and more complex neural networks, and as commented above, many methods are concerned with semi-supervised learning problems where large datasets contain very little labelled data.

  • Restricted Boltzmann Machine (RBM)
  • Deep Belief Networks (DBN)
  • Convolutional Network
  • Stacked Auto-encoders
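
As a toy illustration of the building block behind stacked auto-encoders, here is a single-layer auto-encoder in plain numpy: the network learns to squeeze 4-dimensional inputs through a 2-unit bottleneck and reconstruct them. The linear activations, learning rate and synthetic data are all simplifying assumptions for brevity:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 4))
    X[:, 2], X[:, 3] = X[:, 0], X[:, 1]          # only two true degrees of freedom

    W1 = rng.normal(scale=0.1, size=(4, 2))      # encoder: 4 inputs -> 2-unit code
    W2 = rng.normal(scale=0.1, size=(2, 4))      # decoder: 2-unit code -> 4 outputs
    lr = 0.05
    for _ in range(2000):
        H = X @ W1                               # encode (kept linear for simplicity)
        err = H @ W2 - X                         # reconstruction error
        gW2 = H.T @ err / len(X)                 # gradients of the mean squared error
        gW1 = X.T @ (err @ W2.T) / len(X)
        W1 -= lr * gW1
        W2 -= lr * gW2

    print(np.mean(err ** 2))                     # reconstruction error, close to zero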

Dimensionality Reduction

Like clustering methods, dimensionality reduction methods seek and exploit the inherent structure in the data, but in this case in an unsupervised manner, in order to summarise or describe data using less information. This can be useful for visualizing high-dimensional data or for simplifying data that can then be used in a supervised learning method.

  • Principal Component Analysis (PCA)
  • Partial Least Squares Regression (PLS)
  • Sammon Mapping
  • Multidimensional Scaling (MDS)
  • Projection Pursuit
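
A minimal dimensionality reduction sketch, assuming scikit-learn is installed: synthetic 3-D points that lie near a 2-D plane are summarised by their two principal components:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    Z = rng.normal(size=(200, 2))                # the true 2-D structure
    X = Z @ rng.normal(size=(2, 3)) + rng.normal(scale=0.05, size=(200, 3))

    pca = PCA(n_components=2).fit(X)
    print(pca.explained_variance_ratio_)         # nearly all variance in two components
    print(pca.transform(X).shape)                # (200, 3) summarised as (200, 2)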

Ensemble Methods

Ensemble methods are models composed of multiple weaker models that are independently trained and whose predictions are combined in some way to make the overall prediction. Much effort is put into what types of weak learners to combine and the ways in which to combine them. This is a very powerful class of techniques and as such is very popular.

  • Boosting
  • Bootstrapped Aggregation (Bagging)
  • AdaBoost
  • Stacked Generalization (blending)
  • Gradient Boosting Machines (GBM)
  • Random Forest
Figure: An ensemble of lines of best fit, with weak members in grey and the combined prediction in red. The plot shows Temperature/Ozone data and models prepared with the LOESS method. Image is licensed public domain and is attributed to Wikipedia.
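
A minimal ensemble sketch, assuming scikit-learn is installed: bagging trains many trees on bootstrap samples of the training data and combines their votes, which usually beats any single tree. The dataset here is synthetic:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    single = DecisionTreeClassifier(random_state=0)
    bagged = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)

    print(cross_val_score(single, X, y, cv=5).mean())  # one tree
    print(cross_val_score(bagged, X, y, cv=5).mean())  # 50 trees usually do better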


This tour of machine learning algorithms was intended to give you an overview of what is out there and some tools to relate algorithms that you may come across to each other.

The resources for this post are, as you would expect, other great lists of machine learning algorithms. Try not to feel overwhelmed. It is useful to know about many algorithms, but it is also useful to be effective and to have deep knowledge of just a few key methods.

I hope you have found this tour useful. Leave a comment if you know of a better way to think about organizing algorithms or if you know of any other great lists of machine learning algorithms.



24 Responses to A Tour of Machine Learning Algorithms

  1. Bruce December 20, 2013 at 5:10 pm #

    What about reinforcement learning algorithms in algorithm similarity classification?
    There is also one called Gibbs algorithm under Bayesian Learning

    • jasonb December 26, 2013 at 8:34 pm #

      Good point Bruce, I left out those methods. Would you like me to write a post about reinforcement learning methods?

  2. qnaguru February 17, 2014 at 5:46 pm #

    Where do newbies (with no analytics/stats background) start learning about these algorithms? And more so, how does one use them with Big Data tools like Hadoop?

    • jasonb February 19, 2014 at 8:44 am #

      Hi qnaguru, I’d recommend starting small and experimenting with algorithms on small datasets using a tool like Weka. It’s a GUI tool and provides a bunch of standard datasets and algorithms out of the box.

      I’d suggest you build up some skill on small datasets before moving onto big data tools like Hadoop and Mahout.

    • swainjo June 9, 2014 at 6:24 pm #


      I would recommend the Coursera courses.

      I would also read a couple of books to give you some background into the possibilities and limitations: Nate Silver, The Signal and the Noise, and Daniel Kahneman, Thinking, Fast and Slow.

  3. Ismo May 20, 2014 at 2:50 am #

    The best written one I have found is “The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition”. However, you probably need some background in maths/stats/computing before reading it (especially if you are planning to implement the methods too). For general algorithm implementation I also recommend reading “Numerical Recipes 3rd Edition: The Art of Scientific Computing”.

    • jasonb May 23, 2014 at 8:01 am #

      I’m a huge fan of Numerical Recipes, thanks for the book refs.

  4. William May 23, 2014 at 1:37 am #

    Not a single one for recommender systems?

    • jasonb May 23, 2014 at 8:02 am #

      I would call a recommender a higher-order system that internally is solving regression or classification problems. Do you agree?

  5. Jon May 23, 2014 at 2:47 am #

    genetic algorithms seem to be dying a slow death these days (discussed previously https://news.ycombinator.com/item?id=7712824 )

  6. Vinícius May 23, 2014 at 6:29 am #

    Hi guys, this is great! What about recommendation systems? I’m fascinated by how Netflix, Amazon and other websites can recommend items based on my taste.

    • jasonb May 23, 2014 at 8:00 am #

      Good point.
      You can break a recommender down into a classification or regression problem.

      • Rixi July 12, 2014 at 10:52 am #

        True, or even use rule induction like Apriori…

  7. mycall May 26, 2014 at 3:50 pm #

    Where does imagination lie? Would it be an Unsupervised Feedback Learning? Maybe it’s Neural Deep Ensemble Networks. I presume dreaming = imagination while sleeping, hence daydreaming is imagining of new learning algorithms :-)

  8. vas May 27, 2014 at 5:28 am #

    A lot of people swear by this chart for helping you narrow down which machine learning approach to take: http://scikit-learn.org/stable/_static/ml_map.png. It doesn’t seem to cover all the types you list in your article. Perhaps a more thorough chart would be useful.

  9. Nevil Nayak May 27, 2014 at 7:22 am #

    This is great. I had always been looking for “all types” of ML algorithms available. I enjoyed reading this and look forward to further reading.

  10. UD May 30, 2014 at 12:42 am #

    This is nice and useful…I have been feeling heady with too much data and this kinda gives me a menu from which to choose what all is on offer to help me make sense of stuff :) Thanks

  11. Tim Browning May 30, 2014 at 4:15 am #

    You might want to include entropy-based methods in your summary. I use relative-entropy based monitoring in my work to identify anomalies in time series data. This approach has a better recall rate and lower false positive rates when tested with synthetic data using injected outliers. Just an idea, your summary is excellent for such a high level conceptual overview.

  12. Vincent June 9, 2014 at 7:50 pm #


    Thanks for this tour, it is very useful! But I disagree with you on the LDA method, which is in the Kernel Methods. First of all, by LDA, do you mean Linear Discriminant Analysis? Because if not, the next parts of my comment are useless :p

    If you are talking about this method, then you should put KLDA (which stands for Kernel LDA) and not simply LDA, because LDA is more a dimension reduction method than a kernel method (it finds the best hyperplane that optimizes the Fisher discriminant in order to project data onto it).

    Next, I don’t know if we can view the RBF as a real machine learning method; it’s more a mapping function I think, but it is clearly used for mapping to a higher dimension.

    Except for these two points, the post is awesome! Thanks again.

  13. Rémi June 10, 2014 at 8:50 pm #

    Great post, but I agree with Vincent. Kernel Methods are not machine learning methods by themselves, but more an extension that allows one to overcome some difficulties encountered when input data are not linearly separable. SVM and LDA are not kernel-based, but their definitions can be adapted to make use of the famous kernel trick, giving birth to KSVM and KLDA, which are able to separate data linearly in a higher-dimensional space. The kernel trick can be applied to a wide variety of machine learning methods:
    - LDA
    - SVM
    - PCA
    - KMeans
    and the list goes on…

    Moreover, I don’t think that RBF can be considered a machine learning method. It is a kernel function used alongside the kernel trick to project the data into a high-dimensional space. So the listing in “Kernel Methods” seems to have a typing error :p

    Last point: don’t you think LDA could be added to the “Dimensionality Reduction” category? In fact, it’s more an open question, but mixture methods (clustering) and factor analysis could be considered Dimensionality Reduction methods, since data can be labeled either by its cluster id or by its factors.

    Thanks again for this post; giving an overview of machine learning methods is a great thing.

  14. Pranav Waila June 10, 2014 at 9:24 pm #

    Hi qnaguru, I have collected some nice reference books to start digging into Machine Learning. I would suggest you start with “Introduction to Statistical Learning” and after that you can look into “The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition” and “Bayesian Reasoning and Machine Learning” by David Barber.

  15. Dean Abbott July 3, 2014 at 9:48 am #

    Very nice taxonomy of methods. Two small quibbles, both in the Decision Tree section.
    1) MARS isn’t a tree method, it’s a spline method. You list it already in the regression group, though it could even go in the regularization group (not a natural fit in any, IMHO).
    2) Random Forest is an ensemble method and sticks out a bit in the trees group. Yes, they are trees, but so are MART (TreeNet) and some flavors of AdaBoost. Since you already have an ensembles group and RF is already there, I think you can safely remove it from the trees.

    Again, you’ve done a great job with this list. Congrats!

