
5 Reasons to Learn Probability for Machine Learning

Probability is a field of mathematics that quantifies uncertainty.

It is undeniably a pillar of the field of machine learning, and many recommend it as a prerequisite subject to study prior to getting started. This is misleading advice, as probability makes more sense to a practitioner once they have the context of the applied machine learning process in which to interpret it.

In this post, you will discover why machine learning practitioners should study probabilities to improve their skills and capabilities.

After reading this post, you will know:

  • Not everyone should learn probability; it depends on where you are in your journey of learning machine learning.
  • Many algorithms are designed using the tools and techniques from probability, such as Naive Bayes and Probabilistic Graphical Models.
  • The maximum likelihood framework that underlies the training of many machine learning algorithms comes from the field of probability.

Kick-start your project with my new book Probability for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

Photo by Marco Verch, some rights reserved.

Overview

This tutorial is divided into seven parts; they are:

  1. Reasons to NOT Learn Probability
  2. Class Membership Requires Predicting a Probability
  3. Some Algorithms Are Designed Using Probability
  4. Models Are Trained Using a Probabilistic Framework
  5. Models Can Be Tuned With a Probabilistic Framework
  6. Probabilistic Measures Are Used to Evaluate Model Skill
  7. One More Reason

Reasons to NOT Learn Probability

Before we go through the reasons that you should learn probability, let’s start by taking a quick look at the reasons why you should not.

I think you should not study probability if you are just getting started with applied machine learning.

  • It’s not required. Having an appreciation for the abstract theory that underlies some machine learning algorithms is not required in order to use machine learning as a tool to solve problems.
  • It’s slow. Taking months to years to study an entire related field before starting machine learning will delay you in achieving your goal of being able to work through predictive modeling problems.
  • It’s a huge field. Not all of probability is relevant to theoretical machine learning, let alone applied machine learning.

I recommend a breadth-first approach to getting started in applied machine learning.

I call this the results-first approach. It is where you start by learning and practicing the steps for working through a predictive modeling problem end-to-end (e.g. how to get results) with a tool (such as scikit-learn and Pandas in Python).

This process then provides the skeleton and context for progressively deepening your knowledge, such as how algorithms work and, eventually, the math that underlies them.

After you know how to work through a predictive modeling problem, let’s look at why you should deepen your understanding of probability.


1. Class Membership Requires Predicting a Probability

Classification predictive modeling problems are those where an example is assigned a given label.

An example that you may be familiar with is the iris flowers dataset where we have four measurements of a flower and the goal is to assign one of three different known species of iris flower to the observation.

We can model the problem as directly assigning a class label to each observation.

  • Input: Measurements of a flower.
  • Output: One iris species.

A more common approach is to frame the problem as a probabilistic class membership, where the probability of an observation belonging to each known class is predicted.

  • Input: Measurements of a flower.
  • Output: Probability of membership to each iris species.

Framing the problem as a prediction of class membership simplifies the modeling problem and makes it easier for a model to learn. It allows the model to capture the ambiguity in the data and allows a downstream process, such as the user, to interpret the probabilities in the context of the domain.

The probabilities can be transformed into a crisp class label by choosing the class with the largest probability. The probabilities can also be scaled or transformed using a probability calibration process.
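
As a minimal sketch of this in code, assuming scikit-learn is available (the choice of logistic regression here is purely illustrative), predicted probabilities can be reduced to a crisp label with an argmax:

# Sketch: predict class membership probabilities for iris flowers,
# then reduce them to a crisp class label (illustrative model choice).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# the four flower measurements and the three species labels
X, y = load_iris(return_X_y=True)

# fit a probabilistic classifier
model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# probability of membership to each species for the first flower
probs = model.predict_proba(X[:1])
print(probs)

# crisp class label: the species with the largest probability
print(probs.argmax(axis=1))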

This choice of a class membership framing of the problem, and the interpretation of the predictions made by the model, requires a basic understanding of probability.

2. Some Algorithms Are Designed Using Probability

There are algorithms that are specifically designed to harness the tools and methods from probability.

These include individual algorithms, like the Naive Bayes algorithm, which is constructed using Bayes Theorem with some simplifying assumptions.
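
As a rough sketch of the idea, and not the only way to implement it, the following computes Gaussian Naive Bayes class probabilities from scratch with NumPy; the tiny dataset and function names are invented for illustration:

# Sketch: Gaussian Naive Bayes via Bayes Theorem,
# P(class | data) is proportional to P(data | class) * P(class),
# assuming the features are conditionally independent.
import numpy as np

def fit_gnb(X, y):
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        # per-class prior, feature means, and feature variances
        stats[c] = (len(Xc) / len(X), Xc.mean(axis=0), Xc.var(axis=0) + 1e-9)
    return stats

def predict_proba_gnb(stats, x):
    log_scores = {}
    for c, (prior, mean, var) in stats.items():
        # log P(x | class) under independent Gaussians, plus the log prior
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)
        log_scores[c] = np.log(prior) + log_lik
    # normalize the scores back into probabilities
    m = max(log_scores.values())
    exp_scores = {c: np.exp(s - m) for c, s in log_scores.items()}
    total = sum(exp_scores.values())
    return {c: v / total for c, v in exp_scores.items()}

# tiny synthetic example: two classes, one feature
X = np.array([[1.0], [1.2], [0.9], [3.0], [3.2], [2.9]])
y = np.array([0, 0, 0, 1, 1, 1])
print(predict_proba_gnb(fit_gnb(X, y), np.array([1.1])))  # class 0 dominates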

The linear regression algorithm can be seen as a probabilistic model that minimizes the mean squared error of predictions, and the logistic regression algorithm can be seen as a probabilistic model that minimizes the negative log likelihood of the observed class labels.

  • Linear Regression
  • Logistic Regression

It also extends to whole fields of study, such as probabilistic graphical models, often called graphical models or PGMs for short, which are designed around Bayes Theorem.

A notable graphical model is the Bayesian Belief Network, or Bayes Net, which is capable of capturing the conditional dependencies between variables.
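
A minimal sketch of the idea, using the classic rain and wet-grass example with made-up probabilities:

# Sketch: a two-node Bayesian network, Rain -> WetGrass, queried with
# Bayes Theorem. The probability values are invented for illustration.
p_rain = 0.2                   # P(Rain)
p_wet_given_rain = 0.9         # P(WetGrass | Rain)
p_wet_given_no_rain = 0.1      # P(WetGrass | no Rain)

# marginal P(WetGrass), summing over the parent variable
p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)

# inference: P(Rain | WetGrass) via Bayes Theorem
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(p_rain_given_wet)  # about 0.69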

3. Models Are Trained Using a Probabilistic Framework

Many machine learning models are trained using an iterative algorithm designed under a probabilistic framework.

Perhaps the most common probabilistic framework is maximum likelihood estimation, sometimes shortened to MLE. This is a framework for estimating model parameters (e.g. weights) given observed data.

This is the framework that underlies the ordinary least squares estimate of a linear regression model and the log loss estimate for logistic regression.
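
As a small sketch of MLE in code (the data is synthetic and the grid search over candidate means is purely illustrative), the maximum likelihood estimate of a Gaussian mean coincides with the sample mean:

# Sketch: maximum likelihood estimation of a Gaussian mean.
# With a Gaussian likelihood and fixed variance, the negative log
# likelihood is proportional to the sum of squared errors, which is
# why MLE underlies ordinary least squares.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=5.0, scale=2.0, size=1000)

# evaluate the negative log likelihood over a grid of candidate means
candidates = np.linspace(0, 10, 1001)
nll = [np.sum((data - mu) ** 2) for mu in candidates]

print(candidates[np.argmin(nll)])  # close to 5.0
print(data.mean())                 # the closed-form MLE, also close to 5.0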

The expectation-maximization algorithm, or EM for short, is an approach for maximum likelihood estimation often used for unsupervised data clustering, e.g. estimating k means for k clusters, also known as the k-Means clustering algorithm.
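
A minimal sketch of the k-Means loop in NumPy, with synthetic data for illustration; the two alternating steps mirror the expectation and maximization steps of EM:

# Sketch: a minimal k-means loop (the hard-assignment relative of EM).
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

k = 2
centers = data[rng.choice(len(data), k, replace=False)]
for _ in range(10):
    # E-like step: assign each point to its nearest center
    labels = np.argmin(np.linalg.norm(data[:, None] - centers[None], axis=2), axis=1)
    # M-like step: re-estimate each center as the mean of its points
    centers = np.array([data[labels == i].mean(axis=0) for i in range(k)])

print(centers)  # should be near [0, 0] and [5, 5]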

For models that predict class membership, maximum likelihood estimation provides the framework for minimizing the difference or divergence between an observed and predicted probability distribution. This is used in classification algorithms like logistic regression as well as deep learning neural networks.

It is common to measure this difference between probability distributions during training using entropy, e.g. via cross-entropy. Entropy, along with differences between distributions measured via KL divergence and cross-entropy itself, comes from the field of information theory, which builds directly upon probability theory. For example, the information for an event is calculated directly as the negative log of its probability, and entropy is the average of this information over all events.

As such, these tools from information theory, such as minimizing cross-entropy loss, can be seen as another probabilistic framework for model estimation.

  • Minimum Cross-Entropy Loss Estimation
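
As a sketch, cross-entropy between a true and a predicted class distribution takes only a few lines of NumPy; the distributions here are made up for illustration:

# Sketch: cross-entropy between a true one-hot class distribution
# and a predicted class distribution.
import numpy as np

y_true = np.array([0.0, 1.0, 0.0])   # true class is the second one
y_pred = np.array([0.1, 0.8, 0.1])   # predicted class probabilities

# H(p, q) = -sum p(x) * log q(x), which reduces to the negative log
# of the probability assigned to the true class
cross_entropy = -np.sum(y_true * np.log(y_pred))
print(cross_entropy)  # -log(0.8), about 0.223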

4. Models Can Be Tuned With a Probabilistic Framework

It is common to tune the hyperparameters of a machine learning model, such as k for kNN or the learning rate in a neural network.

Typical approaches include grid searching ranges of hyperparameters or randomly sampling hyperparameter combinations.

Bayesian optimization is a more efficient approach to hyperparameter optimization that involves a directed search of the space of possible configurations, focusing on those configurations that are most likely to result in better performance. As its name suggests, the approach was devised from and harnesses Bayes Theorem when sampling the space of possible configurations.
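
As a hedged sketch, assuming the third-party scikit-optimize library is installed and using a stand-in objective in place of real cross-validated model skill:

# Sketch: Bayesian optimization of a single hyperparameter.
# Assumes scikit-optimize (skopt) is installed; the objective below is a
# stand-in for evaluating a model with a candidate configuration.
from skopt import gp_minimize

def objective(params):
    # pretend params[0] is, e.g., a learning rate, and return a loss
    x = params[0]
    return (x - 0.3) ** 2

# directed search of the configuration space using a Gaussian process
result = gp_minimize(objective, dimensions=[(0.0, 1.0)], n_calls=20, random_state=1)
print(result.x, result.fun)  # best configuration and its loss, near x=0.3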


5. Probabilistic Measures Are Used to Evaluate Model Skill

For those algorithms where a prediction of probabilities is made, evaluation measures are required to summarize the performance of the model.

There are many measures used to summarize the performance of a model based on predicted probabilities. Common examples include:

  • Log Loss (also called cross-entropy).
  • Brier Score and the Brier Skill Score.
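
For example, both measures are available in scikit-learn; the labels and predicted probabilities below are made up for illustration:

# Sketch: scoring predicted probabilities with scikit-learn metrics.
from sklearn.metrics import log_loss, brier_score_loss

y_true = [0, 1, 1, 0, 1]
y_prob = [0.1, 0.9, 0.8, 0.3, 0.6]   # predicted probability of class 1

print(log_loss(y_true, y_prob))          # penalizes confident mistakes heavily
print(brier_score_loss(y_true, y_prob))  # mean squared error of the probabilities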


For binary classification tasks where a single probability score is predicted, Receiver Operating Characteristic, or ROC, curves can be constructed to explore different cut-offs for interpreting the predictions that, in turn, result in different trade-offs. The area under the ROC curve, or ROC AUC, can also be calculated as an aggregate measure. A related method that focuses on the positive class is the Precision-Recall Curve and the area under the curve.

  • ROC Curve and ROC AUC
  • Precision-Recall Curve and AUC
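
As an illustrative sketch with scikit-learn (the labels and probabilities are made up):

# Sketch: ROC and Precision-Recall summaries for one set of
# illustrative binary predictions.
from sklearn.metrics import roc_curve, roc_auc_score, precision_recall_curve, auc

y_true = [0, 1, 1, 0, 1, 0, 1, 0]
y_prob = [0.2, 0.9, 0.7, 0.4, 0.6, 0.1, 0.8, 0.5]

# each point on the ROC curve is a different probability cut-off
fpr, tpr, thresholds = roc_curve(y_true, y_prob)
print(roc_auc_score(y_true, y_prob))   # aggregate measure across cut-offs

# the Precision-Recall curve focuses on the positive class
precision, recall, _ = precision_recall_curve(y_true, y_prob)
print(auc(recall, precision))          # area under the PR curve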


Choice and interpretation of these scoring methods require a foundational understanding of probability theory.

One More Reason

If I could give one more reason, it would be: Because it is fun.

Seriously.

Learning probability, at least the way I teach it with practical examples and executable code, is a lot of fun. Once you can see how the operations work on real data, it is hard to avoid developing a strong intuition for a subject that is often quite unintuitive.

Do you have more reasons why it is critical for an intermediate machine learning practitioner to learn probability?

Let me know in the comments below.


Summary

In this post, you discovered why, as a machine learning practitioner, you should deepen your understanding of probability.

Specifically, you learned:

  • Not everyone should learn probability; it depends on where you are in your journey of learning machine learning.
  • Many algorithms are designed using the tools and techniques from probability, such as Naive Bayes and Probabilistic Graphical Models.
  • The maximum likelihood framework that underlies the training of many machine learning algorithms comes from the field of probability.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


24 Responses to 5 Reasons to Learn Probability for Machine Learning

  1. Weever September 11, 2019 at 8:05 am #

    Hello Jason, from Kenya here. I just want to say thank you for making me a lazy academic but a ruthless applied machine learning engineer; I learn tonnes from you.
    If I can put in a request, could you also put up content in an app? I don't know if it will serve many more, but an app with a daily update other than my email would go a long way; I can sync and read later, even set a notification for when I can read it, or sometimes I need to read or confirm something from your content, etc.

    I do not know its feasibility, and it may only be me having this feeling, but could you try it out or ask your audience and see if it's a good idea? Thank you so much again.

    • Jason Brownlee September 11, 2019 at 2:27 pm #

      Thanks!

      Thanks for the suggestion.

      How will reading the tutorials in the app help you exactly?

  2. ashfaq September 11, 2019 at 9:27 pm #

    You are doing great work, sir.

  3. James September 12, 2019 at 2:43 am #

    Thank you for the wonderful post; I enjoy reading your posts. I was just getting overwhelmed with the math/probability that I need to master before starting machine learning courses. Your post has really helped me to forge ahead. However, I am doing a linear algebra course before starting on machine learning, probably next month. Is it possible to write something on linear algebra and how one should go about it, the way you have done for probability? Thank you.

  4. Tamoor Malik September 13, 2019 at 3:34 pm #

    In your heading “Probabilistic Measures Are Used to Evaluate Model Skill”,

    I guess you missed the CONFUSION MATRIX, which is also used to evaluate the performance of probability-based classifiers…

    • Jason Brownlee September 14, 2019 at 6:11 am #

      I would not consider a confusion matrix as useful for evaluating probabilities.

      I would instead recommend log loss, cross-entropy, and the Brier score.

  5. Anthony The Koala September 15, 2019 at 4:51 pm #

    Dear Dr Jason,
    Thank you for your article. In section 3 you mention the “Bayesian Belief Network” (‘BBN’). I had a look at the Wikipedia article, particularly the example of the conditional conditional (yes, I wrote it twice, and it is the first time I saw a conditional conditional) probability of the grass getting wet either by the sprinkler or the rain or both.

    I have seen reference to ‘BBNs’ on your site. Do your books cover a practical example of ‘BBNs’ and/or do you intend to do an example of a BBN in python?

    Thank you,
    Anthony of Sydney

    • Jason Brownlee September 16, 2019 at 6:33 am #

      I don’t have tutorials on BBN. I will have a little more in the future, and one day I will have a book on probabilistic graphical models.

      Until then, you can start here:

      Probabilistic Graphical Models: Principles and Techniques https://amzn.to/324l0tT

  6. Sam September 17, 2019 at 4:16 am #

    I don’t really agree with your statement that probability isn’t necessary for ML. In fact, I didn’t really like your section on why NOT to learn probability. How can people justify that a model is going to be stable, give consistent results in the future, etc. if you can’t even define what MLE is? Or have some understanding of how you got the predicted values you did?

    Just my opinion, interested to hear what you think.

    • Jason Brownlee September 17, 2019 at 6:35 am #

      Thanks for sharing your opinion Sam.

      I think it is better to get started and learn the basic process of working through a problem first, then circle back to probability.

      You can understand concepts like mean and variance broadly as part of that first step.

      Probability is the scaffold for machine learning, like computability/discrete math for programming. You can do programming without it, but you get much better after learning about it. And if you start with it, you will give up.

      • Sam September 19, 2019 at 3:41 am #

        Thanks for the response Jason. I appreciate that it's always good to get going as quickly as possible; I just worry that in today's day and age, people will create models that could have real impact on people's decisions. If we don't fundamentally understand how we got a given prediction/recommendation, then in certain edge cases we could have problematic results. Herein lies the importance of understanding the fundamentals of what you are doing.

        • Jason Brownlee September 19, 2019 at 6:05 am #

          Perhaps.

          It is no more or less dangerous than developers writing software used by thousands of people where those developers have little background as engineers.

          We develop structures around the project that add guard rails, like TDD, user testing, system testing, etc.

          • Sam September 19, 2019 at 7:55 am #

            Fair enough. I think it’s less common to write software with no experience as an engineer than it is to create models without any fundamental probability/ML understanding, but I understand your point.

            Appreciate your thoughts.

            By the way, I’ve been reading your posts for a while now and really enjoy them—just thought this deserved some attention.

          • Jason Brownlee September 19, 2019 at 1:47 pm #

            Thanks Sam.

            I don’t think we can be black and white on these topics; the industry is full of all types of curious and creative people looking to deliver value.

  7. Daniel November 5, 2019 at 9:02 am #

    Probability for Machine Learning is a good book, but it's pure theory (which I'm sure is really important), and there are no examples of real-world applications on real datasets. Just some simple examples with randomly generated values or arbitrary values.

    • Jason Brownlee November 5, 2019 at 9:24 am #

      Thanks for your thoughts.

      It is not theory, e.g. like a textbook; it is applied, like a field guide.

      Do you have a suggestion of a better way to learn the concepts, Daniel?

  8. Dr. D.K. Sharma June 13, 2020 at 3:09 pm #

    Three reasons why I want to learn probability in the context of machine learning:
    1. Uncertainty is fundamental to the field of machine learning.
    2. To remove the noise existing in channels.
    3. To develop new information measures on the basis of probability.
