
Why Optimization Is Important in Machine Learning

Machine learning involves using an algorithm to learn and generalize from historical data in order to make predictions on new data.

This problem can be described as approximating a function that maps examples of inputs to examples of outputs. Approximating a function can be solved by framing the problem as function optimization: a machine learning algorithm defines a parameterized mapping function (e.g. a weighted sum of inputs), and an optimization algorithm is used to find the values of the parameters (e.g. model coefficients) that minimize the error of the function when it is used to map inputs to outputs.

This means that each time we fit a machine learning algorithm on a training dataset, we are solving an optimization problem.

In this tutorial, you will discover the central role of optimization in machine learning.

After completing this tutorial, you will know:

  • Machine learning algorithms perform function approximation, which is solved using function optimization.
  • Function optimization is the reason why we minimize error, cost, or loss when fitting a machine learning algorithm.
  • Optimization is also performed during data preparation, hyperparameter tuning, and model selection in a predictive modeling project.

Kick-start your project with my new book Optimization for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

Why Optimization Is Important in Machine Learning
Photo by Marco Verch, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  1. Machine Learning and Optimization
  2. Learning as Optimization
  3. Optimization in a Machine Learning Project
    1. Data Preparation as Optimization
    2. Hyperparameter Tuning as Optimization
    3. Model Selection as Optimization

Machine Learning and Optimization

Function optimization is the problem of finding the set of inputs to a target objective function that results in the minimum or maximum of the function.

It can be a challenging problem, as the function may have tens, hundreds, thousands, or even millions of inputs, and its structure is often unknown, non-differentiable, and noisy.

  • Function Optimization: Find the set of inputs that results in the minimum or maximum of an objective function.

Machine learning can be described as function approximation. That is, approximating the unknown underlying function that maps examples of inputs to outputs in order to make predictions on new data.

It can be challenging as there is often a limited number of examples from which we can approximate the function, and the structure of the function that is being approximated is often nonlinear, noisy, and may even contain contradictions.

  • Function Approximation: Generalize from specific examples to a reusable mapping function for making predictions on new examples.

Function optimization is often simpler than function approximation.

Importantly, in machine learning, we often solve the problem of function approximation using function optimization.

At the core of nearly all machine learning algorithms is an optimization algorithm.

In addition, the process of working through a predictive modeling problem involves optimization at multiple steps in addition to learning a model, including:

  • Choosing the hyperparameters of a model.
  • Choosing the transforms to apply to the data prior to modeling.
  • Choosing the modeling pipeline to use as the final model.

Now that we know that optimization plays a central role in machine learning, let’s look at some examples of learning algorithms and how they use optimization.


Learning as Optimization

Predictive modeling problems involve making a prediction from an example of input.

A numeric quantity must be predicted in the case of a regression problem, whereas a class label must be predicted in the case of a classification problem.

The problem of predictive modeling is sufficiently challenging that we cannot write code to make predictions directly. Instead, we must use a learning algorithm applied to historical data to learn a “program,” called a predictive model, that we can use to make predictions on new data.

In statistical learning, a statistical perspective on machine learning, the problem is framed as the learning of a mapping function (f) given examples of input data (X) and associated output data (y).

  • y = f(X)

Given new examples of input (Xhat), we must map each example onto the expected output value (yhat) using our learned function (fhat).

  • yhat = fhat(Xhat)

The learned mapping will be imperfect. No model is perfect, and some prediction error is expected given the difficulty of the problem, noise in the observed data, and the choice of learning algorithm.

Mathematically, learning algorithms solve the problem of approximating the mapping function by solving a function optimization problem.

Specifically, given examples of inputs and outputs, find the values of the mapping function’s parameters that result in the minimum loss, minimum cost, or minimum prediction error. In this framing, the parameters being learned are the inputs to the optimization problem.
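As a minimal sketch of this framing (using SciPy on illustrative toy data), the code below fits a line by explicitly minimizing a mean squared error objective over the two coefficients:

```python
# Sketch: framing model fitting as function optimization with SciPy.
# The "inputs" being optimized are the two coefficients of a line.
import numpy as np
from scipy.optimize import minimize

# toy training data: y = 2x + 1 plus noise
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, 50)
y = 2.0 * X + 1.0 + rng.normal(0, 0.1, 50)

# objective: mean squared error of the mapping function for given coefficients
def mse(coeffs):
    slope, intercept = coeffs
    yhat = slope * X + intercept
    return np.mean((y - yhat) ** 2)

# minimize the error; the solution approximates the unknown mapping function
result = minimize(mse, x0=[0.0, 0.0])
print(result.x)  # coefficients close to [2.0, 1.0]
```

Here the error surface is convex, so the search reliably finds the global minimum; more flexible models do not share this property.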

The more biased or constrained the choice of mapping function, the easier the optimization is to solve.

Let’s look at some examples to make this clear.

A linear regression (for regression problems) is a highly constrained model and can be solved analytically using linear algebra. The inputs to the optimization problem are the coefficients of the model.

We can use an optimization algorithm, like a quasi-Newton local search algorithm, but it will almost always be less efficient than the analytical solution.

  • Linear Regression: Function inputs are the model coefficients; the optimization problem can be solved analytically.
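As a sketch of what “solved analytically” means here, the least-squares coefficients can be computed directly with NumPy’s linear algebra routines (toy data for illustration):

```python
# Sketch: solving linear regression analytically with linear algebra (least squares).
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (50, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(0, 0.1, 50)

# add a column of ones so the intercept is just another coefficient
A = np.hstack([X, np.ones((len(X), 1))])

# closed-form least-squares solution; no iterative search required
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)  # approximately [2.0, 1.0]
```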

A logistic regression (for classification problems) is slightly less constrained and must be solved as an optimization problem, although something is known about the structure of the objective function being solved, given the constraints imposed by the model.

This means a local search algorithm like a quasi-Newton method can be used. We could use a global search like stochastic gradient descent, but it will almost always be less efficient.

  • Logistic Regression: Function inputs are the model coefficients; the optimization problem requires an iterative local search algorithm.
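For illustration, a minimal sketch using scikit-learn, whose LogisticRegression estimator supports the L-BFGS quasi-Newton solver (the dataset is synthetic):

```python
# Sketch: logistic regression fit with an iterative quasi-Newton solver (L-BFGS).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=1)

# 'lbfgs' is a quasi-Newton local search over the model coefficients
model = LogisticRegression(solver="lbfgs", max_iter=1000)
model.fit(X, y)
print(model.coef_, model.intercept_)
```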

A neural network model is a very flexible learning algorithm that imposes few constraints. The inputs to the optimization problem are the network weights. A local search algorithm cannot be used, given that the search space is multimodal and highly nonlinear; instead, a global search algorithm must be used.

A global optimization algorithm is commonly used, specifically stochastic gradient descent, with the updates made in a way that is aware of the structure of the model (backpropagation and the chain rule). We could use a global search algorithm that is oblivious to the structure of the model, like a genetic algorithm, but it will almost always be less efficient.

  • Neural Network: Function inputs are the model weights; the optimization problem requires an iterative global search algorithm.
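As a sketch, scikit-learn’s MLPClassifier can be fit with a stochastic gradient descent solver; the network size and learning rate below are arbitrary illustrative choices on a synthetic dataset:

```python
# Sketch: a small neural network fit with stochastic gradient descent.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=1)

# the optimizer searches the multimodal, nonlinear space of network weights
model = MLPClassifier(hidden_layer_sizes=(20,), solver="sgd",
                      learning_rate_init=0.01, max_iter=500, random_state=1)
model.fit(X, y)
print(model.score(X, y))
```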

We can see that each algorithm makes different assumptions about the form of the mapping function, which influences the type of optimization problem to be solved.

We can also see that the default optimization algorithm used for each machine learning algorithm is not arbitrary; it represents the most efficient algorithm for solving the specific optimization problem framed by the algorithm, e.g. stochastic gradient descent for neural nets instead of a genetic algorithm. Deviating from these defaults requires a good reason.

Not all machine learning algorithms solve an optimization problem. A notable example is the k-nearest neighbors algorithm, which stores the training dataset and looks up the k best matches for each new example in order to make a prediction.
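A minimal sketch with scikit-learn’s KNeighborsClassifier illustrates this: the fit step simply stores the training data, and no loss is minimized:

```python
# Sketch: k-nearest neighbors "learns" without solving an optimization problem.
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=1)

# fit() stores the training data; prediction is a lookup of the k closest rows
model = KNeighborsClassifier(n_neighbors=5)
model.fit(X, y)
print(model.predict(X[:3]))
```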

Now that we are familiar with learning in machine learning algorithms as optimization, let’s look at some related examples of optimization in a machine learning project.

Optimization in a Machine Learning Project

Optimization plays an important part in a machine learning project in addition to fitting the learning algorithm on the training dataset.

The step of preparing the data prior to fitting the model and the step of tuning a chosen model can each also be framed as optimization problems. In fact, an entire predictive modeling project can be thought of as one large optimization problem.

Let’s take a closer look at each of these cases in turn.

Data Preparation as Optimization

Data preparation involves transforming raw data into a form that is most appropriate for the learning algorithms.

This might involve scaling values, handling missing values, and changing the probability distribution of variables.

Transforms can be applied to change the representation of the historical data to meet the expectations or requirements of specific learning algorithms. Yet sometimes good, or even the best, results can be achieved when those expectations are violated, or when a seemingly unrelated transform is applied to the data.

We can think of choosing transforms to apply to the training data as a search or optimization problem of best exposing the unknown underlying structure of the data to the learning algorithm.

  • Data Preparation: Function inputs are sequences of transforms; the optimization problem requires an iterative global search algorithm.

This optimization problem is often performed manually with human-based trial and error. Nevertheless, it is possible to automate this task using a global optimization algorithm where the inputs to the function are the types and order of transforms applied to the training data.

The number of data transforms and their permutations is typically quite limited, so it may be possible to perform an exhaustive search or a grid search over commonly used sequences.
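As a sketch of the idea (not a recommended recipe), a scikit-learn pipeline lets a grid search treat the choice of scaling transform itself as a search variable; the candidate transforms below are arbitrary examples:

```python
# Sketch: treating the choice of data transform as a grid search problem.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, PowerTransformer, StandardScaler

X, y = make_classification(n_samples=200, n_features=5, random_state=1)

pipeline = Pipeline([("scale", StandardScaler()), ("model", LogisticRegression())])

# candidate transforms are the "inputs" to the optimization problem
grid = {"scale": [StandardScaler(), MinMaxScaler(), PowerTransformer(), "passthrough"]}
search = GridSearchCV(pipeline, grid, cv=5)
search.fit(X, y)
print(search.best_params_)
```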


Hyperparameter Tuning as Optimization

Machine learning algorithms have hyperparameters that can be configured to tailor the algorithm to a specific dataset.

Although the dynamics of many hyperparameters are known, the specific effect they will have on the performance of the resulting model on a given dataset is not known. As such, it is a standard practice to test a suite of values for key algorithm hyperparameters for a chosen machine learning algorithm.

This is called hyperparameter tuning or hyperparameter optimization.

It is common to use a naive optimization algorithm for this purpose, such as a random search algorithm or a grid search algorithm.

  • Hyperparameter Tuning: Function inputs are the algorithm hyperparameters; the optimization problem requires an iterative global search algorithm.
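A minimal sketch with scikit-learn’s RandomizedSearchCV, treating the regularization hyperparameter C of logistic regression as the search variable (the distribution and budget are illustrative choices):

```python
# Sketch: naive hyperparameter optimization with a random search.
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=200, n_features=5, random_state=1)

# sample candidate values of the regularization hyperparameter C
distributions = {"C": loguniform(1e-3, 1e3)}
search = RandomizedSearchCV(LogisticRegression(), distributions,
                            n_iter=50, cv=5, random_state=1)
search.fit(X, y)
print(search.best_params_)
```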


Nevertheless, it is becoming increasingly common to use an iterative global search algorithm for this optimization problem. A popular choice is Bayesian optimization, which approximates the target function being optimized (using a surrogate function) while simultaneously optimizing it.

This is desirable as evaluating a single combination of model hyperparameters is expensive, requiring fitting the model on the entire training dataset one or many times, depending on the choice of model evaluation procedure (e.g. repeated k-fold cross-validation).
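As a sketch, and assuming the third-party scikit-optimize (skopt) package is installed and compatible with your scikit-learn version, its BayesSearchCV estimator performs this surrogate-guided search:

```python
# Sketch: Bayesian optimization of hyperparameters, assuming the third-party
# scikit-optimize package (skopt) is installed.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from skopt import BayesSearchCV

X, y = make_classification(n_samples=200, n_features=5, random_state=1)

# a surrogate model of the objective guides which candidate to evaluate next
search = BayesSearchCV(LogisticRegression(),
                       {"C": (1e-3, 1e3, "log-uniform")},
                       n_iter=30, cv=5, random_state=1)
search.fit(X, y)
print(search.best_params_)
```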


Model Selection as Optimization

Model selection involves choosing one from among many candidate machine learning models for a predictive modeling problem.

Really, it involves choosing the machine learning algorithm or machine learning pipeline that produces a model; that choice is then used to train a final model, which can be used in the desired application to make predictions on new data.

This process of model selection is often a manual process performed by a machine learning practitioner involving tasks such as preparing data, evaluating candidate models, tuning well-performing models, and finally choosing the final model.

This can be framed as an optimization problem that subsumes part of or the entire predictive modeling project.

  • Model Selection: Function inputs are the data transforms, machine learning algorithm, and algorithm hyperparameters; the optimization problem requires an iterative global search algorithm.
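A minimal sketch of this framing: enumerate a handful of candidate pipelines (the candidates and hyperparameters below are arbitrary examples) and select the one with the best cross-validated score:

```python
# Sketch: model selection framed as choosing among candidate pipelines.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = make_classification(n_samples=200, n_features=5, random_state=1)

# each candidate bundles a data transform, an algorithm, and its hyperparameters
candidates = {
    "scaled logistic": Pipeline([("t", StandardScaler()), ("m", LogisticRegression(C=1.0))]),
    "minmax logistic": Pipeline([("t", MinMaxScaler()), ("m", LogisticRegression(C=0.1))]),
    "random forest": Pipeline([("m", RandomForestClassifier(n_estimators=100))]),
}

# evaluate each candidate and keep the best-scoring pipeline
scores = {name: cross_val_score(p, X, y, cv=5).mean() for name, p in candidates.items()}
print(max(scores, key=scores.get), scores)
```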

Increasingly, this is the case, with automated machine learning (AutoML) algorithms used to choose an algorithm; an algorithm and its hyperparameters; or the data preparation, algorithm, and hyperparameters together, with very little user intervention.


As with hyperparameter tuning, it is common to use a global search algorithm that also approximates the objective function, such as Bayesian optimization, given that each function evaluation is expensive.

This automated optimization approach to machine learning also underlies modern machine learning as a service (MLaaS) products provided by companies such as Google, Microsoft, and Amazon.

Although fast and efficient, such approaches are generally still unable to outperform models hand-crafted by highly skilled experts, such as those participating in machine learning competitions.


Summary

In this tutorial, you discovered the central role of optimization in machine learning.

Specifically, you learned:

  • Machine learning algorithms perform function approximation, which is solved using function optimization.
  • Function optimization is the reason why we minimize error, cost, or loss when fitting a machine learning algorithm.
  • Optimization is also performed during data preparation, hyperparameter tuning, and model selection in a predictive modeling project.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.

