
Two-Dimensional (2D) Test Functions for Function Optimization

Function optimization is a field of study that seeks an input to a function that results in the maximum or minimum output of the function.

There are a large number of optimization algorithms, and it is important to study and develop intuitions for these algorithms on simple and easy-to-visualize test functions.

Two-dimensional functions take two input values (x and y) and output a single evaluation of the input. They are among the simplest types of test functions to use when studying function optimization. The benefit of two-dimensional functions is that they can be visualized as a contour plot or surface plot that shows the topography of the problem domain with the optima and samples of the domain marked with points.

In this tutorial, you will discover standard two-dimensional functions you can use when studying function optimization.

Kick-start your project with my new book Optimization for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

Two-Dimensional (2D) Test Functions for Function Optimization. Photo by DomWphoto, some rights reserved.

Tutorial Overview

A two-dimensional function is a function that takes two input variables and computes the objective value.

We can think of the two input variables as two axes on a graph, x and y. Each input to the function is a single point on the graph and the outcome of the function can be taken as the height on the graph.

This allows the function to be conceptualized as a surface, and we can characterize the function based on the structure of that surface. For example, hills correspond to input points that produce relatively large objective values, and valleys correspond to input points that produce relatively small objective values.

A surface may have one major feature, or global optimum, or it may have many features with lots of places for an optimization algorithm to get stuck. The surface may be smooth, noisy, convex, or have any number of other properties that we may care about when testing optimization algorithms.

There are many different types of simple two-dimensional test functions we could use.

Nevertheless, there are standard test functions that are commonly used in the field of function optimization. There are also specific properties of test functions that we may wish to select when testing different algorithms.

We will explore a small number of simple two-dimensional test functions in this tutorial, organized by their properties into two groups; they are:

  1. Unimodal Functions
    1. Unimodal Function 1
    2. Unimodal Function 2
    3. Unimodal Function 3
  2. Multimodal Functions
    1. Multimodal Function 1
    2. Multimodal Function 2
    3. Multimodal Function 3

Each function will be presented using Python code, with an implementation of the target objective function and a sampling of the function shown as a surface plot.

All functions are presented as minimization problems, i.e. find the input that results in the minimum (smallest) output of the function. Any maximizing function can be made a minimization function by negating its output. Similarly, any minimizing function can be made a maximizing function in the same way.
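For example, a minimal sketch of this idea (the function names below are illustrative, not taken from the tutorial) converts a maximization objective into a minimization objective by negating its output:

```python
# illustrative example: converting a maximization objective into a minimization objective
# (these function names are hypothetical, not part of the original tutorial)

# a hypothetical objective we would like to maximize (peak at [0, 0])
def maximize_objective(x, y):
    return -(x**2.0 + y**2.0)

# negate the output so the same optimum can be found by any minimization algorithm
def minimize_objective(x, y):
    return -maximize_objective(x, y)
```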

I did not invent these functions; they are taken from the literature. See the further reading section for references.

You can then choose and copy-paste the code for one or more functions to use in your own project to study or compare the behavior of optimization algorithms.

Unimodal Functions

Unimodal means that the function has a single global optimum.

A unimodal function may or may not be convex. A convex function is one where a line segment drawn between any two points on the function never falls below the function. For a two-dimensional function shown as a contour or surface plot, this means the function has a bowl shape, and the line segment between any two points on the surface remains on or above the bowl.

Let’s look at a few examples of unimodal functions.

Unimodal Function 1

The input range is bounded to -5.0 and 5.0 in each dimension, and the function has one global optimum at [0.0, 0.0].
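The original code listing is not reproduced above, so the sketch below assumes the simplest bowl-shaped objective consistent with the stated bounds and optimum, f(x, y) = x^2 + y^2 (the sphere function); the sampling and plotting code is likewise a minimal sketch rather than the original example.

```python
# surface plot of a simple bowl-shaped unimodal function (assumed form: x^2 + y^2)
from numpy import arange, meshgrid
from matplotlib import pyplot

# objective function (assumed: the sphere function)
def objective(x, y):
    return x**2.0 + y**2.0

# define the bounds of the input space
r_min, r_max = -5.0, 5.0
# sample the input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the sampled axes
x, y = meshgrid(xaxis, yaxis)
# compute the objective value at each point in the mesh
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
```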

Running the example creates a surface plot of the function.

Surface Plot of Unimodal Optimization Function 1

Unimodal Function 2

The input range is bounded to -10.0 and 10.0 in each dimension, and the function has one global optimum at [0.0, 0.0].
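Again, the original listing is not shown, so the sketch below assumes the Matyas function, f(x, y) = 0.26(x^2 + y^2) - 0.48xy, a standard unimodal test function that matches the stated bounds and optimum; treat the specific objective as an assumption.

```python
# surface plot of a unimodal function (assumed form: the Matyas function)
from numpy import arange, meshgrid
from matplotlib import pyplot

# objective function (assumed: the Matyas function)
def objective(x, y):
    return 0.26 * (x**2.0 + y**2.0) - 0.48 * x * y

# define the bounds of the input space
r_min, r_max = -10.0, 10.0
# sample the input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the sampled axes
x, y = meshgrid(xaxis, yaxis)
# compute the objective value at each point in the mesh
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
```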

Running the example creates a surface plot of the function.

Surface Plot of Unimodal Optimization Function 2

Unimodal Function 3

The input range is bounded to -10.0 and 10.0 in each dimension, and the function has one global optimum at [pi, pi]. This function is known as Easom's function.
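Easom's function has a standard definition, f(x, y) = -cos(x)cos(y)exp(-((x - pi)^2 + (y - pi)^2)), with a single sharp global minimum of -1 at [pi, pi]; the sketch below implements that definition, although the plotting scaffold is a minimal sketch rather than the original listing.

```python
# surface plot of Easom's function
from numpy import arange, meshgrid, cos, exp, pi
from matplotlib import pyplot

# objective function: Easom's function
def objective(x, y):
    return -cos(x) * cos(y) * exp(-((x - pi)**2 + (y - pi)**2))

# define the bounds of the input space
r_min, r_max = -10.0, 10.0
# sample the input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the sampled axes
x, y = meshgrid(xaxis, yaxis)
# compute the objective value at each point in the mesh
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
```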

Running the example creates a surface plot of the function.

Surface Plot of Unimodal Optimization Function 3


Multimodal Functions

A multimodal function is a function with more than one “mode” or optimum (e.g. valley).

Multimodal functions are non-convex.

There may be one global optimum and one or more local or deceptive optima. Alternately, there may be multiple global optima, i.e. multiple different inputs that result in the same minimal output of the function.

Let’s look at a few examples of multimodal functions.

Multimodal Function 1

The input range is bounded to -5.0 and 5.0 in each dimension, and the function has one global optimum at [0.0, 0.0]. This function is known as Ackley's function.
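Ackley's function has a standard two-dimensional form, f(x, y) = -20 exp(-0.2 sqrt(0.5(x^2 + y^2))) - exp(0.5(cos(2 pi x) + cos(2 pi y))) + e + 20, with a global minimum of 0 at [0, 0] surrounded by many local minima; the sketch below implements this definition with a minimal plotting scaffold rather than the original listing.

```python
# surface plot of Ackley's multimodal function
from numpy import arange, meshgrid, exp, sqrt, cos, e, pi
from matplotlib import pyplot

# objective function: Ackley's function
def objective(x, y):
    return -20.0 * exp(-0.2 * sqrt(0.5 * (x**2 + y**2))) - exp(0.5 * (cos(2 * pi * x) + cos(2 * pi * y))) + e + 20

# define the bounds of the input space
r_min, r_max = -5.0, 5.0
# sample the input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the sampled axes
x, y = meshgrid(xaxis, yaxis)
# compute the objective value at each point in the mesh
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
```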

Running the example creates a surface plot of the function.

Surface Plot of Multimodal Optimization Function 1

Multimodal Function 2

The input range is bounded to -5.0 and 5.0 in each dimension, and the function has four global optima at [3.0, 2.0], [-2.805118, 3.131312], [-3.779310, -3.283186], and [3.584428, -1.848126]. This function is known as Himmelblau's function.
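Himmelblau's function is defined as f(x, y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2, and each of the four optima listed above evaluates to 0; the sketch below implements this definition with a minimal plotting scaffold rather than the original listing.

```python
# surface plot of Himmelblau's multimodal function
from numpy import arange, meshgrid
from matplotlib import pyplot

# objective function: Himmelblau's function
def objective(x, y):
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

# define the bounds of the input space
r_min, r_max = -5.0, 5.0
# sample the input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the sampled axes
x, y = meshgrid(xaxis, yaxis)
# compute the objective value at each point in the mesh
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
```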

Running the example creates a surface plot of the function.

Surface Plot of Multimodal Optimization Function 2

Multimodal Function 3

The input range is bounded to -10.0 and 10.0 in each dimension, and the function has four global optima at [8.05502, 9.66459], [-8.05502, 9.66459], [8.05502, -9.66459], and [-8.05502, -9.66459]. This function is known as the Holder table function.
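The Holder table function is defined as f(x, y) = -|sin(x)cos(y)exp(|1 - sqrt(x^2 + y^2)/pi|)|, and each of the four optima listed above evaluates to approximately -19.2085; the sketch below implements this definition with a minimal plotting scaffold rather than the original listing.

```python
# surface plot of the Holder table multimodal function
from numpy import arange, meshgrid, sin, cos, exp, sqrt, pi, absolute
from matplotlib import pyplot

# objective function: the Holder table function
def objective(x, y):
    return -absolute(sin(x) * cos(y) * exp(absolute(1 - (sqrt(x**2 + y**2) / pi))))

# define the bounds of the input space
r_min, r_max = -10.0, 10.0
# sample the input range uniformly at 0.1 increments
xaxis = arange(r_min, r_max, 0.1)
yaxis = arange(r_min, r_max, 0.1)
# create a mesh from the sampled axes
x, y = meshgrid(xaxis, yaxis)
# compute the objective value at each point in the mesh
results = objective(x, y)
# create a surface plot with the jet color scheme
figure = pyplot.figure()
axis = figure.add_subplot(projection='3d')
axis.plot_surface(x, y, results, cmap='jet')
# show the plot
pyplot.show()
```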

Running the example creates a surface plot of the function.

Surface Plot of Multimodal Optimization Function 3

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Articles

Summary

In this tutorial, you discovered standard two-dimensional functions you can use when studying function optimization.

Are you using any of the above functions?
Let me know which one in the comments below.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


9 Responses to Two-Dimensional (2D) Test Functions for Function Optimization

  1. Ahgay Gabon, March 26, 2021 at 10:52 am

    Thanks for this awesome addition. But with optimization algorithms, there will be one solution to pass to this function. How do I accomplish that? And how do I plot these benchmarks given only the solution output by an algorithm trial?
    Thanks!

  2. Terrence, May 11, 2021 at 6:11 am

    Hi, for Easom’s function, should the location of the global minimum not be [pi, pi]?

  3. Farzad, May 17, 2021 at 11:10 pm

    What is the best algorithm for optimizing the cutting of two-dimensional plates?
    An algorithm with the least amount of waste; I want to use this algorithm to calculate the MDF cut.

    • Jason Brownlee, May 18, 2021 at 6:15 am

      Perhaps try a suite of algorithms and discover what works well or best for your objective problem.

    • Adrian Tam, August 10, 2021 at 6:53 am

      Why not try CNN-LSTM?

  4. ali, September 3, 2021 at 5:00 pm

    In the Eggholder function, I want to calculate the inputs for specific output values, e.g. if I pass output = -959.6 it will give input = [512, 404].
    Kindly tell me how this is possible.

    • Jason Brownlee, September 4, 2021 at 5:17 am

      You are talking about reverse optimization, or inverting the problem. I don’t believe it is possible/tractable.
