TPOT for Automated Machine Learning in Python

Automated Machine Learning (AutoML) refers to techniques for automatically discovering well-performing models for predictive modeling tasks with very little user involvement.

TPOT is an open-source library for performing AutoML in Python. It makes use of the popular Scikit-Learn machine learning library for data transforms and machine learning algorithms and uses a Genetic Programming stochastic global search procedure to efficiently discover a top-performing model pipeline for a given dataset.

In this tutorial, you will discover how to use TPOT for AutoML with Scikit-Learn machine learning algorithms in Python.

After completing this tutorial, you will know:

  • TPOT is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
  • How to use TPOT to automatically discover top-performing models for classification tasks.
  • How to use TPOT to automatically discover top-performing models for regression tasks.

Let’s get started.

Photo by Gwen, some rights reserved.

Tutorial Overview

This tutorial is divided into four parts; they are:

  1. TPOT for Automated Machine Learning
  2. Install and Use TPOT
  3. TPOT for Classification
  4. TPOT for Regression

TPOT for Automated Machine Learning

Tree-based Pipeline Optimization Tool, or TPOT for short, is a Python library for automated machine learning.

TPOT uses a tree-based structure to represent a model pipeline for a predictive modeling problem, including data preparation and modeling algorithms and model hyperparameters.

… an evolutionary algorithm called the Tree-based Pipeline Optimization Tool (TPOT) that automatically designs and optimizes machine learning pipelines.

Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

An optimization procedure is then performed to find a tree structure that performs best for a given dataset. Specifically, a genetic programming algorithm is used, designed to perform stochastic global optimization on programs represented as trees.

TPOT uses a version of genetic programming to automatically design and optimize a series of data transformations and machine learning models that attempt to maximize the classification accuracy for a given supervised learning data set.

Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

The figure below taken from the TPOT paper shows the elements involved in the pipeline search, including data cleaning, feature selection, feature processing, feature construction, model selection, and hyperparameter optimization.

Overview of the TPOT Pipeline Search
Taken from: Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science, 2016.

Now that we are familiar with what TPOT is, let’s look at how we can install and use TPOT to find an effective model pipeline.

Install and Use TPOT

The first step is to install the TPOT library, which can be achieved using pip, as follows:

Once installed, we can import the library and print the version number to confirm it was installed successfully:

Running the example prints the version number.

Your version number should be the same or higher.

Using TPOT is straightforward.

It involves creating an instance of the TPOTRegressor or TPOTClassifier class, configuring it for the search, and then exporting the model pipeline that was found to achieve the best performance on your dataset.

Configuring the class involves two main elements.

The first is how models will be evaluated, e.g. the cross-validation scheme and performance metric. I recommend explicitly specifying a cross-validation class with your chosen configuration and the performance metric to use.

For example, RepeatedKFold with the ‘neg_mean_absolute_error‘ metric for regression:

Or RepeatedStratifiedKFold with the ‘accuracy‘ metric for classification:

The other element is the nature of the stochastic global search procedure.

As an evolutionary algorithm, this involves setting configuration options, such as the size of the population, the number of generations to run, and potentially crossover and mutation rates. The population size and number of generations importantly control the extent of the search; the crossover and mutation rates can be left at their default values if evolutionary search is new to you.

For example, a modest population size of 100 and 5 or 10 generations is a good starting point.

At the end of a search, a Pipeline is found that performs the best.

This Pipeline can be exported as code into a Python file that you can later copy-and-paste into your own project.

Now that we are familiar with how to use TPOT, let’s look at some worked examples with real data.

TPOT for Classification

In this section, we will use TPOT to discover a model for the sonar dataset.

The sonar dataset is a standard machine learning dataset comprised of 208 rows of data with 60 numerical input variables and a target variable with two class values, i.e. binary classification.

Using a test harness of repeated stratified 10-fold cross-validation with three repeats, a naive model can achieve an accuracy of about 53 percent. A top-performing model can achieve accuracy on this same test harness of about 88 percent. This provides the bounds of expected performance on this dataset.

The dataset involves predicting whether sonar returns indicate a rock or simulated mine.

No need to download the dataset; we will download it automatically as part of our worked examples.

The example below downloads the dataset and summarizes its shape.
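A sketch of that example; the URL below points at a hosted copy of the dataset and is an assumption:

```python
from pandas import read_csv

# location of a hosted copy of the sonar dataset (assumed available)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/sonar.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)
```
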

Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 208 rows of data with 60 input variables.

Next, let’s use TPOT to find a good model for the sonar dataset.

First, we can define the method for evaluating models. We will use a good practice of repeated stratified k-fold cross-validation with three repeats and 10 folds.

We will use a population size of 50 for five generations for the search and use all cores on the system by setting “n_jobs” to -1.

Finally, we can start the search and ensure that the best-performing model is saved at the end of the run.

Tying this together, the complete example is listed below.

Running the example may take a few minutes, and you will see a progress bar on the command line.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

The accuracy of top-performing models will be reported along the way.

In this case, we can see that the top-performing pipeline achieved a mean accuracy of about 86.6 percent. This is a skillful model, and close to a top-performing model on this dataset.

The top-performing pipeline is then saved to a file named “tpot_sonar_best_model.py“.

Opening this file, you can see that there is some generic code for loading a dataset and fitting the pipeline. An example is listed below.

Note: as-is, this code does not execute, by design. It is a template that you can copy-and-paste into your project.

In this case, we can see that the best-performing model is a pipeline comprised of a Naive Bayes model and a Gradient Boosting model.

We can adapt this code to fit a final model on all available data and make a prediction for new data.

The complete example is listed below.

Running the example fits the best-performing model on the dataset and makes a prediction for a single row of new data.

TPOT for Regression

In this section, we will use TPOT to discover a model for the auto insurance dataset.

The auto insurance dataset is a standard machine learning dataset comprised of 63 rows of data with one numerical input variable and a numerical target variable.

Using a test harness of repeated 10-fold cross-validation with three repeats, a naive model can achieve a mean absolute error (MAE) of about 66. A top-performing model can achieve a MAE on this same test harness of about 28. This provides the bounds of expected performance on this dataset.

The dataset involves predicting the total amount in claims (thousands of Swedish Kronor) given the number of claims for different geographical regions.

No need to download the dataset; we will download it automatically as part of our worked examples.

The example below downloads the dataset and summarizes its shape.
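A sketch of that example; the URL below points at a hosted copy of the dataset and is an assumption:

```python
from pandas import read_csv

# location of a hosted copy of the auto insurance dataset (assumed available)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
dataframe = read_csv(url, header=None)
# split into input and output elements
data = dataframe.values
X, y = data[:, :-1], data[:, -1]
print(X.shape, y.shape)
```
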

Running the example downloads the dataset and splits it into input and output elements. As expected, we can see that there are 63 rows of data with one input variable.

Next, we can use TPOT to find a good model for the auto insurance dataset.

First, we can define the method for evaluating models. We will use a good practice of repeated k-fold cross-validation with three repeats and 10 folds.

We will use a population size of 50 for 5 generations for the search and use all cores on the system by setting “n_jobs” to -1.

Finally, we can start the search and ensure that the best-performing model is saved at the end of the run.

Tying this together, the complete example is listed below.

Running the example may take a few minutes, and you will see a progress bar on the command line.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

The MAE of top-performing models will be reported along the way.

In this case, we can see that the top-performing pipeline achieved a mean MAE of about 29.14. This is a skillful model, and close to a top-performing model on this dataset.

The top-performing pipeline is then saved to a file named “tpot_insurance_best_model.py“.

Opening this file, you can see that there is some generic code for loading a dataset and fitting the pipeline. An example is listed below.

Note: as-is, this code does not execute, by design. It is a template that you can copy-and-paste into your project.

In this case, we can see that the best-performing model is a pipeline comprised of a linear support vector machine model.

We can adapt this code to fit a final model on all available data and make a prediction for new data.

The complete example is listed below.
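A sketch of that adaptation. The linear support vector machine follows the pipeline described above, but the hyperparameters are illustrative stand-ins rather than the exact values your run will report, and the dataset URL is a hosted copy assumed to be available:

```python
# fit a final model and make a prediction (illustrative hyperparameters)
from pandas import read_csv
from sklearn.svm import LinearSVR

# load and prepare the dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/auto-insurance.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1].astype('float32'), data[:, -1].astype('float32')

# a linear support vector machine for regression
exported_pipeline = LinearSVR(C=1.0, epsilon=0.0001, dual=True,
                              loss='epsilon_insensitive', tol=0.001)
# fit the final model on all available data
exported_pipeline.fit(X, y)

# make a prediction for an illustrative new row of data: 13 claims
row = [[13]]
yhat = exported_pipeline.predict(row)
print('Predicted: %.3f' % yhat[0])
```
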

Running the example fits the best-performing model on the dataset and makes a prediction for a single row of new data.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Summary

In this tutorial, you discovered how to use TPOT for AutoML with Scikit-Learn machine learning algorithms in Python.

Specifically, you learned:

  • TPOT is an open-source library for AutoML with scikit-learn data preparation and machine learning models.
  • How to use TPOT to automatically discover top-performing models for classification tasks.
  • How to use TPOT to automatically discover top-performing models for regression tasks.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


41 Responses to TPOT for Automated Machine Learning in Python

  1. xiaoning September 9, 2020 at 9:06 am #

    perfect!!!

  2. Prox September 9, 2020 at 9:11 am #

    Thank you for your tutorial! Extremely useful!
    The only thing is that I get the following result for the TPOT for Regression code:
    WARNING:stopit:Code block execution exceeded 2 seconds timeout
    Traceback (most recent call last):

    stopit.utils.TimeoutException

    Is there something wrong?

    • Jason Brownlee September 9, 2020 at 1:31 pm #

      Thanks.

      Looks like a warning, perhaps ignore for now.

  3. Hutudi September 9, 2020 at 12:28 pm #

    Is it good for high dimensions?

  4. Piotr September 9, 2020 at 4:41 pm #

    Jason, very good tutorial. I like the AutoML series. TPOT and Auto-Sklearn were among the first AutoML packages. The ability to search for the best models is really helpful and speeds up the data science process. Today, other aspects of ML are becoming important, like explainability. An ML model can’t be a black box and should provide information about how it works and why it makes such predictions. This greatly helps to understand the data and the model. There is an AutoML package that produces extensive explanations for models: https://github.com/mljar/mljar-supervised I hope you will find it valuable and will present it to your readers. Cheers.

  5. Michael Klein September 9, 2020 at 8:24 pm #

    Thanks for advancing my (non-technical) understanding of concepts used by developers.

    For complex real world challenges such as climate change or urban logistics, how high an accuracy may result given that naivity is contrary to currently accepted theory in Physics, and that underlying technology and people’s and social philosophy changes over time?

    • Jason Brownlee September 10, 2020 at 6:29 am #

      You’re welcome.

      I don’t follow your question, sorry. Not sure we can address climate change with simple predictive models.

  6. ndcharles September 10, 2020 at 4:00 pm #

    I really appreciate this especially the fact that hyperparameter tuning is giving me headache right now.

    However, in your classification model, you encoded y as a string. Any reason for that? (I thought all model parameters are meant to be numeric.)

    # minimally prepare dataset
    X = X.astype('float32')
    y = LabelEncoder().fit_transform(y.astype('str'))

    • Jason Brownlee September 11, 2020 at 5:50 am #

      Thanks.

      Yes, I’m ensuring the variables provided to the label encoder prior to ordinal encoding are a string. It’s an old habit.

  7. shaheen mohammed saleh September 12, 2020 at 5:23 pm #

    How many algorithms or models are there within TPOT, many or few? And do you prefer automatically discovering well-performing models, or doing it manually?

    • Jason Brownlee September 13, 2020 at 6:01 am #

      It will search many combinations of sklearn models.

  8. shaheen mohammed saleh September 12, 2020 at 5:28 pm #

    If you prefer automatically discovering well-performing models which one do you prefer and why? thank you.

    1- Autosklearn
    2- TPOT
    3- Hyperopt-sklearn

    • Jason Brownlee September 13, 2020 at 6:01 am #

      Perhaps try each on your project and use the one you prefer or that best meets your requirements.

  9. Grzegorz Kępisty September 15, 2020 at 10:41 pm #

    Good afternoon Jason,

    Great article and examples!

    Question: I understand the idea of stacking as: data -> several algorithms -> intermediate outputs -> next algorithm -> final prediction. In your classification example there is optimal model : stacking of GaussianNaiveBayes and later GradientBoosting. Is there inside only one GNB model (which looks too simple) or do I miss something?

    Best regards!

    • Jason Brownlee September 16, 2020 at 6:22 am #

      Sometimes a simple model performs well or best.

  10. Anthony The Koala September 17, 2020 at 5:19 pm #

    Dear Dr Jason,
    I ran the first example and the output was not the same.

    Compared to your example

    In other words, why did I get MLPClassifier as my best classifier with score 0.877 while you got LinearSVR with score 0.8667, yet I am running the same code?

    Thank you
    Anthony of Sydney

  11. Anthony The Koala September 17, 2020 at 7:05 pm #

    Dear Dr Jason,
    Similarly for the regression I got:

    and your experiment produced LinearBestSVR with score of -29.148

    Same code used on my computer but slightly different results.

    Thank you,
    Anthony of Sydney

    • Jason Brownlee September 18, 2020 at 6:42 am #

      Yes, this is to be expected given the stochastic nature of the optimization algorithm.

  12. ahmed December 5, 2020 at 8:58 am #

    AttributeError: ‘TPOTClassifier’ object has no attribute ‘_optimized_pipeline’

  13. Max March 25, 2021 at 9:05 pm #

    Dear Dr Jason,
    Is it possible to optimize also pipeline steps like scaler or encoder? Or it can optimize only model hyperparameters?

    • Jason Brownlee March 26, 2021 at 6:25 am #

      I think so – I believe TPOT does or supports this.

  14. Kate April 20, 2021 at 7:24 pm #

    How does one now put these results from TPOT into production?

  15. Ben Bartling May 23, 2021 at 10:13 pm #

    Hi Jason, in the TPOT regressor example, as with my trial of the code, the line from sklearn.model_selection import train_test_split is automatically included in the boilerplate .py file that TPOT generates. Would it be good to make use of this to validate the model? Or, in your example, without using it, what are some other validations that can be used for regression? Also, can TPOT be applied to a regression problem made up of time series data?

    • Jason Brownlee May 24, 2021 at 5:45 am #

      It might help as a starting point code for validation of the model.

      It would be risky using this framework for time series as I don’t think the temporal ordering of samples would be respected making the evaluation invalid.

  16. Mehdi July 6, 2021 at 12:05 am #

    Hi Jason,
    Thanks for the tutorial
    I used TPOT for hyperparameter optimization for GradientBoostingRegressor. But I ran to the following error.

    Terminals are required to have a unique name. Consider using the argument ‘name’ to rename your second GradientBoostingRegressor__learning_rate=0 terminal.

    Could you please share your thoughts on that?

    Cheers.

    • Jason Brownlee July 6, 2021 at 5:49 am #

      Perhaps try an alternate model type?
      Perhaps check all your libraries are up to date?
      Perhaps contact the tpot project?
      Perhaps post code and error on stackoverflow?

  17. Maryam Zeinolabedini Rezaabad August 25, 2021 at 8:46 pm #

    Hi Jason,

    Thank you very much for the tutorial.

    I have two questions.
    1. Is there any way to see the details (for example, the tree structure of the genetic programming, etc.) of the best model, or of all the models built in each generation?

    2. In genetic programming, is the initial random population generated from the dataframe given by us (real data) or not?

    Thank you in advance,

    • Adrian Tam August 27, 2021 at 5:34 am #

      If you want to get detail of each generation, consider the checkpoint parameter in tpot. But you may need to write some code to visualize the details from the checkpoints.

  18. Jaret May 9, 2022 at 11:35 pm #

    Dear Dr. Jason,
    Your idea helps me a lot. However, I wonder how I can get the result with MSE instead of MAE. Looking forward to your reply.
    I appreciate any help you can provide.

    • James Carmichael May 10, 2022 at 12:09 pm #

      Hi Jaret…You could modify the following:

      scoring='neg_mean_absolute_error'

      to

      scoring='neg_mean_squared_error'

      • Jaret May 11, 2022 at 12:28 am #

        Dear Dr. Jason,
        Thank you for your reply; now I have mastered scoring. But what does (n_splits=10, n_repeats=3, random_state=1) mean in the first model that reads the CSV file? As a new machine learning scholar, your suggestion was really helpful to me. Thank you for your generosity.

  19. John White August 30, 2022 at 3:11 am #

    Hello,

    Would it be necessary or make sense to run TPOT after receiving new training data or when we decide to retrain a current model? Thank you

    -John

  20. Akansha October 25, 2023 at 12:09 am #

    Dear Dr. Jason,
    I tried the example for classification, but I am not getting the same output. Instead, I am getting the error “A pipeline has not yet been optimized. Please call fit() first”.

  21. Akansha October 30, 2023 at 5:35 pm #

    Hi James,
    Thank you for your reply. I checked the resource. They have mentioned that there might be a problem with the data itself. So I checked the sonar dataset link and it was not working. I will try to use a dataset from my computer to see if it works.
