How to Use XGBoost for Time Series Forecasting

Last Updated on August 27, 2020

XGBoost is an efficient implementation of gradient boosting for classification and regression problems.

It is fast and memory-efficient, performing well, if not best, on a wide range of predictive modeling tasks, and it is a favorite among data science competition winners, such as those on Kaggle.

XGBoost can also be used for time series forecasting, although it requires that the time series dataset be transformed into a supervised learning problem first. It also requires the use of a specialized technique for evaluating the model called walk-forward validation, as evaluating the model using k-fold cross validation would result in optimistically biased results.

In this tutorial, you will discover how to develop an XGBoost model for time series forecasting.

After completing this tutorial, you will know:

  • XGBoost is an implementation of the gradient boosting ensemble algorithm for classification and regression.
  • Time series datasets can be transformed into supervised learning using a sliding-window representation.
  • How to fit, evaluate, and make predictions with an XGBoost model for time series forecasting.

Kick-start your project with my new book XGBoost With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Update Aug/2020: Fixed bug in the calculation of MAE, updated model config to make better predictions (thanks Kaustav!)
How to Use XGBoost for Time Series Forecasting
Photo by gothopotam, some rights reserved.

Tutorial Overview

This tutorial is divided into three parts; they are:

  1. XGBoost Ensemble
  2. Time Series Data Preparation
  3. XGBoost for Time Series Forecasting

XGBoost Ensemble

XGBoost is short for Extreme Gradient Boosting and is an efficient implementation of the stochastic gradient boosting machine learning algorithm.

The stochastic gradient boosting algorithm, also called gradient boosting machines or tree boosting, is a powerful machine learning technique that performs well or even best on a wide range of challenging machine learning problems.

Tree boosting has been shown to give state-of-the-art results on many standard classification benchmarks.

XGBoost: A Scalable Tree Boosting System, 2016.

It is a decision tree ensemble algorithm where new trees correct the errors of trees that are already part of the model. Trees are added until no further improvements can be made to the model.

XGBoost provides a highly efficient implementation of the stochastic gradient boosting algorithm and access to a suite of model hyperparameters designed to provide control over the model training process.

The most important factor behind the success of XGBoost is its scalability in all scenarios. The system runs more than ten times faster than existing popular solutions on a single machine and scales to billions of examples in distributed or memory-limited settings.

XGBoost: A Scalable Tree Boosting System, 2016.

XGBoost is designed for classification and regression on tabular datasets, although it can be used for time series forecasting.

For more on gradient boosting and the XGBoost implementation, see the tutorial:

First, the XGBoost library must be installed.

You can install it using pip, as follows:
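```
pip install xgboost
```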

Once installed, you can confirm that it was installed successfully and that you are using a modern version by running the following code:
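```python
# check xgboost version
import xgboost
print(xgboost.__version__)
```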

Running the code, you should see the following version number or higher.
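For example, at the time of writing this printed something like the following; the exact number will depend on the release you have installed:

```
1.1.1
```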

Although the XGBoost library has its own Python API, we can use XGBoost models with the scikit-learn API via the XGBRegressor wrapper class.

An instance of the model can be instantiated and used just like any other scikit-learn class for model evaluation. For example:
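The snippet below is a minimal sketch using contrived training data (the array values are illustrative only):

```python
# minimal sketch: use XGBRegressor like any other scikit-learn estimator
from numpy import asarray
from xgboost import XGBRegressor
# contrived training data: 3 samples with 2 lag inputs each
trainX = asarray([[100, 110], [110, 108], [108, 115]])
trainy = asarray([108, 115, 120])
# define and fit the model (illustrative configuration)
model = XGBRegressor(objective='reg:squarederror', n_estimators=100)
model.fit(trainX, trainy)
# make a one-step prediction for a new input row
yhat = model.predict(asarray([[115, 120]]))
print(yhat)
```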

Now that we are familiar with XGBoost, let’s look at how we can prepare a time series dataset for supervised learning.

Time Series Data Preparation

Time series data can be phrased as supervised learning.

Given a sequence of numbers for a time series dataset, we can restructure the data to look like a supervised learning problem. We can do this by using previous time steps as input variables and using the next time step as the output variable.

Let’s make this concrete with an example. Imagine we have a time series as follows:
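```
time, measure
1, 100
2, 110
3, 108
4, 115
5, 120
```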

We can restructure this time series dataset as a supervised learning problem by using the value at the previous time step to predict the value at the next time step.

Reorganizing the time series dataset this way, the data would look as follows:
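```
X, y
?, 100
100, 110
110, 108
108, 115
115, 120
120, ?
```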

Note that the time column is dropped and some rows of data are unusable for training a model, such as the first and the last.

This representation is called a sliding window, as the window of inputs and expected outputs is shifted forward through time to create new “samples” for a supervised learning model.

For more on the sliding window approach to preparing time series forecasting data, see the tutorial:

We can use the shift() function in Pandas to automatically create new framings of time series problems given the desired length of input and output sequences.

This would be a useful tool as it would allow us to explore different framings of a time series problem with machine learning algorithms to see which might result in better-performing models.

The function below will take a time series as a NumPy array with one or more columns and transform it into a supervised learning problem with the specified number of inputs and outputs.
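A sketch of such a function, here named series_to_supervised() for illustration, is listed below:

```python
from pandas import DataFrame, concat

# transform a time series dataset into a supervised learning dataset
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    df = DataFrame(data)
    cols = list()
    # input sequence (t-n, ..., t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
    # forecast sequence (t, t+1, ..., t+n-1)
    for i in range(0, n_out):
        cols.append(df.shift(-i))
    # put it all together
    agg = concat(cols, axis=1)
    # drop rows with NaN values (the unusable first/last rows)
    if dropnan:
        agg.dropna(inplace=True)
    return agg.values
```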

We can use this function to prepare a time series dataset for XGBoost.

For more on the step-by-step development of this function, see the tutorial:

Once the dataset is prepared, we must be careful in how it is used to fit and evaluate a model.

For example, it would not be valid to fit the model on data from the future and have it predict the past. The model must be trained on the past and predict the future.

This means that methods that randomize the dataset during evaluation, like k-fold cross-validation, cannot be used. Instead, we must use a technique called walk-forward validation.

In walk-forward validation, the dataset is first split into train and test sets by selecting a cut point, e.g. all data except the last 12 months is used for training and the last 12 months is used for testing.

If we are interested in making a one-step forecast, e.g. one month, then we can evaluate the model by training on the training dataset and predicting the first step in the test dataset. We can then add the real observation from the test set to the training dataset, refit the model, then have the model predict the second step in the test dataset.

Repeating this process for the entire test dataset will give a one-step prediction for the entire test dataset from which an error measure can be calculated to evaluate the skill of the model.

For more on walk-forward validation, see the tutorial:

The function below performs walk-forward validation.

It takes the entire supervised learning version of the time series dataset and the number of rows to use as the test set as arguments.

It then steps through the test set, calling the xgboost_forecast() function to make a one-step forecast. An error measure is calculated and the details are returned for analysis.
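A sketch of the function is listed below; note that it relies on the train_test_split() and xgboost_forecast() helper functions defined next:

```python
from sklearn.metrics import mean_absolute_error

# walk-forward validation for univariate data
def walk_forward_validation(data, n_test):
    predictions = list()
    # split dataset into train and test sets
    train, test = train_test_split(data, n_test)
    # seed history with the training dataset
    history = [x for x in train]
    # step over each time step in the test set
    for i in range(len(test)):
        # split test row into input and output columns
        testX, testy = test[i, :-1], test[i, -1]
        # fit model on history and make a one-step prediction
        yhat = xgboost_forecast(history, testX)
        # store the forecast in the list of predictions
        predictions.append(yhat)
        # add the actual observation to history for the next loop
        history.append(test[i])
        # summarize progress
        print('>expected=%.1f, predicted=%.1f' % (testy, yhat))
    # estimate prediction error
    error = mean_absolute_error(test[:, -1], predictions)
    return error, test[:, -1], predictions
```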

The train_test_split() function is called to split the dataset into train and test sets.

We can define this function below.
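```python
# split a univariate dataset into train/test sets
def train_test_split(data, n_test):
    return data[:-n_test, :], data[-n_test:, :]
```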

We can use the XGBRegressor class to make a one-step forecast.

The xgboost_forecast() function below implements this, taking the training dataset and test input row as input, fitting a model, and making a one-step prediction.
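A sketch of the function, using the model configuration described later in the tutorial, is listed below:

```python
from numpy import asarray
from xgboost import XGBRegressor

# fit an xgboost model and make a one-step prediction
def xgboost_forecast(train, testX):
    # transform list into array
    train = asarray(train)
    # split into input and output columns
    trainX, trainy = train[:, :-1], train[:, -1]
    # fit model
    model = XGBRegressor(objective='reg:squarederror', n_estimators=1000)
    model.fit(trainX, trainy)
    # make a one-step prediction
    yhat = model.predict(asarray([testX]))
    return yhat[0]
```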

Now that we know how to prepare time series data for forecasting and evaluate an XGBoost model, next we can look at using XGBoost on a real dataset.

XGBoost for Time Series Forecasting

In this section, we will explore how to use XGBoost for time series forecasting.

We will use a standard univariate time series dataset with the intent of using the model to make a one-step forecast.

You can use the code in this section as the starting point in your own project and easily adapt it for multivariate inputs, multivariate forecasts, and multi-step forecasts.

We will use the daily female births dataset, that is, the number of female births recorded each day over one year.

You can download the dataset from here and place it in your current working directory with the filename “daily-total-female-births.csv“.

The first few lines of the dataset look as follows:
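```
"Date","Births"
"1959-01-01",35
"1959-01-02",32
"1959-01-03",30
"1959-01-04",31
"1959-01-05",44
...
```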

First, let’s load and plot the dataset.

The complete example is listed below.
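```python
# load and plot the daily female births dataset (a minimal sketch)
from pandas import read_csv
from matplotlib import pyplot
# load the dataset (assumes the csv file is in the current working directory)
series = read_csv('daily-total-female-births.csv', header=0, index_col=0)
values = series.values
# plot the dataset
pyplot.plot(values)
pyplot.show()
```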

Running the example creates a line plot of the dataset.

We can see there is no obvious trend or seasonality.

Line Plot of the Daily Female Births Time Series Dataset

A persistence model can achieve an MAE of about 6.7 births when predicting the last 12 days. This provides a baseline in performance above which a model may be considered skillful.

Next, we can evaluate the XGBoost model on the dataset when making one-step forecasts for the last 12 days of data.

We will use only the previous 6 time steps as input to the model and default model hyperparameters, except we will set the objective to ‘reg:squarederror‘ (to avoid a warning message) and use 1,000 trees in the ensemble (to avoid underlearning).

The complete example is listed below.
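```python
# forecast daily female births with xgboost via walk-forward validation
# (a sketch combining the functions developed above)
from numpy import asarray
from pandas import read_csv, DataFrame, concat
from sklearn.metrics import mean_absolute_error
from xgboost import XGBRegressor
from matplotlib import pyplot

# transform a time series dataset into a supervised learning dataset
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    df = DataFrame(data)
    cols = list()
    # input sequence (t-n, ..., t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
    # forecast sequence (t, t+1, ..., t+n-1)
    for i in range(0, n_out):
        cols.append(df.shift(-i))
    agg = concat(cols, axis=1)
    # drop rows with NaN values
    if dropnan:
        agg.dropna(inplace=True)
    return agg.values

# split a univariate dataset into train/test sets
def train_test_split(data, n_test):
    return data[:-n_test, :], data[-n_test:, :]

# fit an xgboost model and make a one-step prediction
def xgboost_forecast(train, testX):
    train = asarray(train)
    # split into input and output columns
    trainX, trainy = train[:, :-1], train[:, -1]
    # fit model
    model = XGBRegressor(objective='reg:squarederror', n_estimators=1000)
    model.fit(trainX, trainy)
    # make a one-step prediction
    yhat = model.predict(asarray([testX]))
    return yhat[0]

# walk-forward validation for univariate data
def walk_forward_validation(data, n_test):
    predictions = list()
    train, test = train_test_split(data, n_test)
    # seed history with the training dataset
    history = [x for x in train]
    # step over each time step in the test set
    for i in range(len(test)):
        testX, testy = test[i, :-1], test[i, -1]
        # fit model on history and make a prediction
        yhat = xgboost_forecast(history, testX)
        predictions.append(yhat)
        # add the actual observation to history for the next loop
        history.append(test[i])
        print('>expected=%.1f, predicted=%.1f' % (testy, yhat))
    # estimate prediction error
    error = mean_absolute_error(test[:, -1], predictions)
    return error, test[:, -1], predictions

# load the dataset
series = read_csv('daily-total-female-births.csv', header=0, index_col=0)
values = series.values
# transform the time series data into supervised learning using 6 lag values
data = series_to_supervised(values, n_in=6)
# evaluate on the last 12 observations
mae, y, yhat = walk_forward_validation(data, 12)
print('MAE: %.3f' % mae)
# plot expected vs predicted values
pyplot.plot(y, label='Expected')
pyplot.plot(yhat, label='Predicted')
pyplot.legend()
pyplot.show()
```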

Running the example reports the expected and predicted values for each step in the test set, then the MAE for all predicted values.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

We can see that the model performs better than a persistence model, achieving an MAE of about 5.9 births, compared to 6.7 births.

Can you do better?
You can test different XGBoost hyperparameters and numbers of time steps as input to see if you can achieve better performance. Share your results in the comments below.

A line plot is created comparing the series of expected values and predicted values for the last 12 days of the dataset.

This gives a visual indication of how well the model performed on the test set.

Line Plot of Expected vs. Predicted Births Using XGBoost

Once a final XGBoost model configuration is chosen, a model can be finalized and used to make a prediction on new data.

This is called an out-of-sample forecast, e.g. predicting beyond the training dataset. This is identical to making a prediction during the evaluation of the model, as we always want to evaluate a model using the same procedure that we expect to use when the model makes predictions on new data.

The example below demonstrates fitting a final XGBoost model on all available data and making a one-step prediction beyond the end of the dataset.
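```python
# finalize an xgboost model and make a one-step out-of-sample forecast
# (a sketch; series_to_supervised() is the same helper developed above)
from numpy import asarray
from pandas import read_csv, DataFrame, concat
from xgboost import XGBRegressor

# transform a time series dataset into a supervised learning dataset
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    df = DataFrame(data)
    cols = list()
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
    for i in range(0, n_out):
        cols.append(df.shift(-i))
    agg = concat(cols, axis=1)
    if dropnan:
        agg.dropna(inplace=True)
    return agg.values

# load the dataset
series = read_csv('daily-total-female-births.csv', header=0, index_col=0)
values = series.values
# transform the time series data into supervised learning
train = series_to_supervised(values, n_in=6)
# split into input and output columns
trainX, trainy = train[:, :-1], train[:, -1]
# fit the model on all available data
model = XGBRegressor(objective='reg:squarederror', n_estimators=1000)
model.fit(trainX, trainy)
# construct an input row from the last 6 known observations
row = values[-6:].flatten()
# make a one-step prediction beyond the end of the dataset
yhat = model.predict(asarray([row]))
print('Input: %s, Predicted: %.3f' % (row, yhat[0]))
```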

Running the example fits an XGBoost model on all available data.

A new row of input is prepared using the last 6 days of known data, and the value for the next day beyond the end of the dataset is predicted.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Related Tutorials

Summary

In this tutorial, you discovered how to develop an XGBoost model for time series forecasting.

Specifically, you learned:

  • XGBoost is an implementation of the gradient boosting ensemble algorithm for classification and regression.
  • Time series datasets can be transformed into supervised learning using a sliding-window representation.
  • How to fit, evaluate, and make predictions with an XGBoost model for time series forecasting.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.


37 Responses to How to Use XGBoost for Time Series Forecasting

  1. Tatiana August 5, 2020 at 7:51 pm #

    The result does not really look convincing

    • Jason Brownlee August 6, 2020 at 6:11 am #

      Fair enough.

      Consider the model a template that you can apply on your own projects.

  2. B s kambo August 6, 2020 at 2:10 am #

    Excellent explained very nicely
    Keep it up

  3. Rahul Kalluri August 6, 2020 at 7:15 am #

    This article doesn’t make a cogent argument for using XGBoost for time-series or time dependent data.

    Without any sort of weighting based on time, the algorithm has no way of knowing how to incorporate time – it just looks at isolated points e.g. A yields 400, B yields 510 with no chronological relationship between A and B. The expected vs predicted graph you show clearly indicates that the model fails to establish a proper relationship between time and predictions.

    I’ve tried to implement XGBoost in financial forecasting with 2 years historical data, it just doesn’t work well. Sometimes you can get better accuracies with ensembling techniques, but nothing really beats a true time series model. In that case, I’d use the pmdarima package and the auto.arima function is fantastic.

    I get that you could use this as an example template, but I think it’s not really instructional until you measure this against a time-series model or apply some sort of time weights to non-time series models to get a clear idea of what options exist.

    At worst, this article is misleading. At best, it’s flawed and requires more testing and examples.

    I appreciate you putting this out there because it brings up some good questions on how to approach time series problems with some more flexibility, I’d look forward to a more thorough article on this topic.

    • Jason Brownlee August 6, 2020 at 7:55 am #

      Thanks for sharing, sorry it does not work for your specific datasets.

      I disagree that it is misleading.

    • Hovanes August 8, 2020 at 4:50 am #

      I agree with Rahul, in that this does not seem to account for things that time-series models are designed to address, such as seasonality, whether the data is stationary or not, etc.

      • Jason Brownlee August 8, 2020 at 6:07 am #

        Sure, only try it on your data if you think it offers some benefit over other methods.

      • Daniel September 17, 2020 at 6:17 am #

        You can engineer some new features that will potentially account for seasonality if you are creative enough.

  4. Anthony The Koala August 7, 2020 at 8:53 am #

    Dear Dr Jason,
    I have a question on single variable data such as “sunspot” data. There are no X values or features. It is called “univariate” as shown in your blog https://machinelearningmastery.com/time-series-datasets-for-machine-learning/. Yes univariate datasets have date information as in dd-mm-yyyy info or it could be derived by an array of x = [i for i in range(len(mydataset))].

    Can model evaluation such as train-test-splitting, model, RepeatedClassifiedKFold and cross_val_score be performed on univariate time series with time a feature of X and y the univariate data series, for example sunspot data?

    Thank you,
    Anthony of Sydney

    • Jason Brownlee August 7, 2020 at 1:29 pm #

      It is univariate. Date/times are dropped.

      Cross-validation is generally invalid for time series data, see this:
      https://machinelearningmastery.com/backtest-machine-learning-models-time-series-forecasting/

      • Anthony The Koala August 8, 2020 at 2:54 am #

        Dear Dr Jason,
        Thank you for averting to the site https://machinelearningmastery.com/backtest-machine-learning-models-time-series-forecasting/.

        Despite no cross-validation, nevertheless train/test split is still performed and the sunspot data is used as an example dataset to “experiment” with.

        Thank you, it is appreciated.
        Anthony of Sydney

  5. siegfried Vanaverbeke August 8, 2020 at 2:39 am #

    Obviously, if you have the birth rate from 1960 and from then on, we can really test how good the model is.
    Also, since the time is left out, you cannot treat gappy time series, as often happens for natural phenomena like variable star research.

  6. Alex August 8, 2020 at 7:25 am #

    Hi Jason! do you think we could have a multivariate variation of this?

  7. Cooper Chastain August 11, 2020 at 11:16 pm #

    Do you need to detrend and deseasonalize the data when using XGBoost?

    • Jason Brownlee August 12, 2020 at 6:10 am #

      Depends. Try with and without and compare the results.

  8. Ben August 21, 2020 at 10:37 am #

    Hi Jason, can the code be modified to make more than a one step prediction at the end of the dataset? If so any tips 🙂

    For example could I go further out than yhat = model.predict(asarray([row]))? I am also running my own dataset, one month of a building electricity usage (kW) on 15 minute intervals… My results are pretty good for the expected & predicted plots.

    I was just curious about being able to predict more than one 15-minute data point ahead. Thanks

  9. Ben August 22, 2020 at 4:25 am #

    Jason one other question. If I create some plots with the code for expected & predicted analysis. Can I save this model in like a pickle to use on the prediction code? OR would the models be the same parameters between the
    # forecast monthly births with xgboost
    and # forecast monthly births with xgboost scripts?

    • Jason Brownlee August 22, 2020 at 6:21 am #

      I believe you can pickle an xgboost model. Perhaps test it to confirm.

  10. Kaustav Datta August 24, 2020 at 11:33 am #

    Wondering why you have returned test[:,1] in the walk_forward_validation() function, and why that is being used to calculate the error? Shouldn’t it be test[:,-1]? We are predicting the last column, right? Hence it should be compared with the last column.

    • Jason Brownlee August 24, 2020 at 1:53 pm #

      You’re right, looks like a typo.

      Fixed. Thanks!

      • Kaustav Datta August 24, 2020 at 2:53 pm #

        Even what you’re returning should be corrected then right?

        • Jason Brownlee August 25, 2020 at 6:37 am #

          Correct!

          No idea what I was thinking. More coffee is needed…

          Thanks for pointing out these dumb errors.

  11. Gibram September 1, 2020 at 11:46 am #

    Hi Jason. Thanks for the material.

    I think there is an error on error calculus:

    mean_absolute_error(test[:, -1], predictions)

    If you pay attention to “test[:, -1]”, you will notice the array isn’t aligned with the correct values.

    Am I right?

    • Gibram September 1, 2020 at 11:59 am #

      Sorry. I made a confusion. It’s ok!

    • Jason Brownlee September 1, 2020 at 1:46 pm #

      I believe the code is correct.

      It is common for models to mostly forecast the previous value as the next value, called a persistence forecast. When plotted, it looks like the forecast is one step behind the observations.

  12. dangou September 16, 2020 at 12:09 am #

    Hi Jason. Thanks for the material. After reading your explanation about xgboost, I want to try to use this method for time series forecasting. You mentioned in the article that this method can be extended to multivariate input. I want to use several parameters to predict the cyclical trend of another correlated parameter. How should I adjust the existing method?

    • Jason Brownlee September 16, 2020 at 6:26 am #

      You’re welcome.

      The same function for preparing the data can be used directly I believe. Try it and see.

  13. Suhwan September 22, 2020 at 2:49 pm #

    Hi Jason.
    Thanks for providing helpful tutorials. I am subscribing your super bundle package, and all of them are very useful for self training.

    Wonder if you have solutions for multivariate, multi-timestep forecasts using XGBoost. I could not find it in your book, xgboost_with_python. Thanks!
