How to Model Residual Errors to Correct Time Series Forecasts with Python

The residual errors from forecasts on a time series provide another source of information that we can model.

Residual errors themselves form a time series that can have temporal structure. A simple autoregression model of this structure can be used to predict the forecast error, which in turn can be used to correct forecasts. This type of model is called a Moving Average (MA) model; it shares a name with moving average smoothing but is a very different technique.

In this tutorial, you will discover how to model a residual error time series and use it to correct predictions with Python.

After completing this tutorial, you will know:

  • About how to model residual error time series using an autoregressive model.
  • How to develop and evaluate a model of residual error time series.
  • How to use a model of residual error to correct predictions and improve forecast skill.

Let’s get started.

  • Update Jan/2017: Improved some of the code examples to be more complete.

Model of Residual Errors

The difference between what was expected and what was predicted is called the residual error.

It is calculated as:

residual error = expected - predicted

Just like the input observations themselves, the residual errors from a time series can have temporal structure like trends, bias, and seasonality.

Any temporal structure in the time series of residual forecast errors is useful as a diagnostic as it suggests information that could be incorporated into the predictive model. An ideal model would leave no structure in the residual error, just random fluctuations that cannot be modeled.

Structure in the residual error can also be modeled directly. There may be complex signals in the residual error that are difficult to directly incorporate into the model. Instead, you can create a model of the residual error time series and predict the expected error for your model.

The predicted error can then be added to the model prediction, in turn providing an additional lift in performance.

A simple and effective model of residual error is an autoregression. This is where some number of lagged error values are used to predict the error at the next time step. These lag errors are combined in a linear regression model, much like an autoregression model of the direct time series observations.

An autoregression of the residual error time series is called a Moving Average (MA) model. This is confusing because it has nothing to do with the moving average smoothing process. Think of it as the sibling to the autoregressive (AR) process, except on lagged residual error rather than lagged raw observations.

In this tutorial, we will develop an autoregression model of the residual error time series.

Before we dive in, let’s look at a univariate dataset for which we will develop a model.

Daily Female Births Dataset

This dataset describes the number of daily female births in California in 1959.

The units are a count and there are 365 observations. The source of the dataset is credited to Newton (1988).

Download and learn more about the dataset here.

Download the dataset and place it in your current working directory with the filename “daily-total-female-births.csv“.

Below is an example of loading the Daily Female Births dataset from CSV.
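
A minimal sketch of doing this with pandas, assuming the file sits in the working directory with a header row and two columns (a date column followed by the birth count):

# load and plot the Daily Female Births dataset
# (assumes a header row and a date column followed by the count of births)
from pandas import read_csv
from matplotlib import pyplot

series = read_csv('daily-total-female-births.csv', header=0, index_col=0, parse_dates=True)
print(series.head())
series.plot()
pyplot.show()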

Running the example prints the first 5 rows of the loaded file.

The dataset is also shown in a line plot of observations over time.

Daily Total Female Births Plot

We can see that there is no obvious trend or seasonality. The dataset appears stationary, which is an assumption of the autoregression model we will use.

Persistence Forecast Model

The simplest forecast we can make is to predict that whatever happened at the previous time step will happen again at the next time step.

This is called the “naive forecast” or the persistence forecast model. This model will provide the predictions from which we can calculate the residual error time series. Alternately, we could develop an autoregression model of the time series and use that as our model. We will not develop an autoregression model in this case for brevity and to focus on the model of residual error.

We can implement the persistence model in Python.

After the dataset is loaded, it is framed as a supervised learning problem. A lagged version of the dataset is created in which the prior time step (t-1) is used as the input variable and the next time step (t+1) is taken as the output variable, as in the sketch below.
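
A sketch of this framing with pandas shift(), assuming the series has been loaded as above:

from pandas import DataFrame, concat

# shift the observations down one step so each row pairs t-1 with t+1
values = DataFrame(series.values)
dataframe = concat([values.shift(1), values], axis=1)
dataframe.columns = ['t-1', 't+1']
print(dataframe.head())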

Next, the dataset is split into training and test sets. A total of 66% of the data is kept for training and the remaining 34% is held for the test set. No training is required for the persistence model; this is just a standard test harness approach.

Once split, the train and test sets are separated into their input and output components.

The persistence model is applied by predicting the output value (y) as a copy of the input value (x).

The residual errors are then calculated as the difference between the expected outcome (test_y) and the prediction (predictions).

The example below puts this all together and gives us the set of residual forecast errors that we can explore in this tutorial.
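
A sketch of that complete example, under the same assumptions about the CSV layout as above:

from pandas import read_csv, DataFrame, concat

# load the dataset
series = read_csv('daily-total-female-births.csv', header=0, index_col=0, parse_dates=True)
# frame as a supervised learning problem: t-1 is the input, t+1 the output
values = DataFrame(series.values)
dataframe = concat([values.shift(1), values], axis=1)
dataframe.columns = ['t-1', 't+1']
X = dataframe.values
# split into train (66%) and test (34%) sets, skipping the first row,
# which holds a NaN introduced by the shift
train_size = int(len(X) * 0.66)
train, test = X[1:train_size], X[train_size:]
train_X, train_y = train[:, 0], train[:, 1]
test_X, test_y = test[:, 0], test[:, 1]
# persistence model: predict y as a copy of x
predictions = [x for x in test_X]
# residual errors: expected outcome minus prediction
residuals = [test_y[i] - predictions[i] for i in range(len(predictions))]
residuals = DataFrame(residuals)
print(residuals.head())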

The example then prints the first 5 rows of the forecast residual errors.

We now have a residual error time series that we can model.

Autoregression of Residual Error

We can model the residual error time series using an autoregression model.

This is a linear regression model that creates a weighted linear sum of lagged residual error terms. For example:

error(t+1) = b0 + b1*error(t) + b2*error(t-1) + ... + bn*error(t-n)

We can use the autoregression model provided by the statsmodels library (the AR class when this tutorial was written, since replaced by AutoReg in current versions).

Building on the persistence model in the previous section, we can first train the model on the residual errors calculated on the training dataset. This requires that we make persistence predictions for each observation in the training dataset, then create the AR model, as follows.
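
A sketch of this step, continuing from the variables in the example above. The statsmodels AR class used when this tutorial was written selected the lag order automatically and has since been removed, so the sketch fixes the order at 15 (the value reported next) using the current AutoReg class:

from statsmodels.tsa.ar_model import AutoReg

# persistence predictions on the training data and their residual errors
train_pred = [x for x in train_X]
train_resid = [train_y[i] - train_pred[i] for i in range(len(train_pred))]
# fit an autoregression to the residual error series
# (lag order fixed at 15 rather than selected automatically)
window = 15
model = AutoReg(train_resid, lags=window)
model_fit = model.fit()
coef = model_fit.params
print('Lag=%d' % window)
print('Coefficients: %s' % coef)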

Running this piece prints the chosen lag of 15 and the 16 coefficients (intercept and one for each lag) of the trained linear regression model.

Next, we can step through the test dataset and for each time step we must:

  1. Calculate the persistence prediction (t+1 = t-1).
  2. Predict the residual error using the autoregression model.

The autoregression model requires the residual error of the 15 previous time steps. Therefore, we must keep these values handy.

As we step through the test dataset time step by time step, making predictions and estimating error, we also calculate the actual residual error and append it to the list of lag values (history) so that the error at the next time step can be predicted.

This is a walk-forward, or rolling, forecast model.

We end up with a time series of the residual forecast error from the train dataset and a predicted residual error on the test dataset.

We can plot these and get a quick idea of how skillful the model is at predicting residual error. The complete example is listed below.
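
A sketch of that complete example, under the same assumptions as above (CSV layout, AutoReg in place of the removed AR class, lag order fixed at 15):

from pandas import read_csv, DataFrame, concat
from statsmodels.tsa.ar_model import AutoReg
from matplotlib import pyplot

# load the dataset and frame it as a supervised learning problem
series = read_csv('daily-total-female-births.csv', header=0, index_col=0, parse_dates=True)
values = DataFrame(series.values)
dataframe = concat([values.shift(1), values], axis=1)
dataframe.columns = ['t-1', 't+1']
X = dataframe.values
# split into train and test sets, skipping the NaN row created by the shift
train_size = int(len(X) * 0.66)
train, test = X[1:train_size], X[train_size:]
train_X, train_y = train[:, 0], train[:, 1]
test_X, test_y = test[:, 0], test[:, 1]
# residual errors of the persistence forecast on the training data
train_resid = [train_y[i] - train_X[i] for i in range(len(train_X))]
# fit an AR model to the training residuals
window = 15
coef = AutoReg(train_resid, lags=window).fit().params
# walk forward over the test set, predicting the residual error at each step
history = train_resid[len(train_resid) - window:]
expected_error = list()
pred_error = list()
for t in range(len(test_y)):
    # persistence prediction and its actual residual error
    yhat = test_X[t]
    error = test_y[t] - yhat
    expected_error.append(error)
    # predict the error as a weighted sum of the last `window` errors
    length = len(history)
    lag = [history[i] for i in range(length - window, length)]
    pred = coef[0]
    for d in range(window):
        pred += coef[d + 1] * lag[window - d - 1]
    pred_error.append(pred)
    history.append(error)
    print('predicted error=%f, expected error=%f' % (pred, error))
# plot the actual residual error (blue) against the predicted error (red)
pyplot.plot(expected_error)
pyplot.plot(pred_error, color='red')
pyplot.show()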

Running the example first prints the predicted and expected residual error for each time step in the test dataset.

Next, the actual residual error for the time series is plotted (blue) compared to the predicted residual error (red).

Prediction of Residual Error Time Series

Now that we know how to model residual error, next we will look at how we can go about correcting forecasts and improving model skill.

Correct Predictions with a Model of Residual Errors

A model of forecast residual error is interesting in its own right, but it can also be used to make better predictions.

With a good estimate of forecast error at a time step, we can make better predictions.

For example, we can add the expected forecast error to a prediction to correct it and in turn improve the skill of the model.

Let’s make this concrete with an example.

Let’s say that the expected value for a time step is 10. The model predicts 8 and estimates the error to be 3. The improved forecast would be:

improved forecast = prediction + estimated error
improved forecast = 8 + 3
improved forecast = 11

This takes the actual forecast error from 2 units to 1 unit.

We can update the example from the previous section to add the estimated forecast error to the persistence forecast as follows:
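
The key change sits inside the walk-forward loop of the previous example, where the predicted residual error (pred) is added to the persistence forecast; the complete listing below collects the corrected forecasts in a predictions list:

# persistence prediction
yhat = test_X[t]
# corrected forecast: add the predicted residual error to the prediction
yhat = yhat + pred
predictions.append(yhat)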

The complete example is listed below.
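
A sketch of that complete example, with the same caveats as before; because the lag order is fixed rather than selected automatically, the exact error values may differ slightly from those reported below:

from pandas import read_csv, DataFrame, concat
from statsmodels.tsa.ar_model import AutoReg
from sklearn.metrics import mean_squared_error
from matplotlib import pyplot

# load the dataset and frame it as a supervised learning problem
series = read_csv('daily-total-female-births.csv', header=0, index_col=0, parse_dates=True)
values = DataFrame(series.values)
dataframe = concat([values.shift(1), values], axis=1)
dataframe.columns = ['t-1', 't+1']
X = dataframe.values
# split into train and test sets, skipping the NaN row created by the shift
train_size = int(len(X) * 0.66)
train, test = X[1:train_size], X[train_size:]
train_X, train_y = train[:, 0], train[:, 1]
test_X, test_y = test[:, 0], test[:, 1]
# fit an AR model to the residual errors of the persistence forecast on train
train_resid = [train_y[i] - train_X[i] for i in range(len(train_X))]
window = 15
coef = AutoReg(train_resid, lags=window).fit().params
# walk forward over the test set, correcting each persistence forecast
history = train_resid[len(train_resid) - window:]
predictions = list()
for t in range(len(test_y)):
    # persistence prediction
    yhat = test_X[t]
    # predict the residual error from the last `window` errors
    length = len(history)
    lag = [history[i] for i in range(length - window, length)]
    pred_error = coef[0]
    for d in range(window):
        pred_error += coef[d + 1] * lag[window - d - 1]
    # corrected forecast = persistence prediction + predicted error
    yhat = yhat + pred_error
    predictions.append(yhat)
    # update the lag values with the actual error of the persistence forecast
    history.append(test_y[t] - test_X[t])
    print('predicted=%f, expected=%f' % (yhat, test_y[t]))
# report the error and plot expected values (blue) vs corrected forecast (red)
print('Test MSE: %.3f' % mean_squared_error(test_y, predictions))
pyplot.plot(test_y)
pyplot.plot(predictions, color='red')
pyplot.show()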

Running the example prints the predictions and the expected outcome for each time step in the test dataset.

The mean squared error of the corrected forecasts is calculated to be 56.234, which is much better than the score of 83.744 for the persistence model alone.

Finally, the expected values for the test dataset are plotted (blue) compared to the corrected forecast (red).

We can see that the persistence model has been aggressively corrected back to a time series that looks something like a moving average.

Corrected Persistence Forecast for Daily Female Births

Summary

In this tutorial, you discovered how to model a residual error time series and use it to correct predictions with Python.

Specifically, you learned:

  • About the Moving Average (MA) approach to developing an autoregressive model of residual error.
  • How to develop and evaluate a model of residual error to predict forecast error.
  • How to use the predictions of forecast error to correct predictions and improve model skill.

Do you have any questions about Moving Average models, or about this tutorial?
Ask your questions in the comments below and I will do my best to answer.
