Autoregression Models for Time Series Forecasting With Python

Autoregression is a time series model that uses observations from previous time steps as input to a regression equation to predict the value at the next time step.

It is a very simple idea that can result in accurate forecasts on a range of time series problems.

In this tutorial, you will discover how to implement an autoregressive model for time series forecasting with Python.

After completing this tutorial, you will know:

  • How to explore your time series data for autocorrelation.
  • How to develop an autoregression model and use it to make predictions.
  • How to use a developed autoregression model to make rolling predictions.

Let’s get started.

Photo by Umberto Salvagnin, some rights reserved.

Autoregression

A regression model, such as linear regression, models an output value based on a linear combination of input values.

For example:

yhat = b0 + b1*X

Where yhat is the prediction, b0 and b1 are coefficients found by optimizing the model on training data, and X is an input value.

This technique can be used on time series where input variables are taken as observations at previous time steps, called lag variables.

For example, we can predict the value for the next time step (t+1) given the observations at the last two time steps (t-1 and t-2). As a regression model, this would look as follows:

X(t+1) = b0 + b1*X(t-1) + b2*X(t-2)
Because the regression model uses data from the same input variable at previous time steps, it is referred to as an autoregression (regression of self).

Autocorrelation

An autoregression model assumes that the observations at previous time steps are useful for predicting the value at the next time step.

This relationship between variables is called correlation.

If both variables change in the same direction (e.g. go up together or down together), this is called a positive correlation. If the variables move in opposite directions as values change (e.g. one goes up and one goes down), then this is called negative correlation.

We can use statistical measures to calculate the correlation between the output variable and values at previous time steps at various lags. The stronger the correlation between the output variable and a specific lagged variable, the more weight the autoregression model can put on that variable when modeling.

Again, because the correlation is calculated between the variable and itself at previous time steps, it is called an autocorrelation. It is also called serial correlation because of the sequenced structure of time series data.

The correlation statistics can also help to choose which lag variables will be useful in a model and which will not.

Interestingly, if all lag variables show low or no correlation with the output variable, then it suggests that the time series problem may not be predictable. This can be very useful when getting started on a new dataset.

In this tutorial, we will investigate the autocorrelation of a univariate time series, then develop an autoregression model and use it to make predictions.

Before we do that, let’s first review the Minimum Daily Temperatures data that will be used in the examples.

Minimum Daily Temperatures Dataset

This dataset describes the minimum daily temperatures over 10 years (1981-1990) in the city of Melbourne, Australia.

The units are in degrees Celsius and there are 3,650 observations. The source of the data is credited as the Australian Bureau of Meteorology.

Download the dataset into your current working directory with the filename “daily-minimum-temperatures.csv”.

Note: The downloaded file contains some question mark (“?”) characters that must be removed before you can use the dataset. Open the file in a text editor and remove the “?” characters. Also remove any footer information in the file.

The code below will load the dataset as a Pandas Series.
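A minimal sketch, assuming the cleaned file is saved as “daily-minimum-temperatures.csv” in the current working directory (the squeeze('columns') call collapses the single-column DataFrame into a Series):

```python
from pandas import read_csv
from matplotlib import pyplot

# load the dataset as a Series with a date index and one temperature column
series = read_csv('daily-minimum-temperatures.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
print(series.head())
# line plot of the whole series
series.plot()
pyplot.show()
```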

Running the example prints the first 5 rows from the loaded dataset.

A line plot of the dataset is then created.

Minimum Daily Temperature Dataset Plot

Quick Check for Autocorrelation

There is a quick, visual check that we can do to see if there is an autocorrelation in our time series dataset.

We can plot the observation at one time step (t) against the observation at the next time step (t+1) as a scatter plot.

This could be done manually by first creating a lagged version of the time series dataset and using a built-in scatter plot function in the Pandas library.

But there is an easier way.

Pandas provides a built-in plot to do exactly this, called the lag_plot() function.

Below is an example of creating a lag plot of the Minimum Daily Temperatures dataset.
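A sketch using the same loading code as above; lag_plot() defaults to a lag of 1:

```python
from pandas import read_csv
from pandas.plotting import lag_plot
from matplotlib import pyplot

series = read_csv('daily-minimum-temperatures.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
# scatter plot of each observation against the observation one step later
lag_plot(series)
pyplot.show()
```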

Running the example plots the temperature data for each day (t) on the x-axis against the temperature on the following day (t+1) on the y-axis.

Minimum Daily Temperature Dataset Lag Plot

We can see a large ball of observations along a diagonal line of the plot. It clearly shows a relationship or some correlation.

This process could be repeated for any other lagged observation, such as if we wanted to review the relationship with the last 7 days or with the same day last month or last year.

Another quick check that we can do is to directly calculate the correlation between the observation and the lag variable.

We can use a statistical measure like the Pearson correlation coefficient. This produces a number between -1 (perfectly negatively correlated) and +1 (perfectly positively correlated) that summarizes how correlated two variables are. Values close to zero indicate a weak correlation, while values above 0.5 or below -0.5 indicate a strong correlation.

Correlation can be calculated easily using the corr() function on the DataFrame of the lagged dataset.

The example below creates a lagged version of the Minimum Daily Temperatures dataset and calculates a correlation matrix of each column with other columns, including itself.
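One way to sketch this, shifting the series down by one step and concatenating it with the original so the two columns can be correlated:

```python
from pandas import read_csv, concat, DataFrame

series = read_csv('daily-minimum-temperatures.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
# build a two-column dataset: the observation at t-1 and the observation at t+1
values = DataFrame(series.values)
dataframe = concat([values.shift(1), values], axis=1)
dataframe.columns = ['t-1', 't+1']
# Pearson correlation matrix between the two columns
result = dataframe.corr()
print(result)
```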

This is a good confirmation for the plot above.

It shows a strong positive correlation (0.77) between the observation and the lag=1 value.

This is good for one-off checks, but tedious if we want to check a large number of lag variables in our time series.

Next, we will look at a scaled-up version of this approach.

Autocorrelation Plots

We can plot the correlation coefficient for each lag variable.

This can very quickly give an idea of which lag variables may be good candidates for use in a predictive model and how the relationship between the observation and its historic values changes over time.

We could manually calculate the correlation values for each lag variable and plot the result. Thankfully, Pandas provides a built-in plot called the autocorrelation_plot() function.

The plot shows the lag number along the x-axis and the correlation coefficient, between -1 and 1, on the y-axis. The plot also includes solid and dashed lines that indicate the 95% and 99% confidence intervals for the correlation values. Correlation values above these lines are more significant than those below them, providing a threshold or cutoff for selecting more relevant lag values.
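A minimal sketch, again assuming the dataset loading shown earlier:

```python
from pandas import read_csv
from pandas.plotting import autocorrelation_plot
from matplotlib import pyplot

series = read_csv('daily-minimum-temperatures.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
# plot the correlation coefficient for every possible lag
autocorrelation_plot(series)
pyplot.show()
```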

Running the example shows the swing in positive and negative correlation as the temperature values change across summer and winter seasons each previous year.

Pandas Autocorrelation Plot

The statsmodels library also provides a version of the plot in the plot_acf() function as a line plot.

In this example, we limit the lag variables evaluated to 31 for readability.
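A sketch using plot_acf(), with the lags argument limiting the plot to the first 31 lags:

```python
from pandas import read_csv
from statsmodels.graphics.tsaplots import plot_acf
from matplotlib import pyplot

series = read_csv('daily-minimum-temperatures.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
# autocorrelation for the first 31 lags only
plot_acf(series, lags=31)
pyplot.show()
```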

Statsmodels Autocorrelation Plot

Now that we know how to review the autocorrelation in our time series, let’s look at modeling it with an autoregression.

Before we do that, let’s establish a baseline performance.

Persistence Model

Let’s say that we want to develop a model to predict the last 7 days of minimum temperatures in the dataset given all prior observations.

The simplest model that we could use to make predictions would be to persist the last observation. We can call this a persistence model and it provides a baseline of performance for the problem that we can use for comparison with an autoregression model.

We can develop a test harness for the problem by splitting the observations into training and test sets, with only the last 7 observations in the dataset assigned to the test set as “unseen” data that we wish to predict.

The predictions are made using a walk-forward validation model so that we can persist the most recent observations for the next day. This means that we are not making one 7-day forecast, but seven 1-day forecasts.
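A sketch of the full test harness; the lagged-dataset construction mirrors the correlation example above:

```python
from pandas import read_csv, concat, DataFrame
from sklearn.metrics import mean_squared_error
from matplotlib import pyplot

series = read_csv('daily-minimum-temperatures.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
# create a lagged dataset of (t-1, t+1) pairs
values = DataFrame(series.values)
dataframe = concat([values.shift(1), values], axis=1)
dataframe.columns = ['t-1', 't+1']
X = dataframe.values
# hold out the last 7 observations as the test set
train, test = X[1:len(X) - 7], X[len(X) - 7:]
train_X, train_y = train[:, 0], train[:, 1]
test_X, test_y = test[:, 0], test[:, 1]

# persistence model: the forecast is simply the previous observation
def model_persistence(x):
    return x

# walk-forward validation: one 1-day forecast per test observation
predictions = [model_persistence(x) for x in test_X]
test_score = mean_squared_error(test_y, predictions)
print('Test MSE: %.3f' % test_score)

# plot expected (blue) vs predicted (red)
pyplot.plot(test_y)
pyplot.plot(predictions, color='red')
pyplot.show()
```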

Running the example prints the mean squared error (MSE).

The value provides a baseline performance for the problem.

The expected values for the next 7 days are plotted (blue) compared to the predictions from the model (red).

Predictions From Persistence Model

Autoregression Model

An autoregression model is a linear regression model that uses lagged variables as input variables.

We could calculate the linear regression model manually using the LinearRegression class in scikit-learn and manually specify the lag input variables to use.

Alternately, the statsmodels library provides an autoregression model that automatically selects an appropriate lag value using statistical tests and trains a linear regression model. It is provided in the AR class.

We can use this model by first creating the model AR() and then calling fit() to train it on our dataset. This returns an ARResults object.

Once fit, we can use the model to make a prediction by calling the predict() function for a number of observations in the future. This creates one 7-day forecast, which is different from the persistence example above.

The complete example is listed below.
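A sketch of the complete example, assuming a statsmodels version in which the AR class is still available (recent releases replace it with AutoReg in statsmodels.tsa.ar_model):

```python
from pandas import read_csv
from statsmodels.tsa.ar_model import AR
from sklearn.metrics import mean_squared_error
from matplotlib import pyplot

series = read_csv('daily-minimum-temperatures.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
# split the dataset, keeping the last 7 observations for testing
X = series.values
train, test = X[1:len(X) - 7], X[len(X) - 7:]
# train the autoregression, letting statsmodels choose the lag
model = AR(train)
model_fit = model.fit()
print('Lag: %s' % model_fit.k_ar)
print('Coefficients: %s' % model_fit.params)
# make one 7-day forecast
predictions = model_fit.predict(start=len(train), end=len(train) + len(test) - 1, dynamic=False)
for i in range(len(predictions)):
    print('predicted=%f, expected=%f' % (predictions[i], test[i]))
error = mean_squared_error(test, predictions)
print('Test MSE: %.3f' % error)
# plot expected (blue) vs predicted (red)
pyplot.plot(test)
pyplot.plot(predictions, color='red')
pyplot.show()
```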

Running the example first prints the chosen optimal lag and the list of coefficients in the trained linear regression model.

We can see that a 29-lag model was chosen and trained. This is interesting given how close this lag is to the average number of days in a month.

The 7 day forecast is then printed and the mean squared error of the forecast is summarized.

A plot of the expected (blue) vs the predicted values (red) is made.

The forecast does look pretty good (about 1 degree Celsius off each day), with a big deviation on day 5.

Predictions From Fixed AR Model

The statsmodels API does not make it easy to update the model as new observations become available.

One way would be to re-train the AR model each day as new observations become available, and that may be a valid approach, if computationally expensive.

An alternative would be to use the learned coefficients and manually make predictions. This requires that the history of 29 prior observations be kept and that the coefficients be retrieved from the model and used in the regression equation to come up with new forecasts.

The coefficients are provided in an array with the intercept term followed by the coefficients for each lag variable, from t-1 through t-n. We simply need to use them in the right order on the history of observations, as follows:

yhat = b0 + b1*X(t-1) + b2*X(t-2) + ... + bn*X(t-n)

Below is the complete example.
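A sketch of the rolling version under the same statsmodels assumption, maintaining a history of the most recent observations and applying the learned coefficients by hand:

```python
from pandas import read_csv
from statsmodels.tsa.ar_model import AR
from sklearn.metrics import mean_squared_error
from matplotlib import pyplot

series = read_csv('daily-minimum-temperatures.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
X = series.values
train, test = X[1:len(X) - 7], X[len(X) - 7:]
# fit the model once on the training data
model = AR(train)
model_fit = model.fit()
window = model_fit.k_ar
coef = model_fit.params
# seed the history with the last `window` training observations
history = list(train[len(train) - window:])
predictions = list()
for t in range(len(test)):
    # apply the regression equation: intercept plus coefficient * lag, newest lag first
    lag = history[-window:]
    yhat = coef[0]
    for d in range(window):
        yhat += coef[d + 1] * lag[window - d - 1]
    predictions.append(yhat)
    # add the true observation to the history for the next step
    history.append(test[t])
    print('predicted=%f, expected=%f' % (yhat, test[t]))
error = mean_squared_error(test, predictions)
print('Test MSE: %.3f' % error)
# plot expected (blue) vs predicted (red)
pyplot.plot(test)
pyplot.plot(predictions, color='red')
pyplot.show()
```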

Again, running the example prints the forecast and the mean squared error.

We can see a small improvement in the forecast when comparing the error scores.

Predictions From Rolling AR Model

Summary

In this tutorial, you discovered how to make autoregression forecasts for time series data using Python.

Specifically, you learned:

  • About autocorrelation and autoregression and how they can be used to better understand time series data.
  • How to explore the autocorrelation in a time series using plots and statistical tests.
  • How to train an autoregression model in Python and use it to make short-term and rolling forecasts.

Do you have any questions about autoregression, or about this tutorial?
Ask your questions in the comments below and I will do my best to answer.

9 Responses to Autoregression Models for Time Series Forecasting With Python

  1. Gary Bake January 5, 2017 at 11:35 pm #

    Thank you Jason for the awesome article

    In case anyone hits the same problem I had –
    I downloaded the data from the link above as a csv file.
    It was failing to be imported due to three rows in the temperature column containing ‘?’.
    Once these were removed the data imported ok.

  2. Tim Melino January 14, 2017 at 10:28 am #

    Hey Jason, thanks for the article. How would you go about forecasting from the end of the file when expected value is not known?

    • Jason Brownlee January 15, 2017 at 5:27 am #

      Hi Tim, you can use model.predict() as in the example and specify the index of the time step to be predicted.

  3. Farrukh Jalali January 25, 2017 at 4:16 pm #

    Hi Jason,

    Thanks for all of your wonderful blogs. They are really helping a lot. One question regarding this post: I believe AR modeling also presumes that the time series is stationary, as the observations should be i.i.d. Does the AR function from the statsmodels library check for stationarity and use a de-trended, de-seasonalized time series by itself if required? Also, if we use the scikit-learn library for the AR model as you described, do we need to check for and make those adjustments ourselves?

    • Jason Brownlee January 26, 2017 at 4:45 am #

      Hi Farrukh, great question.

      The AR in statsmodels does assume that the data is stationary.

      If your data is not stationary, you must make it stationary (e.g. differencing and other transforms).

      • Farrukh Jalali January 31, 2017 at 12:19 pm #

        Thanks for the answer. Though we did not conduct a proper test for trend/seasonal stationarity in the example above, the figure apparently shows a seasonal effect. In that case, is applying an AR model good to go?

        • Jason Brownlee February 1, 2017 at 10:39 am #

          Great question Farrukh.

          AR is designed to be used on stationary data, meaning data with no seasonal or trend information.

  4. Farrukh Jalali January 31, 2017 at 11:07 pm #

    Or to be specific, is it OK to apply the AR model directly to the given data without checking for seasonality and removing it if present, given the apparent signs of it in the first graph?
