Time Series Forecasting Performance Measures With Python


Time series prediction performance measures provide a summary of the skill and capability of the forecast model that made the predictions.

There are many different performance measures to choose from. It can be confusing to know which measure to use and how to interpret the results.

In this tutorial, you will discover performance measures for evaluating time series forecasts with Python.

Time series forecasting generally focuses on the prediction of real values, called regression problems. Therefore, the performance measures in this tutorial will focus on methods for evaluating real-valued predictions.

After completing this tutorial, you will know:

  • Basic measures of forecast performance, including residual forecast error and forecast bias.
  • Time series forecast error calculations that have the same units as the expected outcomes, such as mean absolute error.
  • Widely used error calculations that punish large errors, such as mean squared error and root mean squared error.

Discover how to prepare and visualize time series data and develop autoregressive forecasting models in my new book, with 28 step-by-step tutorials and full Python code.

Let’s get started.

  • Jun/2019: Fixed typo in forecast bias (thanks Francisco).

Photo by Tom Hall, some rights reserved.

Forecast Error (or Residual Forecast Error)

The forecast error is calculated as the expected value minus the predicted value.

This is called the residual error of the prediction.

The forecast error can be calculated for each prediction, providing a time series of forecast errors.

The example below demonstrates how the forecast error can be calculated for a series of 5 predictions compared to 5 expected values. The example was contrived for demonstration purposes.
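A minimal sketch of such an example might look like the following; the expected and predicted values are contrived purely for illustration.

# calculate forecast errors as expected minus predicted
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
forecast_errors = [expected[i] - predictions[i] for i in range(len(expected))]
print('Forecast Errors: %s' % forecast_errors)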

Running the example calculates the forecast error for each of the 5 predictions. The list of forecast errors is then printed.

The units of the forecast error are the same as the units of the prediction. A forecast error of zero indicates no error, or perfect skill for that forecast.


Mean Forecast Error (or Forecast Bias)

Mean forecast error is calculated as the average of the forecast error values.

Forecast errors can be positive or negative. Because positive and negative errors can cancel each other out when averaged, an ideal mean forecast error is zero.

A mean forecast error value other than zero suggests a tendency of the model to over forecast (negative error) or under forecast (positive error). As such, the mean forecast error is also called the forecast bias.

The forecast bias can be calculated directly as the mean of the forecast errors. The example below demonstrates how the mean of the forecast errors can be calculated manually.
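A minimal sketch of that calculation (using the same contrived values as above) might be:

# calculate mean forecast error (forecast bias)
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
forecast_errors = [expected[i] - predictions[i] for i in range(len(expected))]
# averaging the errors gives the bias; with these values it comes out negative (over forecast)
bias = sum(forecast_errors) / len(forecast_errors)
print('Bias: %f' % bias)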

Running the example prints the mean forecast error, also known as the forecast bias.

In this case the result is negative, meaning that we have over forecast.

The units of the forecast bias are the same as the units of the predictions. A forecast bias of zero, or a very small number near zero, shows an unbiased model.

Mean Absolute Error

The mean absolute error, or MAE, is calculated as the average of the forecast error values, where all of the forecast error values are forced to be positive.

Forcing values to be positive is called making them absolute. This is signified by the absolute function abs() or shown mathematically as two pipe characters around the value: |value|.

mean_absolute_error = mean( abs(forecast_error) )

Where abs() makes values positive, forecast_error is one or a sequence of forecast errors, and mean() calculates the average value.

We can use the mean_absolute_error() function from the scikit-learn library to calculate the mean absolute error for a list of predictions. The example below demonstrates this function.
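A minimal sketch of that usage might be; the mean_absolute_error() call is the real scikit-learn function, while the values are contrived for illustration.

# calculate mean absolute error with scikit-learn
from sklearn.metrics import mean_absolute_error

expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
mae = mean_absolute_error(expected, predictions)
print('MAE: %f' % mae)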

Running the example calculates and prints the mean absolute error for a list of 5 expected and predicted values.

These error values are in the original units of the predicted values. A mean absolute error of zero indicates no error.

Mean Squared Error

The mean squared error, or MSE, is calculated as the average of the squared forecast error values. Squaring the forecast error values forces them to be positive; it also has the effect of putting more weight on large errors.

Very large or outlier forecast errors are squared, which has the effect of dragging the mean of the squared forecast errors upward, resulting in a larger mean squared error score. In effect, the score penalizes models that make large, badly wrong forecasts more heavily.

We can use the mean_squared_error() function from scikit-learn to calculate the mean squared error for a list of predictions. The example below demonstrates this function.
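A minimal sketch of that usage (contrived values again) might be:

# calculate mean squared error with scikit-learn
from sklearn.metrics import mean_squared_error

expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
mse = mean_squared_error(expected, predictions)
print('MSE: %f' % mse)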

Running the example calculates and prints the mean squared error for a list of expected and predicted values.

The error values are in squared units of the predicted values. A mean squared error of zero indicates perfect skill, or no error.

Root Mean Squared Error

The mean squared error described above is in the squared units of the predictions.

It can be transformed back into the original units of the predictions by taking the square root of the mean squared error score. This is called the root mean squared error, or RMSE.

This can be calculated by using the sqrt() math function on the mean squared error calculated using the mean_squared_error() scikit-learn function.
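A minimal sketch of that calculation (contrived values again) might be:

# calculate root mean squared error
from math import sqrt
from sklearn.metrics import mean_squared_error

expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
mse = mean_squared_error(expected, predictions)
rmse = sqrt(mse)
print('RMSE: %f' % rmse)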

Running the example calculates the root mean squared error.

The RMSE error values are in the same units as the predictions. As with the mean squared error, an RMSE of zero indicates no error.

Further Reading

Below are some references for further reading on time series forecast error measures.

Summary

In this tutorial, you discovered a suite of 5 standard time series performance measures in Python.

Specifically, you learned:

  • How to calculate forecast residual error and how to estimate the bias in a list of forecasts.
  • How to calculate mean absolute forecast error to describe error in the same units as the predictions.
  • How to calculate the widely used mean squared error and root mean squared error for forecasts.

Do you have any questions about time series forecast performance measures, or about this tutorial?
Ask your questions in the comments below and I will do my best to answer.

34 Responses to Time Series Forecasting Performance Measures With Python

  1. Peter Marelas February 1, 2017 at 2:24 pm #

    I’ve seen MAPE used a few times to evaluate our forecasting models. Do you see this used often and when would you use one over the other?

    • Jason Brownlee February 2, 2017 at 1:55 pm #

      Hi Peter, MAPE is a good metric and I do see it used.

      I prefer RMSE myself.

  2. Ian February 3, 2017 at 3:21 am #

    First line of code of Forecast Error should be forecast_error = expected_value “-” predicted_value.
    I believe this is a typo.

  3. Jasem February 6, 2017 at 9:39 pm #

    Dr.Jason,

    Can you provide us a simple way to split the data with 10-fold cross-validation into train and test sets for a large CSV file? Then apply different algorithms to train a model, and after that test the model to check how accurate it is. We also want to see the ROC curve to combine different algorithms.

    My second question: does the ROC curve show the precision of the model? Can you show me a mathematical formula for the ROC curve?

  4. Devakar Kumar Verma August 8, 2017 at 6:44 pm #

    What should be the range of values for all the different measures of performance for an acceptable model?

    • Jason Brownlee August 9, 2017 at 6:25 am #

      Good question, it really depends on your problem and the units of your variable.

      • Devakar Kumar Verma August 9, 2017 at 2:14 pm #

        Suppose the variable values range from 0-100; then what will the range be?

        • Jason Brownlee August 10, 2017 at 6:48 am #

          If you have accuracy scores between 0 and 100, maybe 60% is good because the problem is hard, maybe 98% is good because the problem is easy.

          I cannot answer this question generically, sorry.

          A good way to figure out if a model is skillful is to compare it to a lot of other models or against a solid base line model (e.g. relative measure of good).

  5. Irati October 10, 2017 at 8:43 pm #

    Hi Jason,

    And what about if we perform multivariate time series forecasting?
    Imagine we forecast 3 time series with the same model, how would you report the results? Per time series? The mean of the errors?

    Thanks for your time 🙂

  6. Carlos May 19, 2018 at 1:45 am #

    Good evening, one question: if I want to get the max error, how could I compute it?

  7. Kate August 15, 2018 at 11:33 pm #

    Hi, thanks for the post. If I understand correctly, the method mentioned here is useful for correcting predictions if the ground truths of the test examples are readily available and are included in the correction process. I was wondering if there are similar approaches for situations where there is a noticeable trend for residuals in your training/testing data, and I’d like to create a model utilizing these trends in an environment where ground truths for new examples are not available?

    • Jason Brownlee August 16, 2018 at 6:06 am #

      ARIMA and ETS models can handle the trend in your data.

  8. Parth Gadoya October 31, 2018 at 6:32 pm #

    Hi Sir,

    I am forecasting sales for each product in each retail store. I want an accuracy of more than 70% on 85% of store-product combinations. So, I am calculating the Absolute Percentage Error for each forecast. But I have lots of zeros and I am unable to evaluate the model completely.

    According to my internet search, I found that Mean Absolute Scaled Error is a perfect measure for sales forecasting. But I didn’t find any concrete explanations of how to use and calculate it. As I am working with multiple stores and multiple products, I have multiple time series in the dataset. I have all the predictions but don’t know how to evaluate them.

    Please give some details on how to do this and calculate MASE for multiple time series.

    Thank you very much in advance.

    • Jason Brownlee November 1, 2018 at 6:04 am #

      Sorry, I don’t have material on MASE.

      Perhaps search on scholar.google.com for examples?

      • Parth Gadoya November 2, 2018 at 5:49 pm #

        Thanks for the suggestion.

    • Carla December 7, 2018 at 1:58 am #

      Hi,
      in “Statistical and Machine Learning Forecasting Methods: Concerns and Ways Forward” by Spyros Makridakis, they used this code for sMAPE. Add this as a def and use it in the same way as you use MSE. I assume it should work.
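      A common formulation along those lines (the exact code in the paper may differ, and the smape name here is just illustrative) is:

      def smape(expected, predictions):
          # symmetric mean absolute percentage error, expressed as a percentage
          # note: undefined when an expected value and its prediction are both zero
          total = 0.0
          for e, p in zip(expected, predictions):
              total += abs(p - e) / ((abs(e) + abs(p)) / 2.0)
          return 100.0 * total / len(expected)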

  9. bobby November 7, 2018 at 9:03 am #

    Hi,

    Do you know any error metrics that punish longer lasting errors in time series more than large magnitude errors?

    thanks,
    bobby

  10. Daniël Muysken December 4, 2018 at 12:06 am #

    Hey, I was wondering if you know of an error measure that is not so sensitive to outliers? I have some high peaks in my time series that are difficult to predict and I want these errors not to carry too much weight when evaluating my prediction.

  11. Atharva February 4, 2019 at 5:50 pm #

    Hey, can you tell me how I can know the accuracy of my model from the RMSE value?

  12. Abid Mehmood February 20, 2019 at 8:53 pm #

    How do we know which error (RMSE, MSE, MAE) to use in our time series predictions?

    • Jason Brownlee February 21, 2019 at 7:55 am #

      You can talk to project stakeholders and discover what they would like to know about the performance of a model on the problem – then choose a metric accordingly.

      If unsure, use RMSE as the units will be in the scale of the target variable and it’s easy to understand.

  13. Dav June 8, 2019 at 2:59 am #

    Hi

    Once again, great articles and sorry, I just asked you a question on another topic as well.

    Tracking Error = Standard deviation of difference between Actual and Predicted values

    I am thinking about using Tracking Error to measure Time Series Forecasting Performance. Any reason I shouldn’t use it?

    Thanks
    Dav

  14. Francisco June 28, 2019 at 5:59 pm #

    Hi Jason,

    I’m confused with the Forecast bias: “A mean forecast error value other than zero suggests a tendency of the model to over forecast (positive error) or under forecast (negative error)”

    actual – prediction > 0 if the prediction is below, and I’d understand that’s under forecast, but in your example the bias is negative and the prediction is above:

    expected = [0.0, 0.5, 0.0, 0.5, 0.0]
    predictions = [0.2, 0.4, 0.1, 0.6, 0.2]

    Is there a mistake somewhere or maybe I’m missing or not understanding something?

    Thanks a lot

    • Jason Brownlee June 29, 2019 at 6:45 am #

      Yes, I have it the wrong way around, thanks.

      Negative is over forecast, positive is under forecast.

      Fixed.

  15. duderino July 20, 2019 at 5:52 am #

    I really enjoyed reading your post, thank you for this. one question if I may:

    let’s say we are working with a dataset where you are forecasting population growth (number of people) and your dataset’s most recent value shows roughly 37mil population.

    Assuming we do all of the forecasting and calculations correctly, and I (we) are currently sitting at

    Mean Absolute Error: 52,386
    Mean Squared Error: 3,650,276,091
    Root Mean Squared Error: 60,417
    (and just for fun) Mean Absolute Percentage Error: 0.038

    How does one interpret these numbers when working with a dataset of this scale? I’ve read that “closer to zero is best” but I feel like the size of my dataset means that 60,417 is actually a pretty good number, but I’m not sure.

    (not sure if this is enough data to go off of or not)
