Last Updated on September 10, 2020
Time series prediction performance measures provide a summary of the skill and capability of the forecast model that made the predictions.
There are many different performance measures to choose from. It can be confusing to know which measure to use and how to interpret the results.
In this tutorial, you will discover performance measures for evaluating time series forecasts with Python.
Time series forecasting generally focuses on the prediction of real values, called regression problems. Therefore, the performance measures in this tutorial will focus on methods for evaluating real-valued predictions.
After completing this tutorial, you will know:
- Basic measures of forecast performance, including residual forecast error and forecast bias.
- Time series forecast error calculations that have the same units as the expected outcomes, such as mean absolute error.
- Widely used error calculations that punish large errors, such as mean squared error and root mean squared error.
Kick-start your project with my new book Time Series Forecasting With Python, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
- Jun/2019: Fixed typo in forecast bias (thanks Francisco).

Time Series Forecasting Performance Measures With Python
Photo by Tom Hall, some rights reserved.
Forecast Error (or Residual Forecast Error)
The forecast error is calculated as the expected value minus the predicted value.
This is called the residual error of the prediction.
forecast_error = expected_value - predicted_value
The forecast error can be calculated for each prediction, providing a time series of forecast errors.
The example below demonstrates how the forecast error can be calculated for a series of 5 predictions compared to 5 expected values. The example was contrived for demonstration purposes.
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
forecast_errors = [expected[i]-predictions[i] for i in range(len(expected))]
print('Forecast Errors: %s' % forecast_errors)
Running the example calculates the forecast error for each of the 5 predictions. The list of forecast errors is then printed.
Forecast Errors: [-0.2, 0.09999999999999998, -0.1, -0.09999999999999998, -0.2]
The units of the forecast error are the same as the units of the prediction. A forecast error of zero indicates no error, or perfect skill for that forecast.
Mean Forecast Error (or Forecast Bias)
Mean forecast error is calculated as the average of the forecast error values.
mean_forecast_error = mean(forecast_error)
Forecast errors can be positive and negative, so when they are averaged, positive and negative errors can cancel each other out. An ideal mean forecast error is zero.
A mean forecast error value other than zero suggests a tendency of the model to over forecast (negative error) or under forecast (positive error). As such, the mean forecast error is also called the forecast bias.
The forecast bias can be calculated directly as the mean of the forecast errors. The example below demonstrates how the mean of the forecast errors can be calculated manually.
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
forecast_errors = [expected[i]-predictions[i] for i in range(len(expected))]
bias = sum(forecast_errors) * 1.0/len(expected)
print('Bias: %f' % bias)
Running the example prints the mean forecast error, also known as the forecast bias.
In this case, the result is negative, meaning that we have over forecast.
Bias: -0.100000
The units of the forecast bias are the same as the units of the predictions. A forecast bias of zero, or a very small number near zero, shows an unbiased model.
Mean Absolute Error
The mean absolute error, or MAE, is calculated as the average of the forecast error values, where all of the forecast error values are forced to be positive.
Forcing values to be positive is called making them absolute. This is signified by the absolute function abs() or shown mathematically as two pipe characters around the value: |value|.
mean_absolute_error = mean(abs(forecast_error))
Where abs() makes values positive, forecast_error is one or a sequence of forecast errors, and mean() calculates the average value.
We can use the mean_absolute_error() function from the scikit-learn library to calculate the mean absolute error for a list of predictions. The example below demonstrates this function.
from sklearn.metrics import mean_absolute_error
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
mae = mean_absolute_error(expected, predictions)
print('MAE: %f' % mae)
Running the example calculates and prints the mean absolute error for a list of 5 expected and predicted values.
MAE: 0.140000
These error values are in the original units of the predicted values. A mean absolute error of zero indicates no error.
Mean Squared Error
The mean squared error, or MSE, is calculated as the average of the squared forecast error values. Squaring the forecast error values forces them to be positive; it also has the effect of putting more weight on large errors.
Very large or outlier forecast errors are squared, which in turn has the effect of dragging the mean of the squared forecast errors out, resulting in a larger mean squared error score. In effect, the score assigns worse performance to models that make large, wrong forecasts.
mean_squared_error = mean(forecast_error^2)
We can use the mean_squared_error() function from scikit-learn to calculate the mean squared error for a list of predictions. The example below demonstrates this function.
from sklearn.metrics import mean_squared_error
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
mse = mean_squared_error(expected, predictions)
print('MSE: %f' % mse)
Running the example calculates and prints the mean squared error for a list of expected and predicted values.
MSE: 0.022000
The error values are in squared units of the predicted values. A mean squared error of zero indicates perfect skill, or no error.
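To make the effect of squaring concrete, the contrived comparison below (a sketch with made-up values) adds one large error to an otherwise well-behaved set of predictions: the MAE grows modestly, while the MSE grows sharply because the large error is squared.

from sklearn.metrics import mean_absolute_error, mean_squared_error
# contrived values for demonstration: the second set of predictions contains one large (outlier) error
expected = [0.0, 0.0, 0.0, 0.0, 0.0]
small_errors = [0.1, 0.1, 0.1, 0.1, 0.1]
with_outlier = [0.1, 0.1, 0.1, 0.1, 1.0]
print('MAE: %.3f vs %.3f' % (mean_absolute_error(expected, small_errors), mean_absolute_error(expected, with_outlier)))
print('MSE: %.3f vs %.3f' % (mean_squared_error(expected, small_errors), mean_squared_error(expected, with_outlier)))

Here the single outlier roughly triples the MAE (0.100 to 0.280) but increases the MSE by a factor of about twenty (0.010 to 0.208).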
Root Mean Squared Error
The mean squared error described above is in the squared units of the predictions.
It can be transformed back into the original units of the predictions by taking the square root of the mean squared error score. This is called the root mean squared error, or RMSE.
rmse = sqrt(mean_squared_error)
This can be calculated by using the sqrt() math function on the mean squared error calculated using the mean_squared_error() scikit-learn function.
from sklearn.metrics import mean_squared_error
from math import sqrt
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
mse = mean_squared_error(expected, predictions)
rmse = sqrt(mse)
print('RMSE: %f' % rmse)
Running the example calculates the root mean squared error.
RMSE: 0.148324
The RMSE values are in the same units as the predictions. As with the mean squared error, an RMSE of zero indicates no error.
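Pulling the sections together, the sketch below calculates all five measures on the same contrived data used throughout this tutorial.

from math import sqrt
from sklearn.metrics import mean_absolute_error, mean_squared_error
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
# residual forecast errors and forecast bias
forecast_errors = [expected[i] - predictions[i] for i in range(len(expected))]
bias = sum(forecast_errors) / len(forecast_errors)
# mean absolute error, mean squared error, and root mean squared error
mae = mean_absolute_error(expected, predictions)
mse = mean_squared_error(expected, predictions)
rmse = sqrt(mse)
print('Forecast Errors: %s' % forecast_errors)
print('Bias: %f' % bias)
print('MAE: %f' % mae)
print('MSE: %f' % mse)
print('RMSE: %f' % rmse)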
Further Reading
Below are some references for further reading on time series forecast error measures.
- Section 3.3 Measuring Predictive Accuracy, Practical Time Series Forecasting with R: A Hands-On Guide.
- Section 2.5 Evaluating Forecast Accuracy, Forecasting: principles and practice
- scikit-learn Metrics API
- Section 3.3.4. Regression metrics, scikit-learn API Guide
Summary
In this tutorial, you discovered a suite of 5 standard time series performance measures in Python.
Specifically, you learned:
- How to calculate forecast residual error and how to estimate the bias in a list of forecasts.
- How to calculate mean absolute forecast error to describe error in the same units as the predictions.
- How to calculate the widely used mean squared error and root mean squared error for forecasts.
Do you have any questions about time series forecast performance measures, or about this tutorial?
Ask your questions in the comments below and I will do my best to answer.
I’ve seen MAPE used a few times to evaluate our forecasting models. Do you see this used often and when would you use one over the other?
Hi Peter, MAPE is a good metric and I do see it used.
I prefer RMSE myself.
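For readers asking about MAPE, a minimal sketch is shown below. It assumes none of the expected values are zero (MAPE divides by them), which is why it uses different contrived values than the tutorial; newer versions of scikit-learn also provide a mean_absolute_percentage_error() function.

# mean absolute percentage error (MAPE), a sketch
expected = [10.0, 12.0, 14.0, 16.0, 18.0]
predictions = [11.0, 11.5, 14.5, 15.0, 19.0]
mape = 100.0 * sum(abs((e - p) / e) for e, p in zip(expected, predictions)) / len(expected)
print('MAPE: %.2f%%' % mape)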
I have a 9.69 RMSE value from an ARIMA model. How do I reduce it?
Try alternate model configurations?
Try alternate models?
Try alternate data preparations?
First line of code of Forecast Error should be forecast_error = expected_value “-” predicted_value.
I believe this is a typo.
Yes, that was a typo. Fixed. Thanks Ian.
Dr. Jason,
Can you provide a simple way to split the data with 10-fold cross-validation into train and test sets for a large CSV file? Then apply different algorithms to train models, and test the models to check how accurate they are. We also want to see a ROC curve to combine different algorithms.
My second question: does a ROC curve show the precision of a model? Can you show me a mathematical formula for a ROC curve?
Sorry, I do not have the capacity to prepare this example for you.
You can learn more about ROC curves here:
https://machinelearningmastery.mystagingwebsite.com/assessing-comparing-classifier-performance-roc-curves-2/
What should the range of values be for the different performance measures for an acceptable model?
Good question, it really depends on your problem and the units of your variable.
Suppose variable values range from 0-100, then what will the range be?
If you have accuracy scores between 0 and 100, maybe 60% is good because the problem is hard, maybe 98% is good because the problem is easy.
I cannot answer this question generically, sorry.
A good way to figure out if a model is skillful is to compare it to a lot of other models or against a solid base line model (e.g. relative measure of good).
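As a sketch of that comparison, assuming a simple persistence (naive) baseline where each prediction is the previous observed value, and entirely made-up model predictions:

from math import sqrt
from sklearn.metrics import mean_squared_error
# contrived series and hypothetical model predictions for series[1:]
series = [10.0, 12.0, 11.0, 13.0, 14.0, 13.0]
model_predictions = [11.5, 11.0, 12.5, 13.5, 13.5]
# persistence baseline: predict the previous observed value
actual = series[1:]
persistence_predictions = series[:-1]
model_rmse = sqrt(mean_squared_error(actual, model_predictions))
naive_rmse = sqrt(mean_squared_error(actual, persistence_predictions))
# the model only has skill if its RMSE beats the persistence RMSE
print('Model RMSE: %.3f, Persistence RMSE: %.3f' % (model_rmse, naive_rmse))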
Hi Jason,
And what about if we perform multivariate time series forecasting?
Imagine we forecast 3 time series with the same model. How would you provide the results? Per time series? The mean of the errors?
Thanks for your time 🙂
You can decide how to evaluate the skill of the model, perhaps RMSE across all forecasted data points.
See this post for multivariate inputs:
https://machinelearningmastery.mystagingwebsite.com/multivariate-time-series-forecasting-lstms-keras/
See this post for multi-step forecast:
https://machinelearningmastery.mystagingwebsite.com/multi-step-time-series-forecasting-long-short-term-memory-networks-python/
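As a sketch of "RMSE across all forecasted data points" for several series (using hypothetical values, and simply pooling the actuals and forecasts of each series):

from math import sqrt
from sklearn.metrics import mean_squared_error
# hypothetical actuals and forecasts for 3 time series
actuals = [[1.0, 2.0, 3.0], [10.0, 12.0, 11.0], [0.5, 0.4, 0.6]]
forecasts = [[1.2, 1.8, 3.1], [9.0, 12.5, 11.5], [0.4, 0.5, 0.5]]
# one RMSE per series
for i, (a, f) in enumerate(zip(actuals, forecasts)):
    print('Series %d RMSE: %.3f' % (i, sqrt(mean_squared_error(a, f))))
# pooled RMSE across all forecasted data points
all_a = [v for series in actuals for v in series]
all_f = [v for series in forecasts for v in series]
print('Overall RMSE: %.3f' % sqrt(mean_squared_error(all_a, all_f)))

Note that pooling series with very different scales lets the largest-scale series dominate the overall score; reporting per-series errors, or scaling each series first, may be fairer.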
Good evening, one question: if I want to get the max error, how could I do that?
What do you mean by max error?
Hi, thanks for the post. If I understand correctly, the method mentioned here is useful for correcting predictions if the ground truths of the test examples are readily available and are included in the correction process. I was wondering if there are similar approaches for situations where there is a noticeable trend for residuals in your training/testing data, and I’d like to create a model utilizing these trends in an environment where ground truths for new examples are not available?
ARIMA and ETS models can handle the trend in your data.
Hi Sir,
I am forecasting sales for each product in each retail store. I want an accuracy of more than 70% on 85% of store-product combinations. So, I am calculating the Absolute Percentage Error for each forecast. But I have lots of zeros and I am unable to evaluate the model completely.
According to my internet search, I found that Mean Absolute Scaled Error is a perfect measure for sales forecasting. But I didn't find any concrete explanations on how to use and calculate it. As I am working with multiple stores and multiple products, I have multiple time series in the dataset. I have all the predictions but don't know how to evaluate them.
Please give some details on how to do this and calculate MASE for multiple time series.
Thank you very much in advance.
Sorry, I don’t have material on MASE.
Perhaps search on scholar.google.com for examples?
Thanks for the suggestion.
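For readers who land here looking for MASE, a minimal sketch follows, based on the common definition (Hyndman and Koehler, 2006) that scales the forecast MAE by the MAE of a one-step naive forecast on the training data. Because it does not divide by the actual values, it stays defined when the actuals contain zeros (as long as the training series is not constant). The function and data below are illustrative, not from the post.

# mean absolute scaled error (MASE), a sketch of the common definition
def mase(train, actual, predicted):
    # scale: MAE of a one-step naive (persistence) forecast over the training data
    naive_errors = [abs(train[i] - train[i - 1]) for i in range(1, len(train))]
    scale = sum(naive_errors) / len(naive_errors)
    # MAE of the forecasts, divided by the naive scale
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
    return mae / scale

# contrived example; a MASE below 1.0 means the forecast beats the in-sample naive error
train = [10.0, 12.0, 11.0, 13.0, 12.0, 14.0]
actual = [15.0, 14.0, 16.0]
predicted = [14.5, 14.5, 15.0]
print('MASE: %.3f' % mase(train, actual, predicted))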
Hi,
in “Statistical and Machine Learning Forecasting Methods: Concerns and Ways Forward” by Spyros Makridakis, they used this code for sMAPE. Add it as a def and use it in the same way as you use mse. I assume it should work.
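The code from that paper is not reproduced in the comment above; as a sketch, one common formulation of sMAPE (assuming a symmetric denominator of (|actual| + |predicted|)/2 and a percentage result, undefined where both values are zero) looks like this:

# symmetric mean absolute percentage error (sMAPE), a common formulation
def smape(actual, predicted):
    total = 0.0
    for a, p in zip(actual, predicted):
        total += abs(a - p) / ((abs(a) + abs(p)) / 2.0)
    return 100.0 * total / len(actual)

# contrived values for demonstration
actual = [10.0, 12.0, 14.0, 16.0]
predicted = [11.0, 11.0, 14.5, 18.0]
print('sMAPE: %.2f%%' % smape(actual, predicted))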
Hi,
Do you know any error metrics that punish longer lasting errors in time series more than large magnitude errors?
thanks,
bobby
What do you mean longer lasting errors?
Hey, I was wondering if you know of an error measure that is not so sensitive to outliers? I have some high peaks in my time series that are difficult to predict, and I want these errors to not carry too much weight when evaluating my prediction.
Perhaps Median Absolute Error?
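As a sketch, scikit-learn's median_absolute_error() takes the median rather than the mean of the absolute errors, so an occasional large miss has little influence (contrived values below):

from sklearn.metrics import mean_absolute_error, median_absolute_error
# contrived example with one hard-to-predict peak
expected = [1.0, 1.2, 0.9, 5.0, 1.1]
predictions = [1.1, 1.1, 1.0, 2.0, 1.0]
# the single large miss dominates the MAE but barely moves the median absolute error
print('MAE: %f' % mean_absolute_error(expected, predictions))
print('Median AE: %f' % median_absolute_error(expected, predictions))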
Hey, can you tell me how I can know the accuracy of my model from the RMSE value?
You cannot calculate accuracy for a regression problem, I explain this more here:
https://machinelearningmastery.mystagingwebsite.com/classification-versus-regression-in-machine-learning/
How do we know which error (RMSE, MSE, MAE) to use for our time series predictions?
You can talk to project stakeholders and discover what they would like to know about the performance of a model on the problem – then choose a metric accordingly.
If unsure, use RMSE as the units will be in the scale of the target variable and it’s easy to understand.
Hi
Once again, great articles and sorry, I just asked you a question on another topic as well.
Tracking Error = Standard deviation of difference between Actual and Predicted values
I am thinking about using Tracking Error to measure Time Series Forecasting Performance. Any reason I shouldn’t use it?
Thanks
Dav
I’m not familiar with it, sorry.
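Based on the definition given in the question above (the standard deviation of the differences between actual and predicted values), a minimal sketch might look like the following; note that, unlike RMSE, a standard deviation of the errors ignores any constant bias in the forecasts.

from numpy import std
# contrived values for demonstration
actual = [10.0, 12.0, 11.0, 13.0, 14.0]
predicted = [10.5, 11.0, 11.5, 13.5, 13.0]
errors = [a - p for a, p in zip(actual, predicted)]
# tracking error as the standard deviation of the forecast errors
tracking_error = std(errors)
print('Tracking Error: %.3f' % tracking_error)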
Hi Jason,
I’m confused with the Forecast bias: “A mean forecast error value other than zero suggests a tendency of the model to over forecast (positive error) or under forecast (negative error)”
actual – prediction > 0 if the prediction is below, and I'd understand that's under forecast, but in your example the bias is negative and the prediction is above:
expected = [0.0, 0.5, 0.0, 0.5, 0.0]
predictions = [0.2, 0.4, 0.1, 0.6, 0.2]
Is there a mistake somewhere or maybe I’m missing or not understanding something?
Thanks a lot
Yes, I have it the wrong way around, thanks.
Negative is over forecast, positive is under forecast.
Fixed.
I really enjoyed reading your post, thank you for this. One question if I may:
Let's say we are working with a dataset where you are forecasting population growth (number of people) and your dataset's most recent value shows a population of roughly 37 million.
Assuming we do all of the forecasting and calculations correctly, and I (we) are currently sitting at
Mean Absolute Error: 52,386
Mean Squared Error: 3,650,276,091
Root Mean Squared Error: 60,417
(and just for fun) Mean Absolute Percentage Error: 0.038
How does one interpret these numbers when working with a dataset of this scale? I’ve read that “closer to zero is best” but I feel like the size of my dataset means that 60,417 is actually a pretty good number, but I’m not sure.
(not sure if this is enough data to go off of or not)
A model has skill if it outperforms a naive forecast:
https://machinelearningmastery.mystagingwebsite.com/faq/single-faq/how-to-know-if-a-model-has-good-performance
Does that help?
are that matrix can be used for the ARIMA model and LSTM? If yes, Is it the same as your example describes?
Sorry, I don’t understand.
Perhaps you can elaborate or rephrase the question? What do you mean by “matrix for ARIMA”?
Hi Jason,
Thanks for your great article. Can you please help me to this scenario.
My actual and predicted values contain many 0's. Which metric is more suitable to measure the forecast accuracy as a percentage? My end users are looking at accuracy in a percentage format.
Actual -> 0,1,1,4,1,1,0
Predicted-> 1,0,0,2,1,1,0
You’re welcome!
Perhaps explore MAE and RMSE and even others and pick one that best captures the goals of your project.
Is there any difference between squared loss and mean squared error? For more reference, see page 6 of this research paper: https://arxiv.org/pdf/1511.05942.pdf
I would expect them to be the same thing. I have not checked your paper, sorry.
Hello Jason, Great fan of your work.
Suggesting a correction: under MAE, 2nd line, should it be "forecast error values" in place of "forecast values"?
Thanks, fixed!
Hi Jason Thank you for this wonderfull article/tutorial,
I am trying to make a forecast using 4 years of daily data about grocery sales.
I prepared several models to forecast. One of the interesting results I came across is that my SARIMA model beats out random forest and other tree models in terms of MAPE, but in the case of RMSE, random forests and other tree-based machine learning models are more desirable. I am confused about how to make a clear judgement on this issue. Do you have any idea why this is the case?
Choose one metric for model selection, then choose a model that does well on that one metric.
Hello, Jason! Your books and articles are the only solution to my problem, but I also have a question: how can we measure the performance of a multi-step model of, let's say, 3 days? For instance, if RMSE = [2, 4, 5], can we take the average RMSE of these three? And second, can we measure the coefficient of determination on time series data? Is it a valid metric?
Thanks.
Yes, you can calculate the error for each forecasted lead time separately if you like.
I think I have examples of this is power forecasting tutorials:
https://machinelearningmastery.mystagingwebsite.com/?s=power+forecasting&post_type=post&submit=Search
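As a sketch of evaluating each lead time separately for a 3-step forecast (hypothetical values; each row is one forecast origin and the columns are lead times t+1, t+2, t+3):

from math import sqrt
from numpy import array
from sklearn.metrics import mean_squared_error
# hypothetical 3-step-ahead forecasts and matching actuals
actual = array([[10.0, 11.0, 12.0],
                [11.0, 12.0, 13.0],
                [12.0, 13.0, 12.0]])
predicted = array([[10.5, 11.5, 13.0],
                   [11.0, 12.5, 14.0],
                   [11.5, 12.0, 13.0]])
# one RMSE per lead time (column), plus an average if a single number is needed
lead_time_rmse = [sqrt(mean_squared_error(actual[:, i], predicted[:, i])) for i in range(actual.shape[1])]
print('RMSE per lead time: %s' % lead_time_rmse)
print('Average RMSE: %.3f' % (sum(lead_time_rmse) / len(lead_time_rmse)))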
Dear Jason, thank you very much for your response. Another question: can I calculate an average RMSE or MAE of these three? Is that a valid metric? And what about the coefficient of determination (R-squared)? Is it a valid metric for time series data?
Sure, although I recommend selecting one metric to optimize for your project – because sometimes they will disagree.
Thank you very much for your response and time. Have a nice day!
You’re welcome.
Hey, I don't know if you are still replying, but how can I find the standardized accuracy using MAR and MARp, that is, the MAR of a large number of random guesses?
What is “MAR”?
Hello Jason,
Thanks for putting this together. What are your thoughts on using the weighted RMSE metric?
Regards,
S
No strong opinions. I recommend carefully selecting a metric that best captures the goals of your project.
Hi Jason,
I forecast the next 15/30 days of session count. Is there any technique to find the accuracy of the forecasted values?
FYI: I do not have actual values to compare.
We cannot calculate accuracy of a regression model, see this:
https://machinelearningmastery.mystagingwebsite.com/faq/single-faq/how-do-i-calculate-accuracy-for-regression
Hi Jason.
I'm working on a model where it is better to predict less than more, and it is important that big errors are penalized. The problem is that RMSE penalizes big errors but does not care about predicting more or less, while MAE may tend towards a negative bias but does not penalize big errors.
My question is: would it be wise to separate the predictions as follows?
- if an error e(t) is considered too big (the difference with the true value is bigger than a predetermined percentage, say 40%), then it is squared, like in the RMSE
- if a prediction is over the demand, then the error is multiplied by a factor bigger than one, say 2
- for other "normal" predictions we go forward with MAE
Then we have three average errors from which we make a final average.
I would appreciate your opinion.
Thank you.
Hi Kay98…This is actually a very good approach in selecting the loss function.
The following resource provides more clarity on how to choose loss functions.
https://machinelearningmastery.mystagingwebsite.com/how-to-choose-loss-functions-when-training-deep-learning-neural-networks/
Let us know if you have any additional questions.
Regards,
Hi !
I have another question. I actually cannot find an answer about how to calculate the RMSE as a percentage value.
I appreciate the help.
Thank you.
Dear Jason,
If I use the whole time series data for training, is the training error (using any of the error metrics in your blog) a good indicator of model accuracy?
The situation is that I've done experiments for time series forecasting using auto ARIMA, and I evaluated the model by splitting the dataset into train and test sets. But now that my model is going to be used in practice, I input the whole data to the model for training so as not to lose any information. I still need to display an indicator of the accuracy of my model to show how reliable its forecasts could be. So, I'm wondering if the training error could be considered here as an accuracy metric of the model in case no test set is considered.
Best,
Samin
Hi Samin…The following may be of interest:
https://machinelearningmastery.mystagingwebsite.com/a-gentle-introduction-to-the-challenge-of-training-deep-learning-neural-network-models/
https://machinelearningmastery.mystagingwebsite.com/training-validation-test-split-and-cross-validation-done-right/
Hi,
In your example, RMSE = 0.1483. So how can we interpret that?
Hi Syamini…The following may be of interest:
https://machinelearningmastery.mystagingwebsite.com/regression-metrics-for-machine-learning/
Is MSE calculated over the actual dataset or a normalized one?
Hi Ankit…Either is fine, because it is used as a relative comparison metric.