A popular and widely used statistical method for time series forecasting is the ARIMA model.

ARIMA is an acronym that stands for AutoRegressive Integrated Moving Average. It is a class of model that captures a suite of different standard temporal structures in time series data.

In this tutorial, you will discover how to develop an ARIMA model for time series data with Python.

After completing this tutorial, you will know:

- About the ARIMA model, the parameters it uses, and the assumptions it makes.
- How to fit an ARIMA model to data and use it to make forecasts.
- How to configure the ARIMA model on your time series problem.

Let’s get started.

## Autoregressive Integrated Moving Average Model

An ARIMA model is a class of statistical models for analyzing and forecasting time series data.

It explicitly caters to a suite of standard structures in time series data, and as such provides a simple yet powerful method for making skillful time series forecasts.

ARIMA is an acronym that stands for AutoRegressive Integrated Moving Average. It is a generalization of the simpler AutoRegressive Moving Average and adds the notion of integration.

This acronym is descriptive, capturing the key aspects of the model itself. Briefly, they are:

- **AR**: *Autoregression*. A model that uses the dependent relationship between an observation and some number of lagged observations.
- **I**: *Integrated*. The use of differencing of raw observations (e.g. subtracting an observation from the observation at the previous time step) in order to make the time series stationary.
- **MA**: *Moving Average*. A model that uses the dependency between an observation and a residual error from a moving average model applied to lagged observations.
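
To make the AR component concrete, here is a minimal sketch (not from the tutorial): fitting an AR model reduces to linear regression on lagged observations. The simulated AR(1) process and its coefficient of 0.8 are assumptions chosen purely for illustration.

```python
import numpy as np

# Simulate an AR(1) process y[t] = 0.8 * y[t-1] + noise, then recover
# the coefficient by ordinary least squares on the lag-1 design matrix.
rng = np.random.default_rng(1)
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.1)

X = y[:-1].reshape(-1, 1)  # inputs: y[t-1]
target = y[1:]             # outputs: y[t]
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
print(coef[0])  # should be close to the true value 0.8
```

This is exactly the sense in which an ARIMA fit constructs a linear regression on lagged terms.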

Each of these components is explicitly specified in the model as a parameter. A standard notation of ARIMA(p,d,q) is used, where the parameters are substituted with integer values to quickly indicate the specific ARIMA model being used.

The parameters of the ARIMA model are defined as follows:

- **p**: The number of lag observations included in the model, also called the lag order.
- **d**: The number of times that the raw observations are differenced, also called the degree of differencing.
- **q**: The size of the moving average window, also called the order of moving average.

A linear regression model is constructed including the specified number and type of terms, and the data is prepared by a degree of differencing in order to make it stationary, i.e. to remove trend and seasonal structures that negatively affect the regression model.
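
As a minimal illustration of the differencing step (the toy series below is made up), first-order differencing removes a linear trend, leaving a constant, stationary series:

```python
# A made-up series with a clean linear trend.
series = [10, 12, 14, 16, 18, 20]

# First-order differencing (d=1): subtract the previous observation.
diff = [series[i] - series[i - 1] for i in range(1, len(series))]
print(diff)  # [2, 2, 2, 2, 2] -- the trend is gone
```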

A value of 0 can be used for a parameter, which indicates not to use that element of the model. This way, the ARIMA model can be configured to perform the function of an ARMA model, and even a simple AR, I, or MA model.

Adopting an ARIMA model for a time series assumes that the underlying process that generated the observations is an ARIMA process. This may seem obvious, but helps to motivate the need to confirm the assumptions of the model in the raw observations and in the residual errors of forecasts from the model.

Next, let’s take a look at how we can use the ARIMA model in Python. We will start with loading a simple univariate time series.


## Shampoo Sales Dataset

This dataset describes the monthly number of sales of shampoo over a 3 year period.

The units are a sales count and there are 36 observations. The original dataset is credited to Makridakis, Wheelwright, and Hyndman (1998).

Learn more about the dataset and download it from here.

Download the dataset and place it in your current working directory with the filename “*shampoo-sales.csv*“.

Below is an example of loading the Shampoo Sales dataset with Pandas with a custom function to parse the date-time field. The dataset is baselined in an arbitrary year, in this case 1900.

```python
from pandas import read_csv
from pandas import datetime
from matplotlib import pyplot

def parser(x):
    return datetime.strptime('190'+x, '%Y-%m')

series = read_csv('shampoo-sales.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
print(series.head())
series.plot()
pyplot.show()
```

Running the example prints the first 5 rows of the dataset.

```
Month
1901-01-01    266.0
1901-02-01    145.9
1901-03-01    183.1
1901-04-01    119.3
1901-05-01    180.3
Name: Sales, dtype: float64
```

The data is also plotted as a time series with the month along the x-axis and sales figures on the y-axis.

We can see that the Shampoo Sales dataset has a clear trend.

This suggests that the time series is not stationary and will require differencing to make it stationary, at least a difference order of 1.

Let’s also take a quick look at an autocorrelation plot of the time series. This is also built-in to Pandas. The example below plots the autocorrelation for a large number of lags in the time series.

```python
from pandas import read_csv
from pandas import datetime
from matplotlib import pyplot
from pandas.tools.plotting import autocorrelation_plot

def parser(x):
    return datetime.strptime('190'+x, '%Y-%m')

series = read_csv('shampoo-sales.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
autocorrelation_plot(series)
pyplot.show()
```

Running the example, we can see that there is a positive correlation with the first 10-to-12 lags that is perhaps significant for the first 5 lags.

A good starting point for the AR parameter of the model may be 5.
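
The autocorrelation being plotted can be sketched in a few lines. The `autocorr` helper below is a hypothetical stand-in for the calculation Pandas performs, applied here to a made-up trending series of the same length as the shampoo data:

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# A strongly trending (made-up) series is highly autocorrelated
# at short lags, just like the Shampoo Sales data.
trend = np.arange(36, dtype=float)
print(autocorr(trend, 1))  # close to 1 at lag 1
```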

## ARIMA with Python

The statsmodels library provides the capability to fit an ARIMA model.

An ARIMA model can be created using the statsmodels library as follows:

- Define the model by calling ARIMA() and passing in the *p*, *d*, and *q* parameters.
- The model is prepared on the training data by calling the fit() function.
- Predictions can be made by calling the predict() function and specifying the index of the time or times to be predicted.

Let’s start off with something simple. We will fit an ARIMA model to the entire Shampoo Sales dataset and review the residual errors.

First, we fit an ARIMA(5,1,0) model. This sets the lag value to 5 for autoregression, uses a difference order of 1 to make the time series stationary, and uses a moving average order of 0.

When fitting the model, a lot of debug information is provided about the fit of the linear regression model. We can turn this off by setting the *disp* argument to 0.

```python
from pandas import read_csv
from pandas import datetime
from pandas import DataFrame
from statsmodels.tsa.arima_model import ARIMA
from matplotlib import pyplot

def parser(x):
    return datetime.strptime('190'+x, '%Y-%m')

series = read_csv('shampoo-sales.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
# fit model
model = ARIMA(series, order=(5,1,0))
model_fit = model.fit(disp=0)
print(model_fit.summary())
# plot residual errors
residuals = DataFrame(model_fit.resid)
residuals.plot()
pyplot.show()
residuals.plot(kind='kde')
pyplot.show()
print(residuals.describe())
```

Running the example prints a summary of the fit model. This summarizes the coefficient values used as well as the skill of the fit on the in-sample observations.

```
                             ARIMA Model Results
==============================================================================
Dep. Variable:                D.Sales   No. Observations:                   35
Model:                 ARIMA(5, 1, 0)   Log Likelihood                -196.170
Method:                       css-mle   S.D. of innovations             64.241
Date:                Mon, 12 Dec 2016   AIC                            406.340
Time:                        11:09:13   BIC                            417.227
Sample:                    02-01-1901   HQIC                           410.098
                         - 12-01-1903
=================================================================================
                    coef    std err          z      P>|z|      [95.0% Conf. Int.]
---------------------------------------------------------------------------------
const            12.0649      3.652      3.304      0.003         4.908    19.222
ar.L1.D.Sales    -1.1082      0.183     -6.063      0.000        -1.466    -0.750
ar.L2.D.Sales    -0.6203      0.282     -2.203      0.036        -1.172    -0.068
ar.L3.D.Sales    -0.3606      0.295     -1.222      0.231        -0.939     0.218
ar.L4.D.Sales    -0.1252      0.280     -0.447      0.658        -0.674     0.424
ar.L5.D.Sales     0.1289      0.191      0.673      0.506        -0.246     0.504
                                    Roots
=============================================================================
                 Real           Imaginary           Modulus         Frequency
-----------------------------------------------------------------------------
AR.1           -1.0617           -0.5064j            1.1763           -0.4292
AR.2           -1.0617           +0.5064j            1.1763            0.4292
AR.3            0.0816           -1.3804j            1.3828           -0.2406
AR.4            0.0816           +1.3804j            1.3828            0.2406
AR.5            2.9315           -0.0000j            2.9315           -0.0000
-----------------------------------------------------------------------------
```

First, we get a line plot of the residual errors, suggesting that there may still be some trend information not captured by the model.

Next, we get a density plot of the residual error values, suggesting the errors are Gaussian, but may not be centered on zero.

The distribution of the residual errors is displayed. The results show that indeed there is a bias in the prediction (a non-zero mean in the residuals).

```
count     35.000000
mean      -5.495213
std       68.132882
min     -133.296597
25%      -42.477935
50%       -7.186584
75%       24.748357
max      133.237980
```

Note that although we used the entire dataset for time series analysis above, ideally we would perform this analysis on just the training dataset when developing a predictive model.

Next, let’s look at how we can use the ARIMA model to make forecasts.

## Rolling Forecast ARIMA Model

The ARIMA model can be used to forecast future time steps.

We can use the predict() function on the ARIMAResults object to make predictions. It accepts as arguments the indexes of the time steps for which to make predictions. These indexes are relative to the start of the training dataset used to make predictions.

If we used 100 observations in the training dataset to fit the model, then the index of the next time step for making a prediction would be specified to the prediction function as *start=101, end=101*. This would return an array with one element containing the prediction.

We also would prefer the forecasted values to be in the original scale, in case we performed any differencing (*d>0* when configuring the model). This can be specified by setting the *typ* argument to the value *‘levels’*: *typ=’levels’*.
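
Returning a differenced forecast to the original scale amounts to adding the forecast change back onto the last known level; this is only a conceptual sketch of what *typ='levels'* accomplishes for *d=1*, and the numbers below are made up for illustration:

```python
# Made-up numbers: a model fit on differenced data forecasts a *change*,
# not a level. Adding the change back onto the last known observation
# restores the original scale.
history = [266.0, 145.9, 183.1]  # observations on the original scale
diff_forecast = 25.0             # forecast change from the differenced model
level_forecast = history[-1] + diff_forecast
print(level_forecast)  # 208.1
```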

Alternatively, we can avoid all of these specifications by using the forecast() function, which performs a one-step forecast using the model.

We can split the training dataset into train and test sets, use the train set to fit the model, and generate a prediction for each element on the test set.

A rolling forecast is required given the dependence on observations in prior time steps for differencing and the AR model. A crude way to perform this rolling forecast is to re-create the ARIMA model after each new observation is received.

We manually keep track of all observations in a list called history that is seeded with the training data and to which new observations are appended each iteration.

Putting this all together, below is an example of a rolling forecast with the ARIMA model in Python.

```python
from pandas import read_csv
from pandas import datetime
from matplotlib import pyplot
from statsmodels.tsa.arima_model import ARIMA
from sklearn.metrics import mean_squared_error

def parser(x):
    return datetime.strptime('190'+x, '%Y-%m')

series = read_csv('shampoo-sales.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
X = series.values
size = int(len(X) * 0.66)
train, test = X[0:size], X[size:len(X)]
history = [x for x in train]
predictions = list()
for t in range(len(test)):
    model = ARIMA(history, order=(5,1,0))
    model_fit = model.fit(disp=0)
    output = model_fit.forecast()
    yhat = output[0]
    predictions.append(yhat)
    obs = test[t]
    history.append(obs)
    print('predicted=%f, expected=%f' % (yhat, obs))
error = mean_squared_error(test, predictions)
print('Test MSE: %.3f' % error)
# plot
pyplot.plot(test)
pyplot.plot(predictions, color='red')
pyplot.show()
```

Running the example prints the prediction and expected value each iteration.

We can also calculate a final mean squared error score (MSE) for the predictions, providing a point of comparison for other ARIMA configurations.

```
predicted=349.117688, expected=342.300000
predicted=306.512968, expected=339.700000
predicted=387.376422, expected=440.400000
predicted=348.154111, expected=315.900000
predicted=386.308808, expected=439.300000
predicted=356.081996, expected=401.300000
predicted=446.379501, expected=437.400000
predicted=394.737286, expected=575.500000
predicted=434.915566, expected=407.600000
predicted=507.923407, expected=682.000000
predicted=435.483082, expected=475.300000
predicted=652.743772, expected=581.300000
predicted=546.343485, expected=646.900000
Test MSE: 6958.325
```

A line plot is created showing the expected values (blue) compared to the rolling forecast predictions (red). We can see the values show some trend and are in the correct scale.

The model could use further tuning of the p, d, and maybe even the q parameters.

## Configuring an ARIMA Model

The classical approach for fitting an ARIMA model is to follow the Box-Jenkins Methodology.

This is a process that uses time series analysis and diagnostics to discover good parameters for the ARIMA model.

In summary, the steps of this process are as follows:

1. **Model Identification**. Use plots and summary statistics to identify trends, seasonality, and autoregression elements to get an idea of the amount of differencing and the size of the lag that will be required.
2. **Parameter Estimation**. Use a fitting procedure to find the coefficients of the regression model.
3. **Model Checking**. Use plots and statistical tests of the residual errors to determine the amount and type of temporal structure not captured by the model.

The process is repeated until either a desirable level of fit is achieved on the in-sample or out-of-sample observations (e.g. training or test datasets).

The process was described in the classic 1970 textbook on the topic titled Time Series Analysis: Forecasting and Control by George Box and Gwilym Jenkins. An updated 5th edition is now available if you are interested in going deeper into this type of model and methodology.

Given that the model can be fit efficiently on modest-sized time series datasets, grid searching parameters of the model can be a valuable approach.
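
A grid search over (p,d,q) can be sketched as follows. Note that `evaluate_order` here is a hypothetical placeholder so the sketch runs standalone; in practice it would fit an ARIMA with the given order using walk-forward validation and return the out-of-sample error:

```python
from itertools import product

# Candidate (p, d, q) values; small ranges keep the search tractable.
p_values, d_values, q_values = range(0, 3), range(0, 2), range(0, 3)

def evaluate_order(order):
    # Placeholder scoring so the sketch runs standalone. In practice:
    # fit ARIMA(order) with walk-forward validation and return the
    # out-of-sample MSE instead.
    p, d, q = order
    return p + d + q

orders = list(product(p_values, d_values, q_values))
best = min(orders, key=evaluate_order)
print(len(orders), best)  # 18 candidates; the stub favors (0, 0, 0)
```

Wrapping the fit in a try/except is also useful in a real search, since some orders fail to converge.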

## Summary

In this tutorial, you discovered how to develop an ARIMA model for time series forecasting in Python.

Specifically, you learned:

- About the ARIMA model, how it can be configured, and assumptions made by the model.
- How to perform a quick time series analysis using the ARIMA model.
- How to use an ARIMA model to forecast out of sample predictions.

Do you have any questions about ARIMA, or about this tutorial?

Ask your questions in the comments below and I will do my best to answer.

Many thanks.

You’re welcome.

Much appreciated, Jason. Keep them coming, please.

Sure thing! I’m glad you’re finding them useful.

What else would you like to see?

Hi Jason ,can you suggest how one can solve time series problem if the target variable is categorical having around 500 categories.

Thanks

That is a lot of categories.

Perhaps moving to a neural network type model with a lot of capacity. You may also require a vast amount of data to learn this problem.

Good, I have been paying close attention to your blog.

Thanks!

Gives me loads of errors:

```
Traceback (most recent call last):
  File "/Users/kevinoost/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py", line 2276, in converter
    date_parser(*date_cols), errors='ignore')
  File "/Users/kevinoost/PycharmProjects/ARIMA/main.py", line 6, in parser
    return datetime.strptime('190'+x, '%Y-%m')
TypeError: strptime() argument 1 must be str, not numpy.ndarray

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/kevinoost/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py", line 2285, in converter
    dayfirst=dayfirst),
  File "pandas/src/inference.pyx", line 841, in pandas.lib.try_parse_dates (pandas/lib.c:57884)
  File "pandas/src/inference.pyx", line 838, in pandas.lib.try_parse_dates (pandas/lib.c:57802)
  File "/Users/kevinoost/PycharmProjects/ARIMA/main.py", line 6, in parser
    return datetime.strptime('190'+x, '%Y-%m')
  File "/Users/kevinoost/anaconda/lib/python3.5/_strptime.py", line 510, in _strptime_datetime
    tt, fraction = _strptime(data_string, format)
  File "/Users/kevinoost/anaconda/lib/python3.5/_strptime.py", line 343, in _strptime
    (data_string, format))
ValueError: time data '190Sales of shampoo over a three year period' does not match format '%Y-%m'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/kevinoost/PycharmProjects/ARIMA/main.py", line 8, in <module>
    series = read_csv('shampoo-sales.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
  File "/Users/kevinoost/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py", line 562, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/Users/kevinoost/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py", line 325, in _read
    return parser.read()
  File "/Users/kevinoost/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py", line 815, in read
    ret = self._engine.read(nrows)
  File "/Users/kevinoost/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py", line 1387, in read
    index, names = self._make_index(data, alldata, names)
  File "/Users/kevinoost/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py", line 1030, in _make_index
    index = self._agg_index(index)
  File "/Users/kevinoost/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py", line 1111, in _agg_index
    arr = self._date_conv(arr)
  File "/Users/kevinoost/anaconda/lib/python3.5/site-packages/pandas/io/parsers.py", line 2288, in converter
    return generic_parser(date_parser, *date_cols)
  File "/Users/kevinoost/anaconda/lib/python3.5/site-packages/pandas/io/date_converters.py", line 38, in generic_parser
    results[i] = parse_func(*args)
  File "/Users/kevinoost/PycharmProjects/ARIMA/main.py", line 6, in parser
    return datetime.strptime('190'+x, '%Y-%m')
  File "/Users/kevinoost/anaconda/lib/python3.5/_strptime.py", line 510, in _strptime_datetime
    tt, fraction = _strptime(data_string, format)
  File "/Users/kevinoost/anaconda/lib/python3.5/_strptime.py", line 343, in _strptime
    (data_string, format))
ValueError: time data '190Sales of shampoo over a three year period' does not match format '%Y-%m'

Process finished with exit code 1
```

Help would be much appreciated.

It looks like there might be an issue with your data file.

Open the csv in a text editor and confirm the header line looks sensible.

Also confirm that you have no extra data at the end of the file. Sometimes the datamarket files download with footer data that you need to delete.

Say I have time series data with many attributes. For example, a row will have (speed, fuel, tire_pressure); how could we build a model out of this? The value of each column may affect the others, so we cannot forecast on solely one column. I have googled a lot, but all the examples I've found so far only work on a time series of one attribute.

This is called multivariate time series forecasting. Linear models like ARIMA were not designed for this type of problem.

Generally, you can use the lag-based representation of each feature and then apply a standard machine learning algorithm.
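
The lag-based representation mentioned here can be sketched as a sliding-window transform. The `series_to_supervised` helper name and the toy input series are assumptions for illustration:

```python
def series_to_supervised(values, n_lags):
    """Turn a univariate series into (X, y) rows of lagged inputs."""
    X, y = [], []
    for i in range(n_lags, len(values)):
        X.append(values[i - n_lags:i])  # the previous n_lags observations
        y.append(values[i])             # the value to predict
    return X, y

X, y = series_to_supervised([1, 2, 3, 4, 5, 6], n_lags=2)
print(X)  # [[1, 2], [2, 3], [3, 4], [4, 5]]
print(y)  # [3, 4, 5, 6]
```

The resulting (X, y) pairs can be fed to any standard regression algorithm; with multiple features, the lag windows of each feature are simply concatenated per row.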

I hope to have some tutorials on this soon.

Wanted to check in on this, do you have any tutorials on multivariate time series forecasting?

Also, when you say standard machine learning algorithm, would a random forest model work?

Thanks!

Update: the `statsmodels.tsa.arima_model.ARIMA()` function documentation says it takes an optional parameter `exog`, which is described as "an optional array of exogenous variables". This sounds like multivariate analysis to me, would you agree?

I am trying to predict the number of cases of a mosquito-borne disease, over time, given weather data. So I believe the ARIMA model should work for this, correct?

Thank you!

I have not experimented with this argument.

No multivariate examples at this stage.

Yes, any supervised learning method.

Hello Ng,

Your problem fits what VAR (Vector Autoregression) models is designed for. See the following links for more information. I hope this helps your work.

https://en.wikipedia.org/wiki/Vector_autoregression

http://statsmodels.sourceforge.net/devel/vector_ar.html

Hi, would you have a example for the seasonal ARIMA post? I have installed latest statsmodels module, but there is an error of import the SARIMAX. Do help if you manage to figure it out. Thanks.

Hi Kelvid, I don’t have one at the moment. I ‘ll prepare an example of SARIMAX and post it soon.

It is so informative, thank you.

I’m glad to hear that Muhammad.

Great post Jason!

I have a couple of questions:

– Just to be sure. model_fit.forecast() is single step ahead forecasts and model_fit.predict() is for multiple step ahead forecasts?

– I am working with a series that seems at least quite similar to the shampoo series (by inspection). When I use predict on the training data, I get this zig-zag pattern in the prediction as well. But for the test data, the prediction is much smoother and seems to saturate at some level. Would you expect this? If not, what could be wrong?

Hi Sebastian,

Yes, forecast() is for one step forecasts. You can do one step forecasts with predict() also, but it is more work.

I would not expect prediction beyond a few time steps to be very accurate, if that is your question?

Thanks for the reply!

Concerning the second question. Yes, you are right the prediction is not very accurate. But moreover, the predicted time series has a totally different frequency content. As I said, it is smooth and not zig-zaggy as the original data. Is this normal or am I doing something wrong. I also tried the multiple step prediction (model_fit.predict()) on the training data and then the forecast seem to have more or less the same frequency content (more zig-zaggy) as the data I am trying to predict.

Hi Sebastian, I see.

In the case of predicting on the training dataset, the model has access to real observations. For example, if you predict the next 5 obs somewhere in the training dataset, it will use obs(t+4) to predict t+5 rather than prediction(t+4).

In the case of predicting beyond the end of the model data, it does not have obs to make predictions (unless you provide them), it only has access to the predictions it made for prior time steps. The result is the errors compound and things go off the rails fast (flat forecast).

Does that make sense/help?

That helped!

Thanks!

Glad to hear it Sebastian.

Hi Jason,

Suppose my training set is 1949 to 1961. Can I get the data for 1970 using the forecast or predict function?

Thanks

Satya

Yes, you would have to predict 10 years worth of data though. The predictions after 10 years would likely have a lot of error.

So this is building a model and then checking it off of the given data right?

-How can I predict what would come next after the last data point? Am I misunderstanding the code?

Hi Elliot,

You can predict the next data point at the end of the data by training on all of the available data then calling model.forecast().

I have a post on how to make predictions here:

http://machinelearningmastery.com/make-predictions-time-series-forecasting-python/

Does that help?

I tried the model.forecast at the end of the program.

"AttributeError: 'ARIMA' object has no attribute 'forecast'"

Also on your article: http://machinelearningmastery.com/make-predictions-time-series-forecasting-python/

In step 3, when it says “Prediction: 46.755211”, is that meaning after it fit the model on the dataset, it uses the model to predict what would happen next from the dataset, right?

Hi Elliot, the forecast() function is on the ARIMAResults object. You can learn more about it here:

http://statsmodels.sourceforge.net/stable/generated/statsmodels.tsa.arima_model.ARIMAResults.forecast.html

Thanks Jason for this post!

It was really useful. And your blogs are becoming a must read for me because of the applicable and piecemeal nature of your tutorials.

Keep up the good work!

You’re welcome, I’m glad to hear that.

Hi,

This is not the first post on ARIMA, but it is the best so far. Thank you.

I’m glad to hear you say that Kalin.

Hey Jason,

thank you very much for the post, very well written! I have a question: I used your approach to build the model, but when I try to forecast data that is out of sample, I commented out obs = test[t] and changed history.append(obs) to history.append(yhat), and I got a flat prediction… so what could be the reason? And how do you actually do out-of-sample predictions based on the model fitted on the train dataset? Thank you very much!

Hi james,

Each loop in the rolling forecast shows you how to make a one-step out of sample forecast.

Train your ARIMA on all available data and call forecast().

If you want to perform a multi-step forecast, indeed, you will need to treat prior forecasts as “observations” and use them for subsequent forecasts. You can do this automatically using the predict() function. Depending on the problem, this approach is often not skillful (e.g. a flat forecast).

Does that help?

Hi Jason,

thank you for you reply! so what could be the reason a flat forecast occurs and how to avoid it?

Hi James,

The model may not have enough information to make a good forecast.

Consider exploring alternate methods that can perform multi-step forecasts in one step – like neural nets or recurrent neural nets.

Hi Jason,

thanks a lot for your information! still need to learn a lot from people like you! 😀 nice day!

I’m here to help James!

When I calculate train and test error, train RMSE is greater than test RMSE. Why is that?

I see this happen sometimes Supriya.

It suggests the model may not be well suited for the data.

Hello Jason, thanks for this amazing post.

I was wondering how the "size" works here. For example, let's say I want to forecast only 30 days ahead. I keep getting problems with the degrees of freedom.

Could you please explain this to me.

Thanks

Hi Matias, the “size” in the example is used to split the data into train/test sets for model evaluation using walk forward validation.

You can set this any way you like or evaluate your model different ways.

To forecast 30 days ahead, you are going to need a robust model and enough historic data to evaluate this model effectively.

I get it. Thanks Jason.

I was thinking, in this particular example, will the prediction change if we keep adding data?

Great question Matias.

The amount of history is one variable to test with your model.

Design experiments to test if having more or less history improves performance.

Dear Jason,

Thank you for explaining the ARIMA model in such clear detail.

It helped me to make my own model to get numerical forecasts and store them in a database.

So nice that we live in an era where knowledge is de-mystified.

I’m glad to hear it!

Hi Jason. Very good work!

It would be great to see how forecasting models can be used to detect anomalies in time series. thanks.

Great suggestion, thanks Jacques.

Hi there. Many thanks. I think you need to change the way you parse the datetime to:

datetime.strptime('19'+x, '%Y-%b')

Many thanks

Are you sure?

See this list of abbreviations:

https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior

The “%m” refers to “Month as a zero-padded decimal number.” which is exactly what we have here.

See a sample of the raw data file:

The “%b” refers to “Month as locale’s abbreviated name.” which we do not have here.

Hi Jason,

Lucky I found this at the beginning of my project. It's a great starting point and enriching.

Keep it coming :).

Can this be used for nonlinear time series as well?

Thanks,

niri

Glad to hear it Niirkshith.

Try and see.

Dear Dr Jason,

In the above example of the rolling forecast, you used the rmse of the predicted and the actual value.

Another way of getting the residuals of the model is to get the std dev of the residuals of the fitted model.

Question, is the std dev of the residuals the same as the root_mean_squared(actual, predicted)?

Thank you

Anthony of Sydney NSW

What is the difference between measuring the std deviation of the residuals of a fitted model and the RMSE of the rolling forecast?

No, they are not the same.

See this post on performance measures:

http://machinelearningmastery.com/time-series-forecasting-performance-measures-with-python/

The RMSE is like the average residual error, but not quite because of the square and square root that makes the result positive.
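
A small made-up example shows the relationship: for biased residuals, RMSE folds in both the spread and the non-zero mean (rmse² = std² + mean²), so the two measures differ:

```python
import math

# Made-up residuals with a non-zero mean (a biased model).
residuals = [4.0, 6.0, 4.0, 6.0]

mean = sum(residuals) / len(residuals)                                     # 5.0
std = math.sqrt(sum((e - mean) ** 2 for e in residuals) / len(residuals))
rmse = math.sqrt(sum(e ** 2 for e in residuals) / len(residuals))

print(std)   # 1.0 -- spread around the (non-zero) mean only
print(rmse)  # sqrt(26), about 5.1 -- also captures the bias: rmse^2 = std^2 + mean^2
```

The two only coincide when the residual mean is zero, i.e. when the forecasts are unbiased.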

Hi Jason,

Great writeup. I had a query: when you have seasonal data and do seasonal differencing, i.e. ex y(t)=y(t)-y(t-12) for yearly data, what will be the value of d in ARIMA(p,d,q)?

typo, ex y(t)=y(t)-y(t-12) for monthly data not yearly

Great question Niirkshith.

ARIMA will not do seasonal differencing (there is a version that will called SARIMA). The d value on ARIMA will be unrelated to the seasonal differencing and will assume the input data is already seasonally adjusted.

Thanks for getting back.

Hi, Jason

thanks for this example. My question: how is the parameter q chosen?

best Ivan

You can use ACF and PACF plots to help choose the values for p and q.

See this post:

http://machinelearningmastery.com/gentle-introduction-autocorrelation-partial-autocorrelation/

Hi Jason, I am wondering if you did a similar tutorial on multi-variate time series forecasting?

Not yet, I am working on some.

Hi Jason,

Any updates on the same?

Hi Jason,

Thanks for the great post! It was very helpful. I’m currently trying to forecast with the ARIMA model using order (4, 1, 5) and I’m getting an error message “The computed initial MA coefficients are not invertible. You should induce invertibility, choose a different model order, or you can pass your own start_params.” The model works when fitting, but seems to error out when I move to model_fit = model.fit(disp=0). The forecast works well when using your parameters of (0, 1, 5) and I used ACF and PACF plots to find my initial p and q parameters. Any ideas on the cause/fix for the error? Any tips would be much appreciated.

It’s a great blog that you have, but the PACF determines the AR order not the ACF.

Thanks Tom.

I believe ACF and PACF both inform values for q and p:

http://machinelearningmastery.com/gentle-introduction-autocorrelation-partial-autocorrelation/

Good afternoon!

Is there a Python analog to the auto.arima function from the R language, for automatic selection of ARIMA parameters?

Thank you!

Yes, you can grid search yourself, see how here:

http://machinelearningmastery.com/grid-search-arima-hyperparameters-with-python/

Hi. Great one. Suppose I have data for multiple airlines: the number of passengers for two years, recorded on a daily basis. Now I want to predict the number of possible passengers for each airline over the next few months. How should I fit these time series models: a separate model for each airline, or one single model?

Try both approaches and double down on what works best.

Hi Jason, if in my dataset, my first column is date (YYYYMMDD) and second column is time (hhmmss) and third column is value at given date and time. So could I use ARIMA model for forecasting such type of time series ?

Yes, use a custom parse function to combine the date and time into one index column.

Hi Sir, Do you have tutorial about vector auto regression model (for multi-variate time series forecasting?)

Not at the moment.

Thanks a lot, Dr. Jason. This tutorial explained a lot. But when I tried to run it on an oil prices dataset from BP, I got the following error:

SVD did not converge

I used (p,d,q) = (5, 1, 0)

Would you please help me on solving or at least understanding this error?

Perhaps consider rescaling your input data and explore other configurations?

Hi Jason,

I have a general question about ARIMA model in the case of multiple Time Series:

suppose you have not only one time series but many (i.e. the power generated per hour at 1000 different wind farms). So you have a dataset of 1000 time series of N points each and you want to predict the next N+M points for each of the time series.

Analyzing each time series separately with the ARIMA could be a waste. Maybe there are similarities in the time evolution of these 1000 different patterns which could help my predictions. What approach would you suggest in this case?

You could not use ARIMA.

For linear models, you could use vector autoregressions (VAR).

For nonlinear methods, I’d recommend a neural network.

I hope that helps as a start.

Hi Jason, is it possible to train the ARIMA model with multiple files? Thanks!

Do you mean multiple series?

See VAR:

http://www.statsmodels.org/dev/vector_ar.html

“First, we get a line plot of the residual errors, suggesting that there may still be some trend information not captured by the model.”

So are you looking for a smooth flat line in the curve?

No, the upward trend that appears to exist in the plot of residuals.

At the end of the code, when I tried to print the predictions, they printed as a list of arrays. How do I convert them to plain data points?

print(predictions)

[array([ 309.59070719]), array([ 388.64159699]), array([ 348.77807261]), array([ 383.60202178]), array([ 360.99214813]), array([ 449.34210105]), array([ 395.44928401]), array([ 434.86484106]), array([ 512.30201612]), array([ 428.59722583]), array([ 625.99359188]), array([ 543.53887362])]

Never mind.. I figured it out…

forecasts = numpy.array(predictions)

[[ 309.59070719]
 [ 388.64159699]
 [ 348.77807261]
 [ 383.60202178]
 [ 360.99214813]
 [ 449.34210105]
 [ 395.44928401]
 [ 434.86484106]
 [ 512.30201612]
 [ 428.59722583]
 [ 625.99359188]
 [ 543.53887362]]

Keep up the good work Jason.. Your blogs are extremely helpful and easy to follow.. Loads of appreciation..

Glad to hear it.

Hi Jason and thank you for this post, its really helpful!

I have one question regarding ARIMA computation time.

I’m working on a dataset of 10K samples, and I’ve tried rolling and “non rolling” (where coefficients are only estimated once or at least not every new sample) forecasting with ARIMA :

– rolling forecast produces good results but takes a big amount of time (I’m working with an old computer, around 3/6h depending on the ARMA model);

– “non rolling” doesn’t forecast well at all.

Is re-estimating the coefficients for each new sample the only way to get proper ARIMA forecasts?

Thanks for your help!

I would focus on the approach that gives the best results on your problem and is robust. Don’t get caught up on “proper”.

Dear Respected Sir, I have tried to use ARIMA model for my dataset, some samples of my dataset are following,

YYYYMMDD hhmmss Duration
20100916 130748 18
20100916 131131 99
20100916 131324 214
20100916 131735 72
20100916 135342 37
20100916 144059 250
20100916 150148 87
20100916 150339 0
20100916 150401 180
20100916 154652 248
20100916 183403 0
20100916 210148 0
20100917 71222 179
20100917 73320 0
20100917 81718 25
20100917 93715 15

But when I used the ARIMA model on this type of dataset, the predictions were very bad and the test MSE was very high as well. My dataset has an irregular pattern and the autocorrelation is also very low. So can an ARIMA model be used for this type of dataset, or do I have to modify my dataset in order to use ARIMA?

Looking forward.

Thanks

Perhaps try data transforms?

Perhaps try other algorithms?

Perhaps try gathering more data.

Here are more ideas:

http://machinelearningmastery.com/machine-learning-performance-improvement-cheat-sheet/

Hi Jason,

def parser(x):
    return datetime.strptime('190' + x, '%Y-%m')

series = read_csv('/home/administrator/Downloads/shampoo.csv', header=0, parse_dates=[0], index_col=0, squeeze=True, date_parser=parser)
print(series.head())

for these lines of code, I’m getting the following error

ValueError: time data '190Sales of shampoo over a three year period' does not match format '%Y-%m'

Please help.

Thanks

Check that you have deleted the footer in the raw data file.

Hi Jason

Does ARIMA have any limitations for size of the sample. I have a dataset with 18k rows of data, ARIMA just doesn’t complete.

Thanks

Kushal

Yes, it does not work well with lots of data (linalg methods under the covers blow up) and it can take forever as you see.

You could fit the model using gradient descent, but not with statsmodels, you may need to code it yourself.
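As a sketch of what coding it yourself might look like, here is a toy AR(p) model fit with plain gradient descent in NumPy. It avoids the exact-likelihood linear algebra that blows up on long series, but note it is only the AR part, with no differencing or MA terms, so it is not a full ARIMA:

```python
# Fit AR(p) coefficients by minimizing mean squared one-step error with
# batch gradient descent.
import numpy as np

def fit_ar_gd(series, p=2, lr=0.1, epochs=2000):
    x = np.asarray(series, dtype=float)
    # lag matrix: row t holds [x[t-1], ..., x[t-p]]
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    w, b, n = np.zeros(p), 0.0, len(y)
    for _ in range(epochs):
        err = X @ w + b - y
        w -= lr * (X.T @ err) / n  # gradient of mean squared error
        b -= lr * err.mean()
    return w, b

# simulate an AR(2) process with coefficients 0.6 and -0.2
rng = np.random.default_rng(5)
x = [0.0, 0.0]
for _ in range(5000):
    x.append(0.6 * x[-1] - 0.2 * x[-2] + rng.normal())
w, b = fit_ar_gd(x, p=2)
print(np.round(w, 2))  # should land close to the true (0.6, -0.2)
```

For 18k rows this runs in seconds, and mini-batch or online updates would scale it further.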

Love this. The code is very straightforward and the explanations are nice.

I would like to see a HMM model on here. I have been struggling with a few different packages (pomegranate and hmmlearn) for some time now. would like to see what you can do with it! (particularly a stock market example)

Thanks Olivia, I hope to cover HMMs in the future.

Good evening,

In what I am doing, I have a training set and a test set. In the training set, I am fitting an ARIMA model, let’s say ARIMA(0,1,1) to the training set. What I want to do is use this model and apply it to the test set to get the residuals.

So far I have:

model = ARIMA(data,order = (0,1,1))

model_fit = model.fit(disp=0)

res = model_fit.resid

This gives me the residuals for the training set. So I want to apply the ARIMA model in ‘model’ to the test data.

Is there a function to do this?

Thank you

Hi Ben,

You could use your fit model to make a prediction for the test dataset then compare the predictions vs the real values to calculate the residual errors.

Could you give me an example of the syntax? I understand the idea, but when I tried it, the results were very poor.

I provide a suite of examples, please search the blog for ARIMA or start here:

http://machinelearningmastery.com/start-here/#timeseries