How to Tune ARIMA Parameters in Python

There are many parameters to consider when configuring an ARIMA model with Statsmodels in Python.

In this tutorial, we take a look at a few key parameters (other than the order parameter) that you may be curious about.

Specifically, after completing this tutorial, you will know:

  • How to suppress noisy output from the underlying mathematical libraries when fitting an ARIMA model.
  • The effect of enabling or disabling a trend term in your ARIMA model.
  • The influence of using different mathematical solvers to fit coefficients to your training data.

Note, if you are interested in tuning the order parameter, see the post:

Let’s get started.

Shampoo Sales Dataset

This dataset describes the monthly number of sales of shampoo over a 3 year period.

The units are a sales count and there are 36 observations. The original dataset is credited to Makridakis, Wheelwright, and Hyndman (1998).

You can download and learn more about the dataset here.

The example below loads and creates a plot of the loaded dataset.
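The original listing is not preserved here, so below is a minimal loading sketch. It assumes the dataset has been saved locally as shampoo-sales.csv (an assumed filename) with dates encoded as 'Y-MM' (e.g. '3-12' for year 3, December), which the custom parser anchors to an arbitrary base decade.

```python
from datetime import datetime
from pandas import read_csv

def parser(x):
    # the shampoo CSV encodes dates as 'Y-MM' (e.g. '3-12' = year 3, December);
    # anchor them to an arbitrary base decade so pandas gets real datetimes
    return datetime.strptime('190' + x, '%Y-%m')

# assumed local filename; download the CSV alongside this script first
# series = read_csv('shampoo-sales.csv', header=0, index_col=0, date_parser=parser)
# print(series.head())
# series.plot()

print(parser('3-12'))  # datetime for year 3, December
```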

Running the example loads the dataset as a Pandas Series and prints the first 5 rows.

A line plot of the series is then created showing a clear increasing trend.

Line Plot of Monthly Shampoo Sales Dataset

Experimental Test-Setup

It is important to evaluate time series forecasting models consistently.

In this section, we will define how we will evaluate the three forecast models in this tutorial.

First, we will hold the last one year of data back and evaluate forecasts on this data. Given the data is monthly, this means that the last 12 observations will be used as test data.

We will use a walk-forward validation method to evaluate model performance. This means that each time step in the test dataset will be enumerated, a model constructed on history data, and the forecast compared to the expected value. The observation will then be added to the training dataset and the process repeated.

Walk-forward validation is a realistic way to evaluate time series forecast models as one would expect models to be updated as new observations are made available.

Finally, forecasts will be evaluated using root mean squared error, or RMSE. The benefit of RMSE is that it penalizes large errors and the scores are in the same units as the forecast values (shampoo sales per month).

An ARIMA(4,1,0) forecast model will be used as the baseline to explore the additional parameters of the model. This may not be the optimal model for the problem, but is generally skillful against some other hand-tested configurations.

In summary, the test harness involves:

  • The last 1 year of data (12 observations) used as a test set.
  • Walk-forward validation for model evaluation.
  • Root mean squared error used to report model skill.
  • An ARIMA(4,1,0) model will be used as a baseline.

The complete example is listed below.

Running the example spews a lot of convergence information and finishes with an RMSE score of 84.832 monthly shampoo sales.

A plot of the forecast vs the actual observations in the test harness is created to give some context for the model we are working with.

ARIMA Forecast for Monthly Shampoo Sales Dataset

Now let’s dive into some of the other ARIMA parameters.

The “disp” Parameter

The first parameter we will look at is the disp parameter.

This is described as follows:

If True, convergence information is printed. For the default l_bfgs_b solver, disp controls the frequency of the output during the iterations. disp < 0 means no output in this case.

By default, this parameter is set to 1, which shows output.

We are dealing with this first because it is critical in removing all of the convergence output when evaluating the ARIMA model using walk-forward validation.

Setting it to False turns off all of this noise.

The complete example is listed below.

Running this example not only produces cleaner output, but also executes much faster.

We will leave disp=False on all following examples.

The “transparams” Parameter

This parameter controls whether or not to perform a transform on AR parameters.

Specifically, it is described as:

Whether or not to transform the parameters to ensure stationarity. Uses the transformation suggested in Jones (1980). If False, no checking for stationarity or invertibility is done.

By default, transparams is set to True, meaning this transform is performed.

This parameter also appears in the R version of the ARIMA implementation (see the R docs), and I expect that is why it exists in statsmodels.

The statsmodels documentation is thin on this point, but you can learn more about the transform in the Jones (1980) paper cited above.

The example below demonstrates turning this parameter off.

Running this example results in more convergence warnings from the solver.

Turning transparams off also yields a slightly worse RMSE on this dataset.

Experiment with this parameter on and off on your dataset and confirm whether it provides a benefit.

The “trend” Parameter

The trend parameter adds an additional constant term to the model. Think of it like a bias or intercept term.

It is described as:

Whether to include a constant or not. ‘c’ includes constant, ‘nc’ no constant.

By default, a trend term is enabled with trend set to ‘c’.

We can see the effect clearly if we rerun the original example and print the model coefficients for each step of the walk-forward validation and compare the same with the trend term turned off.

The below example prints the coefficients each iteration with the trend constant enabled (the default).

Running the example shows the 4 AR terms specified in the order of the model plus the first term in the array, which is a trend constant.

Note that one set of parameters is printed for each model fit, one for each step of the walk-forward validation.

We can repeat this experiment with the trend term disabled (trend=’nc’), as follows.

Running the example shows a slightly worse RMSE score on this problem, with this ARIMA configuration.

We can see that the constant term (the first value in the array) is removed from the array of coefficients each iteration.

Experiment on your own problem and determine whether this constant improves performance.

My own experimentation suggests that ARIMA models may be less likely to converge with the trend term disabled, especially when one or more MA terms are used.

The “solver” Parameter

The solver parameter specifies the numerical optimization method to fit the coefficients to the data.

There is often little reason to tune this parameter other than execution speed if you have a lot of data. The differences will likely be quite minor.

The parameter is described as follows:

Solver to be used. The default is ‘lbfgs’ (limited memory Broyden-Fletcher-Goldfarb-Shanno). Other choices are ‘bfgs’, ‘newton’ (Newton-Raphson), ‘nm’ (Nelder-Mead), ‘cg’ – (conjugate gradient), ‘ncg’ (non-conjugate gradient), and ‘powell’. By default, the limited memory BFGS uses m=12 to approximate the Hessian, projected gradient tolerance of 1e-8 and factr = 1e2. You can change these by using kwargs.

The default is the fast “lbfgs” method (Limited-memory BFGS).

Nevertheless, below is an experiment that compares the RMSE model skill and execution time of each solver.

Running the example prints the RMSE and time in seconds of each solver.

A graph of solver vs RMSE is provided. As expected, there is little difference between the solvers on this small dataset.

You may see different results or different stability of the solvers on your own problem.

ARIMA Model Error (Test RMSE) vs Solver

A graph of solver vs execution time in seconds is also created. The graph shows a marked difference between solvers.

Generally, “lbfgs” and “bfgs” provide a good real-world tradeoff between speed, performance, and stability.

ARIMA Execution Time (seconds) vs Solver

If you do decide to test out solvers, you may also want to vary the “maxiter” parameter that limits the number of iterations before convergence, the “tol” parameter that defines the precision of convergence, and the “method” parameter that defines the cost function being optimized.



Summary

In this tutorial, you discovered some of the finer points in configuring your ARIMA model with Statsmodels in Python.

Specifically, you learned:

  • How to turn off the noisy convergence output from the solver when fitting coefficients.
  • How to evaluate the difference between different solvers to fit your ARIMA model.
  • The effect of enabling and disabling a trend term in your ARIMA model.

Do you have any questions about fitting your ARIMA model in Python?
Ask your question in the comments below and I will do my best to answer.

