How to Check if Time Series Data is Stationary with Python

Time series is different from more traditional classification and regression predictive modeling problems.

The temporal structure adds an order to the observations. This imposed order means that important assumptions about the consistency of those observations need to be handled specifically.

For example, when modeling, there are assumptions that the summary statistics of observations are consistent. In time series terminology, we refer to this expectation as the time series being stationary.

These assumptions can be easily violated in time series by the addition of a trend, seasonality, and other time-dependent structures.

In this tutorial, you will discover how to check if your time series is stationary with Python.

After completing this tutorial, you will know:

  • How to identify obvious stationary and non-stationary time series using line plots.
  • How to spot check summary statistics like mean and variance for a change over time.
  • How to use statistical tests with statistical significance to check if a time series is stationary.

Discover how to prepare and visualize time series data and develop autoregressive forecasting models in my new book, with 28 step-by-step tutorials and full Python code.

Let’s get started.

  • Updated Feb/2017: Fixed typo in interpretation of p-value, added bullet points to make it clearer.
  • Updated May/2018: Improved language around reject vs fail to reject of statistical tests.
  • Updated Apr/2019: Updated the link to dataset.
  • Updated Aug/2019: Updated data loading to use new API.
How to Check if Time Series Data is Stationary with Python
Photo by Susanne Nilsson, some rights reserved.

Stationary Time Series

The observations in a stationary time series are not dependent on time.

Time series are stationary if they do not have trend or seasonal effects. Summary statistics calculated on the time series are consistent over time, like the mean or the variance of the observations.

When a time series is stationary, it can be easier to model. Statistical modeling methods assume or require the time series to be stationary to be effective.

Below is an example of loading the Daily Female Births dataset that is stationary.
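A minimal sketch of this step (it assumes the dataset has been downloaded as daily-total-female-births.csv into the current working directory):

```python
# load and line plot the Daily Female Births dataset
from pandas import read_csv
from matplotlib import pyplot

series = read_csv('daily-total-female-births.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
series.plot()
pyplot.show()
```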

Running the example creates the following plot.

Daily Female Births Dataset Plot

Non-Stationary Time Series

Observations from a non-stationary time series show seasonal effects, trends, and other structures that depend on the time index.

Summary statistics like the mean and variance do change over time, providing a drift in the concepts a model may try to capture.

Classical time series analysis and forecasting methods are concerned with making non-stationary time series data stationary by identifying and removing trends and removing seasonal effects.

Below is an example of the Airline Passengers dataset that is non-stationary, showing both trend and seasonal components.
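Again a minimal sketch, assuming the dataset has been saved as airline-passengers.csv in the working directory:

```python
# load and line plot the Airline Passengers dataset
from pandas import read_csv
from matplotlib import pyplot

series = read_csv('airline-passengers.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
series.plot()
pyplot.show()
```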

Running the example creates the following plot.

Non-Stationary Airline Passengers Dataset

Types of Stationary Time Series

The notion of stationarity comes from the theoretical study of time series and it is a useful abstraction when forecasting.

There are some finer-grained notions of stationarity that you may come across if you dive deeper into this topic. They are:

  • Stationary Process: A process that generates a stationary series of observations.
  • Stationary Model: A model that describes a stationary series of observations.
  • Trend Stationary: A time series that is stationary around a deterministic trend; removing the trend leaves a stationary series.
  • Seasonal Stationary: A time series that is stationary once the seasonal component has been removed.
  • Strictly Stationary: A mathematical definition of a stationary process, specifically that the joint distribution of observations is invariant to time shift.

Stationary Time Series and Forecasting

Should you make your time series stationary?

Generally, yes.

If you have clear trend and seasonality in your time series, then model these components, remove them from the observations, and then train models on the residuals.

If we fit a stationary model to data, we assume our data are a realization of a stationary process. So our first step in an analysis should be to check whether there is any evidence of a trend or seasonal effects and, if there is, remove them.

— Page 122, Introductory Time Series with R.

Statistical time series methods and even modern machine learning methods will benefit from the clearer signal in the data.

But…

We turn to machine learning methods when the classical methods fail, or when we want more or better results. We cannot know how best to model unknown nonlinear relationships in time series data, and some methods may perform better when working with non-stationary observations or with some mixture of stationary and non-stationary views of the problem.

The suggestion here is to treat whether a time series is stationary or not as another source of information that can be used in feature engineering and feature selection when applying machine learning methods to your time series problem.

Checks for Stationarity

There are many methods to check whether a time series (direct observations, residuals, or otherwise) is stationary or non-stationary.

  1. Look at Plots: You can review a time series plot of your data and visually check if there are any obvious trends or seasonality.
  2. Summary Statistics: You can review the summary statistics for your data for seasons or random partitions and check for obvious or significant differences.
  3. Statistical Tests: You can use statistical tests to check if the expectations of stationarity are met or have been violated.

Above, we introduced the Daily Female Births and Airline Passengers datasets as stationary and non-stationary respectively, with plots showing an obvious absence and presence of trend and seasonal components.

Next, we will look at a quick and dirty way to calculate and review summary statistics on our time series dataset for checking to see if it is stationary.

Summary Statistics

A quick and dirty check to see if your time series is non-stationary is to review summary statistics.

You can split your time series into two (or more) partitions and compare the mean and variance of each group. If they differ and the difference is statistically significant, the time series is likely non-stationary.

Next, let’s try this approach on the Daily Births dataset.

Daily Births Dataset

Because we are looking at the mean and variance, we are assuming that the data conforms to a Gaussian (also called the bell curve or normal) distribution.

We can also quickly check this by eyeballing a histogram of our observations.
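A sketch of this check, assuming the same daily-total-female-births.csv file as above:

```python
# histogram of the Daily Female Births observations
from pandas import read_csv
from matplotlib import pyplot

series = read_csv('daily-total-female-births.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
series.hist()
pyplot.show()
```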

Running the example plots a histogram of values from the time series. We clearly see the bell curve-like shape of the Gaussian distribution, perhaps with a longer right tail.

Histogram of Daily Female Births

Next, we can split the time series into two contiguous sequences. We can then calculate the mean and variance of each group of numbers and compare the values.
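A sketch of this split-and-compare step on the same file:

```python
# compare the mean and variance of the first and second halves of the series
from pandas import read_csv

series = read_csv('daily-total-female-births.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
X = series.values
split = len(X) // 2
X1, X2 = X[0:split], X[split:]
print('mean1=%f, mean2=%f' % (X1.mean(), X2.mean()))
print('variance1=%f, variance2=%f' % (X1.var(), X2.var()))
```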

Running this example shows that the mean and variance values are different, but in the same ball-park.

Next, let’s try the same trick on the Airline Passengers dataset.

Airline Passengers Dataset

Cutting straight to the chase, we can split our dataset and calculate the mean and variance for each group.
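A sketch of the same split-and-compare check, assuming airline-passengers.csv is in the working directory:

```python
# compare the mean and variance of the first and second halves of the series
from pandas import read_csv

series = read_csv('airline-passengers.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
X = series.values
split = len(X) // 2
X1, X2 = X[0:split], X[split:]
print('mean1=%f, mean2=%f' % (X1.mean(), X2.mean()))
print('variance1=%f, variance2=%f' % (X1.var(), X2.var()))
```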

Running the example, we can see the mean and variance look very different.

We have a non-stationary time series.

Well, maybe.

Let’s take one step back and check if assuming a Gaussian distribution makes sense in this case by plotting the values of the time series as a histogram.
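A sketch of the histogram check on the Airline Passengers data:

```python
# histogram of the Airline Passengers observations
from pandas import read_csv
from matplotlib import pyplot

series = read_csv('airline-passengers.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
series.hist()
pyplot.show()
```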

Running the example shows that indeed the distribution of values does not look like a Gaussian, therefore the mean and variance values are less meaningful.

This squashed distribution of the observations may be another indicator of a non-stationary time series.

Histogram of Airline Passengers

Reviewing the plot of the time series again, we can see that there is an obvious seasonality component, and it looks like the seasonality component is growing.

This may suggest an exponential growth from season to season. A log transform can be used to flatten out exponential change back to a linear relationship.

Below is the same histogram with a log transform of the time series.
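A sketch of this step, using NumPy's log() for the transform and also producing the line plot discussed below:

```python
# log transform the Airline Passengers series, then plot a histogram and a line plot
from pandas import read_csv
from matplotlib import pyplot
from numpy import log

series = read_csv('airline-passengers.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
X = log(series.values)
pyplot.hist(X)
pyplot.show()
pyplot.plot(X)
pyplot.show()
```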

Running the example, we can see the more familiar Gaussian-like or Uniform-like distribution of values.

Histogram Log of Airline Passengers

We also create a line plot of the log-transformed data and can see that the exponential growth seems diminished, but we still have trend and seasonal elements.

Line Plot Log of Airline Passengers

We can now calculate the mean and standard deviation of the values of the log transformed dataset.
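A sketch of this calculation on the log-transformed values:

```python
# compare the mean and standard deviation of the two halves of the log-transformed series
from pandas import read_csv
from numpy import log

series = read_csv('airline-passengers.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
X = log(series.values)
split = len(X) // 2
X1, X2 = X[0:split], X[split:]
print('mean1=%f, mean2=%f' % (X1.mean(), X2.mean()))
print('std1=%f, std2=%f' % (X1.std(), X2.std()))
```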

Running the example shows mean and standard deviation values for each group that are again similar, but not identical.

Perhaps, from these numbers alone, we would say the time series is stationary, but we strongly believe this not to be the case from reviewing the line plot.

This is a quick and dirty method that may be easily fooled.

We can use a statistical test to check if the difference between two samples of Gaussian random variables is real or a statistical fluke. We could explore statistical significance tests, like the Student t-test, but things get tricky because of the serial correlation between values.

In the next section, we will use a statistical test designed to explicitly comment on whether a univariate time series is stationary.

Augmented Dickey-Fuller Test

Statistical tests make strong assumptions about your data. They can only be used to inform the degree to which a null hypothesis can be rejected or failed to be rejected. The result must be interpreted for a given problem to be meaningful.

Nevertheless, they can provide a quick check and confirmatory evidence that your time series is stationary or non-stationary.

The Augmented Dickey-Fuller test is a type of statistical test called a unit root test.

The intuition behind a unit root test is that it determines how strongly a time series is defined by a trend.

There are a number of unit root tests and the Augmented Dickey-Fuller may be one of the more widely used. It uses an autoregressive model and optimizes an information criterion across multiple different lag values.

The null hypothesis of the test is that the time series can be represented by a unit root, that it is not stationary (has some time-dependent structure). The alternate hypothesis (rejecting the null hypothesis) is that the time series is stationary.

  • Null Hypothesis (H0): If failed to be rejected, it suggests the time series has a unit root, meaning it is non-stationary. It has some time dependent structure.
  • Alternate Hypothesis (H1): The null hypothesis is rejected; it suggests the time series does not have a unit root, meaning it is stationary. It does not have time-dependent structure.

We interpret this result using the p-value from the test. A p-value below a threshold (such as 5% or 1%) suggests we reject the null hypothesis (stationary), otherwise a p-value above the threshold suggests we fail to reject the null hypothesis (non-stationary).

  • p-value > 0.05: Fail to reject the null hypothesis (H0), the data has a unit root and is non-stationary.
  • p-value <= 0.05: Reject the null hypothesis (H0), the data does not have a unit root and is stationary.

Below is an example of calculating the Augmented Dickey-Fuller test on the Daily Female Births dataset. The statsmodels library provides the adfuller() function that implements the test.
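A minimal sketch of running the test, assuming the same daily-total-female-births.csv file as above:

```python
# Augmented Dickey-Fuller test on the Daily Female Births dataset
from pandas import read_csv
from statsmodels.tsa.stattools import adfuller

series = read_csv('daily-total-female-births.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
result = adfuller(series.values)
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
print('Critical Values:')
for key, value in result[4].items():
    print('\t%s: %.3f' % (key, value))
```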

Running the example prints the test statistic value of -4. The more negative this statistic, the more likely we are to reject the null hypothesis (we have a stationary dataset).

As part of the output, we get a look-up table to help interpret the ADF statistic. We can see that our statistic value of -4 is less than the value of -3.449 at 1%.

This suggests that we can reject the null hypothesis with a significance level of less than 1% (i.e. a low probability that the result is a statistical fluke).

Rejecting the null hypothesis means that the process has no unit root, and in turn that the time series is stationary or does not have time-dependent structure.

We can perform the same test on the Airline Passenger dataset.
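A sketch of the same test on the Airline Passengers data:

```python
# Augmented Dickey-Fuller test on the Airline Passengers dataset
from pandas import read_csv
from statsmodels.tsa.stattools import adfuller

series = read_csv('airline-passengers.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
result = adfuller(series.values)
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
print('Critical Values:')
for key, value in result[4].items():
    print('\t%s: %.3f' % (key, value))
```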

Running the example gives a different picture than the above. The test statistic is positive, meaning we are much less likely to reject the null hypothesis (it looks non-stationary).

Comparing the test statistic to the critical values, it looks like we would have to fail to reject the null hypothesis that the time series is non-stationary and does have time-dependent structure.

Let’s log transform the dataset again to make the distribution of values more linear and better meet the expectations of this statistical test.
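A sketch of the test on the log-transformed values:

```python
# Augmented Dickey-Fuller test on the log-transformed Airline Passengers dataset
from pandas import read_csv
from statsmodels.tsa.stattools import adfuller
from numpy import log

series = read_csv('airline-passengers.csv', header=0, index_col=0,
                  parse_dates=True).squeeze('columns')
result = adfuller(log(series.values))
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
print('Critical Values:')
for key, value in result[4].items():
    print('\t%s: %.3f' % (key, value))
```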

Running the example shows a negative value for the test statistic.

We can see that the value is larger than the critical values, again, meaning that we can fail to reject the null hypothesis and in turn that the time series is non-stationary.

Summary

In this tutorial, you discovered how to check if your time series is stationary with Python.

Specifically, you learned:

  • The importance of time series data being stationary for use with statistical modeling methods and even some modern machine learning methods.
  • How to use line plots and basic summary statistics to check if a time series is stationary.
  • How to calculate and interpret statistical significance tests to check if a time series is stationary.

Do you have any questions about stationary and non-stationary time series, or about this post?
Ask your questions in the comments below and I will do my best to answer.

120 Responses to How to Check if Time Series Data is Stationary with Python

  1. Eduardo Gonzatti December 30, 2016 at 6:08 am #

    Hi there, nice post!

    Just a quick question, when testing the residuals of an OLS regression between two price SERIES for stationarity would you consider then, the two price series, to be cointegrated If the H0 was rejected by the ADF test that you ran on the residuals ?
    Or would you first run the ADF test on each of the price series in order to see if they are I(1) themselves?

    Thanks!

    • Jason Brownlee December 31, 2016 at 7:00 am #

      Hi Eduardo,

      Both. I would check both the input data and the residuals.

      • adedayo abraham June 1, 2017 at 8:12 pm #

        Can you help with material for this project topic: comparison of different methods of making time series data stationary? You can reach me at this email: adedayo.temmy@yahoo.com. Thanks

  2. Eduardo Gonzatti January 6, 2017 at 10:46 pm #

    Hi Jason,
    Thanks for the reply!

    I asked this because of a “common sense” (maybe not) assumption that price series would not be, per se, stationary, by definition, so, sometimes I ask myself if isn’t this kind of testing a little too much.

    Best Regards!

    • Jason Brownlee January 7, 2017 at 8:39 am #

      I’m high on ML methods for time series over linear methods like ARIMA, but one really important consideration is stationarity.

      Trend removal and exploring seasonality specifically is a big deal otherwise ML methods blow-up for the same reasons as linear methods.

      I’d like to do a whole series of posts on stationarity.

      • Naira Grigoryan October 16, 2018 at 4:16 am #

        Hi. What kind of methods work well for checking whether a time series is:

        1. trendy
        2. stationary
        3. seasonal

        I would like to forecast the time series via an RNN, but to get more accurate results I need to check all these 3 characteristics at the beginning.

  3. Cipher January 8, 2017 at 11:50 pm #

    beside using log, do you consider to use panda.diff()?

    • Jason Brownlee January 9, 2017 at 7:51 am #

      Thanks Cipher, it beats doing the difference manually.

      • Asieh August 28, 2019 at 4:09 am #

        Hi Jason,

        I am not sure I understand why in the case of the airline dataset you also checked the log. Because the first test shows that the data is not stationary. Why would you do the test on the logs as well? In other words, let’s say the adfuller test on the log showed that the log of this dataset is stationary; It still doesn’t change the fact that the dataset itself is non-stationary.

        • Jason Brownlee August 28, 2019 at 6:42 am #

          A power transform like a log can remove the non-stationary variance – as we see in this dataset.

  4. Seine Yumnam January 19, 2017 at 2:26 am #

    great article! easy to understand. simplicity is powerful. and all those good stuff.

  5. jack kinkade February 28, 2017 at 5:55 am #

    “We interpret this result using the p-value from the test. A p-value below a threshold (such as 5% or 1%) suggests we accept the null hypothesis (non-stationary), otherwise it suggests we reject the null hypothesis (stationary)”

    Shouldn’t this be the opposite of what you have stated. Please see http://stats.stackexchange.com/questions/55805/how-do-you-interpret-results-from-unit-root-tests

    • Jason Brownlee February 28, 2017 at 8:30 am #

      Hi Jack,

      Yes, that is a typo, fixing now and I made it clearer with some more bullet points. All of the analysis in the post is correct.

      In summary, the null hypothesis (H0) is that there is a unit root (the autoregression is non-stationary). Rejecting it means there is no unit root and the data is stationary.

      A p-value more than the critical value means we cannot reject H0, we accept that there is a unit root and that the data is non-stationary.

      • Amine Ait el harraj April 9, 2018 at 10:29 pm #

        Hi Jason, thank you for the great article, but i think but you fixed the typo in the opposite way, if i’m not mistaken it’s more like ” We interpret this result using the p-value from the test. A p-value below a threshold (such as 5% or 1%) suggests we accept the null hypothesis (stationary), otherwise it suggests we reject the null hypothesis (non-stationary) “

        • Amine Ait el harraj April 9, 2018 at 10:32 pm #

          My bad it’s correct, I confused it with the H0 of stationar tests

        • Miro June 26, 2019 at 12:49 am #

          Autoregression need not to be non-stationary. Take AR(1) e.g.

  6. Magnus March 31, 2017 at 1:57 am #

    I performed the Augmented Dickey-Fuller test on my own data set. My result are as follows:
    ADF Statistic: -34.360229
    p-value: 0.000000
    Critical Values:
    1%: -3.430
    5%: -2.862
    10%: -2.567
    So my time series are stationary. In my example, I have 525600 values giving me a maxlag of 102. These are minute data for one month. But I don’t understand how so few lags can detect e.g a daily variation?
    Now when I calculate the distribution of occurrence frequency, there is clearly a time dependence on UT on hourly binned data. I have a higher number of samples, for a certain range of values, at around 15 UT compared to other times. So I have a UT dependence in my data, but it is still stationary. So I guess one should be careful when using this test. In my case there is a UT dependency on the number of values, at a certain level, rather than the value itself. How to deal with this? One idea, perhaps, is to add sine and cosine of time to the inputs. Any comments on this?

    • Jason Brownlee March 31, 2017 at 5:56 am #

      Interesting. Perhaps it would be worth performing a stationary test at different time scales?

      • Magnus April 1, 2017 at 1:50 am #

        Yes, I did. Same result. I also performed the test on the sunspot number, from one of your earlier posts. I then got this result:
        ADF Statistic: -9.567668
        p-value: 0.000000
        Critical Values:
        1%: -3.433
        5%: -2.863
        10%: -2.567
        Now, I am really confused. I also did a test on artificial data from a sine function with normally distributed data added to it. Now the test gave a p-value of 0.07, but from the plot it was very obvious the data is non-stationary. So I really suggest to use the group by process in Pandas and plot the data.
        Another approach, instead of removing seasonality is the following. If only the target values, used for training a prediction model, are non-stationary, then it might be easier to add sine/cosine of time to the inputs. Of course, the input space increases but there is no need to create time-lagged data for these inputs.
        I appreciate any comments and suggestions.

        • Jason Brownlee April 19, 2017 at 7:58 am #

          I’m dubious about your results.

          I have found the test to be reliable.

          Perhaps the version of the statsmodels library is out of date, or perhaps the data you have loaded does not match your expectation?

        • abdulwahid gul October 29, 2018 at 2:17 am #

          The problem is the way you are printing out the results. Can you just print the whole variable like this
          print(result)

          or something like this
          print('ADF Statistic: {}'.format(result[0])).

          I was having the same problem, but changing the printing format, fixed it for me.
          ADF Statistic: -12.851066
          p-value: 0.000000
          Critical Values:
          1%: -3.431
          5%: -2.862
          10%: -2.567

  7. Joy April 1, 2017 at 4:02 am #

    Results of Dickey-Fuller Test:
    Test Statistic -1.152597e+01
    p-value 3.935525e-21
    #Lags Used 2.300000e+01
    Number of Observations Used 1.417000e+03
    Critical Value (5%) -2.863582e+00
    Critical Value (1%) -3.434973e+00
    Critical Value (10%) -2.567857e+00
    dtype: float64

    Hi , I want to forecast temperature of my time series dataset. Dickey -Fuller test in python gives me above results, which shows Test statistics is larger than any of the critical value meaning time series is not stationary after taking transformations. So ,can i forecast without time series being non-stationary?

    • Jason Brownlee April 1, 2017 at 5:59 am #

      You can, but consider another round of differencing.

    • Clarke January 30, 2018 at 7:33 pm #

      What code did you use to get this, Joy? I’m trying to get results like that but I only get the graph

  8. Ritesh Kumar July 24, 2017 at 8:13 pm #

    Does the statsmodel python library require us to convert the series into stationary series before feeding the series to any of the ARMA or ARIMA models ?

    • Jason Brownlee July 25, 2017 at 9:41 am #

      Ideally, I would. The model can difference to address trends, but I would recommend explicitly pre-processing the data before hand. This will help you better understand your problem/data.

  9. Francois3C November 5, 2017 at 4:25 pm #

    Great article, you make these topics understandable.

    I started testing some series for stationarity and got strange behaviors I cannot understand.

    In Python (3.6), ADF give so different results for linear sequences of 100 and 101 items:

    from statsmodels.tsa.stattools import adfuller
    adfuller(range(100))
    adfuller(range(101))

    give ADF statistics of +2.59 and -4.23.

    I'd expect both results to be very close to each other. Neither series is stationary, as they express the same trend. But the test is positive in one case and negative in the other.

    What is wrong?

    • Jason Brownlee November 6, 2017 at 4:50 am #

      I would not worry, focus on the test (e.g. the value relative to critical value), not the value itself.

      • Francois3C November 6, 2017 at 5:10 pm #

        Thanks for the quick reply.

        But this is precisely my problem: with a slight change in the number of observations of a series of constant slope/trend of +1, the test swings entirely from non-stationary to stationary for a reason I fail to understand.

        from statsmodels.tsa.stattools import adfuller
        X=range(100)
        result = adfuller(X)
        print('ADF Statistic: %f' % result[0])
        print('p-value: %f' % result[1])
        for key, value in result[4].items():
            print('\t%s: %.3f' % (key, value))

        ADF Statistic: 2.589283
        p-value: 0.999073
        1%: -3.505
        5%: -2.894
        10%: -2.584

        X=range(101)
        result = adfuller(X)
        print('ADF Statistic: %f' % result[0])
        print('p-value: %f' % result[1])
        for key, value in result[4].items():
            print('\t%s: %.3f' % (key, value))

        ADF Statistic: -4.232578
        p-value: 0.000580
        1%: -3.504
        5%: -2.894
        10%: -2.584

        • Jason Brownlee November 7, 2017 at 9:46 am #

          Ah I see. It might be a case of requiring a critically minimum amount of data for the statistical test to be viable.

  10. Choubix November 30, 2017 at 10:15 pm #

    Thanks for sharing the knowledge!
    quick questions if you don’t mind: I would like to test a few trading strategies on ETFs. It looks obvious that these time series are non stationary.
    how does one go about converting them to stationary?
    I would like to use Technical Indicators (which input are prices) as features in my model. What shall I do to the features?
    my objective is not to predict price but to classify into “buy/sell” (or hold).
    any algo better suited for financial time series?

    Thank you!

    • Jason Brownlee December 1, 2017 at 7:34 am #

      You can use differencing and seasonal adjustment. I have posts on both methods, use the search feature.

  11. foyle December 7, 2017 at 8:24 am #

    Actually, when ADF Statistic < critical value then it is stationary. Comparing pvalue with critical value is not right and confusing. Including the adfuller api explanation in http://www.statsmodels.org/dev/generated/statsmodels.tsa.stattools.adfuller.html.

    • Jason Brownlee December 7, 2017 at 3:03 pm #

      I don’t think we are comparing the p-value to anything in this post, I believe are reviewing the test statistic.

      • foyle December 7, 2017 at 11:31 pm #

        I see. It is ADF Statistic < critical value or p-value < threshold, then the series is stationary. Threshold is 0.05 etc.

        • Jason Brownlee December 8, 2017 at 5:41 am #

          Perhaps I’m dense, but where exactly? Can you quote the text?

          I note that I describe how to interpret p-values separately from interpreting test static.

        • foyle December 8, 2017 at 8:28 am #

          From your blog.
          p-value <= 0.05: Reject the null hypothesis (H0), the data does not have a unit root and is stationary.

          And there is another explanation based on critical value.

          https://www.analyticsvidhya.com/blog/2016/02/time-series-forecasting-codes-python/

          So there are two ways of considering the adf result, using p-value or using critical value.

          • Jason Brownlee December 8, 2017 at 2:29 pm #

            In that section, I was introducing the meaning of the p-value, not how to interpret the test. Sorry for the confusion.

  12. Dima Burlaj December 22, 2017 at 9:05 pm #

    I performed the Dickey–Fuller test and get 1 as p-value. Then I performed Box_Cox transform which allowed to decrease p-value to 0.96. Then I performed seasonal differentiation and p-value decreased to 0.0000. After this, I build LSTM neural network and train it. Now, I want to compare results in the original scale, in the transformed. I found the scipy.special.inv_boxcox() function, which does the inverse transformation. But for me, it is not working. What can be wrong?

    • Jason Brownlee December 23, 2017 at 5:17 am #

      Perhaps you can experiment on some test data separate from your model. Transform and then inverse transform.

      Remember all operations need to be reversed, including the seasonal adjustment.

  13. Prashanth February 2, 2018 at 10:00 pm #

    Excellent!

  14. TAMER A. FARRAG April 23, 2018 at 3:14 am #

    I’m very interested in your articles Jason, I have a question.

    If I train my model using the residual data ( removing seasonal and trends), what about the predicted values how we get the correct values ( how to add seasonally and trends again) .

    I hope that I can present my question correctly. sorry for my poor English.

    additional remark,

    Why you don’t use the package “statsmodels” to decompose the time series. I mean the issue discussed here:

    https://stackoverflow.com/questions/20672236/time-series-decomposition-function-in-python

    • Jason Brownlee April 23, 2018 at 6:21 am #

      If you remove the trend and seasonality prior to modeling, you can add them back to the prediction.

      If you used differencing, invert the differencing. If you used a model, invert the application of the model.

      I have many examples on the blog of this.

      • Joe Herro March 24, 2019 at 9:15 am #

        HI Jason,

        I am looking for the blog posts noted above regarding how to re-add trends and seasonality to modeling after they have been removed. Could you please point me to the blog posts which cover this? I cannot seem to locate them.

        Thanks much,

        Joe

  15. Fauzan Taufik May 23, 2018 at 2:58 am #

    Hi Jason, as far as I know adfuller is a statistical test for a random walk, and H1 means not a random walk. Above you reveal that H1 also means a stationary time series. Is every non-random-walk series stationary?

  16. Denis June 1, 2018 at 2:38 am #

    Traceback (most recent call last):
    File “shampoo.py”, line 20, in
    result = adfuller(X)
    File “/home/denis/.local/lib/python3.5/site-packages/statsmodels/tsa/stattools.py”, line 221, in adfuller
    xdall = lagmat(xdiff[:, None], maxlag, trim=’both’, original=’in’)
    File “/home/denis/.local/lib/python3.5/site-packages/statsmodels/tsa/tsatools.py”, line 397, in lagmat
    nobs, nvar = xa.shape
    ValueError: too many values to unpack (expected 2)

  17. nabila August 31, 2018 at 11:46 pm #

    Hi Jason,

    Nice tutorial! I’m just starting out with time series data so I’m wondering, if my data doesn’t pass stationarity tests, then I cannot use time series analysis on it, is that right? Can RNNs model time series that are non-stationary?

    • Jason Brownlee September 1, 2018 at 6:21 am #

      You can, but results might not be good.

      RNNs do seem to perform better with a stationary series in my experience.

  18. Aishwarya Singh September 12, 2018 at 7:42 pm #

    Hi,

    I guess trend-stationary series is the one that has a trend but no unit root. Correct me if I am wrong .

    Reference : http://in.mathworks.com/help/econ/trend-stationary-vs-difference-stationary.html

  19. Krishna September 17, 2018 at 10:31 pm #

    Hi Jason,
    I have a small question, when you are working with ADF when the result suggests stationary is it difference stationary, stationary in increments or normal weak stationary in time series…….

    • Jason Brownlee September 18, 2018 at 6:15 am #

      The statistical test is reporting a likelihood of being stationary rather than a fact.

  20. Dmitry September 25, 2018 at 4:38 am #

    I am working with a time series that has multiple random measurements for every moment in time. In other words, my X series includes sets of distinct measured values for every timestamp.

    Will the described procedure and code work for me as is? Should I be sorting such X or not?

    • Jason Brownlee September 25, 2018 at 6:30 am #

      You would need to work with each time series (variable) separately.

  21. amelie BM September 25, 2018 at 11:50 pm #

    Thank you for your explanations!It is quite interesting
    After the evaluation of the model, how can we visualize the predicted data to complete the available database

    thanks

    • Jason Brownlee September 26, 2018 at 6:16 am #

      Thanks. You can use the matplotlib plot() function to plot yhat vs y

  22. ML Uros October 12, 2018 at 5:28 am #

    Hi Jason. In the example above you used ADF to test whether the Gaussian normally distributed sample is stationary. 1) Any hints on what to do if we try to model a process that shows a non-Gaussian distribution? 2) Can we still make inferences about stationarity based on means/variances of two subsamples from a non-Gaussian process? 3) Could you please point me to a reference with a nice description of how to test for stationarity in non-normal samples and how to model such time series? Thanks!

    • Jason Brownlee October 12, 2018 at 6:45 am #

      Good question, using a data visualization is always a great fall-back.

  23. Lahiru October 17, 2018 at 8:13 pm #

    I need codes for Bai Perron test,KPSS test and Phillips Perron test

  24. Volka November 20, 2018 at 1:03 pm #

    Hi, this is a very interesting tutorial. Thanks a lot.

    I am having about 1000+ different time-series dataset in the format of (year,number) and need to forecast the values for each and every dataset in next 5 years.As i have lot of datasets, I would like to know if there is a way to automate the aforementioned stationary check step, so that I can directly perform the ARIMA process? or is there any other algorithm that you would recommend?

    • Jason Brownlee November 20, 2018 at 2:06 pm #

      Perhaps difference all datasets before modeling?

      • Volka November 21, 2018 at 4:23 pm #

        Thanks a lot for the suggestion. Did you mean performing ‘log’ as ‘difference’? And after that using the p-value of Augmented Dickey-Fuller test to decide the stationary?

        Just curious to know, does performing ‘log’ guarantee that you have a stationary time-series dataset?

        • Jason Brownlee November 22, 2018 at 6:21 am #

          Log and other power transforms can calm an increasing/changing variance and make the data distribution more Gaussian.

  25. mk December 11, 2018 at 2:31 pm #

    Log transform does not work,and how can we do the next for this situation?
    Thanks.

  26. Lukasz January 10, 2019 at 8:48 pm #

    Thanks Jason, great article.
    However could you please advice how would you approach stationarity test with multivariate, multiinput and multioutput time series?
    I’m working on multistep prediction for a few thousands of different elements. For each element, i’ll probably have to use a few different kind of measurements (like temperature, pressure, traffic etc.). I have to predict three timeseries for each site. For some network elements measurements goes up, for others they go down… How should i approach stationarity test properely in that case? Should i check and transform for each element separately? Or perform the stationarity test for each aggregated by time measurement (over the whole set of elements)? What do you suggest?
    Regards.

    • Jason Brownlee January 11, 2019 at 7:44 am #

      I would start by performing a test on each separate univariate series.

      Then test if making one or all series stationary impacts model performance, e.g a linear model.

      • Lukasz January 23, 2019 at 8:28 pm #

        Thanks Jason!

  27. Hector Alvaro Rojas February 3, 2019 at 11:22 pm #

    I have a question related to the Augmented Dickey-Fuller (ADF) test that you applied with the “International airline passengers” dataset. When ADF test is applied using R we get different decision results than when using python. Here are the numbers.

    Using R:

    library(tseries)
    tsData <- AirPassengers # ts data
    adf.test(as.timeseries(tsData)) # p-value < 0.05 indicates the TS is stationary

    Out:

    Augmented Dickey-Fuller Test

    data: tsData
    Dickey-Fuller = -7.3186, Lag order = 5, p-value = 0.01
    alternative hypothesis: stationary

    Warning message:
    In adf.test(tsData) : p-value smaller than printed p-value

    Conclusion: Reject Ho. So we accept it is stationary.

    Using Python:

    from pandas import Series
    from statsmodels.tsa.stattools import adfuller
    series = Series.from_csv('daily-total-female-births.csv', header=0)
    X = series.values
    result = adfuller(X)
    print('ADF Statistic: %f' % result[0])
    print('p-value: %f' % result[1])
    print('Critical Values:')
    for key, value in result[4].items():
         print('\t%s: %.3f' % (key, value))

    Out:
    ADF Statistic: 0.815369
    p-value: 0.991880
    Critical Values:
    5%: -2.884
    1%: -3.482
    10%: -2.579

    Conclusion: Do not reject H0. So we do not accept that it is stationary. Then it looks non-stationary.
    The test statistic is positive, meaning we are much less likely to reject the null hypothesis (it looks non-stationary).

    Both datasets (R: AirPassengers and Python: daily-total-female-births) are the same. So, I can not get the reasons why ADF test showed different results.

    Would you please give me a hand in finding an explanation of this rare situation?

    • Jason Brownlee February 4, 2019 at 5:48 am #

      The airline dataset is not stationary. If a library reports that it is, perhaps there is a bug in your code or the library?

      • Hector Alvaro Rojas February 7, 2019 at 11:41 pm #

        The right R code is:

        library(tseries)
        tsData <- AirPassengers # ts data
        adf.test(tsData)

        AirPassengers

        Out:
        Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
        1949 112 118 132 129 121 135 148 148 136 119 104 118
        1950 115 126 141 135 125 149 170 170 158 133 114 140
        1951 145 150 178 163 172 178 199 199 184 162 146 166
        1952 171 180 193 181 183 218 230 242 209 191 172 194
        1953 196 196 236 235 229 243 264 272 237 211 180 201
        1954 204 188 235 227 234 264 302 293 259 229 203 229
        1955 242 233 267 269 270 315 364 347 312 274 237 278
        1956 284 277 317 313 318 374 413 405 355 306 271 306
        1957 315 301 356 348 355 422 465 467 404 347 305 336
        1958 340 318 362 348 363 435 491 505 404 359 310 337
        1959 360 342 406 396 420 472 548 559 463 407 362 405
        1960 417 391 419 461 472 535 622 606 508 461 390 432

        class(AirPassengers)

        Out:
        [1] "ts"

        frequency(AirPassengers)

        Out:
        [1] 12

        I ran the test again, but now using your dataset as a source (converting it to "ts" class in R), and I got the same result.

        Anyway, there is something definitely wrong in this way to present the stationarity study of this dataset by using the Augmented Dickey-Fuller (ADF) test in R.

        There are many articles on the web that, using just the ADF test with R, conclude that the AirPassengers dataset is stationary when it clearly is not.

        I agree with you 100%. I mean, considering R if we complement our study using other test and Autocorrelation analysis we can get the same conclusion that the AirPassengers dataset is not a stationary one.

        Thanks for your answer and time, of course.

        Regards.

        HA

        • Jason Brownlee February 8, 2019 at 7:50 am #

          Interesting. Perhaps there is a bug in the routine, or a change in the way the test must be interpreted in R? I think the latter is more likely.

  28. Alex Bietrix April 29, 2019 at 11:58 pm #

    Hi,
    I have a question about time series modeling. I implemented a multilinear regression in Python, but I found on Eviews that a threshold regression would be better.

    Do you know how to implement this new model with Python ? I didn’t find the solution for the moment.

    Thank you very much for your answer.

    Alex

    • Jason Brownlee April 30, 2019 at 6:59 am #

      I’m not familiar with that algorithm, perhaps ask the author?

  29. Min May 29, 2019 at 12:40 am #

    Hi Jason,

    Great post! I am new to time series analysis. I have questions for you, is that a high autocorrelation plot (even lags = 100) means the time series is non-stationary? Since I used all methods of yours, and it seems the data should be stationary, but after plot the autocorrelation, It still gives me a high result.

    • Jason Brownlee May 29, 2019 at 8:45 am #

      Not necessarily, if you calculate the ACF/PACF after removing trends and seasonality, it could be stationary and still have high correlation (I think – off hand).

  30. zik June 2, 2019 at 5:34 pm #

    Thanks so much prof..
    ive been trying to reach you sir ..
    my test was -2 for adf and didnt pass any of the significance test,
    do you think phillip perron test is a good idea ?

  31. Shubham Kumar June 6, 2019 at 8:58 am #

    Hi Jason,

    Great content!
    If after applying ADF test on my time series, I get NULL hypothesis true, then that means that the time series follows a random walk process.

    Which means that trying to make the series stationary wouldn’t work right? Because random walk means no learnable pattern.

    Differencing the series will give me a trend stationary time series but non stationary variance.

    Am I correct? Or is there some transformation that could work?

    • Jason Brownlee June 6, 2019 at 2:16 pm #

      A random walk is non-stationary, but not all non-stationary time series are a random walk.

      Try differencing to make it stationary.

  32. Kimiya June 7, 2019 at 5:20 pm #

    Hi!
    thanks for the great content!
    I have a question I’ll appreciate if you could help me!
    is it possible that we see an OBVIOUS trend in the plot, but still get the result that the TS is stationary from the Dickey-Fuller test?

  33. vamsee krishna jagarlamudi June 13, 2019 at 12:38 am #

    I have a small question, how can I cite the material you have presented here

  34. Anindya Sankar Chattopadhyay June 23, 2019 at 10:30 pm #

    Hi Jason:

    Do you have any article on VAR and VARMAX using statsmodels module of Python?

    Any idea on how to measure the error of forecasting for those models in terms of MAE?

    Thanks

  35. Amandeep June 23, 2019 at 11:27 pm #

    Hi Jason,

    Do you have any article on checking stationarity for multivariate time series forecasting methods?

    It would be great you can share VAR module using Python as well.

    Thanks,
    Aman

    • Jason Brownlee June 24, 2019 at 6:32 am #

      I do not, only univariate data.

      Thanks for the suggestion.

  36. André Araujo July 1, 2019 at 11:50 pm #

    Hi Jason, Hw r u?
    How I can handle a log plot if a have some zero in my time-series? My target is predict rain. Is not stationary. I got a error follow this approach: divide by zero encountered in log.
    thanks.

    • Jason Brownlee July 2, 2019 at 7:33 am #

      You can create a log plot by adding a value to all samples to ensure all values are positive.

  37. Leen July 21, 2019 at 3:08 am #

    Hello Jason,

    ADF Statistic: -4.808291
    p-value: 0.000052
    Critical Values:
    5%: -2.870
    1%: -3.449
    10%: -2.571

    Here since -4 for ADF statistic is less than all critical values means we reject the null hypothesis, but p-value 0.000052 is greater than 5% (-2.870) so we fail to reject null hypothesis ? How should we know ?

    • Leen July 21, 2019 at 3:09 am #

      I mean we are getting conflicting results ?

    • Jason Brownlee July 21, 2019 at 6:33 am #

      p-value <= 0.05 means we reject. -4.808291 <= -3.449 means we reject. Reject in both cases.

  38. Leen July 21, 2019 at 3:25 am #

    Sorry Jason, I forgot that p-value must be less than 0.05 rather than the critical value at 5%. It was after I asked I found the mistake XD

  39. Tejasvi August 4, 2019 at 4:49 pm #

    Hi Jason,

    A clarification on this statement please – “The intuition behind a unit root test is that it determines how strongly a time series is defined by a {trend}.”

    I came across an example time series which is “Trend Stationary” but clearly is seasonal. ADF is reporting a VERY low p-value for it.

    So, I am wondering,

    1. If stationary means absence of both trend and seasonality, is there a different test to check for complete stationarity (tread and seasonal)?
    2. Or can the models work well as long as time series is trend stationary?

    Please advise. Many thanks.

    • Jason Brownlee August 5, 2019 at 6:47 am #

      Yes, it won’t be stationary if there is seasonality. Often it is simpler to just talk about trends, but you’re right.

      A stationary time series has neither a trend nor seasonality.

  40. Loulou August 23, 2019 at 12:05 am #

    Hi Jason,

    as in statsmodel’s docs:

    statsmodels.tsa.stattools.adfuller(x, maxlag=None, regression=’c’, autolag=’AIC’, store=False, regresults=False)

    if we use the default values, maxlag will be 12*(nobs/100)^{1/4} by default, and the number of lags is chosen to minimize the corresponding information criterion (in this case AIC)

    My question is:
    if the returned lag, result[2], is 14, do you have an idea if we must take all lags up to 14 or just the 14th lag ?

    Many thanks in advance

    • Jason Brownlee August 23, 2019 at 6:31 am #

      All lags up to 14th by default.

      To be more selective about lags in that interval, a custom model may be required.

  41. Anna September 6, 2019 at 6:43 pm #

    Hi Jason,

    I’m looking at the VXO and some other stock market data.
    For the VXO seasonal decompose shows some trend and strong seasonality, but AD-Test suggests the data is stationary. I didn’t perform any kind of transformation here.
    For the S&P500 on the other hand side i had to use log returns to achieve stationarity. Is this possible? I’m struggling to understand the difference.
    In case this is perfectly fine: can i use both series as input for the same model like that (let’s say OLS to begin with), or do they have to get the same treatment i.e. the same transformation to lead to useful results?

    Thank you very much in advance for your reply.

    • Jason Brownlee September 7, 2019 at 5:23 am #

      Different time series may have to be made stationary in different ways. Once stationary, they can be modelled.

  42. Berns Buenaobra September 7, 2019 at 6:27 am #

    Doc Jason strikes again! This post gave me tremendously good insight! Almost done with my use case but a good colleague at work throws at me the most important test for my (magnitude, time) paired data – how do you test for the presence of seasonality if stationary? Augmented Dickey-Fuller test is. Thanks again.

    • Jason Brownlee September 8, 2019 at 5:08 am #

      Happy that it helped!

      If you have seasonality, the data is not stationary.

Leave a Reply