# How to Check if Time Series Data is Stationary with Python

Last Updated on August 15, 2020

Time series is different from more traditional classification and regression predictive modeling problems.

The temporal structure adds an order to the observations. This imposed order means that important assumptions about the consistency of those observations need to be handled specifically.

For example, when modeling, there are assumptions that the summary statistics of observations are consistent. In time series terminology, we refer to this expectation as the time series being stationary.

These assumptions can be easily violated in time series by the addition of a trend, seasonality, and other time-dependent structures.

In this tutorial, you will discover how to check if your time series is stationary with Python.

After completing this tutorial, you will know:

• How to identify obvious stationary and non-stationary time series using line plots.
• How to spot check summary statistics like mean and variance for a change over time.
• How to use statistical tests with statistical significance to check if a time series is stationary.

Kick-start your project with my new book Time Series Forecasting With Python, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

• Updated Feb/2017: Fixed typo in interpretation of p-value, added bullet points to make it clearer.
• Updated May/2018: Improved language around reject vs fail to reject of statistical tests.
• Updated Apr/2019: Updated the link to dataset.
• Updated Nov/2019: Updated mean/variance example for Python 3, also updated bug in data loading (thanks John).

Photo by Susanne Nilsson, some rights reserved.

## Stationary Time Series

The observations in a stationary time series are not dependent on time.

Time series are stationary if they do not have trend or seasonal effects. Summary statistics calculated on the time series are consistent over time, like the mean or the variance of the observations.

When a time series is stationary, it can be easier to model. Statistical modeling methods assume or require the time series to be stationary to be effective.

Below is an example of loading the Daily Female Births dataset that is stationary.
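A minimal sketch of that example, assuming the CSV can be loaded from the book's companion datasets repository (the URL is an assumption; point `read_csv` at a local copy if you have one):

```python
from pandas import read_csv
from matplotlib import pyplot

# load the Daily Female Births dataset (daily observations for 1959)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-total-female-births.csv'
series = read_csv(url, header=0, index_col=0, parse_dates=True).squeeze('columns')

# a line plot makes an obvious trend or seasonal cycle easy to spot by eye
series.plot()
pyplot.show()
```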

Running the example creates the following plot.

Daily Female Births Dataset Plot

## Non-Stationary Time Series

Observations from a non-stationary time series show seasonal effects, trends, and other structures that depend on the time index.

Summary statistics like the mean and variance do change over time, providing a drift in the concepts a model may try to capture.

Classical time series analysis and forecasting methods are concerned with making non-stationary time series data stationary by identifying and removing trends and removing seasonal effects.

Below is an example of the Airline Passengers dataset that is non-stationary, showing both trend and seasonal components.
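A sketch of loading and plotting this dataset, again assuming the copy hosted in the companion datasets repository:

```python
from pandas import read_csv
from matplotlib import pyplot

# load the Airline Passengers dataset (monthly totals, 1949-1960)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/airline-passengers.csv'
series = read_csv(url, header=0, index_col=0, parse_dates=True).squeeze('columns')

# both the upward trend and the yearly seasonal cycle are visible in the plot
series.plot()
pyplot.show()
```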

Running the example creates the following plot.

Non-Stationary Airline Passengers Dataset

## Types of Stationary Time Series

The notion of stationarity comes from the theoretical study of time series and it is a useful abstraction when forecasting.

There are some finer-grained notions of stationarity that you may come across if you dive deeper into this topic. They are:

• Stationary Process: A process that generates a stationary series of observations.
• Stationary Model: A model that describes a stationary series of observations.
• Trend Stationary: A time series that does not exhibit a trend.
• Seasonal Stationary: A time series that does not exhibit seasonality.
• Strictly Stationary: A mathematical definition of a stationary process, specifically that the joint distribution of observations is invariant to time shift.

## Stationary Time Series and Forecasting

Should you make your time series stationary?

Generally, yes.

If you have clear trend and seasonality in your time series, then model these components, remove them from observations, then train models on the residuals.

If we fit a stationary model to data, we assume our data are a realization of a stationary process. So our first step in an analysis should be to check whether there is any evidence of a trend or seasonal effects and, if there is, remove them.

— Page 122, Introductory Time Series with R.

Statistical time series methods and even modern machine learning methods will benefit from the clearer signal in the data.

But…

We turn to machine learning methods when the classical methods fail, or when we want more or better results. We cannot know in advance how to best model unknown nonlinear relationships in time series data, and some methods may perform better when working with non-stationary observations or some mixture of stationary and non-stationary views of the problem.

The suggestion here is to treat properties of a time series being stationary or not as another source of information that can be used in feature engineering and feature selection on your time series problem when using machine learning methods.

## Checks for Stationarity

There are many methods to check whether a time series (direct observations, residuals, or otherwise) is stationary or non-stationary.

1. Look at Plots: You can review a time series plot of your data and visually check if there are any obvious trends or seasonality.
2. Summary Statistics: You can review the summary statistics for your data for seasons or random partitions and check for obvious or significant differences.
3. Statistical Tests: You can use statistical tests to check if the expectations of stationarity are met or have been violated.

Above, we have already introduced the Daily Female Births and Airline Passengers datasets as stationary and non-stationary respectively with plots showing an obvious lack and presence of trend and seasonality components.

Next, we will look at a quick and dirty way to calculate and review summary statistics on our time series dataset for checking to see if it is stationary.

## Summary Statistics

A quick and dirty check to see if your time series is non-stationary is to review summary statistics.

You can split your time series into two (or more) partitions and compare the mean and variance of each group. If they differ and the difference is statistically significant, the time series is likely non-stationary.

Next, let’s try this approach on the Daily Births dataset.

### Daily Births Dataset

Because we are looking at the mean and variance, we are assuming that the data conforms to a Gaussian (also called the bell curve or normal) distribution.

We can also quickly check this by eyeballing a histogram of our observations.
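Sketching that check (the dataset URL is the same assumption as before):

```python
from pandas import read_csv
from matplotlib import pyplot

# load the Daily Female Births dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-total-female-births.csv'
series = read_csv(url, header=0, index_col=0, parse_dates=True).squeeze('columns')

# a histogram is a quick eyeball test for a roughly Gaussian shape
series.hist()
pyplot.show()
```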

Running the example plots a histogram of values from the time series. We clearly see the bell curve-like shape of the Gaussian distribution, perhaps with a longer right tail.

Histogram of Daily Female Births

Next, we can split the time series into two contiguous sequences. We can then calculate the mean and variance of each group of numbers and compare the values.
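A sketch of that comparison, splitting the series into two halves (same assumed dataset URL):

```python
from pandas import read_csv

url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-total-female-births.csv'
series = read_csv(url, header=0, index_col=0, parse_dates=True).squeeze('columns')

X = series.values
# split the series into two contiguous halves
split = len(X) // 2
X1, X2 = X[:split], X[split:]

# compare the summary statistics of each half
print('mean1=%f, mean2=%f' % (X1.mean(), X2.mean()))
print('variance1=%f, variance2=%f' % (X1.var(), X2.var()))
```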

Running this example shows that the mean and variance values are different, but in the same ball-park.

Next, let’s try the same trick on the Airline Passengers dataset.

### Airline Passengers Dataset

Cutting straight to the chase, we can split our dataset and calculate the mean and variance for each group.
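The same split-and-compare sketch for the Airline Passengers data (URL again assumed):

```python
from pandas import read_csv

url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/airline-passengers.csv'
series = read_csv(url, header=0, index_col=0, parse_dates=True).squeeze('columns')

X = series.values
# split the series into two contiguous halves and compare their statistics
split = len(X) // 2
X1, X2 = X[:split], X[split:]

print('mean1=%f, mean2=%f' % (X1.mean(), X2.mean()))
print('variance1=%f, variance2=%f' % (X1.var(), X2.var()))
```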

Running the example, we can see the mean and variance look very different.

We have a non-stationary time series.

Well, maybe.

Let’s take one step back and check if assuming a Gaussian distribution makes sense in this case by plotting the values of the time series as a histogram.
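A sketch of that histogram (same assumed URL):

```python
from pandas import read_csv
from matplotlib import pyplot

# load the Airline Passengers dataset
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/airline-passengers.csv'
series = read_csv(url, header=0, index_col=0, parse_dates=True).squeeze('columns')

# check whether the distribution of values looks Gaussian
series.hist()
pyplot.show()
```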

Running the example shows that indeed the distribution of values does not look like a Gaussian, therefore the mean and variance values are less meaningful.

This squashed distribution of the observations may be another indicator of a non-stationary time series.

Histogram of Airline Passengers

Reviewing the plot of the time series again, we can see that there is an obvious seasonality component, and it looks like the seasonality component is growing.

This may suggest an exponential growth from season to season. A log transform can be used to flatten out exponential change back to a linear relationship.

Below is the same histogram with a log transform of the time series.
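A sketch of the transform and the two plots that follow, a histogram and a line plot of the log values (dataset URL assumed as before):

```python
from pandas import read_csv
from matplotlib import pyplot
from numpy import log

url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/airline-passengers.csv'
series = read_csv(url, header=0, index_col=0, parse_dates=True).squeeze('columns')

# a log transform flattens exponential growth back towards a linear trend
X = log(series.values)

# histogram of the log-transformed values
pyplot.hist(X)
pyplot.show()

# line plot of the log-transformed series
pyplot.plot(X)
pyplot.show()
```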

Running the example, we can see the more familiar Gaussian-like or Uniform-like distribution of values.

Histogram Log of Airline Passengers

We also create a line plot of the log transformed data and can see the exponential growth seems diminished, but we still have a trend and seasonal elements.

Line Plot Log of Airline Passengers

We can now calculate the mean and standard deviation of the values of the log transformed dataset.
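Sketching that calculation on the two halves of the log-transformed series:

```python
from pandas import read_csv
from numpy import log

url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/airline-passengers.csv'
series = read_csv(url, header=0, index_col=0, parse_dates=True).squeeze('columns')

# log transform, then split into two contiguous halves
X = log(series.values)
split = len(X) // 2
X1, X2 = X[:split], X[split:]

# compare mean and standard deviation of each half
print('mean1=%f, mean2=%f' % (X1.mean(), X2.mean()))
print('std1=%f, std2=%f' % (X1.std(), X2.std()))
```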

Running the example shows mean and standard deviation values for each group that are again similar, but not identical.

Perhaps, from these numbers alone, we would say the time series is stationary, but we strongly believe this to not be the case from reviewing the line plot.

This is a quick and dirty method that may be easily fooled.

We can use a statistical test to check if the difference between two samples of Gaussian random variables is real or a statistical fluke. We could explore statistical significance tests, like the Student t-test, but things get tricky because of the serial correlation between values.

In the next section, we will use a statistical test designed to explicitly comment on whether a univariate time series is stationary.

## Augmented Dickey-Fuller Test

Statistical tests make strong assumptions about your data. They can only be used to inform the degree to which a null hypothesis can be rejected or fail to be rejected. The result must be interpreted for a given problem to be meaningful.

Nevertheless, they can provide a quick check and confirmatory evidence that your time series is stationary or non-stationary.

The Augmented Dickey-Fuller test is a type of statistical test called a unit root test.

The intuition behind a unit root test is that it determines how strongly a time series is defined by a trend.

There are a number of unit root tests and the Augmented Dickey-Fuller may be one of the more widely used. It uses an autoregressive model and optimizes an information criterion across multiple different lag values.

The null hypothesis of the test is that the time series can be represented by a unit root, that it is not stationary (has some time-dependent structure). The alternate hypothesis (rejecting the null hypothesis) is that the time series is stationary.

• Null Hypothesis (H0): If failed to be rejected, it suggests the time series has a unit root, meaning it is non-stationary. It has some time dependent structure.
• Alternate Hypothesis (H1): The null hypothesis is rejected; it suggests the time series does not have a unit root, meaning it is stationary. It does not have time-dependent structure.

We interpret this result using the p-value from the test. A p-value below a threshold (such as 5% or 1%) suggests we reject the null hypothesis (stationary), otherwise a p-value above the threshold suggests we fail to reject the null hypothesis (non-stationary).

• p-value > 0.05: Fail to reject the null hypothesis (H0), the data has a unit root and is non-stationary.
• p-value <= 0.05: Reject the null hypothesis (H0), the data does not have a unit root and is stationary.

Below is an example of calculating the Augmented Dickey-Fuller test on the Daily Female Births dataset. The statsmodels library provides the adfuller() function that implements the test.

Running the example prints the test statistic value of -4. The more negative this statistic, the more likely we are to reject the null hypothesis (we have a stationary dataset).

As part of the output, we get a look-up table to help determine the ADF statistic. We can see that our statistic value of -4 is less than the value of -3.449 at 1%.

This suggests that we can reject the null hypothesis with a significance level of less than 1% (i.e. a low probability that the result is a statistical fluke).

Rejecting the null hypothesis means that the process has no unit root, and in turn that the time series is stationary or does not have time-dependent structure.

We can perform the same test on the Airline Passenger dataset.

Running the example gives a different picture than the above. The test statistic is positive, meaning we are much less likely to reject the null hypothesis (it looks non-stationary).

Comparing the test statistic to the critical values, it looks like we would have to fail to reject the null hypothesis that the time series is non-stationary and does have time-dependent structure.

Let’s log transform the dataset again to make the distribution of values more linear and better meet the expectations of this statistical test.

Running the example shows a negative value for the test statistic.

We can see that the value is larger than the critical values, again, meaning that we can fail to reject the null hypothesis and in turn that the time series is non-stationary.

## Summary

In this tutorial, you discovered how to check if your time series is stationary with Python.

Specifically, you learned:

• The importance of time series data being stationary for use with statistical modeling methods and even some modern machine learning methods.
• How to use line plots and basic summary statistics to check if a time series is stationary.
• How to calculate and interpret statistical significance tests to check if a time series is stationary.

### 181 Responses to How to Check if Time Series Data is Stationary with Python

1. Eduardo Gonzatti December 30, 2016 at 6:08 am #

Hi there, nice post!

Just a quick question: when testing the residuals of an OLS regression between two price series for stationarity, would you then consider the two price series to be cointegrated if the H0 was rejected by the ADF test that you ran on the residuals?
Or would you first run the ADF test on each of the price series to see if they are I(1) themselves?

Thanks!

• Jason Brownlee December 31, 2016 at 7:00 am #

Hi Eduardo,

Both. I would check both the input data and the residuals.

• adedayo abraham June 1, 2017 at 8:12 pm #

Can you help with material for this project topic: comparison of different methods of making time series data stationary? You can inbox me at [email protected]. Thanks.

2. Eduardo Gonzatti January 6, 2017 at 10:46 pm #

Hi Jason,

I asked this because of a “common sense” (maybe not) assumption that price series would not be, per se, stationary by definition, so sometimes I ask myself if this kind of testing isn’t a little too much.

Best Regards!

• Jason Brownlee January 7, 2017 at 8:39 am #

I’m high on ML methods for time series over linear methods like ARIMA, but one really important consideration is stationarity.

Trend removal and exploring seasonality specifically is a big deal otherwise ML methods blow-up for the same reasons as linear methods.

I’d like to do a whole series of posts on stationarity.

• Naira Grigoryan October 16, 2018 at 4:16 am #

Hi. What kind of methods work well for checking whether a time series:

1. is trendy
2. is stationary
3. has seasonality

I would like to forecast time series via RNN, but to get more accurate results I need to check all 3 characteristics at the beginning.

• Jason Brownlee October 16, 2018 at 6:39 am #

Data visualization is a good start.

3. Cipher January 8, 2017 at 11:50 pm #

beside using log, do you consider to use panda.diff()?

• Jason Brownlee January 9, 2017 at 7:51 am #

Thanks Cipher, it beats doing the difference manually.

• Asieh August 28, 2019 at 4:09 am #

Hi Jason,

I am not sure I understand why in the case of the airline dataset you also checked the log. Because the first test shows that the data is not stationary. Why would you do the test on the logs as well? In other words, let’s say the adfuller test on the log showed that the log of this dataset is stationary; It still doesn’t change the fact that the dataset itself is non-stationary.

• Jason Brownlee August 28, 2019 at 6:42 am #

A power transform like a log can remove the non-stationary variance – as we see in this dataset.

• edgar panganiban October 14, 2019 at 3:47 pm #

I think what he means is that when you do power transform (log) to a non-stationary variance, and do testing on it (adf) and build a model also. That only applies to the transformed data. But how about the original , non-transformed dataset?

• Jason Brownlee October 15, 2019 at 6:04 am #

We no longer work with the raw data, we work on the transformed data. We continue to apply transforms until we get something stationary, then fit a model.

Does that help?

4. Seine Yumnam January 19, 2017 at 2:26 am #

great article! easy to understand. simplicity is powerful. and all those good stuff.

• Jason Brownlee January 19, 2017 at 7:34 am #

Thanks Seine. I’m glad you found it useful.

5. jack kinkade February 28, 2017 at 5:55 am #

“We interpret this result using the p-value from the test. A p-value below a threshold (such as 5% or 1%) suggests we accept the null hypothesis (non-stationary), otherwise it suggests we reject the null hypothesis (stationary)”

Shouldn’t this be the opposite of what you have stated. Please see http://stats.stackexchange.com/questions/55805/how-do-you-interpret-results-from-unit-root-tests

• Jason Brownlee February 28, 2017 at 8:30 am #

Hi Jack,

Yes, that is a typo, fixing now and I made it clearer with some more bullet points. All of the analysis in the post is correct.

In summary, the null hypothesis (H0) is that there is a unit root (the autoregression is non-stationary). Rejecting it means there is no unit root and the series is stationary.

A p-value above the significance threshold means we cannot reject H0; we accept that there is a unit root and that the data is non-stationary.

• Amine Ait el harraj April 9, 2018 at 10:29 pm #

Hi Jason, thank you for the great article, but i think but you fixed the typo in the opposite way, if i’m not mistaken it’s more like ” We interpret this result using the p-value from the test. A p-value below a threshold (such as 5% or 1%) suggests we accept the null hypothesis (stationary), otherwise it suggests we reject the null hypothesis (non-stationary) “

• Amine Ait el harraj April 9, 2018 at 10:32 pm #

My bad it’s correct, I confused it with the H0 of stationar tests

• Miro June 26, 2019 at 12:49 am #

An autoregression need not be non-stationary. Take a stationary AR(1), for example.

6. Magnus March 31, 2017 at 1:57 am #

I performed the Augmented Dickey-Fuller test on my own data set. My result are as follows:
p-value: 0.000000
Critical Values:
1%: -3.430
5%: -2.862
10%: -2.567
So my time series is stationary. In my example, I have 525,600 values, giving me a maxlag of 102. These are minute data for one month. But I don’t understand how so few lags can detect e.g. a daily variation?
Now when I calculate the distribution of occurrence frequency, there is clearly a time dependence on UT on hourly binned data. I have a higher number of samples, for a certain range of values, at around 15 UT compared to other times. So I have a UT dependence in my data, but it is still stationary. So I guess one should be careful when using this test. In my case there is a UT dependency on the number of values, at a certain level, rather than the value itself. How to deal with this? One idea, perhaps, is to add sine and cosine of time to the inputs. Any comments on this?

• Jason Brownlee March 31, 2017 at 5:56 am #

Interesting. Perhaps it would be worth performing a stationary test at different time scales?

• Magnus April 1, 2017 at 1:50 am #

Yes, I did. Same result. I also performed the test on the sunspot number, from one of your earlier posts. I then got this result:
p-value: 0.000000
Critical Values:
1%: -3.433
5%: -2.863
10%: -2.567
Now, I am really confused. I also did a test on artificial data from a sine function with normally distributed data added to it. Now the test gave a p-value of 0.07, but from the plot it was very obvious the data is non-stationary. So I really suggest to use the group by process in Pandas and plot the data.
Another approach, instead of removing seasonality is the following. If only the target values, used for training a prediction model, are non-stationary, then it might be easier to add sine/cosine of time to the inputs. Of course, the input space increases but there is no need to create time-lagged data for these inputs.
I appreciate any comments and suggestions.

• Jason Brownlee April 19, 2017 at 7:58 am #

I have found the test to be reliable.

Perhaps the version of the statsmodels library is out of date, or perhaps the data you have loaded does not match your expectation?

• abdulwahid gul October 29, 2018 at 2:17 am #

The problem is the way you are printing out the results. Can you just print the whole variable like this
print(result)

or something like this

I was having the same problem, but changing the printing format, fixed it for me.
p-value: 0.000000
Critical Values:
1%: -3.431
5%: -2.862
10%: -2.567

7. Joy April 1, 2017 at 4:02 am #

Results of Dickey-Fuller Test:
Test Statistic -1.152597e+01
p-value 3.935525e-21
#Lags Used 2.300000e+01
Number of Observations Used 1.417000e+03
Critical Value (5%) -2.863582e+00
Critical Value (1%) -3.434973e+00
Critical Value (10%) -2.567857e+00
dtype: float64

Hi , I want to forecast temperature of my time series dataset. Dickey -Fuller test in python gives me above results, which shows Test statistics is larger than any of the critical value meaning time series is not stationary after taking transformations. So ,can i forecast without time series being non-stationary?

• Jason Brownlee April 1, 2017 at 5:59 am #

You can, but consider another round of differencing.

• Clarke January 30, 2018 at 7:33 pm #

What code did you use to get this, Joy? I’m trying to get results like that but I only get the graph

8. Ritesh Kumar July 24, 2017 at 8:13 pm #

Does the statsmodel python library require us to convert the series into stationary series before feeding the series to any of the ARMA or ARIMA models ?

• Jason Brownlee July 25, 2017 at 9:41 am #

Ideally, I would. The model can difference to address trends, but I would recommend explicitly pre-processing the data before hand. This will help you better understand your problem/data.

9. Francois3C November 5, 2017 at 4:25 pm #

Great article, you make these topics understandable.

I started testing some series for stationarity and got strange behaviors I cannot understand.

In Python (3.6), the ADF test gives very different results for linear sequences of 100 and 101 items: ADF statistics of +2.59 and -4.23.

I’d expect both results to be very close to each other. Neither series is stationary, as they express the same trend. But the test is positive in one case and negative in the other.

What is wrong?

• Jason Brownlee November 6, 2017 at 4:50 am #

I would not worry, focus on the test (e.g. the value relative to critical value), not the value itself.

• Francois3C November 6, 2017 at 5:10 pm #

But this is precisely my problem: with a slight change in the number of observations of a series of constant slope/trend of +1, the test swings entirely from non-stationary to stationary for a reason I fail to understand.

X = range(100)
result = adfuller(list(X))
print('p-value: %f' % result[1])
for key, value in result[4].items():
    print('\t%s: %.3f' % (key, value))

p-value: 0.999073
1%: -3.505
5%: -2.894
10%: -2.584

X = range(101)
result = adfuller(list(X))
print('p-value: %f' % result[1])
for key, value in result[4].items():
    print('\t%s: %.3f' % (key, value))

p-value: 0.000580
1%: -3.504
5%: -2.894
10%: -2.584

• Jason Brownlee November 7, 2017 at 9:46 am #

Ah I see. It might be a case of requiring a critically minimum amount of data for the statistical test to be viable.

10. Choubix November 30, 2017 at 10:15 pm #

Thanks for sharing the knowledge!
quick questions if you don’t mind: I would like to test a few trading strategies on ETFs. It looks obvious that these time series are non stationary.
how does one go about converting them to stationary?
I would like to use Technical Indicators (which input are prices) as features in my model. What shall I do to the features?
my objective is not to predict price but to classify into “buy/sell” (or hold).
any algo better suited for financial time series?

Thank you!

• Jason Brownlee December 1, 2017 at 7:34 am #

You can use differencing and seasonal adjustment. I have posts on both methods, use the search feature.

11. foyle December 7, 2017 at 8:24 am #

Actually, when the ADF statistic < critical value, the series is stationary. Comparing the p-value with the critical value is not right and is confusing. See the adfuller API explanation at http://www.statsmodels.org/dev/generated/statsmodels.tsa.stattools.adfuller.html.

• Jason Brownlee December 7, 2017 at 3:03 pm #

I don’t think we are comparing the p-value to anything in this post; I believe we are reviewing the test statistic.

• foyle December 7, 2017 at 11:31 pm #

I see. If ADF statistic < critical value, or p-value < threshold, then the series is stationary. The threshold is 0.05, etc.

• Jason Brownlee December 8, 2017 at 5:41 am #

Perhaps I’m dense, but where exactly? Can you quote the text?

I note that I describe how to interpret p-values separately from interpreting the test statistic.

• foyle December 8, 2017 at 8:28 am #

p-value <= 0.05: Reject the null hypothesis (H0), the data does not have a unit root and is stationary.

And there is another explanation based on critical value.

So there are two ways of considering the adf result, using p-value or using critical value.

• Jason Brownlee December 8, 2017 at 2:29 pm #

In that section, I was introducing the meaning of the p-value, not how to interpret the test. Sorry for the confusion.

12. Dima Burlaj December 22, 2017 at 9:05 pm #

I performed the Dickey–Fuller test and got 1 as the p-value. Then I performed a Box-Cox transform, which decreased the p-value to 0.96. Then I performed seasonal differencing and the p-value decreased to 0.0000. After this, I built an LSTM neural network and trained it. Now, I want to compare results in the original scale and in the transformed scale. I found the scipy.special.inv_boxcox() function, which does the inverse transformation. But for me, it is not working. What can be wrong?

• Jason Brownlee December 23, 2017 at 5:17 am #

Perhaps you can experiment on some test data separate from your model. Transform and then inverse transform.

Remember all operations need to be reversed, including the seasonal adjustment.

13. Prashanth February 2, 2018 at 10:00 pm #

Excellent!

14. TAMER A. FARRAG April 23, 2018 at 3:14 am #

I’m very interested in your articles Jason, I have a question.

If I train my model using the residual data (after removing seasonality and trends), what about the predicted values? How do we get the correct values (how do we add seasonality and trends back)?

I hope that I can present my question correctly. sorry for my poor English.

Why you don’t use the package “statsmodels” to decompose the time series. I mean the issue discussed here:

https://stackoverflow.com/questions/20672236/time-series-decomposition-function-in-python

• Jason Brownlee April 23, 2018 at 6:21 am #

If you remove the trend and seasonality prior to modeling, you can add them back to the prediction.

If you used differencing, invert the differencing. If you used a model, invert the application of the model.

I have many examples on the blog of this.

• Joe Herro March 24, 2019 at 9:15 am #

HI Jason,

I am looking for he blog posts noted above regarding how to re-add trends and seasonality to modeling after it has been removed. Could you please point me to the blog posts which cover this? I cannot seem to locate them.

Thanks much,

Joe

• Jason Brownlee March 25, 2019 at 6:39 am #

If you remove the trend by differencing or the seasonality by seasonal differencing, you can add it back directly, via addition with the value that was subtracted.

15. Fauzan Taufik May 23, 2018 at 2:58 am #

Hi Jason, as far as I know adfuller is a statistical test for a random walk, where H1 means not a random walk. Above you reveal that H1 also means a stationary time series; is every non-random-walk series stationary?

16. Denis June 1, 2018 at 2:38 am #

Traceback (most recent call last):
File "shampoo.py", line 20, in
File "/home/denis/.local/lib/python3.5/site-packages/statsmodels/tsa/stattools.py", line 221, in adfuller
xdall = lagmat(xdiff[:, None], maxlag, trim='both', original='in')
File "/home/denis/.local/lib/python3.5/site-packages/statsmodels/tsa/tsatools.py", line 397, in lagmat
nobs, nvar = xa.shape
ValueError: too many values to unpack (expected 2)

17. nabila August 31, 2018 at 11:46 pm #

Hi Jason,

Nice tutorial! I’m just starting out with time series data so I’m wondering, if my data doesn’t pass stationarity tests, then I cannot use time series analysis on it, is that right? Can RNNs model time series that are non-stationary?

• Jason Brownlee September 1, 2018 at 6:21 am #

You can, but results might not be good.

RNNs do seem to perform better with a stationary series in my experience.

18. Aishwarya Singh September 12, 2018 at 7:42 pm #

Hi,

I guess a trend-stationary series is one that has a trend but no unit root. Correct me if I am wrong.

19. Krishna September 17, 2018 at 10:31 pm #

Hi Jason,
I have a small question: when you are working with ADF and the result suggests stationary, is it difference stationary, stationary in increments, or normal weak stationarity in time series?

• Jason Brownlee September 18, 2018 at 6:15 am #

The statistical test is reporting a likelihood of being stationary rather than a fact.

20. Dmitry September 25, 2018 at 4:38 am #

I am working with a time series that has multiple random measurements for every moment in time. In other words, my X series includes sets of distinct measured values for every timestamp.

Will the described procedure and code work for me as is? Should I be sorting such X or not?

• Jason Brownlee September 25, 2018 at 6:30 am #

You would need to work with each time series (variable) separately.

21. amelie BM September 25, 2018 at 11:50 pm #

Thank you for your explanations! It is quite interesting.
After the evaluation of the model, how can we visualize the predicted data to complete the available database?

thanks

• Jason Brownlee September 26, 2018 at 6:16 am #

Thanks. You can use the matplotlib plot() function to plot yhat vs y

22. ML Uros October 12, 2018 at 5:28 am #

Hi Jason. In the example above you used ADF to test whether the Gaussian normally distributed sample is stationary. 1) Any hints on what to do if we try to model a process that shows a non-Gaussian distribution? 2) Can we still make inferences about stationarity based on means/variances of two subsamples from a non-Gaussian process? 3) Could you please point me to a reference with a nice description of how to test for stationarity in non-normal samples and how to model such time series? Thanks!

• Jason Brownlee October 12, 2018 at 6:45 am #

Good question, using a data visualization is always a great fall-back.

23. Lahiru October 17, 2018 at 8:13 pm #

I need code for the Bai-Perron test, KPSS test and Phillips-Perron test.

• Jason Brownlee October 18, 2018 at 6:26 am #

24. Volka November 20, 2018 at 1:03 pm #

Hi, this is a very interesting tutorial. Thanks a lot.

I have about 1000+ different time-series datasets in the format (year, number) and need to forecast the values for each dataset over the next 5 years. As I have a lot of datasets, I would like to know if there is a way to automate the aforementioned stationarity check step so that I can directly perform the ARIMA process, or is there any other algorithm that you would recommend?

• Jason Brownlee November 20, 2018 at 2:06 pm #

Perhaps difference all datasets before modeling?

• Volka November 21, 2018 at 4:23 pm #

Thanks a lot for the suggestion. Did you mean performing ‘log’ as ‘difference’? And after that using the p-value of Augmented Dickey-Fuller test to decide the stationary?

Just curious to know, does performing ‘log’ guarantee that you have a stationary time-series dataset?

• Jason Brownlee November 22, 2018 at 6:21 am #

Log and other power transforms can calm an increasing/changing variance and make the data distribution more Gaussian.
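A quick synthetic illustration of that effect (made-up data, not the commenter's): a series whose spread grows with its level has a far more even variance after a log transform:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200)
# exponential growth with multiplicative noise: spread grows with the level
series = np.exp(0.02 * t) * (1 + 0.1 * rng.standard_normal(200))

first, second = series[:100], series[100:]
log_first, log_second = np.log(first), np.log(second)

print('raw variance ratio (2nd half / 1st half): %.1f' % (second.var() / first.var()))
print('log variance ratio (2nd half / 1st half): %.2f' % (log_second.var() / log_first.var()))
```

The raw halves differ in variance by an order of magnitude; after the log the two halves are nearly equal, which is the "calming" effect described above.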

25. mk December 11, 2018 at 2:31 pm #

The log transform does not work. What can we do next in this situation?
Thanks.

• Jason Brownlee December 11, 2018 at 2:34 pm #

What do you mean it does not work?

• mk December 11, 2018 at 5:44 pm #

The truth is that the time series is still non-stationary. How can we get a stationary time series?

26. Lukasz January 10, 2019 at 8:48 pm #

Thanks Jason, great article.
However, could you please advise how you would approach a stationarity test with multivariate, multi-input and multi-output time series?
I'm working on multistep prediction for a few thousand different elements. For each element, I'll probably have to use a few different kinds of measurements (like temperature, pressure, traffic, etc.). I have to predict three time series for each site. For some network elements the measurements go up, for others they go down… How should I approach the stationarity test properly in that case? Should I check and transform each element separately? Or perform the stationarity test on each measurement aggregated by time (over the whole set of elements)? What do you suggest?
Regards.

• Jason Brownlee January 11, 2019 at 7:44 am #

I would start by performing a test on each separate univariate series.

Then test if making one or all series stationary impacts model performance, e.g. with a linear model.
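That per-series loop can be sketched with the simple split-half mean/variance check from the tutorial (the column names and thresholds below are hypothetical, and this crude screen is no substitute for a proper ADF test on each series):

```python
import numpy as np

def split_half_check(x):
    """Crude stationarity screen: compare the two halves of the series.

    Flags a series when the half means differ by more than one overall
    standard deviation, or the half variances differ by more than 4x.
    """
    half = len(x) // 2
    a, b = x[:half], x[half:]
    mean_shift = abs(a.mean() - b.mean()) / max(x.std(), 1e-9)
    var_ratio = max(a.var(), b.var()) / max(min(a.var(), b.var()), 1e-9)
    return bool(mean_shift < 1.0 and var_ratio < 4.0)

rng = np.random.default_rng(0)
data = {
    'temperature': 20.0 + rng.standard_normal(300),                  # roughly stationary
    'traffic': np.linspace(0, 50, 300) + rng.standard_normal(300),   # trending
}
for name, series in data.items():
    print('%s looks stationary: %s' % (name, split_half_check(series)))
```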

• Lukasz January 23, 2019 at 8:28 pm #

Thanks Jason!

27. Hector Alvaro Rojas February 3, 2019 at 11:22 pm #

I have a question related to the Augmented Dickey-Fuller (ADF) test that you applied with the “International airline passengers” dataset. When ADF test is applied using R we get different decision results than when using python. Here are the numbers.

Using R:

library(tseries)
tsData <- AirPassengers # ts data
adf.test(tsData) # p-value < 0.05 indicates the TS is stationary

Out:

Augmented Dickey-Fuller Test

data: tsData
Dickey-Fuller = -7.3186, Lag order = 5, p-value = 0.01
alternative hypothesis: stationary

Warning message:
In adf.test(tsData) : p-value smaller than printed p-value

Conclusion: Reject H0, so we conclude it is stationary.

Using Python:

from pandas import read_csv
from statsmodels.tsa.stattools import adfuller
series = read_csv('airline-passengers.csv', header=0, index_col=0) # dataset path as appropriate
X = series.values
result = adfuller(X[:, 0])
print('p-value: %f' % result[1])
print('Critical Values:')
for key, value in result[4].items():
    print('\t%s: %.3f' % (key, value))

Out:
p-value: 0.991880
Critical Values:
5%: -2.884
1%: -3.482
10%: -2.579

Conclusion: Fail to reject H0, so we cannot conclude it is stationary. It looks non-stationary.
The test statistic is positive, meaning we are much less likely to reject the null hypothesis (it looks non-stationary).

Both datasets (R: AirPassengers and Python: daily-total-female-births) are the same. So I cannot see why the ADF test showed different results.

Would you please give me a hand in finding an explanation for this odd situation?

• Jason Brownlee February 4, 2019 at 5:48 am #

The airline dataset is not stationary. If a library reports that it is, perhaps there is a bug in your code or the library?

• Hector Alvaro Rojas February 7, 2019 at 11:41 pm #

The right R code is:

library(tseries)
tsData <- AirPassengers # ts data

AirPassengers

Out:
Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
1949 112 118 132 129 121 135 148 148 136 119 104 118
1950 115 126 141 135 125 149 170 170 158 133 114 140
1951 145 150 178 163 172 178 199 199 184 162 146 166
1952 171 180 193 181 183 218 230 242 209 191 172 194
1953 196 196 236 235 229 243 264 272 237 211 180 201
1954 204 188 235 227 234 264 302 293 259 229 203 229
1955 242 233 267 269 270 315 364 347 312 274 237 278
1956 284 277 317 313 318 374 413 405 355 306 271 306
1957 315 301 356 348 355 422 465 467 404 347 305 336
1958 340 318 362 348 363 435 491 505 404 359 310 337
1959 360 342 406 396 420 472 548 559 463 407 362 405
1960 417 391 419 461 472 535 622 606 508 461 390 432

class(AirPassengers)

Out:
[1] "ts"

frequency(AirPassengers)

Out:
[1] 12

I ran the test again, now using your dataset as the source (converting it to the "ts" class in R), and I got the same result.

Anyway, there is something definitely wrong in this way to present the stationarity study of this dataset by using the Augmented Dickey-Fuller (ADF) test in R.

There are many articles on the web that, using just the ADF test in R, conclude that the AirPassengers dataset is stationary when it clearly is not.

I agree with you 100%. I mean, in R, if we complement our study with other tests and autocorrelation analysis, we reach the same conclusion: the AirPassengers dataset is not stationary.

Regards.

HA

• Jason Brownlee February 8, 2019 at 7:50 am #

Interesting. Perhaps there is a bug in the routine, or a change in the way the test must be interpreted in R? I think the latter is more likely.

28. Alex Bietrix April 29, 2019 at 11:58 pm #

Hi,
I have a question about time series modeling. I implemented a multilinear regression in Python, but I found on Eviews that a threshold regression would be better.

Do you know how to implement this new model with Python ? I didn’t find the solution for the moment.

Alex

• Jason Brownlee April 30, 2019 at 6:59 am #

I’m not familiar with that algorithm, perhaps ask the author?

29. Min May 29, 2019 at 12:40 am #

Hi Jason,

Great post! I am new to time series analysis. I have a question for you: does a high autocorrelation plot (even at lags = 100) mean the time series is non-stationary? I used all of your methods and it seems the data should be stationary, but after plotting the autocorrelation, it still gives me a high result.

• Jason Brownlee May 29, 2019 at 8:45 am #

Not necessarily, if you calculate the ACF/PACF after removing trends and seasonality, it could be stationary and still have high correlation (I think – off hand).

30. zik June 2, 2019 at 5:34 pm #

Thanks so much, prof.
I've been trying to reach you.
My ADF test statistic was -2 and it didn't pass any of the significance tests.
Do you think the Phillips-Perron test is a good idea?

• Jason Brownlee June 3, 2019 at 6:38 am #

I am not familiar with that test, sorry.

31. Shubham Kumar June 6, 2019 at 8:58 am #

Hi Jason,

Great content!
If, after applying the ADF test to my time series, the null hypothesis holds, then that means the time series follows a random walk process.

Which means that trying to make the series stationary wouldn't work, right? Because a random walk means no learnable pattern.

Differencing the series will give me a trend-stationary time series but a non-stationary variance.

Am I correct? Or is there some transformation that could work?

• Jason Brownlee June 6, 2019 at 2:16 pm #

A random walk is non-stationary, but not all non-stationary time series are a random walk.

Try differencing to make it stationary.
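A minimal sketch of that differencing step on a synthetic trending series (illustrative data, not the commenter's):

```python
import numpy as np

rng = np.random.default_rng(42)
series = 0.5 * np.arange(200) + rng.standard_normal(200)  # linear trend + noise

diff = np.diff(series)  # first difference: x[t] - x[t-1]

# the raw halves have very different means (the trend); the differenced halves do not
half = len(diff) // 2
print('raw half means:  %.1f vs %.1f' % (series[:100].mean(), series[100:].mean()))
print('diff half means: %.2f vs %.2f' % (diff[:half].mean(), diff[half:].mean()))
```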

• Shubham Kumar June 7, 2019 at 12:06 am #

But isn't ADF testing for the presence of a unit root? And the presence of a unit root implies a random walk; this is my understanding. That is where the confusion is: if the ADF null hypothesis is not rejected, then forecasting shouldn't work.

• Jason Brownlee June 7, 2019 at 8:03 am #

That is not my understanding.

Unit root means non-stationary, probably trend non-stationary:
https://en.wikipedia.org/wiki/Dickey%E2%80%93Fuller_test

• Shubham Kumar June 7, 2019 at 11:21 pm #

So a random walk is specifically the case where the coefficient of the X(t-1) term is 1 and the process is AR(1)?

X(t) = b0 + b1 × X(t-1) + ε(t)

i.e. a random walk is a special case of the AR(1) process where b1 = 1.

Correct this time, right?

• Jason Brownlee June 8, 2019 at 6:56 am #
32. Kimiya June 7, 2019 at 5:20 pm #

Hi!
thanks for the great content!
I have a question I’ll appreciate if you could help me!
is it possible that we see an OBVIOUS trend in the plot, but still get the result that the TS is stationary from the Dickey-Fuller test?

33. vamsee krishna jagarlamudi June 13, 2019 at 12:38 am #

I have a small question: how can I cite the material you have presented here?

• vamsee krishna jagarlamudi June 13, 2019 at 12:40 am #

Sorry, Mainly the reference for the ADF test case

• Jason Brownlee June 13, 2019 at 6:18 am #
34. Anindya Sankar Chattopadhyay June 23, 2019 at 10:30 pm #

Hi Jason:

Do you have any article on VAR and VARMAX using statsmodels module of Python?

Any idea on how to measure the error of forecasting for those models in terms of MAE?

Thanks

35. Amandeep June 23, 2019 at 11:27 pm #

Hi Jason,

Do you have any article on checking stationarity for multivariate time series forecasting methods?

It would be great you can share VAR module using Python as well.

Thanks,
Aman

• Jason Brownlee June 24, 2019 at 6:32 am #

I do not, only univariate data.

Thanks for the suggestion.

36. André Araujo July 1, 2019 at 11:50 pm #

Hi Jason, how are you?
How can I handle a log plot if I have some zeros in my time series? My target is to predict rain, which is not stationary. I got an error following this approach: divide by zero encountered in log.
thanks.

• Jason Brownlee July 2, 2019 at 7:33 am #

You can create a log plot by adding a value to all samples to ensure all values are positive.
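A sketch of that shift (the offset of 1 below is an arbitrary choice; any constant that makes every value positive works):

```python
import numpy as np

rain = np.array([0.0, 0.0, 3.2, 0.0, 12.5, 0.4, 0.0, 7.1])  # made-up rainfall values

# np.log(rain) would produce -inf (and a divide-by-zero warning) for the zeros,
# so shift the series so its minimum becomes 1 before taking logs
shifted = rain - rain.min() + 1.0
log_rain = np.log(shifted)

print(log_rain.min())  # the zero-rain days map to log(1) = 0.0
```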

37. Leen July 21, 2019 at 3:08 am #

Hello Jason,

p-value: 0.000052
Critical Values:
5%: -2.870
1%: -3.449
10%: -2.571

Here, since the ADF statistic of -4 is less than all the critical values, we reject the null hypothesis; but the p-value of 0.000052 is greater than the 5% value (-2.870), so do we fail to reject the null hypothesis? How should we know?

• Leen July 21, 2019 at 3:09 am #

I mean we are getting conflicting results ?

• Jason Brownlee July 21, 2019 at 6:33 am #

p-value <= 0.05 means we reject. -4.808291 <= -3.449 means we reject. Reject in both cases.

38. Leen July 21, 2019 at 3:25 am #

Sorry Jason, I forgot that the p-value must be less than 0.05, rather than the critical value at 5%. It was only after I asked that I found the mistake. XD

• Jason Brownlee July 21, 2019 at 6:34 am #

No problem at all, it can get confusing.

39. Tejasvi August 4, 2019 at 4:49 pm #

Hi Jason,

A clarification on this statement please – “The intuition behind a unit root test is that it determines how strongly a time series is defined by a {trend}.”

I came across an example time series which is “Trend Stationary” but clearly is seasonal. ADF is reporting a VERY low p-value for it.

So, I am wondering,

1. If stationary means the absence of both trend and seasonality, is there a different test to check for complete stationarity (trend and seasonal)?
2. Or can the models work well as long as time series is trend stationary?

• Jason Brownlee August 5, 2019 at 6:47 am #

Yes, it won’t be stationary if there is seasonality. Often it is simpler to just talk about trends, but you’re right.

A stationary time series has neither a trend nor seasonality.

40. Loulou August 23, 2019 at 12:05 am #

Hi Jason,

as in statsmodel’s docs:

statsmodels.tsa.stattools.adfuller(x, maxlag=None, regression=’c’, autolag=’AIC’, store=False, regresults=False)

if we use the default values, maxlag will be 12*(nobs/100)^{1/4} by default, and the number of lags is chosen to minimize the corresponding information criterion (in this case AIC)

My question is:
if the returned lag, result[2], is 14, do you have an idea if we must take all lags up to 14 or just the 14th lag ?

• Jason Brownlee August 23, 2019 at 6:31 am #

All lags up to 14th by default.

To be more selective about lags in that interval, a custom model may be required.
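For reference, the Schwert rule quoted from the statsmodels docs above, 12*(nobs/100)^(1/4), can be computed directly. (The exact rounding statsmodels applies is an implementation detail; the helper below rounds up, so treat it as an approximation rather than the library's exact value.)

```python
import math

def approx_default_maxlag(nobs):
    # Schwert's rule from the statsmodels docs: 12 * (nobs / 100) ** (1 / 4)
    return math.ceil(12 * (nobs / 100.0) ** 0.25)

for nobs in (100, 365, 1000):
    print('%4d observations -> maxlag about %d' % (nobs, approx_default_maxlag(nobs)))
```

result[2] (usedlag) is then chosen within [0, maxlag] to minimize the information criterion, AIC by default.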

41. Anna September 6, 2019 at 6:43 pm #

Hi Jason,

I’m looking at the VXO and some other stock market data.
For the VXO, seasonal decompose shows some trend and strong seasonality, but the ADF test suggests the data is stationary. I didn't perform any kind of transformation here.
For the S&P 500, on the other hand, I had to use log returns to achieve stationarity. Is this possible? I'm struggling to understand the difference.
In case this is perfectly fine: can I use both series as input for the same model (let's say OLS to begin with), or do they have to get the same treatment, i.e. the same transformation, to lead to useful results?

• Jason Brownlee September 7, 2019 at 5:23 am #

Different time series may have to be made stationary in different ways. Once stationary, they can be modelled.

42. Berns Buenaobra September 7, 2019 at 6:27 am #

Doc Jason strikes again! This post gave me tremendously good insight! I'm almost done with my use case, but a good colleague at work threw the most important test for my (magnitude, time) paired data at me: how do you test for the presence of seasonality if the series looks stationary? Is the Augmented Dickey-Fuller test enough? Thanks again.

• Jason Brownlee September 8, 2019 at 5:08 am #

Happy that it helped!

If you have seasonality, the data is not stationary.

43. Sai September 27, 2019 at 3:54 pm #

Hi,

I am getting the results below for the Dickey-Fuller test. Can someone suggest a technique to make my data stationary?

I can see that the graph patterns for the actual data, rolling mean, and rolling std are almost the same.

I have already tried a log transformation and differencing, and the results are still the same 🙁

Results of Dickey-Fuller Test:
Test Statistic -8.161630e+00
p-value 9.110753e-13
#Lags Used 2.000000e+01
Number of Observations Used 1.351000e+03
Critical Value (1%) -3.435200e+00
Critical Value (5%) -2.863682e+00
Critical Value (10%) -2.567910e+00
dtype: float64

Regards,
Sai

44. edgar panganiban October 14, 2019 at 3:58 pm #

Nice tutorial! I just have a question. I have a data series with high variance, and as a solution I applied a log transform to it. The thing is, it changed the data in my series, and when I do forecasting, it only forecasts according to that transformed data (although it greatly improves my model, with an MSE of around 1 compared to 500+ before). How can I get back to forecasting the real values (while still using the model fit on the transformed series), and what approach is needed in this kind of scenario?

• Jason Brownlee October 15, 2019 at 6:06 am #

I don’t understand, sorry.

You fit a model on historical data, then use the model to make predictions on the future. Sometimes we prepare the historical data prior to fitting the model.

Which part are you having trouble with exactly?

If you need help making a prediction, e.g. calling predict(), this might help:
https://machinelearningmastery.com/make-sample-forecasts-arima-python/

45. Karolina October 15, 2019 at 3:54 am #

Hi Jason,
Thanks for this post, it is very helpful. I did the Dickey-Fuller test on my data and I got a zero p-value.
What does it mean? I read that it might mean that probably the data does not have normal distribution and are not stationary. Could it be the case? Below are the results of the test:
Test statistic = -4.539
P-value = 0.000
Critical values :
1%: -3.453342167806272
5%: -2.871663828287282
10%: -2.572164381381345

• Jason Brownlee October 15, 2019 at 6:20 am #

It may suggest the data is stationary.

Perhaps plot to confirm.

46. edgar panganiban October 21, 2019 at 2:01 pm #

Great tutorial, just one more question. What if in the end you find out that the time series data is non-stationary? Do you continue building a model from it? Or do you find another time series that can be modeled better?

• Jason Brownlee October 22, 2019 at 5:40 am #

Thanks.

You can make it stationary using differencing, seasonal differencing and power transforms.
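Those transforms can be sketched in a few lines on a synthetic series with both a trend and a yearly cycle (illustrative only; the 365-step season is an assumption):

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(730)
# two years of daily data: linear trend + yearly cycle + noise
series = 20 + 0.05 * t + 10 * np.sin(2 * np.pi * t / 365) + rng.standard_normal(730)

first_diff = series[1:] - series[:-1]         # removes the linear trend
seasonal_diff = series[365:] - series[:-365]  # removes the yearly cycle
power = np.sqrt(series - series.min() + 1.0)  # a power transform to steady variance

# the seasonal difference of a linear trend is a constant: 365 * slope
print('seasonal diff mean: %.2f (365 * 0.05 = %.2f)' % (seasonal_diff.mean(), 365 * 0.05))
```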

47. Felipe October 30, 2019 at 1:33 am #

Hi Jason.

I have a doubt how to prepare this dataset (link below). Can you give a suggestion please?

It appears to be a non-stationary series, correct?

Original dataset plot:
https://pasteboard.co/IEdYWWx.png

I tried to apply the difference method with 365 days:

diff = difference(values, 365)

Dataset plot after difference:
https://pasteboard.co/IEe4rko.png

Is that the right procedure?

Thanks.

• Jason Brownlee October 30, 2019 at 6:05 am #

Yes, looks like seasonality.

Perhaps you need to perform seasonal differencing twice?

• Felipe November 1, 2019 at 2:05 am #

Hi Jason.

Should I evaluate the RMSE on normalized or non-normalized data?

Is the RMSE result the same for both?

Thanks.

• Jason Brownlee November 1, 2019 at 5:40 am #

Typically you evaluate error on data with all transforms reversed so the units are the same as the original data.

48. Karl October 30, 2019 at 10:18 am #

Good post, Jason and thanks!

I was wondering (assuming we had no prior knowledge) how to tell whether X or Y is the independent variable. Is there a way to determine one or the other?

49. Fahad November 17, 2019 at 3:10 am #

Hi Jason ,

Thank you for your email and for everything. I am still learning from you and your book.
I have two questions regarding How to Check if Time Series Data is Stationary with Python.
From the test of the mean and variance values, and from visual observation, my original data has a trend.

To remove the trend, I know I need to apply first or second differencing.

Will applying the differencing method to remove the trend affect my original data values?
Is there any other method to remove the trend instead of differencing?

Thank you.

Regards,

• Jason Brownlee November 17, 2019 at 7:15 am #

Yes, differencing will change your original data, this is the goal.

Yes, you can fit a linear model and subtract it from the data.
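That fit-and-subtract alternative can be sketched with numpy.polyfit (a generic illustration on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(300, dtype=float)
series = 2.0 + 0.1 * t + rng.standard_normal(300)  # trend + noise

coeffs = np.polyfit(t, series, deg=1)  # fit a straight line through the series
trend = np.polyval(coeffs, t)
detrended = series - trend             # subtract the fitted line

print('fitted slope: %.3f' % coeffs[0])
print('detrended half means: %.2f vs %.2f' % (detrended[:150].mean(), detrended[150:].mean()))
```

Unlike differencing, this keeps the series at its original length and scale, and the fitted trend can be added back after forecasting.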

50. John November 24, 2019 at 7:28 am #

Hi, I was trying to run the same code as you have shown in this article, it seems that passing:

X = series.values

shows the following error:

ValueError: too many values to unpack (expected 2)

I fixed this issue by using the following code:

import statsmodels.tsa.stattools as tsa

I am very confused as to what the difference between the two code blocks shown above is.

Excellent site and keep up the good work.

John

• John November 24, 2019 at 7:59 am #

One more small bug I found in the code:

split = (len(X) / 2) ### this returns 182.5 for female births csv
X1, X2 = X[0:split], X[split:]

The slicing procedure is expecting an integer for X1 & X2, so it returns an error if len(X) is an odd number.

I fixed this by using the following code:
split = round(len(X)/2)

• Jason Brownlee November 24, 2019 at 9:25 am #

Nice, thanks!

I have updated the examples.

• Jason Brownlee November 24, 2019 at 9:24 am #

Looks like a bug in the loading of the dataset. I must have introduced it in the update.

Thanks! Fixed.

51. vania todorova December 21, 2019 at 8:03 am #

Hello Jason, thanks for all the tutorials! I performed the ADF test on my time series data and it gave me the following results. Do you read this as meaning it is stationary and I can just go ahead with the time series algorithm? Thanks!
Results of Dickey-Fuller Test:
Test Statistic -6.817677e+00
p-value 2.039396e-09
#Lags Used 6.000000e+00
Number of Observations Used 7.770000e+02
Critical Value (1%) -3.438794e+00
Critical Value (5%) -2.865267e+00
Critical Value (10%) -2.568755e+00
dtype: float64

• Jason Brownlee December 21, 2019 at 8:17 am #

It looks stationary.

• vania todorova December 24, 2019 at 3:54 am #

Thanks! If I try to log the data, it keeps telling me that it encountered divide by zero in log. Any suggestions on how to deal with that, other than not using the log of the data? Thanks!

• Jason Brownlee December 24, 2019 at 6:44 am #

Make the data positive before using the log transform.

52. Rajesh Swarnkar January 24, 2020 at 6:03 pm #

Hi Jason,

I ran the adfuller and output is as below:

ADF Statistic: -2.457146
p-value: 0.126252
Critical Values: 1%: -3.438, 5%: -2.865, 10%: -2.569

https://github.com/RSwarnkar/temporary/blob/master/timeseries2.png

Is series stationary or non-stationary?

regards,
Rajesh

53. Rajesh Swarnkar January 24, 2020 at 6:05 pm #

Do you plan to add a login feature to the website so that I can check my past comments and replies?

Thanks !

• Rajesh Swarnkar January 24, 2020 at 6:08 pm #

And a separate discussion domain would be awesome too !

• Jason Brownlee January 25, 2020 at 8:32 am #

Not at this stage.

54. ENRIQUE BONILLA February 1, 2020 at 7:10 am #

Hello Jason

A missing cast on line 4 should be added for Python 3.7.

X = series.values
split = int(len(X) / 2)
X1 = X[0:split]
X2 = X[split:]
mean1, mean2 = X1.mean(), X2.mean()
var1, var2 = X1.var(), X2.var()
print('mean1=%f, mean2=%f' % (mean1, mean2))
print('variance1=%f, variance2=%f' % (var1, var2))

The rest of the code samples are OK.

E. Bonilla

55. Devarshi Goswami February 12, 2020 at 5:56 pm #

Following is the output of my adfuller() function.
I understand this means my time series is stationary? Are there any other interpretations that can be derived from this?
Can I fit this into an ARIMA model to predict future values?

p-value: 0.000238
Critical Values:
1%: -3.433
5%: -2.863
10%: -2.567

56. Steve April 6, 2020 at 2:47 pm #

I'm currently using this library. Have you used this before?

https://alkaline-ml.com/pmdarima/modules/classes.html#arima-estimator-statistical-tests

57. Marcus April 12, 2020 at 2:11 pm #

Hi Jason. I just took the ADF test and got p-value = 0, test statistic = -14, and critical values all greater than -4. Following the rule, I could have accepted that the series is stationary. But the thing is, after I looked at its ACF plot, the series presents strong autocorrelation. In the plot, the coefficients decline very slowly to zero until the lag reaches 200, which contradicts the results of the ADF test. So, I am very curious: could this actually happen? Or is it just a mistake of mine somewhere?

• Jason Brownlee April 13, 2020 at 6:10 am #

Perhaps the assumptions of the test were violated making the result invalid – just a guess.

Perhaps use the results as a guide and focus on getting the most out of your models.

58. Leo Jingbo April 17, 2020 at 7:16 pm #

Thanks for this great post, it is very helpful. I have one question. How to check stationarity for multivariate time series? Can i just test the label? or test the all variables?

regards,
Leo

• Jason Brownlee April 18, 2020 at 5:45 am #

Perhaps start by checking if each separate series is stationary?

59. Hypnose April 21, 2020 at 3:15 pm #

Hi Jason, Nice article

I performed the two methods that you presented in the article on my own dataset.

– Firstly, I computed the means of many subseries obtained by lags, and I computed their variance; I did the same thing with the variances. I saw that the variance of the means and the variance of the variances are too large, hence, according to the definition of stationarity, I concluded that the series is non-stationary.

– Secondly, I performed the ADF test on the same series and obtained a p-value below the significance level, so I rejected the null hypothesis: the series is stationary.

I obtained two different conclusions with the two methods; what is wrong, please?

Sorry for my poor English, I am French.

• Jason Brownlee April 22, 2020 at 5:48 am #

Thanks.

Perhaps try differencing the data and fit a model on the differenced data and another on the raw data and use the model that achieves the best performance.

• Hypnose April 22, 2020 at 9:17 pm #

Thanks.

I will do it

60. Sai kumar June 7, 2020 at 10:50 pm #

Hello Jason,

I am very new to machine learning and your article really helps with time series models 🙂
Could you help me with my queries below?

After performing the Dickey-Fuller test my results are:
p-value: 0.61

This clearly says my series is non-stationary; now, to make it stationary, what exactly would be the next step?
Also, my dataset has just two variables, Date and Sales value; which model should I proceed with, ARIMA or another?

• Jason Brownlee June 8, 2020 at 6:13 am #

You can explore seasonal differencing if you have seasonality, or differencing if you have a trend; use the latter if you are unsure.

61. hossein Amini June 15, 2020 at 2:18 am #

How do we identify the point at which a time series becomes stationary? For instance, we have established that our signal is stationary, but how do we determine when it became stationary? The first solution that came to my mind was splitting the signal into different parts and then calculating the statistics of each part. A significant difference would mark the change, but that is not satisfying since it is not an automatic method.
What is the right idea?

• Jason Brownlee June 15, 2020 at 6:07 am #

Not sure I understand, sorry. Either the series is stationary or not.

62. Vinod July 9, 2020 at 4:59 am #

I am new to deep learning. I would like to know how to use a CNN for time series classification.

63. Rahul Kumar October 9, 2020 at 12:02 am #

One correction before implementing the Dickey-Fuller test with adfuller():

adfuller() accepts a 1D array of the time series.

X = X.iloc[:, 0].values

as it gives an error when working with a 2D array.

64. Saeed June 5, 2021 at 8:22 pm #

Hi Jason,
Thank you for the great demonstration as always.

Assuming the process is already stationary (i.e. I(0), and the ADF null hypothesis of a unit root is rejected), is there any need to check the first difference? If yes, what is the rationale/application of it?

Any help is appreciated.

• Jason Brownlee June 6, 2021 at 5:49 am #

If your data is stationary both visually and statistically, then there may be no need to difference.

65. Luigi July 23, 2021 at 11:43 pm #

Hi Jason,
I am facing a behaviour I cannot explain.
I ran adfuller; the t-statistics and p-values are such that my series is stationary.
Then I used plot_acf and plot_pacf on both my signal s and the signal squared (I wanted to check whether I have conditional heteroskedasticity).
The plots show significant high-order lags for both s and s**2, which to me means that the series is non-stationary and heteroskedastic.
What am I missing? The ACF, PACF and adfuller results seem to be contradictory.

Thanks
Luigi

• Jason Brownlee July 24, 2021 at 5:15 am #

Correlation with lag does not mean nonstationary.

66. Nathan Hanks August 6, 2021 at 9:12 am #

With regard to scaling, would you apply the diff transform after you scale the data, or before? If I'm just applying MinMaxScaler, which rescales to a range, the trends would still be evident, so I would think it doesn't matter, but I would like to hear your advice.

Thank you.

67. Rex November 7, 2021 at 4:28 pm #

Hello Jason.

It is an awesome tutorial, and it helps me a lot.
Just a quick question.

Is it possible that the result becomes worse after applying log transformation?

Here is the result before log transformation and after log transformation.

————————————————————————————————————————————
# Before

p-value: 0.012191
Critical Values:
1%: -3.563
5%: -2.919
10%: -2.597
————————————————————————————————————————————
# After

p-value: 0.022074
Critical Values:
1%: -3.563
5%: -2.919
10%: -2.597

————————————————————————————————————————————
BTW, I want to share my experience with you.
Before the log transformation, I compute the mean and variance of the raw data.
Here is the result:

mean1=2046.769231, mean2=1811.777778
variance1=91577.384615, variance2=19364.641026

And I thought it was non-stationary.
However, the ADF test shows that it is stationary at the significance value of 5%.
It is surprising and meaningful. I should do both checks next time, hahaha.

• Adrian Tam November 14, 2021 at 11:16 am #

Log transformation is non-linear. Hence it definitely can become worse for some model or dataset.

68. Saleh March 19, 2022 at 9:55 pm #

Hi Jason,

I am working with a non-stationary time series. I think this time series might be piecewise stationary, meaning that on a short enough time interval it might be stationary. As an example, a speech signal has such behavior: although it is not stationary over a long time interval, it can be modeled as a stationary signal on a short interval of 20 milliseconds.

I want to find the length of this short interval for the time series I'm working with. Do you have any suggestions for that? Can I only do trial and error?

Let's say I'm working on predicting traffic evolution in a datacenter network, and I have some traffic datasets. ARIMA is the first model I'm trying. Obviously this time series is non-stationary and its statistics change over time, but I guess that over a short time it can be considered stationary. Should I just take an arbitrary interval and apply the ADF test?