Time series data is different from more traditional classification and regression predictive modeling problems.

The temporal structure adds an order to the observations. This imposed order means that important assumptions about the consistency of those observations need to be handled specifically.

For example, when modeling, there are assumptions that the summary statistics of observations are consistent. In time series terminology, we refer to this expectation as the time series being stationary.

These assumptions can be easily violated in time series by the addition of a trend, seasonality, and other time-dependent structures.
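As a small synthetic illustration (not one of the datasets used below), adding a linear trend to white noise is enough to shift the mean between the first and second halves of a series:

```python
import numpy as np

# Synthetic example: a white-noise series keeps a stable mean,
# while adding a linear trend shifts the mean over time.
rng = np.random.default_rng(1)
n = 200
stationary = rng.normal(loc=10.0, scale=1.0, size=n)
trending = stationary + 0.05 * np.arange(n)

for name, x in [('stationary', stationary), ('trending', trending)]:
    print('%s: first-half mean=%.2f, second-half mean=%.2f'
          % (name, x[:n // 2].mean(), x[n // 2:].mean()))
```

The trending series shows a clearly larger mean in its second half, while the white-noise series does not.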

In this tutorial, you will discover how to check if your time series is stationary with Python.

After completing this tutorial, you will know:

- How to identify obvious stationary and non-stationary time series using line plots.
- How to spot check summary statistics like mean and variance for a change over time.
- How to use statistical tests with statistical significance to check if a time series is stationary.

Let’s get started.

**Update Feb/2017**: Fixed typo in interpretation of p-value, added bullet points to make it clearer.

**Update May/2018**: Improved language around reject vs. fail to reject of statistical tests.

## Stationary Time Series

The observations in a stationary time series are not dependent on time.

Time series are stationary if they do not have trend or seasonal effects. Summary statistics calculated on the time series are consistent over time, like the mean or the variance of the observations.

When a time series is stationary, it can be easier to model. Statistical modeling methods assume or require the time series to be stationary to be effective.

Below is an example of the Daily Female Births dataset that is stationary.

```python
from pandas import read_csv
from matplotlib import pyplot

# load the dataset and create a line plot
series = read_csv('daily-total-female-births.csv', header=0, index_col=0, parse_dates=True).squeeze('columns')
series.plot()
pyplot.show()
```

Running the example creates the following plot.


## Non-Stationary Time Series

Observations from a non-stationary time series show seasonal effects, trends, and other structures that depend on the time index.

Summary statistics like the mean and variance do change over time, providing a drift in the concepts a model may try to capture.

Classical time series analysis and forecasting methods are concerned with making non-stationary time series data stationary by identifying and removing trends and removing seasonal effects.

Below is an example of the Airline Passengers dataset that is non-stationary, showing both trend and seasonal components.

```python
from pandas import read_csv
from matplotlib import pyplot

# load the dataset and create a line plot
series = read_csv('international-airline-passengers.csv', header=0, index_col=0, parse_dates=True).squeeze('columns')
series.plot()
pyplot.show()
```

Running the example creates the following plot.

## Types of Stationary Time Series

The notion of stationarity comes from the theoretical study of time series and it is a useful abstraction when forecasting.

There are some finer-grained notions of stationarity that you may come across if you dive deeper into this topic. They are:

- **Stationary Process**: A process that generates a stationary series of observations.
- **Stationary Model**: A model that describes a stationary series of observations.
- **Trend Stationary**: A time series that does not exhibit a trend.
- **Seasonal Stationary**: A time series that does not exhibit seasonality.
- **Strictly Stationary**: A mathematical definition of a stationary process, specifically that the joint distribution of observations is invariant to time shift.
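For example, a trend-stationary series becomes stationary once its deterministic trend is estimated and removed. A minimal sketch on synthetic data (the slope and noise level here are arbitrary):

```python
import numpy as np

# Synthetic trend-stationary series: a deterministic linear trend plus noise.
rng = np.random.default_rng(0)
t = np.arange(200)
series = 2.0 + 0.1 * t + rng.normal(size=200)

# Fit and subtract the linear trend; the residual is (approximately) stationary.
slope, intercept = np.polyfit(t, series, 1)
residual = series - (intercept + slope * t)
print('residual half means: %.3f, %.3f'
      % (residual[:100].mean(), residual[100:].mean()))
```

After detrending, the two half-means are both close to zero, unlike in the raw series.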

## Stationary Time Series and Forecasting

Should you make your time series stationary?

Generally, yes.

If you have clear trend and seasonality in your time series, then model these components, remove them from observations, then train models on the residuals.

If we fit a stationary model to data, we assume our data are a realization of a stationary process. So our first step in an analysis should be to check whether there is any evidence of a trend or seasonal effects and, if there is, remove them.

— Page 122, Introductory Time Series with R.

Statistical time series methods and even modern machine learning methods will benefit from the clearer signal in the data.

But…

We turn to machine learning methods when the classical methods fail. When we want more or better results. We cannot know how to best model unknown nonlinear relationships in time series data and some methods may result in better performance when working with non-stationary observations or some mixture of stationary and non-stationary views of the problem.

The suggestion here is to treat properties of a time series being stationary or not as another source of information that can be used in feature engineering and feature selection on your time series problem when using machine learning methods.

## Checks for Stationarity

There are many methods to check whether a time series (direct observations, residuals, otherwise) is stationary or non-stationary.

- **Look at Plots**: You can review a time series plot of your data and visually check if there are any obvious trends or seasonality.
- **Summary Statistics**: You can review the summary statistics for your data for seasons or random partitions and check for obvious or significant differences.
- **Statistical Tests**: You can use statistical tests to check if the expectations of stationarity are met or have been violated.

Above, we have already introduced the Daily Female Births and Airline Passengers datasets as stationary and non-stationary respectively with plots showing an obvious lack and presence of trend and seasonality components.

Next, we will look at a quick and dirty way to calculate and review summary statistics on our time series dataset for checking to see if it is stationary.

## Summary Statistics

A quick and dirty check to see if your time series is non-stationary is to review summary statistics.

You can split your time series into two (or more) partitions and compare the mean and variance of each group. If they differ and the difference is statistically significant, the time series is likely non-stationary.

Next, let’s try this approach on the Daily Births dataset.

### Daily Births Dataset

Because we are looking at the mean and variance, we are assuming that the data conforms to a Gaussian (also called the bell curve or normal) distribution.

We can also quickly check this by eyeballing a histogram of our observations.

```python
from pandas import read_csv
from matplotlib import pyplot

# load the dataset and plot a histogram of the observations
series = read_csv('daily-total-female-births.csv', header=0, index_col=0, parse_dates=True).squeeze('columns')
series.hist()
pyplot.show()
```

Running the example plots a histogram of values from the time series. We clearly see the bell curve-like shape of the Gaussian distribution, perhaps with a longer right tail.

Next, we can split the time series into two contiguous sequences. We can then calculate the mean and variance of each group of numbers and compare the values.

```python
from pandas import read_csv

series = read_csv('daily-total-female-births.csv', header=0, index_col=0, parse_dates=True).squeeze('columns')
X = series.values
split = len(X) // 2  # integer division so the index is an int
X1, X2 = X[0:split], X[split:]
mean1, mean2 = X1.mean(), X2.mean()
var1, var2 = X1.var(), X2.var()
print('mean1=%f, mean2=%f' % (mean1, mean2))
print('variance1=%f, variance2=%f' % (var1, var2))
```

Running this example shows that the mean and variance values are different, but in the same ball-park.

```
mean1=39.763736, mean2=44.185792
variance1=49.213410, variance2=48.708651
```

Next, let’s try the same trick on the Airline Passengers dataset.

### Airline Passengers Dataset

Cutting straight to the chase, we can split our dataset and calculate the mean and variance for each group.

```python
from pandas import read_csv

series = read_csv('international-airline-passengers.csv', header=0, index_col=0, parse_dates=True).squeeze('columns')
X = series.values
split = len(X) // 2  # integer division so the index is an int
X1, X2 = X[0:split], X[split:]
mean1, mean2 = X1.mean(), X2.mean()
var1, var2 = X1.var(), X2.var()
print('mean1=%f, mean2=%f' % (mean1, mean2))
print('variance1=%f, variance2=%f' % (var1, var2))
```

Running the example, we can see the mean and variance look very different.

We have a non-stationary time series.

```
mean1=182.902778, mean2=377.694444
variance1=2244.087770, variance2=7367.962191
```

Well, maybe.

Let’s take one step back and check if assuming a Gaussian distribution makes sense in this case by plotting the values of the time series as a histogram.

```python
from pandas import read_csv
from matplotlib import pyplot

# load the dataset and plot a histogram of the observations
series = read_csv('international-airline-passengers.csv', header=0, index_col=0, parse_dates=True).squeeze('columns')
series.hist()
pyplot.show()
```

Running the example shows that indeed the distribution of values does not look like a Gaussian, therefore the mean and variance values are less meaningful.

This squashed distribution of the observations may be another indicator of a non-stationary time series.

Reviewing the plot of the time series again, we can see that there is an obvious seasonality component, and it looks like the seasonality component is growing.

This may suggest an exponential growth from season to season. A log transform can be used to flatten out exponential change back to a linear relationship.

Below is the same histogram with a log transform of the time series.

```python
from pandas import read_csv
from matplotlib import pyplot
from numpy import log

series = read_csv('international-airline-passengers.csv', header=0, index_col=0, parse_dates=True).squeeze('columns')
X = log(series.values)
pyplot.hist(X)
pyplot.show()
pyplot.plot(X)
pyplot.show()
```

Running the example, we can see the more familiar Gaussian-like or Uniform-like distribution of values.

We also create a line plot of the log transformed data and can see the exponential growth seems diminished, but we still have a trend and seasonal elements.

We can now calculate the mean and variance of the values of the log transformed dataset.

```python
from pandas import read_csv
from numpy import log

series = read_csv('international-airline-passengers.csv', header=0, index_col=0, parse_dates=True).squeeze('columns')
X = log(series.values)
split = len(X) // 2  # integer division so the index is an int
X1, X2 = X[0:split], X[split:]
mean1, mean2 = X1.mean(), X2.mean()
var1, var2 = X1.var(), X2.var()
print('mean1=%f, mean2=%f' % (mean1, mean2))
print('variance1=%f, variance2=%f' % (var1, var2))
```

Running the example shows mean and variance values for each group that are again similar, but not identical.

Perhaps, from these numbers alone, we would say the time series is stationary, but we strongly believe this to not be the case from reviewing the line plot.

```
mean1=5.175146, mean2=5.909206
variance1=0.068375, variance2=0.049264
```

This is a quick and dirty method that may be easily fooled.

We can use a statistical test to check if the difference between two samples of Gaussian random variables is real or a statistical fluke. We could explore statistical significance tests, like the Student t-test, but things get tricky because of the serial correlation between values.
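As a sketch of why this is tricky (synthetic data; the Gaussian parameters here are arbitrary), Welch's t-test from SciPy can compare the two halves, but its independence assumption is exactly what autocorrelated time series data violates:

```python
import numpy as np
from scipy.stats import ttest_ind

# Naive check: Welch's t-test on the two halves of a synthetic series.
# The test assumes independent observations, an assumption that
# autocorrelated time series data usually violates, so treat the
# p-value with suspicion.
rng = np.random.default_rng(0)
x = rng.normal(loc=40.0, scale=7.0, size=365)  # stand-in for a daily series

half = len(x) // 2
stat, p = ttest_ind(x[:half], x[half:], equal_var=False)
print('t=%.3f, p=%.3f' % (stat, p))
```

On genuinely independent data like this, the p-value is meaningful; on serially correlated data it is not, which is why a purpose-built test is preferred.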

In the next section, we will use a statistical test designed to explicitly comment on whether a univariate time series is stationary.

## Augmented Dickey-Fuller test

Statistical tests make strong assumptions about your data. They can only be used to inform the degree to which a null hypothesis can be rejected or fail to be rejected. The result must be interpreted for a given problem to be meaningful.

Nevertheless, they can provide a quick check and confirmatory evidence that your time series is stationary or non-stationary.

The Augmented Dickey-Fuller test is a type of statistical test called a unit root test.

The intuition behind a unit root test is that it determines how strongly a time series is defined by a trend.

There are a number of unit root tests and the Augmented Dickey-Fuller may be one of the more widely used. It uses an autoregressive model and optimizes an information criterion across multiple different lag values.

The null hypothesis of the test is that the time series can be represented by a unit root, that it is not stationary (has some time-dependent structure). The alternate hypothesis (rejecting the null hypothesis) is that the time series is stationary.

- **Null Hypothesis (H0)**: If failed to be rejected, it suggests the time series has a unit root, meaning it is non-stationary. It has some time-dependent structure.
- **Alternate Hypothesis (H1)**: The null hypothesis is rejected; it suggests the time series does not have a unit root, meaning it is stationary. It does not have time-dependent structure.

We interpret this result using the p-value from the test. A p-value below a threshold (such as 5% or 1%) suggests we reject the null hypothesis (stationary); a p-value above the threshold suggests we fail to reject the null hypothesis (non-stationary).

- **p-value > 0.05**: Fail to reject the null hypothesis (H0); the data has a unit root and is non-stationary.
- **p-value <= 0.05**: Reject the null hypothesis (H0); the data does not have a unit root and is stationary.
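These rules can be wrapped in a small helper; the interpret_adf name and the 0.05 default threshold are just illustrative conventions, not part of statsmodels:

```python
def interpret_adf(p_value, alpha=0.05):
    """Map an ADF test p-value to a plain-language conclusion."""
    if p_value <= alpha:
        # Reject H0: no unit root, the series looks stationary.
        return 'stationary'
    # Fail to reject H0: we cannot rule out a unit root.
    return 'non-stationary'

print(interpret_adf(0.000052))  # prints 'stationary'
print(interpret_adf(0.991880))  # prints 'non-stationary'
```

The two example p-values are the ones produced by the tests in the sections that follow.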

Below is an example of calculating the Augmented Dickey-Fuller test on the Daily Female Births dataset. The statsmodels library provides the adfuller() function that implements the test.

```python
from pandas import read_csv
from statsmodels.tsa.stattools import adfuller

series = read_csv('daily-total-female-births.csv', header=0, index_col=0, parse_dates=True).squeeze('columns')
X = series.values
result = adfuller(X)
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
print('Critical Values:')
for key, value in result[4].items():
    print('\t%s: %.3f' % (key, value))
```

Running the example prints the test statistic value of -4. The more negative this statistic, the more likely we are to reject the null hypothesis (we have a stationary dataset).

As part of the output, we get a look-up table to help determine the ADF statistic. We can see that our statistic value of -4 is less than the value of -3.449 at 1%.

This suggests that we can reject the null hypothesis with a significance level of less than 1% (i.e. a low probability that the result is a statistical fluke).

Rejecting the null hypothesis means that the process has no unit root, and in turn that the time series is stationary or does not have time-dependent structure.

```
ADF Statistic: -4.808291
p-value: 0.000052
Critical Values:
	5%: -2.870
	1%: -3.449
	10%: -2.571
```

We can perform the same test on the Airline Passenger dataset.

```python
from pandas import read_csv
from statsmodels.tsa.stattools import adfuller

series = read_csv('international-airline-passengers.csv', header=0, index_col=0, parse_dates=True).squeeze('columns')
X = series.values
result = adfuller(X)
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
print('Critical Values:')
for key, value in result[4].items():
    print('\t%s: %.3f' % (key, value))
```

Running the example gives a different picture than the above. The test statistic is positive, meaning we are much less likely to reject the null hypothesis (it looks non-stationary).

Comparing the test statistic to the critical values, it looks like we would have to fail to reject the null hypothesis that the time series is non-stationary and does have time-dependent structure.

```
ADF Statistic: 0.815369
p-value: 0.991880
Critical Values:
	5%: -2.884
	1%: -3.482
	10%: -2.579
```

Let’s log transform the dataset again to flatten the exponential growth and better meet the expectations of this statistical test.

```python
from pandas import read_csv
from statsmodels.tsa.stattools import adfuller
from numpy import log

series = read_csv('international-airline-passengers.csv', header=0, index_col=0, parse_dates=True).squeeze('columns')
X = log(series.values)
result = adfuller(X)
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
for key, value in result[4].items():
    print('\t%s: %.3f' % (key, value))
```

Running the example shows a negative value for the test statistic.

We can see that the value is larger than the critical values, meaning again that we fail to reject the null hypothesis, and in turn that the time series is non-stationary.

```
ADF Statistic: -1.717017
p-value: 0.422367
	5%: -2.884
	1%: -3.482
	10%: -2.579
```

## Summary

In this tutorial, you discovered how to check if your time series is stationary with Python.

Specifically, you learned:

- The importance of time series data being stationary for use with statistical modeling methods and even some modern machine learning methods.
- How to use line plots and basic summary statistics to check if a time series is stationary.
- How to calculate and interpret statistical significance tests to check if a time series is stationary.

Do you have any questions about stationary and non-stationary time series, or about this post?

Ask your questions in the comments below and I will do my best to answer.

Hi there, nice post!

Just a quick question: when testing the residuals of an OLS regression between two price series for stationarity, would you then consider the two price series to be cointegrated if the H0 was rejected by the ADF test that you ran on the residuals?

Or would you first run the ADF test on each of the price series in order to see if they are I(1) themselves?

Thanks!

Hi Eduardo,

Both. I would check both the input data and the residuals.

Can you help with material for this project topic: comparison of different methods of making time series data stationary? You can inbox me at adedayo.temmy@yahoo.com. Thanks.

Hi Jason,

Thanks for the reply!

I asked this because of a “common sense” (maybe not) assumption that price series would not be, per se, stationary by definition, so sometimes I ask myself if this kind of testing isn’t a little too much.

Best Regards!

I’m high on ML methods for time series over linear methods like ARIMA, but one really important consideration is stationarity.

Trend removal and exploring seasonality specifically is a big deal otherwise ML methods blow-up for the same reasons as linear methods.

I’d like to do a whole series of posts on stationarity.

Hi. What kinds of methods work well for checking whether a time series is:

1. trendy

2. stationary

3. has seasonality

I would like to forecast time series via RNN, but to get more accurate results I need to first check all these 3 characteristics.

Data visualization is a good start.

beside using log, do you consider to use panda.diff()?

Thanks Cipher, it beats doing the difference manually.
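For readers following along, a minimal sketch of pandas first differencing on toy values:

```python
import pandas as pd

# First differencing with pandas: each value minus the previous value.
s = pd.Series([112.0, 118.0, 132.0, 129.0, 121.0])
diff = s.diff().dropna()  # the first element is NaN, so drop it
print(diff.tolist())  # [6.0, 14.0, -3.0, -8.0]
```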

great article! easy to understand. simplicity is powerful. and all those good stuff.

Thanks Seine. I’m glad you found it useful.

“We interpret this result using the p-value from the test. A p-value below a threshold (such as 5% or 1%) suggests we accept the null hypothesis (non-stationary), otherwise it suggests we reject the null hypothesis (stationary)”

Shouldn’t this be the opposite of what you have stated. Please see http://stats.stackexchange.com/questions/55805/how-do-you-interpret-results-from-unit-root-tests

Hi Jack,

Yes, that is a typo, fixing now and I made it clearer with some more bullet points. All of the analysis in the post is correct.

In summary, the null hypothesis (H0) is that there is a unit root (the autoregression is non-stationary). Rejecting it means no unit root, and in turn a stationary series.

A p-value above the significance threshold means we cannot reject H0; we accept that there is a unit root and that the data is non-stationary.

Hi Jason, thank you for the great article, but i think but you fixed the typo in the opposite way, if i’m not mistaken it’s more like ” We interpret this result using the p-value from the test. A p-value below a threshold (such as 5% or 1%) suggests we accept the null hypothesis (stationary), otherwise it suggests we reject the null hypothesis (non-stationary) “

My bad it’s correct, I confused it with the H0 of stationar tests

No problem. It is confusing.

I performed the Augmented Dickey-Fuller test on my own data set. My result are as follows:

```
ADF Statistic: -34.360229
p-value: 0.000000
Critical Values:
	1%: -3.430
	5%: -2.862
	10%: -2.567
```

So my time series are stationary. In my example, I have 525600 values giving me a maxlag of 102. These are minute data for one month. But I don’t understand how so few lags can detect e.g a daily variation?

Now when I calculate the distribution of occurrence frequency, there is clearly a time dependence on UT on hourly binned data. I have a higher number of samples, for a certain range of values, at around 15 UT compared to other times. So I have a UT dependence in my data, but it is still stationary. So I guess one should be careful when using this test. In my case there is a UT dependency on the number of values, at a certain level, rather than the value itself. How to deal with this? One idea, perhaps, is to add sine and cosine of time to the inputs. Any comments on this?

Interesting. Perhaps it would be worth performing a stationary test at different time scales?

Yes, I did. Same result. I also performed the test on the sunspot number, from one of your earlier posts. I then got this result:

```
ADF Statistic: -9.567668
p-value: 0.000000
Critical Values:
	1%: -3.433
	5%: -2.863
	10%: -2.567
```

Now, I am really confused. I also did a test on artificial data from a sine function with normally distributed data added to it. Now the test gave a p-value of 0.07, but from the plot it was very obvious the data is non-stationary. So I really suggest to use the group by process in Pandas and plot the data.

Another approach, instead of removing seasonality is the following. If only the target values, used for training a prediction model, are non-stationary, then it might be easier to add sine/cosine of time to the inputs. Of course, the input space increases but there is no need to create time-lagged data for these inputs.

I appreciate any comments and suggestions.

I’m dubious about your results.

I have found the test to be reliable.

Perhaps the version of the statsmodels library is out of date, or perhaps the data you have loaded does not match your expectation?

The problem is the way you are printing out the results. Can you just print the whole variable like this

print(result)

or something like this

print('ADF Statistic: {}'.format(result[0]))

I was having the same problem, but changing the printing format, fixed it for me.

```
ADF Statistic: -12.851066
p-value: 0.000000
Critical Values:
	1%: -3.431
	5%: -2.862
	10%: -2.567
```

```
Results of Dickey-Fuller Test:
Test Statistic                -1.152597e+01
p-value                        3.935525e-21
#Lags Used                     2.300000e+01
Number of Observations Used    1.417000e+03
Critical Value (5%)           -2.863582e+00
Critical Value (1%)           -3.434973e+00
Critical Value (10%)          -2.567857e+00
dtype: float64
```

Hi, I want to forecast temperature from my time series dataset. The Dickey-Fuller test in Python gives me the above results, which show the test statistic is larger than any of the critical values, meaning the time series is not stationary even after taking transformations. So, can I forecast with the time series being non-stationary?

You can, but consider another round of differencing.

What code did you use to get this, Joy? I’m trying to get results like that but I only get the graph

Does the statsmodels Python library require us to convert the series into a stationary series before feeding it to any of the ARMA or ARIMA models?

Ideally, I would. The model can difference to address trends, but I would recommend explicitly pre-processing the data beforehand. This will help you better understand your problem/data.

Great article, you make these topics understandable.

I started testing some series for stationarity and got strange behaviors I cannot understand.

In Python (3.6), ADF gives very different results for linear sequences of 100 and 101 items:

```python
from statsmodels.tsa.stattools import adfuller

adfuller(range(100))
adfuller(range(101))
```

These give ADF statistics of +2.59 and -4.23.

I’d expect both results to be very close to each other. Neither series is stationary, as both express the same trend. But the test is positive in one case and negative in the other.

What is wrong?

I would not worry, focus on the test (e.g. the value relative to critical value), not the value itself.

Thanks for the quick reply.

But this is precisely my problem: with a slight change in the number of observations of a series of constant slope/trend of +1, the test swings entirely from non-stationary to stationary for a reason I fail to understand.

```python
from statsmodels.tsa.stattools import adfuller

X = range(100)
result = adfuller(X)
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
for key, value in result[4].items():
    print('\t%s: %.3f' % (key, value))
```

```
ADF Statistic: 2.589283
p-value: 0.999073
	1%: -3.505
	5%: -2.894
	10%: -2.584
```

```python
X = range(101)
result = adfuller(X)
print('ADF Statistic: %f' % result[0])
print('p-value: %f' % result[1])
for key, value in result[4].items():
    print('\t%s: %.3f' % (key, value))
```

```
ADF Statistic: -4.232578
p-value: 0.000580
	1%: -3.504
	5%: -2.894
	10%: -2.584
```

Ah I see. It might be a case of requiring a critical minimum amount of data for the statistical test to be viable.

Thanks for sharing the knowledge!

Quick questions, if you don’t mind: I would like to test a few trading strategies on ETFs. It looks obvious that these time series are non-stationary.

how does one go about converting them to stationary?

I would like to use Technical Indicators (which input are prices) as features in my model. What shall I do to the features?

my objective is not to predict price but to classify into “buy/sell” (or hold).

any algo better suited for financial time series?

Thank you!

You can use differencing and seasonal adjustment. I have posts on both methods, use the search feature.

Actually, when the ADF Statistic < critical value, the series is stationary. Comparing the p-value with the critical value is not right and is confusing. See the adfuller API explanation at http://www.statsmodels.org/dev/generated/statsmodels.tsa.stattools.adfuller.html.

I don’t think we are comparing the p-value to anything in this post, I believe are reviewing the test statistic.

I see. If ADF Statistic < critical value, or p-value < threshold, then the series is stationary. The threshold is 0.05, etc.

Perhaps I’m dense, but where exactly? Can you quote the text?

I note that I describe how to interpret p-values separately from interpreting the test statistic.

From your blog.

p-value <= 0.05: Reject the null hypothesis (H0), the data does not have a unit root and is stationary.

And there is another explanation based on critical value.

https://www.analyticsvidhya.com/blog/2016/02/time-series-forecasting-codes-python/

So there are two ways of considering the adf result, using p-value or using critical value.

In that section, I was introducing the meaning of the p-value, not how to interpret the test. Sorry for the confusion.

I performed the Dickey–Fuller test and got 1 as the p-value. Then I performed a Box-Cox transform, which decreased the p-value to 0.96. Then I performed seasonal differencing and the p-value decreased to 0.0000. After this, I built an LSTM neural network and trained it. Now, I want to compare results in the original scale and in the transformed scale. I found the scipy.special.inv_boxcox() function, which does the inverse transformation, but for me it is not working. What can be wrong?

Perhaps you can experiment on some test data separate from your model. Transform and then inverse transform.

Remember all operations need to be reversed, including the seasonal adjustment.
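A minimal round-trip sketch along those lines, using SciPy's boxcox and inv_boxcox on toy values:

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

# Round trip on toy data: transform, then invert with the same lambda.
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y, lam = boxcox(x)             # lam is the lambda fitted by boxcox
restored = inv_boxcox(y, lam)  # must reuse the same lambda
print(np.allclose(restored, x))  # True
```

A common mistake is inverting with a different lambda than the one the forward transform used.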

Excellent!

Thanks.

I’m very interested in your articles Jason, I have a question.

If I train my model using the residual data ( removing seasonal and trends), what about the predicted values how we get the correct values ( how to add seasonally and trends again) .

I hope that I can present my question correctly. sorry for my poor English.

additional remark,

Why you don’t use the package “statsmodels” to decompose the time series. I mean the issue discussed here:

https://stackoverflow.com/questions/20672236/time-series-decomposition-function-in-python

If you remove the trend and seasonality prior to modeling, you can add them back to the prediction.

If you used differencing, invert the differencing. If you used a model, invert the application of the model.

I have many examples on the blog of this.
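For example, a first difference can be inverted by cumulatively summing the differences and adding back the first observation (toy values below; seasonal differencing is analogous, adding back the observation one season earlier):

```python
import numpy as np

# Invert a first difference: cumulative sum of the differences,
# shifted by the first observation.
x = np.array([10.0, 12.0, 15.0, 14.0])
d = np.diff(x)  # [2.0, 3.0, -1.0]
restored = np.concatenate(([x[0]], x[0] + np.cumsum(d)))
print(restored.tolist())  # [10.0, 12.0, 15.0, 14.0]
```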

Hi Jason, as far as I know adfuller is a statistical test for a random walk, and H1 means not a random walk. Above you reveal that H1 also means a stationary time series; is every non-random-walk series stationary?

The test checks for a unit root:

https://en.wikipedia.org/wiki/Augmented_Dickey%E2%80%93Fuller_test

A random walk is not stationary.

I think it is worth mentioning that to apply the ADF test, we assume that the time series follows a given model (dy(t)=cst+gamma*y(t-1)+….+e(t)) where e(t) is the error term that is supposed to be a white noise.

So if you reject H0, and that indeed the error term is white noise and that gamma<1 then the model will be stationary (I think in the strict sense – cf https://people.maths.bris.ac.uk/~magpn/Research/LSTS/TOS.html for the different definitions of stationarity)

Thanks for the note.

```
Traceback (most recent call last):
  File "shampoo.py", line 20, in
    result = adfuller(X)
  File "/home/denis/.local/lib/python3.5/site-packages/statsmodels/tsa/stattools.py", line 221, in adfuller
    xdall = lagmat(xdiff[:, None], maxlag, trim='both', original='in')
  File "/home/denis/.local/lib/python3.5/site-packages/statsmodels/tsa/tsatools.py", line 397, in lagmat
    nobs, nvar = xa.shape
ValueError: too many values to unpack (expected 2)
```

I have some suggestions here:

https://machinelearningmastery.com/faq/single-faq/why-does-the-code-in-the-tutorial-not-work-for-me

Hi Jason,

Nice tutorial! I’m just starting out with time series data so I’m wondering, if my data doesn’t pass stationarity tests, then I cannot use time series analysis on it, is that right? Can RNNs model time series that are non-stationary?

You can, but results might not be good.

RNNs do seem to perform better with a stationary series in my experience.

Hi,

I guess trend-stationary series is the one that has a trend but no unit root. Correct me if I am wrong .

Reference : http://in.mathworks.com/help/econ/trend-stationary-vs-difference-stationary.html

Hi Jason,

I have a small question: when you are working with the ADF test and the result suggests stationary, is it difference stationary, stationary in increments, or normal weak stationarity of the time series?

The statistical test is reporting a likelihood of being stationary rather than a fact.

I am working with a time series that has multiple random measurements for every moment in time. In other words, my X series includes sets of distinct measured values for every timestamp.

Will the described procedure and code work for me as is? Should I be sorting such X or not?

You would need to work with each time series (variable) separately.

Thank you for your explanations! It is quite interesting.

After the evaluation of the model, how can we visualize the predicted data to complete the available database

thanks

Thanks. You can use the matplotlib plot() function to plot yhat vs y

Hi Jason. In the example above you used ADF to test whether the Gaussian normally distributed sample is stationary. 1) Any hints on what to do if we try to model a process that shows a non-Gaussian distribution? 2) Can we still make inferences about stationarity based on means/variances of two subsamples from a non-Gaussian process? 3) Could you please point me to a reference with a nice description of how to test for stationarity in non-normal samples and how to model such time series? Thanks!

Good question, using a data visualization is always a great fall-back.

I need codes for Bai Perron test,KPSS test and Phillips Perron test

Perhaps try a google search?

Hi, this is a very interesting tutorial. Thanks a lot.

I have 1000+ different time-series datasets in the format of (year, number) and need to forecast the values for each dataset for the next 5 years. As I have a lot of datasets, I would like to know if there is a way to automate the aforementioned stationarity check step, so that I can directly perform the ARIMA process? Or is there any other algorithm that you would recommend?

Perhaps difference all datasets before modeling?

Thanks a lot for the suggestion. Did you mean performing ‘log’ as ‘difference’? And after that using the p-value of Augmented Dickey-Fuller test to decide the stationary?

Just curious to know, does performing ‘log’ guarantee that you have a stationary time-series dataset?

Log and other power transforms can calm an increasing/changing variance and make the data distribution more Gaussian.

The log transform does not work, so what can we do next in this situation?

Thanks.

What do you mean it does not work?

The truth is that the time series is still non-stationary. How can we get a stationary time series?

You can remove the trend by differencing, you can remove the seasonality by seasonal differencing.

I have posts on both topics, perhaps start here:

https://machinelearningmastery.com/start-here/#timeseries