How To Backtest Machine Learning Models for Time Series Forecasting

k-fold Cross Validation Does Not Work For Time Series Data and
Techniques That You Can Use Instead.

The goal of time series forecasting is to make accurate predictions about the future.

The fast and powerful methods that we rely on in machine learning, such as using train-test splits and k-fold cross validation, do not work in the case of time series data. This is because they ignore the temporal components inherent in the problem.

In this tutorial, you will discover how to evaluate machine learning models on time series data with Python. In the field of time series forecasting, this is called backtesting or hindcasting.

After completing this tutorial, you will know:

  • The limitations of traditional methods of model evaluation from machine learning and why evaluating models on out of sample data is required.
  • How to create train-test splits and multiple train-test splits of time series data for model evaluation in Python.
  • How walk-forward validation provides the most realistic evaluation of machine learning models on time series data.

Let’s get started.

Photo by NASA, some rights reserved.

Model Evaluation

How do we know how good a given model is?

We could evaluate it on the data used to train it. This would be invalid. It might provide insight into how the selected model works, and even how it may be improved. But, any estimate of performance on this data would be optimistic, and any decisions based on this performance would be biased.


It is helpful to take it to an extreme:

A model that memorized the timestamp and value of each observation
would achieve perfect performance.

All real models we prepare will report a pale version of this result.

When evaluating a model for time series forecasting, we are interested in the performance of the model on data that was not used to train it. In machine learning, we call this unseen or out of sample data.

We can do this by splitting up the data that we do have available. We use some to prepare the model and we hold back some data and ask the model to make predictions for that period. The evaluation of these predictions will provide a good proxy for how the model will perform when we use it operationally.

In applied machine learning, we often split our data into a train and a test set: the training set used to prepare the model and the test set used to evaluate it. We may even use k-fold cross validation that repeats this process by systematically splitting the data into k groups, each given a chance to be the held-out test set.

These methods cannot be directly used with time series data.

This is because they assume that there is no relationship between the observations, that each observation is independent.

This is not true of time series data, where the time dimension of observations means that we cannot randomly split them into groups. Instead, we must split data up and respect the temporal order in which values were observed.

In time series forecasting, this evaluation of models on historical data is called backtesting. In some time series domains, such as meteorology, this is called hindcasting, as opposed to forecasting.

We will look at three different methods that you can use to backtest your machine learning models on time series problems. They are:

  1. Train-Test split that respects the temporal order of observations.
  2. Multiple Train-Test splits that respect the temporal order of observations.
  3. Walk-Forward Validation, where a model may be updated each time step as new data is received.

First, let’s take a look at a small univariate time series dataset that we will use as context to understand these three backtesting methods: the Sunspot dataset.


Monthly Sunspot Dataset

This dataset describes a monthly count of the number of observed sunspots for just over 230 years (1749-1983).

The units are a count and there are 2,820 observations. The source of the dataset is credited as Andrews & Herzberg (1985).

Below is a sample of the first 5 rows of data, including the header row.
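The first rows of the Zuerich monthly sunspot numbers, as distributed by Data Market, look like this (verify against your downloaded copy):

```
"Month","Sunspots"
"1749-01",58.0
"1749-02",62.6
"1749-03",70.0
"1749-04",55.7
"1749-05",85.0
```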

Below is a plot of the entire dataset taken from Data Market.

Monthly Sunspot Dataset

The dataset shows seasonality with large differences between seasons.

Download and learn more about the dataset here.

Download the dataset and save it into your current working directory with the filename “sunspots.csv“.

Load Sunspot Dataset

We can load the Sunspot dataset using Pandas.
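A minimal loading sketch, assuming the file was saved as "sunspots.csv" in the current working directory as described above (the tiny inline fallback sample is there only so the snippet still runs without the download):

```python
from io import StringIO
from pandas import read_csv

try:
    # Load the dataset saved earlier; the first column holds the month.
    series = read_csv('sunspots.csv', header=0, index_col=0,
                      parse_dates=True).squeeze('columns')
except FileNotFoundError:
    # Fallback sample in the same format, so the example still runs.
    sample = '"Month","Sunspots"\n"1749-01",58.0\n"1749-02",62.6\n"1749-03",70.0\n'
    series = read_csv(StringIO(sample), header=0, index_col=0,
                      parse_dates=True).squeeze('columns')

print(series.head())
```

Calling `series.plot()` (with matplotlib installed) produces the line plot of the full dataset.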

Running the example prints the first 5 rows of data.

The dataset is also plotted.

Plot of the Sunspot Dataset

Train-Test Split

You can split your dataset into training and testing subsets.

Your model can be prepared on the training dataset and predictions can be made and evaluated for the test dataset.

This can be done by selecting an arbitrary split point in the ordered list of observations and creating two new datasets. Depending on the amount of data you have available and the amount of data required, you can use splits of 50-50, 70-30 and 90-10.

It is straightforward to split data in Python.

After loading the dataset as a Pandas Series, we can extract the NumPy array of data values. The split point can be calculated as a specific index in the array. All records up to the split point are taken as the training dataset and all records from the split point to the end of the list of observations are taken as the test set.

Below is an example of this in Python using a split of 66-34.
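The listing below uses a synthetic stand-in array of the same length as the sunspot series; with the real data, X would be `series.values` from the loaded dataset.

```python
from numpy import arange

X = arange(2820)  # stand-in for series.values (2,820 monthly observations)

# Split at 66% of the observations, preserving temporal order.
train_size = int(len(X) * 0.66)
train, test = X[0:train_size], X[train_size:len(X)]
print('Observations: %d' % (len(X)))
print('Training Observations: %d' % (len(train)))
print('Testing Observations: %d' % (len(test)))
```

With 2,820 observations, this yields 1,861 for training and 959 for testing.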

Running the example prints the size of the loaded dataset and the size of the train and test sets created from the split.

We can show this split visually by plotting the training and test sets using different colors.
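A plotting sketch, again using a stand-in array (with the real data, X would be `series.values`):

```python
from numpy import arange
from matplotlib import pyplot

X = arange(2820)  # stand-in for series.values
train_size = int(len(X) * 0.66)
train, test = X[0:train_size], X[train_size:len(X)]

# Plot the two contiguous pieces in different colors.
pyplot.plot(range(0, len(train)), train, color='blue')
pyplot.plot(range(len(train), len(X)), test, color='green')
pyplot.show()
```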

Running the example plots the training dataset as blue and the test dataset as green.

Sunspot Dataset Train-Test Split

Using a train-test split method to evaluate machine learning models is fast. Preparing the data is simple and intuitive and only one model is created and evaluated.

It is useful when you have a large amount of data, so that both the training and test sets are representative of the original problem.

Next, we will look at repeating this process multiple times.

Multiple Train-Test Splits

We can repeat the process of splitting the time series into train and test sets multiple times.

This will require multiple models to be trained and evaluated, but this additional computational expense will provide a more robust estimate of the expected performance of the chosen method and configuration on unseen data.

We could do this manually by repeating the process described in the previous section with different split points.

Alternately, the scikit-learn library provides this capability for us in the TimeSeriesSplit object.

You must specify the number of splits to create, and the TimeSeriesSplit object returns the indexes of the train and test observations for each requested split.

The number of training and test observations is calculated at each split iteration (i) as follows:
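A sketch of the arithmetic, ignoring scikit-learn's exact handling of any remainder (which can shift a given train size by an observation or two):

```python
def split_sizes(i, n_samples, n_splits):
    # Approximate sizes at split iteration i (1-based).
    train = i * n_samples / (n_splits + 1)
    test = n_samples / (n_splits + 1)
    return train, test

print(split_sizes(1, 100, 2))  # roughly 33 train, 33 test
print(split_sizes(2, 100, 2))  # roughly 67 train, 33 test
```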

Where n_samples is the total number of observations and n_splits is the total number of splits.

Let’s make this concrete with an example. Assume we have 100 observations and we want to create 2 splits.

For the first split, the train and test sizes would be calculated as:
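Substituting i=1, n_samples=100, and n_splits=2 (ignoring remainder handling):

```
train = 1 * 100 / (2 + 1) = 33.3, or 33 observations
test  =     100 / (2 + 1) = 33.3, or 33 observations
```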

Or the first 33 records are used for training and the next 33 records are used for testing.

The second split is calculated as follows:
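Substituting i=2, n_samples=100, and n_splits=2 (again ignoring remainder handling):

```
train = 2 * 100 / (2 + 1) = 66.7, or 67 observations
test  =     100 / (2 + 1) = 33.3, or 33 observations
```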

Or, the first 67 records are used for training and the remaining 33 records are used for testing.

You can see that the test size stays consistent. This means that performance statistics calculated on the predictions of each trained model will be consistent and can be combined and compared. It provides an apples-to-apples comparison.

What differs is the number of records used to train the model at each split, offering a larger and larger history to work with. This may make for an interesting aspect of the analysis of results. Alternately, this too could be controlled by holding the number of observations used to train the model constant, using only the same number of the most recent (last) observations in the training dataset at each split, 33 in this contrived example.

Let’s look at how we can apply the TimeSeriesSplit on our sunspot data.

The dataset has 2,820 observations. Let’s create 3 splits for the dataset. Using the same arithmetic above, we would expect the following train and test splits to be created:

  • Split 1: 705 train, 705 test
  • Split 2: 1,410 train, 705 test
  • Split 3: 2,115 train, 705 test

As in the previous example, we will plot the train and test observations using separate colors. In this case, we will have 3 splits, so that will be 3 separate plots of the data.
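A sketch of the whole procedure, once more with a stand-in array (with the real data, X would be `series.values`):

```python
from numpy import arange
from sklearn.model_selection import TimeSeriesSplit
from matplotlib import pyplot

X = arange(2820)  # stand-in for series.values
splits = TimeSeriesSplit(n_splits=3)
index = 1
for train_index, test_index in splits.split(X):
    train, test = X[train_index], X[test_index]
    print('Observations: %d' % (len(train) + len(test)))
    print('Training Observations: %d' % (len(train)))
    print('Testing Observations: %d' % (len(test)))
    # One subplot per split: train in blue, test in green.
    pyplot.subplot(310 + index)
    pyplot.plot(range(0, len(train)), train, color='blue')
    pyplot.plot(range(len(train), len(train) + len(test)), test, color='green')
    index += 1
pyplot.show()
```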

Running the example prints the number and size of the train and test sets for each split.

We can see the number of observations in each of the train and test sets for each split match the expectations calculated using the simple arithmetic above.

The plot also shows the 3 splits and the growing number of total observations in each subsequent plot.

Sunspot Dataset Multiple Train-Test Split

Using multiple train-test splits will result in more models being trained, and in turn, a more accurate estimate of the performance of the models on unseen data.

A limitation of the train-test split approach is that the trained models remain fixed as they are evaluated on each observation in the test set.

This may not be realistic as models can be retrained as new daily or monthly observations are made available. This concern is addressed in the next section.

Walk Forward Validation

In practice, we very likely will retrain our model as new data becomes available.

This would give the model the best opportunity to make good forecasts at each time step. We can evaluate our machine learning models under this assumption.

There are a few decisions to make:

1. Minimum Number of Observations. First, we must select the minimum number of observations required to train the model. This may be thought of as the window width if a sliding window is used (see next point).
2. Sliding or Expanding Window. Next, we need to decide whether the model will be trained on all data it has available or only on the most recent observations. This determines whether a sliding or expanding window will be used.

After a sensible configuration is chosen for your test-setup, models can be trained and evaluated.

  1. Starting at the beginning of the time series, the minimum number of samples in the window is used to train a model.
  2. The model makes a prediction for the next time step.
  3. The prediction is stored or evaluated against the known value.
  4. The window is expanded to include the known value and the process is repeated (go to step 1).

Because this methodology involves moving along the time series one time step at a time, it is often called Walk Forward Testing or Walk Forward Validation. Additionally, because a sliding or expanding window is used to train a model, this method is also referred to as Rolling Window Analysis or a Rolling Forecast.

This capability is currently not available in scikit-learn, although you could contrive the same effect with a carefully configured TimeSeriesSplit.

Below is an example of how to split data into train and test sets using the Walk Forward Validation method.
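A sketch with an expanding window; the minimum of 500 observations is a hypothetical choice, and with the real data X would be `series.values`:

```python
from numpy import arange

X = arange(2820)  # stand-in for series.values
n_train = 500     # hypothetical minimum number of observations
n_records = len(X)
for i in range(n_train, n_records):
    # Expanding window: all history so far trains, the next step tests.
    train, test = X[0:i], X[i:i+1]
    print('train=%d, test=%d' % (len(train), len(test)))
```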

Running the example simply prints the size of the training and test sets created. We can see the train set expanding each time step and the test set fixed at one time step ahead.

Within the loop is where you would train and evaluate your model.

You can see that many more models are created.

This has the benefit again of providing a much more robust estimation of how the chosen modeling method and parameters will perform in practice. This improved estimate comes at the computational cost of creating so many models.

This is not expensive if the modeling method is simple or the dataset is small (as in this example), but could be an issue at scale. In the above case, up to 2,820 models (one per time step) would be created and evaluated.

As such, careful attention needs to be paid to the window width and window type. These could be adjusted to contrive a test harness on your problem that is significantly less computationally expensive.

Walk-forward validation is the gold standard of model evaluation. It is the k-fold cross validation of the time series world and is recommended for your own projects.

Summary


In this tutorial, you discovered how to backtest machine learning models on time series data with Python.

Specifically, you learned:

  • About the importance of evaluating the performance of models on unseen or out-of-sample data.
  • How to create train-test splits of time series data, and how to create multiple such splits automatically.
  • How to use walk-forward validation to provide the most realistic test harness for evaluating your models.

Do you have any questions about evaluating your time series model or about this tutorial?
Ask your questions in the comments below and I will do my best to answer.


95 Responses to How To Backtest Machine Learning Models for Time Series Forecasting

  1. Michael December 19, 2016 at 7:57 pm #


    second link from “Further Reading” should probably point to instead of, which is not found

  2. SalemAmeen December 19, 2016 at 9:32 pm #

    Many thanks, it is short and full of information.

  3. Shreyak Tiwari December 20, 2016 at 3:34 am #

    For walk-forward validation it will consume a lot of time to validate after each single iteration and even results won’t be much different between each iteration. Better way would be to increase h steps in each iteration and divide train and test data in that manner. Train data could be added for each h steps and test data could be for h steps for each iteration rather than single observation. This is just my suggestion from my point of view. No hard rules here.

    • Jason Brownlee December 20, 2016 at 7:27 am #

      Hi Shreyak,

      Yes, that would be a sampled version of walk-forward validation, a subset.

      This is pretty much what the multiple train-test splits provides in the sklearn TimeSeriesSplit object – if I understood you correctly.

  4. Saurabh Bhagvatula December 27, 2016 at 5:18 am #

    My query is related to walk forward validation:

    Suppose a time series forecasting model is trained with a set of data and gives a good evaluation with test-set in time_range-1 and model produces a function F1. For time_range-2 and another set of training and testing data model generates function F2. Similarly for time_range-N the model generate Function FN. How the different models when combined and implemented forecast the result based on forecasting function based of local model and not the combined model of all time range model, which may possibly be producing error in forecasting.

    • Jason Brownlee December 27, 2016 at 5:26 am #

      Hi Saurabh,

      Sorry, I don’t quite understand the last part of your question. Are you able to restate it?

  5. Ram January 20, 2017 at 12:11 am #

    I am just going through your posts on Time Series. Are you using any particular resource as a reference material for these things ?

  6. Ian February 11, 2017 at 10:01 am #

    Hi Jason

    Thanks so much for this in-depth post. My question is:
    Which performance measure should we use in selecting the model?
    For example, if I add one test subset at a time in a binary(1, 0) classification problem, the accuracy would be either 1 or 0.
    In this case, how should I select a model? Should I use other measures instead?

    I am building my model as stock price classification where 1 represents up, and 0 means down. I use TimeSeriesSplit and divide into T (sample size) – m (rolling window) + 1.

    Thanks a lot and I look forward to listening your insights!

    • Jason Brownlee February 12, 2017 at 5:33 am #

      Hi Ian,

      This is a problem specific question.

      Perhaps classification accuracy on the out of sample dataset would be a good way to pick a model in your case?

      • Ian February 12, 2017 at 9:22 am #


        Thanks so much for answering.
        If we walk one step forward every time just like what you illustrate in the Walk Forward Validation, doesn’t that mean the test dataset come from out of sample?

        Hope this is not too problem specific, and thanks again in advance.

        • Jason Brownlee February 13, 2017 at 9:11 am #

          Hi Ian,

          Walk forward validation is a method for estimating the skill of the model on out of sample data. We contrive out of sample and each time step one out of sample observation becomes in-sample.

          We can use the same model in ops, as long as the walk-forward is performed each time a new observation is received.

          Does that make sense?

  7. Magnus March 4, 2017 at 2:39 am #

    Thanks Jason for an informative post!
    If the time series is very long, e.g. minute values for 10 years, it will take a very long time to train. As I understand you, another way to do this is to fix the length of the training set, e.g. 2 years, but just move it, like this:
    Split 1: year 1+2 train, year 3 test
    Split 2: year 2+3 train, year 4 test

    Split 8: year 8+9 train, year 10 test
    Is this correct and valid?

    • Jason Brownlee March 6, 2017 at 10:51 am #

      Sounds good to me.

      Also consider how valuable the older data is to fit the model. It is possible data from 10 years ago is not predictive of today, depends on the problem of course.

  8. marwa May 11, 2017 at 12:14 am #

    Thank you for your post Jason.

    I would like to ask you which model we will chose if we have implementation purpose.
    In fact, for example if the time series is hour values of 3 years, walk forward could be applied in this way:

    Split 1: year 1 train, year 2 test and we will get model1, error of prediction 1
    Split 2: year 1+2 train, year 3 test and we will get model2, error of prediction 2

    which model should we then choose ?

    • Jason Brownlee May 11, 2017 at 8:32 am #

      Great question.

      Pick the model that best represents the performance/capability required for your application.

      • Elie Kawerk June 24, 2017 at 8:14 pm #


        I think that when Marwa mentioned ‘models’, she meant applying the same model (such as ARMA) on different data (corresponding to the expanding window).
        I think that the walk-forward method, just like k-fold CV, gives an array of metrics whose mean somehow corresponds to the true skill of the model.

        I think that when this mean is evaluated, the model should be trained on the entire dataset (check Practical Time Series Forecasting with R- Shmueli ) just like with K-fold CV.

        Please correct me if I am wrong.


        • Jason Brownlee June 25, 2017 at 6:01 am #

          Walk forward validation will give a mean estimate of the skill of the model.

          Walk forward validation requires some portion of the data be used to fit the model and some to evaluate it, and the portion for evaluation is stepped to be made available to training as we “walk forward”. We do not train on the entire training dataset, if we did and made a prediction, what would we compare the prediction to in order to estimate the skill of the model?

  9. Shifen June 29, 2017 at 5:46 pm #

    Dear Jason,
    Thanks so much for this in-depth post. My question is:
    If my time series are discontinuous(such as two weeks in March and two weeks in September), How should I divide the data set?
    If I use time series as supervised learning, it could lead to a sample containing data for March and September.
    This question has puzzled me for a long time and I look forward to hearing from you.

    • Jason Brownlee June 30, 2017 at 8:10 am #

      I don’t have a good answer.

      Perhaps try to fill in the missing time with 0s or nans.
      Perhaps try to ignore the missing blocks.
      Perhaps focus on building a model at a lower scale (month-wise).

  10. Nick July 7, 2017 at 12:57 pm #

    Hey Jason, can you comment on Rob Hyndman’s paper stating that CV can, in fact, be used for time-series data?

  11. Daniel July 14, 2017 at 7:54 am #

    Is there a way to store the model fit values in such a way that we can update the model after every iteration instead of recreate an entirely new one?
    My dataset has 55,000 samples and I want to run a test set of 5,000, but recreating 5,000 models would take roughly 80 hours. Thanks.

  12. Huzefa Barwaniwala October 4, 2017 at 3:51 am #

    Hi, Jason

    Thanks a lot for this post, I have recently gone through many for your blog post on time series forecasting and found it quite informative; especially the post on feature engineering for time series so it can be tackled with supervised learning algorithms.

    Now, if I have a time series data for demand forecasting, and I have used a lot of feature engineering on the ‘date’ variable to extract all the seasonality, for example, day, month, the day of the week, if that day was a holiday, quarter, season, etc. I have also used some FE on the target variable to create lag features, min, max, range, average, etc.

    My question to you is: Do I still need to use backtesting/walk-forward validation? Or can I now use a simple k-fold cross validation, since the order of the time series won’t be important?

    Thanks a lot. Keep doing this awesome work.


    • Jason Brownlee October 4, 2017 at 5:50 am #

      It really depends on your data.

      In practice, I do recommend walk-forward validation when working with time series data. It is a great way to make sure you are not tricking yourself.

  13. Huzefa Barwaniwala October 5, 2017 at 3:22 am #


    Thank you for getting back. Yes, I agree with you. One more thing I realized is, I have made lags as a feature, and if in any of the folds of CV future data is used to predict the past, then it will act as target leakage!


  14. Danilo November 4, 2017 at 6:14 am #

    Hi Jason

    Your posts are really amazing. I have learned a lot reading your articles. I really appreciate if you can help me with a doubt regarding backtest and transforming time series to supervised learning.

    May I used backtest, to identify the best lag for transforming time series to supervised learning ?

  15. annesoj November 4, 2017 at 7:49 pm #

    Hi Jason,

    Thank you so much for this post.
    However I will have a question that might seems stupid but…

    This give me a graphical version of the reality (on the train) and of my predictions (on the test). But it is not an evaluation of my model….

    How do I know using those methods, if my models is great or bad?

    Imagine I want to try an ARIMA (5,2) and an ARIMA (6,3). How do I do to pick the best one? How do I evaluate each one using “Walk Forward Validation”????

    To evaluate the first model, I can do the mean of the error, for each split, between the prediction and the real value?

    To pick the best model I can compare those mean between the 2 models?

    Would it be a good evaluation methods?

    Thank you again!

  16. Urfa January 22, 2018 at 10:38 am #

    Hi Jason,

    I have a set of monthly panel data from 2000 to 2015 and I want to predict future values. In detail, I want to predict one month ahead by using a (pooled) rolling regression with a fixed window size of 5 years. (I know, there are better alternatives for panel data like regression with fixed effects, but in my case, with pooled OLS I’m getting accurate predictions.) Regression model looks like this: y_{i,t+1}= b0+ b1*x_{i,t} + b2*x2_{i,t} +… + x10_{i,t} where t is the current month and i is the id.

    Furthermore, I select a new model in every step by using a dynamic model selection. In detail:

    1. Take a fixed windows size of five years and split it into a training and validation set. The first 58 months as training and the month 59 as validation set.

    2. Choose Explanatory Variables or rather a regression model by running a stepwise regression for model selection with the training and validation set and the Average Square Error of the validation set as a criterion.

    3. Take the data from month 60 and the regression model from step 2, to make a forecast for month 61.

    4. Go to step 1 and roll the window one month forward.

    I couldn’t find any literature where you select a new regression model or new explanatory variables at every step of the rolling regression. Do you know if there is any literature on that?

    Thank you!

    • Jason Brownlee January 23, 2018 at 7:48 am #


      Good question. Can’t think of a good source off the top of my head, I would be sniffing around applied stats books/papers or more likely applied economics works.

      • Urfa January 23, 2018 at 8:55 am #

        Thank you!

        Until now I can’t find anything on that approach and I searched several papers and books on that topic. I will keep searching! 🙂

        By the way, does this approach makes sense to you?

        • Jason Brownlee January 24, 2018 at 9:47 am #

          Hang in there.

          Generally, there is no one size fits all approach. Often you need to dive in and try stuff and see what suits the problem/data/project.

  17. Mohammed Helal February 6, 2018 at 5:57 am #

    Correct me if I’m wrong, but it seems to me that TimeSeriesSplit is very similar to the Forward Validation technique, with the exceptions that (1) there is no option for minimum sample size (or a sliding window necessarily), and (2) the predictions are done for a larger horizon.

    PS. Thanks a lot for your posts!

    • Jason Brownlee February 6, 2018 at 9:23 am #

      It is a one-time split, where as walk-forward validation splits on each time step from one point until the end of the dataset.

      Does that help?

  18. Alexis March 2, 2018 at 5:14 pm #

    Hi Jason, I don’t see why TimeSeriesSplit makes such a “complicated” formula to create a test set of constant size. I would rather make it as a proportion of the whole window at the first iteration, and then keep that length for the remaining steps. Would it be correct ?

    • Jason Brownlee March 3, 2018 at 8:06 am #

      Yes, nice one. You have essentially described a variation on walk forward validation.

  19. Tarun March 15, 2018 at 10:31 pm #

    Hi Jason,

    I have a query regarding Walk forward validation of TS. Let’s say I need to forecast for next 3 months (Jan-Mar 18) using last 5 years of data (Jan13-Dec 17).
    In principle I would want to use Walk forward as I would like to see how well the model generalizes to unseen data. I’d use your approach which is:

    1) Set Min no of observations : Jan12-Dec 16
    2) Expanding TEST window : Entire 2017, which means I would forecast next 3 points (Jan-Mar 17) in iteration 1 and in next iteration, Jan 17 becomes part of train and I predict for Feb-mar-April 17.I do it for entire 2017.

    My question is why do I need to RETRAIN the model every time I add 1 data point? Why can’t I just score the next 3 TEST points assuming the model that I have trained before ITR1 is the best one?

    Can’t I select (let’s say) top 5 models from step 1,calculate their Average for all TEST samples (3 months window) and select the one with least RMSE?.

    Eagerly awaiting your reply!

    • Jason Brownlee March 16, 2018 at 6:19 am #

      You can, but the further you get from real obs as inputs, the worse model skill will become.

      This post will give you some additional ideas on performing a multi-step forecast without using obs as inputs:

      • Tarun March 16, 2018 at 10:55 pm #

        Hi Jason,

        Thanks for the reply.

        “the further you get from real obs” by that do you mean to say that I am not retraining my model using real data?

        • Jason Brownlee March 17, 2018 at 8:37 am #

          I mean that the longer the lead time – further you forecast into the future, the less stable/skillful the results.

          • Tarun March 19, 2018 at 4:44 pm #

            Thanks Jason. You are indeed doing a great job.

          • Jason Brownlee March 20, 2018 at 6:11 am #


  20. Ha Pham April 17, 2018 at 1:04 pm #

    Hi Jason,

    Thanks a lot for your post. I am working on a demand forecasting problem for thousands of products, and I only have sales data of two years. Unit of data point can be days, but for now I aggregate into weeks. about 60% of the products have lots of zero and some bursty sales weeks. The rest have more stable sales through out the years. I tried two approaches:
    – Using sales data of previous 4 weeks to train and predict sales of next week
    – Using sales data of year 1 to predict the whole sales data of next year with no update to the model

    My questions:
    – Is there any theoretical error in these approaches? I can clarify a few things more if you need
    – In this post you only talk about one time series. Can this be applied to my case where I have thousands of time series needed to be forecast at the same time?
    – For this kind of problem, which algorithm tend to give best result? Can an out-of-the-box algo like XGBoost do the job? I have browsed through some papers and they introduced different methods like Neural Networks or Bayesian methods, which haven’t touched yet.


    • Jason Brownlee April 17, 2018 at 2:54 pm #

      That sounds like a great problem.

      I’m eager to dive in and offer some advice, but it would be a big time investment for me, I just don’t have the capacity. I hope to have more posts on this topic soon.

      In general, I recommend testing a suite of modeling methods and framings of a problem to help you discover what works best for your specific dataset.

      I’m eager to hear how you go.

  21. Dicky Chou April 23, 2018 at 6:26 pm #

    Hi Jason
    I am a meteorologist currently working on a time series verification problem.
    My colleagues make forecasts every day and I hope to evaluate the accuracy of them.
    I find that there is some time shift between our forecast and the observation. For example, we think it will be raining at 5 am tomorrow. However, the rain happens at 4 or 6. If we use a normal verification method, such as a contingency table, we get a miss and a false alarm. However, I think this evaluation method is inappropriate in this case since the weather conditions at 4 and 5 are not independent; we just miss the temporal attribution of these data. Can you give me some suggestions about how to evaluate this kind of time series data?

  22. Deniz May 16, 2018 at 10:09 am #

    Hi Jason,

    Is using AIC for forecasting a good method? Or should I use cross-validation while building forecasting models?

    • Jason Brownlee May 17, 2018 at 6:21 am #

      It really depends on your project goals and on your specific data.

  23. Mustafa Qamar-ud-Din June 16, 2018 at 7:10 pm #

    Thank you for the informative series. I would probably have to read it again, but could you please tell me whether Sliding Window and Backtest mean the same thing, in the sense that you move the window forward a step at a time?

    • Jason Brownlee June 17, 2018 at 5:39 am #

      Sliding window refers to a way of framing a time series as a supervised learning problem.

      Backtesting is a general term for evaluating a model on historical data.
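
      For example, a minimal sketch of the sliding-window framing (series and window size are made up for illustration):

```python
# Frame a univariate series as a supervised learning problem using a
# sliding window of the previous `n_lags` observations as inputs.
def series_to_supervised(series, n_lags=3):
    X, y = [], []
    for i in range(n_lags, len(series)):
        X.append(series[i - n_lags:i])  # inputs: previous n_lags values
        y.append(series[i])             # output: the next value
    return X, y

X, y = series_to_supervised([10, 20, 30, 40, 50, 60], n_lags=3)
print(X)  # [[10, 20, 30], [20, 30, 40], [30, 40, 50]]
print(y)  # [40, 50, 60]
```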

  24. Anthony The Koala June 20, 2018 at 10:38 am #

    Dear Dr Jason,
    For those having difficulty plotting the data sourced from the site, the following may be helpful before even using Python.

    Even if you imported the file from the website as a CSV file, the trouble is that there are NaN values and extraneous information at the bottom of the spreadsheet. The file requires cleaning; otherwise, Python will produce error messages.
    (1) Open the sunspot.csv file in a spreadsheet program, e.g. MS Excel
    (2) Leave the header at the top of the file alone.
    (3) Scroll down to the very end of the data file (2821 rows down). Delete the rows containing NaN and the text “Zuerich monthly sunspot numbers 1749-1983”.
    (4) Save the file as sunspot.csv in CSV format
    (5) In Python, import the data as usual

    Everything should be OK from that point.

    Thank you,
    Anthony of Sydney

  25. Gautam July 2, 2018 at 1:22 am #

    Hello Jason,
    You have become a one-stop website for machine learning. Thank you for all the efforts!
    I am a little stuck and would like to validate my approach here, if you can:
    I am trying to predict a stock market index using multiple time series: e.g. several commodity indexes besides the targeted index itself. Is this approach terribly wrong? If not, can you please point me to a good starting point? I am really stuck here badly. Appreciate your thoughts.

  26. Gautam July 2, 2018 at 1:25 am #

    Just an additional comment to my previous one: I am trying to design a multi-time-series problem using a supervised ML method such as Random Forest or Elastic Net.

  27. Kingsley Udeh July 12, 2018 at 7:39 pm #

    Hi Jason,

    Thanks as always.

    Please how do I train and evaluate my model within the loop of a Walk Forward Validation approach?

    Within the Walk Forward Validation, after choosing my minimum training size, I created, say, for example:

    for i in range(n_train, len(records)):
        train, test = X_obs[0:i], X_obs[i:i+1]
        # Fit model
        history =, train_y, epochs=1000, batch_size=4192, validation_data=(test_X, test_y), verbose=0, shuffle=False)
        # Evaluate model
        loss = model.evaluate(test_X, test_y, verbose=0)

    At the end, I have 10 different loss or validation scores. Is the last saved model the average of all the 10 models? How do I make predictions and calculate the RMSE for the average model?

    I’m still learning the Walk Forward Validation method and will appreciate your help in guiding me on the right thing to do.

    I look forward to hearing from you soon.

    • Jason Brownlee July 13, 2018 at 7:38 am #

      I recommend not using a validation set when fitting the model. The skill score is calculated based on the predictions made within the walk-forward validation loop.
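
      To illustrate, here is a hypothetical sketch of that loop, with a naive persistence forecast standing in for a fitted model and a made-up series; the skill score comes from the predictions collected inside the loop:

```python
from math import sqrt

# Walk-forward validation: grow the training set one step at a time,
# forecast the next observation, and score all forecasts at the end.
data = [12, 15, 14, 18, 21, 19, 24, 26]  # made-up series
n_train = 4  # minimum training size
predictions, actuals = [], []
for i in range(n_train, len(data)):
    train, test = data[:i], data[i]
    yhat = train[-1]  # persistence forecast stands in for model.predict()
    predictions.append(yhat)
    actuals.append(test)
# Skill is computed over the predictions collected within the loop.
rmse = sqrt(sum((p - a) ** 2 for p, a in zip(predictions, actuals)) / len(predictions))
print('RMSE: %.3f' % rmse)
```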

      • Kingsley Udeh August 3, 2018 at 7:15 am #

        I used a validation set because I wanted to monitor the validation loss with ModelCheckpoint. Thus, I would pick the best model and see how it performs on a new or independent test set.

        In addition, I would use the same approach for the hyperparameter tuning to fit a final model, and compare the final model with the model from ModelCheckpoint.

  28. Abdessamad July 30, 2018 at 7:42 pm #

    Hi Jason,

    Thanks a lot for your post. You said in the Walk Forward Validation section that “In the above case, 2,820 models would be created and evaluated.” Is it not 2,320, since we use the first 500 observations as the minimum?


  29. Prince Grover August 10, 2018 at 8:15 am #

    Hi Jason,

    Thanks for the article. I like the walk forward validation approach. I am currently using the same approach in one of the problem and have a question that I would like to discuss with you.

    Q: How can we make train, validation and test splits using the walk forward validation approach? We generally split data into 3 parts and keep separate test data for the final evaluation. If we are keeping a window of width w and sliding it over the next days, I can use it either to tune hyperparameters or for a final validation score. What about the test score and the generalizability of our model?

    Thanks in advance!

    • Jason Brownlee August 10, 2018 at 2:16 pm #

      Good question.

      Perhaps choose a period over which the model will make predictions, not be updated with true values and the holdout set can be used as validation for tuning the model?

  30. Philip P August 18, 2018 at 12:55 am #


    So, I’m wondering how these folds from Walk Forward Validation would be passed into a Python pipeline or as a CV object into a sklearn model like xgboost. I’ve used GridSearchCV to create the cross-validation folds before. My project at work has sales data for a number of stores each week. I’m creating a model that will predict sales 4 weeks out for each store. Right now, I have a max of 80 weeks of data. I figured I would start with a minimum train size of 52 weeks and test on the next 4 weeks. Each fold would jump 4 weeks ahead. Here, n_train = 52 and max_week = 80. My code and output are below. Thanks so much!

    for i in range(n_train, max_week):
        if i % 4 == 0:
            train, test = df[(df.WeekCount_ID >= 1) & (df.WeekCount_ID <= i)], df[(df.WeekCount_ID > i) & (df.WeekCount_ID <= i + 4)]
            print('train=%d, test=%d' % (len(train), len(test)))

    train=3155, test=260
    train=3415, test=260
    train=3675, test=260
    train=3935, test=260
    train=4195, test=272
    train=4467, test=282
    train=4749, test=287

    • Jason Brownlee August 18, 2018 at 5:40 am #

      Good question.

      I write my own validation and grid search procedures for time series, and it’s my general recommendation to do so in order to have more control. The sklearn tools are not suited to time series data.
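
      As a hypothetical sketch of such a custom procedure (a moving-average forecast and a made-up series stand in for a real model and data), a simple grid search over one parameter using walk-forward evaluation might look like:

```python
from math import sqrt

# Grid search a single hyperparameter (the window length of a
# moving-average forecast) using walk-forward validation rather
# than sklearn's cross-validation tools.
def walk_forward_rmse(data, n_train, window):
    errors = []
    for i in range(n_train, len(data)):
        train = data[:i]
        yhat = sum(train[-window:]) / window  # moving-average forecast
        errors.append((yhat - data[i]) ** 2)
    return sqrt(sum(errors) / len(errors))

data = [12, 15, 14, 18, 21, 19, 24, 26, 23, 27]  # made-up series
scores = {w: walk_forward_rmse(data, n_train=5, window=w) for w in (1, 2, 3)}
best = min(scores, key=scores.get)
print('best window=%d, RMSE=%.3f' % (best, scores[best]))
```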

      • Philip P. August 21, 2018 at 12:56 am #

        Jason, thanks for the quick reply. So, for someone who is learning all of this concurrently (machine learning, time series, python, sql, etc) and not sure how to write my own python procedures, is this custom code of yours something that you cover in any of your books? If not, is this something that you would share or that I could find posted on another forum? Thanks again.

        • Jason Brownlee August 21, 2018 at 6:18 am #

          I give an example of custom code for backtesting in almost every time series example I provide.

          I have a new book coming out in a week or two with many such examples.

  31. Dimos September 20, 2018 at 4:30 pm #

    Hi Jason,

    Amazing tutorial!!!!!!!!

    Let’s assume that i have training data for periods 1-100 and i want to make predictions for periods 101-120. Should i predict the target variable for period 101 and then as an input dataset predict the period 102 etc?

    Many thanks

  32. Tianyu September 24, 2018 at 8:51 pm #

    Hi Jason,
    May I ask two questions?
    1. How to apply early stopping in walk forward validation to select the model in each walk forward step?
    2. I think for time series data, we can convert the time series to a supervised learning problem. As a result, each sample consists of past time step data as input and one target output. Every sample is now independent and no time order exists when using a stateless LSTM for training. We can now shuffle all the samples and split the data into training and validation sets as normal. Correct me if I am wrong.

    • Jason Brownlee September 25, 2018 at 6:20 am #

      You can do early stopping on the fitting of the model prior to making a prediction in each step of the validation process.

      Perhaps. It might depend on the specifics of your domain.

      • Tianyu September 25, 2018 at 8:05 am #

        Thanks for your reply.
        If the model is predicting a classification problem, the accuracy at each step will only be 0 or 1, which cannot be used for validation-based early stopping.

        • Jason Brownlee September 25, 2018 at 2:45 pm #

          Why not?

          • Tianyu September 26, 2018 at 6:18 am #

            Do you mean we can stop training if the accuracy is 1 for 10 epochs? But in that situation, how do we compare two models from two epochs with the same accuracy of 1? I mean, if there are many samples for validation, I can save the best model with the highest val_acc via the checkpoint function from Keras.

          • Jason Brownlee September 26, 2018 at 2:21 pm #

            Not sure I follow.

            Early stopping with time series is hard, but I think it is possible (happy to be proven wrong). Careful attention will need to be paid to exactly what samples are used as the validation set each step.

            I don’t have a worked example, sorry.

  33. Venkata phanikrishna September 30, 2018 at 9:54 pm #

    Hi Jason,
    I am new to ML. I understand the ML topics theoretically, but coming to implementation, it is really very hard for me. Through your website, I did some implementation work. Thanks for your help.

    Coming to my question,
    how to use ML binary classification concepts in case of nonstationary data (Example: EEG data)?

    At present, with the available samples, I train the model using k-fold cross-validation.

    y_pred = cross_val_predict(clf,MyX,MyY,cv=10)

    I get the same results every time.

    But if I shuffle the samples before training using the syntax below, I get different results every time:
    from sklearn.utils import shuffle
    mydataset = shuffle(df1)

    How do I find the best model in such cases?

    • Jason Brownlee October 1, 2018 at 6:25 am #

      It’s not valid to use cross validation for time series data, regression or classification.

      The train/test data must be split in such a way as to respect the temporal ordering: the model is never trained on data from the future and is only ever tested on data from the future.
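
      For example, a minimal sketch of splits that respect that ordering (multiple train-test splits where each model trains only on the past, using a stand-in series):

```python
# Multiple train-test splits that respect temporal ordering:
# each split trains on a growing prefix of the series and tests
# on the block of observations that immediately follows it.
data = list(range(10))  # stand-in for an ordered series of observations
n_splits = 3
fold = len(data) // (n_splits + 1)  # size of each test block
splits = []
for k in range(1, n_splits + 1):
    train, test = data[:fold * k], data[fold * k:fold * (k + 1)]
    splits.append((train, test))
    print('train=%s, test=%s' % (train, test))
```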

      • Will November 13, 2018 at 7:04 am #

        There is a paper by Rob Hyndman which claims that if your problem is purely autoregressive (as it would be when framing an ML problem as a supervised learning problem), then it is in fact valid to use k-fold cross validation on time series, provided the residuals produced by the model are themselves uncorrelated.

        The paper can be found here:

        • Jason Brownlee November 13, 2018 at 7:24 am #

          Nice find, thanks. I’ll have to give it a read.

  34. Yue Lee October 25, 2018 at 2:15 am #

    In this post, it is explained that a time series problem could be reframed as a machine learning one with inputs and outputs. Could we consider in this case that each row is an independent observation and use cross-validation, nested cross-validation or any other method for hyperparameter tuning and validation?

    • Jason Brownlee October 25, 2018 at 8:03 am #

      Nearly. The problem is that rows that contain information about the future in the training set will bias the model.

Leave a Reply