How to Get Started with Deep Learning for Time Series Forecasting (7-Day Mini-Course)

Deep Learning for Time Series Forecasting Crash Course.

Bring Deep Learning methods to Your Time Series project in 7 Days.

Time series forecasting is challenging, especially when working with long sequences, noisy data, multi-step forecasts and multiple input and output variables.

Deep learning methods offer a lot of promise for time series forecasting, such as the automatic learning of temporal dependence and the automatic handling of temporal structures like trends and seasonality.

In this crash course, you will discover how you can get started and confidently develop deep learning models for time series forecasting problems using Python in 7 days.

This is a big and important post. You might want to bookmark it.

Let’s get started.

Photo by Brian Richardson, some rights reserved.

Who Is This Crash-Course For?

Before we get started, let’s make sure you are in the right place.

The list below provides some general guidelines as to who this course was designed for.

You need to know:

  • The basics of time series forecasting.
  • Your way around basic Python, NumPy and Keras for deep learning.

You do NOT need to be:

  • A math wiz!
  • A deep learning expert!
  • A time series expert!

This crash course will take you from a developer who knows a little machine learning to a developer who can bring deep learning methods to your own time series forecasting project.

Note: This crash course assumes you have a working Python 2 or 3 SciPy environment with at least NumPy and Keras 2 installed. If you need help with your environment, you can follow the step-by-step tutorial here:

Crash-Course Overview

This crash course is broken down into 7 lessons.

You could complete one lesson per day (recommended) or complete all of the lessons in one day (hardcore). It really depends on the time you have available and your level of enthusiasm.

Below are 7 lessons that will get you started and productive with deep learning for time series forecasting in Python:

  • Lesson 01: Promise of Deep Learning
  • Lesson 02: How to Transform Data for Time Series
  • Lesson 03: MLP for Time Series Forecasting
  • Lesson 04: CNN for Time Series Forecasting
  • Lesson 05: LSTM for Time Series Forecasting
  • Lesson 06: CNN-LSTM for Time Series Forecasting
  • Lesson 07: Encoder-Decoder LSTM Multi-step Forecasting

Each lesson could take you 60 seconds or up to 30 minutes. Take your time and complete the lessons at your own pace. Ask questions and even post results in the comments below.

The lessons expect you to go off and find out how to do things. I will give you hints, but part of the point of each lesson is to force you to learn where to look for help on deep learning, time series forecasting and the best-of-breed tools in Python (hint, I have all of the answers directly on this blog, use the search box).

I do provide more help in the form of links to related posts because I want you to build up some confidence and inertia.

Post your results in the comments, I’ll cheer you on!

Hang in there, don’t give up.

Note: This is just a crash course. For a lot more detail and 25 fleshed out tutorials, see my book on the topic titled “Deep Learning for Time Series Forecasting“.


Lesson 01: Promise of Deep Learning

In this lesson, you will discover the promise of deep learning methods for time series forecasting.

Generally, neural networks like Multilayer Perceptrons or MLPs provide capabilities that are offered by few algorithms, such as:

  • Robust to Noise. Neural networks are robust to noise in input data and in the mapping function and can even support learning and prediction in the presence of missing values.
  • Nonlinear. Neural networks do not make strong assumptions about the mapping function and readily learn linear and nonlinear relationships.
  • Multivariate Inputs. An arbitrary number of input features can be specified, providing direct support for multivariate forecasting.
  • Multi-step Forecasts. An arbitrary number of output values can be specified, providing direct support for multi-step and even multivariate forecasting.

For these capabilities alone, feedforward neural networks may be useful for time series forecasting.

Your Task

For this lesson you must suggest one capability of Convolutional Neural Networks and one capability of Recurrent Neural Networks that may be beneficial in modeling time series forecasting problems.

Post your answer in the comments below. I would love to see what you discover.

More Information

In the next lesson, you will discover how to transform time series data for time series forecasting.

Lesson 02: How to Transform Data for Time Series

In this lesson, you will discover how to transform your time series data into a supervised learning format.

The majority of practical machine learning uses supervised learning.

Supervised learning is where you have input variables (X) and an output variable (y) and you use an algorithm to learn the mapping function from the input to the output. The goal is to approximate the real underlying mapping so well that when you have new input data, you can predict the output variables for that data.

Time series data can be phrased as supervised learning.

Given a sequence of numbers for a time series dataset, we can restructure the data to look like a supervised learning problem. We can do this by using previous time steps as input variables and use the next time step as the output variable.

For example, a series such as:

[10, 20, 30, 40, 50, 60, 70, 80, 90]

can be transformed into samples with input and output components that can be used as part of a training set to train a supervised learning model like a deep learning neural network.

This is called a sliding window transformation as it is just like sliding a window across prior observations that are used as inputs to the model in order to predict the next value in the series. In this case the window width is 3 time steps.
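The sliding window transform can be sketched in a few lines of Python. The function name split_sequence and the example series below are illustrative choices, not prescribed by the lesson:

```python
from numpy import array

def split_sequence(sequence, n_steps):
    # split a univariate series into samples of n_steps inputs and 1 output
    X, y = list(), list()
    for i in range(len(sequence) - n_steps):
        X.append(sequence[i:i + n_steps])  # window of prior observations
        y.append(sequence[i + n_steps])    # next value in the series
    return array(X), array(y)

series = [10, 20, 30, 40, 50, 60, 70, 80, 90]
X, y = split_sequence(series, n_steps=3)
for i in range(len(X)):
    print(X[i], y[i])
```

Running this prints samples from [10 20 30] with output 40 up to [60 70 80] with output 90.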

Your Task

For this lesson you must develop Python code to transform the daily female births dataset into a supervised learning format with some number of inputs and one output.

You can download the dataset from here: daily-total-female-births.csv

Post your answer in the comments below. I would love to see what you discover.

More Information

In the next lesson, you will discover how to develop a Multilayer Perceptron deep learning model for forecasting a univariate time series.

Lesson 03: MLP for Time Series Forecasting

In this lesson, you will discover how to develop a Multilayer Perceptron model or MLP for univariate time series forecasting.

We can define a simple univariate problem as a sequence of integers, fit the model on this sequence and have the model predict the next value in the sequence. We will frame the problem to have 3 inputs and 1 output, for example: [10, 20, 30] as input and [40] as output.

First, we can define the model. We will define the number of input time steps as 3 via the input_dim argument on the first hidden layer. In this case we will use the efficient Adam version of stochastic gradient descent and optimize the mean squared error (‘mse‘) loss function.

Once the model is defined, it can be fit on the training data and the fit model can be used to make a prediction.

The complete example is listed below.
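A minimal version consistent with the description above; the specific layer sizes and number of training epochs are illustrative choices, and a standalone Keras 2 install is assumed (with newer TensorFlow, import from tensorflow.keras instead):

```python
from numpy import array
from keras.models import Sequential
from keras.layers import Dense

# define dataset using a sliding window of 3 inputs and 1 output
X = array([[10, 20, 30], [20, 30, 40], [30, 40, 50], [40, 50, 60]])
y = array([40, 50, 60, 70])
# define model: 3 input time steps via input_dim on the first hidden layer
model = Sequential()
model.add(Dense(100, activation='relu', input_dim=3))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
# fit model
model.fit(X, y, epochs=2000, verbose=0)
# make an out-of-sample prediction; the result should be close to 80
x_input = array([50, 60, 70]).reshape((1, 3))
yhat = model.predict(x_input, verbose=0)
print(yhat)
```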

Running the example will fit the model on the data then predict the next out-of-sample value.

Given [50, 60, 70] as input, the model correctly predicts 80 as the next value in the sequence.

Your Task

For this lesson you must download the daily female births dataset, split it into train and test sets and develop a model that can make reasonably accurate predictions on the test set.

You can download the dataset from here: daily-total-female-births.csv

Post your answer in the comments below. I would love to see what you discover.

More Information

In the next lesson, you will discover how to develop a Convolutional Neural Network model for forecasting a univariate time series.

Lesson 04: CNN for Time Series Forecasting

In this lesson, you will discover how to develop a Convolutional Neural Network model or CNN for univariate time series forecasting.

We can define a simple univariate problem as a sequence of integers, fit the model on this sequence and have the model predict the next value in the sequence. We will frame the problem to have 3 inputs and 1 output, for example: [10, 20, 30] as input and [40] as output.

An important difference from the MLP model is that the CNN model expects three-dimensional input with the shape [samples, timesteps, features]. We will define the data in the form [samples, timesteps] and reshape it accordingly.

We will define the number of input time steps as 3 and the number of features as 1 via the input_shape argument on the first hidden layer.

We will use one convolutional hidden layer followed by a max pooling layer. The filter maps are then flattened before being interpreted by a Dense layer and outputting a prediction. The model uses the efficient Adam version of stochastic gradient descent and optimizes the mean squared error (‘mse‘) loss function.

Once the model is defined, it can be fit on the training data and the fit model can be used to make a prediction.

The complete example is listed below.
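A minimal version consistent with the description above; the filter count, kernel size, Dense layer width and epoch count are illustrative choices, and a standalone Keras 2 install is assumed (with newer TensorFlow, import from tensorflow.keras instead):

```python
from numpy import array
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

# define dataset using a sliding window of 3 inputs and 1 output
X = array([[10, 20, 30], [20, 30, 40], [30, 40, 50], [40, 50, 60]])
y = array([40, 50, 60, 70])
# reshape from [samples, timesteps] to [samples, timesteps, features]
X = X.reshape((X.shape[0], X.shape[1], 1))
# define model: convolution and pooling, then flatten and interpret
model = Sequential()
model.add(Conv1D(filters=64, kernel_size=2, activation='relu', input_shape=(3, 1)))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(50, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
# fit model
model.fit(X, y, epochs=1000, verbose=0)
# make an out-of-sample prediction; the result should be close to 80
x_input = array([50, 60, 70]).reshape((1, 3, 1))
yhat = model.predict(x_input, verbose=0)
print(yhat)
```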

Running the example will fit the model on the data then predict the next out-of-sample value.

Given [50, 60, 70] as input, the model correctly predicts 80 as the next value in the sequence.

Your Task

For this lesson you must download the daily female births dataset, split it into train and test sets and develop a model that can make reasonably accurate predictions on the test set.

You can download the dataset from here: daily-total-female-births.csv

Post your answer in the comments below. I would love to see what you discover.

More Information

In the next lesson, you will discover how to develop a Long Short-Term Memory network model for forecasting a univariate time series.

Lesson 05: LSTM for Time Series Forecasting

In this lesson, you will discover how to develop a Long Short-Term Memory Neural Network model or LSTM for univariate time series forecasting.

We can define a simple univariate problem as a sequence of integers, fit the model on this sequence and have the model predict the next value in the sequence. We will frame the problem to have 3 inputs and 1 output, for example: [10, 20, 30] as input and [40] as output.

An important difference from the MLP model, and like the CNN model, is that the LSTM model expects three-dimensional input with the shape [samples, timesteps, features]. We will define the data in the form [samples, timesteps] and reshape it accordingly.

We will define the number of input time steps as 3 and the number of features as 1 via the input_shape argument on the first hidden layer.

We will use one LSTM layer to process each input sequence of 3 time steps, followed by a Dense layer to interpret the summary of the input sequence. The model uses the efficient Adam version of stochastic gradient descent and optimizes the mean squared error (‘mse‘) loss function.

Once the model is defined, it can be fit on the training data and the fit model can be used to make a prediction.

The complete example is listed below.
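A minimal version consistent with the description above; the number of LSTM units and epochs are illustrative choices, and a standalone Keras 2 install is assumed (with newer TensorFlow, import from tensorflow.keras instead):

```python
from numpy import array
from keras.models import Sequential
from keras.layers import LSTM, Dense

# define dataset using a sliding window of 3 inputs and 1 output
X = array([[10, 20, 30], [20, 30, 40], [30, 40, 50], [40, 50, 60]])
y = array([40, 50, 60, 70])
# reshape from [samples, timesteps] to [samples, timesteps, features]
X = X.reshape((X.shape[0], X.shape[1], 1))
# define model: one LSTM layer, then a Dense layer to interpret its summary
model = Sequential()
model.add(LSTM(50, activation='relu', input_shape=(3, 1)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
# fit model
model.fit(X, y, epochs=500, verbose=0)
# make an out-of-sample prediction; the result should be close to 80
x_input = array([50, 60, 70]).reshape((1, 3, 1))
yhat = model.predict(x_input, verbose=0)
print(yhat)
```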

Running the example will fit the model on the data then predict the next out-of-sample value.

Given [50, 60, 70] as input, the model correctly predicts 80 as the next value in the sequence.

Your Task

For this lesson you must download the daily female births dataset, split it into train and test sets and develop a model that can make reasonably accurate predictions on the test set.

You can download the dataset from here: daily-total-female-births.csv

Post your answer in the comments below. I would love to see what you discover.

More Information

In the next lesson, you will discover how to develop a hybrid CNN-LSTM model for a univariate time series forecasting problem.

Lesson 06: CNN-LSTM for Time Series Forecasting

In this lesson, you will discover how to develop a hybrid CNN-LSTM model for univariate time series forecasting.

The benefit of this model is that the model can support very long input sequences that can be read as blocks or subsequences by the CNN model, then pieced together by the LSTM model.

We can define a simple univariate problem as a sequence of integers, fit the model on this sequence and have the model predict the next value in the sequence. We will frame the problem to have 4 inputs and 1 output, for example: [10, 20, 30, 40] as input and [50] as output.

When using a hybrid CNN-LSTM model, we will further divide each sample into subsequences. The CNN model will interpret each subsequence and the LSTM will piece together the interpretations from the subsequences. As such, we will split each sample into 2 subsequences of 2 time steps each.

The CNN will be defined to expect 2 time steps per subsequence with one feature. The entire CNN model is then wrapped in TimeDistributed wrapper layers so that it can be applied to each subsequence in the sample. The results are then interpreted by the LSTM layer before the model outputs a prediction.

The model uses the efficient Adam version of stochastic gradient descent and optimizes the mean squared error (‘mse’) loss function.

Once the model is defined, it can be fit on the training data and the fit model can be used to make a prediction.

The complete example is listed below.
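A minimal version consistent with the description above; the filter count, LSTM width and epoch count are illustrative choices, and a standalone Keras 2 install is assumed (with newer TensorFlow, import from tensorflow.keras instead):

```python
from numpy import array
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense, LSTM, TimeDistributed

# define dataset using a sliding window of 4 inputs and 1 output
X = array([[10, 20, 30, 40], [20, 30, 40, 50], [30, 40, 50, 60], [40, 50, 60, 70]])
y = array([50, 60, 70, 80])
# reshape into [samples, subsequences, timesteps, features]
X = X.reshape((X.shape[0], 2, 2, 1))
# define model: CNN wrapped in TimeDistributed, applied to each subsequence
model = Sequential()
model.add(TimeDistributed(Conv1D(filters=64, kernel_size=1, activation='relu'),
                          input_shape=(None, 2, 1)))
model.add(TimeDistributed(MaxPooling1D(pool_size=2)))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(50, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
# fit model
model.fit(X, y, epochs=500, verbose=0)
# make an out-of-sample prediction; the result should be close to 90
x_input = array([50, 60, 70, 80]).reshape((1, 2, 2, 1))
yhat = model.predict(x_input, verbose=0)
print(yhat)
```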

Running the example will fit the model on the data then predict the next out-of-sample value.

Given [50, 60, 70, 80] as input, the model correctly predicts 90 as the next value in the sequence.

Your Task

For this lesson you must download the daily female births dataset, split it into train and test sets and develop a model that can make reasonably accurate predictions on the test set.

You can download the dataset from here: daily-total-female-births.csv

Post your answer in the comments below. I would love to see what you discover.

More Information

In the next lesson, you will discover how to develop an Encoder-Decoder LSTM network model for multi-step time series forecasting.

Lesson 07: Encoder-Decoder LSTM Multi-step Forecasting

In this lesson, you will discover how to develop an Encoder-Decoder LSTM Network model for multi-step time series forecasting.

We can define a simple univariate problem as a sequence of integers, fit the model on this sequence and have the model predict the next two values in the sequence. We will frame the problem to have 3 inputs and 2 outputs, for example: [10, 20, 30] as input and [40, 50] as output.

The LSTM model expects three-dimensional input with the shape [samples, timesteps, features]. We will define the data in the form [samples, timesteps] and reshape it accordingly. The output must also be shaped this way when using the Encoder-Decoder model.

We will define the number of input time steps as 3 and the number of features as 1 via the input_shape argument on the first hidden layer.

We will define an LSTM encoder to read and encode the input sequences of 3 time steps. The encoded sequence will be repeated 2 times using a RepeatVector layer, once for each of the two required output time steps. These will be fed to a decoder LSTM layer before a Dense output layer, wrapped in a TimeDistributed layer, produces one output for each step in the output sequence.

The model uses the efficient Adam version of stochastic gradient descent and optimizes the mean squared error (‘mse‘) loss function.

Once the model is defined, it can be fit on the training data and the fit model can be used to make a prediction.

The complete example is listed below.
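A minimal version consistent with the description above; the LSTM widths and epoch count are illustrative choices, and a standalone Keras 2 install is assumed (with newer TensorFlow, import from tensorflow.keras instead):

```python
from numpy import array
from keras.models import Sequential
from keras.layers import LSTM, Dense, RepeatVector, TimeDistributed

# define dataset using a sliding window of 3 inputs and 2 outputs
X = array([[10, 20, 30], [20, 30, 40], [30, 40, 50], [40, 50, 60]])
y = array([[40, 50], [50, 60], [60, 70], [70, 80]])
# both input and output need the shape [samples, timesteps, features]
X = X.reshape((X.shape[0], X.shape[1], 1))
y = y.reshape((y.shape[0], y.shape[1], 1))
# define model
model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(3, 1)))       # encoder
model.add(RepeatVector(2))                # repeat encoding for 2 output steps
model.add(LSTM(100, activation='relu', return_sequences=True))    # decoder
model.add(TimeDistributed(Dense(1)))      # one output per decoded step
model.compile(optimizer='adam', loss='mse')
# fit model
model.fit(X, y, epochs=100, verbose=0)
# make an out-of-sample prediction; the result should be close to [80, 90]
x_input = array([50, 60, 70]).reshape((1, 3, 1))
yhat = model.predict(x_input, verbose=0)
print(yhat)
```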

Running the example will fit the model on the data then predict the next two out-of-sample values.

Given [50, 60, 70] as input, the model correctly predicts [80, 90] as the next two values in the sequence.

Your Task

For this lesson you must download the daily female births dataset, split it into train and test sets and develop a model that can make reasonably accurate predictions on the test set.

You can download the dataset from here: daily-total-female-births.csv

Post your answer in the comments below. I would love to see what you discover.

More Information

The End!
(Look How Far You Have Come)

You made it. Well done!

Take a moment and look back at how far you have come.

You discovered:

  • The promise of deep learning neural networks for time series forecasting problems.
  • How to transform a time series dataset into a supervised learning problem.
  • How to develop a Multilayer Perceptron model for a univariate time series forecasting problem.
  • How to develop a Convolutional Neural Network model for a univariate time series forecasting problem.
  • How to develop a Long Short-Term Memory network model for a univariate time series forecasting problem.
  • How to develop a Hybrid CNN-LSTM model for a univariate time series forecasting problem.
  • How to develop an Encoder-Decoder LSTM model for a multi-step time series forecasting problem.

This is just the beginning of your journey with deep learning for time series forecasting. Keep practicing and developing your skills.

Take the next step and check out my book on deep learning for time series.

Summary

How Did You Go With The Mini-Course?
Did you enjoy this crash course?

Do you have any questions? Were there any sticking points?
Let me know. Leave a comment below.




29 Responses to How to Get Started with Deep Learning for Time Series Forecasting (7-Day Mini-Course)

  1. Sam September 4, 2018 at 8:50 pm #

    Hi Jason,
    Love the tutorials, I’m starting to feel as though I understand how to produce my own model.

    I’m currently trying to develop an LSTM that analyses a time series dataset of energy consumption, which has a strong seasonal pattern (though the season interval is quite irregular). It consists of around 8 seasonal cycles with about 45000 data points. I would like to produce a model that I can train on this dataset which is able to simulate data of the shape I already have; without walk forward validation (i.e. I would like to be able to predict the next value with the last value of my dataset as input then use the prediction as the input for the next prediction).

    Does this seem like a sensible approach? I looked into SARIMAs as well but could not produce a close pattern. I’m fairly new to data science and machine learning and so far have found your tutorials invaluable.

    Thanks!

  2. litost September 7, 2018 at 10:58 pm #

    Lesson 01: my level of understanding of CNN got stuck with a dimension issue. However, I am able to perform digit classification with an ANN. Accuracy level is around 92%, as reported in many places.

    Looking forward with CNN and +

  3. Manuel Dias September 10, 2018 at 7:13 pm #

    Hi Jason,

    I found your post very interesting, since I use alternative algorithms to predict some seasonal data in several scenarios. In these scenarios I use auto ARIMA with R platform and although some results are satisfying, the development platform and the ARIMA tuning process is not very feasible. Thus if I could find a more feasible algorithm and platform, it would be wonderful.

    I tested your examples 3 to 5 with some seasonal scenarios (simples sin(x) and linear functions, with seasonal characteristics), but found the predicted results very poor: I tried to change several parameters (inputs with 3, 4 and 5 values; normalizing both input and output values; select other activation/optimizer/loss parameters beside relu/adam options) but the predicted outputs was always a linear function and very far from the expected output. For the same scenarios, the auto ARIMA provides much better predicted results.

    I will test with other remaining algorithms (06 and 07) to check for better results: however, if you have any suggestion on how can we improve them, please advice.

    • Jason Brownlee September 11, 2018 at 6:27 am #

      Perhaps try tuning the models to your problem?
      Perhaps try seasonal differencing your data first?
      Perhaps try hybrid models?

  4. komal_123 September 17, 2018 at 2:50 pm #

    Love this blog. This blog gives useful information to me. I like this post and thanks for providing. …!!!!!

  5. David September 20, 2018 at 8:41 am #

    Jason you mentioned that (hint, I have all of the answers directly on this blog, use the search box).
    How can I access to answer through search bar. Am I missing something?

    • Jason Brownlee September 20, 2018 at 2:26 pm #

      Type in what you need help with, e.g. “LSTM time series”, look through the results, read some of the posts.

      Does that help?

  6. David September 20, 2018 at 9:26 am #

    #Lesson 02: How to Transform Data for Time Series

  7. Harry G. September 30, 2018 at 1:10 am #

    Great as always, Jason!

    I have a question as regards your last example with the Encoder-Decoder LSTM Multi-step Forecasting: Is it possible to turn it into a category-predicting solution?

    I mean, let’s suppose we have an image or sound as input and we want to output characters or words that are encoded as integers. For example:
    the input would be x=[0.9,0.8,0.3…]
    and the output would be y=[0,1…]

    How is that possible? So far I’ve tried changing the loss from ‘mse’ to ‘sparse_categorical_crossentropy’ and the number of outputs of the last Dense layer from 1 to 3 (supposing I want to output two integers from 0 to 2). However, the loss never drops below 1.0986 and of course the model isn’t learning anything. I’ve also tried normalizing the input numbers within a range of 0 to 1, but still nothing. Any ideas? Thanks!

    • Jason Brownlee September 30, 2018 at 6:05 am #

      Yes, I explain how here:
      https://machinelearningmastery.com/faq/single-faq/how-can-i-change-a-neural-network-from-regression-to-classification

      I also have many examples of sequence classification on the blog for text (sentiment analysis) and activity recognition that may help.

      • Harry Garrison September 30, 2018 at 8:12 am #

        Thanks for the reply!
        I took a look at the link you provided and I slightly changed my code accordingly. It worked better than before and this time the loss really started dropping.

        While I was experimenting with a toy dataset I’ve built, two more questions came to mind:

        1) Can the time distributed layer be used as some kind of attention mechanism? Keras (to my knowledge) still hasn’t officially implemented an attention mechanism, but I thought that the timedistributed could do the trick. But then again I might be wrong.

        2) Is there any substantial difference between using one-hot encoding versus using integers for a multiclass classification problem? I am having trouble implementing the one-hot encoding and I opted for using simple integers to represent classes. Should I force it with one-hot encoding?

        Thanks once more for your precious time and help and keep up the good work!

        • Jason Brownlee October 1, 2018 at 6:22 am #

          Well done!

          No, time distributed allows you to use a sub-model (automatically) within a broader model.

          Yes, no official attention yet, which I think is complete madness. If they don’t get their act together soon the pytorch project is going to overtake (and kill) them.

          Yes, I remember classical papers on the topic talking about the onehot/softmax giving the model more flexibility – hence it is a best practice when number of classes is >2. Perhaps try both for your problem and go with what works.

  8. JG October 8, 2018 at 8:57 pm #

    Thks Jason for the tutorial: I think it is a great act of generosity from you !

    I am starting Time Series for the first time and I get two main ideas (flavors of Time Series approach), I would like to check out with you, as opposed or vs the “classical” regression/classification that does not care about data time ordering:

    1) in time series of univariate (or 1 feature), the SEQUENCE meaning (is the number of inputs), within a set of samples and has a direct correspondence with the features of classical regression/classification approach like this :

    number of Features in Regression/Classification == of number of inputs selected within the sequence in Time Series.

    therefore the term “features” in Time Series (here only one because of your first univariate approach in this tutorial) has a different value (or meaning) of the term “feature” in equivalent regression/classification approach (here 3 because a sequence of 3 input data)

    2) If I change the number of outputs in the output sequence (I think you called it as “multi-step”) this is totally equivalent to multi-categorical classification (for example using the same quantity of output neurons in the output layer (1 for each category) approach.

    Do you agree with this first “manual” equivalence between Time Series vs Regression/Classification approach?

    • Jason Brownlee October 9, 2018 at 8:43 am #

      Not sure I follow.

      Generally, if most models are sequence-unaware, like MLPs. In which case lag obs are features.

      Some models are sequence-aware, like RNNs and CNNs. In which case lag obs are handled directly and parallel time series are features.

      Multi-step is different from multi-class classification. Same idea though, change the output layer to have n nodes, one for each class, but use a softmax activation.

      Does that help?

  9. JG October 9, 2018 at 8:19 pm #

    Tks for your suggestions!

    I am trying to summarize (“in terms of tensors in and out”) my previous knowledge of MLP, CNN models vs right now LSTM (RNN models) of TS (time series), applied to approach different problems.

    I.e. we used MLP/CNN for classification issues such are as Image processing (e.g. CIFAR-10 for multi-class classification) or multi-class classification (e.g. 3 types of iris flowers) or binary classification (e.g., pimas diabetes y/n), or linear Regression (e.g. continuos Boston Houses pricing).

    I mean, I want to represent the input/output ML/DL model process in terms of geometry “tensors” (or best 3D matrices), to get the whole idea of the models processing.
    That is to say, in CIFAR-10 I have images input (for training/validation/test) in terms of 3D matrix [samples-rows- , features – columns – X, channels] …so the meaning of features it is clear (the multi-variate X independent variables or pixels of image), and the output matrix is clear for CIFAR-10 [for each sample of image in rows, the Y dependent variable of the class is in columns].

    Briefly, tensor input in “pimas” case is [ 768 samples of patients in rows, 8 features or X dependent variables] and the output tensor is [ for each sample or patient in rows, the yes/no diabetes class].
    For iris [samples flowers in rows, the 4 features of flowers or X in columns] and the output is [samples in rows of each flower, and Y of the 3 type of iris in columns]. In Boston Prices the input “tensor” is [506 samples of houses, 13 features or X in columns] and the output “tensor” is [for each sample or house in rows, the continuos value of the house or Y].

    So, when you talk about TS (time series I do not know if anyone else called it “TS”), for example in the case of “daily birth” case input tensor under MLP model with e.g 7days of week as the time steps of input (in my case), and 1 output day label to be predicted , I can think geometrically in “TS” as input tensor of [sample in rows, time steps (e.g.7 days) in columns – in that case is like X -features- but really they are not !] and output tensor [for each sample in row , the day value of prediction Y].
    But Now if I change to CNN model for “TS” applications the input “tensors” must be 3D, so [samples in rows -e.g. 52 weeks in my case-, time steps in cols -e.g.7 days-, and 1 feature] and the output tensor is [for each sample in row , the Y or day of prediction in cols]. It is easy to extrapolate that when we have multiple-steps at the outputs, under this “vision” we would have the correspondent time steps numbers in cols.

    Why I talk too much? Here it is my answer. Because for us, the beginners, we lost easily in matrices dimensions and shapes during the models process and, consequently we make mistake because matrices shapes does not match during the model process. And even worst because we lost the whole idea of the model processing, because we are not able to see “geometrically” the case or problem in terms of input “tensors” and output “tensors”, so we get lost easily and then we do not follow next ideas that teachers as you are introducing in the tutorial.

    As conclusion, here it is my recommendation for teaching those cases and machine learning models concepts, to try to “visualize” the problems from the beginning , clearly at the problem introduction, in terms of those “input tensors” and “output tensors” shape and meaning…so the next ideas, and subtles that the teacher introduce from the post or tutorial will get much more more easier…at least this is my own experience..

    to see always this tensors coming in and coming out within the blackbox of ML/DeepLearning ..:-))

    regards,
    JG

  10. Steph October 25, 2018 at 7:22 pm #

    Hi Jason,

    Thank you for this tutorial, it looks really helpful!! 🙂

    For lesson 02, this is the function I wrote:

    (heavily inspired by https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/)

    Cheers,
    Steph

  11. Fredrik Kant November 5, 2018 at 7:19 pm #

    (My answer to) Lesson 01:
    I am currently re-reading Simply Complexity (Neil Johnson), where one of the necessary criteria for a complex system is that there exists some kind of feedback, some kind of memory.
    The stock market is a complex system where different actors interact and the stock price is a reflection of that.
    So when it comes to RNNs, I see the capability to create some kind of memory, using previous data as feedback, as beneficial.
    When it comes to CNNs, I see the “filters” as beneficial. For example, in technical analysis there are a lot of patterns (= filters) that are used to determine resistance levels etc. One could perhaps also use different “ok” patterns to detect anomalies in the data, which in realtime applications (I have been working on a trading desk for many years) could be crucial to avoid mistakes.

  12. Volka November 21, 2018 at 12:25 am #

    Thanks a lot for the great tutorial. Just wondering why I get the following error when running lesson 5, 6, and 7. Can you please tell me how to fix it?

    Using TensorFlow backend.
    Traceback (most recent call last):
    File "time_series.py", line 16, in
    model.add(LSTM(100, activation='relu', input_shape=(3, 1)))
    File "C:\Users\Volka\Miniconda2\envs\Tensorflow\lib\site-packages\keras\engine\sequential.py", line 165, in add
    layer(x)
    File "C:\Users\Volka\Miniconda2\envs\Tensorflow\lib\site-packages\keras\layers\recurrent.py", line 532, in __call__
    return super(RNN, self).__call__(inputs, **kwargs)
    File "C:\Users\Volka\Miniconda2\envs\Tensorflow\lib\site-packages\keras\engine\base_layer.py", line 457, in __call__
    output = self.call(inputs, **kwargs)
    File "C:\Users\Volka\Miniconda2\envs\Tensorflow\lib\site-packages\keras\layers\recurrent.py", line 2194, in call
    initial_state=initial_state)
    File "C:\Users\Volka\Miniconda2\envs\Tensorflow\lib\site-packages\keras\layers\recurrent.py", line 649, in call
    input_length=timesteps)
    File "C:\Users\Volka\Miniconda2\envs\Tensorflow\lib\site-packages\keras\backend\tensorflow_backend.py", line 3011, in rnn
    maximum_iterations=input_length)
    TypeError: while_loop() got an unexpected keyword argument 'maximum_iterations'

    • Jason Brownlee November 21, 2018 at 7:52 am #

      Are you able to confirm that your TensorFlow and Keras versions are up to date?

      • Volka November 21, 2018 at 4:14 pm #

        Thanks a lot. I updated tensorflow and it worked 🙂
