
How to Control Neural Network Model Capacity With Nodes and Layers

The capacity of a deep learning neural network model controls the scope of the types of mapping functions that it is able to learn.

A model with too little capacity cannot learn the training dataset, meaning it will underfit, whereas a model with too much capacity may memorize the training dataset, meaning it will overfit, or it may get stuck or lost during the optimization process.

The capacity of a neural network model is defined by configuring the number of nodes and the number of layers.

In this tutorial, you will discover how to control the capacity of a neural network model and how capacity impacts what a model is capable of learning.

After completing this tutorial, you will know:

  • Neural network model capacity is controlled both by the number of nodes and the number of layers in the model.
  • A model with a single hidden layer and sufficient number of nodes has the capability of learning any mapping function, but the chosen learning algorithm may or may not be able to realize this capability.
  • Increasing the number of layers provides a short-cut to increasing the capacity of the model with fewer resources, and modern techniques allow learning algorithms to successfully train deep models.

Kick-start your project with my new book Better Deep Learning, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

  • Update Jan/2020: Updated for changes in scikit-learn v0.22 API.
How to Control Neural Network Model Capacity With Nodes and Layers.
Photo by Bernard Spragg. NZ, some rights reserved.

Tutorial Overview

This tutorial is divided into five parts; they are:

  1. Controlling Neural Network Model Capacity
  2. Configure Nodes and Layers in Keras
  3. Multi-Class Classification Problem
  4. Change Model Capacity With Nodes
  5. Change Model Capacity With Layers

Controlling Neural Network Model Capacity

The goal of a neural network is to learn how to map input examples to output examples.

Neural networks learn mapping functions. The capacity of a network refers to the range or scope of the types of functions that the model can approximate.

Informally, a model’s capacity is its ability to fit a wide variety of functions.

— Pages 111-112, Deep Learning, 2016.

A model with too little capacity may not be able to sufficiently learn the training dataset. A model with more capacity can represent a wider variety of functions and may be able to learn a function that sufficiently maps inputs to outputs in the training dataset, whereas a model with too much capacity may memorize the training dataset and fail to generalize, or may get lost or stuck in the search for a suitable mapping function.

Generally, we can think of model capacity as a control over whether the model is likely to underfit or overfit a training dataset.

We can control whether a model is more likely to overfit or underfit by altering its capacity.

— Page 111, Deep Learning, 2016.

The capacity of a neural network can be controlled by two aspects of the model:

  • Number of Nodes.
  • Number of Layers.

A model with more nodes or more layers has a greater capacity and, in turn, is potentially capable of learning a larger set of mapping functions.

A model with more layers and more hidden units per layer has higher representational capacity — it is capable of representing more complicated functions.

— Page 428, Deep Learning, 2016.

The number of nodes in a layer is referred to as the width.

Developing wide networks with one hidden layer and many nodes is relatively straightforward. In theory, a network with enough nodes in a single hidden layer can learn to approximate any mapping function, although in practice we don’t know how many nodes are sufficient or how to train such a model.

The number of layers in a model is referred to as its depth.

Increasing the depth increases the capacity of the model. Training deep models, e.g. those with many hidden layers, can be computationally more efficient than training a single layer network with a vast number of nodes.

Modern deep learning provides a very powerful framework for supervised learning. By adding more layers and more units within a layer, a deep network can represent functions of increasing complexity.

— Page 167, Deep Learning, 2016.

Traditionally, it has been challenging to train neural network models with more than a few layers due to problems such as vanishing gradients. More recently, modern methods have allowed the training of deep network models, enabling the development of models of surprising depth that are capable of achieving impressive performance on challenging problems in a wide range of domains.


Configure Nodes and Layers in Keras

Keras allows you to easily add nodes and layers to your model.

Configuring Model Nodes

Fully connected layers for the Multilayer Perceptron, or MLP, model are added via the Dense layer.

The first argument of the layer specifies the number of nodes used in the layer.

For example, we can create one fully-connected layer with 32 nodes as follows:
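A minimal sketch (using the standalone Keras package; under TensorFlow 2 the same layers are available via tensorflow.keras):

    from keras.layers import Dense

    # one fully connected layer with 32 nodes
    layer = Dense(32)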

The number of nodes can be specified for recurrent neural network layers in the same way.

For example, we can create one LSTM layer with 32 nodes (or units) as follows:
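A minimal sketch:

    from keras.layers import LSTM

    # one LSTM layer with 32 units (nodes)
    layer = LSTM(32)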

Convolutional neural networks, or CNNs, don’t have nodes in the same sense; instead, you specify the number of filter maps and their shape. The number and size of the filter maps define the capacity of the layer.

We can define a two-dimensional CNN with 32 filter maps, each with a size of 3 by 3, as follows:
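A minimal sketch:

    from keras.layers import Conv2D

    # one 2D convolutional layer with 32 filter maps, each of size 3x3
    layer = Conv2D(32, (3, 3))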

Configuring Model Layers

Layers are added to a sequential model via repeated calls to the add() function, passing in the configured layer.

Fully connected layers for the MLP can be added via repeated calls to add passing in the configured Dense layers; for example:
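A sketch with two hidden layers (the layer sizes and input_dim value here are illustrative):

    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    model.add(Dense(32, input_dim=2))  # first hidden layer
    model.add(Dense(32))               # second hidden layer
    model.add(Dense(1))                # output layer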

Layers for a recurrent network can be added in the same way to give a stacked recurrent model.

An important difference is that stacked recurrent layers expect a three-dimensional input; therefore, the prior recurrent layer must return the full sequence of outputs for each node, rather than only the single output at the end of the input sequence.

This can be achieved by setting the “return_sequences” argument to “True”. For example:
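A sketch of a stacked LSTM model (the input_shape and layer sizes here are illustrative):

    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    model = Sequential()
    # the first recurrent layer must return the full sequence of outputs
    model.add(LSTM(32, return_sequences=True, input_shape=(5, 1)))
    model.add(LSTM(32))
    model.add(Dense(1))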

Convolutional layers can be stacked directly, and it is common to stack one or two convolutional layers together followed by a pooling layer, then repeat this pattern of layers; for example:
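A sketch of that pattern (the filter counts and input_shape here are illustrative):

    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D

    model = Sequential()
    # block 1: two convolutional layers followed by a pooling layer
    model.add(Conv2D(16, (3, 3), input_shape=(32, 32, 1)))
    model.add(Conv2D(16, (3, 3)))
    model.add(MaxPooling2D((2, 2)))
    # block 2: repeat the pattern with more filter maps
    model.add(Conv2D(32, (3, 3)))
    model.add(Conv2D(32, (3, 3)))
    model.add(MaxPooling2D((2, 2)))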

Now that we know how to configure the number of nodes and layers for models in Keras, we can look at how the capacity affects model performance on a multi-class classification problem.

Multi-Class Classification Problem

We will use a standard multi-class classification problem as the basis to demonstrate the effect of model capacity on model performance.

The scikit-learn library provides the make_blobs() function that can be used to create a multi-class classification problem with a prescribed number of samples, input variables, classes, and variance of samples within a class.

We can configure the problem to have a specific number of input variables via the “n_features” argument, and a specific number of classes or centers via the “centers” argument. The “random_state” argument can be used to seed the pseudorandom number generator to ensure that we always get the same samples each time the function is called.

For example, the call below generates 1,000 examples for a three class problem with two input variables.
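A sketch of the call (the random_state value is arbitrary; it simply fixes the seed, and cluster_std matches the standard deviation of 2.0 mentioned below):

    from sklearn.datasets import make_blobs

    # generate a 2-variable, 3-class blobs dataset with 1,000 examples
    X, y = make_blobs(n_samples=1000, centers=3, n_features=2, cluster_std=2, random_state=2)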

The results are the input and output elements of a dataset that we can model.

In order to get a feeling for the complexity of the problem, we can plot each point on a two-dimensional scatter plot and color each point by class value.

The complete example is listed below.
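A sketch consistent with that description (using matplotlib for plotting; the seed value is arbitrary):

    # scatter plot of the blobs dataset, with points colored by class value
    from sklearn.datasets import make_blobs
    from numpy import where
    from matplotlib import pyplot

    # generate the three-class dataset
    X, y = make_blobs(n_samples=1000, centers=3, n_features=2, cluster_std=2, random_state=2)
    # plot the points for each class value in a different color
    for class_value in range(3):
        # select the rows that belong to this class
        row_ix = where(y == class_value)[0]
        pyplot.scatter(X[row_ix, 0], X[row_ix, 1])
    pyplot.show()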

Running the example creates a scatter plot of the entire dataset. We can see that the chosen standard deviation of 2.0 means that the classes are not linearly separable (separable by a line), causing many ambiguous points.

This is desirable as it means that the problem is non-trivial and will allow a neural network model to find many different “good enough” candidate solutions.

Scatter Plot of Blobs Dataset With Three Classes and Points Colored by Class Value

In order to explore model capacity, we need more complexity in the problem than three classes and two variables.

For the purposes of the following experiments, we will use 100 input features and 20 classes; for example:
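The cluster_std and random_state values below are illustrative:

    # blobs dataset with 100 input features and 20 classes
    X, y = make_blobs(n_samples=1000, centers=20, n_features=100, cluster_std=2, random_state=1)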

Change Model Capacity With Nodes

In this section, we will develop a Multilayer Perceptron model, or MLP, for the blobs multi-class classification problem and demonstrate the effect that the number of nodes has on the ability of the model to learn.

We can start off by developing a function to prepare the dataset.

The input and output elements of the dataset can be created using the make_blobs() function as described in the previous section.

Next, the target variable must be one hot encoded. This is so that the model can learn to predict the probability of an input example belonging to each of the 20 classes.

We can use the to_categorical() Keras utility function to do this, for example:
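A minimal sketch:

    from keras.utils import to_categorical

    # one hot encode the target variable, giving 20 binary output columns
    y = to_categorical(y)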

Next, we can split the 1,000 examples in half and use 500 examples as the training dataset and 500 to evaluate the model.

The create_dataset() function below ties these elements together and returns the train and test sets in terms of the input and output elements.
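A sketch of such a function, under the assumptions above (1,000 samples, 100 features, 20 classes, a 50/50 split; the cluster_std and seed values are illustrative):

    from sklearn.datasets import make_blobs
    from keras.utils import to_categorical

    # prepare the multi-class classification dataset
    def create_dataset():
        # generate the blobs dataset
        X, y = make_blobs(n_samples=1000, centers=20, n_features=100, cluster_std=2, random_state=1)
        # one hot encode the target variable
        y = to_categorical(y)
        # split into train and test sets, 500 examples each
        n_train = 500
        trainX, testX = X[:n_train, :], X[n_train:, :]
        trainy, testy = y[:n_train], y[n_train:]
        return trainX, trainy, testX, testy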

We can call this function to prepare the dataset.
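For example:

    # prepare the dataset
    trainX, trainy, testX, testy = create_dataset()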

Next, we can define a function that will create the model, fit it on the training dataset, and then evaluate it on the test dataset.

The model needs to know the number of input variables in order to configure the input layer and the number of target classes in order to configure the output layer. These properties can be extracted from the training dataset directly.

We will define an MLP model with a single hidden layer that uses the rectified linear activation function and the He random weight initialization method.

The output layer will use the softmax activation function in order to predict a probability for each target class. The number of nodes in the hidden layer will be provided via an argument called “n_nodes”.

The model will be optimized using stochastic gradient descent with a modest learning rate of 0.01 and a high momentum of 0.9, and a categorical cross entropy loss function will be used, which is suitable for multi-class classification.

The model will be fit for 100 training epochs, then the model will be evaluated on the test dataset.

Tying these elements together, the evaluate_model() function below takes the number of nodes and dataset as arguments and returns the history of the training loss at the end of each epoch and the accuracy of the final model on the test dataset.
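A sketch of the function described above (he_uniform stands in for the He initialization; the batch size is left at the Keras default):

    from keras.models import Sequential
    from keras.layers import Dense
    from keras.optimizers import SGD

    # fit an MLP with a given number of nodes in the hidden layer and evaluate it
    def evaluate_model(n_nodes, trainX, trainy, testX, testy):
        # configure the layers based on the shape of the data
        n_input, n_classes = trainX.shape[1], testy.shape[1]
        # define the model with a single hidden layer
        model = Sequential()
        model.add(Dense(n_nodes, input_dim=n_input, activation='relu', kernel_initializer='he_uniform'))
        model.add(Dense(n_classes, activation='softmax'))
        # compile with SGD (learning rate 0.01, momentum 0.9) and cross entropy loss
        opt = SGD(lr=0.01, momentum=0.9)
        model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
        # fit for 100 epochs, then evaluate on the test set
        history = model.fit(trainX, trainy, epochs=100, verbose=0)
        _, test_acc = model.evaluate(testX, testy, verbose=0)
        return history, test_acc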

We can call this function with different numbers of nodes to use in the hidden layer.

The problem is relatively simple; therefore, we will review the performance of the model with 1 to 7 nodes.

We would expect that as the number of nodes is increased, the capacity of the model would increase, allowing the model to better learn the training dataset, at least to a point limited by the chosen configuration for the learning algorithm (e.g. learning rate, batch size, and number of epochs).

The test accuracy for each configuration will be printed and the learning curves of training accuracy with each configuration will be plotted.

The full code listing is provided below for completeness.
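A condensed sketch that pulls the pieces above together, with the same illustrative assumptions:

    # sketch of the complete experiment: vary the number of nodes in the hidden layer
    from sklearn.datasets import make_blobs
    from keras.utils import to_categorical
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.optimizers import SGD
    from matplotlib import pyplot

    # prepare the multi-class classification dataset
    def create_dataset():
        X, y = make_blobs(n_samples=1000, centers=20, n_features=100, cluster_std=2, random_state=1)
        y = to_categorical(y)
        n_train = 500
        trainX, testX = X[:n_train, :], X[n_train:, :]
        trainy, testy = y[:n_train], y[n_train:]
        return trainX, trainy, testX, testy

    # fit an MLP with a given number of nodes in the hidden layer and evaluate it
    def evaluate_model(n_nodes, trainX, trainy, testX, testy):
        n_input, n_classes = trainX.shape[1], testy.shape[1]
        model = Sequential()
        model.add(Dense(n_nodes, input_dim=n_input, activation='relu', kernel_initializer='he_uniform'))
        model.add(Dense(n_classes, activation='softmax'))
        opt = SGD(lr=0.01, momentum=0.9)
        model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
        history = model.fit(trainX, trainy, epochs=100, verbose=0)
        _, test_acc = model.evaluate(testX, testy, verbose=0)
        return history, test_acc

    # prepare the dataset
    trainX, trainy, testX, testy = create_dataset()
    # evaluate models with 1 to 7 nodes in the hidden layer
    all_history = list()
    num_nodes = [1, 2, 3, 4, 5, 6, 7]
    for n_nodes in num_nodes:
        history, result = evaluate_model(n_nodes, trainX, trainy, testX, testy)
        print('nodes=%d: test accuracy %.3f' % (n_nodes, result))
        all_history.append(history)
    # plot learning curves of training loss for each configuration
    for i, history in enumerate(all_history):
        pyplot.plot(history.history['loss'], label='nodes=%d' % num_nodes[i])
    pyplot.legend()
    pyplot.show()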

Running the example first prints the test accuracy for each model configuration.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that as the number of nodes is increased, the capacity of the model to learn the problem is increased. This results in a progressive lowering of the generalization error of the model on the test dataset, until at six and seven nodes the model learns the problem perfectly.

A line plot is also created showing cross entropy loss on the training dataset for each model configuration (1 to 7 nodes in the hidden layer) over the 100 training epochs.

We can see that as the number of nodes is increased, the model is better able to decrease the loss, i.e. to better learn the training dataset. This plot shows the direct relationship between model capacity, as defined by the number of nodes in the hidden layer, and the model’s ability to learn.

Line Plot of Cross Entropy Loss Over Training Epochs for an MLP on the Training Dataset for the Blobs Multi-Class Classification Problem When Varying Model Nodes

The number of nodes can be increased to the point (e.g. 1,000 nodes) where the learning algorithm is no longer able to sufficiently learn the mapping function.

Change Model Capacity With Layers

We can perform a similar analysis and evaluate how the number of layers impacts the ability of the model to learn the mapping function.

Increasing the number of layers can often greatly increase the capacity of the model, acting like a computational and learning shortcut to modeling a problem. For example, a model with one hidden layer of 10 nodes is not equivalent to a model with two hidden layers with five nodes each. The latter has a much greater capacity.

The danger is that a model with more capacity than is required is likely to overfit the training data, and as with a model that has too many nodes, a model with too many layers will likely be unable to learn the training dataset, getting lost or stuck during the optimization process.

First, we can update the evaluate_model() function to fit an MLP model with a given number of layers.

We know from the previous section that an MLP with about seven or more nodes fit for 100 epochs will learn the problem perfectly. We will, therefore, use 10 nodes in each layer to ensure the model has enough capacity in just one layer to learn the problem.

The updated function is listed below, taking the number of layers and dataset as arguments and returning the training history and test accuracy of the model.
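A sketch of the updated function, assuming the same imports (Sequential, Dense, SGD) and settings as before, with 10 nodes per hidden layer:

    # fit an MLP with a given number of hidden layers (10 nodes each) and evaluate it
    def evaluate_model(n_layers, trainX, trainy, testX, testy):
        n_input, n_classes = trainX.shape[1], testy.shape[1]
        model = Sequential()
        # first hidden layer, connected to the input
        model.add(Dense(10, input_dim=n_input, activation='relu', kernel_initializer='he_uniform'))
        # add the remaining hidden layers
        for _ in range(1, n_layers):
            model.add(Dense(10, activation='relu', kernel_initializer='he_uniform'))
        model.add(Dense(n_classes, activation='softmax'))
        opt = SGD(lr=0.01, momentum=0.9)
        model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
        history = model.fit(trainX, trainy, epochs=100, verbose=0)
        _, test_acc = model.evaluate(testX, testy, verbose=0)
        return history, test_acc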

Given that a single hidden layer model has enough capacity to learn this problem, we will explore increasing the number of layers to the point where the learning algorithm becomes unstable and can no longer learn the problem.

If the chosen modeling problem was more complex, we could explore increasing the layers and review the improvements in model performance to a point of diminishing returns.

In this case, we will evaluate the model with 1 to 5 layers, with the expectation that at some point, the number of layers will result in a model that the chosen learning algorithm is unable to adapt to the training data.

Tying these elements together, the complete example is listed below.
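A sketch of the driver for this experiment, assuming the create_dataset() function and the updated evaluate_model() function above:

    from matplotlib import pyplot

    # prepare the dataset
    trainX, trainy, testX, testy = create_dataset()
    # evaluate models with 1 to 5 hidden layers
    all_history = list()
    num_layers = [1, 2, 3, 4, 5]
    for n_layers in num_layers:
        history, result = evaluate_model(n_layers, trainX, trainy, testX, testy)
        print('layers=%d: test accuracy %.3f' % (n_layers, result))
        all_history.append(history)
    # plot learning curves of training loss for each configuration
    for i, history in enumerate(all_history):
        pyplot.plot(history.history['loss'], label='layers=%d' % num_layers[i])
    pyplot.legend()
    pyplot.show()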

Running the example first prints the test accuracy for each model configuration.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

In this case, we can see that the model is capable of learning the problem well with up to three layers, then begins to falter. We can see that performance really drops with five layers and is expected to continue to fall if the number of layers is increased further.

A line plot is also created showing cross entropy loss on the training dataset for each model configuration (1 to 5 layers) over the 100 training epochs.

We can see that the learning dynamics of the models with 1, 2, and 3 layers (blue, orange, and green) are pretty similar, learning the problem quickly.

Surprisingly, training loss with four and five layers shows signs of initially doing well, then leaping up, suggesting that the model is likely stuck with a sub-optimal set of weights rather than overfitting the training dataset.

Line Plot of Cross Entropy Loss Over Training Epochs for an MLP on the Training Dataset for the Blobs Multi-Class Classification Problem When Varying Model Layers

The analysis shows that increasing the capacity of the model via increased depth is a very effective tool, but it must be used with caution, as it can quickly result in a model with a large capacity that may not be capable of learning the training dataset easily.

Extensions

This section lists some ideas for extending the tutorial that you may wish to explore.

  • Too Many Nodes. Update the experiment of increasing nodes to find the point where the learning algorithm is no longer capable of learning the problem.
  • Repeated Evaluation. Update an experiment to use the repeated evaluation of each configuration to counter the stochastic nature of the learning algorithm.
  • Harder Problem. Repeat the experiment of increasing layers on a problem that requires the increased capacity provided by increased depth in order to perform well.

If you explore any of these extensions, I’d love to know.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Books

  • Deep Learning, Ian Goodfellow, Yoshua Bengio, and Aaron Courville, 2016.

Summary

In this tutorial, you discovered how to control the capacity of a neural network model and how capacity impacts what a model is capable of learning.

Specifically, you learned:

  • Neural network model capacity is controlled both by the number of nodes and the number of layers in the model.
  • A model with a single hidden layer and a sufficient number of nodes has the capability of learning any mapping function, but the chosen learning algorithm may or may not be able to realize this capability.
  • Increasing the number of layers provides a short-cut to increasing the capacity of the model with fewer resources, and modern techniques allow learning algorithms to successfully train deep models.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.

30 Responses to How to Control Neural Network Model Capacity With Nodes and Layers

  1. KingDeimons, July 23, 2019 at 4:02 pm

    I have a problem that requires 20,000 input nodes. What number of layers and nodes in each hidden layer would be a good starting point for experiments?

  2. Elsa, August 24, 2019 at 10:48 am

    Hi Jason, may I know why the variable X (trainX and testX too) needs no one hot encoding?

  3. Andy, November 20, 2019 at 4:12 pm

    Is it possible to use these techniques with k-fold cross-validation rather than a separate test set?

  4. sukhpal, December 16, 2019 at 2:03 pm

    Why optimization in stages?

    • Jason Brownlee, December 17, 2019 at 6:27 am

      What do you mean exactly? Perhaps you can elaborate?

  5. Durjoy Dhruba, December 28, 2019 at 2:13 am

    Instead of connecting every node in the hidden layer, how can we select only even nodes or odd nodes to connect?

    • Jason Brownlee, December 28, 2019 at 7:49 am

      You can do this with custom code, but not with default Dense layers I believe.

  6. Chris, January 30, 2020 at 1:35 am

    Hey, I noticed one thing: increasing the number of nodes to extremes (like 512) and then even adding a couple more layers makes the model more accurate on the test sample, even though the number of predictors is low, about 50. The sample size is around 1 million, but it is an unbalanced dataset, and I am trying to predict a low-frequency event.

  7. Mudit, May 14, 2020 at 9:13 pm

    Hi Jason,

    How can we increase the capacity of machine learning models?

    • Jason Brownlee, May 15, 2020 at 6:01 am

      For neural nets, add more layers and nodes.

      Other models – it depends on the model.

  8. Abhi Bhagat, September 10, 2020 at 4:45 pm

    In the above problem, the data fits (trains) on Xtrain, ytrain and is evaluated on Xtest, ytest.

    A high number of nodes gives high capacity, and high capacity can cause overfitting.

    1. Increasing the nodes increases the capacity of the model to learn; using 7 nodes the model learns the problem perfectly. Shouldn’t using 7 nodes probably over-fit the data? But instead it is performing very well on the test set.

    2. I can’t understand which is better: using 7 nodes OR using 5 nodes (which seems to have moderate capacity)? Please help.

    • Jason Brownlee, September 11, 2020 at 5:50 am

      We are not trying to solve the test problem, rather we are showing how changing the model impacts model behavior.

      A larger capacity may or may not overfit.

  9. Robert Mound, October 18, 2020 at 10:47 am

    Sorry if this is a dumb question, trying to get up to speed with machine learning, and I’m not sure where to look for this question specifically. Say I have a learned model, but I want to build in some adjustable parameters for the model itself. For example, a model has been trained to mimic the change in an audio signal, looking at input and output samples. That works well. But let’s say I want to add a gain adjustment to that model. I figure I could re-train the data with the gain in the example files set at various levels, but I’m not sure how to go about this. Are nodes and layers the sort of thing that could accomplish this, or is there any other way to add sort of components of the model that work together, and are adjustable?

    • Jason Brownlee, October 18, 2020 at 1:26 pm

      You could use your existing model as an input to a new model or create a new model for the problem.

      • Robert Mound, October 18, 2020 at 2:58 pm

        Thanks for the reply. I’m a bit confused though, I mean that I want to have the model adjustable, meaning that I want to be able to adjust the gain on the resultant model. The idea is to build an interface to adjust any parameters that I train, if that sort of thing is possible. If I simply use the existing model as an input to a new model, won’t I just end up with a new model with a new gain setting?

        • Jason Brownlee, October 19, 2020 at 6:36 am

          Not sure I follow, sorry. Perhaps experiment/prototype and see if the model achieves your desired outcome/system requirements.

  10. Hiago Matheus Brajato, November 28, 2020 at 8:09 am

    Hi Jason!! For a month I have been trying to solve a speech emotion recognition problem on EMO-DB (7 classes).

    The structure of my MLP is as follows:
    1 input layer
    2 hidden layers (sigmoid activation function)
    1 output layer (sigmoid activation function)

    But I cannot get good results on my validation sets; although the problem is a little hard, my features are not so bad. I would like to ask what kind of action I could take to increase my accuracy and get a lower MSE in training. Really, thanks.

  11. Krisma Becker, June 4, 2021 at 11:19 pm

    Hi Jason,
    I’m currently working on a project that is a regression problem.
    The dataset is really small (less than 500 samples).
    In this case, I’ve built a NN with only one hidden layer of around 4 to 5 nodes.
    I would love to know if increasing the number of nodes will improve the model performance.
    I’ve tried this experiment by using more nodes and evaluating the error. From what I’ve seen, there is no trend (it zig-zags).

    Is it normal?
    How can I explain that?

    • Jason Brownlee, June 5, 2021 at 5:30 am

      Try it and see, you may need to adjust other aspects of the model like learning rate.

  12. honey, August 30, 2021 at 12:55 pm

    I want to increase the depth of a U-net model. The filters are 64, 128, 256, 512. I want to go deeper. How can I do so?

    • Adrian Tam, September 1, 2021 at 7:45 am

      Increasing depth means adding more layers. The simplest way, on a sequential model, is to call add() multiple times with Dense() layers.

  13. Heramb Skanda, December 10, 2021 at 1:47 pm

    In a dense neural network, can we make some changes to backpropagation such that only one layer’s weights and biases are changed, leaving the others unchanged?

    • Adrian Tam, December 10, 2021 at 1:57 pm

      Yes, you have to mark “layer.trainable = True” or “layer.trainable = False” for each layer.

  14. Sunila Akbar, March 6, 2022 at 4:58 am

    Hi Jason,

    Nice tutorial.

    I already have a CNN model with 7 layers: 4 convolutional and 3 fully connected. Now, if the number of input features increases from 8 to 14, should I experiment with a different filter size, more layers, or more filters/neurons per layer?
