Overview of Some Deep Learning Libraries

Machine learning is a broad topic. Deep learning, in particular, is a way of using neural networks for machine learning. The neural network is probably an older concept than machine learning itself, dating back to the 1950s. Unsurprisingly, many libraries have been created for it over the years.

The following aims to give an overview of some of the famous libraries for neural networks and deep learning.

After finishing this tutorial, you will learn:

  • Some of the deep learning or neural network libraries
  • The functional difference between two common libraries, PyTorch and TensorFlow

Let’s get started.

Overview of some deep learning libraries
Photo by Francesco Ungaro. Some rights reserved.


This tutorial is in three parts; they are:

  • The C++ Libraries
  • Python Libraries
  • PyTorch and TensorFlow

The C++ Libraries

Deep learning has gained attention in the last decade. Before that, there was little confidence in how to train a neural network with many layers. However, the knowledge of how to build a multilayer perceptron had been around for many years.

Before we had deep learning, probably the most famous neural network library was libann. It is a C++ library, and its functionality is limited due to its age; development on it has since stopped. A newer C++ library is OpenNN, which allows modern C++ syntax.

But that’s pretty much all for C++. The rigid syntax of C++ may be why we do not see many deep learning libraries for it. The training phase of a deep learning project is about experimentation, so we want tools that let us iterate faster, and a dynamic programming language can be a better fit. That is why Python came onto the scene.

Python Libraries

One of the earliest libraries for deep learning is Caffe. It was developed at U.C. Berkeley specifically for computer vision problems. While it is written in C++, it provides a Python interface. Hence we can build a project in Python, with the network defined in a JSON-like syntax.

Chainer is another Python library. It is an influential one because its syntax makes a lot of sense. While it is less common nowadays, the APIs of Keras and PyTorch bear a resemblance to Chainer’s. The following is an example from Chainer’s documentation, and you may mistake it for Keras or PyTorch:

Another obsolete library is Theano. It has ceased development, but it was once a major library for deep learning. In fact, earlier versions of the Keras library allowed you to choose between a Theano or TensorFlow backend. Strictly speaking, neither Theano nor TensorFlow is a deep learning library; rather, they are tensor libraries that make matrix operations and differentiation handy, upon which deep learning operations can be built. Hence the two are considered interchangeable from Keras’s perspective.

CNTK from Microsoft and Apache MXNet are two other libraries worth mentioning. They are large frameworks with interfaces for multiple languages, Python being one of them. CNTK has C# and C++ interfaces, while MXNet provides interfaces for Java, Scala, R, Julia, C++, Clojure, and Perl. Microsoft, however, has recently decided to stop developing CNTK. MXNet still has some momentum and is probably the most popular library after TensorFlow and PyTorch.

Below is an example of using MXNet; the same style carries over to its R interface. Conceptually, you can see the syntax is similar to Keras’s functional API:

PyTorch and TensorFlow

PyTorch and TensorFlow are the two major libraries nowadays. In the past, when TensorFlow was in version 1.x, they were vastly different. But since TensorFlow absorbed Keras into its library, the two libraries now work mostly in similar ways.

PyTorch is backed by Facebook, and its syntax has been stable over the years. There are also a lot of existing models that we can borrow. The common way of defining a deep learning model in PyTorch is to create a class:
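For example, a minimal sketch of a 784-input, 10-class multilayer perceptron (the sizes are illustrative):

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """A small multilayer perceptron: 784 inputs, 10 output logits."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        # Layers are declared in __init__ and chained here
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

model = MLP()
```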

But there is also a sequential syntax to make the code more concise:
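A minimal sequential sketch of the same kind of network (sizes illustrative):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 128),  # input to hidden layer
    nn.ReLU(),            # activation as its own layer
    nn.Linear(128, 10),   # hidden layer to output logits
)
```

The layers run in the order listed, so no `forward()` method needs to be written.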

TensorFlow in version 2.x adopted Keras as part of its library. In the past, the two were separate projects. In TensorFlow 1.x, we needed to build a computation graph, set up a session, and run the session to derive gradients for the deep learning model, which was rather verbose. Keras was designed as a library to hide all these low-level details.

The same network as above can be produced by TensorFlow’s Keras syntax as follows:
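A minimal tf.keras sketch, assuming the same 784-128-10 structure:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),              # declare the input size
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),                 # output logits
])
```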

One major difference between PyTorch and Keras syntax is in the training loop. In Keras, we just need to assign the loss function, the optimization algorithm, the dataset, and some other parameters to the model. Then we have a fit() function to do all the training work, as follows:
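A sketch of this workflow, using randomly generated stand-in data (the dataset, layer sizes, and hyperparameters are all illustrative):

```python
import numpy as np
import tensorflow as tf

# Hypothetical random data standing in for a real dataset
X_train = np.random.rand(100, 784).astype("float32")
y_train = np.random.randint(0, 10, size=(100,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

# Assign loss, optimizer, and metrics, then let fit() run the training
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
history = model.fit(X_train, y_train, epochs=2, batch_size=32, verbose=0)
```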

But in PyTorch, we need to write our own training loop code:
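A minimal sketch of such a training loop, with randomly generated stand-in data (batch size and epoch count are illustrative):

```python
import torch
import torch.nn as nn

# Hypothetical random data standing in for a real dataset
X = torch.rand(100, 784)
y = torch.randint(0, 10, (100,))

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(2):
    for i in range(0, len(X), 32):          # iterate over mini-batches
        xb, yb = X[i:i+32], y[i:i+32]
        optimizer.zero_grad()               # clear old gradients
        loss = loss_fn(model(xb), yb)       # forward pass and loss
        loss.backward()                     # backpropagate
        optimizer.step()                    # update the weights
```

Every step that Keras’s `fit()` performs internally is written out explicitly here.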

This may not be an issue if you are experimenting with a new network design, in which you want more control over how the loss is calculated and how the optimizer updates the model weights. But otherwise, you will appreciate the simpler syntax of Keras.

Note that both PyTorch and TensorFlow are libraries with a Python interface. Therefore, it is possible to have an interface for other languages too. For example, there are Torch for R and TensorFlow for R.

Also, note that the libraries mentioned above are full-featured libraries that include both training and prediction. If you consider a production environment where you only make use of a trained model, there is a wider choice. TensorFlow has a “TensorFlow Lite” counterpart that allows a trained model to run on mobile devices or the web. Intel also has the OpenVINO library, which aims to optimize prediction performance.



In this post, you discovered various deep learning libraries and some of their characteristics. Specifically, you learned:

  • What are the libraries available for C++ and Python
  • How the Chainer library influenced the syntax in building a deep learning model
  • The relationship between Keras and TensorFlow 2.x
  • What are the differences between PyTorch and TensorFlow

5 Responses to Overview of Some Deep Learning Libraries

  1. Ciprian July 2, 2022 at 3:06 pm #

    Great overview!

    • James Carmichael July 3, 2022 at 12:58 pm #

      Thank you for the feedback Ciprian!

  2. Kevin July 4, 2022 at 11:18 pm #

    Thanks for the overview with sample code snippets!

  3. Romeo December 16, 2022 at 9:36 am #

    Thank you for the great post!

    These libraries are rapidly moving away from academic institutions to being backed by commercial companies. This left me wondering: as deep learning moves from research to production, should we expect a need for more C++ and Java libraries for machine learning? Or, conversely, if there were more C++ and Java libraries, would more companies pick up machine learning solutions? Overall, I’m looking forward to looking into more of those production environment libraries you mentioned. –Romeo

    • James Carmichael December 17, 2022 at 7:57 am #

      You raise some interesting questions Romeo! The answers depend upon specific goals (i.e. research, product development, education…etc).
