
# How to Read and Display Videos Using OpenCV

Digital videos are close relatives of digital images because they are, indeed, made up of many digital images that are sequentially displayed in rapid succession to create the effect of moving visual data.

The OpenCV library provides several methods to work with videos, such as reading video data from different sources as well as accessing several of their properties.

In this tutorial, you will familiarise yourself with the most basic OpenCV operations that are essential when working with videos.

After completing this tutorial, you will know:

• How a digital video is formulated as a close relative of digital images.
• How the image frames comprising a video are read from a camera.
• How the image frames comprising a video are read from a saved video file.

Let’s get started.

Reading and Displaying Videos Using OpenCV. Photo by Thomas William, some rights reserved.

## Tutorial Overview

This tutorial is divided into three parts; they are:

• How is a Video Formulated?
• Reading and Displaying Image Frames From a Camera
• Reading and Displaying Image Frames From a Video File

## How is a Video Formulated?

We have seen that a digital image is made up of pixels, with each pixel being characterised by its spatial coordinates inside the image space, and its intensity or gray level value.

We have also mentioned that a grayscale image, comprising a single channel, can be described by a 2D function, I(x, y), where x and y denote the aforementioned spatial coordinates, and the value of I at any image position (x, y) denotes the pixel intensity.

An RGB image, in turn, can be described by three of these 2D functions, IR(x, y), IG(x, y) and IB(x, y), corresponding to its Red, Green and Blue channels, respectively.

In describing digital video we shall be adding in an extra dimension, t, which denotes time. The reason for doing so is that digital video is, in fact, made up of digital images that are sequentially displayed in rapid succession across a period of time. Within the context of video, we shall be referring to these images as image frames. The rate at which frames are displayed in succession is referred to as frame rate and is measured in frames per second, or FPS in short.

Hence, if we had to pick an image frame out of a grayscale video at a specific time instance, t, we would describe it by the function, I(x, y, t), which now includes a temporal dimension.

Similarly, if we had to pick an image frame out of an RGB video at a specific time instance, t, we would describe it by three functions, IR(x, y, t), IG(x, y, t) and IB(x, y, t), corresponding to its Red, Green and Blue channels, respectively.

Our formulation tells us that the data contained in digital video is time-dependent, which means that the data changes over time.

In simpler terms, this means that the intensity value of a pixel with coordinates (x, y) at time instance, t, will likely be different from its intensity value at another time instance, (t + 1). This change in intensity values might be coming from the fact that the physical scene that is being recorded is in itself changing, but also from the presence of noise in the video data (originating, for instance, from the camera sensor itself).

## Reading and Displaying Image Frames From a Camera

In order to read image frames either from a camera that is connected to your computer, or a video file that is stored on your hard disk, our first step will be to create a VideoCapture object to work with. The required argument is either the index value of type int corresponding to the camera to read from, or the video file name.

Let’s start first by grabbing image frames from a camera.

If you have a webcam built into or connected to your computer, you may index it by a value of 0. If you have additional cameras connected that you wish to read from, you may index them with values of 1, 2, etc., depending on how many cameras are available.

Before attempting to read and display image frames, it would be sensible to check that a connection to the camera has been established successfully. The capture.isOpened() method can be used for this purpose; it returns False if the connection could not be established:

If the camera has been connected successfully, we may proceed to read the image frames using the capture.read() method as follows:

This method returns the next image frame in frame, together with a boolean value ret that is True if an image frame has been successfully grabbed or, conversely, False if the method has returned an empty image. The latter can happen if, for instance, the camera has been disconnected.

Displaying the grabbed image frame works in the same way as we had done for the still images, using the imshow method:

Always keep in mind that when working with OpenCV, each image frame is read in BGR color format.

In the complete code listing, we’re going to place the code above inside a while loop that will keep on grabbing image frames from the camera until the user terminates it. For the purpose of letting the user terminate the while loop, we will include the following two lines of code:

Here, the waitKey function stops and waits for a keyboard event for the specified number of milliseconds. It returns the code of the pressed key, or -1 if no keyboard event occurs before the specified time elapses. In our particular case, we have specified a time window of 25ms, and we are checking for an ASCII code of 27, which corresponds to the Esc key. When the Esc key is pressed, the while loop is terminated by a break command.

The very last lines of code that we shall be including serve to stop the video capture, deallocate the memory, and close the window being used for image display:

On some laptop computers, you will see a small LED light up next to the built-in webcam while the video capture is in use. The LED stays on even if your program is no longer reading from the camera; you need to release the video capture to turn it off. You also need to release the video capture before another program can use your webcam.

The complete code listing is as follows:

## Reading and Displaying Image Frames From a Video File

It is, alternatively, possible to read image frames from a video file stored on your hard disk. OpenCV supports many video formats. For this purpose, we are going to modify our code to specify a path to a video file rather than an index to a camera.

I have downloaded this video, renamed it to Iceland.mp4, and saved it to a local folder called Videos.

I can see from the video properties displayed on my local drive that the video is made up of image frames of dimensions 1920 x 1080 pixels and that it runs at a frame rate of 25 fps.

In order to read the image frames of this video, we shall modify the line of code that creates the VideoCapture object as follows:

It is also possible to get several properties of the capture object, such as the image frames’ width and height, as well as the frame rate:

The complete code listing is as follows:

Video has a time dimension, but in OpenCV you deal with one frame at a time. This makes video processing consistent with image processing, so you can reuse techniques from one in the other.

We may include other lines of code inside the while loop to process every image frame after it has been grabbed by the capture.read() method. One example is to convert each BGR image frame into grayscale, for which we may use the same cvtColor method that we used for converting still images:

What other transformations can you think of to apply to the image frames?

## Further Reading

This section provides more resources on the topic if you are looking to go deeper.

## Summary

In this tutorial, you familiarised yourself with the most basic OpenCV operations that are essential when working with videos.

Specifically, you learned:

• How a digital video is formulated as a close relative of digital images.
• How the image frames comprising a video are read from a camera.
• How the image frames comprising a video are read from a saved video file.

Do you have any questions?