
Web Crawling in Python

Machine learning projects cannot live without data. In the old days, collecting data was a tedious job, and sometimes very expensive. Luckily, we now have a lot of data on the web at our disposal. We can copy data from the web to create our dataset, manually downloading files and saving them to disk, but we can do this much more efficiently by automating the data harvesting. There are several tools in Python that can help with the automation.

After finishing this tutorial, you will learn:

  • How to use the requests library to read online data using HTTP
  • How to read tables on web pages using pandas
  • How to use Selenium to emulate browser operations

Kick-start your project with my new book Python for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started!

Web Crawling in Python
Photo by Ray Bilcliff. Some rights reserved.

Overview

This tutorial is divided into three parts; they are:

  • Using the requests library
  • Reading tables on the web using pandas
  • Reading dynamic content with Selenium

Using the Requests Library

When we talk about writing a Python program to read from the web, the requests library is hard to avoid. You need to install it (as well as BeautifulSoup and lxml, which we will cover later):
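For example, with pip:

    pip install requests beautifulsoup4 lxml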

It provides you with an interface that allows you to interact with the web easily.

The simplest use case is to read a web page from a URL:
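A minimal sketch of such a request; the URL below is just a placeholder for any public web page:

    import requests

    # Fetch a web page over HTTP
    URL = "https://www.example.com"
    resp = requests.get(URL)
    print(resp.status_code)   # 200 means the request was fulfilled successfully
    print(resp.text[:500])    # the beginning of the HTML returned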

If you’re familiar with HTTP, you can probably recall that a status code of 200 means the request was fulfilled successfully. Then we can read the response. In the above, we read the textual response and get the HTML of the web page. Should it be a CSV or some other textual data, we can also get it from the text attribute of the response object. For example, this is how we can read a CSV from Federal Reserve Economic Data (FRED):
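A sketch of how this could look; the fredgraph.csv endpoint and the series ID below are assumptions, so swap in the CSV URL you actually need:

    import io
    import pandas as pd
    import requests

    # Assumed FRED CSV download URL; replace the series ID as needed
    URL = "https://fred.stlouisfed.org/graph/fredgraph.csv?id=T10YIE"
    resp = requests.get(URL)
    if resp.status_code == 200:
        csvbuffer = io.StringIO(resp.text)   # wrap the textual response as a file-like object
        df = pd.read_csv(csvbuffer)
        print(df.head())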

If the data is in the form of JSON, we can read it as text or even let requests decode it for us. For example, the following pulls some data from GitHub in JSON format and converts it into a Python dictionary:
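A sketch using GitHub's public REST API; the particular endpoint below is an assumption, and any URL returning JSON works the same way:

    import requests

    # Assumed GitHub API endpoint that returns JSON
    URL = "https://api.github.com/users/octocat"
    resp = requests.get(URL)
    data = resp.json()       # requests decodes the JSON into a Python dict
    print(type(data))        # <class 'dict'>
    print(data.get("name"))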

But if the URL gives you some binary data, such as a ZIP file or a JPEG image, you need to get it from the content attribute instead, as this holds the raw bytes. For example, this is how we can download an image (the logo of Wikipedia):
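A sketch of downloading binary content; the logo URL below is an assumption, and any image or ZIP file is handled the same way:

    import requests

    # Assumed URL of the Wikipedia logo
    URL = "https://en.wikipedia.org/static/images/project-logos/enwiki.png"
    resp = requests.get(URL)
    with open("enwiki.png", "wb") as fp:
        fp.write(resp.content)   # the raw bytes live in the content attribute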

Given that we have already obtained the web page, how should we extract the data? This is beyond what the requests library can provide, but we can use a different library to help. There are two ways to do it, depending on how we want to specify the data.

The first way is to consider the HTML as a kind of XML document and use the XPath language to extract the element. In this case, we can make use of the lxml library to first create a document object model (DOM) and then search by XPath:
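A minimal sketch, assuming the page has already been fetched with requests; the weather.com URL is an assumption based on the New York City example discussed below, and it may need replacing with the exact page for your location:

    import requests
    from lxml import etree

    # Assumed URL of a weather.com page for New York City
    URL = "https://weather.com/weather/today/l/New+York+NY"
    resp = requests.get(URL, headers={"User-Agent": "Mozilla/5.0"})

    # Build a DOM from the HTML and search it with XPath
    dom = etree.HTML(resp.text)
    elements = dom.xpath(
        '//span[@data-testid="TemperatureValue" and starts-with(@class, "CurrentConditions")]'
    )
    if elements:
        print(elements[0].text)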

XPath is a string that specifies how to find an element. The lxml object provides a function xpath() to search the DOM for elements that match the XPath string, and there can be multiple matches. The XPath above means to find an HTML element anywhere with the <span> tag, with the attribute data-testid matching “TemperatureValue” and a class beginning with “CurrentConditions.” We can learn this by inspecting the HTML source in the browser’s developer tools (e.g., Chrome’s DevTools).

This example finds the temperature of New York City, as provided by this particular element on the web page. We know the first element matched by the XPath is the one we need, and we can read the text inside the <span> tag.

The other way is to use CSS selectors on the HTML document, for which we can make use of the BeautifulSoup library:
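A sketch of the equivalent query with BeautifulSoup’s select(), assuming resp holds the same page fetched above:

    from bs4 import BeautifulSoup

    # Parse the HTML with the lxml parser and query it with a CSS selector
    soup = BeautifulSoup(resp.text, "lxml")
    elements = soup.select('span[data-testid="TemperatureValue"][class^="CurrentConditions"]')
    if elements:
        print(elements[0].text)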

In the above, we first pass our HTML text to BeautifulSoup, which supports various HTML parsers, each with different capabilities. Here we use the lxml library as the parser, as recommended by BeautifulSoup (it is also often the fastest). The CSS selector is a different mini-language, with pros and cons compared to XPath. The selector above is equivalent to the XPath we used in the previous example. Therefore, we can get the same temperature from the first matched element.

The following is a complete script to print the current temperature of New York according to the real-time information on the web:
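A complete sketch along these lines, with the same assumed URL and selector as above; both may need adjusting to match the live page:

    import requests
    from bs4 import BeautifulSoup

    # Assumed URL of a weather.com page for New York City
    URL = "https://weather.com/weather/today/l/New+York+NY"

    resp = requests.get(URL, headers={"User-Agent": "Mozilla/5.0"})
    if resp.status_code == 200:
        soup = BeautifulSoup(resp.text, "lxml")
        elements = soup.select('span[data-testid="TemperatureValue"][class^="CurrentConditions"]')
        if elements:
            print("Current temperature in New York:", elements[0].text)
        else:
            print("Temperature element not found; the selector may need updating")
    else:
        print("Failed to fetch the page:", resp.status_code)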

As you can imagine, you can collect a time series of the temperature by running this script on a regular schedule. Similarly, we can collect data automatically from various websites. This is how we can obtain data for our machine learning projects.

Reading Tables on the Web Using Pandas

Very often, web pages will use tables to carry data. If the page is simple enough, we may even skip inspecting it to find out the XPath or CSS selector and use pandas to get all tables on the page in one shot. It is simple enough to be done in one line:
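A one-line sketch, assuming the Federal Reserve’s H.15 selected interest rates page (the URL is an assumption, and read_html() needs lxml or html5lib installed):

    import pandas as pd

    # Assumed URL of the Federal Reserve H.15 interest rates release
    tables = pd.read_html("https://www.federalreserve.gov/releases/h15/")
    print(len(tables))   # number of tables found on the page
    print(tables[0])     # the first table, already a DataFrame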

The read_html() function in pandas reads a URL and finds all the tables on the page. Each table is converted into a pandas DataFrame, and all of them are returned in a list. In this example, we are reading the various interest rates from the Federal Reserve, whose page happens to contain only one table. The table columns are identified by pandas automatically.

Chances are that not all tables are what we are interested in. Sometimes, the web page will use a table merely as a way to format the page, but pandas may not be smart enough to tell. Hence we need to test and cherry-pick the result returned by the read_html() function.


Reading Dynamic Content With Selenium

A significant portion of modern-day web pages is full of JavaScript. This gives us a fancier experience but becomes a hurdle when we use a program to extract data. One example is Yahoo’s home page: if we just load the page and count the news headlines, we find far fewer than what we can see in the browser:
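A sketch of what such a check could look like; both the Yahoo URL and the headline selector below are assumptions and may need adjusting as the page’s markup changes:

    import requests
    from bs4 import BeautifulSoup

    # Fetch Yahoo's home page without running any JavaScript
    resp = requests.get("https://www.yahoo.com/", headers={"User-Agent": "Mozilla/5.0"})
    soup = BeautifulSoup(resp.text, "lxml")

    # Hypothetical selector for headline links
    headlines = soup.select("h3 a")
    print(len(headlines), "headlines found without JavaScript")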

This is because web pages like this rely on JavaScript to populate the content. Popular web frameworks such as AngularJS or React power this category of pages. Python libraries such as requests do not understand JavaScript, so you will see a different result. If the data you want to fetch is rendered this way, you can study how the JavaScript is invoked and mimic the browser’s behavior in your program, but that is probably too tedious to be practical.

The other way is to ask a real browser to read the web page rather than using requests. This is what Selenium can do. Before we can use it, we need to install the library:
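For example, with pip:

    pip install selenium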

But Selenium is only a framework to control browsers. You need to have the browser installed on your computer, as well as the driver that connects Selenium to the browser. If you intend to use Chrome, you need to download and install ChromeDriver too, and put the driver in the executable path so that Selenium can invoke it like a normal command. For example, on Linux, you just need to extract the chromedriver executable from the downloaded ZIP file and put it in /usr/local/bin.

Similarly, if you’re using Firefox, you need the GeckoDriver. For more details on setting up Selenium, you should refer to its documentation.

Afterward, you can use a Python script to control the browser behavior. For example:
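A sketch of such a script, using headless Chrome and Yahoo’s home page as described below; the XPath for the headlines is a guess and may need updating for the current markup:

    import time
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait

    # Launch Chrome without showing a window
    options = Options()
    options.add_argument("--headless")
    browser = webdriver.Chrome(options=options)

    # Load the page and wait up to 30 seconds until it reports it is ready
    browser.get("https://www.yahoo.com/")
    WebDriverWait(browser, 30).until(
        lambda drv: drv.execute_script("return document.readyState") == "complete"
    )

    # Scroll to the bottom to trigger JavaScript that loads more content, then wait again
    browser.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(1)
    WebDriverWait(browser, 30).until(
        lambda drv: drv.execute_script("return document.readyState") == "complete"
    )

    # Hypothetical XPath for headline links
    elements = browser.find_elements(By.XPATH, "//h3/a")
    for elem in elements:
        print(elem.text)

    # The browser is an external program, so close it explicitly
    browser.quit()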

The above code works as follows. We first launch the browser in headless mode, meaning we ask Chrome to start but not display on the screen. This is important if we want to run our script remotely as there may not be any GUI support. Note that every browser is developed differently, and thus the options syntax we used is specific to Chrome. If we use Firefox, the code would be this instead:
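A sketch of the Firefox equivalent for launching the browser; only the launch part differs, and the rest of the script stays the same:

    from selenium import webdriver
    from selenium.webdriver.firefox.options import Options

    # Launch Firefox without showing a window
    options = Options()
    options.add_argument("-headless")
    browser = webdriver.Firefox(options=options)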

After we launch the browser, we give it a URL to load. But since it takes time for the network to deliver the page and for the browser to render it, we should wait until the browser is ready before we proceed to the next operation. We detect whether the browser has finished rendering by running JavaScript: we make Selenium execute a piece of JavaScript for us and report the result using the execute_script() function, and we leverage Selenium’s WebDriverWait tool to retry it until it succeeds or a 30-second timeout is reached. Once the page is loaded, we scroll to the bottom of the page so the JavaScript can be triggered to load more content. Then we wait one second unconditionally to make sure the browser has triggered the JavaScript, and wait again until the page is ready. Afterward, we can extract the news headline elements using XPath (or alternatively using a CSS selector). Because the browser is an external program, we are responsible for closing it in our script.

Using Selenium is different from using the requests library in several aspects. First, you never have the web content in your Python code directly. Instead, you refer to the browser’s content whenever you need it. Hence the web elements returned by the find_elements() function refer to objects inside the external browser, so we must not close the browser before we finish consuming them. Second, all operations are based on browser interaction rather than network requests, so you need to control the browser by emulating keyboard and mouse movements. But in return, you have a full-featured browser with JavaScript support. For example, you can use JavaScript to check the size and position of an element on the page, which you will know only after the HTML elements are rendered.

There are a lot more functions provided by the Selenium framework than we can cover here. It is powerful, but since it is connected to the browser, using it is more demanding and much slower than the requests library. Usually, this is the last resort for harvesting information from the web.

Further Reading

Another famous web crawling library in Python that we didn’t cover above is Scrapy. It is like combining the requests library with BeautifulSoup into one. The web protocol is complex: sometimes we need to manage web cookies or provide extra data to the request using the POST method. All these can be done with the requests library using different functions or extra arguments. The following are some resources for you to go deeper:

Articles

API documentation

Books

Summary

In this tutorial, you saw the tools we can use to fetch content from the web.

Specifically, you learned:

  • How to use the requests library to send the HTTP request and extract data from its response
  • How to build a document object model from HTML so we can find some specific information on a web page
  • How to read tables on a web page quickly and easily using pandas
  • How to use Selenium to control a browser to tackle dynamic content on a web page

