7 Ways to Handle Large Data Files for Machine Learning

Exploring and applying machine learning algorithms to datasets that are too large to fit into memory is pretty common.

This leads to questions like:

  • How do I load my multiple gigabyte data file?
  • Algorithms crash when I try to run my dataset; what should I do?
  • Can you help me with out-of-memory errors?

In this post, I want to offer some common suggestions you may want to consider.

Photo by Gareth Thompson, some rights reserved.

1. Allocate More Memory

Some machine learning tools or libraries may be limited by a default memory configuration.

Check if you can re-configure your tool or library to allocate more memory.

A good example is Weka, where you can increase the memory as a parameter when starting the application.

2. Work with a Smaller Sample

Are you sure you need to work with all of the data?

Take a random sample of your data, such as 1,000 or 100,000 rows. Use this smaller sample to work through your problem before fitting a final model on all of your data (using progressive data loading techniques).

I think this is a good practice in general for machine learning, as it gives you quick spot-checks of algorithms and fast turnaround of results.

You may also consider performing a sensitivity analysis of the amount of data used to fit one algorithm compared to the model skill. Perhaps there is a natural point of diminishing returns that you can use as a heuristic for the size of your smaller sample.
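As a rough sketch, here is one way to draw a random sample with Pandas, assuming the file (a placeholder data.csv here) can at least be read once:

import pandas as pd

# read the full file once, then keep a random sample of 100,000 rows
df = pd.read_csv("data.csv")
sample = df.sample(n=100000, random_state=1)

# save the smaller sample for quick experiments
sample.to_csv("data_sample.csv", index=False)

If even a single full read is too slow, the skiprows argument of read_csv can be given a function that randomly skips rows while the file is being parsed.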

3. Use a Computer with More Memory

Do you have to work on your computer?

Perhaps you can get access to a much larger computer with an order of magnitude more memory.

A good option is to rent compute time on a cloud service like Amazon Web Services, which offers machines with tens of gigabytes of RAM for less than a US dollar per hour.

I have found this approach very useful in the past.


4. Change the Data Format

Is your data stored in raw ASCII text, like a CSV file?

Perhaps you can speed up data loading and use less memory by using another data format. A good example is a binary format like GRIB, NetCDF, or HDF.

There are many command line tools that you can use to transform one data format into another that do not require the entire dataset to be loaded into memory.

Using another format may allow you to store the data in a more compact form that saves memory, such as 2-byte integers, or 4-byte floats.
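As a sketch, the snippet below converts a CSV file to a binary HDF5 file with Pandas and downcasts the values to 4-byte floats; the file names are placeholders, the columns are assumed to be all numeric, and to_hdf requires the PyTables package:

import pandas as pd

# load the ASCII CSV once and downcast to 4-byte floats
df = pd.read_csv("data.csv")
df = df.astype("float32")

# save a compact binary copy that is faster to reload
df.to_hdf("data.h5", key="data", mode="w")

# later, reload the binary copy instead of re-parsing the CSV
df = pd.read_hdf("data.h5", key="data")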

5. Stream Data or Use Progressive Loading

Does all of the data need to be in memory at the same time?

Perhaps you can use code or a library to stream or progressively load data into memory as needed for training.

This may require algorithms that can learn iteratively using optimization techniques such as stochastic gradient descent, instead of algorithms that require all data in memory to perform matrix operations, such as some implementations of linear and logistic regression.

For example, the Keras deep learning library offers the flow_from_directory function for progressively loading image files.

Another example is the Pandas library, which can load large CSV files in chunks.
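As a sketch of the Pandas approach, the snippet below reads a large CSV in chunks and updates a scikit-learn model incrementally via partial_fit; the file name and the assumption that the last column holds a 0/1 class label are hypothetical:

import pandas as pd
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()
classes = [0, 1]  # partial_fit needs the full set of class labels up front

# read 10,000 rows at a time instead of the whole file
for chunk in pd.read_csv("large_data.csv", chunksize=10000):
    X = chunk.iloc[:, :-1].values
    y = chunk.iloc[:, -1].values
    model.partial_fit(X, y, classes=classes)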

6. Use a Relational Database

Relational databases provide a standard way of storing and accessing very large datasets.

Internally, the data is stored on disk, can be progressively loaded in batches, and can be queried using a standard query language (SQL).

Free open source database tools like MySQL or Postgres can be used, and most (all?) programming languages and many machine learning tools can connect directly to relational databases. You can also use a lightweight approach, such as SQLite.

I have found this approach to be very effective in the past for very large tabular datasets.

Again, you may need to use algorithms that can handle iterative learning.
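For example, here is a minimal sketch that pulls batches of rows from a SQLite database with Pandas instead of loading the whole table at once; the data.db file and the measurements table are hypothetical:

import sqlite3
import pandas as pd

conn = sqlite3.connect("data.db")

# query the table in batches of 10,000 rows
for batch in pd.read_sql_query("SELECT * FROM measurements", conn, chunksize=10000):
    # fit or update an iterative model on each batch here
    print(batch.shape)

conn.close()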

7. Use a Big Data Platform

In some cases, you may need to resort to a big data platform.

That is, a platform designed for handling very large datasets, one that lets you run data transforms and machine learning algorithms on top of it.

Two good examples are Hadoop with the Mahout machine learning library and Spark with the MLlib library.
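As a rough sketch of the Spark route, the snippet below fits a logistic regression with MLlib through PySpark; the file path and the column names x1, x2, and label are placeholders:

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("large-data").getOrCreate()

# Spark reads and processes the file in a distributed fashion
df = spark.read.csv("large_data.csv", header=True, inferSchema=True)

# MLlib expects the features combined into a single vector column
assembler = VectorAssembler(inputCols=["x1", "x2"], outputCol="features")
train = assembler.transform(df).select("features", "label")

model = LogisticRegression().fit(train)
spark.stop()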

I do believe that this is a last resort when you have exhausted the above options, if only for the additional hardware and software complexity this brings to your machine learning project.

Nevertheless, there are problems where the data is very large and the previous options will not cut it.

Summary

In this post, you discovered a number of tactics that you can use when dealing with very large data files for machine learning.

Are there other methods that you know about or have tried?
Share them in the comments below.

Have you tried any of these methods?
Let me know in the comments.

11 Responses to 7 Ways to Handle Large Data Files for Machine Learning

  1. Chris May 29, 2017 at 6:59 pm #

    If the raw data is separated by line breaks, such as CSV, EDIFACT, etc., then there is a feature in almost every language I am aware of that will read only 1 line at a time using a socket stream. That is typically how any (buzzword alert) big data solution does it under the hood; nothing magic, hard, or revolutionary about it. Actually, you'll find pretty much any simple GitHub repo doing it if they read files.
    Any beginner coder should encounter this, and universities should absolutely be teaching such a basic concept in any computer science-related degree where you are required to read from a file.

    Just thought I'd shed some light on this fact: the 7 ways are actually 7 things that, if you see them as examples in blog posts, mean you should immediately leave the site and never return.

    • Jason Brownlee June 2, 2017 at 12:24 pm #

      Thanks for the input, Chris. There are a lot of different types of machine learning practitioners out there.

    • MicrobicTiger June 2, 2017 at 1:24 pm #

      Hi Chris,

      What if your data were geographic points with values, where each line represented a different point, and you were looking to recognize patterns across clusters of points with varying cluster geometries? How would line-by-line source data reading help me here?

  2. felipe almeida May 30, 2017 at 4:43 pm #

    Some of the tips are a little bit obvious, but overall it's good. You could give more examples in each topic, such as "use file format X for cases like Y". Also, you could mention things like using stochastic gradient descent or other kinds of online learning, where you feed the examples one at a time.

  3. felipe almeida May 30, 2017 at 4:45 pm #

    Oh yeah, you could also mention using sparse (rather than dense) matrices, as they take much less space and some algorithms (like SVM) can handle sparse feature matrices directly. Here’s a link explaining that for sklearn.

  4. Peter Marelas May 30, 2017 at 9:29 pm #

    A few things I would suggest if you are a python user.

    For out-of-core pre-processing:

    – Transform the data using a dask dataframe or array (it can read various formats: CSV, etc.)
    – Once you are done, save the dask dataframe or array to a parquet file for future out-of-core pre-processing (see pyarrow)

    For in-memory processing:

    – Use smaller data types where you can, i.e. int8, float16, etc.
    – If it still doesn't fit in memory, convert the dask dataframe to a sparse pandas dataframe

    For Big Data, try Greenplum (free) https://greenplum.org/. It is a derivative of Postgres. The benefit is that queries are processed across cores in parallel. It also has a mature machine learning plugin called MADlib.

  5. Lee Zee June 20, 2017 at 4:37 am #

    Can feature selection applications identify features that are composed of parts of multiple columns in a large dataset? Or will each identified predictive feature be restricted to data from a single column?

    • Jason Brownlee June 20, 2017 at 6:42 am #

      Often they focus on single columns. Perhaps you can dip into research and find some more complex methods.
