Last Updated on June 21, 2022
When you work on a computer vision project, you probably need to preprocess a lot of image data. This is time-consuming, and it would be great if you could process multiple images in parallel. Multiprocessing is the ability of a system to run multiple processes at the same time. If you had a computer with a single processor, it would switch between processes to keep all of them running. However, most computers today have at least a multi-core processor, allowing several processes to be executed at once. The Python `multiprocessing` module lets you increase your scripts' efficiency by allocating tasks to different processes.
After completing this tutorial, you will know:
- Why we would want to use multiprocessing
- How to use basic tools in the Python multiprocessing module
Kick-start your project with my new book Python for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
Multiprocessing in Python
Photo by Thirdman. Some rights reserved.
Overview
This tutorial is divided into four parts; they are:
- Benefits of multiprocessing
- Basic multiprocessing
- Multiprocessing for real use
- Using joblib
Benefits of Multiprocessing
You may ask, “Why Multiprocessing?” Multiprocessing can make a program substantially more efficient by running multiple tasks in parallel instead of sequentially. A similar term is multithreading, but they are different.
A process is a program loaded into memory to run and does not share its memory with other processes. A thread is an execution unit within a process. Multiple threads run in a process and share the process’s memory space with each other.
Python's Global Interpreter Lock (GIL) allows only one thread to run under the interpreter at a time, which means you can't enjoy the performance benefit of multithreading if the Python interpreter is required. This is what gives multiprocessing an upper hand over threading in Python. Multiple processes can run in parallel because each process has its own interpreter that executes the instructions allocated to it. Also, the OS sees your program as multiple processes and schedules them separately, i.e., your program gets a larger share of computer resources in total. So multiprocessing is faster when the program is CPU-bound. In cases where there is a lot of I/O in your program, threading may be more efficient because most of the time, your program is waiting for the I/O to complete. As a rule of thumb, prefer multiprocessing for CPU-bound work and threading for I/O-bound work.
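To make the memory-sharing difference concrete, here is a minimal sketch (not from the original post; the `increment` function is made up for illustration). A thread shares the parent's memory, so its change to a global variable is visible, while a child process works on its own copy, so its change is invisible to the parent:

```python
import multiprocessing
import threading

counter = 0

def increment():
    global counter
    counter += 1

if __name__ == "__main__":
    # A thread shares the process's memory, so the change is visible
    t = threading.Thread(target=increment)
    t.start()
    t.join()
    print(counter)  # 1

    # A child process gets its own copy of memory,
    # so the parent's counter is unchanged
    p = multiprocessing.Process(target=increment)
    p.start()
    p.join()
    print(counter)  # still 1
```

This is also why multithreaded code must guard shared variables against race conditions, while processes need explicit channels to exchange data.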
Basic Multiprocessing
Let's use the Python `multiprocessing` module to write a basic program demonstrating concurrent programming.
Let's look at this function, `task()`, which sleeps for 0.5 seconds and prints messages before and after the sleep:
```python
import time

def task():
    print('Sleeping for 0.5 seconds')
    time.sleep(0.5)
    print('Finished sleeping')
```
To create a process, we simply use the `multiprocessing` module:
```python
...
import multiprocessing

p1 = multiprocessing.Process(target=task)
p2 = multiprocessing.Process(target=task)
```
The `target` argument to `Process()` specifies the function that the process runs. These processes do not run until we start them:
```python
...
p1.start()
p2.start()
```
A complete concurrent program would be as follows:
```python
import multiprocessing
import time

def task():
    print('Sleeping for 0.5 seconds')
    time.sleep(0.5)
    print('Finished sleeping')

if __name__ == "__main__":
    start_time = time.perf_counter()

    # Create two processes
    p1 = multiprocessing.Process(target=task)
    p2 = multiprocessing.Process(target=task)

    # Start both processes
    p1.start()
    p2.start()

    finish_time = time.perf_counter()
    print(f"Program finished in {finish_time-start_time} seconds")
```
We must fence our main program under `if __name__ == "__main__"`, or otherwise the `multiprocessing` module will complain. This safety construct ensures that when Python re-imports your script to set up a new process, the process-creation code is not run again recursively.
However, there is a problem with the code: the program timer prints before the processes we created have even executed. Here's the output of the code above:
```
Program finished in 0.012921249988721684 seconds
Sleeping for 0.5 seconds
Sleeping for 0.5 seconds
Finished sleeping
Finished sleeping
```
We need to call the `join()` function on the two processes so that the main process waits for them to finish before the time prints. This is because three processes are going on: `p1`, `p2`, and the main process. The main process is the one that keeps track of the time and prints the time taken to execute. The line computing `finish_time` should run no earlier than the processes `p1` and `p2` have finished. We just need to add this snippet of code immediately after the `start()` function calls:
```python
...
p1.join()
p2.join()
```
The `join()` function makes the calling process wait until the process it is called on has completed. Here's the output with the join statements added:
```
Sleeping for 0.5 seconds
Sleeping for 0.5 seconds
Finished sleeping
Finished sleeping
Program finished in 0.5688213340181392 seconds
```
With similar reasoning, we can make more processes run. The following is the complete code modified from above to have 10 processes:
```python
import multiprocessing
import time

def task():
    print('Sleeping for 0.5 seconds')
    time.sleep(0.5)
    print('Finished sleeping')

if __name__ == "__main__":
    start_time = time.perf_counter()
    processes = []

    # Create 10 processes, then start them
    for i in range(10):
        p = multiprocessing.Process(target=task)
        p.start()
        processes.append(p)

    # Join all the processes
    for p in processes:
        p.join()

    finish_time = time.perf_counter()
    print(f"Program finished in {finish_time-start_time} seconds")
```
Multiprocessing for Real Use
Starting a new process and then joining it back to the main process is how multiprocessing works in Python (as in many other languages). The reason we want to run multiprocessing is probably to execute many different tasks concurrently for speed. It can be an image processing function, which we need to do on thousands of images. It can also be to convert PDFs into plaintext for the subsequent natural language processing tasks, and we need to process a thousand PDFs. Usually, we will create a function that takes an argument (e.g., filename) for such tasks.
Let’s consider a function:
```python
def cube(x):
    return x**3
```
If we want to run it with arguments 1 to 999, we can create one process per argument and run them all in parallel:
```python
import multiprocessing

def cube(x):
    return x**3

if __name__ == "__main__":
    # this does not work
    processes = [multiprocessing.Process(target=cube, args=(x,)) for x in range(1,1000)]
    [p.start() for p in processes]
    result = [p.join() for p in processes]
    print(result)
```

However, this will not work, as you probably have only a handful of cores in your computer. Running hundreds of processes creates too much overhead and overwhelms the capacity of your OS. You may also exhaust your memory. The better way is to use a process pool to limit the number of processes running at a time:
```python
import multiprocessing
import time

def cube(x):
    return x**3

if __name__ == "__main__":
    pool = multiprocessing.Pool(3)
    start_time = time.perf_counter()
    processes = [pool.apply_async(cube, args=(x,)) for x in range(1,1000)]
    result = [p.get() for p in processes]
    finish_time = time.perf_counter()
    print(f"Program finished in {finish_time-start_time} seconds")
    print(result)
```
The argument to `multiprocessing.Pool()` is the number of processes to create in the pool. If omitted, Python makes it equal to the number of cores in your computer.
We use the `apply_async()` function to pass the arguments to the function `cube` in a list comprehension. This creates tasks for the pool to run. It is called "async" (asynchronous) because we don't wait for the task to finish, and the main process may continue to run. Therefore, `apply_async()` does not return the result but an object on which we can call `get()` to wait for the task to finish and retrieve the result. Since we collect the results in a list comprehension, their order corresponds to the order of the arguments we created the asynchronous tasks with. However, this does not mean the processes start or finish in this order inside the pool.
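To see this ordering guarantee in action, here is a small sketch (the `slow_square` function is made up for illustration) where larger arguments finish sooner, yet collecting the results with `get()` still yields them in submission order:

```python
import multiprocessing
import time

def slow_square(x):
    # Larger arguments sleep less, so they finish first
    time.sleep(0.1 * (5 - x))
    return x * x

if __name__ == "__main__":
    with multiprocessing.Pool(5) as pool:
        tasks = [pool.apply_async(slow_square, args=(x,)) for x in range(5)]
        result = [t.get() for t in tasks]
    print(result)  # [0, 1, 4, 9, 16]: ordered by argument, not by completion time
```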
If you think writing lines of code to start processes and join them is too explicit, you can consider using `map()` instead:
```python
import multiprocessing
import time

def cube(x):
    return x**3

if __name__ == "__main__":
    pool = multiprocessing.Pool(3)
    start_time = time.perf_counter()
    result = pool.map(cube, range(1,1000))
    finish_time = time.perf_counter()
    print(f"Program finished in {finish_time-start_time} seconds")
    print(result)
```
We don't have the start and join here because they are hidden behind the `pool.map()` function. What it does is split the iterable `range(1,1000)` into chunks and run each chunk in the pool. The map function is a parallel version of the list comprehension:
```python
result = [cube(x) for x in range(1,1000)]
```
But the modern-day alternative is to use `map` from `concurrent.futures`, as follows:
```python
import concurrent.futures
import time

def cube(x):
    return x**3

if __name__ == "__main__":
    with concurrent.futures.ProcessPoolExecutor(3) as executor:
        start_time = time.perf_counter()
        result = list(executor.map(cube, range(1,1000)))
        finish_time = time.perf_counter()
    print(f"Program finished in {finish_time-start_time} seconds")
    print(result)
```
This code uses the `multiprocessing` module under the hood. The beauty of doing so is that we can change the program from multiprocessing to multithreading by simply replacing `ProcessPoolExecutor` with `ThreadPoolExecutor`. Of course, you have to consider whether the global interpreter lock is an issue for your code.
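As a quick sketch of that swap (reusing the same `cube` function, though such a cheap CPU-bound function gains no speedup from threads because of the GIL), the only change needed is the executor class:

```python
import concurrent.futures

def cube(x):
    return x**3

# Threads share one interpreter; no __main__ guard is needed here
with concurrent.futures.ThreadPoolExecutor(3) as executor:
    result = list(executor.map(cube, range(1, 1000)))

print(result[:3])  # [1, 8, 27]
```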
Using joblib
The package `joblib` is a set of tools to make parallel computing easier. It is a common third-party library for multiprocessing, and it also provides caching and serialization functions. To install `joblib`, use the following command in the terminal:
```shell
pip install joblib
```
We can convert our previous example to use `joblib` as follows:
```python
import time
from joblib import Parallel, delayed

def cube(x):
    return x**3

start_time = time.perf_counter()
result = Parallel(n_jobs=3)(delayed(cube)(i) for i in range(1,1000))
finish_time = time.perf_counter()
print(f"Program finished in {finish_time-start_time} seconds")
print(result)
```
Indeed, it is intuitive to see what it does. The `delayed()` function is a wrapper around another function that creates a "delayed" version of the function call, meaning it does not execute the function immediately when called. Then we call the delayed function multiple times with the different sets of arguments we want to pass to it. For example, when we give the integer `1` to the delayed version of the function `cube`, instead of computing the result, we produce a tuple, `(cube, (1,), {})`, holding the function object, the positional arguments, and the keyword arguments, respectively.
We create the engine instance with `Parallel()`. When it is invoked like a function with the list of tuples as an argument, it actually executes the job specified by each tuple in parallel and collects the results as a list after all jobs are finished. Here we created the `Parallel()` instance with `n_jobs=3`, so there will be three processes running in parallel.
We can also write the tuples directly. Hence the code above can be rewritten as:
```python
result = Parallel(n_jobs=3)((cube, (i,), {}) for i in range(1,1000))
```
The benefit of using `joblib` is that we can run the code with multiple threads instead by simply adding an extra argument:
```python
result = Parallel(n_jobs=3, prefer="threads")(delayed(cube)(i) for i in range(1,1000))
```
This hides all the details of running functions in parallel. We simply use a syntax not too different from a plain list comprehension.
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
Books
- High Performance Python, 2nd edition, by Micha Gorelick and Ian Ozsvald
APIs
- joblib
- multiprocessing in Python standard library
- concurrent.futures in Python standard library
Summary
In this tutorial, you learned how we run Python functions in parallel for speed. In particular, you learned:
- How to use the `multiprocessing` module in Python to create new processes that run a function
- The mechanism of launching and completing a process
- The use of a process pool in `multiprocessing` for controlled multiprocessing, and the counterpart syntax in `concurrent.futures`
- How to use the third-party library `joblib` for multiprocessing
Do you know of any implementation of parallelizing the Differential Evolution algorithm?
I suggest briefly mentioning a bit more the performance benefits of multithreading (e.g., avoiding memory forking), and situations when multiprocessing is not performant and thus a threading-friendly language like Julia, C++, etc. (many of which Python can interop with) would be more appropriate.
Dear Daniel,
First thank you for your tutorial on multiprocessing.
What is the distinction between multiprocessing in python and threading in python?
While multiprocessing and threading aim to have a separate flow of execution, it seems that multiprocessing is conducted in the order of execution, while the order of thread execution is determined by the computer’s OS.
Could you do a tutorial on threading such that you can compare and contrast the outcomes.
Thank you,
Anthony of Sydney
Sorry to say, you have misunderstood how threads and processes run. Neither is deterministic; in both cases it is the OS scheduler that controls which process runs and which thread within a process gets the CPU. The real difference between multiprocessing and multithreading is whether the "context" is shared. Each process gets a separate piece of memory, but all threads in a process share the same memory. Hence you must care about race conditions when accessing a variable in multithreading. But then you may ask how two processes can communicate if they cannot see the variables on the other side.
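To answer that last question: processes communicate by message passing rather than shared variables. A minimal sketch with `multiprocessing.Queue` (not covered in the post above; the `worker` function is made up for illustration):

```python
import multiprocessing

def worker(q):
    # Send a message to the parent process through the queue
    q.put('hello from the child process')

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    msg = q.get()  # blocks until the child puts a message
    p.join()
    print(msg)  # hello from the child process
```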
Dear Dr Adrian,
Thank you,
Nevertheless, a tutorial on multithreading would help plus a comparison between multithreading and multiprocessing in the tutorial could assist.
Many thanks
Anthony of Sydney
Hi,
I ran the cube example using both joblib and multiprocessing but for me, multiprocessing is always slower than simply calling the function in a for loop. Am I doing something wrong or can you explain why this is happening?
Hi Taaresh…Please elaborate on the characteristics of your input data so that we may better assist you.