Multiprocessing in Python

When you work on a computer vision project, you probably need to preprocess a lot of image data. This is time-consuming, and it would be great if you could process multiple images in parallel. Multiprocessing is the ability of a system to run multiple processes at the same time. If you had a computer with a single processor, it would switch between multiple processes to keep all of them running. However, most computers today have at least a multi-core processor, allowing several processes to be executed truly simultaneously. The Python multiprocessing module is a tool for you to increase your scripts’ efficiency by allocating tasks to different processes.

After completing this tutorial, you will know:

  • Why we would want to use multiprocessing
  • How to use basic tools in the Python multiprocessing module

Kick-start your project with my new book Python for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.

Let’s get started.

Multiprocessing in Python
Photo by Thirdman. Some rights reserved.

Overview

This tutorial is divided into four parts; they are:

  • Benefits of multiprocessing
  • Basic multiprocessing
  • Multiprocessing for real use
  • Using joblib

Benefits of Multiprocessing

You may ask, “Why Multiprocessing?” Multiprocessing can make a program substantially more efficient by running multiple tasks in parallel instead of sequentially. A similar term is multithreading, but they are different.

A process is a program loaded into memory to run and does not share its memory with other processes. A thread is an execution unit within a process. Multiple threads run in a process and share the process’s memory space with each other.

Python’s Global Interpreter Lock (GIL) only allows one thread to run at a time under the interpreter, which means you can’t enjoy the performance benefit of multithreading if the Python interpreter is required. This is what gives multiprocessing an upper hand over threading in Python. Multiple processes can run in parallel because each process has its own interpreter that executes the instructions allocated to it. Also, the OS sees your program as multiple processes and schedules them separately, i.e., your program gets a larger share of computer resources in total. So, multiprocessing is faster when the program is CPU-bound. In cases where there is a lot of I/O in your program, threading may be more efficient because most of the time, your program is waiting for the I/O to complete. But for CPU-bound workloads, multiprocessing is generally the more efficient choice because the work truly runs in parallel.

Basic Multiprocessing

Let’s use the Python Multiprocessing module to write a basic program that demonstrates how to do concurrent programming.

Let’s look at this function, task(), that sleeps for 0.5 seconds and prints before and after the sleep:
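A minimal sketch of such a function (the exact print messages here are illustrative):

```python
import time

def task():
    print("Sleeping for 0.5 seconds")
    time.sleep(0.5)    # simulate a slow job
    print("Finished sleeping")
```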

To create a process, we simply say so using the multiprocessing module:
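A sketch of what this might look like, assuming the task() function described above:

```python
import multiprocessing
import time

def task():
    print("Sleeping for 0.5 seconds")
    time.sleep(0.5)
    print("Finished sleeping")

# Each Process object wraps one run of task()
p1 = multiprocessing.Process(target=task)
p2 = multiprocessing.Process(target=task)
```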

The target argument to Process() specifies the function that the process runs. However, the processes do not run until we start them:

A complete concurrent program would be as follows:

We must fence our main program under if __name__ == "__main__", or otherwise the multiprocessing module will complain. This safety construct guarantees that when a new process imports the script, it does not re-run the process-creation code and spawn further sub-processes recursively.

However, there is a problem with the code, as the program timer is printed before the processes we created are even executed. Here’s the output for the code above:
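Assuming task() prints "Sleeping for 0.5 seconds" before and "Finished sleeping" after its sleep, a run produces something like the following (the exact timing figure is illustrative and will vary):

```
Program finished in 0.0023 seconds
Sleeping for 0.5 seconds
Sleeping for 0.5 seconds
Finished sleeping
Finished sleeping
```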

We need to call the join() function on the two processes to make the main process wait for them before the time is printed. This is because three processes are going on: p1, p2, and the main process. The main process is the one that keeps track of the time and prints the time taken to execute. We should make sure the line computing finish_time runs no earlier than the processes p1 and p2 are finished. We just need to add this snippet of code immediately after the start() function calls:

The join() function makes the calling process wait until the processes on which it is called are complete. Here’s the output with the join statements added:
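Again assuming the print messages above, the output now resembles the following, with the total time just over the half-second sleep (the timing figure is illustrative):

```
Sleeping for 0.5 seconds
Sleeping for 0.5 seconds
Finished sleeping
Finished sleeping
Program finished in 0.52 seconds
```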

With similar reasoning, we can make more processes run. The following is the complete code modified from above to have 10 processes:


Multiprocessing for Real Use

Starting a new process and then joining it back to the main process is how multiprocessing works in Python (as in many other languages). The reason we want to run multiprocessing is probably to execute many different tasks concurrently for speed. It can be an image processing function, which we need to do on thousands of images. It can also be to convert PDFs into plaintext for the subsequent natural language processing tasks, and we need to process a thousand PDFs. Usually, we will create a function that takes an argument (e.g., filename) for such tasks.

Let’s consider a function:
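For illustration, a trivial CPU-bound function named cube, as the text below assumes (the body here is a minimal sketch):

```python
def cube(x):
    return x ** 3
```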

If we want to run it with arguments 1 to 1,000, we can create 1,000 processes and run them in parallel:

However, this will not work, as you probably have only a handful of cores in your computer. Running 1,000 processes creates too much overhead and overwhelms the capacity of your OS. It may also exhaust your memory. The better way is to run a process pool to limit the number of processes that can run at a time:

The argument for multiprocessing.Pool() is the number of processes to create in the pool. If omitted, Python will make it equal to the number of cores you have in your computer.

We use the apply_async() function to pass the arguments to the function cube in a list comprehension. This creates tasks for the pool to run. It is called “async” (asynchronous) because we don’t wait for the task to finish, and the main process may continue to run. Therefore, the apply_async() function does not return the result but an object on which we can call get() to wait for the task to finish and retrieve the result. Since we collect the results in a list comprehension, their order corresponds to the order of the arguments we used to create the asynchronous tasks. However, this does not mean the processes are started or finished in this order inside the pool.

If you think writing lines of code to start processes and join them is too explicit, you can consider using map() instead:

We don’t have the start and join here because they are hidden behind the pool.map() function. What it does is split the iterable range(1, 1000) into chunks and run each chunk in the pool. The map function is a parallel version of the list comprehension:
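That is, the serial equivalent is just:

```python
def cube(x):
    return x ** 3

cubes = [cube(x) for x in range(1, 1000)]
```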

But the modern-day alternative is to use map from concurrent.futures, as follows:

This code is running the multiprocessing module under the hood. The beauty of doing so is that we can change the program from multiprocessing to multithreading by simply replacing ProcessPoolExecutor with ThreadPoolExecutor. Of course, you have to consider whether the global interpreter lock is an issue for your code.

Using joblib

The package joblib is a set of tools to make parallel computing easier. It is a common third-party library for multiprocessing. It also provides caching and serialization functions. To install the joblib package, use the command in the terminal:
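```shell
pip install joblib
```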

We can convert our previous example into the following to use joblib:
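A sketch of the joblib version, assuming the same cube function:

```python
from joblib import Parallel, delayed

def cube(x):
    return x ** 3

# Run the jobs on 3 worker processes and collect the results in order
results = Parallel(n_jobs=3)(delayed(cube)(x) for x in range(1, 1000))
print(results[:5])
```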

Indeed, it is intuitive to see what it does. The delayed() function is a wrapper around another function that creates a “delayed” version of the function call, meaning it will not execute the function immediately when it is called.

Then we call the delayed function multiple times with different sets of arguments we want to pass to it. For example, when we give integer 1 to the delayed version of the function cube, instead of computing the result, we produce a tuple, (cube, (1,), {}) for the function object, the positional arguments, and keyword arguments, respectively.

We created the engine instance with Parallel(). When it is invoked like a function with the list of tuples as an argument, it will actually execute the job as specified by each tuple in parallel and collect the result as a list after all jobs are finished. Here we created the Parallel() instance with n_jobs=3, so there will be three processes running in parallel.

We can also write the tuples directly. Hence the code above can be rewritten as:
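A sketch spelling each job out as a (function, args, kwargs) tuple, matching the tuple format described above:

```python
from joblib import Parallel

def cube(x):
    return x ** 3

# Each job is a (function, positional args, keyword args) tuple
results = Parallel(n_jobs=3)((cube, (x,), {}) for x in range(1, 1000))
```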

The benefit of using joblib is that we can run the code in multithread by simply adding an additional argument:
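For example, a sketch that switches to threads with the prefer argument:

```python
from joblib import Parallel, delayed

def cube(x):
    return x ** 3

# prefer="threads" runs the jobs in threads instead of processes
results = Parallel(n_jobs=3, prefer="threads")(
    delayed(cube)(x) for x in range(1, 1000))
```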

And this hides all the details of running functions in parallel. We simply use a syntax not too much different from a plain list comprehension.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Books

APIs

Summary

In this tutorial, you learned how we run Python functions in parallel for speed. In particular, you learned:

  • How to use the multiprocessing module in Python to create new processes that run a function
  • The mechanism of launching and completing a process
  • The use of process pool in multiprocessing for controlled multiprocessing and the counterpart syntax in concurrent.futures
  • How to use the third-party library joblib for multiprocessing
