Abstract
We present mpi4py.futures, a lightweight, asynchronous task execution framework targeting the Python programming language and using the Message Passing Interface (MPI) for interprocess communication. mpi4py.futures follows the interface of the concurrent.futures package from the Python standard library and can be used as its drop-in replacement, while allowing applications to scale over multiple compute nodes. We discuss the design, implementation, and feature set of mpi4py.futures and compare its performance to other solutions on both shared and distributed memory architectures. On a shared-memory system, we show mpi4py.futures to consistently outperform Python's concurrent.futures with speedup ratios between 1.4X and 3.7X in throughput (tasks per second) and between 1.9X and 2.9X in bandwidth. On a Cray XC40 system, we compare mpi4py.futures to Dask, a well-known Python parallel computing package. Although we note more varied results, we show mpi4py.futures to outperform Dask in most scenarios.
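As context for the drop-in-replacement claim above, a minimal usage sketch (not taken from the paper): MPIPoolExecutor is the executor class provided by mpi4py.futures, while the square function, the argument values, and the worker count are illustrative assumptions.

```python
"""Minimal sketch of the drop-in usage described in the abstract.

Assumes mpi4py is installed on top of a working MPI; the function and
argument names below are illustrative, not taken from the paper.
"""
from mpi4py.futures import MPIPoolExecutor


def square(x):
    return x * x


if __name__ == "__main__":
    # MPIPoolExecutor follows the concurrent.futures.Executor interface:
    # submit() returns a Future object and map() mirrors the builtin map().
    with MPIPoolExecutor(max_workers=4) as executor:
        future = executor.submit(square, 3)
        print(future.result())                       # 9
        print(list(executor.map(square, range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Replacing the import with concurrent.futures.ProcessPoolExecutor recovers the single-node standard-library behavior; under MPI the script is typically launched with mpiexec, for example `mpiexec -n 5 python -m mpi4py.futures script.py` to run one manager and four worker processes (see the mpi4py documentation for the launch options supported by a given MPI implementation).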
Original language | English (US)
---|---
Pages (from-to) | 611-622
Number of pages | 12
Journal | IEEE Transactions on Parallel and Distributed Systems
Volume | 34
Issue number | 2
DOIs |
State | Published - Feb 1 2023
Keywords
- distributed computing
- high performance computing
- master-worker
- MPI
- multiprocessing
- parallel programming models
- parallelism
- Python
- task execution
ASJC Scopus subject areas
- Signal Processing
- Hardware and Architecture
- Computational Theory and Mathematics