MPI (Message Passing Interface)

May 31, 2023

What Is MPI?

Message Passing Interface (MPI) is a standardized and portable message-passing library used in parallel computing, particularly for high-performance computing (HPC) applications. It was developed to facilitate communication between different processes running on a distributed memory system, such as clusters and supercomputers. The goal of MPI is to provide an efficient, flexible, and consistent method of exchanging data and synchronizing tasks across a wide range of architectures and platforms.

MPI includes a comprehensive set of functions and routines that allow developers to manage communication between processes. These functions include point-to-point communication, collective operations, data types, process topologies, and parallel I/O, among others. MPI is widely used in various fields, including computational science, engineering, and big data analytics, as it provides a consistent and efficient way to build parallel applications that can scale to thousands of processing units. There are several implementations of MPI available.
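To illustrate the basic structure of an MPI program, here is a minimal sketch in C. It assumes an MPI installation providing mpi.h and the standard MPI-1 routines; every MPI program follows this init/query/finalize pattern.

```c
/* Minimal MPI program: initialize, identify this process, finalize. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);               /* start the MPI runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id (0..size-1) */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down the MPI runtime */
    return 0;
}
```

Each process runs the same program; the rank returned by MPI_Comm_rank() is what lets different processes take different roles.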

MPI Communication Modes

MPI communication modes determine the way in which data is exchanged between processes. They can impact the performance and behavior of a parallel application. Here are common MPI communication modes:

Synchronous mode:

The send does not complete until the matching receive has begun, so its completion guarantees that the receiver has started taking the message. Example function: MPI_Ssend().
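A short sketch of a synchronous send, assuming an MPI installation and at least two processes; rank 0's MPI_Ssend() cannot complete until rank 1's receive has started.

```c
/* Sketch: synchronous send from rank 0 to rank 1 (run with >= 2 processes). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = 42;
    if (rank == 0) {
        /* does not complete until rank 1 has begun receiving */
        MPI_Ssend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```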

Asynchronous (non-blocking) mode:

The sending or receiving call returns immediately without waiting for completion, allowing computation to overlap with communication; completion is checked later with MPI_Wait() or MPI_Test(). Example functions: MPI_Isend() and MPI_Irecv().
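The overlap of communication and computation can be sketched as follows; for simplicity this assumes exactly two processes exchanging one integer.

```c
/* Sketch: non-blocking exchange with MPI_Isend/MPI_Irecv (run with 2 processes). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int sendbuf = rank, recvbuf = -1;
    int partner = (rank == 0) ? 1 : 0;   /* assumes exactly 2 ranks */
    MPI_Request reqs[2];

    /* both calls return immediately; the buffers must not be reused
       until MPI_Waitall confirms completion */
    MPI_Isend(&sendbuf, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&recvbuf, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... useful computation could run here, overlapped with the transfer ... */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    printf("rank %d received %d\n", rank, recvbuf);

    MPI_Finalize();
    return 0;
}
```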

Buffered mode:

MPI copies the outgoing message into a user-attached buffer, so the send returns as soon as the copy is made, without waiting for the receiver. Example function: MPI_Bsend(), used together with MPI_Buffer_attach().
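A sketch of buffered sending; the application supplies the buffer, which must include MPI_BSEND_OVERHEAD bytes of bookkeeping space per message.

```c
/* Sketch: buffered send; the user attaches the buffer MPI copies into. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* buffer must hold the message plus MPI_BSEND_OVERHEAD bytes */
    int bufsize = sizeof(int) + MPI_BSEND_OVERHEAD;
    char *buf = malloc(bufsize);
    MPI_Buffer_attach(buf, bufsize);

    int value = 7;
    if (rank == 0) {
        /* returns once the message is copied into the attached buffer */
        MPI_Bsend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Buffer_detach(&buf, &bufsize); /* waits for buffered messages to drain */
    free(buf);
    MPI_Finalize();
    return 0;
}
```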

Standard mode:

MPI chooses how the send completes: small messages are typically buffered and delivered eagerly, while large ones may synchronize with the receiver (a rendezvous), depending on the implementation and message size. This offers a balance between performance and simplicity. Example function: MPI_Send().

Ready mode:

The sender asserts that a matching receive has already been posted, which can improve performance by skipping a handshake in predictable scenarios; the program is erroneous if the receive is not posted. Example functions: MPI_Rsend() and MPI_Irsend().
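A sketch of the discipline ready mode requires: the receiver posts its receive first, a barrier guarantees the posting has happened, and only then does the sender call MPI_Rsend().

```c
/* Sketch: ready send is legal only after the matching receive is posted. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = 0;
    MPI_Request req;
    if (rank == 1)  /* post the receive before the sender is released */
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);

    MPI_Barrier(MPI_COMM_WORLD); /* ensures the receive is posted first */

    if (rank == 0) {
        value = 99;
        MPI_Rsend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```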

Key Features of MPI

MPI's key features include:

- Point-to-point communication for direct data exchange between two processes
- Collective communication for group-wide data exchange and synchronization
- Process groups and communicators for organizing and managing subsets of processes
- Derived data types for handling complex, non-contiguous data structures
- Process topologies for mapping processes to specific geometries
- Parallel I/O for concurrent file access
- Dynamic process management for creating and controlling processes at runtime

MPI's portability and performance optimizations across various platforms and architectures make it a popular choice for high-performance computing applications in diverse fields.
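Collective communication, one of the features above, can be sketched briefly: a broadcast pushes a value from one rank to all, and a reduction combines values from all ranks onto one.

```c
/* Sketch: two collective operations, broadcast and sum-reduction. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int n = 0;
    if (rank == 0) n = 100;
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* every rank now has n == 100 */

    int local = rank + 1, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of 1..%d = %d\n", size, total); /* equals size*(size+1)/2 */

    MPI_Finalize();
    return 0;
}
```

Note that every process in the communicator must call the collective; unlike point-to-point sends, there is no sender/receiver asymmetry in the call itself.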

MPI Implementations

MPI implementations refer to the specific software libraries that provide the functionality defined by the MPI standard. These libraries allow developers to create parallel applications that can run on various platforms and architectures. There are several MPI implementations available, with some of the most widely used ones being:

Open MPI:

Open MPI is an open source implementation of the MPI standard, designed to be high-performance, flexible, and easy to use. It is a collaborative project between multiple research institutions and organizations, and it supports a wide range of platforms, interconnects, and operating systems. Open MPI is actively maintained and regularly updated to incorporate new features and optimizations.
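As a hypothetical build-and-run session (file names are illustrative), Open MPI and the other implementations below provide a wrapper compiler and a launcher:

```shell
# Compile against the MPI library using the wrapper compiler
mpicc hello.c -o hello

# Launch 4 processes of the program
mpirun -np 4 ./hello
```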

MPICH:

MPICH is another popular open source implementation of MPI, originally developed by the Argonne National Laboratory. It focuses on providing a high-quality, efficient, and portable MPI library that can be used on various platforms, including clusters, supercomputers, and multi-core systems. MPICH is widely used in both research and production environments and serves as the foundation for several other MPI implementations, such as Microsoft MPI and Intel MPI.

Intel MPI:

Intel MPI is a proprietary implementation of the MPI standard developed by Intel Corporation. It is designed to deliver high performance, especially on Intel-based architectures, by taking advantage of Intel-specific hardware features and optimizations. Intel MPI is compatible with both MPICH and Open MPI and provides additional performance tuning and profiling tools for parallel application development.