
Message Passing Interface, MPI

January 19, 2024

A standard that defines a set of operations for intra- and inter-node communication of data between CPU cores. MPI is one of the two primary paradigms for parallel computing in HPC: MPI-based communication allows a single program to spread itself across dozens, hundreds, thousands, or in the largest cases millions of CPU cores, gaining performance by having many cores work together on the same overall set of calculations.

There are a number of implementations of the MPI standard built by different organizations, but at their core they all provide the same set of operations: point-to-point transfers between two processes, transfers from one process to every other process (broadcasts), transfers from every process to every other process (all-to-all exchanges), and so on, giving applications the flexibility to communicate in whatever pattern they need. An example follows below.
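As a minimal sketch (not taken from any particular MPI tutorial), the following C program exercises the three kinds of operations just described: a point-to-point transfer with MPI_Send/MPI_Recv, a broadcast with MPI_Bcast, and an all-to-all exchange with MPI_Alltoall. It would typically be compiled with an MPI wrapper compiler such as mpicc and launched with mpirun or mpiexec.

/* Sketch of basic MPI operation types: point-to-point, broadcast, all-to-all. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's ID (rank) */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    /* Point-to-point: rank 0 sends a single integer to rank 1. */
    if (size >= 2) {
        if (rank == 0) {
            int payload = 42;
            MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int payload = 0;
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", payload);
        }
    }

    /* One-to-all: rank 0 broadcasts a value to every other rank. */
    int shared = (rank == 0) ? 7 : 0;
    MPI_Bcast(&shared, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* All-to-all: every rank sends one integer to every other rank. */
    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++)
        sendbuf[i] = rank * 100 + i;     /* a distinct value per destination */
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

The same program runs unchanged whether the processes share one node or are spread across many; the MPI implementation chooses the underlying transport.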

There are also more exotic forms of MPI (or, in some cases, MPI-like libraries) that provide similar operations for communication directly between GPUs and other devices.
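One common form of this is a "CUDA-aware" MPI build, offered by several implementations, which accepts GPU device pointers directly in its communication calls. The sketch below assumes such a build and at least two ranks; it is an illustration of the idea, not a recipe for any specific implementation.

/* Sketch assuming a CUDA-aware MPI build: a GPU buffer is passed
 * directly to MPI_Send/MPI_Recv, with no explicit staging copy
 * through host memory. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *d_buf;                      /* buffer allocated in GPU memory */
    cudaMalloc((void **)&d_buf, 1024 * sizeof(double));
    /* In a real program, a kernel or cudaMemcpy would fill d_buf first. */

    if (size >= 2) {
        if (rank == 0)
            MPI_Send(d_buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}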