Parallel Computing, Parallelism
Running calculations from a given computational job across many resources at once, accelerating their completion by allowing them to run “in parallel.” HPC enables parallel computing by providing the resources that large-scale parallel computing needs to run efficiently. There are two primary paradigms for parallel computing in HPC: embarrassingly parallel and MPI. Although both accomplish parallel computing, they suit vastly different workloads and often involve very different toolsets and methods when implemented to accelerate a given workload.
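As a minimal sketch of the embarrassingly parallel paradigm (the MPI paradigm, by contrast, involves explicit message passing between cooperating processes, typically via an MPI library), consider independent tasks distributed across worker processes. The task function `simulate` and its workload here are hypothetical stand-ins, using Python's standard `concurrent.futures` module:

```python
from concurrent.futures import ProcessPoolExecutor

def simulate(seed):
    """Stand-in for one independent task (hypothetical workload)."""
    # A trivial deterministic computation; a real job might run a
    # simulation, render a frame, or process one input file.
    return sum(i * seed for i in range(1000))

def run_all(seeds):
    # Each task is fully independent, so tasks can run on separate
    # cores (or separate nodes under a scheduler) with no
    # communication between them -- "embarrassingly parallel."
    with ProcessPoolExecutor() as pool:
        return list(pool.map(simulate, seeds))

if __name__ == "__main__":
    print(run_all(range(4)))
```

Because no task depends on another's result, adding more workers lets more tasks run simultaneously with essentially no coordination overhead.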
How much parallelization speeds up a given application depends on the fraction of the application’s overall runtime spent in the parallelized portions; the remaining serial portion caps the achievable speedup, a relationship formalized by Amdahl’s law.
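This limit can be sketched numerically; the function below is an illustrative implementation of Amdahl’s law, where `parallel_fraction` is the share of runtime that benefits from parallelization and `workers` is the number of parallel resources:

```python
def amdahl_speedup(parallel_fraction, workers):
    """Predicted overall speedup when `parallel_fraction` of the
    runtime is spread across `workers` resources (Amdahl's law)."""
    serial = 1.0 - parallel_fraction          # portion that cannot be parallelized
    return 1.0 / (serial + parallel_fraction / workers)

# If 90% of the runtime is parallelizable, even unlimited workers
# cannot push the overall speedup past 10x.
print(amdahl_speedup(0.9, 10))     # modest speedup with 10 workers
print(amdahl_speedup(0.9, 10**9))  # approaches the 10x ceiling
```

The example shows why profiling matters before parallelizing: a small serial portion quickly dominates once the parallel portion has been accelerated.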