Massively parallel processing

Overview
In the 1990s, massively parallel processing (MPP) represented a revolutionary new computer design. Most traditional computers had one computational processor, and traditional computer development had focused on making this processor faster and more efficient. However, the potential for continued increases in speed was reaching the limits imposed by the physical properties of the materials used to build the processor.

MPP promised speeds far surpassing those of vector supercomputers by breaking computational problems into many separate parts and having a large number of processors tackle those parts simultaneously. Speed is achieved largely through the sheer number of processors operating simultaneously, rather than through any exceptional power in each processor. In fact, many MPP designs used commercial, off-the-shelf processors, such as those found in personal computers or scientific workstations, and could include hundreds or even thousands of them.

Most MPP designs are intended to be scalable; that is, the machines function effectively in configurations that range from a small number of processors to a very large number of processors. While the number of processors may vary, the system's basic architecture and system software remain constant. Thus these machines can be tailored to match a wide variety of computing demands.

The concept of massive parallelism can be implemented in many different ways. Efficient methods must be developed to break up large computational problems and assign tasks to individual processors. Likewise, new methods must be devised for efficiently managing communications among the processors.
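The decomposition described above — splitting a large problem into parts, assigning each part to a separate processor, and combining the results — can be sketched in miniature on a single multicore machine. The following is an illustrative sketch only, not an actual MPP system; the function names (`partial_sum`, `parallel_sum_of_squares`) and the choice of summing squares are assumptions made for the example, and Python's `multiprocessing` pool stands in for the hundreds or thousands of processors a real MPP design would use.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker handles its assigned part of the problem independently.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    # Break the large problem into one part per worker.
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Hand the parts to a pool of processes running simultaneously,
    # then combine the partial results into the final answer.
    with Pool(n_workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))  # 332833500
```

Note that `n_workers` mirrors the scalability point made earlier: the same decomposition logic runs unchanged whether the pool holds four workers or four hundred, with only the degree of parallelism varying.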

Source

 * High-Performance Computing: Advanced Research Projects Agency Should Do More to Foster Program Goals, at 12-13.