Massively parallel computers (MPCs) represent a significant departure from traditional sequential processing architectures. They achieve high computational throughput by employing a large number of processors working concurrently on different parts of a problem. This approach is particularly well-suited for computationally intensive tasks that can be effectively divided into smaller, independent sub-tasks.
There are several architectural models for implementing massively parallel computing. These models differ in how processors are organized and how they communicate.
In a shared memory multiprocessor, all processors have access to a common memory space. This makes communication and data sharing between processors straightforward, but it can also lead to contention for memory access, which limits performance. Cache coherence protocols are used to keep each processor's cached copies of data consistent. A minimal threaded sketch follows the table below.
| Feature | Description |
|---|---|
| Memory Space | Shared by all processors |
| Communication | Direct memory access |
| Complexity | Higher, due to cache coherence |
| Scalability | Limited by memory contention |
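To make the shared memory model concrete, here is a minimal C sketch using POSIX threads (the names `partial_sum`, `NUM_THREADS`, and the data values are illustrative, not from any particular system). All threads read the same array directly, and a mutex serializes updates to the shared total, a small-scale analogue of the contention described above.

```c
/* Hypothetical sketch: four threads each sum one quarter of a shared
 * array, then combine their partial results under a mutex. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NUM_THREADS 4

static double data[N];      /* shared memory: visible to all threads */
static double total = 0.0;  /* shared accumulator */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *partial_sum(void *arg) {
    long id = (long)arg;
    long chunk = N / NUM_THREADS;
    double local = 0.0;
    /* Each thread reads its own slice of the shared array directly. */
    for (long i = id * chunk; i < (id + 1) * chunk; i++)
        local += data[i];
    /* Contention point: updates to the shared total must be serialized. */
    pthread_mutex_lock(&lock);
    total += local;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    for (long i = 0; i < N; i++)
        data[i] = 1.0;
    pthread_t threads[NUM_THREADS];
    for (long t = 0; t < NUM_THREADS; t++)
        pthread_create(&threads[t], NULL, partial_sum, (void *)t);
    for (long t = 0; t < NUM_THREADS; t++)
        pthread_join(threads[t], NULL);
    printf("total = %f\n", total);  /* expected: 1000000.0 */
    return 0;
}
```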
In a distributed memory multiprocessor, each processor has its own local memory, and processors communicate by sending messages over a network. This architecture is more scalable than shared memory systems because it avoids contention for a common memory, but communication overhead can become a significant performance bottleneck. The Message Passing Interface (MPI) is the standard commonly used for this style of communication; a minimal sketch follows the table below.
| Feature | Description |
|---|---|
| Memory Space | Local to each processor |
| Communication | Message passing |
| Complexity | Lower than shared memory |
| Scalability | High |
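The following minimal MPI sketch in C illustrates the message passing style: each process computes a partial sum over its own local data, and an explicit collective operation (`MPI_Reduce`) gathers the results on one rank. The data values are illustrative; compile with an MPI wrapper such as `mpicc` and launch with `mpirun`.

```c
/* Hypothetical sketch: each rank computes a partial sum over its own
 * local data; MPI_Reduce combines the per-rank results on rank 0. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process owns its data; there is no shared address space. */
    double local = 0.0;
    for (int i = 0; i < 1000; i++)
        local += 1.0;

    /* Communication happens only through explicit messages:
     * MPI_Reduce sums every rank's partial result onto rank 0. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total across %d ranks = %f\n", size, total);

    MPI_Finalize();
    return 0;
}
```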
Hybrid multiprocessors combine elements of both shared and distributed memory architectures. They offer a balance between performance and scalability. For example, a hybrid system might have multiple nodes, each of which is a shared memory multiprocessor, and these nodes are connected by a network.
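A common realization of this hybrid model, sketched below under the assumption of an MPI + OpenMP toolchain (e.g. `mpicc -fopenmp`), uses OpenMP threads to exploit the shared memory within each node and MPI messages between nodes. The loop bounds and data values here are illustrative.

```c
/* Hypothetical sketch of the hybrid model: OpenMP threads within each
 * node's shared memory, MPI message passing between nodes. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Within a node: threads share memory and split the loop. */
    double local = 0.0;
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000000; i++)
        local += 1.0;

    /* Between nodes: explicit message passing combines node totals. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("grand total = %f (up to %d threads per rank)\n",
               total, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}
```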
Developing and programming massively parallel systems presents several challenges:

- Decomposing the problem into sub-tasks that can execute largely independently.
- Synchronizing and coordinating processors, which adds overhead and can introduce race conditions or deadlocks.
- Balancing the load so that no processor sits idle while others are overworked.
- Managing communication overhead, particularly in distributed memory systems.
- Debugging and testing, since parallel programs can behave non-deterministically.
- The limit on achievable speedup when part of the program must run serially, quantified by Amdahl's law below.
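Amdahl's law expresses this last limit: if a fraction $p$ of a program's work can be parallelized across $N$ processors, the overall speedup is

$$
S(N) = \frac{1}{(1 - p) + \frac{p}{N}}
$$

Even with $p = 0.95$, the speedup can never exceed $1/0.05 = 20$, regardless of how many processors are used.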
MPCs are used in a wide range of applications, including:

- Weather forecasting and climate modeling
- Molecular dynamics and computational chemistry simulations
- Computational fluid dynamics and other engineering simulations
- Training large machine learning models
- Large-scale data analysis and cryptanalysis
Figure: Suggested diagram of a distributed memory massively parallel computer architecture.