Afzal Badshah, PhD

Shared and Distributed Memory in Parallel Computing

In parallel and distributed computing, memory management becomes crucial when dealing with multiple processors working together. Two prominent approaches exist: shared memory and distributed memory. This tutorial will delve into these concepts, highlighting their key differences, advantages, disadvantages, and applications. Visit the detailed tutorial on Parallel and Distributed Computing.

Shared Memory


Shared memory systems provide a single, unified memory space accessible by all processors in a computer. Imagine a whiteboard where multiple people can write and read simultaneously.

Physically, the memory resides in a central location, accessible by all processors through a high-bandwidth connection like a memory bus. Hardware enforces data consistency, ensuring all processors see the same value when accessing a shared memory location.
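To make the idea concrete, here is a minimal Python sketch in which threads stand in for processors and a plain list stands in for the shared memory; the names shared_data and worker are purely illustrative.

```python
import threading

# A single list acts as the shared memory: every thread reads and
# writes the same object in the same address space.
shared_data = [0] * 4

def worker(slot, value):
    # Each "processor" writes to its own slot of the shared buffer...
    shared_data[slot] = value
    # ...and can immediately read what the others have written.
    print(f"thread {slot} sees {shared_data}")

threads = [threading.Thread(target=worker, args=(i, i * 10)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("final shared state:", shared_data)
```

Because all threads share one address space, no data has to be copied or sent anywhere; the trade-offs of that convenience are discussed below.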

Hardware Mechanisms for Shared Memory

Memory Bus: The shared memory resides in a central location (DRAM) and is connected to all processors via a high-bandwidth memory bus. This bus acts as a critical communication channel, allowing processors to fetch and store data from the shared memory. However, with multiple processors vying for access, the bus can become a bottleneck, limiting scalability.

Cache Coherence: To ensure all processors see the same value when accessing a shared memory location, cache coherence protocols are implemented. These protocols keep each processor's private cache consistent with the central memory and with the other caches. Different protocols, such as snooping and directory-based schemes, strike different trade-offs between performance and complexity.
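The snippet below is a deliberately simplified, hypothetical model of the write-invalidate idea behind such protocols; it is not a faithful implementation of any real scheme (MESI, MOESI, etc.), and the Cache and Memory classes are invented purely for illustration.

```python
class Memory:
    """Stands in for the central DRAM."""
    def __init__(self):
        self.value = 0

class Cache:
    """A toy private cache holding at most one value."""
    def __init__(self, cpu_id, memory, all_caches):
        self.cpu_id = cpu_id
        self.memory = memory
        self.all_caches = all_caches   # peers, used to broadcast invalidations
        self.value = None              # None means "invalid / not cached"

    def read(self):
        if self.value is None:         # miss: fetch the current value from memory
            self.value = self.memory.value
        return self.value

    def write(self, value):
        # Write-invalidate: make every other copy stale before writing,
        # so no processor can later read an out-of-date value.
        for cache in self.all_caches:
            if cache is not self:
                cache.value = None
        self.value = value
        self.memory.value = value      # write-through, for simplicity

memory = Memory()
caches = []
for i in range(3):
    caches.append(Cache(i, memory, caches))

caches[0].write(42)                    # CPU 0 writes; the other caches are invalidated
print([c.read() for c in caches])      # every CPU re-reads and sees [42, 42, 42]
```

Real protocols do this in hardware, per cache line and with more states, but the goal is the same: a write by one processor must not leave stale copies visible to the others.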

Synchronization and Coordination

Shared memory programming offers a simpler model compared to distributed memory, but it’s not without its challenges. Since multiple processors can access and modify shared data concurrently, ensuring data consistency and preventing race conditions is crucial. Programmers need to employ synchronization primitives like locks, semaphores, and monitors to control access to shared resources and coordinate execution between threads. Choosing the appropriate synchronization mechanism depends on the specific needs of the program.
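For example, the short sketch below (plain Python threading; the function names are hypothetical) shows the classic race condition on a shared counter and how a lock restores correctness.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1          # read-modify-write is not atomic, so updates can be lost

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:            # only one thread at a time may update the counter
            counter += 1

def run(target, n=100_000, workers=4):
    global counter
    counter = 0
    threads = [threading.Thread(target=target, args=(n,)) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print("without lock:", run(unsafe_increment))  # may be less than 400000 due to lost updates
print("with lock:   ", run(safe_increment))    # always 400000
```

Semaphores and monitors serve the same purpose at different granularities; the right choice depends on how the shared data is structured and accessed.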

Complexities and Challenges

Beyond synchronization, the main complexity of shared memory is scalability: as more processors are added, contention for the memory bus and the overhead of keeping caches coherent grow, limiting how large such systems can practically become.

Advantages

Communication is fast and implicit, since processors simply read and write the same locations, and the programming model is generally easier to reason about than explicit message passing.

Disadvantages

Scalability is limited by bus contention and cache-coherence overhead, and programmers must carefully synchronize access to shared data to avoid race conditions.

Applications

Shared memory is the norm within a single machine: multi-core processors, symmetric multiprocessing (SMP) servers, and multithreaded programs written with APIs such as OpenMP or POSIX threads.

Distributed Memory


Distributed memory systems consist of independent processors, each with its local private memory. There’s no single shared memory space. Communication between processors happens explicitly by sending and receiving messages.

Processors communicate through a network like Ethernet or a dedicated interconnection network. Software protocols manage data exchange and ensure consistency.
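As a rough illustration, the Python sketch below uses two operating-system processes, each with its own private memory, and a multiprocessing.Pipe standing in for the network; real distributed systems would typically use a message-passing library such as MPI, but the pattern of explicit send and receive is the same.

```python
import multiprocessing as mp

def worker(conn, node_id):
    # This process has its own private memory; data arrives only as messages.
    numbers = conn.recv()                       # receive work from the other "node"
    partial_sum = sum(numbers)
    conn.send((node_id, partial_sum))           # send the result back explicitly
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = mp.Pipe()
    p = mp.Process(target=worker, args=(child_conn, 1))
    p.start()

    parent_conn.send(list(range(10)))           # "node 0" sends data to "node 1"
    node_id, result = parent_conn.recv()        # and waits for the reply
    p.join()
    print(f"node {node_id} computed {result}")  # node 1 computed 45
```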

Hardware Mechanisms for Distributed Memory

In distributed memory systems, there’s no single hardware mechanism for memory management since each processor has its own private memory. The hardware focus shifts towards enabling communication and interaction between these independent memory spaces. Here’s a breakdown of the key hardware components involved:

Processors: Each node in the distributed system consists of a processor (CPU) with its own local memory (DRAM) for storing program instructions and data. These processors are responsible for executing the distributed program and managing their local memory.

Network Interface Controller (NIC): Each processor is equipped with a Network Interface Controller (NIC). This hardware component acts as the communication bridge between the processor and the network. It facilitates sending and receiving messages containing data or instructions to and from other processors in the system.

Interconnection Network: The processors are interconnected through a dedicated network. This network allows processors to exchange messages with each other. Common network topologies used in distributed memory systems include bus, ring, mesh, torus, hypercube, and switched fat-tree networks, each trading cost against bandwidth and latency.
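Putting these pieces together, the sketch below simulates four such nodes: each works only on the chunk of data it was sent and reports its partial result back to a coordinating process through a multiprocessing.Queue. The node/coordinator structure is purely illustrative of the message-passing pattern, not a model of any particular network topology.

```python
import multiprocessing as mp

def node(node_id, chunk, result_queue):
    # Each node computes using only its own private data; nothing is shared.
    partial = sum(x * x for x in chunk)
    result_queue.put((node_id, partial))        # communicate the result as a message

if __name__ == "__main__":
    data = list(range(16))
    chunks = [data[i::4] for i in range(4)]     # split the work across 4 "nodes"
    result_queue = mp.Queue()

    procs = [mp.Process(target=node, args=(i, chunks[i], result_queue))
             for i in range(4)]
    for p in procs:
        p.start()

    # The coordinator gathers the partial results arriving over the "network".
    total = sum(result_queue.get()[1] for _ in range(4))
    for p in procs:
        p.join()

    print("sum of squares:", total)             # 1240
```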

Advantages

Distributed memory scales to very large processor counts because each node uses its own local memory, avoiding any central bottleneck, and systems can grow incrementally by adding commodity nodes.

Disadvantages

Programmers must manage all communication explicitly, and a message sent over the network is far slower than a local memory access, so performance depends heavily on minimizing and overlapping communication.

Applications

Distributed memory underpins compute clusters, supercomputers, and large-scale data-processing systems, typically programmed with message-passing libraries such as MPI.

Choosing Between Shared and Distributed Memory

The choice between shared and distributed memory depends on several factors: the scale of the problem (distributed memory scales to far more processors), how frequently and finely tasks need to share data (frequent, fine-grained sharing favours shared memory), how much programming effort is acceptable (explicit message passing is harder to write and debug), and the available hardware and budget (a large SMP machine versus a cluster of commodity nodes).

Shared and distributed memory represent two fundamental approaches to memory management in parallel and distributed systems. Understanding their strengths and weaknesses is crucial for BSCS students venturing into parallel programming. By carefully considering the problem requirements, students can make informed decisions about which memory architecture best suits their needs.
