Blocking and Non-blocking Communication in MPI

In parallel computing with MPI (Message Passing Interface), communication between processes plays a crucial role in achieving efficient parallelization of algorithms. Two common approaches to communication are blocking and non-blocking communication.

Blocking Communication

Blocking communication halts a process's execution until the communication operation is complete. In MPI, blocking functions such as comm.send() and comm.recv() do not return until it is safe to proceed: the send buffer can be reused, and the received message is available, respectively. Blocking communication is often used when processes need to synchronize their execution or when the sender and receiver must coordinate closely. While it simplifies program logic and synchronization, it can become a performance bottleneck if processes spend significant time waiting for communication to complete. Let's look at the code below:

from mpi4py import MPI

comm = MPI.COMM_WORLD    # communicator containing all processes
rank = comm.Get_rank()   # this process's id within the communicator

if rank == 0:
    data = {'a': 7, 'b': 3.14}
    comm.send(data, dest=1, tag=11)       # blocks until the send buffer can be reused
elif rank == 1:
    data = comm.recv(source=0, tag=11)    # blocks until the message arrives

Explanation

  • Import MPI: The code begins by importing the MPI module from the mpi4py library, which provides MPI functionality for Python programs.
  • Initialize MPI Communicator: The code initializes the MPI communicator comm representing all processes participating in the computation.
  • Get Rank: Each process in the communicator obtains its rank using comm.Get_rank() to determine its identity in the communicator.
  • Conditional Execution: Depending on the rank of the process:
    • If the rank is 0:
      • Create a Python dictionary data containing some sample data.
      • Use comm.send() to send the data to process 1 (dest=1) with a specified tag (tag=11).
    • If the rank is 1:
      • Use comm.recv() to receive data from process 0 (source=0) with the specified tag (tag=11). The received data is stored in the data variable.
  • Blocking Communication: Both comm.send() and comm.recv() are blocking operations: comm.send() does not return until the outgoing message has been safely handed off (MPI may buffer small messages internally), and comm.recv() does not return until the message has actually arrived.
  • Data Transfer: In this program, the dictionary data is sent from process 0 to process 1 using blocking communication. Process 1 waits to receive the data sent by process 0 before continuing its execution.
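The blocking hand-off above requires a running MPI job (e.g. launched with mpirun), so as a standalone illustration of the same semantics, here is a hedged analogy using Python's standard library: a bounded queue.Queue between two threads, where put() and get() block much like comm.send() and comm.recv() do between processes. This is only an analogy, not MPI; the thread names and the received dictionary are illustrative choices, not part of the original program.

```python
import queue
import threading

# A capacity-1 queue: put() blocks when full, get() blocks when empty,
# loosely mirroring the blocking send/recv hand-off between two MPI ranks.
channel = queue.Queue(maxsize=1)
received = {}

def sender():
    data = {'a': 7, 'b': 3.14}
    channel.put(data)            # blocks until the receiver has room (like comm.send)

def receiver():
    received['data'] = channel.get()   # blocks until a message arrives (like comm.recv)

t_send = threading.Thread(target=sender)
t_recv = threading.Thread(target=receiver)
t_recv.start()
t_send.start()
t_send.join()
t_recv.join()
print(received['data'])   # {'a': 7, 'b': 3.14}
```

The key point the analogy preserves is that neither side proceeds past its communication call until the hand-off can complete.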

Non-blocking Communication

Non-blocking communication, on the other hand, allows processes to continue their execution immediately after initiating communication operations, without waiting for the operations to complete. In MPI, non-blocking functions like comm.isend() and comm.irecv() return a request object immediately, enabling processes to overlap computation with communication. Non-blocking communication is particularly useful when processes can perform useful work while communication is in progress; this overlap can improve overall performance and scalability in parallel applications. Let's look at the code below:

from mpi4py import MPI

comm = MPI.COMM_WORLD    # communicator containing all processes
rank = comm.Get_rank()   # this process's id within the communicator

if rank == 0:
    data = {'a': 7, 'b': 3.14}
    req = comm.isend(data, dest=1, tag=11)   # initiate the send; returns immediately
    req.wait()                               # block here until the send completes
elif rank == 1:
    req = comm.irecv(source=0, tag=11)       # initiate the receive; returns immediately
    data = req.wait()                        # block until the message arrives; returns the data

Explanation

  • Import MPI: Similar to the blocking communication program, this code starts by importing the MPI module from the mpi4py library.
  • Initialize MPI Communicator and Get Rank: The MPI communicator comm is initialized, and the rank of the process is obtained using comm.Get_rank().
  • Conditional Execution: Depending on the rank of the process:
    • If the rank is 0:
      • Create a Python dictionary data containing some sample data.
      • Use comm.isend() to initiate the non-blocking sending of the data to process 1 (dest=1) with a specified tag (tag=11). The request object req is returned.
      • Wait for the completion of the send operation using req.wait().
    • If the rank is 1:
      • Use comm.irecv() to initiate the non-blocking receiving of data from process 0 (source=0) with the specified tag (tag=11). The request object req is returned.
      • Wait for the completion of the receive operation using req.wait(). The received data is stored in the data variable.
  • Non-blocking Communication: In contrast to blocking communication, non-blocking communication operations (comm.isend() and comm.irecv()) do not block the execution of the process. Instead, they return a request object immediately, allowing the process to perform other tasks while the communication operation progresses asynchronously.
  • Data Transfer: Similarly, the dictionary data is sent from process 0 to process 1, but this time using non-blocking communication. Process 1 initiates the receive operation and waits for the data to be received asynchronously.
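The benefit of the request-object pattern is easiest to see outside MPI as well. As a hedged, standalone analogy (not MPI itself), concurrent.futures gives the same shape: submit() returns a Future immediately, like comm.irecv() returning a request, and result() plays the role of req.wait(). The fetch_data function and the sleep standing in for network latency are illustrative assumptions.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch_data():
    time.sleep(0.1)              # stand-in for communication latency
    return {'a': 7, 'b': 3.14}

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(fetch_data)            # returns immediately, like comm.irecv()
    partial = sum(i * i for i in range(1000))   # useful local work overlapped with the "transfer"
    data = future.result()                      # like req.wait(): block only when the data is needed

print(partial, data)
```

The local computation runs while the "communication" is still in flight; the process only blocks at the final result() call, which is exactly the overlap that comm.isend()/comm.irecv() plus req.wait() enable in a real MPI job.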

Material

Download the example programs (code) covering mpi4py.
