Point to Point Communication in MPI

MPI (Message Passing Interface) is a standardized and widely used message-passing system for parallel computing. It allows processes running on different nodes of a parallel system to communicate with each other. MPI is available in several programming languages, including C, C++, and Python. In this tutorial, we’ll focus on using MPI in Python, specifically with the mpi4py library; a more detailed tutorial on MPI with Python can be found here.

Availability of MPI

MPI is available in multiple programming languages, making it accessible to a wide range of developers. Here’s a brief overview of its availability:

  • C: MPI is commonly used in C programming for high-performance computing applications. Libraries such as Open MPI and MPICH provide implementations of MPI for C.
  • C++: C++ developers can also utilize MPI for parallel programming. MPI bindings for C++ are available, allowing seamless integration with existing C++ codebases.
  • Python: MPI is accessible in Python through the mpi4py library. mpi4py provides Python bindings for MPI, enabling developers to write parallel programs in Python and leverage the power of MPI for distributed computing.

Simple MPI Program in Python

Now, let’s dive into a simple MPI program written in Python using mpi4py. This program demonstrates basic point-to-point message passing between multiple processes. To run it outside a notebook, save it to a file and launch it with an MPI launcher, for example mpiexec -n 4 python script.py.

!pip install mpi4py

from mpi4py import MPI

def main():
    # Initialize MPI
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Perform different tasks based on the rank
    if rank == 0:
        # Master process
        data = {'message': 'Hello from Master!'}
        # Send data to other processes
        for i in range(1, size):
            comm.send(data, dest=i, tag=0)
        print("Master sent data to other processes.")
    else:
        # Worker processes
        # Receive data from master
        data = comm.recv(source=0, tag=0)
        print(f"Worker {rank} received data:", data['message'])

if __name__ == "__main__":
    main()
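One detail worth noting about the program above: mpi4py’s lowercase comm.send and comm.recv can transmit arbitrary Python objects because they serialize them with pickle on the sender and deserialize them on the receiver (the uppercase Send/Recv variants work on raw buffers such as NumPy arrays instead). That round trip can be sketched without MPI itself:

```python
import pickle

# What mpi4py's lowercase send()/recv() do to a generic Python object:
# pickle it into bytes on the sending rank, unpickle on the receiving rank.
data = {'message': 'Hello from Master!'}
wire_bytes = pickle.dumps(data)      # sender side: serialize to bytes
received = pickle.loads(wire_bytes)  # receiver side: reconstruct the object
print(received['message'])           # -> Hello from Master!
```

This is why the dictionary sent by the master arrives intact at each worker, and also why the lowercase methods carry some serialization overhead compared to the buffer-based variants.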

Explanation of Each Line

Now, let’s break down the above code and explain each line:

  1. !pip install mpi4py: This line installs the mpi4py library using pip. It ensures that mpi4py is available in the Google Colab environment.
  2. from mpi4py import MPI: This line imports the MPI module from the mpi4py library, allowing us to use MPI functionality in our Python code.
  3. def main():: This line defines the main function of our program.
  4. comm = MPI.COMM_WORLD: MPI is initialized automatically when mpi4py is imported; this line binds the predefined communicator MPI.COMM_WORLD, which includes all processes in the MPI job, to the name comm.
  5. rank = comm.Get_rank(): This line retrieves the rank of the current process within the communicator. The rank is a unique identifier assigned to each process in the communicator.
  6. size = comm.Get_size(): This line retrieves the total number of processes in the communicator. It returns the size of the MPI job.
  7. if rank == 0:: This line checks if the current process has rank 0, which is typically the master process.
  8. data = {'message': 'Hello from Master!'}: This line creates a dictionary containing a message to be sent from the master process to the worker processes.
  9. for i in range(1, size):: This line iterates over the ranks of the worker processes (excluding rank 0).
  10. comm.send(data, dest=i, tag=0): This line sends the data dictionary to the worker process with rank i. The dest parameter specifies the destination rank, and the tag parameter labels the message so that a matching recv can select it; tags need not be unique.
  11. print("Master sent data to other processes."): This line prints a message indicating that the master process has sent data to the worker processes.
  12. else:: This line defines the block of code to be executed by worker processes (i.e., processes with ranks other than 0).
  13. data = comm.recv(source=0, tag=0): This line receives the data sent by the master process. The source parameter specifies the rank to receive from, and the tag must match the tag used by the sender. The call blocks until a matching message arrives.
  14. print(f"Worker {rank} received data:", data['message']): This line prints a message indicating that the worker process with the current rank has received data from the master process.
  15. if __name__ == "__main__":: This line ensures that the main function is executed when the script is run as the main program.
  16. main(): This line calls the main function to execute the MPI program.
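The master/worker pattern dissected above can also be mimicked on a single machine with Python’s standard library, which may help build intuition before running real MPI. The sketch below is only an analogy, not MPI: one queue per worker rank stands in for the point-to-point channel addressed by dest/source, and the first tuple element stands in for the message tag.

```python
import queue
import threading

# Analogy of the MPI program above (illustration only, not real MPI):
# one queue per worker rank plays the role of a point-to-point channel.
NUM_WORKERS = 3
channels = {rank: queue.Queue() for rank in range(1, NUM_WORKERS + 1)}
results = queue.Queue()

def worker(rank):
    # Mirrors comm.recv(source=0, tag=0): block until a message arrives.
    tag, data = channels[rank].get()
    results.put((rank, data['message']))

threads = [threading.Thread(target=worker, args=(r,)) for r in channels]
for t in threads:
    t.start()

# Mirrors the master's loop: comm.send(data, dest=i, tag=0).
for rank in channels:
    channels[rank].put((0, {'message': 'Hello from Master!'}))

for t in threads:
    t.join()

received = sorted(results.queue)
for rank, msg in received:
    print(f"Worker {rank} received: {msg}")
```

Unlike this shared-memory analogy, MPI processes share no state: every exchange must go through explicit send/recv calls, which is exactly what makes the model scale across nodes.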

In this tutorial, we’ve covered a simple MPI program written in Python using the mpi4py library. We’ve explained each line of the code and provided insights into MPI and its availability in different programming languages. With this foundation, you can start exploring parallel programming and distributed computing using MPI in Python.

Material

Download the programs (code) covering mpi4py.
