{"id":3016,"date":"2024-03-27T15:46:40","date_gmt":"2024-03-27T10:46:40","guid":{"rendered":"https:\/\/afzalbadshah.com\/?p=3016"},"modified":"2024-05-13T16:01:46","modified_gmt":"2024-05-13T11:01:46","slug":"blocking-and-non-blocking-communication-in-mpi","status":"publish","type":"post","link":"https:\/\/afzalbadshah.com\/index.php\/2024\/03\/27\/blocking-and-non-blocking-communication-in-mpi\/","title":{"rendered":"Blocking and Non-blocking Communication in MPI"},"content":{"rendered":"\n<p>In parallel computing with MPI (Message Passing Interface), communication between processes plays a crucial role in parallelizing algorithms efficiently. Two common approaches to communication are blocking and non-blocking communication. <a href=\"https:\/\/afzalbadshah.com\/index.php\/category\/courses\/mpi-with-python\/\" target=\"_blank\" rel=\"noopener\" title=\"\">You can visit the detailed tutorial on MPI with Python here.<\/a><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Blocking Communication<\/strong><\/h2>\n\n\n\n<p>Blocking communication halts a process's execution until the communication operation is complete. In MPI, blocking functions like <code>comm.send()<\/code> and <code>comm.recv()<\/code> do not return until the operation has finished: the receive waits for the message to arrive, and the send waits until the message has been delivered or safely buffered. Blocking communication is often used when processes need to synchronize their execution or when the sender and receiver must coordinate closely. While it simplifies program logic and synchronization, it can create performance bottlenecks if processes spend significant time waiting for communication to complete. 
Let&#8217;s look at the following code:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from mpi4py import MPI\ncomm = MPI.COMM_WORLD\nrank = comm.Get_rank()\nif rank == 0:\n    data = {'a': 7, 'b': 3.14}\n    comm.send(data, dest=1, tag=11)\nelif rank == 1:\n    data = comm.recv(source=0, tag=11)<\/code><\/pre>\n\n\n\n<p><strong>Explanation<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Import MPI:<\/strong> The code begins by importing the MPI module from the mpi4py library, which provides MPI functionality for Python programs.<\/li>\n\n\n\n<li><strong>Initialize MPI Communicator:<\/strong> The code initializes the MPI communicator <code>comm<\/code>, which represents all processes participating in the computation.<\/li>\n\n\n\n<li><strong>Get Rank:<\/strong> Each process obtains its rank using <code>comm.Get_rank()<\/code>, which identifies it within the communicator.<\/li>\n\n\n\n<li><strong>Conditional Execution:<\/strong> Depending on the rank of the process:\n<ul class=\"wp-block-list\">\n<li>If the rank is 0:\n<ul class=\"wp-block-list\">\n<li>Create a Python dictionary <code>data<\/code> containing some sample data.<\/li>\n\n\n\n<li>Use <code>comm.send()<\/code> to send the data to process 1 (<code>dest=1<\/code>) with a specified tag (<code>tag=11<\/code>).<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>If the rank is 1:\n<ul class=\"wp-block-list\">\n<li>Use <code>comm.recv()<\/code> to receive data from process 0 (<code>source=0<\/code>) with the specified tag (<code>tag=11<\/code>). The received data is stored in the <code>data<\/code> variable.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Blocking Communication:<\/strong> Both <code>comm.send()<\/code> and <code>comm.recv()<\/code> are blocking operations. 
This means that <code>comm.recv()<\/code> blocks until the message arrives, and <code>comm.send()<\/code> blocks until the message has been delivered or safely buffered.<\/li>\n\n\n\n<li><strong>Data Transfer:<\/strong> In this program, the dictionary <code>data<\/code> is sent from process 0 to process 1 using blocking communication. Process 1 waits to receive the data sent by process 0 before continuing its execution.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Non-blocking Communication<\/strong><\/h2>\n\n\n\n<p>Non-blocking communication, on the other hand, allows a process to continue executing immediately after initiating a communication operation, without waiting for it to complete. In MPI, non-blocking functions like <code>comm.isend()<\/code> and <code>comm.irecv()<\/code> return a request object immediately, enabling processes to overlap computation with communication. Non-blocking communication is particularly useful when processes can perform useful work while communication is in progress. By overlapping computation with communication, it can improve the overall performance and scalability of parallel applications. 
Let&#8217;s look at the following code:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from mpi4py import MPI\ncomm = MPI.COMM_WORLD\nrank = comm.Get_rank()\nif rank == 0:\n    data = {'a': 7, 'b': 3.14}\n    req = comm.isend(data, dest=1, tag=11)\n    req.wait()\nelif rank == 1:\n    req = comm.irecv(source=0, tag=11)\n    data = req.wait()<\/code><\/pre>\n\n\n\n<p><strong>Explanation<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Import MPI:<\/strong> As in the blocking communication program, this code starts by importing the MPI module from the mpi4py library.<\/li>\n\n\n\n<li><strong>Initialize MPI Communicator and Get Rank:<\/strong> The MPI communicator <code>comm<\/code> is initialized, and the rank of the process is obtained using <code>comm.Get_rank()<\/code>.<\/li>\n\n\n\n<li><strong>Conditional Execution:<\/strong> Depending on the rank of the process:\n<ul class=\"wp-block-list\">\n<li>If the rank is 0:\n<ul class=\"wp-block-list\">\n<li>Create a Python dictionary <code>data<\/code> containing some sample data.<\/li>\n\n\n\n<li>Use <code>comm.isend()<\/code> to initiate the non-blocking send of the data to process 1 (<code>dest=1<\/code>) with a specified tag (<code>tag=11<\/code>). A request object <code>req<\/code> is returned.<\/li>\n\n\n\n<li>Wait for the completion of the send operation using <code>req.wait()<\/code>. In a real application, useful computation would be placed between <code>comm.isend()<\/code> and <code>req.wait()<\/code> to overlap communication with computation; here the wait follows immediately for simplicity.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li>If the rank is 1:\n<ul class=\"wp-block-list\">\n<li>Use <code>comm.irecv()<\/code> to initiate the non-blocking receive of data from process 0 (<code>source=0<\/code>) with the specified tag (<code>tag=11<\/code>). A request object <code>req<\/code> is returned.<\/li>\n\n\n\n<li>Wait for the completion of the receive operation using <code>req.wait()<\/code>. 
The received data is stored in the <code>data<\/code> variable.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Non-blocking Communication:<\/strong> In contrast to blocking communication, non-blocking communication operations (<code>comm.isend()<\/code> and <code>comm.irecv()<\/code>) do not block the execution of the process. Instead, they return a request object immediately, allowing the process to perform other tasks while the communication operation progresses asynchronously.<\/li>\n\n\n\n<li><strong>Data Transfer:<\/strong> Similarly, the dictionary <code>data<\/code> is sent from process 0 to process 1, but this time using non-blocking communication. Process 1 initiates the receive operation and waits for the data to be received asynchronously.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Material<\/h2>\n\n\n\n<p><a href=\"https:\/\/drive.google.com\/drive\/folders\/1vKaZIsBGLhzew2DKmmYfMCSrPAo4jXe-?usp=sharing\" target=\"_blank\" rel=\"noopener\" title=\"\">Download the programs (code), covering the MPI4Py.<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In parallel computing with MPI (Message Passing Interface), communication between processes plays a crucial role in achieving efficient parallelization of algorithms. Two common approaches to communication are blocking and non-blocking communication. You can visit the detailed tutorial on MPI with Python here. Blocking Communication Blocking communication involves processes halting their execution until the communication operation is complete. 
In MPI, blocking communication functions like comm.send() and comm.recv() ensure that the sender waits until the receiver receives the message, and vice versa&#8230;.<\/p>\n<p class=\"read-more\"><a class=\"btn btn-default\" href=\"https:\/\/afzalbadshah.com\/index.php\/2024\/03\/27\/blocking-and-non-blocking-communication-in-mpi\/\"> Read More<span class=\"screen-reader-text\">  Read More<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":3025,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"enabled":false},"version":2}},"categories":[506],"tags":[550,49,540,551,474],"class_list":["post-3016","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-mpi-with-python","tag-blocking","tag-communication","tag-mpi","tag-non-bloking","tag-parallel-computing"],"aioseo_notices":[],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/afzalbadshah.com\/wp-content\/uploads\/2024\/03\/MPI-Python-1.png?fit=1280%2C720&ssl=1","jetpack_sharing_enabled":true,"jetpack_likes_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/pf3emP-ME","jetpack-related-posts":[],"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/afzalbadshah.com\/index.php\/wp-json\/wp\/v2\/posts\/3016","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/afzalbadshah.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/afzalbadshah.com\/index.php\/wp-json\/wp\/v2\/types\/
post"}],"author":[{"embeddable":true,"href":"https:\/\/afzalbadshah.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/afzalbadshah.com\/index.php\/wp-json\/wp\/v2\/comments?post=3016"}],"version-history":[{"count":5,"href":"https:\/\/afzalbadshah.com\/index.php\/wp-json\/wp\/v2\/posts\/3016\/revisions"}],"predecessor-version":[{"id":3423,"href":"https:\/\/afzalbadshah.com\/index.php\/wp-json\/wp\/v2\/posts\/3016\/revisions\/3423"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/afzalbadshah.com\/index.php\/wp-json\/wp\/v2\/media\/3025"}],"wp:attachment":[{"href":"https:\/\/afzalbadshah.com\/index.php\/wp-json\/wp\/v2\/media?parent=3016"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/afzalbadshah.com\/index.php\/wp-json\/wp\/v2\/categories?post=3016"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/afzalbadshah.com\/index.php\/wp-json\/wp\/v2\/tags?post=3016"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}