Topic: “Introduction to MPI – Part II”
Speaker: Pawel Pomorski, SHARCNET
Webinar link: SN-Seminars Vidyo room
(NOTE: This talk is a continuation of Part I talk given on Nov. 11, 2015. If you missed it, the recording is posted on SHARCNET’s YouTube channel.)
This talk builds on the "Introduction to MPI (Message Passing Interface) Part I" talk, introducing more advanced features such as collective and non-blocking communications. Collective communications are implemented in a set of standard MPI routines, and they permit efficient exchange of information between processes, without extra effort from the programmer, when communication follows a standard, structured pattern. Examples of collective communications include broadcasts and reductions. Non-blocking communications allow the programmer to overlap communication with computation. Since communication is generally slow compared to computation, such overlap is often necessary to produce efficient MPI code. The example programs in this talk will be implemented in C.
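As a taste of the routines the talk covers, here is a minimal C sketch (not from the talk itself) combining two collectives, MPI_Bcast and MPI_Reduce, with a non-blocking ring exchange using MPI_Isend/MPI_Irecv. The problem it solves (summing 1..n across ranks) is an illustrative assumption, not an example taken from the webinar.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Collective: the root broadcasts the problem size to all ranks */
    int n = 0;
    if (rank == 0) n = 1000;
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each rank sums its strided share of 1..n */
    long local = 0;
    for (int i = rank + 1; i <= n; i += size)
        local += i;

    /* Non-blocking: start a ring exchange of partial sums, then do
       independent work while the messages are in flight */
    int right = (rank + 1) % size;
    int left  = (rank + size - 1) % size;
    long from_left = 0;
    MPI_Request reqs[2];
    MPI_Irecv(&from_left, 1, MPI_LONG, left, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&local, 1, MPI_LONG, right, 0, MPI_COMM_WORLD, &reqs[1]);
    /* ... computation that does not touch the buffers could go here ... */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    /* Collective: combine the partial sums onto the root */
    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum of 1..%d = %ld\n", n, total);

    MPI_Finalize();
    return 0;
}
```

Build with `mpicc` and run with `mpirun -np 4 ./a.out` (an MPI implementation such as Open MPI or MPICH is required). Note that neither send nor receive buffers may be reused until MPI_Waitall completes.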
Need help attending a webinar? See the SHARCNET Help Wiki.