This workshop is directed at scientists and engineers who want to enable their code to run on parallel computers ranging from small multi-core machines to large clusters. The course covers both shared-memory approaches using OpenMP compiler directives and distributed-memory frameworks using the Message-Passing Interface (MPI).
To address the growing availability of multi-node clusters with multi-core nodes, a substantial portion of the course focuses on hybrid approaches that combine OpenMP and MPI to make optimal use of such resources. To this end, we will use a recently developed, freely available library that implements a "double-layer master-slave" parallel model and requires minimal user programming.
We will also devote some time to the discussion of the “HPCVL Working Template” which was designed to facilitate code management, timing, and debugging for serial, OpenMP, and MPI code, and any combination thereof.
Course participants will be able to access dedicated resources at HPCVL for practical exercises through the HPCVL Secure Portal from a local desktop or laptop computer. No special client software is required, as all resources are supplied server-side by HPCVL.
For more details and registration information: