Description:

GPUs are powerful devices, but one of their main weaknesses is transferring data to and from the host. A recent addition, GPU-to-GPU communication, removes the need to route data back through the host, provided the hardware supports it.

This workshop will cover how this can be achieved in MPI programs, as well as discuss other intermediate-level MPI topics.

You can join in-person OR virtually. This is a free workshop. Please register only if you plan to attend.

Please join at 12:45pm (AWST) to ensure you can access the system. The class starts promptly at 1:15pm (AWST).

What will I learn in this 3-hour, hands-on workshop?
This workshop revolves around MPI, used for distributed memory programming. For the GPU portion, we will also use HIP, which is used on Pawsey’s Setonix supercomputer. Join us to learn how to:

  • achieve GPU-to-GPU communications in MPI (a minimal sketch follows this list).
  • receive messages of unknown size.
  • use one-sided operations in MPI.
  • efficiently reuse known communication patterns.
  • use miscellaneous other MPI features that are good to know.
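
To give a flavour of the first point, here is a minimal, illustrative sketch (not taken from the workshop materials) of GPU-aware MPI point-to-point communication with HIP device buffers. It assumes an MPI library built with GPU support enabled; the buffer size, ranks and tag are arbitrary example values.

/* Minimal sketch: GPU-aware MPI point-to-point with HIP device buffers.
 * Assumes an MPI build with GPU support enabled; sizes, ranks and tags
 * below are illustrative only. */
#include <mpi.h>
#include <hip/hip_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1024;
    double *d_buf;                                   /* buffer in GPU memory */
    hipMalloc((void **)&d_buf, n * sizeof(double));

    if (rank == 0) {
        hipMemset(d_buf, 0, n * sizeof(double));     /* pretend these are results */
        /* The device pointer is passed straight to MPI: with GPU-aware MPI
         * the transfer needs no explicit copy back to the host. */
        MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d doubles directly into GPU memory\n", n);
    }

    hipFree(d_buf);
    MPI_Finalize();
    return 0;
}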

Start: Monday, 11 December 2023 @ 12:45

End: Monday, 11 December 2023 @ 16:15

Duration: 3 hours

Timezone: Perth

Venue: 1 Bryce Ave

City: Kensington  Country: Australia  Postcode: 6151

Prerequisites:

You must be familiar with at least one of the following languages:
C/C++ (Refresher videos) or
Fortran
You need to know the basics of MPI and, ideally, the basics of HIP.

Learning Objectives:
  • GPU-aware MPI
  • Probe-based message receiving (MPI_Probe / MPI_Get_count, rendezvous vs. eager protocols); a minimal sketch follows this list
  • MPI RMA (MPI_Get / MPI_Put / MPI_Accumulate / MPI_Win_fence / MPI_Win_lock / MPI_Win_unlock / MPI_Win_post / MPI_Win_start / MPI_Win_complete / MPI_Win_wait)
  • Persistent communication requests (MPI_*_init / MPI_Start / MPI_Wait)
  • Miscellaneous features
    • Pack / unpack vs derived datatypes (MPI_Type_vector / MPI_Type_create_struct / MPI_Type_commit / MPI_Type_create_resized)
    • Finding neighbours vs virtual topology programming and neighbourhood collective operations (MPI_Cart_* / MPI_Graph_* / MPI_Neighbor_*)
    • Shared memory and hybrid programming (MPI_Init_thread, MPI_Comm_split_type).
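
As a pointer to the probe-based objective above (and not part of the official course materials), this is roughly what receiving a message of unknown size looks like in C with MPI_Probe and MPI_Get_count; the source rank, tag and element type are illustrative assumptions.

/* Minimal sketch: receive a message whose length is not known in advance,
 * using MPI_Probe and MPI_Get_count. Source rank, tag and datatype are
 * illustrative only. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

static void recv_unknown_size(int src, int tag)
{
    MPI_Status status;
    int count;

    /* Wait until a matching message is pending, without receiving it yet. */
    MPI_Probe(src, tag, MPI_COMM_WORLD, &status);

    /* Ask how many MPI_INT elements the pending message contains. */
    MPI_Get_count(&status, MPI_INT, &count);

    /* Allocate exactly enough space, then receive. */
    int *buf = (int *)malloc(count * sizeof(int));
    MPI_Recv(buf, count, MPI_INT, src, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("received %d ints from rank %d\n", count, src);
    free(buf);
}
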
Eligibility:
  • Open to all

Organiser: Pawsey Supercomputing Research Centre

Contact: training@pawsey.org.au

Host institution: Pawsey Supercomputing Research Centre

Keywords: MPI, GPU to GPU Communication

Event type:
  • Workshop

Cost Basis: Free to all
