9 events found

Keywords: Setonix or OpenMP

  • P’Con – Exascale Data Management with ADIOS

    7 December 2021

    P’Con – Exascale Data Management with ADIOS https://dresa.org.au/events/p-con-exascale-data-management-with-adios

    Speakers: Scott Klasky and Norbert Podhorszki (Oak Ridge National Laboratory)

    ADIOS is a high-performance publish/subscribe I/O framework designed and developed for the exascale computing era. In a snapshot, ADIOS:
    - is integrated into most of the popular analysis and visualization packages;
    - has strict continuous-integration practices, providing stable, portable and efficient I/O services;
    - has a programming interface designed for easy switching from files, to streams on HPC machines, to streams over the wide area network.

    In this presentation, the speakers will introduce the ADIOS I/O framework and describe how it is being used:
    - in data-intensive supercomputing simulations, for data storage and retrieval, moving petabytes of data in a single job;
    - in coupled simulations, for data movement between the simulation codes and to in situ analysis and in situ visualization services;
    - in AI applications, for collecting training data from ensembles of computations on the fly; and
    - in collaborations between experimental facilities and HPC centres, to stream experimental data for near-real-time decision making.

    We will present how ADIOS was used in the 2020 Gordon Bell finalist paper from Pawsey and ORNL on the workflow for simulating and processing the full-scale low-frequency telescope data of SKA Phase 1 on the Summit supercomputer at ORNL. ADIOS is also a research framework for new I/O technologies, pushing the boundaries beyond current use cases. Our research focuses on data reduction with trusted lossy compression techniques, data refactoring for easier on-demand retrieval, hierarchical storage to speed up access to the most important data, asynchronous I/O techniques, and more.

    The techniques in ADIOS have been developed over 15 years through collaborations with science applications, and we are always seeking new collaborations and new challenges that will define the I/O landscape of the future. We hope this presentation will let the audience learn how others are using extreme I/O in their fields, and stimulate them to bring new challenges to us. The Exascale Data Management with ADIOS session is part of the first PaCER Conference – P’Con: a week where Pawsey continues setting the pace for exascale.

    7 December 2021, 09:30–11:30 UTC · Pawsey Supercomputing Research Centre · training@pawsey.org.au · Webinar/conference, open to all · Keywords: ADIOS, supercomputer, supercomputing, Pawsey, PaCER, Setonix
  • P’Con – Porting multi-GPU SELF-Fluids code to HIPFort

    8 December 2021

    P’Con – Porting multi-GPU SELF-Fluids code to HIPFort https://dresa.org.au/events/p-con-porting-multi-gpu-self-fluids-code-to-hipfort

    Speaker: Dr Joseph Schoonover, Fluid Numerics LLC

    During this talk, we will share our experience with the process of porting SELF-Fluids from multi-GPU CUDA Fortran to multi-GPU HIPFort. The talk will cover the design principles and roadmap for SELF and the strategy for porting from Nvidia-only platforms to AMD and Nvidia GPUs. We’ll discuss hurdles encountered along the way and considerations for developing multi-GPU-accelerated applications in Fortran.

    SELF is an object-oriented Fortran library that supports the implementation of spectral element methods for solving partial differential equations. SELF-Fluids is an implementation of SELF that solves the compressible Navier-Stokes equations on CPU-only and GPU-accelerated compute platforms using the Discontinuous Galerkin Spectral Element Method. The SELF API is designed on the assumption that SEM developers and researchers need to implement derivatives in 1-D, and divergence, gradient, and curl in 2-D and 3-D, on scalar, vector, and tensor functions, using spectral collocation, continuous Galerkin, and discontinuous Galerkin spectral element methods. Additionally, as we enter the exascale era, we are faced with a zoo of available compute hardware. Because of this, SELF routines support GPU acceleration through AMD’s HIP, as well as multi-core, multi-node, and multi-GPU platforms with MPI. SELF and SELF-Fluids are publicly available online at https://github.com/fluidnumerics/self

    Porting multi-GPU SELF-Fluids code to HIPFort is part of the first PaCER Conference – P’Con: a week where Pawsey continues setting the pace for exascale.

    8 December 2021, 09:30–11:30 UTC · Pawsey Supercomputing Research Centre · training@pawsey.org.au · Webinar/conference, open to all · Keywords: GPUs, HIPFort, supercomputer, supercomputing, Setonix
  • Using Supercomputers: Parts 1 and 2

    22 - 23 August 2022

    Using Supercomputers: Parts 1 and 2 https://dresa.org.au/events/using-supercomputers-parts-1-and-2

    In this course, students will be introduced to supercomputers and what makes them different from other computers. A typical supercomputing architecture, parallelism, compute-resource sharing, command-line interfaces, and other key concepts and practices will be discussed. Students will practise what they learn using Pawsey’s new supercomputer, Setonix.

    22 August 2022, 10:00 UTC – 23 August 2022, 13:00 UTC · Pawsey Supercomputing Research Centre · training@pawsey.org.au · Workshop/webinar, open to all · Keywords: supercomputing, supercomputers, Unix, Setonix
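    The compute-resource sharing and command-line concepts this course covers are typically exercised through a batch scheduler such as Slurm, which Pawsey systems use. As an illustrative sketch only (the partition, account, and program names below are hypothetical, not taken from the event description), a minimal job script looks like:

```shell
#!/bin/bash -l
#SBATCH --job-name=hello          # name shown in the queue
#SBATCH --partition=work          # hypothetical partition name
#SBATCH --account=project123      # hypothetical project allocation
#SBATCH --ntasks=4                # request 4 tasks (shared-resource request)
#SBATCH --time=00:05:00           # wall-time limit (hh:mm:ss)

# srun launches the program on the resources the scheduler allocated
srun ./hello_mpi
```

    The script is submitted with `sbatch`, and the scheduler queues it until the requested resources are free, which is how a supercomputer is shared fairly among many users.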
  • Evaluate Application Performance using TAU and E4S

    4 - 5 April 2023

    Evaluate Application Performance using TAU and E4S https://dresa.org.au/events/evaluate-application-performance-using-tau-and-e4s

    This workshop is for computational scientists who want to evaluate the performance of their parallel scientific applications. You will learn about the Extreme-scale Scientific Software Stack (E4S) and the TAU Performance System® and its interfaces to other tools and libraries. During the hands-on portion of the workshop, the instructor will attempt to collect and analyse performance data for additional user codes. Attendees are welcome to contact the instructor ahead of time to begin collecting data to discuss at the workshop.

    More detail on the workshop: To meet the needs of computational scientists evaluating the performance of their parallel scientific applications, we will focus on the use of E4S containers and on TAU performance-data collection, analysis, and performance optimization. After demonstrating how performance data (both profiles and traces) can be collected using TAU’s (Tuning and Analysis Utilities) automated instrumentation, attention will turn to instrumenting key codes and analysing the collected performance data to explain where the time is spent. The workshop will include sample codes that illustrate the different instrumentation and measurement choices. Topics will cover generating performance profiles and traces with memory utilization and headroom, I/O, and interfaces to ROCm, including ROCProfiler and ROCTracer, with support for collecting hardware performance data. The workshop will cover instrumentation of OpenMP programs using the OpenMP Tools Interface (OMPT), including support for target offload and measurement of a program’s memory footprint. We will demonstrate scalable tracing using OTF2 and visualization using the Vampir trace-analysis tool. Performance-data analysis using ParaProf and PerfExplorer will be demonstrated using the performance data management framework (TAUdb), which includes TAU’s performance database.

    4 April 2023, 09:00 UTC – 5 April 2023, 15:00 UTC · Pawsey Supercomputing Research Centre · training@pawsey.org.au · Capacity: 20 · Workshop, open to all · Keywords: supercomputing, GPUs, TAU, E4S, OpenMP, performance
  • OpenCL workshop

    21 - 29 June 2023

    OpenCL workshop https://dresa.org.au/events/opencl-workshop

    Supercomputers make use of accelerators from a variety of hardware vendors, using devices such as multi-core CPUs, GPUs and even FPGAs. OpenCL is a way for your HPC application to make effective use of heterogeneous computing devices, and to avoid code refactoring for new HPC infrastructure.

    21 June 2023, 09:00 UTC – 29 June 2023, 17:00 UTC · Online, Perth, Australia 6100 · Pawsey Supercomputing Research Centre · Contact: Ann Backhaus · Audience: technical staff, students, professionals, researchers · Capacity: 10 · Webinar/workshop, open to all · Keywords: supercomputing, OpenCL, skills uplift, Pawsey Supercomputing Centre, CPUs, GPUs, HPC, Setonix
  • Using PennyLane on Pawsey’s Setonix Supercomputer

    28 November 2023

    Using PennyLane on Pawsey’s Setonix Supercomputer https://dresa.org.au/events/using-pennylane-on-pawsey-s-setonix-supercomputer

    Join this free, online training, where Xanadu’s Quantum Community Manager Catalina Albornoz introduces quantum computing using the Python-based PennyLane software library. During this training, you’ll have the opportunity to work hands-on with PennyLane on Pawsey’s Setonix supercomputer.

    Xanadu is a Canadian quantum computing company with the mission to build quantum computers that are useful and available to people everywhere. Xanadu is one of the world’s leading quantum hardware and software companies and also leads the development of PennyLane, an open-source software library for quantum computing and application development. This training consists of two days: one training day (this session) and a follow-up Q&A session.

    28 November 2023, 09:00–13:00 UTC · Perth, Australia · Pawsey Supercomputing Research Centre and Xanadu · training@pawsey.org.au · Audience: researchers and research students, data scientists, professional/support staff · Open to all · Keywords: online learning, supercomputing, Pawsey Supercomputing Centre, PennyLane, Setonix
  • Q&A Using PennyLane on Pawsey’s Setonix Supercomputer

    1 December 2023

    Q&A Using PennyLane on Pawsey’s Setonix Supercomputer https://dresa.org.au/events/q-a-using-pennylane-on-pawsey-s-setonix-supercomputer

    This is a question-and-answer (Q&A) session following up the 28 November Using PennyLane on Pawsey’s Setonix Supercomputer training. Join this session if you have questions arising from your use of PennyLane, or come to learn more from your colleagues about theirs.

    1 December 2023, 09:00–10:00 UTC · Perth, Australia · Pawsey Supercomputing Research Centre · training@pawsey.org.au · Audience: researchers and research students, professional/support staff, data scientists · Webinar, open to all · Keywords: Pawsey Supercomputing Centre, PennyLane, Setonix, supercomputing
  • Introduction to OpenMP for CPUs

    8 December 2023

    Introduction to OpenMP for CPUs https://dresa.org.au/events/introduction-to-openmp-for-cpus

    Having multiple cores available only speeds up your code when you know how to use them. If you want to discover how to enable CPU parallelism in your code, and are familiar with C, C++ or Fortran, then this hands-on workshop is for you. You can join in person or virtually. This is a free workshop; please register only if you plan to attend. Please join at 8:30 am (AWST) to ensure you can access the system. The class starts promptly at 9:00 am.

    What will I learn in this three-hour, hands-on workshop? You will learn about OpenMP, the reference for shared-memory programming in the High-Performance Computing world over the last 25+ years. This workshop will address how to:
    - spawn multiple threads to enable CPU parallelism;
    - coordinate threads so that a given piece of code is executed by certain threads;
    - handle data between threads, depending on whether you want them to have their own copy or share it;
    - parallelise the execution of loops across threads;
    - make sure that different threads end up with similar amounts of work;
    - synchronise threads to ensure that program correctness is preserved.

    8 December 2023, 08:30–12:00 UTC · 1 Bryce Ave, Kensington, Australia 6151 · Pawsey Supercomputing Research Centre · training@pawsey.org.au · Workshop, open to all · Keywords: OpenMP, CPU parallelism, OpenMP with CPUs
  • Using OpenMP with GPUs

    8 December 2023

    Kensington, Australia

    Using OpenMP with GPUs https://dresa.org.au/events/using-openmp-with-gpus

    GPUs are powerful devices, but not trivial to use. If you want to discover how to offload the execution of your code to GPUs, and how to leverage the parallelism available on them, then this hands-on workshop is for you. You can join in person or virtually. This is a free workshop; please register only if you plan to attend. Please join at 12:45 pm (AWST) to ensure you can access the system. The class starts promptly at 1:15 pm.

    What will I learn in this three-hour, hands-on workshop? This workshop is a gentle introduction to OpenMP for GPUs. OpenMP has been the reference for shared-memory programming in the High-Performance Computing world for the last 25+ years, and has recently become a viable solution for GPU offloading. This workshop will address how to:
    - offload the execution of your code to a GPU;
    - move data to and from the GPU;
    - leave data on the GPU to avoid extra, costly memory transfers;
    - enable parallelism on the GPU;
    - use multiple GPUs;
    - leverage asynchronous execution.

    8 December 2023, 12:45–16:15 UTC · 1 Bryce Ave, Kensington, Australia 6151 · Pawsey Supercomputing Research Centre · training@pawsey.org.au · Workshop, open to all · Keywords: OpenMP with GPUs, OpenMP
