Pawsey Supercomputing Research Centre

The Pawsey Supercomputing Research Centre is one of two Tier 1 supercomputing centres in Australia.

Pawsey prides itself on incorporating a range of best practices, features and solutions. Key features of the Centre include:

  • A purpose-built supercomputing building of more than 1,000 m² at Technology Park in Kensington, Western Australia, complete with scalable cooling and electrical services to accommodate expanding supercomputing infrastructure within the facility
  • A unique groundwater cooling system that removes heat from the supercomputer and dissipates it via an aquifer 140 metres below the Centre, with no loss of groundwater
  • A photovoltaic system incorporated into the building’s shaded façade, plus an extensive PV array on the roof of the building; this installation generates 140 kW of electricity onsite, offsetting the electrical and CO₂ footprint of the Centre
  • Automated ‘intelligence’ incorporated into the building, with real-time monitoring, to facilitate efficient operation and support fine-tuning that reduces overall power costs

https://dresa.org.au/content_providers/pawsey-supercomputing-research-centre
Showing 3 materials.
Porting the multi-GPU SELF-Fluids code to HIPFort

In this presentation, Dr. Joseph Schoonover of Fluid Numerics LLC shares his experience porting SELF-Fluids from multi-GPU CUDA Fortran to multi-GPU HIPFort.

The presentation covers the design principles and roadmap for SELF and the strategy for porting from Nvidia-only platforms to both AMD and Nvidia GPUs. Also discussed are the hurdles encountered along the way and considerations for developing multi-GPU accelerated applications in Fortran.

SELF is an object-oriented Fortran library that supports the implementation of Spectral Element Methods for solving partial differential equations. SELF-Fluids is an implementation of SELF that solves the compressible Navier-Stokes equations on CPU-only and GPU-accelerated compute platforms using the Discontinuous Galerkin Spectral Element Method. The SELF API is designed around the assumption that SEM developers and researchers need to implement derivatives in 1-D, and divergence, gradient, and curl in 2-D and 3-D, on scalar, vector, and tensor functions, using spectral collocation, continuous Galerkin, and discontinuous Galerkin spectral element methods.

The discussion is placed in the context of the Exascale era, where developers face a zoo of available compute hardware. Because of this, SELF routines provide GPU acceleration through AMD’s HIP and support multi-core, multi-node, and multi-GPU platforms with MPI.

Keywords: AMD, GPUs, supercomputer, supercomputing

Resource type: presentation

https://dresa.org.au/materials/porting-the-multi-gpu-self-fluids-code-to-hipfort
Contact: training@pawsey.org.au
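As a rough illustration of the spectral collocation idea that SELF builds on, the sketch below constructs a Chebyshev differentiation matrix and applies it to nodal values of a polynomial. This is a minimal pure-Python sketch of the general technique only; it is not SELF's actual Fortran API, and the function names are illustrative.

```python
import math

def chebyshev_nodes(n):
    """Chebyshev-Gauss-Lobatto points x_j = cos(j*pi/n), j = 0..n."""
    return [math.cos(j * math.pi / n) for j in range(n + 1)]

def chebyshev_diff_matrix(n):
    """Dense (n+1)x(n+1) spectral differentiation matrix on the CGL
    points. Multiplying it by a vector of nodal values gives the
    derivative of the degree-n interpolating polynomial at the nodes."""
    x = chebyshev_nodes(n)
    c = [2.0 if j in (0, n) else 1.0 for j in range(n + 1)]
    d = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(n + 1):
            if i != j:
                d[i][j] = (c[i] / c[j]) * (-1) ** (i + j) / (x[i] - x[j])
    # "Negative sum trick": choose each diagonal entry so every row sums
    # to zero (the derivative of a constant function must vanish).
    for i in range(n + 1):
        d[i][i] = -sum(d[i][j] for j in range(n + 1) if j != i)
    return d, x

def apply_matrix(matrix, values):
    """Plain matrix-vector product (no external libraries)."""
    return [sum(m * v for m, v in zip(row, values)) for row in matrix]

if __name__ == "__main__":
    n = 8
    d, x = chebyshev_diff_matrix(n)
    f = [xi ** 3 for xi in x]        # sample f(x) = x^3 at the nodes
    df = apply_matrix(d, f)          # exact for polynomials of degree <= n
    assert all(abs(di - 3 * xi ** 2) < 1e-10 for xi, di in zip(x, df))
```

Since the differentiation is exact for polynomials up to degree n, the assertion holds to round-off; in a production SEM code such as SELF, operators like this are applied element-by-element and offloaded to GPUs.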
Embracing new solutions for in-situ visualisation

This slide deck was used by Jean Favre, senior visualisation software engineer at CSCS (the Swiss National Supercomputing Centre), during his presentation at P'Con '21, Pawsey's first PaCER Conference.

This material discusses the upcoming release of ParaView v5.10, a leading scientific visualisation application. In this release, ParaView consolidates its implementation of the Catalyst API, a specification developed for simulations and other scientific data producers to analyse and visualise data in situ.

The material reviews some of the terminology and issues of different in-situ visualisation scenarios, then surveys early data adaptors for tight coupling of simulations and visualisation solutions. This is followed by an introduction to Conduit, an intuitive model for describing hierarchical scientific data. Both ParaView-Catalyst and Ascent use Conduit’s Mesh Blueprint, a set of conventions for describing computational simulation meshes. Finally, the material presents CSCS’s early experience adopting ParaView-Catalyst and Ascent via two concrete examples of instrumenting proxy numerical applications.

Keywords: ParaView, GPUs, supercomputer, supercomputing, visualisation, data visualisation

Resource type: presentation

https://dresa.org.au/materials/embracing-new-solutions-for-in-situ-visualisation
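Conduit's Mesh Blueprint describes a mesh as a hierarchy of coordsets, topologies, and fields. As a hedged sketch of that hierarchy only: the nested Python dicts below stand in for a Conduit node (a real Catalyst or Ascent integration builds a `conduit.Node`), and the coordset, topology, and field names are illustrative, not prescribed.

```python
# A minimal 2-D uniform mesh laid out along the lines of Conduit's
# Mesh Blueprint conventions, using plain dicts as a stand-in.
mesh = {
    "coordsets": {
        "coords": {
            "type": "uniform",
            "dims": {"i": 3, "j": 3},        # 3x3 vertices -> 2x2 cells
            "origin": {"x": 0.0, "y": 0.0},
            "spacing": {"dx": 1.0, "dy": 1.0},
        }
    },
    "topologies": {
        "topo": {
            "type": "uniform",
            "coordset": "coords",            # refers to the coordset above
        }
    },
    "fields": {
        "temperature": {                     # illustrative field name
            "association": "vertex",         # one value per vertex
            "topology": "topo",
            "values": [float(k) for k in range(9)],
        }
    },
}

def vertex_count(coordset):
    """Number of vertices implied by a uniform coordset's dims."""
    n = 1
    for extent in coordset["dims"].values():
        n *= extent
    return n

# Basic consistency check: a vertex-associated field must carry one
# value per vertex of the topology's coordset.
assert len(mesh["fields"]["temperature"]["values"]) == vertex_count(
    mesh["coordsets"]["coords"])
```

The point of the Blueprint is exactly this kind of self-describing layering: a simulation publishes its data in this shape once, and both ParaView-Catalyst and Ascent can consume it without bespoke adaptors.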
Merit Allocation Training for 2022

This merit allocation training session provides critical information for researchers planning to apply for time on Pawsey’s new Setonix supercomputer in 2022.

Keywords: supercomputer, supercomputing, merit allocation, allocation

Resource type: video

https://dresa.org.au/materials/merit-allocation-training-for-2022
Found 0 upcoming events. Found 20 past events.