Geophysical Research Data Processing and Modelling for 2030 Computation
The Cross-NCRIS National Data Assets program co-funded the ‘Geophysics 2030: Building a National High-Resolution Geophysics Reference Collection for 2030 Computation’ (Geophysics2030) project. At completion, Geophysics2030 i) trialled publishing vertically integrated geophysical datasets, making...
Keywords: Geophysics, Applied mathematics, Physical sciences, Computer and information sciences
Resource type: presentation
Geophysical Research Data Processing and Modelling for 2030 Computation
https://zenodo.org/records/11100591
https://dresa.org.au/materials/geophysical-research-data-processing-and-modelling-for-2030-computation
The Cross-NCRIS National Data Assets program co-funded the ‘Geophysics 2030: Building a National High-Resolution Geophysics Reference Collection for 2030 Computation’ (Geophysics2030) project. At completion, Geophysics2030 i) trialled publishing vertically integrated geophysical datasets, making both raw datasets and successive levels of derivative data products available online in a new international self-describing data standard (first published in 2022); ii) co-located these datasets/data products with the HPC resources required to process them at scale; and iii) developed new community software and environments allowing researchers to exploit the new datasets at high resolution on a continental scale. This ARDC-, AuScope-, NCI- and TERN-funded project created new high-performance datasets and introduced a world-leading community platform that allows researchers to combine high-performance computing, high-resolution datasets, and flexible software workflows. This innovation was evidenced by new projects in collaboration with leading international researchers, including Jared Peacock, the United States Geological Survey-based leader of the new standards for Magnetotelluric (MT) data, and Karl Kappler of DIAS Geophysics, who leads the development of ‘Aurora’, a National Science Foundation (USA)-funded open-source software package for processing MT data using the new MTH5 standards.
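As a brief, hedged illustration of what a self-describing standard such as MTH5 means in practice: MTH5 files are built on HDF5, so their internal hierarchy can be explored with generic tools before any MT-specific processing. The sketch below uses the h5py Python package and a hypothetical filename; it is illustrative only and is not part of the project's published workflows.

import h5py  # generic HDF5 reader; MTH5 files are HDF5 underneath

# "example_station.h5" is a hypothetical filename used for illustration.
with h5py.File("example_station.h5", "r") as f:
    def describe(name, obj):
        # Print each group path; for datasets, also report shape and dtype.
        if isinstance(obj, h5py.Dataset):
            print(f"{name}  shape={obj.shape}  dtype={obj.dtype}")
        else:
            print(name)
    f.visititems(describe)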
This Community Connect project, in partnership with NCI and AuScope, proposed to develop, deliver, and distribute a 2-day ‘Geophysical Research Data Processing and Modelling for 2030 Computation’ workshop in 2023. The training package consists of two parts: i) using NCI for geophysics processing and modelling, and ii) developing workflows for coupling geophysical software, compute environments, and datasets.
Through previous engagement with the geophysics community, we knew that users of the 2030 Geophysics Collection were experts in their fields of geophysical data acquisition, processing, and modelling. The community had high levels of computer literacy, deep technical skills in geophysics, and strong research expertise. The workshop was targeted at this advanced community to facilitate the use of large co-located datasets and high-performance computing on the NCI HPC/cloud platform.
rebecca@auscope.org.au
Lesley Wyborn
Nigel Rees
Hannes Hollmann
Jo Croucher
Jared Peacock
Karl Kappler
Rui Yang
Janelle Kerr
Stephan Thiel
Hoël Seille
Anandaroop Ray
Robert Pickle
Voon Hui Lai
Shang Wang
Ben Evans
Rebecca Farrington
Geophysics, Applied mathematics, Physical sciences, Computer and information sciences
Understanding your role as a Data Steward: the role of a Data Steward across the research data management lifecycle
This presentation provides an overview of the role and responsibilities of a Data Steward at the University of Adelaide across the six key phases of the research data management lifecycle.
The resource was developed by the University of Adelaide Library in December 2023 as part of the...
Keywords: research data management, RDM, RDM Training, data stewardship, research data governance, role profiles
Resource type: presentation
Understanding your role as a Data Steward: the role of a Data Steward across the research data management lifecycle
https://doi.org/10.25909/25248025
https://dresa.org.au/materials/understanding-your-role-as-a-data-steward-the-role-of-a-data-steward-across-the-research-data-management-lifecycle
This presentation provides an overview of the role and responsibilities of a Data Steward at the University of Adelaide across the six key phases of the research data management lifecycle.
The resource was developed by the University of Adelaide Library in December 2023 as part of the Institutional Underpinnings program facilitated by the Australian Research Data Commons (ARDC).
University of Adelaide Library contact: https://www.adelaide.edu.au/library/ask-library
Crichton, Tom
research data management, RDM, RDM Training, data stewardship, research data governance, role profiles
mbr
phd
ecr
researcher
Porting the multi-GPU SELF-Fluids code to HIPFort
In this presentation, Dr. Joseph Schoonover of Fluid Numerics LLC shares his experience porting SELF-Fluids from multi-GPU CUDA-Fortran to multi-GPU HIPFort.
The presentation covers the design principles and roadmap for SELF and the strategy to port from...
Keywords: AMD, GPUs, supercomputer, supercomputing
Resource type: presentation
Porting the multi-GPU SELF-Fluids code to HIPFort
https://docs.google.com/presentation/d/1JUwFkrHLx5_hgjxsix8h498_YqvFkkcefNYbu-DsHio/edit#slide=id.g10626504d53_0_0
https://dresa.org.au/materials/porting-the-multi-gpu-self-fluids-code-to-hipfort
In this presentation, Dr. Joseph Schoonover of Fluid Numerics LLC shares his experience porting SELF-Fluids from multi-GPU CUDA-Fortran to multi-GPU HIPFort.
The presentation covers the design principles and roadmap for SELF and the strategy to port from Nvidia-only platforms to AMD & Nvidia GPUs. Also discussed are the hurdles encountered along the way and considerations for developing multi-GPU accelerated applications in Fortran.
SELF is an object-oriented Fortran library that supports the implementation of Spectral Element Methods (SEM) for solving partial differential equations. SELF-Fluids is an implementation of SELF that solves the compressible Navier-Stokes equations on CPU-only and GPU-accelerated compute platforms using the Discontinuous Galerkin Spectral Element Method. The SELF API is designed around the assumption that SEM developers and researchers need to compute derivatives in 1-D, and divergence, gradient, and curl in 2-D and 3-D, on scalar, vector, and tensor functions using spectral collocation, continuous Galerkin, and discontinuous Galerkin spectral element methods.
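To give a flavour of the collocation-derivative building block described above, the NumPy sketch below applies a standard Chebyshev differentiation matrix to grid-point values of a function; it illustrates the spectral-collocation idea only and is not SELF's Fortran API.

import numpy as np

def cheb(N):
    # Chebyshev-Gauss-Lobatto points and the collocation differentiation matrix
    # (standard construction following Trefethen's "Spectral Methods in MATLAB").
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                      # diagonal via the negative-sum trick
    return D, x

D, x = cheb(16)
# Differentiate f(x) = sin(pi x) on [-1, 1] and compare with the exact derivative.
err = np.max(np.abs(D @ np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)))
print(f"max derivative error: {err:.2e}")  # decays spectrally fast as N grows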
The discussion is placed in the context of the exascale era, where developers face a zoo of available compute hardware. Because of this, SELF routines provide GPU acceleration through AMD’s HIP and support multi-core, multi-node, and multi-GPU platforms with MPI.
training@pawsey.org.au
Joe Schoonover
AMD, GPUs, supercomputer, supercomputing
Embracing new solutions for in-situ visualisation
These slides were used by Jean Favre, senior visualisation software engineer at CSCS (the Swiss National Supercomputing Centre), during his presentation at P'Con '21 (Pawsey's first PaCER Conference).
This material discusses the upcoming release of ParaView v5.10, a leading scientific visualisation...
Keywords: ParaView, GPUs, supercomputer, supercomputing, visualisation, data visualisation
Resource type: presentation
Embracing new solutions for in-situ visualisation
https://github.com/jfavre/InSitu/blob/master/InSitu-Revisited.pdf
https://dresa.org.au/materials/embracing-new-solutions-for-in-situ-visualisation
These slides were used by Jean Favre, senior visualisation software engineer at CSCS (the Swiss National Supercomputing Centre), during his presentation at P'Con '21 (Pawsey's first PaCER Conference).
This material discusses the upcoming release of ParaView v5.10, a leading scientific visualisation application. In this release, ParaView consolidates its implementation of the Catalyst API, a specification developed for simulations and scientific data producers to analyse and visualise data in situ.
The material reviews some of the terminology and issues of different in-situ visualisation scenarios, then surveys early data adaptors for tight coupling of simulations and visualisation solutions. This is followed by an introduction to Conduit, an intuitive model for describing hierarchical scientific data. Both ParaView-Catalyst and Ascent use Conduit’s Mesh Blueprint, a set of conventions for describing computational simulation meshes.
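As a rough sketch of the kind of hierarchy the Mesh Blueprint conventions describe, the fragment below lays out a small uniform mesh with one vertex-centred field as a plain Python dictionary. It is a schematic of the conventions only, using made-up names, and is not a call into the Conduit or Catalyst APIs (real code would populate a conduit Node instead).

# Schematic, Blueprint-style description of an 11 x 11 uniform mesh.
mesh = {
    "coordsets": {
        "coords": {
            "type": "uniform",
            "dims": {"i": 11, "j": 11},        # vertex counts per axis
            "origin": {"x": 0.0, "y": 0.0},
            "spacing": {"dx": 0.1, "dy": 0.1},
        }
    },
    "topologies": {
        "topo": {"type": "uniform", "coordset": "coords"}
    },
    "fields": {
        "pressure": {                           # hypothetical simulation field
            "association": "vertex",
            "topology": "topo",
            "values": [0.0] * (11 * 11),
        }
    },
}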
Finally, the material presents CSCS’s early experience in adopting ParaView-Catalyst and Ascent via two concrete examples of instrumenting proxy numerical applications.
training@pawsey.org.au
Jean Favre
ParaView, GPUs, supercomputer, supercomputing, visualisation, data visualisation