20 materials found

Keywords: R software or HPC


AWS Ramp-Up Guide: Academic Research

AWS Ramp-Up Guides offer a variety of resources to help you build your skills and knowledge of the AWS Cloud. Each guide features carefully selected digital training, classroom courses, videos, whitepapers, certifications, and more. AWS now offers four ramp-up guides that help academic...

Keywords: Machine learning, AWS, cloud, cloud computing, training material, HPC training, HPC, training registry, training partnerships

AWS Ramp-Up Guide: Academic Research https://dresa.org.au/materials/aws-ramp-up-guide-academic-research

AWS Ramp-Up Guides offer a variety of resources to help you build your skills and knowledge of the AWS Cloud. Each guide features carefully selected digital training, classroom courses, videos, whitepapers, certifications, and more. AWS now offers four ramp-up guides that help academic researchers who use AI, ML, Generative AI, and HPC in their research activities, as well as covering the essential AWS knowledge for statistician researchers and research IT professionals. The guides help learners decide where to start, and how to navigate, their learning journey. Some resources will be more relevant than others depending on each learner’s specific research tasks.

AI, ML, Generative AI ramp-up guide (page 2) is for academic researchers who are exploring AWS AI, ML, and Generative AI tools to improve efficiency and productivity in their research tasks. The guide introduces seven components on AI and ML and ten components on Generative AI. It starts with an introduction to AI and covers AWS AI/ML services such as Amazon SageMaker. The Generative AI content covers topics such as planning a Generative AI project; responsible AI practices; and security, compliance, and governance for AI solutions. It also covers how to get started with Amazon Bedrock. Recommended prerequisites: basic understanding of Python.

High Performance Computing ramp-up guide (page 3) is designed for academic researchers who seek to use HPC on AWS. It introduces eleven components essential to High Performance Computing on AWS, starting with an overview of HPC on AWS and continuing with topics including AWS ParallelCluster and research HPC workloads on AWS Batch. Recommended prerequisites: complete AWS Cloud Essentials.

Statistician Researcher ramp-up guide (page 4) is catered specifically to researchers in the fields of statistics and quantitative analysis. It covers topics such as building with Amazon Redshift clusters, getting started with Amazon EMR, machine learning for data scientists, authoring visual analytics using Amazon QuickSight, batch analytics on AWS, and Amazon Lightsail for Research. Recommended prerequisites: complete AWS Cloud Essentials.

Research IT ramp-up guide (page 5) is an extension of the Foundational Researcher Learning Plan, and enables research IT leaders and professionals to dive deeper into specific topics: fundamentals, management capabilities and implementing guardrails, cost optimisation for research workloads, platforms for research and research partners, and AWS Landing Zone and AWS Control Tower for research. Recommended prerequisites: Foundational Researcher Learning Plan.

emmarrig@amazon.com

Keywords: Machine learning, AWS, cloud, cloud computing, training material, HPC training, HPC, training registry, training partnerships
Tutorials to learn how to use STAN

Stan tutorials offer links to exceptional tutorial papers, videos and statistics resources for learning Bayesian statistical methods and applied statistics.

Keywords: Statistics, applied statistics, Bayesian statistics, R software, Python, MATLAB

Tutorials to learn how to use STAN https://dresa.org.au/materials/tutorials-to-learn-how-to-use-stan

Stan tutorials offer links to exceptional tutorial papers, videos and statistics resources for learning Bayesian statistical methods and applied statistics.

https://mc-stan.org/about/team/

Keywords: Statistics, applied statistics, Bayesian statistics, R software, Python, MATLAB
Species Distribution Modelling in R

This set of scripts and videos provides an introduction to running SDMs in R and includes some steps to consider that go beyond what's available in the EcoCommons SDM point-and-click tools.

Five videos include: 1. An introduction to SDM in R, 2. occurrence data, 3. environmental data, 4. fitting...

Keywords: Species Distribution Modelling, Ecology, R software, EcoCommons

Species Distribution Modelling in R https://dresa.org.au/materials/species-distribution-modelling-in-r

This set of scripts and videos provides an introduction to running SDMs in R and includes some steps to consider that go beyond what's available in the EcoCommons SDM point-and-click tools.

Five videos include: 1. An introduction to SDM in R, 2. occurrence data, 3. environmental data, 4. fitting your model, 5. model evaluation.

Scripts and files are available here: https://github.com/EcoCommons-Australia/educational_material/tree/main/SDMs_in_R/Scripts
Scripts for all four modules are here: https://www.ecocommons.org.au/wp-content/uploads/EcoCommons_steps_1_to_4.html

https://www.ecocommons.org.au/contact/

Keywords: Species Distribution Modelling, Ecology, R software, EcoCommons
Audience: ugrad mbr phd
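As an orientation to the workflow the five videos cover (occurrence data, environmental data, model fitting, evaluation), here is a minimal sketch in base R. It fits a simple presence/absence GLM against simulated environmental covariates and checks accuracy on held-out sites; it is illustrative only and is not taken from the EcoCommons scripts.

```r
# Simulate occurrence records and two environmental covariates
set.seed(42)
n   <- 500
env <- data.frame(temp = rnorm(n, 20, 5), rain = rnorm(n, 1000, 200))
p   <- plogis(-8 + 0.4 * env$temp)   # presence probability rises with temp
occ <- rbinom(n, 1, p)

# Fit a logistic-regression SDM on a 70% training split
train <- sample(n, 0.7 * n)
model <- glm(occ[train] ~ temp + rain, data = env[train, ],
             family = binomial)

# Predict suitability for held-out sites and report simple accuracy
pred <- predict(model, newdata = env[-train, ], type = "response")
mean((pred > 0.5) == occ[-train])
```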
WEBINAR: Where to go when your bioinformatics outgrows your compute

This record includes training materials associated with the Australian BioCommons webinar ‘Where to go when your bioinformatics outgrows your compute’. This webinar took place on 19 August 2021.

Bioinformatics analyses are often complex, requiring multiple software tools and specialised compute...

Keywords: Computational Biology, Bioinformatics, High performance computing, HPC, Galaxy Australia, Nectar Research Cloud, Pawsey Supercomputing Centre, NCI, NCMAS, Cloud computing

WEBINAR: Where to go when your bioinformatics outgrows your compute https://dresa.org.au/materials/webinar-where-to-go-when-your-bioinformatics-outgrows-your-compute-7a5a0ff8-8f4f-4fd0-af20-a88d515a6554

This record includes training materials associated with the Australian BioCommons webinar ‘Where to go when your bioinformatics outgrows your compute’. This webinar took place on 19 August 2021.

Bioinformatics analyses are often complex, requiring multiple software tools and specialised compute resources. “I don’t know what compute resources I will need”, “My analysis won’t run and I don’t know why” and “Just getting it to work” are common pain points for researchers. In this webinar, you will learn how to understand the compute requirements for your bioinformatics workflows. You will also hear about ways of accessing compute that suit your needs as an Australian researcher, including Galaxy Australia and the cloud and high performance computing services offered by the Australian Research Data Commons, the National Computational Infrastructure (NCI) and Pawsey. We also describe bioinformatics and computing support services available to Australian researchers. This webinar was jointly organised with the Sydney Informatics Hub at the University of Sydney.

Materials are shared under a Creative Commons Attribution 4.0 International agreement unless otherwise specified and were current at the time of the event.

Files and materials included in this record:
- Event metadata (PDF): Information about the event including description, event URL, learning objectives, prerequisites, technical requirements etc.
- Index of training materials (PDF): List and description of all materials associated with this event including the name, format, location and a brief description of each file.
- Where to go when your bioinformatics outgrows your compute - slides (PDF and PPTX): Slides presented during the webinar.
- Australian research computing resources cheat sheet (PDF): A list of resources and useful links mentioned during the webinar.

Materials shared elsewhere:
A recording of the webinar is available on the Australian BioCommons YouTube Channel: https://youtu.be/hNTbngSc-W0

Melissa Burke (melissa@biocommons.org.au)

Keywords: Computational Biology, Bioinformatics, High performance computing, HPC, Galaxy Australia, Nectar Research Cloud, Pawsey Supercomputing Centre, NCI, NCMAS, Cloud computing
WEBINAR: High performance bioinformatics: submitting your best NCMAS application

This record includes training materials associated with the Australian BioCommons webinar ‘High performance bioinformatics: submitting your best NCMAS application’. This webinar took place on 20 August 2021.

Bioinformaticians are increasingly turning to specialised compute infrastructure and...

Keywords: Computational Biology, Bioinformatics, High Performance Computing, HPC, NCMAS

WEBINAR: High performance bioinformatics: submitting your best NCMAS application https://dresa.org.au/materials/webinar-high-performance-bioinformatics-submitting-your-best-ncmas-application-ee80822f-74ac-41af-a5a4-e162c10e6d78

This record includes training materials associated with the Australian BioCommons webinar ‘High performance bioinformatics: submitting your best NCMAS application’. This webinar took place on 20 August 2021.

Bioinformaticians are increasingly turning to specialised compute infrastructure and efficient, scalable workflows as their research becomes more data intensive. Australian researchers who require extensive compute resources to process large datasets can apply for access to national high performance computing facilities (e.g. Pawsey and NCI) to power their research through the National Computational Merit Allocation Scheme (NCMAS). NCMAS is a competitive, merit-based scheme and requires applicants to carefully consider how the compute infrastructure and workflows will be applied. This webinar provides life science researchers with insights into what makes a strong NCMAS application, with a focus on the technical assessment, and how to design and present effective and efficient bioinformatic workflows for the various national compute facilities. It was followed by a short Q&A session.

Materials are shared under a Creative Commons Attribution 4.0 International agreement unless otherwise specified and were current at the time of the event.

Files and materials included in this record:
- Event metadata (PDF): Information about the event including description, event URL, learning objectives, prerequisites, technical requirements etc.
- Index of training materials (PDF): List and description of all materials associated with this event including the name, format, location and a brief description of each file.
- High performance bioinformatics: submitting your best NCMAS application - slides (PDF and PPTX): Slides presented during the webinar.

Materials shared elsewhere:
A recording of the webinar is available on the Australian BioCommons YouTube Channel: https://youtu.be/HeFGjguwS0Y

Melissa Burke (melissa@biocommons.org.au)

Keywords: Computational Biology, Bioinformatics, High Performance Computing, HPC, NCMAS
WORKSHOP: Single cell RNAseq analysis in R

This record includes training materials associated with the Australian BioCommons workshop ‘Single cell RNAseq analysis in R’. This workshop took place over two 3.5-hour sessions on 22 and 23 August 2022.

Event description

Analysis and interpretation of single cell RNAseq (scRNAseq) data...

Keywords: Bioinformatics, Analysis, Transcriptomics, R software, Single cell RNAseq, scRNAseq

WORKSHOP: Single cell RNAseq analysis in R https://dresa.org.au/materials/workshop-single-cell-rnaseq-analysis-in-r-4f60b82d-2f1e-4021-9569-6955878dd945

This record includes training materials associated with the Australian BioCommons workshop ‘Single cell RNAseq analysis in R’. This workshop took place over two 3.5-hour sessions on 22 and 23 August 2022.

Event description

Analysis and interpretation of single cell RNAseq (scRNAseq) data requires dedicated workflows. In this hands-on workshop we will show you how to perform single cell analysis using Seurat - an R package for QC, analysis, and exploration of single-cell RNAseq data. We will discuss the ‘why’ behind each step and cover reading in the count data, quality control, filtering, normalisation, clustering, UMAP layout and identification of cluster markers. We will also explore various ways of visualising single cell expression data.

This workshop is presented by the Australian BioCommons and Queensland Cyber Infrastructure Foundation (QCIF) with the assistance of a network of facilitators from the national Bioinformatics Training Cooperative.

Materials are shared under a Creative Commons Attribution 4.0 International agreement unless otherwise specified and were current at the time of the event.

Files and materials included in this record:
- Event metadata (PDF): Information about the event including description, event URL, learning objectives, prerequisites, technical requirements etc.
- Index of training materials (PDF): List and description of all materials associated with this event including the name, format, location and a brief description of each file.
- scRNAseq_Slides (PDF): Slides used to introduce topics.
- scRNAseq_Schedule (PDF): A breakdown of the topics and timings for the workshop.
- scRNAseq_Resources (PDF): A list of resources recommended by trainers and participants.
- scRNAseq_QandA (PDF): Archive of questions and their answers from the workshop Slack Channel.

Materials shared elsewhere:
This workshop follows the tutorial ‘scRNAseq Analysis in R with Seurat’: https://swbioinf.github.io/scRNAseqInR_Doco/index.html
This material is based on the introductory Guided Clustering Tutorial from Seurat, and also draws on a similar workshop, Single-Cell-Workshop, held by the Monash Bioinformatics Platform.

Melissa Burke (melissa@biocommons.org.au)

Keywords: Bioinformatics, Analysis, Transcriptomics, R software, Single cell RNAseq, scRNAseq
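For a sense of the workflow the workshop steps through (reading counts, QC, filtering, normalisation, clustering, UMAP, cluster markers), here is a minimal sketch using Seurat. The input directory is a placeholder and the thresholds are common defaults, not the workshop's own settings.

```r
library(Seurat)

# Read 10X count data ("filtered_feature_bc_matrix/" is a placeholder)
counts <- Read10X("filtered_feature_bc_matrix/")
sc <- CreateSeuratObject(counts, min.cells = 3, min.features = 200)

# QC and filtering: flag mitochondrial reads, drop low-quality cells
sc[["percent.mt"]] <- PercentageFeatureSet(sc, pattern = "^MT-")
sc <- subset(sc, subset = nFeature_RNA > 200 & percent.mt < 5)

# Normalisation and dimensionality reduction
sc <- NormalizeData(sc)
sc <- FindVariableFeatures(sc)
sc <- ScaleData(sc)
sc <- RunPCA(sc)

# Clustering, UMAP layout and cluster markers
sc <- FindNeighbors(sc, dims = 1:10)
sc <- FindClusters(sc, resolution = 0.5)
sc <- RunUMAP(sc, dims = 1:10)
DimPlot(sc, reduction = "umap")
markers <- FindAllMarkers(sc, only.pos = TRUE)
```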
WORKSHOP: R: fundamental skills for biologists

This record includes training materials associated with the Australian BioCommons workshop ‘R: fundamental skills for biologists’. This workshop took place over four three-hour sessions on 1, 8, 15 and 22 June 2022.

 

Event description

Biologists need data analysis skills to be able to...

Keywords: Bioinformatics, Analysis, Statistics, R software, RStudio, Data visualisation

WORKSHOP: R: fundamental skills for biologists https://dresa.org.au/materials/workshop-r-fundamental-skills-for-biologists-81aa00db-63ad-4962-a7ac-b885bf9f676b

This record includes training materials associated with the Australian BioCommons workshop ‘R: fundamental skills for biologists’. This workshop took place over four three-hour sessions on 1, 8, 15 and 22 June 2022.

Event description

Biologists need data analysis skills to be able to interpret, visualise and communicate their research results. While Excel can cover some data analysis needs, there is a better choice, particularly for large and complex datasets. R is a free, open-source software and programming language that enables data exploration, statistical analysis, visualisation and more. The large variety of R packages available for analysing biological data makes it a robust and flexible option for data of all shapes and sizes. Getting started can be a little daunting for those without a background in statistics and programming. In this workshop we will equip you with the foundations for getting the most out of R and RStudio, an interactive way of structuring and keeping track of your work in R. Using biological data from a model of influenza infection, you will learn how to efficiently and reproducibly organise, read, wrangle, analyse, visualise and generate reports from your data in R.

Topics covered in this workshop include:
- Spreadsheets, organising data and first steps with R
- Manipulating and analysing data with dplyr
- Data visualisation
- Summarized experiments and getting started with Bioconductor

This workshop is presented by the Australian BioCommons and Saskia Freytag from WEHI with the assistance of a network of facilitators from the national Bioinformatics Training Cooperative.

Materials are shared under a Creative Commons Attribution 4.0 International agreement unless otherwise specified and were current at the time of the event.

Files and materials included in this record:
- Event metadata (PDF): Information about the event including description, event URL, learning objectives, prerequisites, technical requirements etc.
- Index of training materials (PDF): List and description of all materials associated with this event including the name, format, location and a brief description of each file.
- Schedule (PDF): A breakdown of the topics and timings for the workshop.
- Recommended resources (PDF): A list of resources recommended by trainers and participants.
- Q_and_A (PDF): Archive of questions and their answers from the workshop Slack Channel.

Materials shared elsewhere:
This workshop follows the publicly available tutorial ‘Introduction to data analysis with R and Bioconductor’: https://saskiafreytag.github.io/biocommons-r-intro/
This is derived from material produced as part of The Carpentries Incubator project: https://carpentries-incubator.github.io/bioc-intro/

Melissa Burke (melissa@biocommons.org.au)

Keywords: Bioinformatics, Analysis, Statistics, R software, RStudio, Data visualisation
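To give a flavour of the ‘Manipulating and analysing data with dplyr’ topic, here is a small self-contained example of a typical dplyr pipeline; the toy data frame is invented (the workshop itself uses an influenza infection dataset).

```r
library(dplyr)

# Toy expression counts standing in for the workshop's dataset
expr <- data.frame(
  gene      = rep(c("Ifit1", "Irf7", "Actb"), each = 4),
  condition = rep(c("infected", "mock"), times = 6),
  count     = c(120, 15, 98, 12, 80, 9, 75, 11, 500, 480, 510, 505)
)

# Filter rows, group, summarise, and sort: the core dplyr verbs
expr %>%
  filter(count > 10) %>%
  group_by(gene, condition) %>%
  summarise(mean_count = mean(count), .groups = "drop") %>%
  arrange(desc(mean_count))
```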
WEBINAR: Pro tips for scaling bioinformatics workflows to HPC

This record includes training materials associated with the Australian BioCommons webinar ‘Pro tips for scaling bioinformatics workflows to HPC’. This webinar took place on 31 May 2023.

Event description 

High Performance Computing (HPC) infrastructures offer the computational scale and...

Keywords: Bioinformatics, Workflows, HPC, High Performance Computing

WEBINAR: Pro tips for scaling bioinformatics workflows to HPC https://dresa.org.au/materials/webinar-pro-tips-for-scaling-bioinformatics-workflows-to-hpc-9f2a8b90-88da-433b-83b2-b1ab262dd9df

This record includes training materials associated with the Australian BioCommons webinar ‘Pro tips for scaling bioinformatics workflows to HPC’. This webinar took place on 31 May 2023.

Event description

High Performance Computing (HPC) infrastructures offer the computational scale and efficiency that life scientists need to handle complex biological datasets and multi-step computational workflows. But scaling workflows to HPC from smaller, more familiar computational infrastructures brings with it new jargon, expectations, and processes to learn. To make the most of HPC resources, bioinformatics workflows need to be designed for distributed computing environments and to carefully manage the varying resource requirements and data scales typical of biological analyses.

In this webinar, Dr Georgina Samaha from the Sydney Informatics Hub, Dr Matthew Downton from the National Computational Infrastructure (NCI) and Dr Sarah Beecroft from the Pawsey Supercomputing Research Centre help you navigate the world of HPC for running and developing bioinformatics workflows. They explain when you should take your workflows to HPC and highlight the architectural features you should make the most of to scale your analyses once you’re there. You’ll hear pro tips for dealing with common pain points like software installation, optimising for parallel computing and resource management, and will find out how to get access to Australia’s national HPC infrastructures at NCI and Pawsey.

Materials

Materials are shared under a Creative Commons Attribution 4.0 International agreement unless otherwise specified and were current at the time of the event.

Files and materials included in this record:
- Event metadata (PDF): Information about the event including description, event URL, learning objectives, prerequisites, technical requirements etc.
- Index of training materials (PDF): List and description of all materials associated with this event including the name, format, location and a brief description of each file.
- Pro-tips_HPC_Slides: A PDF copy of the slides presented during the webinar.

Materials shared elsewhere:
A recording of this webinar is available on the Australian BioCommons YouTube Channel: https://youtu.be/YKJDRXCmGMo

Melissa Burke (melissa@biocommons.org.au)

Keywords: Bioinformatics, Workflows, HPC, High Performance Computing
WORKSHOP: Working with genomics sequences and features in R with Bioconductor

This record includes training materials associated with the Australian BioCommons workshop ‘Working with genomics sequences and features in R with Bioconductor’. This workshop took place on 23 September 2021.

Workshop description

Explore the many useful functions that the Bioconductor...

Keywords: R software, Bioconductor, Bioinformatics, Analysis, Genomics, Sequence analysis

WORKSHOP: Working with genomics sequences and features in R with Bioconductor https://dresa.org.au/materials/workshop-working-with-genomics-sequences-and-features-in-r-with-bioconductor-8399bf0d-1e9e-48f3-a840-3f70f23254bb

This record includes training materials associated with the Australian BioCommons workshop ‘Working with genomics sequences and features in R with Bioconductor’. This workshop took place on 23 September 2021.

Workshop description

Explore the many useful functions that the Bioconductor environment offers for working with genomic data and other biological sequences. DNA and proteins are often represented as files containing strings of nucleic acids or amino acids. They are associated with text files that provide additional contextual information such as genome annotations. This workshop provides hands-on experience with tools, software and packages available in R via Bioconductor for manipulating, exploring and extracting information from biological sequences and annotation files. We will look at tools for working with some commonly used file formats including FASTA, GFF3 and GTF; methods for identifying regions of interest; and easy methods for obtaining data packages such as genome assemblies.

This workshop is presented by the Australian BioCommons and Monash Bioinformatics Platform with the assistance of a network of facilitators from the national Bioinformatics Training Cooperative.

Materials are shared under a Creative Commons Attribution 4.0 International agreement unless otherwise specified and were current at the time of the event.

Files and materials included in this record:
- Event metadata (PDF): Information about the event including description, event URL, learning objectives, prerequisites, technical requirements etc.
- Index of training materials (PDF): List and description of all materials associated with this event including the name, format, location and a brief description of each file.
- Schedule (PDF): Schedule for the workshop providing a breakdown of topics and timings.

Materials shared elsewhere:
This workshop follows the tutorial ‘Working with DNA sequences and features in R with Bioconductor - version 2’ developed for the Monash Bioinformatics Platform and Monash Data Fluency by Paul Harrison: https://monashdatafluency.github.io/r-bioc-2/

Melissa Burke (melissa@biocommons.org.au)

Keywords: R software, Bioconductor, Bioinformatics, Analysis, Genomics, Sequence analysis
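As a taste of the approach the workshop teaches, the sketch below reads a FASTA file and a GFF3 annotation, identifies regions of interest and computes GC content. The file and chromosome names are placeholders; the packages used (Biostrings, rtracklayer, GenomicRanges) are standard Bioconductor components, but this is not the workshop's own tutorial code.

```r
library(Biostrings)      # biological sequences
library(rtracklayer)     # importing annotation formats such as GFF3/GTF
library(GenomicRanges)   # ranges of genomic features

# Read sequences and annotations (placeholder file names)
seqs <- readDNAStringSet("genome.fasta")
anno <- import("genes.gff3")

# Regions of interest: exons on chromosome "chr1"
exons <- anno[anno$type == "exon"]
exons <- exons[seqnames(exons) == "chr1"]

# Extract the first exon's sequence and compute its GC content
chr1     <- seqs[["chr1"]]
exon_seq <- subseq(chr1, start(exons)[1], end(exons)[1])
letterFrequency(exon_seq, "GC", as.prob = TRUE)
```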
Pawsey: AWS Quantum 101 Using Amazon Braket

Join us as AWS Quantum Specialists introduce quantum simulators and gate-based quantum computers, before turning to more advanced topics.

Keywords: Pawsey Supercomputing Centre, AWS, quantum, HPC

Pawsey: AWS Quantum 101 Using Amazon Braket https://dresa.org.au/materials/pawsey-aws-quantum-101-using-amazon-braket

Join us as AWS Quantum Specialists introduce quantum simulators and gate-based quantum computers, before turning to more advanced topics.

training@pawsey.org.au

Keywords: Pawsey Supercomputing Centre, AWS, quantum, HPC
PCon Preparing applications for El Capitan and beyond

As Lawrence Livermore National Laboratory (LLNL) prepares to stand up its next supercomputer, El Capitan, application teams prepare to pivot to another GPU architecture.

This talk presents how the LLNL application teams made the transition from distributed-memory, CPU-only architectures to...

Keywords: GPUs, supercomputing, HPC, PaCER

PCon Preparing applications for El Capitan and beyond https://dresa.org.au/materials/pcon-preparing-applications-for-el-capitan-and-beyond

As Lawrence Livermore National Laboratory (LLNL) prepares to stand up its next supercomputer, El Capitan, application teams prepare to pivot to another GPU architecture.

This talk presents how the LLNL application teams made the transition from distributed-memory, CPU-only architectures to GPUs. They share institutional best practices. They discuss new open-source software products as tools for porting and profiling applications and as avenues for collaboration across the computational science community. Join LLNL's Erik Draeger and Jane Herriman, who presented this talk at Pawsey's PaCER Conference in September 2023.

training@pawsey.org.au Pawsey Supercomputing Research Centre

Keywords: GPUs, supercomputing, HPC, PaCER
Audience: masters phd researcher ecr support professional ugrad
VOSON Lab Code Blog

The VOSON Lab Code Blog is a space to share methods, tips, examples and code. Blog posts provide techniques to construct and analyse networks from various API and other online data sources, using the VOSON open-source software and other R-based packages.

Keywords: visualisation, Data analysis, data collections, R software, Social network analysis, social media data, Computational Social Science, quantitative, Text Analytics

Resource type: tutorial, other

VOSON Lab Code Blog https://dresa.org.au/materials/voson-lab-code-blog

The VOSON Lab Code Blog is a space to share methods, tips, examples and code. Blog posts provide techniques to construct and analyse networks from various API and other online data sources, using the VOSON open-source software and other R-based packages.

robert.ackland@anu.edu.au

Keywords: visualisation, Data analysis, data collections, R software, Social network analysis, social media data, Computational Social Science, quantitative, Text Analytics
Audience: researcher support phd masters
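As a hedged illustration of the kind of R workflow the posts describe, the snippet below builds a small directed network from an edge list and computes a basic measure with igraph, one of the R packages the VOSON tools build on; the accounts and ties are invented.

```r
library(igraph)

# Toy edge list standing in for ties collected from an online source
edges <- data.frame(
  from = c("alice", "bob", "carol", "alice", "dave"),
  to   = c("bob", "carol", "alice", "dave", "bob")
)

g <- graph_from_data_frame(edges, directed = TRUE)

degree(g, mode = "in")      # in-degree of each account
plot(g, vertex.size = 25)   # quick network visualisation
```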
From PC to Cloud or High Performance Computing

Most of you would have heard of Cloud and High Performance Computing (HPC), or you may already be using them. HPC is not the same as cloud computing. The two technologies differ in a number of ways, and have some similarities as well.

We may refer to both types as “large scale computing” – but...

Keywords: HPC

From PC to Cloud or High Performance Computing https://dresa.org.au/materials/from-pc-to-cloud-or-high-performance-computing

Most of you would have heard of Cloud and High Performance Computing (HPC), or you may already be using them. HPC is not the same as cloud computing. The two technologies differ in a number of ways, and have some similarities as well. We may refer to both types as “large scale computing” – but what is the difference? Both systems target scalability of computing, but in different ways. This webinar gives a good overview for researchers thinking of moving from their local computer to the Cloud or a High Performance Computing cluster.

Topics:
- Introduction
- HPC vs Cloud computing
- When to use HPC
- When to use the Cloud
- The Cloud – Pros and Cons
- HPC – Pros and Cons

The webinar has no prerequisites.

training@intersect.org.au

Keywords: HPC
Getting started with HPC using PBS Pro

Is your computer’s limited power throttling your research ambitions? Are your analysis scripts pushing your laptop’s processor to its limits? Is your software crashing because you’ve run out of memory? Would you like to unleash the power of the Unix command line to automate and run your analysis...

Keywords: HPC

Getting started with HPC using PBS Pro https://dresa.org.au/materials/getting-started-with-hpc-using-pbs-pro

Is your computer’s limited power throttling your research ambitions? Are your analysis scripts pushing your laptop’s processor to its limits? Is your software crashing because you’ve run out of memory? Would you like to unleash the power of the Unix command line to automate and run your analysis on supercomputers that you can access for free? High-Performance Computing (HPC) allows you to accomplish your analysis faster by using many parallel CPUs and huge amounts of memory simultaneously. This course provides a hands-on introduction to running software on HPC infrastructure using PBS Pro.

- Connect to an HPC cluster
- Use the Unix command line to operate a remote computer and create job scripts
- Submit and manage jobs on a cluster using a scheduler
- Transfer files to and from a remote computer
- Use software through environment modules
- Use parallelisation to speed up data analysis
- Access the facilities available to you as a researcher

This is the PBS Pro version of the Getting Started with HPC course. This course assumes basic familiarity with the Bash command line environment found on GNU/Linux and other Unix-like environments. To come up to speed, consider taking our “Unix Shell and Command Line Basics” course.

training@intersect.org.au

Keywords: HPC
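For a concrete picture of the "create job scripts" and "submit and manage jobs" outcomes listed above, here is a minimal PBS Pro job script; the resource requests, module name and script name are illustrative placeholders, not course material.

```bash
#!/bin/bash
# Minimal PBS Pro job script: 1 node, 4 CPUs, 8 GB memory, 1 hour
#PBS -N my_analysis
#PBS -l select=1:ncpus=4:mem=8gb
#PBS -l walltime=01:00:00

cd "$PBS_O_WORKDIR"        # start in the directory the job was submitted from
module load python/3.11    # hypothetical environment module
python analyse.py          # hypothetical analysis script
```

A job like this is submitted with qsub, monitored with qstat -u $USER, and removed with qdel followed by the job ID.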
Getting started with HPC using Slurm

Is your computer’s limited power throttling your research ambitions? Are your analysis scripts pushing your laptop’s processor to its limits? Is your software crashing because you’ve run out of memory? Would you like to unleash the power of the Unix command line to automate and run your analysis...

Keywords: HPC

Getting started with HPC using Slurm https://dresa.org.au/materials/getting-started-with-hpc-using-slurm

Is your computer’s limited power throttling your research ambitions? Are your analysis scripts pushing your laptop’s processor to its limits? Is your software crashing because you’ve run out of memory? Would you like to unleash the power of the Unix command line to automate and run your analysis on supercomputers that you can access for free? High-Performance Computing (HPC) allows you to accomplish your analysis faster by using many parallel CPUs and huge amounts of memory simultaneously. This course provides a hands-on introduction to running software on HPC infrastructure using Slurm.

- Connect to an HPC cluster
- Use the Unix command line to operate a remote computer and create job scripts
- Submit and manage jobs on a cluster using a scheduler
- Transfer files to and from a remote computer
- Use software through environment modules
- Use parallelisation to speed up data analysis
- Access the facilities available to you as a researcher

This is the Slurm version of the Getting Started with HPC course. This course assumes basic familiarity with the Bash command line environment found on GNU/Linux and other Unix-like environments. To come up to speed, consider taking our “Unix Shell and Command Line Basics” course.

training@intersect.org.au

Keywords: HPC
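The Slurm equivalent of the same workflow looks like this; again the resource requests, module name and script name are illustrative placeholders.

```bash
#!/bin/bash
# Minimal Slurm job script: 1 task, 4 CPUs, 8 GB memory, 1 hour
#SBATCH --job-name=my_analysis
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00

module load python/3.11    # hypothetical environment module
srun python analyse.py     # hypothetical analysis script
```

Submission uses sbatch, the queue is inspected with squeue -u $USER, and jobs are cancelled with scancel followed by the job ID.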
Parallel Programming for HPC

You have written, compiled and run functioning programs in C and/or Fortran. You know how HPC works and you’ve submitted batch jobs.

Now you want to move from writing single-threaded programs into the parallel programming paradigm, so you can truly harness the full power of High Performance...

Keywords: HPC

Parallel Programming for HPC https://dresa.org.au/materials/parallel-programming-for-hpc

You have written, compiled and run functioning programs in C and/or Fortran. You know how HPC works and you’ve submitted batch jobs. Now you want to move from writing single-threaded programs into the parallel programming paradigm, so you can truly harness the full power of High Performance Computing.

- OpenMP (Open Multi-Processing): a widespread method for shared memory programming
- MPI (Message Passing Interface): a leading distributed memory programming model

To do this course you need to have:
- A good working knowledge of HPC. Consider taking our Getting Started with HPC using PBS Pro course to come up to speed beforehand.
- Prior experience of writing programs in either C or Fortran.

training@intersect.org.au

Keywords: HPC
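To illustrate the shared-memory model that OpenMP provides (the first of the two approaches above), here is a minimal C example; the compile line in the comment assumes GCC.

```c
/* Minimal OpenMP example: parallelise a loop over shared-memory
 * threads. Compile with: gcc -fopenmp sum.c -o sum */
#include <stdio.h>
#include <omp.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;

    /* Each thread handles a slice of the iterations; the reduction
     * clause combines the per-thread partial sums safely. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++) {
        sum += 1.0 / (i + 1);
    }

    printf("threads=%d sum=%f\n", omp_get_max_threads(), sum);
    return 0;
}
```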
HPC file systems and what users need to consider for appropriate and efficient usage

Three videos on miscellaneous aspects of HPC usage - a useful reference for new users of HPC systems.

1 – General overview of different file systems that might be available on HPC. The video goes through shared file systems such as /home and /scratch, local compute node file systems (local...

Keywords: HPC, high performance computer, File systems

Resource type: video, presentation

HPC file systems and what users need to consider for appropriate and efficient usage https://dresa.org.au/materials/hpc-file-systems-and-what-users-need-to-consider-for-appropriate-and-efficient-usage

Three videos on miscellaneous aspects of HPC usage - a useful reference for new users of HPC systems.

1 – General overview of different file systems that might be available on HPC. The video goes through shared file systems such as /home and /scratch, local compute node file systems (local scratch or $TMPDIR) and storage file systems. It outlines what users need to consider if they wish to use any of these in their workflows.
2 – Overview of the different directories that might be present on HPC. These could include /home, /scratch, /opt, /lib and /lib64, /sw and others.
3 – Overview of the message-of-the-day file and the message that is displayed to users every time they log in. This displays info about general help and often current problems or upcoming outages.

QCIF Training (training@qcif.edu.au)

Keywords: HPC, high performance computer, File systems
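The usage pattern the first video's considerations typically lead to is staging work through fast node-local scratch; a hedged sketch (all paths, program and file names are illustrative):

```bash
# Stage input to node-local scratch, compute there, copy results back
cp ~/project/input.dat "$TMPDIR/"
cd "$TMPDIR"
./analyse input.dat > results.out   # hypothetical analysis program
cp results.out ~/scratch/project/   # back to the shared file system
```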
Basic Linux/Unix commands

A series of eight videos (each between 5 and 10 minutes long) following the content of the Software Carpentry workshop "The Unix Shell".

Sessions 1, 2 and 3 provide instructions on the minimal level of Linux/Unix commands recommended for new...

Keywords: HPC, high performance computer, Unix, Linux, Software Carpentry

Resource type: video, guide

Basic Linux/Unix commands https://dresa.org.au/materials/basic-linux-unix-commands

A series of eight videos (each between 5 and 10 minutes long) following the content of the Software Carpentry workshop "The Unix Shell" (https://swcarpentry.github.io/shell-novice/). Sessions 1, 2 and 3 provide instructions on the minimal level of Linux/Unix commands recommended for new users of HPC.

1 – An overview of how to find out where a user is in the filesystem, list the files there, and how to get help on Unix commands
2 – How to move around the file system and change into other directories
3 – Explains the difference between an absolute and a relative path
4 – Overview of how to create new directories, and to create and edit new files with nano
5 – How to use the vi editor to edit files
6 – Overview of file viewers available
7 – How to copy and move files and directories
8 – How to remove files and directories

Further details and exercises with solutions can be found on the Software Carpentry "The Unix Shell" page (https://swcarpentry.github.io/shell-novice/).

QCIF Training (training@qcif.edu.au)

Keywords: HPC, high performance computer, Unix, Linux, Software Carpentry
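For quick reference, the commands the early sessions introduce look like this in practice (directory and file names are invented):

```bash
pwd                       # where am I in the file system?
ls -l                     # list the files here, with details
man ls                    # get help on a Unix command
cd /scratch/project       # change directory using an absolute path
cd ../data                # relative path: up one level, then into data
mkdir results             # create a new directory
nano notes.txt            # create and edit a file with nano
cp notes.txt backup.txt   # copy a file
mv backup.txt old/        # move it into a directory
rm old/backup.txt         # remove a file
```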
Transferring files and data

A short video outlining the basics of using FileZilla to establish a secure file transfer protocol (SFTP) connection to an HPC system and use its drag-and-drop interface to transfer files between the HPC and a desktop computer.

Keywords: sftp, file transfer, HPC, high performance computer

Resource type: video, guide

Transferring files and data https://dresa.org.au/materials/transferring-files-and-data

A short video outlining the basics of using FileZilla to establish a secure file transfer protocol (SFTP) connection to an HPC system and use its drag-and-drop interface to transfer files between the HPC and a desktop computer.

QCIF Training (training@qcif.edu.au)

Keywords: sftp, file transfer, HPC, high performance computer
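FileZilla is graphical, but the same protocol is available from any terminal; a minimal equivalent session with the OpenSSH sftp client looks like this (hostname, username and file names are placeholders):

```bash
# Open an SFTP connection to the HPC login node
sftp username@hpc.example.edu.au

# Inside the sftp session:
#   put local_data.csv        upload a file to the HPC
#   get results/output.txt    download a file to the desktop
#   exit                      close the connection
```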
Connecting to HPC

A series of three short videos introducing how to use PuTTY to connect from a Windows PC to a secure HPC (high performance computing) cluster.

1 - The very basics on how to establish a connection to HPC.
2 - How to add more specific options for the connection to HPC.
3 - How to save the...

Keywords: HPC, high performance computer, ssh

Resource type: video, guide

Connecting to HPC https://dresa.org.au/materials/connecting-to-hpc

A series of three short videos introducing how to use PuTTY to connect from a Windows PC to a secure HPC (high performance computing) cluster.

1 - The very basics on how to establish a connection to HPC.
2 - How to add more specific options for the connection to HPC.
3 - How to save the details and options for a connection for future use.

QCIF Training (training@qcif.edu.au)

Keywords: HPC, high performance computer, ssh
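PuTTY is a Windows client; on macOS, Linux or recent Windows the same connection can be made with the OpenSSH client, and saved connection details (the analogue of the saved PuTTY session covered in video 3) go in ~/.ssh/config. The hostname and username below are placeholders:

```bash
# One-off connection to an HPC login node
ssh username@hpc.example.edu.au

# Saving details for future use: add an entry to ~/.ssh/config
#   Host myhpc
#       HostName hpc.example.edu.au
#       User username
# then connect with just:
ssh myhpc
```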