20 materials found

Keywords: fish or reproducibility or HPC


AWS Ramp-Up Guide: Academic Research

AWS Ramp-Up Guides offer a variety of resources to help you build your skills and knowledge of the AWS Cloud. Each guide features carefully selected digital training, classroom courses, videos, whitepapers, certifications, and more. AWS now offers four ramp-up guides that help academic...

Keywords: machine learning, AWS, cloud computing, training material, HPC training, HPC, training registry, training partnerships

AWS Ramp-Up Guide: Academic Research
https://dresa.org.au/materials/aws-ramp-up-guide-academic-research

AWS Ramp-Up Guides offer a variety of resources to help you build your skills and knowledge of the AWS Cloud. Each guide features carefully selected digital training, classroom courses, videos, whitepapers, certifications, and more. AWS now offers four ramp-up guides for academic researchers who use AI, ML, Generative AI, and HPC in their research activities, as well as guides covering the essential AWS knowledge for statistician researchers and Research IT professionals. The guides help learners decide where to start, and how to navigate, their learning journey. Some resources will be more relevant than others depending on each learner’s specific research tasks.

AI, ML, Generative AI ramp-up guide (page 2) is for academic researchers who are exploring AWS AI, ML, and Generative AI tools to improve efficiency and productivity in their research tasks. This course introduces seven components on AI and ML and ten components on Generative AI. It starts with an introduction to AI and covers AWS AI/ML services such as Amazon SageMaker. The Generative AI content covers topics such as planning a Generative AI project; responsible AI practices; security, compliance, and governance for AI solutions; and getting started with Amazon Bedrock. Recommended prerequisites: basic understanding of Python.

High Performance Computing ramp-up guide (page 3) is designed for academic researchers who seek to use HPC on AWS. This course introduces eleven components essential to High Performance Computing on AWS, starting with an overview of HPC on AWS, followed by topics including AWS ParallelCluster and research HPC workloads on AWS Batch. Recommended prerequisites: complete AWS Cloud Essentials.

Statistician Researcher ramp-up guide (page 4) caters specifically to researchers in the fields of statistics and quantitative analysis. The course covers topics such as building with Amazon Redshift clusters, getting started with Amazon EMR, machine learning for data scientists, authoring visual analytics using Amazon QuickSight, batch analytics on AWS, and Amazon Lightsail for Research. Recommended prerequisites: complete AWS Cloud Essentials.

Research IT ramp-up guide (page 5) is an extension of the Foundational Researcher Learning Plan, and enables Research IT leaders and professionals to dive deeper into specific topics: fundamentals, management capabilities and implementing guardrails, cost optimisation for research workloads, platforms for research and research partners, and AWS Landing Zone and AWS Control Tower for Research. Recommended prerequisites: Foundational Researcher Learning Plan.

Contact: emmarrig@amazon.com
7 Steps towards Reproducible Research

This workshop aims to take you further down your reproducibility path, by providing concepts and tools you can use in your everyday workflows. It is discipline and experience agnostic, and no coding experience is needed.

We will also examine how Reproducible Research builds business continuity...

Keywords: reproducibility, reproducible workflows

Resource type: full-course, tutorial

7 Steps towards Reproducible Research
https://dresa.org.au/materials/7-steps-towards-reproducible-research

This workshop aims to take you further down your reproducibility path by providing concepts and tools you can use in your everyday workflows. It is discipline and experience agnostic, and no coding experience is needed. We will also examine how reproducible research builds business continuity into your research group, how the culture in your institute’s ecosystem can affect reproducibility, and how you can identify and address risks to your knowledge. The workshop can be used as a self-paced or instructor-led course.

Contact: Amanda Miotto - a.miotto@griffith.edu.au
Audience: phd, support
WEBINAR: Where to go when your bioinformatics outgrows your compute

This record includes training materials associated with the Australian BioCommons webinar ‘Where to go when your bioinformatics outgrows your compute’. This webinar took place on 19 August 2021.

Bioinformatics analyses are often complex, requiring multiple software tools and specialised compute...

Keywords: Computational Biology, Bioinformatics, High performance computing, HPC, Galaxy Australia, Nectar Research Cloud, Pawsey Supercomputing Centre, NCI, NCMAS, Cloud computing

WEBINAR: Where to go when your bioinformatics outgrows your compute
https://dresa.org.au/materials/webinar-where-to-go-when-your-bioinformatics-outgrows-your-compute-7a5a0ff8-8f4f-4fd0-af20-a88d515a6554

This record includes training materials associated with the Australian BioCommons webinar ‘Where to go when your bioinformatics outgrows your compute’, which took place on 19 August 2021.

Bioinformatics analyses are often complex, requiring multiple software tools and specialised compute resources. “I don’t know what compute resources I will need”, “My analysis won’t run and I don’t know why” and "Just getting it to work" are common pain points for researchers. In this webinar, you will learn how to understand the compute requirements for your bioinformatics workflows. You will also hear about ways of accessing compute that suit your needs as an Australian researcher, including Galaxy Australia and the cloud and high-performance computing services offered by the Australian Research Data Commons, the National Computational Infrastructure (NCI) and Pawsey. We also describe bioinformatics and computing support services available to Australian researchers. This webinar was jointly organised with the Sydney Informatics Hub at the University of Sydney.

Materials are shared under a Creative Commons Attribution 4.0 International agreement unless otherwise specified and were current at the time of the event.

Files and materials included in this record:
- Event metadata (PDF): Information about the event including description, event URL, learning objectives, prerequisites, technical requirements etc.
- Index of training materials (PDF): List and description of all materials associated with this event, including the name, format, location and a brief description of each file.
- Where to go when your bioinformatics outgrows your compute - slides (PDF and PPTX): Slides presented during the webinar.
- Australian research computing resources cheat sheet (PDF): A list of resources and useful links mentioned during the webinar.

Materials shared elsewhere: A recording of the webinar is available on the Australian BioCommons YouTube Channel: https://youtu.be/hNTbngSc-W0

Contact: Melissa Burke (melissa@biocommons.org.au)
WEBINAR: High performance bioinformatics: submitting your best NCMAS application

This record includes training materials associated with the Australian BioCommons webinar ‘High performance bioinformatics: submitting your best NCMAS application’. This webinar took place on 20 August 2021.

Bioinformaticians are increasingly turning to specialised compute infrastructure and...

Keywords: Computational Biology, Bioinformatics, High Performance Computing, HPC, NCMAS

WEBINAR: High performance bioinformatics: submitting your best NCMAS application
https://dresa.org.au/materials/webinar-high-performance-bioinformatics-submitting-your-best-ncmas-application-ee80822f-74ac-41af-a5a4-e162c10e6d78

This record includes training materials associated with the Australian BioCommons webinar ‘High performance bioinformatics: submitting your best NCMAS application’, which took place on 20 August 2021.

Bioinformaticians are increasingly turning to specialised compute infrastructure and efficient, scalable workflows as their research becomes more data intensive. Australian researchers who require extensive compute resources to process large datasets can apply for access to national high performance computing facilities (e.g. Pawsey and NCI) to power their research through the National Computational Merit Allocation Scheme (NCMAS). NCMAS is a competitive, merit-based scheme and requires applicants to carefully consider how the compute infrastructure and workflows will be applied. This webinar provides life science researchers with insights into what makes a strong NCMAS application, with a focus on the technical assessment, and how to design and present effective and efficient bioinformatic workflows for the various national compute facilities. It is followed by a short Q&A session.

Materials are shared under a Creative Commons Attribution 4.0 International agreement unless otherwise specified and were current at the time of the event.

Files and materials included in this record:
- Event metadata (PDF): Information about the event including description, event URL, learning objectives, prerequisites, technical requirements etc.
- Index of training materials (PDF): List and description of all materials associated with this event, including the name, format, location and a brief description of each file.
- High performance bioinformatics: submitting your best NCMAS application - slides (PDF and PPTX): Slides presented during the webinar.

Materials shared elsewhere: A recording of the webinar is available on the Australian BioCommons YouTube Channel: https://youtu.be/HeFGjguwS0Y

Contact: Melissa Burke (melissa@biocommons.org.au)
WEBINAR: Pro tips for scaling bioinformatics workflows to HPC

This record includes training materials associated with the Australian BioCommons webinar ‘Pro tips for scaling bioinformatics workflows to HPC’. This webinar took place on 31 May 2023.

Event description 

High Performance Computing (HPC) infrastructures offer the computational scale and...

Keywords: Bioinformatics, Workflows, HPC, High Performance Computing

WEBINAR: Pro tips for scaling bioinformatics workflows to HPC
https://dresa.org.au/materials/webinar-pro-tips-for-scaling-bioinformatics-workflows-to-hpc-9f2a8b90-88da-433b-83b2-b1ab262dd9df

This record includes training materials associated with the Australian BioCommons webinar ‘Pro tips for scaling bioinformatics workflows to HPC’, which took place on 31 May 2023.

Event description:
High Performance Computing (HPC) infrastructures offer the computational scale and efficiency that life scientists need to handle complex biological datasets and multi-step computational workflows. But scaling workflows to HPC from smaller, more familiar computational infrastructures brings with it new jargon, expectations, and processes to learn. To make the most of HPC resources, bioinformatics workflows need to be designed for distributed computing environments and carefully manage the varying resource requirements and data scale associated with biological analyses.

In this webinar, Dr Georgina Samaha from the Sydney Informatics Hub, Dr Matthew Downton from the National Computational Infrastructure (NCI) and Dr Sarah Beecroft from the Pawsey Supercomputing Research Centre help you navigate the world of HPC for running and developing bioinformatics workflows. They explain when you should take your workflows to HPC and highlight the architectural features you should make the most of to scale your analyses once you’re there. You’ll hear pro tips for dealing with common pain points like software installation, optimising for parallel computing and resource management, and will find out how to get access to Australia’s national HPC infrastructures at NCI and Pawsey.

Materials are shared under a Creative Commons Attribution 4.0 International agreement unless otherwise specified and were current at the time of the event.

Files and materials included in this record:
- Event metadata (PDF): Information about the event including description, event URL, learning objectives, prerequisites, technical requirements etc.
- Index of training materials (PDF): List and description of all materials associated with this event, including the name, format, location and a brief description of each file.
- Pro-tips_HPC_Slides: A PDF copy of the slides presented during the webinar.

Materials shared elsewhere: A recording of this webinar is available on the Australian BioCommons YouTube Channel: https://youtu.be/YKJDRXCmGMo

Contact: Melissa Burke (melissa@biocommons.org.au)
How can software containers help your research?

This video explains software containers to a research audience. It is an introduction to why containers are beneficial for research. These benefits are standardisation, portability, reliability and reproducibility. 

Software Containers in research are a solution that addresses the challenge of a...

Keywords: containers, software, research, reproducibility, RSE, standard, agility, portable, reusable, code, application, reproducible, standardisation, package, system, cloud, server, version, reliability, program, collaborator, ARDC_AU, training material

How can software containers help your research?
https://dresa.org.au/materials/how-can-software-containers-help-your-research-ca0f9d41-d83b-463b-a548-402c6c642fbf

This video explains software containers to a research audience. It is an introduction to why containers are beneficial for research: standardisation, portability, reliability and reproducibility.

Software containers address the challenge of creating a replicable computational environment, and so support the reproducibility of research results. Understanding the concept of software containers enables researchers to better communicate their research needs with colleagues and other researchers who use and develop containers.

Watch the video here: https://www.youtube.com/watch?v=HelrQnm3v4g

If you want to share this video please cite it as: Australian Research Data Commons, 2021. How can software containers help your research?. [video] Available at: https://www.youtube.com/watch?v=HelrQnm3v4g DOI: http://doi.org/10.5281/zenodo.5091260 [Accessed dd Month YYYY].

Contact: contact@ardc.edu.au
Contributors: Martinez, Paula Andrea (ProjectLeader); Sam Muirhead (Producer); The ARDC Communications Team (Editor); The ARDC Skills and Workforce Development Team (ProjectMember); The ARDC eResearch Infrastructure & Services (ProjectMember); The ARDC Nectar Cloud Services team (ProjectMember)
CheckEM User Guide

CheckEM is an open-source, web-based application which provides quality control assessments of metadata and image annotations from fish stereo-imagery. It is available at marine-ecology.shinyapps.io/CheckEM. The application can assess a range of sampling methods and annotation data formats for...

Keywords: stereo-video, fish, annotation

CheckEM User Guide
https://dresa.org.au/materials/checkem-user-guide

CheckEM is an open-source, web-based application which provides quality control assessments of metadata and image annotations from fish stereo-imagery. It is available at marine-ecology.shinyapps.io/CheckEM. The application can assess a range of sampling methods and annotation data formats for common inaccuracies made while annotating stereo imagery. CheckEM creates interactive plots and tables in a graphical interface, and provides summarised data and a report of potential errors to download.

Contact: brooke.gibbons@uwa.edu.au
EventMeasure Annotation Guide

EventMeasure annotation guide for baited remote underwater stereo video systems (stereo-BRUVs) for count and length

Keywords: fish, stereo-video, annotation

EventMeasure Annotation Guide
https://dresa.org.au/materials/eventmeasure-annotation-guide

EventMeasure annotation guide for baited remote underwater stereo video systems (stereo-BRUVs) for count and length.

Contact: tim.langlois@uwa.edu.au
Stereo-video workflows for fish and benthic ecologists

Stereo imagery is widely used by research institutions and management bodies around the world as a cost-effective and non-destructive method to research and monitor fish and habitats (Whitmarsh, Fairweather and Huveneers, 2017). Stereo-video can provide accurate and precise size and range...

Keywords: stereo-video, fish, sharks, habitats

Resource type: tutorial

Stereo-video workflows for fish and benthic ecologists
https://dresa.org.au/materials/stereo-video-workflows-for-fish-and-benthic-ecologists

Stereo imagery is widely used by research institutions and management bodies around the world as a cost-effective and non-destructive method to research and monitor fish and habitats (Whitmarsh, Fairweather and Huveneers, 2017). Stereo-video can provide accurate and precise size and range measurements and can be used to study spatial and temporal patterns in fish assemblages (McLean et al., 2016), habitat composition and complexity (Collins et al., 2017), behaviour (Goetze et al., 2017), responses to anthropogenic pressures (Bosch et al., 2022) and the recovery and growth of benthic fauna (Langlois et al. 2020). It is important that users of stereo-video collect, annotate, quality control and store their data in a consistent manner, to ensure the data produced is of the highest quality possible and to enable large scale collaborations. Here we collate existing best practices and propose new tools to equip ecologists to ensure that all aspects of the stereo-video workflow are performed in a consistent way.

Contact: tim.langlois@uwa.edu.au
Pawsey: AWS Quantum 101 Using Amazon Braket

Join us as AWS Quantum Specialists introduce quantum simulators and gate-based quantum computers, before turning to more advanced topics.

Keywords: Pawsey Supercomputing Centre, AWS, quantum, HPC

Pawsey: AWS Quantum 101 Using Amazon Braket
https://dresa.org.au/materials/pawsey-aws-quantum-101-using-amazon-braket

Join us as AWS Quantum Specialists introduce quantum simulators and gate-based quantum computers, before turning to more advanced topics.

Contact: training@pawsey.org.au
PCon Preparing applications for El Capitan and beyond

As Lawrence Livermore National Laboratory (LLNL) prepares to stand up its next supercomputer, El Capitan, its application teams are preparing to pivot to another GPU architecture.

This talk presents how the LLNL application teams made the transition from distributed-memory, CPU-only architectures to...

Keywords: GPUs, supercomputing, HPC, PaCER

PCon Preparing applications for El Capitan and beyond
https://dresa.org.au/materials/pcon-preparing-applications-for-el-capitan-and-beyond

As Lawrence Livermore National Laboratory (LLNL) prepares to stand up its next supercomputer, El Capitan, its application teams are preparing to pivot to another GPU architecture. This talk presents how the LLNL application teams made the transition from distributed-memory, CPU-only architectures to GPUs. They share institutional best practices, and discuss new open-source software products both as tools for porting and profiling applications and as avenues for collaboration across the computational science community. Join LLNL's Erik Draeger and Jane Herriman, who presented this talk at Pawsey's PaCER Conference in September 2023.

Contact: training@pawsey.org.au (Pawsey Supercomputing Research Centre)
Audience: masters, phd, researcher, ecr, support, professional, ugrad
From PC to Cloud or High Performance Computing

Most of you would have heard of Cloud and High Performance Computing (HPC), or you may already be using them. HPC is not the same as cloud computing. The two technologies differ in a number of ways, and have some similarities as well.

We may refer to both types as “large scale computing” – but...

Keywords: HPC

From PC to Cloud or High Performance Computing
https://dresa.org.au/materials/from-pc-to-cloud-or-high-performance-computing

Most of you would have heard of Cloud and High Performance Computing (HPC), or you may already be using them. HPC is not the same as cloud computing. The two technologies differ in a number of ways, and have some similarities as well. We may refer to both types as “large scale computing” - but what is the difference? Both systems target scalability of computing, but in different ways. This webinar gives a good overview for researchers thinking of moving from their local computer to the Cloud or a High Performance Computing cluster.

Topics:
- Introduction
- HPC vs Cloud computing
- When to use HPC
- When to use the Cloud
- The Cloud - Pros and Cons
- HPC - Pros and Cons

The webinar has no prerequisites.

Contact: training@intersect.org.au
Getting started with HPC using PBS Pro

Is your computer’s limited power throttling your research ambitions? Are your analysis scripts pushing your laptop’s processor to its limits? Is your software crashing because you’ve run out of memory? Would you like to unleash the power of the Unix command line to automate and run your analysis...

Keywords: HPC

Getting started with HPC using PBS Pro
https://dresa.org.au/materials/getting-started-with-hpc-using-pbs-pro

Is your computer’s limited power throttling your research ambitions? Are your analysis scripts pushing your laptop’s processor to its limits? Is your software crashing because you’ve run out of memory? Would you like to unleash the power of the Unix command line to automate and run your analysis on supercomputers that you can access for free? High-Performance Computing (HPC) allows you to accomplish your analysis faster by using many parallel CPUs and huge amounts of memory simultaneously. This course provides a hands-on introduction to running software on HPC infrastructure using PBS Pro. You will learn how to:
- Connect to an HPC cluster
- Use the Unix command line to operate a remote computer and create job scripts
- Submit and manage jobs on a cluster using a scheduler
- Transfer files to and from a remote computer
- Use software through environment modules
- Use parallelisation to speed up data analysis
- Access the facilities available to you as a researcher

This is the PBS Pro version of the Getting Started with HPC course. This course assumes basic familiarity with the Bash command line environment found on GNU/Linux and other Unix-like environments. To come up to speed, consider taking our "Unix Shell and Command Line Basics" course.

Contact: training@intersect.org.au
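The job-script skills this course teaches can be sketched as follows. A minimal PBS Pro job script is an ordinary shell script whose `#PBS` comment lines carry the resource requests; this sketch is not from the course materials, and the job name, resource sizes and module name are hypothetical placeholders.

```shell
#!/bin/bash
#PBS -N demo_job                    # job name (hypothetical)
#PBS -l select=1:ncpus=4:mem=8gb    # request 1 node, 4 CPUs, 8 GB of memory
#PBS -l walltime=01:00:00           # maximum run time of 1 hour

# Load software via environment modules (module name is site-specific):
# module load python/3.10

# PBS starts jobs in your home directory; move to the submission directory.
cd "${PBS_O_WORKDIR:-.}"

echo "Job running on $(hostname)"
```

On the cluster you would submit this with `qsub jobscript.pbs` and check its progress with `qstat`.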
Getting started with HPC using Slurm

Is your computer’s limited power throttling your research ambitions? Are your analysis scripts pushing your laptop’s processor to its limits? Is your software crashing because you’ve run out of memory? Would you like to unleash the power of the Unix command line to automate and run your analysis...

Keywords: HPC

Getting started with HPC using Slurm
https://dresa.org.au/materials/getting-started-with-hpc-using-slurm

Is your computer’s limited power throttling your research ambitions? Are your analysis scripts pushing your laptop’s processor to its limits? Is your software crashing because you’ve run out of memory? Would you like to unleash the power of the Unix command line to automate and run your analysis on supercomputers that you can access for free? High-Performance Computing (HPC) allows you to accomplish your analysis faster by using many parallel CPUs and huge amounts of memory simultaneously. This course provides a hands-on introduction to running software on HPC infrastructure using Slurm. You will learn how to:
- Connect to an HPC cluster
- Use the Unix command line to operate a remote computer and create job scripts
- Submit and manage jobs on a cluster using a scheduler
- Transfer files to and from a remote computer
- Use software through environment modules
- Use parallelisation to speed up data analysis
- Access the facilities available to you as a researcher

This is the Slurm version of the Getting Started with HPC course. This course assumes basic familiarity with the Bash command line environment found on GNU/Linux and other Unix-like environments. To come up to speed, consider taking our "Unix Shell and Command Line Basics" course.

Contact: training@intersect.org.au
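Under Slurm, the equivalent job script uses `#SBATCH` comment lines for its resource requests. This is a rough sketch rather than course material; the job name, resource sizes and module name are hypothetical placeholders.

```shell
#!/bin/bash
#SBATCH --job-name=demo_job     # job name (hypothetical)
#SBATCH --cpus-per-task=4       # request 4 CPUs
#SBATCH --mem=8G                # request 8 GB of memory
#SBATCH --time=01:00:00         # maximum run time of 1 hour

# Load software via environment modules (module name is site-specific):
# module load python/3.10

# Unlike PBS, Slurm starts the job in the directory you submitted from.
echo "Job ${SLURM_JOB_ID:-local-test} running on $(hostname)"
```

You would submit this with `sbatch jobscript.sh`, then monitor it with `squeue` and cancel it, if needed, with `scancel`.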
Parallel Programming for HPC

You have written, compiled and run functioning programs in C and/or Fortran. You know how HPC works and you’ve submitted batch jobs.

Now you want to move from writing single-threaded programs into the parallel programming paradigm, so you can truly harness the full power of High Performance...

Keywords: HPC

Parallel Programming for HPC
https://dresa.org.au/materials/parallel-programming-for-hpc

You have written, compiled and run functioning programs in C and/or Fortran. You know how HPC works and you’ve submitted batch jobs. Now you want to move from writing single-threaded programs into the parallel programming paradigm, so you can truly harness the full power of High Performance Computing. The course covers:
- OpenMP (Open Multi-Processing): a widespread method for shared memory programming
- MPI (Message Passing Interface): a leading distributed memory programming model

To do this course you need to have:
- A good working knowledge of HPC. Consider taking our Getting Started with HPC using PBS Pro course to come up to speed beforehand.
- Prior experience of writing programs in either C or Fortran.

Contact: training@intersect.org.au
10 Reproducible Research things - Building Business Continuity

The idea that you can duplicate an experiment and get the same conclusion is the basis for all scientific discoveries. Reproducible research is data analysis that starts with the raw data and offers a transparent workflow to arrive at the same results and conclusions. However not all studies are...

Keywords: reproducibility, data management

Resource type: tutorial, video

10 Reproducible Research things - Building Business Continuity
https://dresa.org.au/materials/9-reproducible-research-things-building-business-continuity

The idea that you can duplicate an experiment and get the same conclusion is the basis for all scientific discoveries. Reproducible research is data analysis that starts with the raw data and offers a transparent workflow to arrive at the same results and conclusions. However, not all studies are replicable, often due to a lack of information about the process, which is why reproducibility in research is extremely important. Researchers genuinely want to make their research more reproducible, but sometimes don’t know where to start and often don’t have the time to investigate or establish methods for how reproducible research can speed up everyday work. We aim for the philosophy “Be better than you were yesterday”. Reproducibility is a process, and we highlight that there is no expectation to go from beginner to expert in a single workshop. Instead, we offer some steps you can take along the reproducibility path following our Steps to Reproducible Research self-paced program.

Video: https://www.youtube.com/watch?v=bANTr9RvnGg
Tutorial: https://guereslib.github.io/ten-reproducible-research-things/

Contact: a.miotto@griffith.edu.au; s.stapleton@griffith.edu.au; i.jennings@griffith.edu.au
Contributors: Sharron Stapleton, Isaac Jennings
Audience: masters, phd, ecr, researcher, support
HPC file systems and what users need to consider for appropriate and efficient usage

Three videos on miscellaneous aspects of HPC usage - useful reference for new users of HPC systems.

1 – General overview of different file systems that might be available on HPC. The video goes through shared file systems such as /home and /scratch, local compute node file systems (local...

Keywords: HPC, high performance computer, File systems

Resource type: video, presentation

HPC file systems and what users need to consider for appropriate and efficient usage
https://dresa.org.au/materials/hpc-file-systems-and-what-users-need-to-consider-for-appropriate-and-efficient-usage

Three videos on miscellaneous aspects of HPC usage - a useful reference for new users of HPC systems.

1 - General overview of the different file systems that might be available on HPC. The video goes through shared file systems such as /home and /scratch, local compute node file systems (local scratch or $TMPDIR) and the storage file system. It outlines what users need to consider if they wish to use any of these in their workflows.
2 - Overview of the different directories that might be present on HPC. These could include /home, /scratch, /opt, /lib and /lib64, /sw and others.
3 - Overview of the message-of-the-day file and the message that is displayed to users every time they log in. This displays information about general help and often current problems or upcoming outages.

Contact: QCIF Training (training@qcif.edu.au)
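The trade-off described in the first video (shared /scratch versus fast node-local scratch) often appears in job scripts as a staging pattern like the following sketch. Paths and file names are illustrative only; `$TMPDIR` falls back to /tmp when run off-cluster.

```shell
# Stage data into node-local scratch, compute there, then copy results
# back to shared storage before the job ends and local scratch is wiped.
WORKDIR="${TMPDIR:-/tmp}/job_scratch_demo"   # node-local scratch on many systems
mkdir -p "$WORKDIR"

echo "input data" > "$WORKDIR/input.txt"     # stand-in for staging real input files

# ... run the analysis against $WORKDIR/input.txt here ...

cp "$WORKDIR/input.txt" ./results_copy.txt   # copy results back to shared storage
rm -rf "$WORKDIR"                            # clean up local scratch
```

Doing the heavy I/O on node-local disk avoids hammering the shared file system, which benefits every other user of the cluster.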
Basic Linux/Unix commands

A series of eight videos (each between 5 and 10 minutes long) following the content of the Software Carpentry workshop "The Unix Shell".

Sessions 1, 2 and 3 provide instructions on the minimal level of Linux/Unix commands recommended for new...

Keywords: HPC, high performance computer, Unix, Linux, Software Carpentry

Resource type: video, guide

Basic Linux/Unix commands
https://dresa.org.au/materials/basic-linux-unix-commands

A series of eight videos (each between 5 and 10 minutes long) following the content of the Software Carpentry workshop "The Unix Shell" (https://swcarpentry.github.io/shell-novice/). Sessions 1, 2 and 3 provide instructions on the minimal level of Linux/Unix commands recommended for new users of HPC.

1 - An overview of how to find out where a user is in the filesystem, list the files there, and how to get help on Unix commands
2 - How to move around the file system and change into other directories
3 - Explains the difference between an absolute and a relative path
4 - Overview of how to create new directories, and to create and edit new files with nano
5 - How to use the vi editor to edit files
6 - Overview of the file viewers available
7 - How to copy and move files and directories
8 - How to remove files and directories

Further details and exercises with solutions can be found on the Software Carpentry "The Unix Shell" page (https://swcarpentry.github.io/shell-novice/).

Contact: QCIF Training (training@qcif.edu.au)
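As a taster of the commands the sessions cover, they can be strung together like this; run it in a throwaway directory, and note the file names are arbitrary examples.

```shell
cd "$(mktemp -d)"             # start in a fresh temporary directory
pwd                           # session 1: where am I?
mkdir -p project/data         # session 4: create (nested) directories
cd project                    # session 2: move into a directory
echo "my notes" > readme.txt  # create a file (nano and vi are interactive)
cp readme.txt data/           # session 7: copy a file into a directory
ls data                       # session 1: list a directory's contents
mv readme.txt notes.txt       # session 7: rename (move) a file
rm data/readme.txt            # session 8: remove a file
```

Each command has a `--help` flag or a `man` page for the details the videos walk through.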
Transferring files and data

A short video outlining the basics of using FileZilla to establish a secure file transfer protocol (sftp) connection to an HPC system, providing a drag-and-drop interface for transferring files between the HPC system and a desktop computer.

Keywords: sftp, file transfer, HPC, high performance computer

Resource type: video, guide

Transferring files and data
https://dresa.org.au/materials/transferring-files-and-data

A short video outlining the basics of using FileZilla to establish a secure file transfer protocol (sftp) connection to an HPC system, providing a drag-and-drop interface for transferring files between the HPC system and a desktop computer.

Contact: QCIF Training (training@qcif.edu.au)
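For scripted transfers, the command-line tools `scp` and `sftp` use the same protocol FileZilla does. This sketch uses a hypothetical hostname, username and paths, and echoes the commands as a dry run so nothing is actually transferred.

```shell
# Hypothetical account and host; replace with your site's details.
REMOTE="jsmith@hpc.example.edu.au"

# Push a local file to the cluster's scratch area:
echo scp results.csv "$REMOTE:/scratch/jsmith/"

# Pull an output archive back to the current directory:
echo scp "$REMOTE:/scratch/jsmith/output.tar.gz" .

# Or open an interactive session (a text-based two-pane workflow):
echo sftp "$REMOTE"
```

Remove the leading `echo` to run the commands for real; both tools authenticate the same way `ssh` does.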
Connecting to HPC

A series of three short videos introducing how to use PuTTY to connect from a Windows PC to a secure HPC (high performance computing) cluster.

1 - The very basics on how to establish a connection to HPC.
2 - How to add more specific options for the connection to HPC.
3 - How to save the...

Keywords: HPC, high performance computer, ssh

Resource type: video, guide

Connecting to HPC
https://dresa.org.au/materials/connecting-to-hpc

A series of three short videos introducing how to use PuTTY to connect from a Windows PC to a secure HPC (high performance computing) cluster.

1 - The very basics of how to establish a connection to HPC.
2 - How to add more specific options for the connection to HPC.
3 - How to save the details and options for a connection for future use.

Contact: QCIF Training (training@qcif.edu.au)
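For users on macOS, Linux, or Windows with OpenSSH, the rough equivalent of PuTTY's saved sessions (video 3) is an entry in ~/.ssh/config. The hostname and username below are hypothetical placeholders for your site's details.

```shell
# Save the connection details once so future logins need only the alias.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host myhpc
    HostName hpc.example.edu.au
    User jsmith
    ServerAliveInterval 60
EOF
# With the entry saved, connecting is just:
# ssh myhpc
```

The `ServerAliveInterval` option keeps idle sessions from being dropped, much like PuTTY's keepalive setting.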