
ARDC Research Data Rights Management Guide

A practical guide for people and organisations working with data, covering rights information and licences and raising awareness of the implications of leaving data unlicensed.

Who is this for? This guide is primarily directed toward members of the research sector, particularly data rights...

Keywords: data, rights, management, licence, licensing, research, policy, guide, training material

ARDC Research Data Rights Management Guide https://dresa.org.au/materials/ardc-research-data-rights-management-guide-a5c12e9a-672b-4a42-b9d1-e1315d733aae A practical guide for people and organisations working with data, covering rights information and licences and raising awareness of the implications of leaving data unlicensed. Who is this for? This guide is primarily directed toward members of the research sector, particularly data rights holders, users and suppliers. Some general reference is made to the characteristics and management of government data, acknowledging that this kind of data can be an input to the research process. Government readers should consult their agency’s data management policies, in addition to reading this guide. contact@ardc.edu.au Laughlin, Greg (type: Editor) Appleyard, Baden (type: Editor) data, rights, management, licence, licensing, research, policy, guide, training material
ARDC Skills Landscape

The Australian Research Data Commons is driving transformational change in the research data ecosystem, enabling researchers to conduct world class data-intensive research. One interconnected component of this ecosystem is skills development/uplift, which is critical to the Commons and its...

Keywords: skills, data skills, eresearch skills, community, skilled workforce, FAIR, research data management, data stewardship, data governance, data use, data generation, training material

ARDC Skills Landscape https://dresa.org.au/materials/ardc-skills-landscape The Australian Research Data Commons is driving transformational change in the research data ecosystem, enabling researchers to conduct world class data-intensive research. One interconnected component of this ecosystem is skills development/uplift, which is critical to the Commons and its purpose of providing Australian researchers with a competitive advantage through data.   In this presentation, Kathryn Unsworth introduces the ARDC Skills Landscape. The Landscape is a first step in developing a national skills framework to enable a coordinated and cohesive approach to skills development across the Australian eResearch sector. It is also a first step towards helping to analyse current approaches in data training to identify: - Siloed skills initiatives, and finding ways to build partnerships and improve collaboration - Skills deficits, and working to address the gaps in data skills - Areas of skills development for investment by skills stakeholders like universities, research organisations, skills and training service providers, ARDC, etc.   contact@ardc.edu.au skills, data skills, eresearch skills, community, skilled workforce, FAIR, research data management, data stewardship, data governance, data use, data generation, training material
The Living Book of Digital Skills

The Living Book of Digital Skills (You never knew you needed until now) is a living, open source online guide to 'modern not-quite-technical computer skills' for researchers and the broader academic community.

A collaboration between Australia's Academic and Research Network (AARNet) and the...

Keywords: digital skills, digital dexterity, community, open source

Resource type: guide

The Living Book of Digital Skills https://dresa.org.au/materials/the-living-book-of-digital-skills *The Living Book of Digital Skills (You never knew you needed until now)* is a living, open source online guide to 'modern not-quite-technical computer skills' for researchers and the broader academic community. A collaboration between Australia's Academic and Research Network (AARNet) and the Council of Australian University Librarians (CAUL), this book is the creation of the CAUL Digital Dexterity Champions and their communities. **Contributing to the Digital Skills GitBook** The Digital Skills GitBook is an open source project and, like many projects on GitHub, we welcome your contributions. If you have knowledge or expertise on one of our [requested topics](https://aarnet.gitbook.io/digital-skills-gitbook-1/requested-articles), we would love you to write an article for the book. Please let us know what you'd like to write about via our [contributor form](https://github.com/AARNet/Digital-Skills-GitBook/issues/new?assignees=sarasrking&labels=contributors&template=contributor-form.yml&title=Contributor+form%3A+). There are other ways to contribute too. For example, you might: * have a great idea for a new topic to be included in one of our chapters (make a new page) * notice some information that’s out-of-date or that could be explained better (edit a page) * come across something in the GitBook that’s not working as it should be (submit an issue) Sara King - sara.king@aarnet.edu.au Sara King Miah de Francesch Emma Chapman Katie Mills Ruth Cameron digital skills, digital dexterity, community, open source ugrad masters mbr phd ecr researcher support
Create a website resume

Written for the Qld Research Bazaar conference 2021, this self-paced lesson breaks down how to use GitHub Pages to make a resume, with a simple starter template. It discusses how to use Markdown and minimal HTML to customize the template, and offers explanations on how the...

Keywords: personal development, website

Resource type: tutorial, guide

Create a website resume https://dresa.org.au/materials/create-a-website-resume Written for the Qld Research Bazaar conference 2021, this self-paced lesson breaks down how to use GitHub Pages to make a resume, with a simple starter template. It discusses how to use Markdown and minimal HTML to customize the template, and offers explanations on how the components work together. a.miotto@griffith.edu.au personal development, website
10 Reproducible Research things - Building Business Continuity

The idea that you can duplicate an experiment and get the same conclusion is the basis for all scientific discoveries. Reproducible research is data analysis that starts with the raw data and offers a transparent workflow to arrive at the same results and conclusions. However, not all studies are...

Keywords: reproducibility, data management

Resource type: tutorial, video

10 Reproducible Research things - Building Business Continuity https://dresa.org.au/materials/9-reproducible-research-things-building-business-continuity The idea that you can duplicate an experiment and get the same conclusion is the basis for all scientific discoveries. Reproducible research is data analysis that starts with the raw data and offers a transparent workflow to arrive at the same results and conclusions. However, not all studies are replicable due to a lack of information on the process. Therefore, reproducibility in research is extremely important. Researchers genuinely want to make their research more reproducible, but sometimes don’t know where to start and often don’t have the time to investigate or establish methods for how reproducible research can speed up everyday work. We aim for the philosophy “Be better than you were yesterday”. Reproducibility is a process, and we highlight that there is no expectation to go from beginner to expert in a single workshop. Instead, we offer some steps you can take towards the reproducibility path following our Steps to Reproducible Research self-paced program. Video: https://www.youtube.com/watch?v=bANTr9RvnGg Tutorial: https://guereslib.github.io/ten-reproducible-research-things/ a.miotto@griffith.edu.au; s.stapleton@griffith.edu.au; i.jennings@griffith.edu.au; Sharron Stapleton Isaac Jennings reproducibility, data management masters phd ecr researcher support
Data Storytelling

Nowadays, more information is created than our audience could possibly analyse on their own! A study by Stanford professor Chip Heath found that during the recall of speeches, 63% of people remember stories and how they made them feel, but only 5% remember a single statistic. So, you should convert...

Keywords: data storytelling, data visualisation

Data Storytelling https://dresa.org.au/materials/data-storytelling Nowadays, more information is created than our audience could possibly analyse on their own! A study by Stanford professor Chip Heath found that during the recall of speeches, 63% of people remember stories and how they made them feel, but only 5% remember a single statistic. So, you should convert your insights and discoveries from data into stories to share with non-experts in a language they understand. But how? This tutorial helps you construct stories that incite an emotional response and create meaning and understanding for the audience by applying data storytelling techniques. m.yamaguchi@griffith.edu.au a.miotto@griffith.edu.au data storytelling, data visualisation support masters phd researcher
Porting the multi-GPU SELF-Fluids code to HIPFort

In this presentation, Dr. Joseph Schoonover of Fluid Numerics LLC shares their experience of porting SELF-Fluids from multi-GPU CUDA-Fortran to multi-GPU HIPFort.

The presentation covers the design principles and roadmap for SELF and the strategy to port from...

Keywords: AMD, GPUs, supercomputer, supercomputing

Resource type: presentation

Porting the multi-GPU SELF-Fluids code to HIPFort https://dresa.org.au/materials/porting-the-multi-gpu-self-fluids-code-to-hipfort In this presentation, Dr. Joseph Schoonover of Fluid Numerics LLC shares their experience of porting SELF-Fluids from multi-GPU CUDA-Fortran to multi-GPU HIPFort. The presentation covers the design principles and roadmap for SELF and the strategy to port from Nvidia-only platforms to AMD & Nvidia GPUs. Also discussed are the hurdles encountered along the way and considerations for developing multi-GPU accelerated applications in Fortran. SELF is an object-oriented Fortran library that supports the implementation of Spectral Element Methods for solving partial differential equations. SELF-Fluids is an implementation of SELF that solves the compressible Navier-Stokes equations on CPU-only and GPU-accelerated compute platforms using the Discontinuous Galerkin Spectral Element Method. The SELF API is designed based on the assumption that SEM developers and researchers need to be able to implement derivatives in 1-D and divergence, gradient, and curl in 2-D and 3-D on scalar, vector, and tensor functions using spectral collocation, continuous Galerkin, and discontinuous Galerkin spectral element methods. The presentation discussion is placed in the context of the Exascale era, where we're faced with a zoo of available compute hardware. Because of this, SELF routines provide support for GPU acceleration through AMD’s HIP and support for multi-core, multi-node, and multi-GPU platforms with MPI. training@pawsey.org.au AMD, GPUs, supercomputer, supercomputing
Embracing new solutions for in-situ visualisation

This PPT was used by Jean Favre, senior visualisation software engineer at CSCS, the Swiss National Supercomputing Centre, during his presentation at P'Con '21 (Pawsey's first PaCER Conference).

This material discusses the upcoming release of ParaView v5.10, a leading scientific visualisation...

Keywords: ParaView, GPUs, supercomputer, supercomputing, visualisation, data visualisation

Resource type: presentation

Embracing new solutions for in-situ visualisation https://dresa.org.au/materials/embracing-new-solutions-for-in-situ-visualisation This PPT was used by Jean Favre, senior visualisation software engineer at CSCS, the Swiss National Supercomputing Centre, during his presentation at P'Con '21 (Pawsey's first PaCER Conference). This material discusses the upcoming release of ParaView v5.10, a leading scientific visualisation application. In this release, ParaView consolidates its implementation of the Catalyst API, a specification developed for simulations and scientific data producers to analyse and visualise data in situ. The material reviews some of the terminology and issues of different in-situ visualisation scenarios, then reviews early Data Adaptors for tight coupling of simulations and visualisation solutions. This is followed by an introduction of Conduit, an intuitive model for describing hierarchical scientific data. Both ParaView-Catalyst and Ascent use Conduit’s Mesh Blueprint, a set of conventions to describe computational simulation meshes. Finally, the material presents CSCS’ early experience in adopting ParaView-Catalyst and Ascent via two concrete examples of instrumentation of some proxy numerical applications. training@pawsey.org.au ParaView, GPUs, supercomputer, supercomputing, visualisation, data visualisation
HPC file systems and what users need to consider for appropriate and efficient usage

Three videos on miscellaneous aspects of HPC usage - useful reference for new users of HPC systems.

1 – General overview of different file systems that might be available on HPC. The video goes through shared file systems such as /home and /scratch, local compute node file systems (local...

Keywords: HPC, high performance computer, File systems

Resource type: video, presentation

HPC file systems and what users need to consider for appropriate and efficient usage https://dresa.org.au/materials/hpc-file-systems-and-what-users-need-to-consider-for-appropriate-and-efficient-usage Three videos on miscellaneous aspects of HPC usage - useful reference for new users of HPC systems. 1 – General overview of different file systems that might be available on HPC. The video goes through shared file systems such as /home and /scratch, local compute node file systems (local scratch or $TMPDIR) and storage file system. It outlines what users need to consider if they wish to use any of these in their workflows. 2 – Overview of the different directories that might be present on HPC. These could include /home, /scratch, /opt, /lib and lib64, /sw and others. 3 – Overview of the Message-of-the-day file and the message that is displayed to users every time they log in. This displays info about general help and often current problems or upcoming outages. QCIF Training (training@qcif.edu.au) HPC, high performance computer, File systems
Basic Linux/Unix commands

A series of eight videos (each between 5 and 10 minutes long) following the content of the Software Carpentry workshop "The Unix Shell".

Sessions 1, 2 and 3 provide instructions on the minimal level of Linux/Unix commands recommended for new...

Keywords: HPC, high performance computer, Unix, Linux, Software Carpentry

Resource type: video, guide

Basic Linux/Unix commands https://dresa.org.au/materials/basic-linux-unix-commands A series of eight videos (each between 5 and 10 minutes long) following the content of the Software Carpentry workshop ["The Unix Shell"](https://swcarpentry.github.io/shell-novice/). Sessions 1, 2 and 3 provide instructions on the minimal level of Linux/Unix commands recommended for new users of HPC. 1 – An overview of how to find out where a user is in the filesystem, list the files there, and how to get help on Unix commands 2 – How to move around the file system and change into other directories 3 – Explains the difference between an absolute and relative path 4 – Overview of how to create new directories, and to create and edit new files with nano 5 – How to use the vi editor to edit files 6 – Overview of file viewers available 7 – How to copy and move files and directories 8 – How to remove files and directories Further details and exercises with solutions can be found on the Software Carpentry "The Unix Shell" page (https://swcarpentry.github.io/shell-novice/) QCIF Training (training@qcif.edu.au) HPC, high performance computer, Unix, Linux, Software Carpentry
Transferring files and data

A short video outlining the basics of using FileZilla to establish a secure file transfer protocol (sftp) connection to an HPC system, providing a drag-and-drop interface for transferring files between the HPC system and a desktop computer.

Keywords: sftp, file transfer, HPC, high performance computer

Resource type: video, guide

Transferring files and data https://dresa.org.au/materials/transferring-files-and-data A short video outlining the basics of using FileZilla to establish a secure file transfer protocol (sftp) connection to an HPC system, providing a drag-and-drop interface for transferring files between the HPC system and a desktop computer. QCIF Training (training@qcif.edu.au) sftp, file transfer, HPC, high performance computer
Connecting to HPC

A series of three short videos introducing how to use PuTTY to connect from a Windows PC to a secure HPC (high performance computing) cluster.

1 - The very basics on how to establish a connection to HPC.
2 - How to add more specific options for the connection to HPC.
3 - How to save the...

Keywords: HPC, high performance computer, ssh

Resource type: video, guide

Connecting to HPC https://dresa.org.au/materials/connecting-to-hpc A series of three short videos introducing how to use PuTTY to connect from a Windows PC to a secure HPC (high performance computing) cluster. 1 - The very basics on how to establish a connection to HPC. 2 - How to add more specific options for the connection to HPC. 3 - How to save the details and options for a connection for future use. QCIF Training (training@qcif.edu.au) HPC, high performance computer, ssh
Use the Trove Newspaper & Gazette Harvester (web app version)

This video shows how you can use the web app version of the Trove Newspaper & Gazette Harvester to download large quantities of digitised newspaper articles from Trove. Just give it a search from the Trove web interface, and the harvester will...

Keywords: Trove, newspapers, GLAM Workbench, HASS

Resource type: video

Use the Trove Newspaper & Gazette Harvester (web app version) https://dresa.org.au/materials/use-the-trove-newspaper-gazette-harvester-web-app-version-to-download-large-quantities-of-digitised-articles This video shows how you can use the web app version of the [Trove Newspaper & Gazette Harvester](https://glam-workbench.net/trove-harvester/) to download large quantities of digitised newspaper articles from Trove. Just give it a search from the Trove web interface, and the harvester will save the metadata of all the articles from the search results in a CSV (spreadsheet) file for further analysis. You can also save the full text of every article, as well as copies of the articles as JPG images, and even PDFs. The GLAM Workbench is a collection of tools, examples, tutorials, and apps that help you make use of collection data from GLAM organisations (Galleries, Libraries, Archives, and Museums). See: [https://glam-workbench.net/](https://glam-workbench.net/) Tim Sherratt (tim@timsherratt.org and @wragge on Twitter) Trove, newspapers, GLAM Workbench, HASS ugrad masters phd ecr researcher support
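As a sketch of the kind of "further analysis" mentioned above, the snippet below loads a harvested metadata CSV with pandas and tallies articles by year and by newspaper. It is illustrative only and not part of the GLAM Workbench: the file name (results.csv) and the column names (date, newspaper_title) are assumptions, so check the headers of your own harvest before adapting it.

```python
# A minimal sketch of exploring a Trove harvest CSV with pandas.
# The file name and column names are assumptions for illustration;
# inspect your own harvest's headers before running.
import pandas as pd

df = pd.read_csv("results.csv", parse_dates=["date"])  # hypothetical file/column

# Count harvested articles per year of publication
articles_per_year = df["date"].dt.year.value_counts().sort_index()
print(articles_per_year)

# Which newspapers contributed the most articles? (hypothetical column name)
print(df["newspaper_title"].value_counts().head(10))
```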
Research Data Management (RDM) Online Orientation Module (Macquarie University)

This is a self-paced, guided orientation to the essential elements of Research Data Management. It is available for others to use and modify.
The course introduces the following topics: data policies, data sensitivity, data management planning, storage and security, organisation and metadata,...

Keywords: research data, data management, FAIR data, training

Resource type: quiz, activity, other

Research Data Management (RDM) Online Orientation Module (Macquarie University) https://dresa.org.au/materials/macquarie-university-research-data-management-rdm-online This is a self-paced, guided orientation to the essential elements of Research Data Management. It is available for others to use and modify. The course introduces the following topics: data policies, data sensitivity, data management planning, storage and security, organisation and metadata, benefits of data sharing, licensing, repositories, and best practice including the FAIR principles. Embedded activities and examples help extend learner experience and awareness. The course was designed to assist research students and early career researchers in complying with policies and legislative requirements and understanding safe data practices, to raise awareness of the benefits of data curation and data sharing (efficiency and impact), and to equip them with the required knowledge to plan their data management early in their projects. This course is divided into four sections, plus a review quiz: 1. Crawl - What is Research Data and why care for it? Policy and legislative requirements. The Research Data Life-cycle. Data Management Planning (~30 mins) 2. Walk - Data sensitivity, identifiability, storage, and security (~60 mins) 3. Run - Record keeping, data retention, file naming, folder structures, version control, metadata, data sharing, open data, licences, data repositories, data citation, and ethics (~75 mins) 4. Jump - Best practice FAIR data principles (~45 mins) 5. Fight - Review - a quiz designed to review and reinforce knowledge (~15 mins) https://rise.articulate.com/share/-AWqSPaEI_jTbHwzQHdmQ43R50edrCl0 (Password: "FAIR") Any queries or suggestions for course improvement can be directed to the Macquarie University Research Integrity Team: Dr Paul Sou (paul.sou@mq.edu.au) or Dr Shannon Smith (shannon.smith@mq.edu.au). SCORM files can be made available upon request. research data, data management, FAIR data, training
Deep Learning for Natural Language Processing

This workshop is designed to be instructor led and consists of two parts.
Part 1 consists of a lecture-demo about text processing and a hands-on session for attendees to learn how to clean a dataset.
Part 2 consists of a lecture introducing Recurrent Neural Networks and a hands-on session for...

Keywords: Deep learning, NLP, Machine learning

Resource type: presentation, tutorial

Deep Learning for Natural Language Processing https://dresa.org.au/materials/deep-learning-for-natural-language-processing This workshop is designed to be instructor led and consists of two parts. Part 1 consists of a lecture-demo about text processing and a hands-on session for attendees to learn how to clean a dataset. Part 2 consists of a lecture introducing Recurrent Neural Networks and a hands-on session for attendees to train their own RNN. The Powerpoints contain the lecture slides, while the Jupyter notebooks (.ipynb) contain the hands-on coding exercises. This workshop introduces natural language as data for deep learning. We discuss various techniques and software packages (e.g. python strings, RegEx, NLTK, Word2Vec) that help us convert, clean, and formalise text data “in the wild” for use in a deep learning model. We then explore the training and testing of a Recurrent Neural Network on the data to complete a real world task. We will be using TensorFlow v2 for this purpose. datascienceplatform@monash.edu Deep learning, NLP, Machine learning
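For a flavour of what the hands-on sessions cover, here is a minimal sketch (not taken from the workshop notebooks) of the kind of pipeline described above: cleaning raw text with regular expressions, vectorising it, and training a small recurrent network with TensorFlow v2. The toy sentences and labels are invented for illustration.

```python
# A minimal, self-contained sketch of text cleaning plus a tiny RNN classifier.
# Toy data only; the workshop's own notebooks are the authoritative material.
import re
import tensorflow as tf

texts = ["I LOVED this film!!", "Utterly boring...", "Great acting, great story", "worst movie ever"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative (toy labels)

def clean(text):
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)   # strip punctuation and digits
    return re.sub(r"\s+", " ", text).strip()

cleaned = [clean(t) for t in texts]

# Turn words into integer sequences of equal length
vectorize = tf.keras.layers.TextVectorization(output_mode="int", output_sequence_length=8)
vectorize.adapt(cleaned)
x = vectorize(tf.constant(cleaned))
y = tf.constant(labels)

# A tiny RNN: embedding -> LSTM -> sigmoid classifier
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=len(vectorize.get_vocabulary()), output_dim=16),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)
print(model.predict(x))
```

A real workflow would of course use a proper corpus and tokenisation strategy (e.g. NLTK or Word2Vec embeddings, as the workshop discusses); the point here is only the clean-vectorise-train shape of the pipeline.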
Getting Started with Deep Learning

This lecture provides a high level overview of how you could get started with developing deep learning applications. It introduces deep learning in a nutshell and then provides advice relating to the concepts and skill sets you would need to know and have in order to build a deep learning...

Keywords: Deep learning, Machine learning

Resource type: presentation

Getting Started with Deep Learning https://dresa.org.au/materials/getting-started-with-deep-learning This lecture provides a high level overview of how you could get started with developing deep learning applications. It introduces deep learning in a nutshell and then provides advice relating to the concepts and skill sets you would need to know and have in order to build a deep learning application. The lecture also provides pointers to various resources you could use to gain a stronger foothold in deep learning. This lecture is targeted at researchers who may be complete beginners in machine learning, deep learning, or even with programming, but who would like to get into the space to build AI systems hands-on. datascienceplatform@monash.edu Deep learning, Machine learning
Visualisation and Storytelling

This workshop explores how data visualisation techniques could be utilised to better understand data and to communicate research efforts and outcomes. The workshop covers a broad range of techniques from simple and static 2D graphics to advanced 3D visualisations in order to provide a broad...

Keywords: data visualisation, storytelling

Resource type: presentation, tutorial

Visualisation and Storytelling https://dresa.org.au/materials/visualisation-and-storytelling This workshop explores how data visualisation techniques could be utilised to better understand data and to communicate research efforts and outcomes. The workshop covers a broad range of techniques from simple and static 2D graphics to advanced 3D visualisations in order to provide a broad overview of the tools available for data analysis, presentation and storytelling. We explore, among others, animated charts and graphs, web visualisation tools such as scrollytellers, and the possibilities of 3D, interactive, and even immersive visualisations. We use real world, concrete examples along the way in order to tangibly illustrate how these visualisations can be created and how viewers perceive and interact with them. We also introduce the various tools and skill sets you would need to be proficient at presenting your data to the world. By the conclusion of this workshop, you would gain familiarity with the various possibilities for presenting your own research data and outcomes. You would have a more intuitive understanding of the strengths and weaknesses of various modes of data visualisation and storytelling, and would have a starting point to obtain the right skill sets relevant to developing your visualisations of choice. datascienceplatform@monash.edu data visualisation, storytelling
Semi-Supervised Deep Learning

Modern deep neural networks require large amounts of labelled data to train. Obtaining the required labelled data is often an expensive and time consuming process. Semi-supervised deep learning involves the use of various creative techniques to train deep neural networks on partially labelled...

Keywords: Deep learning, Machine learning, semi-supervised

Resource type: presentation, tutorial

Semi-Supervised Deep Learning https://dresa.org.au/materials/semi-supervised-deep-learning Modern deep neural networks require large amounts of labelled data to train. Obtaining the required labelled data is often an expensive and time consuming process. Semi-supervised deep learning involves the use of various creative techniques to train deep neural networks on partially labelled data. If successful, it allows better training of a model despite the limited amount of labelled data available. This workshop is designed to be instructor led and covers various semi-supervised learning techniques available in the literature. The workshop consists of a lecture introducing at a high level a selection of techniques that are suitable for semi-supervised deep learning. We discuss how these techniques can be implemented and the underlying assumptions they require. The lecture is followed by a hands-on session where attendees implement a semi-supervised learning technique to train a neural network. We observe and discuss the changing performance and behaviour of the network as varying amounts of labelled and unlabelled data are provided to the network during training. datascienceplatform@monash.edu Deep learning, Machine learning, semi-supervised
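As an illustration of one such technique, the sketch below implements simple pseudo-labelling on synthetic data with TensorFlow/Keras: train on a small labelled set, keep only confident predictions on the unlabelled pool, and retrain on the combined set. It is an assumption-laden example rather than the workshop's own code, and the confidence threshold (0.95) is an arbitrary choice.

```python
# A minimal sketch of pseudo-labelling, one semi-supervised technique,
# on synthetic data. Illustrative only; not the workshop implementation.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

# Synthetic 2-class data: most of it treated as "unlabelled"
x = rng.normal(size=(1000, 20)).astype("float32")
y = (x[:, 0] + x[:, 1] > 0).astype("int32")
x_lab, y_lab = x[:50], y[:50]   # small labelled set
x_unlab = x[50:]                # labels withheld

def make_model():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# 1. Train on the small labelled set only
model = make_model()
model.fit(x_lab, y_lab, epochs=20, verbose=0)

# 2. Predict on unlabelled data; keep only confident predictions as pseudo-labels
probs = model.predict(x_unlab, verbose=0)
confident = probs.max(axis=1) > 0.95
x_pseudo, y_pseudo = x_unlab[confident], probs[confident].argmax(axis=1)

# 3. Retrain on labelled + pseudo-labelled data
x_aug = np.concatenate([x_lab, x_pseudo])
y_aug = np.concatenate([y_lab, y_pseudo])
model = make_model()
model.fit(x_aug, y_aug, epochs=20, verbose=0)
print("pseudo-labelled examples used:", int(confident.sum()))
```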
Introduction to Deep Learning and TensorFlow

This workshop is intended to run as an instructor guided live event and consists of two parts. Each part consists of a lecture and a hands-on coding exercise.
Part 1 - Introduction to Deep Learning and TensorFlow
Part 2 - Introduction to Convolutional Neural Networks
The Powerpoints contain...

Keywords: Deep learning, convolutional neural network, tensorflow, Machine learning

Resource type: presentation, tutorial

Introduction to Deep Learning and TensorFlow https://dresa.org.au/materials/introduction-to-deep-learning-and-tensorflow This workshop is intended to run as an instructor guided live event and consists of two parts. Each part consists of a lecture and a hands-on coding exercise. Part 1 - Introduction to Deep Learning and TensorFlow Part 2 - Introduction to Convolutional Neural Networks The Powerpoints contain the lecture slides, while the Jupyter notebooks (.ipynb) contain the hands-on coding exercises. This workshop is an introduction to how deep learning works and how you could create a neural network using TensorFlow v2. We start by learning the basics of deep learning including what a neural network is, how information passes through the network, and how the network learns from data through the automated process of gradient descent. Workshop attendees would build, train and evaluate a neural network using a cloud GPU (Google Colab). In part 2, we look at image data and how we could train a convolutional neural network to classify images. Workshop attendees will extend their knowledge from the first part to design, train and evaluate this convolutional neural network. datascienceplatform@monash.edu Deep learning, convolutional neural network, tensorflow, Machine learning
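In the spirit of the notebooks described above (but not taken from them), this minimal sketch builds, trains, and evaluates a small convolutional network on the MNIST digits with TensorFlow v2/Keras. The architecture and hyperparameters are illustrative choices only; it runs on CPU or on a Colab GPU.

```python
# A minimal sketch: build, train, and evaluate a small CNN on MNIST.
# Illustrative only; the workshop notebooks are the authoritative version.
import tensorflow as tf

# Load and scale the MNIST digits (downloads on first use)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add channel dim, scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Gradient descent happens inside fit(); evaluate() reports accuracy on unseen data
model.fit(x_train, y_train, epochs=2, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test, verbose=0))
```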
Use QueryPic to visualise searches in Trove's digitised newspapers (part 2)

This video shows how you can construct and visualise more complex searches for digitised newspaper articles in Trove using QueryPic (see part 1 for the basics). This includes limiting the date range of your query, and changing the time...

Keywords: Trove, GLAM Workbench, visualisation, newspapers, HASS

Resource type: video

Use QueryPic to visualise searches in Trove's digitised newspapers (part 2) https://dresa.org.au/materials/use-querypic-to-visualise-searches-in-trove-s-digitised-newspapers-part-2 This video shows how you can construct and visualise more complex searches for digitised newspaper articles in Trove using [QueryPic](https://glam-workbench.net/trove-newspapers/#querypic) (see part 1 for the basics). This includes limiting the date range of your query, and changing the time scale to zoom in and out of your search results. The GLAM Workbench is a collection of tools, examples, tutorials, and apps that help you make use of collection data from GLAM organisations (Galleries, Libraries, Archives, and Museums). See: https://glam-workbench.net/ Tim Sherratt (tim@timsherratt.org and @wragge on Twitter) Trove, GLAM Workbench, visualisation, newspapers, HASS ugrad masters phd ecr researcher
Use QueryPic to visualise searches in Trove's digitised newspapers (part 1)

This video demonstrates how to use the GLAM Workbench to visualise searches for digitised newspaper articles in Trove. Using the latest version of QueryPic, we can explore the complete result set, showing how the number of matching articles...

Keywords: Trove, GLAM Workbench, visualisation, newspapers, HASS

Resource type: video

Use QueryPic to visualise searches in Trove's digitised newspapers (part 1) https://dresa.org.au/materials/use-querypic-to-visualise-searches-in-trove-s-digitised-newspapers-part-1 This video demonstrates how to use the GLAM Workbench to visualise searches for digitised newspaper articles in Trove. Using the latest version of [QueryPic](https://glam-workbench.net/trove-newspapers/#querypic), we can explore the complete result set, showing how the number of matching articles changes over time. We can even compare queries to visualise changes in language or technology. It's a great way to start exploring the possibilities of GLAM data. The GLAM Workbench is a collection of tools, examples, tutorials, and apps that help you make use of collection data from GLAM organisations (Galleries, Libraries, Archives, and Museums). See: https://glam-workbench.net/ Tim Sherratt (tim@timsherratt.org & @wragge on Twitter) Trove, GLAM Workbench, visualisation, newspapers, HASS ugrad masters ecr researcher
Galaxy Training

Galaxy is a hosted web-accessible platform that lets you conduct accessible, reproducible, and transparent computational biological research. It is an international, community driven effort to make it easy for life scientists to analyse their data for free and without the need for programmatic...

Keywords: Galaxy Australia, Galaxy Project, Bioinformatics, Data analysis

Galaxy Training https://dresa.org.au/materials/galaxy-training Galaxy is a hosted web-accessible platform that lets you conduct accessible, reproducible, and transparent computational biological research. It is an international, community driven effort to make it easy for life scientists to analyse their data for free and without the need for programmatic skills. This is a collection of tutorials developed and maintained by the worldwide Galaxy community that show you how to analyse a variety of biological data using Galaxy. Melissa (melissa@biocommons.org.au) Galaxy Australia, Galaxy Project, Bioinformatics, Data analysis
Research Data Governance

This video contains key information for those who make research data-related decisions. It will help project leaders to start investigating ways to develop their own data governance policy, roles and responsibilities and procedures with the input of appropriate stakeholders.

**Cite...

Keywords: data governance, research data

Resource type: video

Research Data Governance https://dresa.org.au/materials/research-data-governance This video contains key information for those who make research data-related decisions. It will help project leaders to start investigating ways to develop their own data governance policy, roles and responsibilities and procedures with the input of appropriate stakeholders. **Cite as** Australian Research Data Commons. (2021, June 30). Research Data Governance. Zenodo. https://doi.org/10.5281/zenodo.5044585 ARDC contact: https://ardc.edu.au/contact-us Max Wilkinson Shannon Callaghan Jo Savill Kristan Kang Kerry Levett Keith Russell Natasha Simons data governance, research data ecr researcher support
How can Software Containers help your Research?

This video explains software containers to a research audience. It is an introduction to why containers are beneficial for research. These benefits are standardisation, portability, reliability and reproducibility.

Software Containers in research are a solution that addresses the challenge of...

Keywords: Containers, software containers, reproducibility, replicable computational environment, software, research, reusable, cloud, standardisation

Resource type: video

How can Software Containers help your Research? https://dresa.org.au/materials/how-can-software-containers-help-your-research This video explains software containers to a research audience. It is an introduction to why containers are beneficial for research. These benefits are standardisation, portability, reliability and reproducibility. Software Containers in research are a solution that addresses the challenge of providing a replicable computational environment and supports reproducibility of research results. Understanding the concept of software containers enables researchers to better communicate their research needs with their colleagues and other researchers using and developing containers. **Cite as** Australian Research Data Commons. (2021, July 26). How can software containers help your research?. Zenodo. https://doi.org/10.5281/zenodo.5091260 Contact us: https://ardc.edu.au/contact-us/ Containers, software containers, reproducibility, replicable computational environment, software, research, reusable, cloud, standardisation phd ecr researcher support
Merit Allocation Training for 2022

This merit allocation training session provides critical information for researchers considering applying for time on Pawsey’s new Setonix supercomputer in 2022.

Keywords: supercomputer, supercomputing, merit allocation, allocation

Resource type: video

Merit Allocation Training for 2022 https://dresa.org.au/materials/merit-allocation-training-for-2022 This merit allocation training session provides critical information for researchers considering applying for time on Pawsey’s new Setonix supercomputer in 2022. training@pawsey.org.au supercomputer, supercomputing, merit allocation, allocation
ARDC FAIR Data 101 self-guided

FAIR Data 101 v3.0 is a self-guided course covering the FAIR Data principles.

The FAIR Data 101 virtual course was designed and delivered by the ARDC Skilled Workforce Program twice in 2020 and has now been reworked as a self-guided course.

Keywords: training material, FAIR data, research data, data management, FAIR

Resource type: presentation, quiz, activity

ARDC FAIR Data 101 self-guided https://dresa.org.au/materials/ardc-fair-data-101-self-guided FAIR Data 101 v3.0 is a self-guided course covering the FAIR Data principles. The FAIR Data 101 virtual course was designed and delivered by the ARDC Skilled Workforce Program twice in 2020 and has now been reworked as a self-guided course. ARDC Contact us: https://ardc.edu.au/contact-us/ training material, FAIR data, research data, data management, FAIR phd ecr researcher support
Software publishing, licensing, and citation

A short presentation intended for reuse; includes speaker notes.

Making software citable using a code repository, an ORCID and a licence.

Cite as
Liffers, Matthias. (2021, July 12). Software publishing, licensing, and citation. Zenodo. https://doi.org/10.5281/zenodo.5091717

Keywords: software citation, software publishing, software registry, software repository, research software

Resource type: presentation

Software publishing, licensing, and citation https://dresa.org.au/materials/software-publishing-licensing-and-citation A short presentation intended for reuse; includes speaker notes. Making software citable using a code repository, an ORCID and a licence. **Cite as** Liffers, Matthias. (2021, July 12). Software publishing, licensing, and citation. Zenodo. https://doi.org/10.5281/zenodo.5091717 ARDC Contact us: https://ardc.edu.au/contact-us/ software citation, software publishing, software registry, software repository, research software phd ecr researcher support
WEBINAR: Getting started with deep learning

This record includes training materials associated with the Australian BioCommons webinar ‘Getting started with deep learning’. This webinar took place on 21 July 2021.

Are you wondering what deep learning is and how it might be useful in your research? This high level overview introduces...

Keywords: Deep learning, Bioinformatics, Machine learning

Resource type: video, presentation

WEBINAR: Getting started with deep learning https://dresa.org.au/materials/webinar-getting-started-with-deep-learning This record includes training materials associated with the Australian BioCommons webinar ‘Getting started with deep learning’. This webinar took place on 21 July 2021. Are you wondering what deep learning is and how it might be useful in your research? This high level overview introduces deep learning ‘in a nutshell’ and provides tips on which concepts and skills you will need to know to build a deep learning application. The presentation also provides pointers to various resources you can use to get started in deep learning. The webinar was followed by a short Q&A session. Materials are shared under a Creative Commons Attribution 4.0 International agreement unless otherwise specified and were current at the time of the event. Files and materials included in this record: Event metadata (PDF): Information about the event including description, event URL, learning objectives, prerequisites, technical requirements etc. Index of training materials (PDF): List and description of all materials associated with this event including the name, format, location and a brief description of each file. Getting Started with Deep Learning - Slides (PDF): Slides used in the presentation Materials shared elsewhere: A recording of the webinar is available on the Australian BioCommons YouTube Channel: https://youtu.be/I1TmpnZUuiQ Melissa Burke (melissa@biocommons.org.au) Deep learning, Bioinformatics, Machine learning
ARDC Guide to making Software Citable

A short guide to making software citable using a code repository, an ORCID and a licence.

Cite as
Liffers, Matthias, & Honeyman, Tom. (2021). ARDC Guide to making software citable. Zenodo. https://doi.org/10.5281/zenodo.5003989

Keywords: software citation, software publishing, software registry, software repository, research software

Resource type: guide

ARDC Guide to making Software Citable https://dresa.org.au/materials/ardc-guide-to-making-software-citable A short guide to making software citable using a code repository, an ORCID and a licence. **Cite as** Liffers, Matthias, & Honeyman, Tom. (2021). ARDC Guide to making software citable. Zenodo. https://doi.org/10.5281/zenodo.5003989 ARDC Contact us: https://ardc.edu.au/contact-us/ software citation, software publishing, software registry, software repository, research software phd ecr researcher support
WEBINAR: Getting started with command line bioinformatics

This record includes training materials associated with the Australian BioCommons webinar ‘Getting started with command line bioinformatics’. This webinar took place on 22 June 2021.

Bioinformatics skills are in demand like never before and biologists are stepping up to the challenge of...

Keywords: Command line, Bioinformatics

Resource type: video, presentation

WEBINAR: Getting started with command line bioinformatics https://dresa.org.au/materials/webinar-getting-started-with-command-line-bioinformatics This record includes training materials associated with the Australian BioCommons webinar ‘Getting started with command line bioinformatics’. This webinar took place on 22 June 2021. Bioinformatics skills are in demand like never before and biologists are stepping up to the challenge of learning to analyse large and ever growing datasets. Learning how to use the command line can open up many options for data analysis but getting started can be a little daunting for those without a background in computer science. Parice Brandies and Carolyn Hogg have recently put together ten simple rules for getting started with command-line bioinformatics to help biologists begin their computational journeys. In this webinar, Parice walks you through their hints and tips for getting started with the command line. She covers topics like learning tech speak, evaluating your data and workflows, assessing computational requirements, computing options, the basics of software installation, curating and testing scripts, a bit of bash and keeping good records. The webinar was followed by a short Q&A session. The slides were created by Parice Brandies and are based on the publication ‘Ten simple rules for getting started with command-line bioinformatics’ (https://doi.org/10.1371/journal.pcbi.1008645). The slides are shared under a Creative Commons Attribution 4.0 International licence unless otherwise specified and were current at the time of the webinar. **Files and materials included in this record:** Event metadata (PDF): Information about the event including description, event URL, learning objectives, prerequisites, technical requirements etc. Index of training materials (PDF): List and description of all materials associated with this event including the name, format, location and a brief description of each file. Getting started with command line bioinformatics - slides (PDF): Slides presented during the webinar **Materials shared elsewhere:** A recording of the webinar is available on the Australian BioCommons YouTube Channel https://youtu.be/p7pA4OLB2X4 Melissa (melissa@biocommons.org.au) Command line, Bioinformatics