R for Research
R is quickly gaining popularity as a programming language of choice for statisticians, data scientists and researchers. It has an excellent ecosystem including the powerful RStudio development environment and the Shiny web application framework.
This workshop is an introduction to data...
R for Research
https://intersect.org.au/training/course/r110
https://dresa.org.au/materials/r-for-research
R is quickly gaining popularity as a programming language of choice for statisticians, data scientists and researchers. It has an excellent ecosystem including the powerful RStudio development environment and the Shiny web application framework.
This workshop is an introduction to data structures (DataFrames) and visualisation (using the ggplot2 package) in R. The targeted audience for this workshop is researchers who are already familiar with the basic concepts in programming such as loops, functions, and conditionals.
We teach using RStudio, which allows program code, results, visualisations and documentation to be blended seamlessly.
Join us for a live coding workshop where we write programs that produce results, using the researcher-focused training modules from the highly regarded Software Carpentry Foundation.
Project Management with RStudio
Introduction to Data Structures in R
Introduction to DataFrames in R
Selecting values in DataFrames
Quick introduction to Plotting using the ggplot2 package
Completion of \Learn to Program: R\, or any of the \Learn to Program: Python\, \Learn to Program: MATLAB\ or \Learn to Program: Julia\ courses, is needed to attend this course. If you already have some experience with programming, please check the topics covered in the \Learn to Program: R\ course to ensure that you are familiar with the knowledge needed for this course.
training@intersect.org.au
The Carpentries
R
A showcase of Data Analysis in Python and R: A case study using COVID-19 data
In all fields of research we are being confronted with a deluge of data; data that needs cleaning and transformation to be used in further analysis. This webinar demonstrates the effective use of programming tools for an initial analysis of COVID-19 datasets, with examples using both R and...
A showcase of Data Analysis in Python and R: A case study using COVID-19 data
https://intersect.org.au/training/course/coding002
https://dresa.org.au/materials/a-showcase-of-data-analysis-in-python-and-r-a-case-study-using-covid-19-data
In all fields of research we are being confronted with a deluge of data; data that needs cleaning and transformation to be used in further analysis. This webinar demonstrates the effective use of programming tools for an initial analysis of COVID-19 datasets, with examples using both R and Python.
Cleaning up a dataset for analysis
Using Jupyter lab for interactive analysis
Making the most of the tidyverse (R) and pandas (python)
Simple data visualisation using ggplot (R) and seaborn (python)
Best practices for readable code
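To give a flavour of the kind of code covered, here is a minimal, illustrative Python sketch (not the webinar's own notebook) that cleans a hypothetical COVID-19 CSV with pandas and plots it with seaborn; the file name and column names ("date", "country", "new_cases") are assumptions.

```python
# Illustrative sketch only: clean a hypothetical COVID-19 CSV and visualise it.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("covid_cases.csv", parse_dates=["date"])  # assumed file and columns

# Basic clean-up: drop incomplete rows and keep a couple of countries of interest
df = df.dropna(subset=["date", "country", "new_cases"])
df = df[df["country"].isin(["Australia", "New Zealand"])]

# Aggregate the daily counts to weekly totals per country
weekly = (
    df.set_index("date")
      .groupby("country")["new_cases"]
      .resample("W")
      .sum()
      .reset_index()
)

# Simple visualisation of the cleaned, aggregated data
sns.lineplot(data=weekly, x="date", y="new_cases", hue="country")
plt.ylabel("New cases per week")
plt.tight_layout()
plt.show()
```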
The webinar has no prerequisites.
training@intersect.org.au
Intersect Australia
Python, R
Surveying with Qualtrics
Needing to collect data from people in a structured and intuitive way? Have you thought about using Qualtrics?
Qualtrics is a powerful cloud-based survey tool, ideal for social scientists from all disciplines. This course will introduce the technical components of the whole research...
Surveying with Qualtrics
https://intersect.org.au/training/course/qltrics101
https://dresa.org.au/materials/surveying-with-qualtrics
Needing to collect data from people in a structured and intuitive way? Have you thought about using Qualtrics?
Qualtrics is a powerful cloud-based survey tool, ideal for social scientists from all disciplines. This course will introduce the technical components of the whole research workflow, from building a survey to analysing the results, using Qualtrics. We will explore the numerous design elements available in order to get the most useful results and make life as easy as possible for your respondents.
If your institution has a licence to Qualtrics, then this course is right for you.
Format a sample survey using the Qualtrics online platform
Configure the survey using a range of design features to improve user experience
Decide which distribution channel is right for your needs
Understand the available data analysis and export options in Qualtrics
You must have access to a Qualtrics instance, such as through your university license. Speak to your local university IT or Research Office for assistance in accessing the Qualtrics instance.
training@intersect.org.au
Intersect Australia
Qualtrics
Learn to Program: MATLAB
MATLAB is an incredibly powerful programming environment with a rich set of analysis toolkits. But what if you’re just getting started – with MATLAB and, more generally, with programming?
Nothing beats a hands-on, face-to-face training session to get you past the inevitable syntax errors! ...
Learn to Program: MATLAB
https://intersect.org.au/training/course/matlab101
https://dresa.org.au/materials/learn-to-program-matlab
MATLAB is an incredibly powerful programming environment with a rich set of analysis toolkits. But what if you’re just getting started – with MATLAB and, more generally, with programming?
Nothing beats a hands-on, face-to-face training session to get you past the inevitable syntax errors!
So join us for this live coding workshop where we write programs that produce results, using the researcher-focused training modules from the highly regarded Software Carpentry Foundation.
Introduction to the MATLAB interface for programming
Basic syntax and data types in MATLAB
How to load external data into MATLAB
Creating functions (FUNCTIONS)
Repeating actions and analysing multiple data sets (LOOPS)
Making choices (IF STATEMENTS – CONDITIONALS)
Ways to visualise data in MATLAB
In order to participate, attendees must have a licensed copy of MATLAB installed on their computer. Speak to your local university IT or Research Office for assistance in obtaining a license and installing the software.
No prior experience with programming needed to attend this course.
We strongly recommend attending the Start Coding without Hesitation: Programming Languages Showdown and Thinking like a computer: The Fundamentals of Programming webinars. Recordings of previously delivered webinars can be found \here\.
training@intersect.org.au
The Carpentries
Matlab
Getting started with NVivo for Windows
Does your research see you working through unstructured and non-numerical data? With the ability to collect, store and analyse different data types all in one location, it’s easy to see why NVivo is becoming the tool of choice for many researchers.
NVivo allows researchers to...
Getting started with NVivo for Windows
https://intersect.org.au/training/course/nvivo101
https://dresa.org.au/materials/getting-started-with-nvivo-for-windows
Does your research see you working through unstructured and non-numerical data? With the ability to collect, store and analyse different data types all in one location, it’s easy to see why NVivo is becoming the tool of choice for many researchers.
NVivo allows researchers to easily organise and manage data from a variety of sources, including surveys, interviews, articles, video, email, social media and web content, PDFs and images. Coding your data allows you to discover trends and compare themes as they emerge across different sources and data types. Using NVivo memos and visualisations, combined with the ability to integrate with popular bibliographic tools, you can get your research ready for publication sooner.
Create and organise a qualitative research project in NVivo
Import a range of data sources using NVivo’s integrated tools
Code and classify your data
Format your data to take advantage of NVivo’s auto-coding ability
Use NVivo to discover new themes and trends in research
Visualise relationships and trends in your data
In order to participate, attendees must have a licensed copy of NVivo installed on their computer. Speak to your local university IT or Research Office for assistance in obtaining a license and installing the software.
This course is taught using NVivo 12 Pro for Windows and is not suitable for NVivo for Mac users.
training@intersect.org.au
Intersect Australia
NVivo
Getting Started with NVivo for Mac
Does your research see you working through unstructured and non-numerical data? With the ability to collect, store and analyse different data types all in one location, it’s easy to see why NVivo is becoming the tool of choice for many researchers.
NVivo allows researchers to...
Getting Started with NVivo for Mac
https://intersect.org.au/training/course/nvivo102
https://dresa.org.au/materials/getting-started-with-nvivo-for-mac
Does your research see you working through unstructured and non-numerical data? With the ability to collect, store and analyse different data types all in one location, it’s easy to see why NVivo is becoming the tool of choice for many researchers.
NVivo allows researchers to easily organise and manage data from a variety of sources, including surveys, interviews, articles, video, email, social media and web content, PDFs and images. Coding your data allows you to discover trends and compare themes as they emerge across different sources and data types. Using NVivo memos and visualisations, combined with the ability to integrate with popular bibliographic tools, you can get your research ready for publication sooner.
Create and organise a qualitative research project in NVivo
Import a range of data sources using NVivo’s integrated tools
Code and classify your data
Format your data to take advantage of NVivo’s auto-coding ability
Use NVivo to discover new themes and trends in research
Visualise relationships and trends in your data
In order to participate, attendees must have a licensed copy of NVivo installed on their computer. Speak to your local university IT or Research Office for assistance in obtaining a license and installing the software.
This course is taught using NVivo 12 Pro for Mac and is not suitable for NVivo for Windows users.
training@intersect.org.au
Intersect Australia
NVivo
Learn to Program: Julia
Julia is a high-level, high-performance dynamic programming language with more than 4,000 external libraries available. Julia allows you to range from tight low-level loops and conditionals, up to a high-level programming style, with its performance approaching and often matching the performance...
Learn to Program: Julia
https://intersect.org.au/training/course/julia101
https://dresa.org.au/materials/learn-to-program-julia
Julia is a high-level, high-performance dynamic programming language with more than 4,000 external libraries available. Julia allows you to range from tight low-level loops and conditionals, up to a high-level programming style, with its performance approaching and often matching the performance of the fastest programming languages!
This workshop expects that you are coming to Julia with some experience in the basic concepts of programming in another language. It is designed to help you migrate the basic concepts of programming that you already know to the Julia context.
Join us for this live coding workshop where we write programs that produce results, using Jupyter notebooks, which allow program code, results, visualisations and documentation to be blended seamlessly.
Introduction to the JupyterLab interface for programming
Basic syntax and data types in Julia
How to load external data into Julia
Creating functions (FUNCTIONS)
Repeating actions and analysing multiple data sets (LOOPS)
Making choices (IF STATEMENTS – CONDITIONALS)
Ways to visualise data using the Plots library in Julia
Some experience with the basic concepts of programming in another language is needed to attend this course. It is an intensive course that is designed to help you migrate the basic concepts of programming that you already know to the Julia context in half a day instead of a full day. If you don’t have any prior experience in programming, please consider attending one of the \Learn to Program: Python\, \Learn to Program: R\ or \Learn to Program: MATLAB\ courses prior to this course.
We also strongly recommend attending the Start Coding without Hesitation: Programming Languages Showdown and Thinking like a computer: The Fundamentals of Programming webinars. Recordings of previously delivered webinars can be found \here\.
training@intersect.org.au
Intersect Australia
Julia
Beyond the Basics: Julia
Julia is a high-level, high-performance dynamic programming language with more than 4,000 external libraries available. Julia allows you to range from tight low-level loops and conditionals, up to a high-level programming style, with its performance approaching and often matching the performance...
Beyond the Basics: Julia
https://intersect.org.au/training/course/julia201
https://dresa.org.au/materials/beyond-the-basics-julia
Julia is a high-level, high-performance dynamic programming language with more than 4,000 external libraries available. Julia allows you to range from tight low-level loops and conditionals, up to a high-level programming style, with its performance approaching and often matching the performance of the fastest programming languages!
This workshop explores the more advanced features of functions in Julia, introduces widely used tools within Julia, as well as demonstrates the speed of Julia by benchmarking functions and different styles of scripting within Julia.
Join us for this live coding workshop where we write programs that produce results, using Jupyter notebooks, which allow program code, results, visualisations and documentation to be blended seamlessly.
Understand the role of Types within Julia
Create functions with complex arguments
Demonstrate programming patterns of list comprehension, pipes, and anonymous functions.
Benchmark Julia code and understand how to make it fast
If you already have experience with programming, please check the topics covered in the \Learn to Program: Julia\ course to ensure that you are familiar with the knowledge needed for this course.
training@intersect.org.au
Intersect Australia
Julia
Heurist Tutorials
A set of video tutorials with accompanying walkthroughs for building your first Heurist database and website. The first three tutorials show you how to get started in Heurist. The five subsequent tutorials introduce you to the five main menus in the Heurist interface.
Keywords: Heurist, Data management, Data visualisation, Digital Humanities, Databasing, website
Resource type: tutorial
Heurist Tutorials
https://heuristnetwork.org/tutorials
https://dresa.org.au/materials/heurist-tutorials
A set of video tutorials with accompanying walkthroughs for building your first Heurist database and website. The first three tutorials show you how to get started in Heurist. The five subsequent tutorials introduce you to the five main menus in the Heurist interface.
michael.falk@sydney.edu.au
Falk, Michael
Johnson, Ian
Osmakov, Artem
Heurist, Data management, Data visualisation, Digital Humanities, Databasing, website
mbr
phd
ecr
researcher
support
Network Know-how and Data Handling Workshop
This workshop is a ‘train-the-trainer’ session that covers topics such as jargon busting, network literacy and data movement solutions. The workshop will also provide a peek at some collaborative research tools such as Jupyter Notebooks and CloudStor. You will learn about networks, integrated...
Keywords: Networks, data handling
Resource type: lesson, presentation
Network Know-how and Data Handling Workshop
https://zenodo.org/record/6403757#.Yk-Gl8gza70
https://dresa.org.au/materials/network-know-how-and-data-handling-workshop
This workshop is a ‘train-the-trainer’ session that covers topics such as jargon busting, network literacy and data movement solutions. The workshop will also provide a peek at some collaborative research tools such as Jupyter Notebooks and CloudStor. You will learn about networks, integrated tools, data and storage and where all these things fit in the researcher’s toolkit.
This workshop is targeted at staff who would like to be more confident in giving advice to researchers about the options available to them. It is especially tailored for those with little to no technical knowledge and includes a hands-on component, using basic programming commands, but requires no previous knowledge of programming.
Sara King - sara.king@aarnet.edu.au
King, Sara (orcid: 0000-0003-3199-5592)
Mason, Ingrid (orcid: 0000-0002-0658-6095)
Burke, Melissa (orcid: 0000-0002-5571-8664)
Networks, data handling
ARDC Datacite API Jupyter notebook
This Jupyter notebook presents a low-barrier entry to using the DataCite REST API to mint, update, publish, and delete DOIs and their associated metadata.
It was designed specifically not to use any third-party libraries, so that it can be reused in almost any Jupyter notebook environment.
Code...
Keywords: jupyter, notebook, DataCite, api, python, metadata, DOI, training material
ARDC Datacite API Jupyter notebook
https://zenodo.org/record/5574653
https://dresa.org.au/materials/ardc-datacite-api-jupyter-notebook
This Jupyter notebook presents a low-barrier entry to using the DataCite REST API to mint, update, publish, and delete DOIs and their associated metadata.
It was designed specifically not to use any third-party libraries, so that it can be reused in almost any Jupyter notebook environment.
Code is presented alongside human readable comments that explain the use of each component of the notebook.
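As an illustration of the approach (a sketch, not the notebook itself), a draft DOI can be created against the DataCite test API using only the Python standard library; the repository ID, password and prefix below are placeholders you would replace with your own test credentials.

```python
# Illustrative sketch: create a draft DOI via the DataCite REST API with the
# standard library only (no third-party packages).
import base64
import json
import urllib.request

REPOSITORY_ID = "ABCD.EXAMPLE"   # placeholder repository ID
PASSWORD = "changeme"            # placeholder password
PREFIX = "10.80000"              # placeholder test DOI prefix

payload = {
    "data": {
        "type": "dois",
        "attributes": {
            "prefix": PREFIX,    # DataCite chooses the suffix for a draft DOI
            "titles": [{"title": "Example dataset"}],
        },
    }
}

auth = base64.b64encode(f"{REPOSITORY_ID}:{PASSWORD}".encode()).decode()
request = urllib.request.Request(
    "https://api.test.datacite.org/dois",        # test endpoint, not production
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/vnd.api+json",
        "Authorization": f"Basic {auth}",
    },
    method="POST",
)

with urllib.request.urlopen(request) as response:
    doi = json.loads(response.read())["data"]["id"]
    print("Minted draft DOI:", doi)
```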
contact@ardc.edu.au
Liffers, Matthias (orcid: 0000-0002-3639-2080)
jupyter, notebook, DataCite, api, python, metadata, DOI, training material
The Living Book of Digital Skills
The Living Book of Digital Skills (You never knew you needed until now) is a living, open source online guide to 'modern not-quite-technical computer skills' for researchers and the broader academic community.
A collaboration between Australia's Academic Research Network (AARNet) and the...
Keywords: digital skills, digital dexterity, community, open source
Resource type: guide
The Living Book of Digital Skills
https://aarnet.gitbook.io/digital-skills-gitbook-1/
https://dresa.org.au/materials/the-living-book-of-digital-skills
*The Living Book of Digital Skills (You never knew you needed until now)* is a living, open source online guide to 'modern not-quite-technical computer skills' for researchers and the broader academic community.
A collaboration between Australia's Academic Research Network (AARNet) and the Council of Australian University Librarians (CAUL), this book is the creation of the CAUL Digital Dexterity Champions and their communities.
**Contributing to the Digital Skills GitBook**
The Digital Skills GitBook is an open source project and like many projects on GitHub we welcome your contributions.
If you have knowledge or expertise on one of our [requested topics](https://aarnet.gitbook.io/digital-skills-gitbook-1/requested-articles), we would love you to write an article for the book. Please let us know what you'd like to write about via our [contributor form](https://github.com/AARNet/Digital-Skills-GitBook/issues/new?assignees=sarasrking&labels=contributors&template=contributor-form.yml&title=Contributor+form%3A+).
There are other ways to contribute too. For example, you might:
* have a great idea for a new topic to be included in one of our chapters (make a new page)
* notice some information that’s out-of-date or that could be explained better (edit a page)
* come across something in the GitBook that’s not working as it should be (submit an issue)
Sara King - sara.king@aarnet.edu.au
Sara King
Miah de Francesch
Emma Chapman
Katie Mills
Ruth Cameron
digital skills, digital dexterity, community, open source
ugrad
masters
mbr
phd
ecr
researcher
support
Create a website resume
Written for the Qld Research Bazaar conference 2021, this self-paced lesson breaks down how to use GitHub Pages to make a resume, with a simple template to start off with. It discusses how to use Markdown and minimal HTML to customise the template, and offers explanations on how the...
Keywords: personal development, website
Resource type: tutorial, guide
Create a website resume
https://amandamiotto.github.io/ResumeLesson/HowIMadeThis
https://dresa.org.au/materials/create-a-website-resume
Written for the Qld Research Bazaar conference 2021, this self-paced lesson breaks down how to use GitHub Pages to make a resume, with a simple template to start off with. It discusses how to use Markdown and minimal HTML to customise the template, and offers explanations on how the components work together.
a.miotto@griffith.edu.au
Amanda Miotto
personal development, website
10 Reproducible Research things - Building Business Continuity
The idea that you can duplicate an experiment and get the same conclusion is the basis for all scientific discoveries. Reproducible research is data analysis that starts with the raw data and offers a transparent workflow to arrive at the same results and conclusions. However, not all studies are...
Keywords: reproducibility, data management
Resource type: tutorial, video
10 Reproducible Research things - Building Business Continuity
https://guereslib.github.io/ten-reproducible-research-things/
https://dresa.org.au/materials/9-reproducible-research-things-building-business-continuity
The idea that you can duplicate an experiment and get the same conclusion is the basis for all scientific discoveries. Reproducible research is data analysis that starts with the raw data and offers a transparent workflow to arrive at the same results and conclusions. However, not all studies are replicable, due to a lack of information about the process. Therefore, reproducibility in research is extremely important.
Researchers genuinely want to make their research more reproducible, but sometimes don’t know where to start and often don’t have the time to investigate or establish methods for how reproducible research can speed up everyday work. We aim for the philosophy “Be better than you were yesterday”. Reproducibility is a process, and we highlight that there is no expectation to go from beginner to expert in a single workshop. Instead, we offer some steps you can take along the reproducibility path, following our Steps to Reproducible Research self-paced program.
Video:
https://www.youtube.com/watch?v=bANTr9RvnGg
Tutorial:
https://guereslib.github.io/ten-reproducible-research-things/
a.miotto@griffith.edu.au; s.stapleton@griffith.edu.au; i.jennings@griffith.edu.au;
Amanda Miotto
Julie Toohey
Sharron Stapleton
Isaac Jennings
reproducibility, data management
masters
phd
ecr
researcher
support
Data Storytelling
Nowadays, more information is created than our audience could possibly analyse on their own! A study by Stanford professor Chip Heath found that during the recall of speeches, 63% of people remember stories and how they made them feel, but only 5% remember a single statistic. So, you should convert...
Keywords: data storytelling, data visualisation
Data Storytelling
https://griffithunilibrary.github.io/data-storytelling/
https://dresa.org.au/materials/data-storytelling
Nowadays, more information is created than our audience could possibly analyse on their own! A study by Stanford professor Chip Heath found that during the recall of speeches, 63% of people remember stories and how they made them feel, but only 5% remember a single statistic. So, you should convert your insights and discoveries from data into stories to share with non-experts in a language they understand. But how?
This tutorial helps you construct stories that incite an emotional response and create meaning and understanding for the audience by applying data storytelling techniques.
m.yamaguchi@griffith.edu.au
a.miotto@griffith.edu.au
Masami Yamaguchi
Amanda Miotto
Brett Parker
data storytelling, data visualisation
support
masters
phd
researcher
Porting the multi-GPU SELF-Fluids code to HIPFort
In this presentation, Dr. Joseph Schoonover of Fluid Numerics LLC shares his experience porting SELF-Fluids from multi-GPU CUDA-Fortran to multi-GPU HIPFort.
The presentation covers the design principles and roadmap for SELF and the strategy to port from...
Keywords: AMD, GPUs, supercomputer, supercomputing
Resource type: presentation
Porting the multi-GPU SELF-Fluids code to HIPFort
https://docs.google.com/presentation/d/1JUwFkrHLx5_hgjxsix8h498_YqvFkkcefNYbu-DsHio/edit#slide=id.g10626504d53_0_0
https://dresa.org.au/materials/porting-the-multi-gpu-self-fluids-code-to-hipfort
In this presentation, Dr. Joseph Schoonover of Fluid Numerics LLC shares his experience porting SELF-Fluids from multi-GPU CUDA-Fortran to multi-GPU HIPFort.
The presentation covers the design principles and roadmap for SELF and the strategy to port from Nvidia-only platforms to AMD & Nvidia GPUs. Also discussed are the hurdles encountered along the way and considerations for developing multi-GPU accelerated applications in Fortran.
SELF is an object-oriented Fortran library that supports the implementation of Spectral Element Methods for solving partial differential equations. SELF-Fluids is an implementation of SELF that solves the compressible Navier Stokes equations on CPU only and GPU accelerated compute platforms using the Discontinuous Galerkin Spectral Element Method. The SELF API is designed based on the assumption that SEM developers and researchers need to be able to implement derivatives in 1-D and divergence, gradient, and curl in 2-D and 3-D on scalar, vector, and tensor functions using spectral collocation, continuous Galerkin, and discontinuous Galerkin spectral element methods.
The discussion is placed in the context of the Exascale era, where we are faced with a zoo of available compute hardware. Because of this, SELF routines provide support for GPU acceleration through AMD’s HIP and support for multi-core, multi-node, and multi-GPU platforms with MPI.
training@pawsey.org.au
Joe Schoonover
AMD, GPUs, supercomputer, supercomputing
Embracing new solutions for in-situ visualisation
This PPT was used by Jean Favre, senior visualisation software engineer at CSCS, the Swiss National Supercomputing Centre during his presentation at P'Con '21 (Pawsey's first PaCER Conference).
This material discusses the upcoming release of ParaView v5.10, a leading scientific visualisation...
Keywords: ParaView, GPUs, supercomputer, supercomputing, visualisation, data visualisation
Resource type: presentation
Embracing new solutions for in-situ visualisation
https://github.com/jfavre/InSitu/blob/master/InSitu-Revisited.pdf
https://dresa.org.au/materials/embracing-new-solutions-for-in-situ-visualisation
This PPT was used by Jean Favre, senior visualisation software engineer at CSCS, the Swiss National Supercomputing Centre during his presentation at P'Con '21 (Pawsey's first PaCER Conference).
This material discusses the upcoming release of ParaView v5.10, a leading scientific visualisation application. In this release ParaView consolidates its implementation of the Catalyst API, a specification developed for simulations and scientific data producers to analyse and visualise data in situ.
The material reviews some of the terminology and issues of different in-situ visualisation scenarios, then reviews early Data Adaptors for tight-coupling of simulations and visualisation solutions. This is followed by an introduction of Conduit, an intuitive model for describing hierarchical scientific data. Both ParaView-Catalyst and Ascent use Conduit’s Mesh Blueprint, a set of conventions to describe computational simulation meshes.
Finally, the materials present CSCS’ early experience in adopting ParaView-Catalyst and Ascent via two concrete examples of instrumentation of some proxy numerical applications.
training@pawsey.org.au
Jean Favre
ParaView, GPUs, supercomputer, supercomputing, visualisation, data visualisation
HPC file systems and what users need to consider for appropriate and efficient usage
Three videos on miscellaneous aspects of HPC usage - useful reference for new users of HPC systems.
1 – General overview of different file systems that might be available on HPC. The video goes through shared file systems such as /home and /scratch, local compute node file systems (local...
Keywords: HPC, high performance computer, File systems
Resource type: video, presentation
HPC file systems and what users need to consider for appropriate and efficient usage
https://www.youtube.com/watch?v=cNW7F9V1plA&list=PLjlLx279X4yO62jHF4rd7I9iEfbnz3Ts1
https://dresa.org.au/materials/hpc-file-systems-and-what-users-need-to-consider-for-appropriate-and-efficient-usage
Three videos on miscellaneous aspects of HPC usage - useful reference for new users of HPC systems.
1 – General overview of different file systems that might be available on HPC. The video goes through shared file systems such as /home and /scratch, local compute node file systems (local scratch or $TMPDIR) and storage file system. It outlines what users need to consider if they wish to use any of these in their workflows.
2 – Overview of the different directories that might be present on HPC. These could include /home, /scratch, /opt, /lib and lib64, /sw and others.
3 – Overview of the Message-of-the-day file and the message that is displayed to users every time they log in. This displays info about general help and often current problems or upcoming outages.
QCIF Training (training@qcif.edu.au)
Marlies Hankel
HPC, high performance computer, File systems
Basic Linux/Unix commands
A series of eight videos (each between 5 and 10 minutes long) following the content of the Software Carpentry workshop "The Unix Shell".
Sessions 1, 2 and 3 provide instructions on the minimal level of Linux/Unix commands recommended for new...
Keywords: HPC, high performance computer, Unix, Linux, Software Carpentry
Resource type: video, guide
Basic Linux/Unix commands
https://www.youtube.com/playlist?list=PLjlLx279X4yP5GodfbqQTJuJ1S9EJU3GM
https://dresa.org.au/materials/basic-linux-unix-commands
A series of eight videos (each between 5 and 10 minutes long) following the content of the Software Carpentry workshop ["The Unix Shell"](https://swcarpentry.github.io/shell-novice/).
Sessions 1, 2 and 3 provide instructions on the minimal level of Linux/Unix commands recommended for new users of HPC.
1 – An overview of how to find out where a user is in the filesystem, list the files there, and how to get help on Unix commands
2 – How to move around the file system and change into other directories
3 – Explains the difference between an absolute and relative path
4 – Overview of how to create new directories, and to create and edit new files with nano
5 – How to use the vi editor to edit files
6 – Overview of file viewers available
7 – How to copy and move files and directories
8 – How to remove files and directories
Further details and exercises with solutions can be found on the Software Carpentry "The Unix Shell" page (https://swcarpentry.github.io/shell-novice/)
QCIF Training (training@qcif.edu.au)
Marlies Hankel
HPC, high performance computer, Unix, Linux, Software Carpentry
Transferring files and data
A short video outlining the basics on how to use FileZilla to establish a secure file transfer protocol (sftp) connection to HPC to use a drag and drop interface to transfer files between the HPC and a desktop computer.
Keywords: sftp, file transfer, HPC, high performance computer
Resource type: video, guide
Transferring files and data
https://www.youtube.com/watch?v=9ABMxcKqfkQ&list=PLjlLx279X4yP3eTLu0S6nOt0HQ7XRf6WF
https://dresa.org.au/materials/transferring-files-and-data
A short video outlining the basics on how to use FileZilla to establish a secure file transfer protocol (sftp) connection to HPC to use a drag and drop interface to transfer files between the HPC and a desktop computer.
QCIF Training (training@qcif.edu.au)
Marlies Hankel
sftp, file transfer, HPC, high performance computer
Connecting to HPC
A series of three short videos introducing how to use PuTTY to connect from a Windows PC to a secure HPC (high performance computing) cluster.
1 - The very basics on how to establish a connection to HPC.
2 - How to add more specific options for the connection to HPC.
3 - How to save the...
Keywords: HPC, high performance computer, ssh
Resource type: video, guide
Connecting to HPC
https://www.youtube.com/playlist?list=PLjlLx279X4yPJBVQuIRhz1CVMfQpTuvZW
https://dresa.org.au/materials/connecting-to-hpc
A series of three short videos introducing how to use PuTTY to connect from a Windows PC to a secure HPC (high performance computing) cluster.
1 - The very basics on how to establish a connection to HPC.
2 - How to add more specific options for the connection to HPC.
3 - How to save the details and options for a connection for future use.
QCIF Training (training@qcif.edu.au)
Marlies Hankel
HPC, high performance computer, ssh
Use the Trove Newspaper & Gazette Harvester (web app version)
This video shows how you can use the web app version of the Trove Newspaper & Gazette Harvester to download large quantities of digitised newspaper articles from Trove. Just give it a search from the Trove web interface, and the harvester will...
Keywords: Trove, newspapers, GLAM Workbench, HASS, Trove Newspaper and Gazette Harvester
Resource type: video
Use the Trove Newspaper & Gazette Harvester (web app version)
https://youtu.be/WKFuJR6lLF4
https://dresa.org.au/materials/use-the-trove-newspaper-gazette-harvester-web-app-version-to-download-large-quantities-of-digitised-articles
This video shows how you can use the web app version of the [Trove Newspaper & Gazette Harvester](https://glam-workbench.net/trove-harvester/) to download large quantities of digitised newspaper articles from Trove. Just give it a search from the Trove web interface, and the harvester will save the metadata of all the articles from the search results in a CSV (spreadsheet) file for further analysis. You can also save the full text of every article, as well as copies of the articles as JPG images, and even PDFs.
The GLAM Workbench is a collection of tools, examples, tutorials, and apps that help you make use of collection data from GLAM organisations (Galleries, Libraries, Archives, and Museums). See: [https://glam-workbench.net/](https://glam-workbench.net/)
Tim Sherratt (tim@timsherratt.org and @wragge on Twitter)
Trove, newspapers, GLAM Workbench, HASS, Trove Newspaper and Gazette Harvester
ugrad
masters
phd
ecr
researcher
support
Research Data Management (RDM) Online Orientation Module (Macquarie University)
This is a self-paced, guided orientation to the essential elements of Research Data Management. It is available for others to use and modify.
The course introduces the following topics: data policies, data sensitivity, data management planning, storage and security, organisation and metadata,...
Keywords: research data, data management, FAIR data, training
Resource type: quiz, activity, other
Research Data Management (RDM) Online Orientation Module (Macquarie University)
https://rise.articulate.com/share/-AWqSPaEI_jTbHwzQHdmQ43R50edrCl0
https://dresa.org.au/materials/macquarie-university-research-data-management-rdm-online
This is a self-paced, guided orientation to the essential elements of Research Data Management. It is available for others to use and modify.
The course introduces the following topics: data policies, data sensitivity, data management planning, storage and security, organisation and metadata, benefits of data sharing, licensing, repositories, and best practice including the FAIR principles.
Embedded activities and examples help extend learner experience and awareness.
The course was designed to assist research students and early career researchers in complying with policies and legislative requirements, understand safe data practices, raise awareness of the benefits of data curation and data sharing (efficiency and impact) and equip them with the required knowledge to plan their data management early in their projects.
This course is divided into five sections
1. Crawl - What is Research Data and why care for it? Policy and legislative requirements. The Research Data Life-cycle. Data Management Planning (~30 mins)
2. Walk - Data sensitivity, identifiability, storage, and security (~60 mins)
3. Run - Record keeping, data retention, file naming, folder structures, version control, metadata, data sharing, open data, licences, data repositories, data citation, and ethics (~75 mins)
4. Jump - Best practice FAIR data principles (~45 mins)
5. Fight - Review - a quiz designed to review and reinforce knowledge (~15 mins)
https://rise.articulate.com/share/-AWqSPaEI_jTbHwzQHdmQ43R50edrCl0 *
*Password: "FAIR"
Any queries or suggestions for course improvement can be directed to the Macquarie University Research Integrity Team: Dr Paul Sou (paul.sou@mq.edu.au) or Dr Shannon Smith (shannon.smith@mq.edu.au). Scorm files can be made available upon request.
Macquarie University
Queensland University of Technology
Shannon Smith
Jennifer Rowland
Mark Hooper
Paul Sou
Vladimir Bubalo
Brian Ballsun-Stanton
research data, data management, FAIR data, training
Deep Learning for Natural Language Processing
This workshop is designed to be instructor led and consists of two parts.
Part 1 consists of a lecture-demo about text processing and a hands-on session for attendees to learn how to clean a dataset.
Part 2 consists of a lecture introducing Recurrent Neural Networks and a hands-on session for...
Keywords: Deep learning, NLP, Machine learning
Resource type: presentation, tutorial
Deep Learning for Natural Language Processing
https://doi.org/10.26180/13100513
https://dresa.org.au/materials/deep-learning-for-natural-language-processing
This workshop is designed to be instructor led and consists of two parts.
Part 1 consists of a lecture-demo about text processing and a hands-on session for attendees to learn how to clean a dataset.
Part 2 consists of a lecture introducing Recurrent Neural Networks and a hands-on session for attendees to train their own RNN.
The Powerpoints contain the lecture slides, while the Jupyter notebooks (.ipynb) contain the hands-on coding exercises.
This workshop introduces natural language as data for deep learning. We discuss various techniques and software packages (e.g. python strings, RegEx, NLTK, Word2Vec) that help us convert, clean, and formalise text data “in the wild” for use in a deep learning model. We then explore the training and testing of a Recurrent Neural Network on the data to complete a real world task. We will be using TensorFlow v2 for this purpose.
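As a taste of the text-cleaning step, here is a minimal Python sketch using only the standard library's re module; the workshop itself goes further with NLTK and Word2Vec, and the example string below is invented purely for illustration.

```python
# Illustrative sketch: basic clean-up of "text in the wild" before deep learning.
import re

raw = "Deep Learning for NLP!!  Visit https://example.org for more <b>info</b>."

text = raw.lower()
text = re.sub(r"https?://\S+", " ", text)   # strip URLs
text = re.sub(r"<[^>]+>", " ", text)        # strip HTML tags
text = re.sub(r"[^a-z\s]", " ", text)       # keep letters only
tokens = text.split()                       # naive whitespace tokenisation

print(tokens)
# ['deep', 'learning', 'for', 'nlp', 'visit', 'for', 'more', 'info']
```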
datascienceplatform@monash.edu
Titus Tang
Deep learning, NLP, Machine learning
Getting Started with Deep Learning
This lecture provides a high level overview of how you could get started with developing deep learning applications. It introduces deep learning in a nutshell and then provides advice relating to the concepts and skill sets you would need to know and have in order to build a deep learning...
Keywords: Deep learning, Machine learning
Resource type: presentation
Getting Started with Deep Learning
https://doi.org/10.26180/15032688
https://dresa.org.au/materials/getting-started-with-deep-learning
This lecture provides a high level overview of how you could get started with developing deep learning applications. It introduces deep learning in a nutshell and then provides advice relating to the concepts and skill sets you would need to know and have in order to build a deep learning application. The lecture also provides pointers to various resources you could use to gain a stronger foothold in deep learning.
This lecture is targeted at researchers who may be complete beginners in machine learning, deep learning, or even programming, but who would like to get into the space to build AI systems hands-on.
datascienceplatform@monash.edu
Titus Tang
Deep learning, Machine learning
Visualisation and Storytelling
This workshop explores how data visualisation techniques could be utilised to better understand data and to communicate research efforts and outcomes. The workshop covers a broad range of techniques from simple and static 2D graphics to advanced 3D visualisations in order to provide a broad...
Keywords: data visualisation, storytelling
Resource type: presentation, tutorial
Visualisation and Storytelling
https://doi.org/10.26180/13100510
https://dresa.org.au/materials/visualisation-and-storytelling
This workshop explores how data visualisation techniques could be utilised to better understand data and to communicate research efforts and outcomes. The workshop covers a broad range of techniques from simple and static 2D graphics to advanced 3D visualisations in order to provide a broad overview of the tools available for data analysis, presentation and storytelling. We explore, among others, animated charts and graphs, web visualisation tools such as scrollytellers, and the possibilities of 3D, interactive, and even immersive visualisations. We use real world, concrete examples along the way in order to tangibly illustrate how these visualisations can be created and how viewers perceive and interact with them. We also introduce the various tools and skill sets you would need to be proficient at presenting your data to the world.
By the conclusion of this workshop, you will have gained familiarity with the various possibilities for presenting your own research data and outcomes. You will have a more intuitive understanding of the strengths and weaknesses of various modes of data visualisation and storytelling, and a starting point for obtaining the right skill sets relevant to developing your visualisations of choice.
datascienceplatform@monash.edu
Daniel Waghorn
Nora Hamacher
Owen Kaluza
data visualisation, storytelling
Semi-Supervised Deep Learning
Modern deep neural networks require large amounts of labelled data to train. Obtaining the required labelled data is often an expensive and time consuming process. Semi-supervised deep learning involves the use of various creative techniques to train deep neural networks on partially labelled...
Keywords: Deep learning, Machine learning, semi-supervised
Resource type: presentation, tutorial
Semi-Supervised Deep Learning
https://doi.org/10.26180/14176805
https://dresa.org.au/materials/semi-supervised-deep-learning
Modern deep neural networks require large amounts of labelled data to train. Obtaining the required labelled data is often an expensive and time consuming process. Semi-supervised deep learning involves the use of various creative techniques to train deep neural networks on partially labelled data. If successful, it allows better training of a model despite the limited amount of labelled data available.
This workshop is designed to be instructor led and covers various semi-supervised learning techniques available in the literature. The workshop consists of a lecture introducing, at a high level, a selection of techniques that are suitable for semi-supervised deep learning. We discuss how these techniques can be implemented and the underlying assumptions they require. The lecture is followed by a hands-on session where attendees implement a semi-supervised learning technique to train a neural network. We observe and discuss the changing performance and behaviour of the network as varying amounts of labelled and unlabelled data are provided to the network during training.
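For illustration, the sketch below shows one widely used semi-supervised technique, pseudo-labelling, in a TensorFlow/Keras style; it is not necessarily the exact technique implemented in the workshop, and the array names and model setup are assumptions.

```python
# Illustrative sketch of pseudo-labelling: train on a small labelled set, label the
# unlabelled pool with the model's own confident predictions, then retrain on both.
# Assumes `model` is a compiled Keras classifier with a softmax output and
# sparse categorical cross-entropy loss, and that the x/y arguments are NumPy arrays.
import numpy as np
import tensorflow as tf

def pseudo_label_round(model, x_labelled, y_labelled, x_unlabelled, threshold=0.95):
    # 1. Train on the small labelled set
    model.fit(x_labelled, y_labelled, epochs=5, verbose=0)

    # 2. Predict on the unlabelled pool and keep only confident predictions
    probs = model.predict(x_unlabelled, verbose=0)
    confident = probs.max(axis=1) >= threshold
    pseudo_x = x_unlabelled[confident]
    pseudo_y = probs.argmax(axis=1)[confident]

    # 3. Retrain on labelled + pseudo-labelled data
    x_all = np.concatenate([x_labelled, pseudo_x])
    y_all = np.concatenate([y_labelled, pseudo_y])
    model.fit(x_all, y_all, epochs=5, verbose=0)
    return model
```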
datascienceplatform@monash.edu
Titus Tang
Deep learning, Machine learning, semi-supervised
Introduction to Deep Learning and TensorFlow
This workshop is intended to run as an instructor guided live event and consists of two parts. Each part consists of a lecture and a hands-on coding exercise.
Part 1 - Introduction to Deep Learning and TensorFlow
Part 2 - Introduction to Convolutional Neural Networks
The Powerpoints contain...
Keywords: Deep learning, convolutional neural network, tensorflow, Machine learning
Resource type: presentation, tutorial
Introduction to Deep Learning and TensorFlow
https://doi.org/10.26180/13100519
https://dresa.org.au/materials/introduction-to-deep-learning-and-tensorflow
This workshop is intended to run as an instructor guided live event and consists of two parts. Each part consists of a lecture and a hands-on coding exercise.
Part 1 - Introduction to Deep Learning and TensorFlow
Part 2 - Introduction to Convolutional Neural Networks
The Powerpoints contain the lecture slides, while the Jupyter notebooks (.ipynb) contain the hands-on coding exercises.
This workshop is an introduction to how deep learning works and how you could create a neural network using TensorFlow v2. We start by learning the basics of deep learning, including what a neural network is, how information passes through the network, and how the network learns from data through the automated process of gradient descent. Workshop attendees will build, train and evaluate a neural network using a cloud GPU (Google Colab).
In part 2, we look at image data and how we could train a convolutional neural network to classify images. Workshop attendees will extend their knowledge from the first part to design, train and evaluate this convolutional neural network.
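As a rough indication of the Part 1 material, the following sketch builds, trains and evaluates a small fully connected network on MNIST with TensorFlow v2 / Keras; the workshop's own notebooks may differ in detail.

```python
# Illustrative sketch: a minimal TensorFlow v2 / Keras classifier on MNIST.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),   # 28x28 image -> 784 inputs
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"), # one output per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)    # train the network via gradient descent
model.evaluate(x_test, y_test)           # evaluate on unseen data
```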
datascienceplatform@monash.edu
Titus Tang
Deep learning, convolutional neural network, tensorflow, Machine learning
Use QueryPic to visualise searches in Trove's digitised newspapers (part 2)
This video shows how you can construct and visualise more complex searches for digitised newspaper articles in Trove using QueryPic (see part 1 for the basics). This includes limiting the date range of your query, and changing the time...
Keywords: Trove, GLAM Workbench, visualisation, newspapers, HASS
Resource type: video
Use QueryPic to visualise searches in Trove's digitised newspapers (part 2)
https://youtu.be/J_LgNL2EM4M
https://dresa.org.au/materials/use-querypic-to-visualise-searches-in-trove-s-digitised-newspapers-part-2
This video shows how you can construct and visualise more complex searches for digitised newspaper articles in Trove using [QueryPic](https://glam-workbench.net/trove-newspapers/#querypic) (see part 1 for the basics). This includes limiting the date range of your query, and changing the time scale to zoom in and out of your search results.
The GLAM Workbench is a collection of tools, examples, tutorials, and apps that help you make use of collection data from GLAM organisations (Galleries, Libraries, Archives, and Museums). See: https://glam-workbench.net/
Tim Sherratt (tim@timsherratt.org and @wragge on Twitter)
Trove, GLAM Workbench, visualisation, newspapers, HASS
ugrad
masters
phd
ecr
researcher
Use QueryPic to visualise searches in Trove's digitised newspapers (part 1)
This video demonstrates how to use the GLAM Workbench to visualise searches for digitised newspaper articles in Trove. Using the latest version of QueryPic, we can explore the complete result set, showing how the number of matching articles...
Keywords: Trove, GLAM Workbench, visualisation, newspapers, HASS
Resource type: video
Use QueryPic to visualise searches in Trove's digitised newspapers (part 1)
https://youtu.be/vdyKNowv9gw
https://dresa.org.au/materials/use-querypic-to-visualise-searches-in-trove-s-digitised-newspapers-part-1
This video demonstrates how to use the GLAM Workbench to visualise searches for digitised newspaper articles in Trove. Using the latest version of [QueryPic](https://glam-workbench.net/trove-newspapers/#querypic), we can explore the complete result set, showing how the number of matching articles changes over time. We can even compare queries to visualise changes in language or technology. It's a great way to start exploring the possibilities of GLAM data.
The GLAM Workbench is a collection of tools, examples, tutorials, and apps that help you make use of collection data from GLAM organisations (Galleries, Libraries, Archives, and Museums). See: https://glam-workbench.net/
Tim Sherratt (tim@timsherratt.org & @wragge on Twitter)
Trove, GLAM Workbench, visualisation, newspapers, HASS
ugrad
masters
ecr
researcher