Pawsey: AWS Quantum 101 Using Amazon Braket
Join us as AWS Quantum Specialists introduce quantum simulators and gate-based quantum computers, before turning to more advanced topics.
Keywords: Pawsey Supercomputing Centre, AWS, quantum, HPC
Pawsey: AWS Quantum 101 Using Amazon Braket
https://www.youtube.com/playlist?list=PLmu61dgAX-abDLr86-bG8zqfBIffu6Eh2
https://dresa.org.au/materials/pawsey-aws-quantum-101-using-amazon-braket
Join us as AWS Quantum Specialists introduce quantum simulators and gate-based quantum computers, before turning to more advanced topics.
training@pawsey.org.au
Pawsey Supercomputing Research Centre
Pawsey Supercomputing Centre, AWS, quantum, HPC
HIP Advanced Workshop
Additional topics on HIP, covering memory management, kernel optimisation, IO optimisation and porting CUDA to HIP.
Keywords: HIP, Pawsey Supercomputing Centre, supercomputing
HIP Advanced Workshop
https://www.youtube.com/playlist?list=PLmu61dgAX-absyWGpFsiw1TD1rgmjHZee
https://dresa.org.au/materials/hip-advanced-workshop
Additional topics on HIP, covering memory management, kernel optimisation, IO optimisation and porting CUDA to HIP.
training@pawsey.org.au
Pawsey Supercomputing Research Centre
HIP, Pawsey Supercomputing Centre, supercomputing
National Transfusion Dataset Secure eResearch Platform (SeRP)/SafeHaven Training
A short training video for NTD users on how to access and use the SeRP once data access is granted.
Keywords: research data, Data analysis, research data management
National Transfusion Dataset Secure eResearch Platform (SeRP)/SafeHaven Training
https://www.transfusiondataset.com/training-and-user-guides
https://dresa.org.au/materials/national-transfusion-dataset-secure-eresearch-platform-serp-safehaven-training
A short training video for NTD users on how to access and use the SeRP once data access is granted.
sphpm.ntd@monash.edu
research data, Data analysis, research data management
National Transfusion Dataset (NTD) Data Access Request process
A guide and resources on requesting data access from the NTD
National Transfusion Dataset (NTD) Data Access Request process
https://www.transfusiondataset.com/data-access
https://dresa.org.au/materials/national-transfusion-dataset-ntd-data-access-request-process
A guide and resources on requesting data access from the NTD
sphpm.ntd@monash.edu
research data
National Transfusion Dataset (NTD) Data Extraction Guide
A guide for hospital sites contributing data to the NTD.
Keywords: data management
National Transfusion Dataset (NTD) Data Extraction Guide
https://www.transfusiondataset.com/site-data-extraction
https://dresa.org.au/materials/national-transfusion-dataset-ntd-data-extraction-guide
A guide for hospital sites contributing data to the NTD.
sphpm.ntd@monash.edu
data management
Presentation of The Australian Companion Animal Registry of Cancers (ACARCinom)
With support from the Australian Research Data Commons (ARDC) through the Australian Data Partnership program, ACARCinom is the first Australia-wide registry of animal cancer occurrences that addresses the gaps in veterinary cancer data registries. ACARCinom aims to make a positive impact on...
Keywords: cancer, data, dog, cat
Presentation of The Australian Companion Animal Registry of Cancers (ACARCinom)
https://zenodo.org/records/10215500
https://dresa.org.au/materials/presentation-of-the-australian-companion-animal-registry-of-cancers-acarcinom
With support from the Australian Research Data Commons (ARDC) through the Australian Data Partnership program, ACARCinom is the first Australia-wide registry of animal cancer occurrences that addresses the gaps in veterinary cancer data registries. ACARCinom aims to make a positive impact on cancer research for our pets. Having reliable data is crucial for understanding the patterns of cancer and for evaluating treatments in both animals and humans.
Five university veterinary schools and Australia's two leading veterinary pathology providers are partnering in the ACARCinom project: The University of Queensland, Queensland University of Technology, the University of Sydney, the University of Adelaide, Murdoch University, Gribbles Veterinary Pathology, and IDEXX.
By uniting the expertise and resources of these institutions, ACARCinom is poised to make significant advancements in understanding and combating cancer in dogs and cats. This project represents a remarkable collaboration that harnesses the power of data to unlock new insights and drive progress in the field of veterinary oncology.
This video explains how the ACARCinom Dashboard works and what its functionalities are. You can access the ACARCinom database at the following link: acarcinom.org.au
Prof Chiara Palmieri
School of Veterinary Science
The University of Queensland
Chiara Palmieri
cancer, data, dog, cat
masters
phd
researcher
support
Fluoroquinolone antibiotics and Aortic Aneurysm or Dissection
The main objective of this project was to provide education on the use of data translated to the OMOP common data model. We aimed to showcase how the Atlas interface tool could be used to generate evidence for a highly relevant and significant research question. The clinical question that was...
Keywords: OMOP, Aortic Aneurysm, Fluoroquinolone antibiotics
Fluoroquinolone antibiotics and Aortic Aneurysm or Dissection
https://ohdsi-australia.org/full_tutorial.pdf
https://dresa.org.au/materials/fluoroquinolone-antibiotics-and-aortic-aneurysm-or-dissection
The main objective of this project was to provide education on the use of data translated to the OMOP common data model. We aimed to showcase how the Atlas interface tool could be used to generate evidence for a highly relevant and significant research question. The clinical question that was used to demonstrate the process revolved around investigating the potential association between the use of fluoroquinolones to treat urinary tract infection and the risk of experiencing aortic aneurysm and dissection within 30 days, 3 months, or 12 months of treatment initiation compared to other commonly used antibiotics. The workshop aimed to describe how data are translated to the OMOP CDM, how cohorts can be derived in these data, how to execute a robust analysis, and lastly, how to interpret the results of the study. Specifically, we described the process of translating Australian medicines dispensing data to the OMOP CDM, including the translation of the Australian Pharmaceutical Benefits Scheme data to the international RxNorm standard vocabulary.
The outcome of the project is an online training resource that highlights the process of study execution from start to finish. This training package will serve as an exemplar for researchers in Australia to unlock the value of their data once it has been translated into the OMOP CDM. The audience for this project was database programmers, researchers, decision-makers, and all those interested in using data to inform healthcare.
Roger Ward, Nicole Pratt
Roger Ward
Nicole Pratt
Christine Hallinan
OMOP, Aortic Aneurysm, Fluoroquinolone antibiotics
Managing Data using Acacia @ Pawsey
Acacia is Pawsey's "warm tier" or project storage. This object store is fully integrated with Setonix, Pawsey's main supercomputer, enabling fast transfer of data for project use.
These short videos introduce this high-speed object storage for hosting research data online.
Acacia is named...
Keywords: data, data skills, Acacia, Pawsey Supercomputing Centre, object storage, File systems
Managing Data using Acacia @ Pawsey
https://www.youtube.com/playlist?list=PLmu61dgAX-aYxrbqtSYHS1ufVZ9xs1AnI
https://dresa.org.au/materials/managing-data-using-acacia-pawsey
Acacia is Pawsey's "warm tier" or project storage. This object store is fully integrated with Setonix, Pawsey's main supercomputer, enabling fast transfer of data for project use.
These short videos introduce this high-speed object storage for hosting research data online.
Acacia is named after Australia’s national floral emblem, the Golden Wattle – Acacia pycnantha.
training@pawsey.org.au
Pawsey Supercomputing Research Centre
data, data skills, Acacia, Pawsey Supercomputing Centre, object storage, File systems
ugrad
masters
phd
ecr
researcher
support
professional
PCon Preparing applications for El Capitan and beyond
As Lawrence Livermore National Laboratory (LLNL) prepares to stand up its next supercomputer, El Capitan, application teams are preparing to pivot to another GPU architecture.
This talk presents how the LLNL application teams made the transition from distributed-memory, CPU-only architectures to...
Keywords: GPUs, supercomputing, HPC, PaCER
PCon Preparing applications for El Capitan and beyond
https://www.youtube.com/watch?v=cj7a7gWgt8o&list=PLmu61dgAX-aZ_aa6SmmExSJtXGS7L_BF9&index=4
https://dresa.org.au/materials/pcon-preparing-applications-for-el-capitan-and-beyond
As Lawrence Livermore National Laboratory (LLNL) prepares to stand up its next supercomputer, El Capitan, application teams are preparing to pivot to another GPU architecture.
This talk presents how the LLNL application teams made the transition from distributed-memory, CPU-only architectures to GPUs. They share institutional best practices and discuss new open-source software products, both as tools for porting and profiling applications and as avenues for collaboration across the computational science community.
Join LLNL's Erik Draeger and Jane Herriman, who presented this talk at Pawsey's PaCER Conference in September 2023.
training@pawsey.org.au
Erik Draeger
Jane Herriman
Pawsey Supercomputing Research Centre
GPUs, supercomputing, HPC, PaCER
masters
phd
researcher
ecr
support
professional
ugrad
OpenCL
Supercomputers make use of accelerators from a variety of different hardware vendors, using devices such as multi-core CPUs, GPUs and even FPGAs. OpenCL is a way for your HPC application to make effective use of heterogeneous computing devices, and to avoid code refactoring for new HPC...
Keywords: supercomputing, Pawsey Supercomputing Centre, CPUs, GPUs, OpenCL, FPGAs
Resource type: activity
OpenCL
https://www.youtube.com/playlist?list=PLmu61dgAX-aa_lk5fby5PjuS49snHpyYL
https://dresa.org.au/materials/opencl
Supercomputers make use of accelerators from a variety of different hardware vendors, using devices such as multi-core CPUs, GPUs and even FPGAs. OpenCL is a way for your HPC application to make effective use of heterogeneous computing devices, and to avoid code refactoring for new HPC infrastructure.
training@pawsey.org.au
Toby Potter
Pawsey Supercomputing Research Centre
Pelagos
Toby Potter
supercomputing, Pawsey Supercomputing Centre, CPUs, GPUs, OpenCL, FPGAs
masters
ecr
researcher
support
AMD Profiling
The AMD profiling workshop covers the AMD suite of tools for development of HPC applications on AMD GPUs.
You will learn how to use the rocprof profiler and trace visualization tool that has long been available as part of the ROCm software suite.
You will also learn how to use the new...
Keywords: supercomputing, performance, GPUs, CPUs, AMD, HPC, ROCm
Resource type: activity
AMD Profiling
https://www.youtube.com/playlist?list=PLmu61dgAX-aaQOCG5Jlw8oLBORJfoQC2o
https://dresa.org.au/materials/amd-profiling
The AMD profiling workshop covers the AMD suite of tools for development of HPC applications on AMD GPUs.
You will learn how to use the rocprof profiler and trace visualization tool that has long been available as part of the ROCm software suite.
You will also learn how to use the new Omnitools - Omnitrace and Omniperf - that were introduced at the end of 2022. Omnitrace is a powerful tracing profiler for both CPU and GPU. It can collect data from a much wider range of sources and includes hardware counters and sampling approaches. Omniperf is a performance analysis tool that can help you pinpoint how your application is performing with a visual view of the memory hierarchy on the GPU as well as reporting the percentage of peak for many different measurements.
training@pawsey.org.au
AMD
Pawsey Supercomputing Research Centre
supercomputing, performance, GPUs, CPUs, AMD, HPC, ROCm
Evaluate Application Performance using TAU and E4S
In this workshop, you learn about the Extreme-scale Scientific Software Stack (E4S) and the TAU Performance System® and its interfaces to other tools and libraries. The workshop includes sample codes that illustrate the different instrumentation and measurement choices.
Topics covered include...
Keywords: supercomputing, TAU, E4S, Performance, ROCm, OpenMP
Resource type: activity
Evaluate Application Performance using TAU and E4S
https://www.youtube.com/playlist?list=PLmu61dgAX-aakuGnuVPiWVaqCLgm3kdRG
https://dresa.org.au/materials/evaluate-application-performance-using-tau-and-e4s
In this workshop, you learn about the Extreme-scale Scientific Software Stack (E4S) and the TAU Performance System® and its interfaces to other tools and libraries. The workshop includes sample codes that illustrate the different instrumentation and measurement choices.
Topics covered include generating performance profiles and traces with memory utilization and headroom, I/O, and interfaces to ROCm, including ROCProfiler and ROCTracer with support for collecting hardware performance data.
The workshop also covers instrumentation of OpenMP programs using OpenMP Tools Interface (OMPT), including support for target offload and measurement of a program’s memory footprint.
During the session, there are hands-on activities on scalable tracing using OTF2 and visualization using the Vampir trace analysis tool. Performance data analysis using ParaProf and PerfExplorer is demonstrated using the performance data management framework (TAUdb) that includes TAU’s performance database.
training@pawsey.org.au
Sameer Shende
Pawsey Supercomputing Research Centre
supercomputing, TAU, E4S, Performance, ROCm, OpenMP
HIP Workshop
The Heterogeneous-computing Interface for Portability (HIP) provides a programming framework for harnessing the compute capabilities of accelerators such as the MI250X GPUs on Setonix.
In this course we focus on the essentials of developing HIP applications with a focus on...
Keywords: HIP, supercomputing, Programming, GPUs, MPI, debugging
Resource type: full-course
HIP Workshop
https://support.pawsey.org.au/documentation/display/US/Pawsey+Training+Resources
https://dresa.org.au/materials/hip-workshop
The Heterogeneous-computing Interface for Portability (HIP) provides a programming framework for harnessing the compute capabilities of accelerators such as the MI250X GPUs on Setonix.
In this course we focus on the essentials of developing HIP applications with a focus on supercomputing.
Agenda
- Introduction to HIP and high level features
- How to build and run applications on Setonix with HIP and MPI
- A complete line-by-line walkthrough of a HIP-enabled application
- Tools and techniques for debugging and measuring the performance of HIP applications
training@pawsey.org.au
Pelagos
Pawsey Supercomputing Research Centre
HIP, supercomputing, Programming, GPUs, MPI, debugging
C/C++ Refresher
The C++ programming language and its C subset are used extensively in research environments. In particular, C++ is the language utilised in the parallel programming frameworks CUDA, HIP, and OpenCL.
This workshop is designed to equip participants with “Survival C++”, an understanding of the basic...
Keywords: supercomputing, C/C++, Programming
Resource type: activity
C/C++ Refresher
https://www.youtube.com/playlist?list=PLmu61dgAX-aYsRsejVfwHVhpPU2381Njg
https://dresa.org.au/materials/c-c-refresher
The C++ programming language and its C subset are used extensively in research environments. In particular, C++ is the language utilised in the parallel programming frameworks CUDA, HIP, and OpenCL.
This workshop is designed to equip participants with “Survival C++”, an understanding of the basic syntax, how information is encoded in binary format, and how to compile and debug C++ software.
training@pawsey.org.au
Pelagos
Pawsey Supercomputing Research Centre
supercomputing, C/C++, Programming
A hands-on introduction to Large Language Models like Bing Chat and ChatGPT
Event run 7 June at the MQ Incubator. Event description:
A two-hour hands-on workshop giving a brief history of the last 4 months of development of "Generative AI."
These tools, these Large Language Models, present both promise and peril -- disruption -- to ways of working and of...
Keywords: Large Language Model, ChatGPT
A hands-on introduction to Large Language Models like Bing Chat and ChatGPT
https://osf.io/rd24y/
https://dresa.org.au/materials/a-hands-on-introduction-to-large-language-models-like-bing-chat-and-chatgpt
Event run 7 June at the MQ Incubator. Event description:
A two-hour hands-on workshop giving a brief history of the last 4 months of development of "Generative AI."
These tools, these Large Language Models, present both promise and peril -- disruption -- to ways of working and of learning. Outside the "hype," these tools are "calculators for words" and allow the same manipulation and reflection of a user's words as a calculator offers for a user's numbers.
The workshop will guide users into using various free and paid tools, and the effective use of Large Language Models through chain of thought prompting.
Remember: an LLM is "Always confident and usually correct."
OSF Description (LLM generated):
This two-hour workshop provides a comprehensive introduction to the world of Large Language Models (LLMs), focusing on the recent advancements in Generative AI. Participants will gain insights into the development and functionality of prominent LLMs such as Bing Chat and ChatGPT. The workshop will delve into the concept of LLMs as "calculators for words," highlighting their potential to revolutionize ways of working and learning.
The session will explore the principles of Prompt Engineering and Transactional Prompting, demonstrating how consistent prompts can yield reliable and reproducible results. Participants will also learn about the practical applications of LLMs, including editing and proofreading papers, generating technical documentation, recipe ideation, and more.
The workshop emphasizes the importance of understanding the terms of use and the responsibilities that come with using these powerful AI tools. By the end of the session, participants will be equipped with the knowledge and skills to effectively use LLMs in various contexts, guided by the mantra that an LLM is "Always confident and usually correct."
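Chain-of-thought prompting, mentioned above, is independent of any particular tool. As a purely illustrative sketch (no real LLM API is called, and `build_prompt` is a hypothetical helper, not part of the workshop materials), assembling such a prompt might look like:

```python
# Hypothetical sketch: chain-of-thought prompting asks the model to
# reason step by step before committing to an answer. This only
# assembles the prompt text you would send; no LLM is contacted.
def build_prompt(question):
    return (
        "Answer the question below. Think step by step, showing your "
        "reasoning, before giving a final answer on its own line.\n\n"
        f"Question: {question}\n"
        "Reasoning:"
    )

print(build_prompt("Which uses less paper: duplex or simplex printing?"))
```

The same prompt template sent repeatedly is also the basis of the "transactional prompting" idea: consistent prompts yield more reproducible results.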
Brian Ballsun-Stanton (brian.ballsun-stanton@mq.edu.au)
Brian Ballsun-Stanton
Large Language Model, ChatGPT
researcher
Introduction to REDCap at Griffith University
This site is designed as a companion to Griffith Library’s Research Data Capture workshops. It can also be treated as a standalone, self-paced tutorial for learning to use REDCap (Research Electronic Data Capture), a secure web application for building and managing online surveys and databases.
Keywords: REDCap, survey instruments
Resource type: tutorial
Introduction to REDCap at Griffith University
https://griffithunilibrary.github.io/redcap-intro/
https://dresa.org.au/materials/introduction-to-redcap-at-griffith-university
This site is designed as a companion to Griffith Library’s Research Data Capture workshops. It can also be treated as a standalone, self-paced tutorial for learning to use REDCap (Research Electronic Data Capture), a secure web application for building and managing online surveys and databases.
y.banens@griffith.edu.au
Yuri Banens
REDCap, survey instruments
mbr
phd
ecr
researcher
support
Introduction to text mining and analysis
In this self-paced workshop you will learn steps to:
- Build data sets: find where and how to gather textual data for your corpus or data set.
- Prepare data for analysis: explore useful processes and tools to prepare and clean textual data for analysis
- Analyse data: identify different...
Keywords: textual training materials
Resource type: tutorial
Introduction to text mining and analysis
https://griffithunilibrary.github.io/intro-text-mining-analysis/
https://dresa.org.au/materials/introduction-to-text-mining-and-analysis
In this self-paced workshop you will learn steps to:
- Build data sets: find where and how to gather textual data for your corpus or data set.
- Prepare data for analysis: explore useful processes and tools to prepare and clean textual data for analysis.
- Analyse data: identify different types of analysis used to interrogate content and uncover new insights.
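The three steps above can be sketched in a few lines of Python. This is a hypothetical toy pipeline using only the standard library, not the workshop's own material; real projects would use dedicated text-mining tools.

```python
import re
from collections import Counter

# 1. Build a data set: a tiny in-memory corpus stands in for text
#    gathered from archives, APIs, or web scraping.
corpus = [
    "The quick brown fox jumps over the lazy dog.",
    "The dog barks; the fox runs away!",
]

# 2. Prepare the data: lowercase the text and strip punctuation so
#    that "Dog." and "dog" count as the same token.
def clean(text):
    return re.sub(r"[^a-z\s]", "", text.lower()).split()

tokens = [word for doc in corpus for word in clean(doc)]

# 3. Analyse: a simple word-frequency count, the starting point for
#    many text-mining analyses.
freq = Counter(tokens)
print(freq.most_common(3))
```

Each stage maps onto one bullet above; swapping in a real corpus and richer cleaning rules changes the details, not the shape of the pipeline.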
s.stapleton@griffith.edu.au; y.banens@griffith.edu.au;
Yuri Banens
Sharron Stapleton
Ben McRae
textual training materials
mbr
phd
ecr
researcher
support
Introducing Computational Thinking
This workshop is for researchers at all career stages who want to understand the uses and the building blocks of computational thinking. This skill is useful for all kinds of problem solving, whether in real life or in computing.
The workshop will not teach computer programming per se. Instead...
Keywords: computational skills, data skills
Resource type: tutorial
Introducing Computational Thinking
https://griffithunilibrary.github.io/intro-computational-thinking/
https://dresa.org.au/materials/introducing-computational-thinking
This workshop is for researchers at all career stages who want to understand the uses and the building blocks of computational thinking. This skill is useful for all kinds of problem solving, whether in real life or in computing.
The workshop will not teach computer programming per se. Instead, it will cover the thought processes involved should you want to learn to program.
s.stapleton@griffith.edu.au
Belinda Weaver
computational skills, data skills
Advanced Data Wrangling with OpenRefine
This online self-paced workshop teaches advanced data wrangling skills including combining datasets, geolocating data, and “what if” exploration using OpenRefine.
Keywords: data skills, data
Resource type: tutorial
Advanced Data Wrangling with OpenRefine
https://griffithunilibrary.github.io/advanced-data-wrangle-2/
https://dresa.org.au/materials/advanced-data-wrangling-with-openrefine
This online self-paced workshop teaches advanced data wrangling skills including combining datasets, geolocating data, and “what if” exploration using OpenRefine.
s.stapleton@griffith.edu.au
Sharron Stapleton
data skills, data
mbr
phd
ecr
researcher
support
professional
Introduction to Data Cleaning with OpenRefine
Learn basic data cleaning techniques in this self-paced online workshop using open data from data.qld.gov.au and the open-source tool OpenRefine (openrefine.org). Learn techniques to prepare messy tabular data for computational analysis. Most relevant to HASS disciplines working with textual data...
Keywords: data skills, Data analysis
Resource type: tutorial
Introduction to Data Cleaning with OpenRefine
https://griffithunilibrary.github.io/data-cleaning-intro/
https://dresa.org.au/materials/introduction-to-data-cleaning-with-openrefine
Learn basic data cleaning techniques in this self-paced online workshop using open data from data.qld.gov.au and the open-source tool OpenRefine (openrefine.org). Learn techniques to prepare messy tabular data for computational analysis. Most relevant to HASS disciplines working with textual data in a structured or semi-structured format.
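OpenRefine itself is a point-and-click tool, but the kind of clean-up it performs is easy to illustrate. The following hypothetical Python snippet (made-up data, standard library only) mirrors three common OpenRefine steps: trimming whitespace, normalising case, and merging near-duplicate values before aggregating.

```python
# Made-up messy rows: note the stray whitespace and inconsistent case,
# which would make "Brisbane" and "brisbane" count as different values.
rows = [
    {"suburb": " Brisbane ", "count": "12"},
    {"suburb": "brisbane",   "count": "7"},
    {"suburb": "Cairns",     "count": "3"},
]

def clean_row(row):
    return {
        # Trim whitespace and normalise case, as OpenRefine's
        # "trim whitespace" and "to titlecase" transforms would.
        "suburb": row["suburb"].strip().title(),
        # Cast numeric text to a real number for computation.
        "count": int(row["count"]),
    }

cleaned = [clean_row(r) for r in rows]

# Aggregate the now-consistent values: both Brisbane spellings merge.
totals = {}
for r in cleaned:
    totals[r["suburb"]] = totals.get(r["suburb"], 0) + r["count"]
print(totals)  # {'Brisbane': 19, 'Cairns': 3}
```

In OpenRefine the same result comes from facets and clustering rather than code, which is why the workshop suits participants who do not program.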
s.stapleton@griffith.edu.au;
Sharron Stapleton
data skills, Data analysis
mbr
phd
ecr
researcher
support
professional
Introduction to Unix
A hands-on workshop covering the basics of the Unix command line interface.
Knowledge of the Unix operating system is fundamental to the use of many popular bioinformatics command-line tools. Whether you choose to run your analyses locally or on a high-performance computing system, knowing...
Keywords: Unix, Command line, Command-line, CLI
Resource type: tutorial
Introduction to Unix
https://www.melbournebioinformatics.org.au/tutorials/tutorials/unix/unix/
https://dresa.org.au/materials/introduction-to-unix
A hands-on workshop covering the basics of the Unix command line interface.
Knowledge of the Unix operating system is fundamental to the use of many popular bioinformatics command-line tools. Whether you choose to run your analyses locally or on a high-performance computing system, knowing your way around a command-line interface is highly valuable. This workshop will introduce you to Unix concepts by way of a series of hands-on exercises.
This workshop is designed for participants with little or no command-line knowledge.
Tools: Standard Unix commands, FileZilla
Topic overview:
Section 1: Getting started
Section 2: Exploring your current directory
Section 3: Making and changing directories
Section 4: Viewing and manipulating files
Section 5: Removing files and directories
Section 6: Searching files
Section 7: Putting it all together
Section 8: Transferring files
Tutorial instructions available here: https://www.melbournebioinformatics.org.au/tutorials/tutorials/unix/unix/
For queries relating to this workshop, contact Melbourne Bioinformatics (bioinformatics-training@unimelb.edu.au).
Find out when we are next running this training as an in-person workshop by visiting the Melbourne Bioinformatics Eventbrite page: https://www.eventbrite.com.au/o/melbourne-bioinformatics-13058846490
Morgan, Steven (orcid: 0000-0001-6038-6126)
Unix, Command line, Command-line, CLI
ugrad
masters
mbr
phd
ecr
researcher
support
professional
Managing Active Research Data
In this train-the-trainer workshop, we will be exploring and discussing methods for active data management.
Participants will become familiar with cloud storage and associated tools and services for managing active research data. Learn how to organise, maintain, store and analyse active data,...
Keywords: RDM Training, CloudStor, cloud
Resource type: lesson
Managing Active Research Data
https://doi.org/10.5281/zenodo.7259746
https://dresa.org.au/materials/managing-active-research-data
In this train-the-trainer workshop, we will be exploring and discussing methods for active data management.
Participants will become familiar with cloud storage and associated tools and services for managing active research data. Learn how to organise, maintain, store and analyse active data, and understand safe and secure ways of sharing and storing data.
Topics such as cloud storage, collaborative editing, versioning and data sharing will be discussed and demonstrated.
Sara King
Sara King
Brian Ballsun-Stanton
RDM Training, CloudStor, cloud
phd
support
masters
ecr
researcher
Principles Aligned Institutionally-Contextualised (PAI-C) RDM Training
This GitHub repository contains resources for an institution to contextualise principles-based RDM training with its own research data management policies, processes and systems.
The adoption of PAI-C across institutions will contribute to a common baseline understanding of RDM...
Keywords: PAI-C, Training, Data Management
Principles Aligned Institutionally-Contextualised (PAI-C) RDM Training
https://github.com/Adrian-W-Chew/PAI-C-RDM-Training
https://dresa.org.au/materials/principles-aligned-institutionally-contextualised-pai-c-rdm-training
This GitHub repository contains resources for an institution to contextualise principles-based RDM training with its own research data management policies, processes and systems.
The adoption of PAI-C across institutions will contribute to a common baseline understanding of RDM across institutions, which in turn will facilitate cross-institutional management of data (e.g. when researchers move between, or collaborate across, institutions).
Dr Adrian W. Chew (w.l.chew@unsw.edu.au)
Dr Adrian W. Chew
Dr Adele Haythornthwaite
Brock Askey
Dr Jacky Cho
Dr Anesh Nair
Dr Kyle Hemming
Iftikhar Hayat
Joanna Dziedzic
Janice Chan
Kaitlyn Houston
Linlin Zhao
Caitlin Savage
Jessica Suna
Dr Emilia Decker
Sharron Stapleton
PAI-C, Training, Data Management
VOSON Lab Code Blog
The VOSON Lab Code Blog is a space to share methods, tips, examples and code. Blog posts provide techniques to construct and analyse networks from various APIs and other online data sources, using the VOSON open-source software and other R-based packages.
Keywords: visualisation, Data analysis, data collections, R software, Social network analysis, social media data, Computational Social Science, quantitative, Text Analytics
Resource type: tutorial, other
VOSON Lab Code Blog
https://vosonlab.github.io/
https://dresa.org.au/materials/voson-lab-code-blog
The VOSON Lab Code Blog is a space to share methods, tips, examples and code. Blog posts provide techniques to construct and analyse networks from various APIs and other online data sources, using the VOSON open-source software and other R-based packages.
robert.ackland@anu.edu.au
visualisation, Data analysis, data collections, R software, Social network analysis, social media data, Computational Social Science, quantitative, Text Analytics
researcher
support
phd
masters
Programming and tidy data analysis in R
A workshop to expand the skill set of someone who has basic familiarity with R. Covers programming constructs such as functions and for-loops, and working with data frames using the dplyr and tidyr packages. Explains the importance of a "tidy" data representation, and goes through common steps...
Keywords: R, Tidyverse, Programming
Resource type: tutorial
Programming and tidy data analysis in R
https://monashdatafluency.github.io/r-progtidy/
https://dresa.org.au/materials/programming-and-tidy-data-analysis-in-r
A workshop to expand the skill set of someone who has basic familiarity with R. Covers programming constructs such as functions and for-loops, and working with data frames using the dplyr and tidyr packages. Explains the importance of a "tidy" data representation, and goes through common steps needed to load data and convert it into a tidy form.
To be taught as a hands-on workshop, typically as two half-days.
Developed by the Monash Bioinformatics Platform and taught as part of the Data Fluency program at Monash University. License is CC BY 4.0. You are free to share and adapt the material so long as attribution is given.
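The workshop teaches the dplyr and tidyr packages in R, but the "tidy" idea itself is language-agnostic: one row per observation, one column per variable. This hypothetical Python sketch (made-up data) performs the same wide-to-long reshaping that tidyr's pivot_longer() does in R.

```python
# A "wide" table: one row per subject, one column per year.
wide = [
    {"subject": "A", "2019": 1.2, "2020": 1.5},
    {"subject": "B", "2019": 0.9, "2020": 1.1},
]

# Tidy ("long") form: one row per observation, with the year as an
# explicit variable column instead of being encoded in column names.
tidy = [
    {"subject": row["subject"], "year": year, "value": row[year]}
    for row in wide
    for year in ("2019", "2020")
]

for obs in tidy:
    print(obs)
```

Once data is in this shape, grouping, filtering, and plotting operations all address "year" and "value" by name, which is exactly why the tidy representation matters.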
Paul Harrison paul.harrison@monash.edu
Paul Harrison
Richard Beare
R, Tidyverse, Programming
phd
ecr
researcher
Linear models in R
A workshop on linear models in R. Learning to use linear models provides a foundation for modelling, estimation, prediction, and statistical testing in R. Many commonly used statistical tests can be performed using linear models. Ideas introduced using linear models are applicable to many of the...
Keywords: R statistics
Resource type: tutorial
Linear models in R
https://monashdatafluency.github.io/r-linear/
https://dresa.org.au/materials/linear-models-in-r
A workshop on linear models in R. Learning to use linear models provides a foundation for modelling, estimation, prediction, and statistical testing in R. Many commonly used statistical tests can be performed using linear models. Ideas introduced using linear models are applicable to many of the more complicated statistical and machine learning models available in R.
To be taught as a hands-on workshop, typically as two half-days.
Developed by the Monash Bioinformatics Platform and taught as part of the Data Fluency program at Monash University. License is CC BY 4.0. You are free to share and adapt the material so long as attribution is given.
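The workshop fits linear models with R's lm(); the core idea, estimating an intercept and slope by least squares, can be worked out by hand. A hypothetical sketch in plain Python on made-up data (the closed-form estimates for simple regression, not the workshop's own code):

```python
# Fit y = a + b*x by least squares on made-up data (roughly y = 2x).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares estimates:
#   b = cov(x, y) / var(x),   a = mean(y) - b * mean(x)
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

print(f"intercept a = {a:.3f}, slope b = {b:.3f}")
```

R's lm(y ~ x) computes the same estimates (by QR decomposition rather than this formula) and then layers standard errors and tests on top, which is where the workshop goes next.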
Paul Harrison paul.harrison@monash.edu
Paul Harrison
R statistics
phd
ecr
researcher
Introduction to R
An introduction to R, for people with zero coding experience.
To be taught as a hands-on workshop, typically as two half-days.
Developed by the Monash Bioinformatics Platform and taught as part of the Data Fluency program at Monash University. License is CC BY 4.0. You are free to share and...
Keywords: R
Resource type: tutorial
Introduction to R
https://monashdatafluency.github.io/r-intro-2/
https://dresa.org.au/materials/introduction-to-r
An introduction to R, for people with zero coding experience.
To be taught as a hands-on workshop, typically as two half-days.
Developed by the Monash Bioinformatics Platform and taught as part of the Data Fluency program at Monash University. Licensed under CC BY 4.0. You are free to share and adapt the material so long as attribution is given.
Paul Harrison paul.harrison@monash.edu
Paul Harrison
R
phd
ecr
researcher
Introduction to Jupyter Notebooks
This workshop will introduce you to Jupyter Notebooks, a digital tool that has exploded in popularity in recent years for those working with data.
You will learn what they are, what they do and why you might like to use them. It is an introductory set of lessons for those who are brand new,...
Keywords: jupyter, Introductory, training material, CloudStor, markdown, Python, R
Resource type: tutorial
Introduction to Jupyter Notebooks
https://zenodo.org/record/6859121
https://dresa.org.au/materials/introduction-to-jupyter-notebooks
This workshop will introduce you to Jupyter Notebooks, a digital tool that has exploded in popularity in recent years for those working with data.
You will learn what they are, what they do and why you might like to use them. It is an introductory set of lessons for those who are brand new and have little or no knowledge of coding and computational methods in research.
This workshop is targeted at those who are absolute beginners or ‘tech-curious’. It includes a hands-on component, using basic programming commands, but requires no previous knowledge of programming.
sara.king@aarnet.edu.au
Sara King
Mason, Ingrid
jupyter, Introductory, training material, CloudStor, markdown, Python, R
Beyond Basics: Conditionals and Visualisation in Excel
After cleaning your dataset, you may need to apply some conditional analysis to glean greater insights from your data. You may also want to enhance your charts for inclusion into a manuscript, thesis or report by adding some statistical elements. This course will cover conditional syntax, nested...
Beyond Basics: Conditionals and Visualisation in Excel
https://intersect.org.au/training/course/excel201
https://dresa.org.au/materials/beyond-basics-conditionals-and-visualisation-in-excel
After cleaning your dataset, you may need to apply some conditional analysis to glean greater insights from your data. You may also want to enhance your charts for inclusion into a manuscript, thesis or report by adding some statistical elements. This course will cover conditional syntax, nested functions, statistical charting and outlier identification. Armed with the tips and tricks from our introductory Excel for Researchers course, you will be able to tap into even more of Excel’s diverse functionality and apply it to your research project.
- Cell syntax and conditional formatting
- IF functions
- Pivot Table summaries
- Nesting multiple AND/IF/OR calculations
- Combining nested calculations with conditional formatting to bring out important elements of the dataset
- MINIFS function
- Box plot creation and outlier identification
- Trendline and error bar chart enhancements
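The conditional logic and box-plot outlier rule listed above can be sketched outside Excel; the example below uses Python with pandas (an illustrative analogue, not part of the course material) to mirror a nested IF/AND, a MINIFS-style conditional minimum, and the 1.5 × IQR outlier rule that Excel box plots use:

```python
import pandas as pd

df = pd.DataFrame({
    "group": ["x", "x", "y", "y", "y"],
    "value": [3.0, 4.0, 2.5, 3.5, 40.0],
})

# Nested conditional, like =IF(AND(group="y", value>3), "high", "ok") in Excel
df["flag"] = ["high" if (g == "y" and v > 3) else "ok"
              for g, v in zip(df["group"], df["value"])]

# MINIFS analogue: minimum of value where group equals "y"
min_y = df.loc[df["group"] == "y", "value"].min()

# Box-plot outlier rule: points more than 1.5 * IQR beyond the quartiles
q1, q3 = df["value"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["value"] < q1 - 1.5 * iqr) | (df["value"] > q3 + 1.5 * iqr)]
print(min_y, list(outliers["value"]))
```

The same three operations (conditional flagging, conditional aggregation, outlier identification) are what the course builds in Excel with IF/AND nesting, MINIFS, and box-plot charts.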
Familiarity with the content of Excel for Researchers, specifically:
- the general Office/Excel interface (menus, ribbons/toolbars, etc.)
- workbooks and worksheets
- absolute and relative references, e.g. $A$1, A1.
- simple ranges, e.g. A1:B5
training@intersect.org.au
Intersect Australia
Excel
Exploring Chi-Square and correlation in SPSS
This hands-on training is designed to familiarise you further with the SPSS data analysis environment. In this session, we will traverse into the realm of inferential statistics, beginning with linear correlation and reliability. We will present a brief conceptual overview and the SPSS procedures...
Keywords: Data Analysis, SPSS
Exploring Chi-Square and correlation in SPSS
https://intersect.org.au/training/course/spss102
https://dresa.org.au/materials/exploring-chi-square-and-correlation-in-spss
This hands-on training is designed to familiarise you further with the SPSS data analysis environment. In this session, we will traverse into the realm of inferential statistics, beginning with linear correlation and reliability. We will present a brief conceptual overview and the SPSS procedures for computing Pearson's r and Spearman's rho, followed by a short session on reliability. In the remainder of the session, we will explore the Chi-Square Goodness-of-Fit test and Chi-Square Test of Association for analysing categorical data.
#### You'll learn:
- Perform Pearson’s Correlation (r) Test
- Perform Spearman’s Rho Correlation (ρ) Test
- Carry out basic reliability analysis on survey items
- Perform Chi-Square Goodness-of-Fit test
- Perform Chi-Square Test of Association
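The statistics taught above can also be computed outside SPSS; the sketch below uses Python with SciPy (an illustrative analogue, not part of the course material) to run Pearson's r, Spearman's rho, a chi-square goodness-of-fit test, and a chi-square test of association:

```python
import scipy.stats as stats

# Pearson's r and Spearman's rho on paired measurements
hours = [2, 4, 6, 8, 10]
score = [50, 55, 65, 70, 80]
r, p_r = stats.pearsonr(hours, score)
rho, p_rho = stats.spearmanr(hours, score)

# Chi-square goodness-of-fit: do observed counts match a uniform expectation?
observed = [18, 22, 20, 40]
chi2, p_chi = stats.chisquare(observed)  # expected defaults to equal counts

# Chi-square test of association on a 2x2 contingency table
table = [[30, 10], [20, 40]]
chi2_a, p_a, dof, expected = stats.chi2_contingency(table)

print(f"r = {r:.3f}, rho = {rho:.3f}, chi2 (GOF) = {chi2:.2f}, dof = {dof}")
```

Because the example data are perfectly monotone, Spearman's rho comes out at 1.0, while Pearson's r is slightly below it; the same contrast between rank-based and linear correlation is drawn in the SPSS session.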
#### Prerequisites:
In order to participate, attendees must have a licensed copy of SPSS installed on their computer. Speak to your local university IT or Research Office for assistance in obtaining a license and installing the software.
This workshop is recommended for researchers and postgraduate students who have previously attended Intersect’s [Data Entry and Processing in SPSS](https://intersect.org.au/training/course/spss101/) workshop.
**For more information, please click [here](https://intersect.org.au/training/course/spss102).**
training@intersect.org.au
Data Analysis, SPSS