LASI 19 Workshops & Tutorials

We are pleased to present a great LASI 19 program that includes 8 Workshops and 7 Tutorials!

Based on feedback collected from prior years, this year's organizing committee has developed the following program. LASI participants will take a deep dive into two 5.5-hour workshops, one on Tuesday and one on Wednesday. In addition, each participant will take part in two short 2-hour tutorials, offering a flavor of a range of topics.

Each individual can choose 2 workshops and 2 tutorials. The organizing committee will strive to have all participants attend their top choices, but given attendance limits and the popularity of certain sessions, your choices are not guaranteed. Register early to ensure your spot in your top choices!


W1. Building Predictive Models of Student Success Using Amazon Web Services

Title: Building Predictive Models of Student Success Using Amazon Web Services

Description: In this workshop, participants will use Amazon SageMaker to build classification, regression, and clustering models. Supervised machine learning algorithms (classification and regression) are often used in learning analytics to make predictions of student outcomes in support of timely interventions, while unsupervised (clustering) algorithms are often used to uncover common patterns of student behaviour. Participants will be able to apply these methods to supplied datasets. The final session of the workshop will be dedicated to exploring new data (and participants are encouraged to bring datasets they are interested in).
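
For a flavor of the unsupervised side, here is a minimal clustering sketch of the kind one might run with scikit-learn in a SageMaker-hosted Jupyter notebook; the file and feature names are illustrative placeholders, not a supplied dataset.

```python
# A minimal sketch, assuming a hypothetical CSV of student activity data.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("student_activity.csv")            # hypothetical dataset
features = ["logins", "video_minutes", "forum_posts"]
X = StandardScaler().fit_transform(df[features])    # put features on one scale

# Group students into three common patterns of behaviour
df["pattern"] = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print(df.groupby("pattern")[features].mean())       # profile each cluster
```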

Participants will gain experience in Python, Jupyter notebooks, Machine Learning, and Amazon Web Services.

Prerequisite Skills/Experience: No experience is required.

Instructor: Craig Thompson,
University of British Columbia

W2. Text Mining for Learning Content Analysis

Title: Text Mining for Learning Content Analysis

Description: This workshop will introduce text mining methods and techniques, and enable the participants to develop working knowledge of text mining in the R programming language. During the workshop, we will go through the overall text mining process and examine each of its key phases. We will start with text preprocessing, then move to the transformation of unstructured textual content into a structured numerical format, thus obtaining a feature set that will serve as the input to an algorithm for pattern mining (e.g., classification or clustering) or information extraction (e.g., topic modeling or keyword extraction). Since text mining is an iterative process, having evaluated the obtained results, we will consider how to improve each phase of the overall process to achieve better results. Particular focus will be put on the feature creation and selection phases, since these are often crucial for overall performance.
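
As a rough illustration of the preprocess-vectorize-classify pipeline described above, here is a minimal sketch (in Python for brevity; the workshop itself works in R) with placeholder posts and labels:

```python
# A minimal sketch; the posts, labels, and class names are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

posts = ["great explanation of recursion, thanks",
         "when is the assignment due?"]
labels = ["content", "logistics"]

# The vectorizer handles preprocessing and the unstructured-text-to-
# numeric-features transformation; the classifier mines the patterns.
model = make_pipeline(TfidfVectorizer(stop_words="english"),
                      MultinomialNB())
model.fit(posts, labels)
print(model.predict(["where do I submit my work?"]))
```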

After attending the workshop, the participants should be able to apply text mining methods and techniques to analyse (e.g., classify) different kinds of text-based learning content, including both informal content (e.g. posts exchanged in online communication channels), and more formal content (e.g., answers to open-ended questions).

Prerequisite Skills/Experience: Workshop participants should have at least a basic level of programming experience in the R language. They are also expected to have at least a basic knowledge of key Machine Learning concepts and tasks (e.g., to know the difference between classification and clustering; cross-validation; overfitting; performance metrics for evaluation of classifiers).
No experience with any form of text mining / text analytics is required.

Advanced preparation:
Participants should bring their own laptop to the workshop. They are also expected to have the latest version of R and RStudio installed on their computers.
Introductory materials for developing basic R programming skills as well as materials for learning about basic Machine Learning concepts and tasks will be recommended to those who are not (sufficiently) familiar with R and/or with Machine Learning, but are interested in text mining and would like to productively participate in the workshop.
More information will be forthcoming to registered participants.

Instructor: Jelena Jovanovic,
University of Belgrade

W3. Python Bootcamp for Learning Science Practitioners

Title: Python Bootcamp for Learning Science Practitioners

Description: This hands-on workshop will provide a rigorous introduction to Python for learning analytics practitioners. The intensive workshop consists of five parts: a) basic and intermediate Python; b) statistics and visualization; c) machine learning; d) causal inference; and e) deep learning. The workshop will be motivated throughout by educational datasets and examples. The aim of the workshop is to provide a thorough introduction to computational and statistical methodologies in modern learning analytics.
1.1 Python. Python is the de facto language for scientific computing and one of the principal languages, along with R, for data science and machine learning. Along with foundational concepts such as data structures, functions, and iteration, we will cover intermediate concepts such as comprehensions, collections, generators, map/filter/reduce, and object orientation. Special emphasis will be given to coding in “idiomatic Python”.
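
A small taste of the idiomatic constructs listed above, using made-up grade data:

```python
# Illustrative data for demonstrating a few idiomatic constructs.
scores = {"ana": 82, "ben": 67, "cho": 91}

# Comprehension: filter and transform in one readable expression
passing = [name for name, s in scores.items() if s >= 70]

# Generator: yield values lazily instead of building a full list in memory
def rescaled(values, top=100):
    for v in values:
        yield v / top

print(passing)                          # ['ana', 'cho']
print(list(rescaled(scores.values())))
```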
1.2 Exploratory Data Analysis, Statistics. In this section we will introduce the core Python libraries for exploratory data analysis and basic statistics: numpy, pandas, matplotlib, and seaborn. We will use the Jupyter Notebook environment for interactive data analysis, annotation, and collaboration. Exploratory data analysis is a foundational step for deriving insights from data. It also serves as a prelude to building formal models and simulations.
1.3 Machine Learning. In this section we will introduce participants to basic machine learning concepts and their application using the scikit-learn library. We will show how to predict continuous and categorical outcomes, for example, using linear and logistic regression. This demonstration will show how to create an entire prediction pipeline from scratch, starting from loading in data, cleaning and standardizing it, building the model, and demonstrating its validity through cross-validation. Some discussion of what an educator might do with such a model will be included.
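
A compact sketch of such an end-to-end pipeline is shown below; the file and column names are hypothetical stand-ins for a real educational dataset:

```python
# A minimal sketch: load -> clean/standardize -> model -> cross-validate.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("course_records.csv")              # hypothetical file
X = df[["attendance", "hw_avg", "midterm"]]
y = df["passed"]                                    # categorical outcome

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # clean missing values
    ("scale", StandardScaler()),                    # standardize features
    ("model", LogisticRegression(max_iter=1000)),   # build the model
])
print(cross_val_score(pipe, X, y, cv=5).mean())     # demonstrate validity
```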
1.4 Causal Inference. In this section of the workshop we build on our statistical understanding of correlation to study causality. Randomized control trials (RCTs) are considered the gold standard in efficacy studies because they aim to establish causality of interventions. But RCTs are very often impractical to carry out and have other limitations. Causal inference from Observational Studies (OS) is another form of statistical analysis to evaluate intervention effects. In causal inference, the causal effect of an intervention on a particular outcome is studied using observed data, without the need for randomization in advance. In this section, we will show the design of an OS that leverages the large amounts of data available through online learning platforms and student information systems to draw causal claims about their effectiveness.
1.5 Deep Learning. In this section we introduce how to build deep learning models. Deep learning is one of the fastest growing areas of machine learning and is particularly well suited for very large datasets. We begin by building a toy deep learning model from scratch in Python. The goal is to understand the five foundational concepts of deep learning: neurons as the atomic computational unit of deep learning networks; neurons as organized in stacked layers to achieve increasingly abstract data representations; forward propagation as the end-to-end computational process for generating predictions; loss and cost functions as the method for quantifying the error between prediction and ground truth; and back propagation as the computational process for systematically reducing the error by adjusting the network’s parameters. After developing a conceptual understanding of deep learning, we apply some standard Python libraries such as Keras, PyTorch, and TensorFlow to build deep learning models.
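
To make the five concepts concrete, here is one possible toy network written from scratch with NumPy; the data is synthetic and the architecture deliberately minimal:

```python
# A toy network exercising the five foundational concepts: neurons,
# stacked layers, forward propagation, a loss function, back propagation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # 100 examples, 3 features
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

W1, b1 = 0.1 * rng.normal(size=(3, 4)), np.zeros(4)   # hidden layer of neurons
W2, b2 = 0.1 * rng.normal(size=(4, 1)), np.zeros(1)   # output neuron
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5                                           # learning rate

for step in range(500):
    # Forward propagation: layer by layer to a prediction
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Loss: cross-entropy between prediction and ground truth
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    # Back propagation: gradients flow backwards through the layers
    dz2 = (p - y) / len(X)
    dh = (dz2 @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ dz2)
    b2 -= lr * dz2.sum(axis=0)
    W1 -= lr * (X.T @ dh)
    b1 -= lr * dh.sum(axis=0)

print(f"final training loss: {loss:.3f}")
```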

Prerequisite Skills/Experience: Basic familiarity with a programming language.

Instructors: Alfred Essa, Lalitha Agnihotri, Shirin Mojarad, and Neil Zimmerman, McGraw-Hill Education

W4. Building Learning Analytics Ecosystems

Title: Building Learning Analytics Ecosystems

Description: Learning Analytics has traditionally been developed using data collected from single stand-alone systems (e.g., an LMS), perhaps in combination with data from a student information system (SIS). However, learning environments tend to consist of many tools, some of which are loosely linked together in an LMS via LTI connections, some not. For example, at UTS we have supplemented an LMS with interactive tools (e.g., H5P, Atomic Assessment, Articulate Storyline, and Atomic Jolt), social media environments (e.g., Twitter, GitHub, Trello, and Slack), and other data sources (e.g., Kaltura). This means that realistic LA models must be built over data from multiple sources in the complex learning ecosystem.

This workshop will introduce participants to the complexity associated with building scalable LA solutions over a modern learning ecosystem. It will discuss two data standards for tracking learner activity (xAPI and Caliper), comparing and contrasting the approaches that they have adopted, along with their various levels of adoption by different providers of educational tools. Participants will learn how to integrate data sources from multiple places, harmonising the data so that it can be piped to various LA tools in a complex learning ecosystem.
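
To give a feel for what tracking learner activity looks like in practice, here is a minimal sketch of sending a single xAPI statement to an LRS; the endpoint and credentials are hypothetical placeholders:

```python
# A minimal sketch, assuming a hypothetical LRS endpoint and credentials.
# Every xAPI statement is an actor-verb-object triple (plus optional context).
import requests

statement = {
    "actor": {"mbox": "mailto:learner@example.edu", "name": "A Learner"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "https://lms.example.edu/activities/quiz-1",
               "definition": {"name": {"en-US": "Quiz 1"}}},
}

resp = requests.post(
    "https://lrs.example.edu/xapi/statements",       # hypothetical LRS
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},   # required by the spec
    auth=("lrs_key", "lrs_secret"),                  # placeholder credentials
)
print(resp.status_code)
```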

At the end of this workshop you will have:

  • some understanding of the things that can go wrong when building or procuring EdTech solutions that do not meet established data standards,
  • some techniques for working with vendors to ensure that their products do indeed conform to the requirements of your LA ecosystem
  • an idea of how to transform data that does not meet your requirements to achieve pragmatic data interoperability within your LA ecosystem

Activities
Participants will:

  • be introduced to modern educational data standards (xAPI and Caliper)
  • learn about the complexity of architecting scalable LA systems and how to control it

Working in teams, participants will:

  • send data from various learning environments to a learning record store (LRS)
  • make a simple learning record provider (LRP) for a tool of their choice
  • extract data from an LRS and use it to build a simple report
  • start to think about how to achieve data interoperability over a complex learning ecosystem
  • build an extension to the open-source LA-API implemented as a prototype at UTS
  • advanced extension topic (pragmatic data interoperability): use GraphQL to build an LA pipeline over multiple LRPs and flexibly send data to various LA tools (e.g., OnTask and the UTS user-configurable dashboards)

Prerequisite Skills/Experience: Participants should have some familiarity with coding, ideally in JavaScript or Python, and a desire to learn about educational data standards (i.e., xAPI and Caliper) and data interoperability.

Advanced preparation:

  • Read this blog post: https://edutechnica.com/2015/06/09/flipping-the-model-the-campus-api/
  • Familiarise yourself with these sites:
      • xAPI (https://www.adlnet.gov/research/performance-tracking-analysis/experience-api/)
      • IMS Caliper (https://www.imsglobal.org/activity/caliper)
  • More advanced – you might want to have a look at the actual specifications for xAPI and Caliper:
      • xAPI specification (https://github.com/adlnet/xAPI-Spec) and its extension to xAPI profiles (https://github.com/adlnet/xapi-profiles)
      • IMS Caliper specification (https://www.imsglobal.org/sites/default/files/caliper/v1p1/caliper-spec-v1p1/caliper-spec-v1p1.html)
  • You will save time during the workshop if you come with:
      • a working development environment (for a programming language of your preference)
      • a GitHub account (see https://github.com/)
      • a working NoSQL database (e.g., MongoDB https://www.mongodb.com/download-center/community)

Instructor: Kirsty Kitto,
University of Technology Sydney

W5. Probing Equity with Data: HLM, Optimal Matching, & More

Title: Probing Equity with Data: HLM, Optimal Matching, & More

Description: Observations of differential learning outcomes, particularly those associated with socioeconomic or demographic characteristics of individuals, are central to efforts to study diversity, equity, and inclusion in higher education. In this workshop, we discuss the contexts in which these outcomes can be investigated using university administrative data augmented with external data sources. Linear regression, propensity score matching, and LASSO are introduced and critiqued as basic tools for identifying the performance gaps associated with these learning outcomes. Finally, we end with an example of ongoing research into large-scale interventions that address the gender performance differences observed in large, introductory STEM courses.
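
To illustrate the matching idea, here is a minimal sketch (in Python for illustration; the workshop itself works in R, e.g., with optmatch) using hypothetical column names:

```python
# A minimal sketch, assuming a hypothetical synthetic dataset with a binary
# "treated" indicator, pre-treatment covariates, and a "grade" outcome.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("synthetic_records.csv")
covariates = ["hs_gpa", "test_score", "credits"]     # illustrative columns

# 1. Estimate each student's propensity to be in the treated group
ps = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
df["pscore"] = ps.predict_proba(df[covariates])[:, 1]

# 2. Greedy nearest-neighbour matching on the propensity score
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]
match_idx = [(control["pscore"] - p).abs().idxmin() for p in treated["pscore"]]

# 3. Compare outcomes over the matched pairs
gap = treated["grade"].mean() - control.loc[match_idx, "grade"].mean()
print(f"matched performance gap: {gap:.2f}")
```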

Activities:
In this workshop participants will:

  • Download and manipulate synthetic administrative data sets
  • Become familiar with the merging of sources such as data.gov and the American Community Survey with campus data
  • Use R and R libraries, including tidyverse, to explore and quantify performance gaps.
  • Explore the generalizability and replicability of the inferences that might be made in light of experimental interventions.

Takeaways:

  • Multiple strategies that participants can use to explore diversity and equity with administrative data on their own campuses.
  • Guidelines for merging external data sources with local administrative data.
  • First steps to be taken to assess underlying factors that may affect performance gaps.
  • Experimental approaches to understanding or reducing observed gaps.

Prerequisite Skills/Experience: Basic statistics, programming in R.

Advanced preparation: R/RStudio installed on laptop, as well as the “tidyverse”, “optmatch”, and “lars” R packages.


Instructors: Tim McKay and Ben Koester,
University of Michigan

W6. Networks in Learning Environments: Beyond Centrality Measures

Title: Networks in Learning Environments: Beyond Centrality Measures

Description: If you, like me, are interested in social network analysis in learning settings, you may have noticed one particularly popular approach: relating a learner’s position in a network, based on her degree, betweenness, closeness, or eigenvector centrality, to her grades. I think that there is so much more to the analysis of learner networks. To advance this area of work in learning analytics, a researcher needs good questions, but also practice and intuition around what statistical network models can and cannot do, as well as exposure to network metrics across diverse learning settings. I hope to share what I have learnt about these elements to help others explore different ways of analyzing networks and strengthen learning analytics research around the social factors in learning.

This workshop centers around a case study using data conventionally collected at universities: course enrollment, forum activity, demographics, and grades. I will share with you my experience of how to progress from having access to this data, to asking educational questions, to operationalizing your hypothesis. We will also take a helicopter view of the intuition behind some statistical network models (multi-level modelling of ego-networks, exponential random graph modelling, stochastic actor-oriented modelling, relational event modelling) to help you understand how to select the approach relevant to your question. The hands-on statistical analysis will include ego-networks and exponential random graph modelling.
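
As a small taste of the ego-network portion of the hands-on session, here is a sketch using Python's networkx (for brevity; the workshop itself follows R scripts) on an illustrative reply network:

```python
# A minimal sketch, using an illustrative forum reply edge list.
import networkx as nx

replies = [("ana", "ben"), ("ben", "cho"), ("ana", "cho"),
           ("dev", "ana"), ("eva", "ben")]
G = nx.Graph(replies)

# A learner's ego-network: the learner, her direct contacts, and the
# ties among those contacts
ego = nx.ego_graph(G, "ana")
print(sorted(ego.nodes()))          # ana's immediate neighbourhood
print(nx.density(ego))              # how tightly knit that neighbourhood is
```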

Activities
The workshop will be structured around an example of a mock dataset reflective of those collected by universities. Starting from a specific problem posed by university management: ‘What can you tell us using this data?’, we will walk through the process of a learning analytics inquiry into social dynamics.

Every thematic block will include a group exercise followed by the instructor’s input. You can choose to work alone on the exercises throughout the workshop, but you will be asked to share your insights with the group every now and then.

We will follow the R scripts for the analysis prepared by the instructor to exemplify ego-network analysis and exponential random graph modelling.

Target Audience

This workshop is suitable for anyone with a natural curiosity around social processes in learning settings. Network analysis can be applied across all kinds of digitally recorded data, but this workshop will exemplify SNA applications in relation to social dynamics. Your interest in this area of research is the main pre-requisite.

This workshop targets so-called ‘false beginners’ who have basic literacy in network analysis. I define such basic literacy as follows:

  • you understand the main limitations around network construction in digital learning settings (operationalizing a node and a tie, and that your network is a representation of a phenomenon you are modelling)
  • you have some basic familiarity with network measures (density, degree, betweenness, closeness, clustering)
  • you have played around with network data in the past (but were unsure what the metrics mean or what to do with them)
  • awareness of the terms ‘preferential attachment’, ‘small worlds’, ‘reciprocity’, and ‘transitivity’ is a plus.

This workshop also targets those who may have a more advanced understanding of network analysis than the above, but struggle with linking network metrics to learning analytics research questions and problems. You may have played with some network data in the past using Gephi or R but were not sure how to move beyond calculating the various metrics into something more meaningful.

Your openness to try things is an important element to making this workshop a success. Maybe you are uncomfortable with social science theories. Maybe you are uncomfortable with using R or statistics. Learning analytics research builds on both social science theories and computational methods. Neither of the two is privileged in this workshop, and I hope to connect participants with complementary strengths – to make a more authentic learning analytics discussion possible.

Advanced preparation:
You will be asked to fill out a pre-workshop survey to help me understand your area of work, level of expertise, and other needs as a workshop participant. This is an important part of participation, as I will tailor the content described above to the overall needs of the group.

You will need a laptop with RStudio and some packages installed. I will send instructions on how to do this prior to the workshop.

Instructor: Oleksandra Poquet,
National University of Singapore

W7. Connecting Learning Design, Data and Student Support Actions

Title: Connecting Learning Design, Data and Student Support Actions

Description: Comprehensive data sets extracted from technology-mediated learning activities can contribute to the process of understanding and improving a learning experience. However, this contribution is effective only if the data is properly contextualized and the sense-making stage produces indicators that are meaningful from the point of view of the learning design and are, in the end, used to support students. This multi-faceted dependency is behind some of the challenges of providing compelling evidence of impact when using data-supported decision procedures.

In this workshop we will focus on how learning design features and data capture can be combined to provide effective personalized student support actions (PSSAs). Using a hands-on approach, the workshop will explore the areas of data capture and processing, learning design affordances, and the provision of student support through personalized feedback processes. The open-source tool OnTask (ontasklearning.org) will be used for this last aspect of the workshop.
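
Roughly speaking, the logic behind a PSSA can be sketched as a plain conditional rule over activity indicators. The snippet below is illustrative Python, not OnTask's actual interface; OnTask expresses a similar idea as conditions attached to a message template:

```python
# A minimal sketch, assuming a hypothetical export of weekly indicators.
import pandas as pd

df = pd.read_csv("week3_indicators.csv")    # hypothetical indicator export

def support_message(row):
    # The conditions encode learning-design context: quiz 2 covers
    # material that the next workshop builds on.
    if row["quiz2_score"] < 50:
        return ("Your quiz 2 result suggests revisiting the week 3 videos "
                "before the next workshop.")
    if row["video_minutes"] == 0:
        return "You haven't opened this week's videos yet; starting early helps."
    return "You're on track; keep it up."

df["feedback"] = df.apply(support_message, axis=1)   # one message per student
```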

Prerequisite Skills/Experience: Attendees may have expertise in any of the three areas: 1) programming/data management, 2) academic design, or 3) teaching.

Advanced preparation:

  • A personal computer is required (no iPads or phones).

Instructor: Abelardo Pardo,
University of South Australia

W8. Learner-Centered Visual Learning Analytics

Title: Learner-Centered Visual Learning Analytics

Description: This workshop will cover basic visualization design principles and how to apply them. Participants will learn about the various considerations that need to be made when designing visual learning analytics by analyzing existing visualizations and designing a visual learning analytic. Through these activities, participants will gain experience with the basic tools and procedures that can help them create visualizations of analytics so they can be interpreted by students.

Activities
Participants will:

  • be introduced to basic principles and functions underlying visualization and visual learning analytics (learning dashboards and open learner models)
  • analyze existing visual learning analytics
  • work with data to design a visualization that meets their desired communication goals
  • evaluate their design and iterate upon it

Takeaways
1. An understanding of basic visualization principles
2. An understanding of the trade-offs inherent to any visual design decision
3. The ability to design visual learning analytics in a principled way
4. An initial design to support your visual learning analytics context
5. First steps toward establishing collaborations to continue work on visual learning analytics

Target Audience/Prerequisites/Experience:
Anyone interested in creating or using reporting to support learning.
Those with backgrounds in information visualization are welcome to attend, but they may only gain an understanding of some of the considerations that are unique to an educational context.

Advanced preparation:

  • Bring your own design challenge and data (if you have it).

Supplementary (Optional) Readings

  • Bertin, J. (1983). Semiology of graphics: Diagrams, networks, maps. (W. J. Berg, Trans.). Madison, WI: University of Wisconsin Press.
  • Bull, S., & Kay, J. (2007). Student Models that Invite the Learner in: the SMILI:) Open Learner Modelling Framework. International Journal of Artificial Intelligence in Education (IJAIED), 17(2), 89–120.
  • Bull, S., & Kay, J. (2016). SMILI☺: a Framework for Interfaces to Learning Data in Open Learner Models, Learning Analytics and Related Fields. International Journal of Artificial Intelligence in Education, 26(1), 293–331. https://doi.org/10.1007/s40593-015-0090-8
  • Demmans Epp, C., & Bull, S. (2015). Uncertainty representation in visualizations of learning analytics for learners: Current approaches and opportunities. IEEE Transactions on Learning Technologies, 8(3), 242–260. https://doi.org/10.1109/TLT.2015.2411604
  • Guerra, J., Schunn, C. D., Bull, S., Barria-Pineda, J., & Brusilovsky, P. (2018). Navigation support in complex open learner models: assessing visual design alternatives. New Review of Hypermedia and Multimedia, 24(3), 160–192. https://doi.org/10.1080/13614568.2018.1482375
  • Guerra-Hollstein, J., Barria-Pineda, J., Schunn, C. D., Bull, S., & Brusilovsky, P. (2017). Fine-Grained Open Learner Models: Complexity Versus Support. In Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization – UMAP ’17 (pp. 41–49). Bratislava, Slovakia: ACM Press. https://doi.org/10.1145/3079628.3079682
  • Shneiderman, B. (1996). The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations. In Proceedings of the 1996 IEEE Symposium on Visual Languages (pp. 336–343). Washington, DC, USA: IEEE Computer Society. Retrieved from http://dl.acm.org/citation.cfm?id=832277.834354
  • Tufte, E. R. (1997). Visual and statistical thinking: displays of evidence for making decisions. In Visual Explanations (pp. 27–53). Cheshire, CT, USA: Graphics Press.
  • Ware, C. (2004). Information visualization: perception for design. San Francisco, CA: Morgan Kaufman.
Instructors: Carrie Demmans Epp, University of Alberta
Sharon Hsiao, Arizona State University
Jordan Barria-Pineda, University of Pittsburgh

T1: Developing and Implementing Institutional Data Governance Policies for Educational Data

Title: Developing and Implementing Institutional Data Governance Policies for Educational Data

Description: Data governance is a set of roles and policies that are combined to improve how data assets are handled within an organisation. Data policy should establish a set of rules defining how decisions are made, who is accountable for data management, who can access the data and for what purposes, and how data is extracted, obtained and stored. This policy needs to be integrated with current organisational processes and practices, as well as local governmental policies, in order to translate into tangible improvements in how data assets are managed and used across the institution.
The interdependency between institutional data and its business processes and applications increases the need for clear guidelines regarding how such data is managed. This tutorial will discuss and present examples for you to learn how to:

  • define roles and responsibilities in relation to the governance of institutional data;
  • identify the best practices in data management to facilitate its use within the institution;
  • provide a secure environment for data access and analysis whilst ensuring the privacy of all stakeholders;
  • define the organisational structure of roles, access rights and responsibilities;
  • establish clear lines of accountability; and
  • assure that the University complies with the current laws, regulations and standards about data management.

Advanced preparation:
No advance preparation is required for this tutorial.


Instructor: Grace Lynch, Royal Melbourne Institute of Technology University (RMIT)

T2: Qualitative Approaches to Learning Analytics

Title: Qualitative Approaches to Learning Analytics

Description: Learning analytics is a field concerned with ‘the measurement, collection, analysis and reporting of data about learners and their contexts’. This implies a quantitative approach. However, it has become increasingly clear that ‘understanding and optimizing learning and the environments in which it occurs’ also requires qualitative approaches.
Qualitative approaches can be used to understand why learners and teachers act as they do, to explain their decisions, to explore the importance of context for learning analytics, and to increase the value of learning analytics tools and methods.

When you have successfully completed this tutorial, you will have developed your ability to:

  • decide when and why to use qualitative approaches in learning analytics research
  • design a qualitative research study
  • analyse qualitative data
  • establish the trustworthiness of qualitative research.

Prerequisite Skills/Experience: This tutorial is intended for participants who have little experience of qualitative research and would like to explore how it can be used in the context of learning analytics.

Advance Preparation: No advance preparation is required for this tutorial.


Instructor: Rebecca Ferguson, Open University, UK

T3: Critical Issues of Machine Learning in Education: Nuance and Ethical Issues

Title: Critical Issues of Machine Learning in Education: Nuance and Ethical Issues

Description: Machine learning and big data are a common theme in the fields of learning analytics and educational data mining. In this tutorial, we’ll discuss ways in which we can measure bias in educational predictive models, and how we can understand and contextualize the issues which come with such biases. The tutorial will be technical but accessible in nature, will focus on the intersection of techniques and the educational domain, and should be appropriate for learning scientists, educational researchers, and computer scientists alike. It will include both a lecture component and a demonstration component using real datasets, and will culminate with a short discussion of ethics, impact, and decision making involving all of the tutorial participants. Throughout the tutorial we aim to engage with participants to co-create a series of best practices for identifying, measuring, understanding, and responding to bias in educational machine learned models.
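
One concrete way to measure such bias is to compare a model's false positive rate across demographic groups; the arrays below are illustrative stand-ins for real predictions:

```python
# A minimal sketch with illustrative labels, predictions, and group codes.
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 0, 1, 0])         # true at-risk labels
y_pred = np.array([1, 0, 1, 1, 1, 0, 0, 0])         # model predictions
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

for g in np.unique(group):
    negatives = (group == g) & (y_true == 0)         # ground-truth negatives in g
    fpr = y_pred[negatives].mean()                   # share wrongly flagged
    print(f"group {g}: false positive rate = {fpr:.2f}")
```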

Activities
1. Participants will be introduced to the issue of bias and how machine learning models might pick up biases
2. Specific case studies looking at bias in machine learned models will be covered
3. Ethical considerations when using machine learning in education will be discussed

Prerequisite Skills/Experience:
This tutorial is designed for anyone interested in the intersection of machine learning, big data, algorithmic transparency, and education. A technical understanding of machine learning is not strictly necessary, but might be beneficial.

Takeaways
1. Knowledge of how bias may be generated in machine learned models.
2. An understanding of how we might measure this bias for given models.
3. A set of co-created considerations when applying these models in practice.

Advanced preparation:
No advance preparation is required for this tutorial.


Instructor: Christopher Brooks,
University of Michigan

T4: Learning Analytics Deployment Tactics: A Meta-Tutorial

Title: Learning Analytics Deployment Tactics: A Meta-Tutorial

Description: Learning analytics is a young field, and beyond its research space its uptake among academics has been slow. Top-down strategies are often not easily adopted, or focus on metrics that may not align across all disciplines in a university, while bottom-up approaches, though well focused, have difficulty increasing their reach and capacity. Ultimately, designing a professional development plan in a university is, at best, not enough and, at worst, incorrect.
This tutorial is intended for participants who are interested in disseminating learning analytics tools across courses and disciplines, enabling cross-discipline collaboration and innovation. The intent of the tutorial is to discover the first steps required to breach this barrier with a constructive approach.
At the completion of the workshop, you will have explored: (a) components that rely on first principles to identify generically useful datasets and metrics, and how to identify more specific variables; and (b) the development of tactics for increasing engagement among academics and across units.

Prerequisite Skills/Experience: Basic understanding of learning analytics.

Advanced preparation:
Read through the Canvas modules once signed into the workshop and conduct the pre-workshop activities. These resources will be available to participants a week before LASI. Bring a laptop to the workshop.

Instructor: Pablo Munguia,
Royal Melbourne Institute of Technology University (RMIT)

T5: Deep Dive into Corporate LMS – Uncover Workplace Learning Analytics

Title: Deep Dive into Corporate LMS – Uncover Workplace Learning Analytics

Description: A Learning Management System (LMS) is an integral component for delivering, managing, and measuring organizational learning and development. Increasingly, the ability to measure workplace learning impact and the subsequent allocation of resources largely depend on the type of reporting and associated learning analytics available in an LMS. In recent years, LMS vendors have continued to improve and refine the types of measurements their systems offer: individual performance tracking, collaboration effort, learning retention, level of learner engagement, and the likelihood that a learner will fail a course are a few examples of such metrics. While it is not possible to survey the over 700 LMSs currently available in the market, this tutorial aims to highlight some of the new and emerging learning analytics in these platforms.
Specifically, we will take a thorough look under the hood and dig into two modern corporate LMSs and their capabilities for analyzing, measuring, and reporting learning. Participants will have a chance to interact with the two systems and explore functions in a sandbox environment.

Prerequisite Skills/Experience: None.


Instructor: Stella Lee, Paradox Learning

T6: Course Design As Learning Analytics Variable

Title: Course Design As Learning Analytics Variable

Description: While some may think it difficult, if not impossible, to measure pedagogical intent, all institutions probably observe students who appear to be more engaged in some courses than others. Is this simply an “instructor effect” or tied to certain disciplines or intrinsically motivated students? Or are there elements of good course design that might be considered universal, even transformative? Given the variety and type of digital learning environments, a key assumption is that an instructor’s pedagogy can be observed or inferred from student use of digital learning environments like the LMS, eTextbooks, clickers, blogs and the like. Even apart from these IT behavior-based proxies for engagement, if we define student success as not only passing one course, but also the next one that requires it, what might we learn about our students — and their courses — if there is variation in outcomes for all such two-course combination progressions across campus (see this brief demo of one such report for two UMBC math courses that typically have high failure rates)? This tutorial will focus on how course design has been conceptualized as a plausible learning analytics variable, and how course redesign could be one of the most scalable and transformative interventions any institution can pursue.

Prerequisite Skills/Experience: Prior teaching experience of any kind is helpful, but not required.

Advanced preparation:
While shamelessly UMBC-focused, the resources at doit.umbc.edu/analytics may be of interest to individuals considering this tutorial.

The following readings are helpful:

  • Fritz, J., & Whitmer, J. (2017, July 17). Moving the Heart and Head: Implications for Learning Analytics Research. EDUCAUSE Review Online. Retrieved from http://er.educause.edu/articles/2017/7/moving-the-heart-and-head-implications-for-learning-analytics-research
  • Fritz, J., & Whitmer, J. (2017, February 27). Learning Analytics Research for LMS Course Design: Two Studies. EDUCAUSE Review Online. Retrieved from http://er.educause.edu/articles/2017/2/learning-analytics-research-for-lms-course-design-two-studies
  • Macfadyen, L. P., & Dawson, S. (2012). Numbers are not enough. Why e-learning analytics failed to inform an institutional strategic plan. Journal of Educational Technology & Society, 15(3), 149–163. Retrieved from http://www.ifets.info/journals/15_3/11.pdf
  • Dawson, S., McWilliam, E., & Tan, J. P. L. (2008). Teaching smarter: How mining ICT data can inform and improve learning and teaching practice. Proceedings Ascilite Melbourne 2008. Retrieved from http://ascilite.org.au/conferences/melbourne08/procs/dawson.pdf
  • Robertson, D. L. (1999). Professors’ perspectives on their teaching: A new construct and developmental model. Innovative Higher Education, 23(4), 271–294. https://doi.org/10.1023/A:1022982907040
  • Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. American Association for Higher Education Bulletin, 39(7), 3–7. Retrieved from http://eric.ed.gov/?id=ED282491


Instructor: John Fritz,
University of Maryland, Baltimore County

T7: Do I Know the Right Path? An Intuitive Crash Course on Applying Statistical Tests on Your Datasets

Title: Do I Know the Right Path? An Intuitive Crash Course on Applying Statistical Tests on Your Datasets

Description: Applying statistical tests to datasets is an inevitable step before drawing any conclusions. However, there is always a question of which test to use, as different datasets need different tests. In this workshop/tutorial, we will start with a basic recap of statistics and then gradually expand on showing which tests suit which tasks. To be more specific, after providing some hints on R and a brief review of statistical concepts, we will focus on how to leverage different parametric and non-parametric tests. Time permitting, we will continue by covering analysis topics such as latent variable analysis, regression, and correlation. In this session, the emphasis is less on statistical principles and more on intuitively making sense of applying statistical techniques.
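
As a preview of the "which test?" decision, here is a minimal sketch (in Python for brevity; the session itself works in R) with illustrative scores from two course sections:

```python
# A minimal sketch with illustrative scores from two course sections.
from scipy import stats

section_a = [72, 85, 78, 90, 66, 81, 74, 88]
section_b = [65, 70, 62, 75, 68, 71, 60, 73]

# Check the normality assumption before choosing a test
normal = all(stats.shapiro(s).pvalue > 0.05 for s in (section_a, section_b))

if normal:
    result = stats.ttest_ind(section_a, section_b)       # parametric
else:
    result = stats.mannwhitneyu(section_a, section_b)    # non-parametric
print(result)
```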

Activities
1. Participants will be introduced to the R language, and a brief review of R operations will be provided. It is best if they have already installed R on their laptops.
2. Some concepts in statistics will be reviewed.
3. Parametric and non-parametric tests will be applied to a set of examples. The goal is to intuitively discuss the examples.

Prerequisite Skills/Experience: The tutorial is designed for graduate students, and anyone interested in applying statistics to their research data may attend. Given the intuitive nature of the session, there is no strict need for specific prior knowledge in statistics, although a basic level of familiarity would help. You will leave with an understanding of how to apply statistical tests in research and, consequently, how to design experiments so that proper analysis can be performed afterwards.

Advanced preparation:

  • Bring a laptop with R installed.
  • Here is an introduction to R, which is useful for getting to know how to install R: https://cran.r-project.org/doc/manuals/R-intro.pdf


Instructor: Hootan Rashtian, University of British Columbia

 