What self-regulation can tell us about analytics

Learners who see technologies as beneficial to learning are likely to use them less, while those with poor strategies for learning will use them more. This may appear counter-intuitive, and it challenges common perceptions used to judge the success of learning technologies, but it essentially insists on quality rather than quantity of interaction, which makes sense. These findings emerge from the literature on self-regulated learning and cognitive engagement, which highlights several non-linear relationships between perception, learning strategies and the use of tools. In this article I will summarise three pieces of research and discuss their implications for common metrics used to evaluate learning technologies.

Winne and Hadwin (1998) argue that studying can be distinguished from learning and ‘compels students to engage in complex bundles of goal-directed cognitive and motivational processes’ (p. 278). Self-regulated studying is demonstrated to be more effective than ordinary studying, where self-regulation involves a four-stage process: (1) definition of the task; (2) goal setting and planning; (3) enactment; and (4) adaptation. The stages are recursive – products and evaluations from one stage feed into the next – and weakly sequenced – stages can overlap, be skipped, or be revisited – and while these are expected to leave traces in the form of products, the recursive nature of the system makes this complex to research. Davidson and Sternberg (1998) argue that learners who apply effective metacognitive strategies, like experts, spend more time in the planning stage and as a result less time in the enactment stage – they are also likely to make more effective evaluations and adaptations because of standards identified during planning. Less skilled problem solvers spend more time in the enactment stage due to reduced planning, often caused by insufficient domain or metacognitive knowledge. Unless the novice quits the task out of frustration, expert problem solvers are likely overall to spend less time on the problem and achieve better results.

Approaches to study and teaching (Richardson) mapped to Winne and Hadwin’s (2008) model of self-regulated learning

Continue reading

Neuroscience and Educational Technology – reflecting on seminar by @thinksitthrough

I was fortunate to be able to attend Professor Diana Laurillard’s seminar on Learning ‘number sense’ through digital games with intrinsic feedback. The main presentation covered the use of a specially designed game as an intervention to support students with ‘dyscalculia’; however, this was positioned in the broader context of bringing together the fields of neuroscience and educational research through educational technology.

Dyscalculia is a core deficit in numerosity that typically manifests as a lack of understanding of relationships between numbers and a reliance on counting to solve numerical problems. While it bears no relation to overall intelligence, it persists through life and may cause issues and anxieties throughout schooling as well as challenges in everyday tasks such as counting change or measurement. Individuals will typically develop compensatory strategies. Neuroscience identifies abnormalities in the intraparietal sulci, which usually involve less grey matter and reduced potential for connections in the region. Cognitive testing in educational settings is used to distinguish low numeracy from dyscalculia, and cognitive science theorises the learning process based on self-regulation, constructionism and design research.

The intervention is a constructionist game that provides the learner with scaffolded progress through levels. The first level uses coloured beads that can be split or added together to form the target number. The next level adds number symbols to the beads, the next removes the colour, and the final level uses the number symbols only.

game

Continue reading

Docs, zaps, and rows – starting the #PhD

It’s been a few years in the making, but I’ve started as a part-time PhD student at the University of Melbourne researching engagement profiles and learning analytics. Before I get into a study period I like to get set up with a group of tools and a workflow that let me know I’m in study mode, and hopefully aid the process at the same time. Embarking on a 6-year project has convinced me I need a little more structure than in my previous studies – for example, using reading logs to remember what I’ve read rather than relying on my brain, which I need to keep free for research or ordering coffee.

I had a simple set of criteria for the tools:

  1. Should be cloud-based (or syncable) – I don’t always use the same laptop
  2. Mobile app preferred particularly for notes and ideas which I may have on the go
  3. Free wherever possible
  4. Very little setup or admin.

The first 2 years in my case are a literature review leading up to confirmation, so I’ve devised the following workflow across a range of different platforms:

Workflow diagram

Continue reading

When are they learning? A #Moodle activity calendar heatmap

As a precursor to looking at withdrawal and drop-out points I wanted to visualise learner activity across an academic year. This could help determine patterns of behaviour for different individuals, the idea being that you can see at a glance on which days each learner is using the system. Extending last week’s exploration with activity heatmaps, I came across the lattice-based time series calendar heatmap, which provides a nice way of plotting this for exploration. The process is quite simple: some date manipulation to create extra calendar classifications, then a change to the facet to show each row as a different user rather than a year.
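For anyone wanting to try something similar, here is a minimal sketch of the wrangling and plot – using ggplot2 rather than the original lattice recipe, and assuming a logs data frame with illustrative userid and timecreated columns:

```r
library(dplyr)
library(ggplot2)

# Assumes a 'logs' data frame of Moodle events with illustrative columns
# userid and timecreated (POSIXct), covering one academic year, and an
# English locale for day and month names.
daily <- logs %>%
  mutate(date = as.Date(timecreated)) %>%
  count(userid, date, name = "events") %>%
  mutate(
    month = factor(months(date), levels = month.name),
    wday  = factor(weekdays(date),
                   levels = c("Monday", "Tuesday", "Wednesday", "Thursday",
                              "Friday", "Saturday", "Sunday")),
    week  = as.integer(format(date, "%W"))   # week of year
  ) %>%
  group_by(month) %>%
  mutate(week_of_month = week - min(week) + 1) %>%   # calendar column within the month
  ungroup()

# The facet change: one row per learner (rather than per year), one column per month
ggplot(daily, aes(week_of_month, wday, fill = events)) +
  geom_tile(colour = "white") +
  facet_grid(userid ~ month) +
  scale_fill_gradient(low = "lightyellow", high = "darkred") +
  labs(x = "Week of month", y = NULL, fill = "Events")
```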

Calendar Heatmap


In the calendar heatmap each row is a learner and the grid shows the spread of daily activity across each month in a familiar calendar format. The visualisation quickly reveals patterns such as activity tailing off in the final months for Student4, the extended Easter holiday during April for Student8, and the late starter or crammer that is Student10. A couple of students also broke the usual Christmas learning abstinence and logged in during the holidays. There are a few possible variants, achieved by playing with the facet or applying the technique to different summaries of the log data – for example, a facet on activity types within a course, or activity participation for a single learner – that I may explore in future.

Continue reading

Where is your learning activity? A #Moodle component heatmap.

Understanding which courses use which tools is a useful starting point for exploration and may be informative to staff development programmes or used in conjunction with course observations. The Moodle Guide for Teachers, for example, could be used to help form an understanding of the tools in question. I’m interested in the exploration side, having started a new project in the last week with some former colleagues. We’re exploring what can be learned from learner data, so if I know where different types of activity are happening I can drill down into these areas.

I’m using an idea I picked up from Flowing Data to create a heat map of tool use in a Moodle LMS by category. The heat map visualisation sits nicely with the existing tool guide, so it seems a good approach. The Moodle site has recently been upgraded, so the dataset contains both old-style logs (mdl_log) and new-style logs (mdl_logstore_standard_log) and the data extraction and wrangling has to account for both formats. Then it is a case of manipulating the data into the heat map format.
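A sketch of the extraction step, assuming read access to the Moodle database – the connection details are illustrative, while the table and column names follow the standard Moodle schemas:

```r
library(DBI)
library(dplyr)

# Illustrative connection – adjust driver, credentials and table prefix
con <- dbConnect(RMariaDB::MariaDB(), dbname = "moodle")

# Old-style log: the tool is named in the 'module' column
old_logs <- dbGetQuery(con, "
  SELECT course, module AS component, time AS timecreated
  FROM mdl_log")

# New-style standard log: the tool is a prefixed component, e.g. 'mod_forum'
new_logs <- dbGetQuery(con, "
  SELECT courseid AS course,
         REPLACE(component, 'mod_', '') AS component,
         timecreated
  FROM mdl_logstore_standard_log
  WHERE component LIKE 'mod_%'")

# Combine both formats and count events per course and tool,
# ready to reshape for the heat map
logs <- bind_rows(old_logs, new_logs)
heat <- count(logs, course, component, name = "events")
```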

Heat Map


I’ve focused on learner activity within each type of tool rather than the number of tools in a course, the intention being to show the distribution of learner activity. It clearly shows the dominance of resource and assessment type tools, as well as some pockets of communication and collaboration. In this instance the values are skewed by the large number of resource-based activities and the dominance of a single department in terms of activity numbers, which can be seen in the bar chart below. However, the technique can be applied to comparing courses within a department or comparing users within a course, which may share more similar scales.

barplot

Continue reading

Assignment engagement timeline – starting with basics @salvetore #mootau15 #moodle #learninganalytics

Having joined the assessment analytics working group for Moodle Moot AU this year, I thought I’d have a play around with the feedback event data and its relation to future assignments. The simplified assumption to explore is that learners who view their feedback are enabled to perform better in subsequent assignments, which may be a reduction of potentially more complex ‘distance travelled’ style analytics. To get started exploring the data I have produced a simple timeline that shows the frequency of assignment views within a course based on the following identified statuses of the submission:

  1. Pre-submission includes activities when the learner is preparing a submission
  2. Submitted includes views after submission but before receiving feedback (possibly anxious about results)
  3. Graded includes feedback views once the assignment is graded
  4. Resubmission includes activities that involve the learner resubmitting work if allowed

The process I undertook was to sort the log data into user sequences and use a function to set the status based on preceding events – for example, once the grade is released, subsequent views are counted as ‘graded’. This gives an idea of the spread and frequency of assignment engagement.
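A minimal sketch of that function, with illustrative column names (in the standard log an assignment id can be derived from contextinstanceid):

```r
library(dplyr)

# Tag each event with its submission status based on preceding events.
# 'events' is assumed to hold one row per logged assignment event with
# illustrative columns userid, assignmentid, eventname and timecreated;
# the event names are Moodle standard-log ones and matching is simplified.
tag_status <- function(events) {
  events %>%
    arrange(userid, assignmentid, timecreated) %>%
    group_by(userid, assignmentid) %>%
    mutate(
      n_submitted = cumsum(grepl("assessable_submitted", eventname)),
      n_graded    = cumsum(grepl("submission_graded", eventname)),
      status = case_when(
        n_submitted == 0       ~ "Pre-submission",
        n_graded == 0          ~ "Submitted",
        n_submitted > n_graded ~ "Resubmission",   # submitted again after grading
        TRUE                   ~ "Graded"
      )
    ) %>%
    ungroup()
}
```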

Timeline


The timeline uses days on the x-axis and users on the y-axis. Each point represents events logged for a learner on a given day – coloured by status and sized according to the frequency on that day. There are a few noticeable vertical blue lines which correspond to feedback release dates (i.e. many learners view feedback immediately on its release), and you start to get an idea that some learners view feedback much more than others. The pattern of yellow points reveals learners who begin preparing for their assignment early, contrasted with those who cram a lot of activity closer to deadlines. I have zoomed into a subset of the learners below to help show this.

Timeline-zoomed
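The plotting step itself is short; here is a sketch using the output of the tagging function above:

```r
library(dplyr)
library(ggplot2)

# 'tagged' is the output of tag_status() above; summarise views per
# learner, day and status, then plot one point per combination
daily <- tagged %>%
  mutate(day = as.Date(timecreated)) %>%
  count(userid, day, status, name = "views")

ggplot(daily, aes(day, factor(userid), colour = status, size = views)) +
  geom_point(alpha = 0.7) +
  labs(x = "Day", y = "Learner", colour = "Status", size = "Views")
```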

Having put this together quickly, I am hoping to find some time to refine the visualisation to better identify some of the relationships between assignments. I could also bring in data from the assignment tables to enrich this, having limited myself to event data from the logs thus far. Some vertical bars showing deadlines, for example, might be helpful, as might timelines for individual users with assignments on the y-axis to see how often users return to previous feedback across assignments, as shown below. Here you can see the very distinct line of a feedback release; for this formative assessment it may have been better learning design to release feedback more regularly and closer to submission.

timeline-learner

Continue reading

Can activity analytics support understanding engagement as a measurable process? Inspiration from @birdahonk

I was pleased to find out my revised paper on this topic was accepted for publication in the September issue of the Journal of Applied Research in Higher Education. The basic premise of the paper is that engagement can be measured as a metric through the appropriation of ideas commonly used in social marketing metrics. In this post I’ll briefly discuss how I approached this by presenting engagement as a learning theory (using the ideas of Freire and Vygotsky), as a process, and as a metric. I’ll also share my workshop slides from the conference if you want to try to create your own learner engagement profile. While I’ve started looking into different approaches, this post summarises some of the key principles developed throughout the paper that have guided my thinking on engagement.

Engagement as a learning theory

The paper proposes a concept of engagement that draws on the work of Paulo Freire and Lev Vygotsky and the evolution of the learner voice. The first aspect of this is to re-position the learner as the subject within education rather than the object of education, supplanting previous models which portray the learner as a passive recipient of pre-packaged knowledge. The second aspect is understanding the learner voice as a creative (Freire) and spontaneous (Vygotsky) expression within a socialised teaching-learning process that supports dialectical interactions between learner and teacher curiosity. This positions engagement as the process of recognising and respecting the learner’s world – which, as Freire reveals, is after all the ‘primary and inescapable face of the world itself’ – in order to support the development of higher-order thinking skills. The repression of this voice is likely to result in patterns of inertia, non-engagement and alienation that are discussed widely in the motivation and engagement literature. This triangulation between motivation and engagement remains a theme central to a range of learning analytics research, and the correlation between learning and autonomy is an interesting area in its own right.

Engagement as Process

For the paper I used Haven’s engagement process model and overlaid it with concepts from the engagement literature reviews by Fredricks, Blumenfeld, & Paris (2004) and Trowler (2010). Haven posits that engagement is the new metric that supersedes previous linear metaphors, encompassing the quantitative data of site visits, the qualitative data of surveys and performance, as well as the fuzzy data in between that represents social media. Haven and Vittal elaborate this into an expansive process that links four components of engagement – involvement, interaction, intimacy, and influence – through the key stages of discovery, evaluation, use, and affinity (see below). Mapping this onto the educational research of Fredricks et al., one can treat involvement and interaction as behavioural engagement, intimacy as emotional engagement, and influence as cognitive engagement. Furthermore, when considering whether engagement is high or low in each component, Trowler’s categorisation of negative engagement, non-engagement, and positive engagement can be adopted.

Engagement Process

Engagement as a metric

Learner Dashboard

The goal of positioning this as a metric was to create a learner engagement profile, similar to Haven’s engagement profile for marketing. I used Stevenson’s (2008) Pedagogical Model and Conole and Fill’s (2005) Task Type Taxonomy as ways of classifying log data, and social network analysis to understand interactions between the different course actors. These were used to form dashboards, such as the example above, that could then be used to understand profiles, such as the one below (name fictionalised). One insight is that where simple raw VLE data might have suggested an engaged learner who is regularly online and features centrally in discussions, the engagement profile reveals the possibility of a learner who lacks academic support during their time online (evenings) and demonstrates a pattern of alienation based on an apparently strategic approach within an environment that is heavily structured through teacher-led inscription. Given the number of users who have not logged in or have yet to post to the discussions, it might also seem sensible to target other learners for engagement interventions; however, this would miss opportunities, revealed in the engagement profile, to provide useful support interventions targeting improved learner voice.

Engagement Profile
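As a flavour of the classification step, here is a hypothetical lookup from Moodle components to task types – the labels loosely follow Conole and Fill’s taxonomy and the mapping actually used in the paper may differ:

```r
library(dplyr)
library(tibble)

# Hypothetical mapping of Moodle components to task types
task_types <- tribble(
  ~component,  ~task_type,
  "resource",  "Assimilative",
  "quiz",      "Assessment",
  "assign",    "Assessment",
  "forum",     "Communicative",
  "wiki",      "Productive"
)

# Classify each logged event by joining on the component name
classified <- left_join(logs, task_types, by = "component")
```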

Continue reading