Artificial Intelligence (AI) creates a dilemma for education: change is on the horizon that will demand new jobs and new skills, but we don’t know what those future jobs and skills will be. Will there be fewer jobs? Will there be more jobs? Will the sharing economy evolve? Will we have more leisure time? Professor Rose Luckin, from the UCL Knowledge Lab, gave an insightful seminar at the Centre for Research in Assessment and Digital Learning (CRADLE) on how we can start preparing education for this unknown future.
> Every man’s world picture is and always remains a construct of his mind and cannot be proved to have any other existence.
Constructivism essentially argues that epistemic agents know nothing about the world except what they have put together as cognitive structures. Rather than knowledge being a representation of what exists, constructivists posit knowledge as a mapping of what turns out to be feasible based on human experience. Piaget considers knowledge as the set of cognitive structures that are viable given our experiences and maintain a dynamic state of equilibrium in which knowledge yields expected results for experience. Piaget elaborates a two-fold instrumentalism for knowledge: (1) a utilitarian instrumentality where action schemes at the sensory-motor level help learners achieve goals in their interactions with the world; and (2) an epistemic instrumentality where operative schemes at the level of reflective abstraction create a coherent conceptual network that reflects the paths of acting and thinking that are viable. Learning occurs when an existing scheme produces unexpected results and leads to perturbations (von Glasersfeld, 1998).
Learners who see technologies as being beneficial to learning are likely to use them less, while those with poor strategies for learning will use them more. This may appear counter-intuitive and challenges common perceptions used to judge the success of learning technologies, but it essentially argues for quality of interaction over quantity, which makes sense. The findings behind this emerge from the literature on self-regulated learning and cognitive engagement, which highlights several non-linear relationships between perception, learning strategies and the use of tools. In this article I will summarise three pieces of research and discuss their implications for common metrics used to evaluate learning technologies.
Winne and Hadwin (1998) argue that studying can be distinguished from learning and ‘compels students to engage in complex bundles of goal-directed cognitive and motivational processes’ (p. 278). Self-regulated studying is demonstrated to be more effective than normal studying, where self-regulation involves a four-stage process: (1) definition of the task; (2) goal setting and planning; (3) enactment; and (4) adaptation. The stages are recursive – products and evaluations from one stage feed into the next – and weakly sequenced – stages can overlap, be skipped, or revisited – and while these are expected to leave traces in the form of products, the recursive nature of the system makes this complex to research. Davidson and Sternberg (1998) argue that learners, or experts, who apply effective metacognitive strategies spend more time in the planning stage and as a result less time in the enactment stage – they are also likely to make more effective evaluations and adaptations because of standards identified during planning. Less skilled problem solvers spend more time in the enactment stage due to reduced planning, often caused by insufficient domain or metacognitive knowledge. Unless the novice learner quits the task in frustration, expert problem solvers are likely, overall, to spend less time on the problem and achieve better results.
*Winne and Hadwin, 2008*
I was fortunate to be able to attend Professor Diana Laurillard’s seminar on Learning ‘number sense’ through digital games with intrinsic feedback. The main focus of the presentation was a specially designed game used as an intervention to support students with dyscalculia; this was positioned in the broader context of bringing together the fields of neuroscience and educational research through educational technology.
Dyscalculia is a core deficit in numerosity that typically manifests through a lack of understanding of relationships between numbers and a reliance on counting to solve numerical problems. While this bears no relation to overall intelligence, it persists through life and may cause issues and anxieties throughout schooling, and present challenges to everyday tasks such as counting change or measurement. Individuals will typically develop compensatory strategies. Neuroscience identifies abnormalities in the intraparietal sulci, which usually result in less grey matter and a reduced possibility for connections in the region. Cognitive testing in educational settings is used to distinguish low numeracy from dyscalculia, and cognitive science theorises the learning process based on self-regulation, constructionism and design research.
The intervention is a constructivist game that provides the learner with scaffolded progress through levels. The first level uses coloured beads that can be split or combined to form the target number. The next level adds number symbols to the beads, the next removes colour, and the final level uses the number symbols only.
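The scaffolded progression can be pictured as a level configuration in which one representational cue is faded at a time. The structure and field names below are my own illustration, not the actual game’s design:

```python
# Illustrative sketch of the scaffolded level progression.
# The Level structure and field names are assumptions for illustration,
# not the actual game's internals.
from dataclasses import dataclass

@dataclass
class Level:
    name: str
    show_beads: bool     # concrete manipulables the learner splits/combines
    show_colour: bool    # colour cues distinguishing bead groups
    show_numerals: bool  # number symbols attached to the representation

LEVELS = [
    Level("beads only",          show_beads=True,  show_colour=True,  show_numerals=False),
    Level("beads with numerals", show_beads=True,  show_colour=True,  show_numerals=True),
    Level("no colour",           show_beads=True,  show_colour=False, show_numerals=True),
    Level("numerals only",       show_beads=False, show_colour=False, show_numerals=True),
]
```

Reading down the list, each level removes one form of concrete support until only the abstract symbols remain, which is the essence of the scaffolding described above.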
Been a few years in the making, but I’ve started as a part-time PhD student at University of Melbourne researching engagement profiles and learner analytics. Before I get into a study period I like to get set up with a group of tools and a workflow that let me know I’m in study mode, and hopefully aid the process at the same time. Embarking on a 6-year project has led me to conclude I need a little more structure than in my previous studies – for example, using reading logs to remember what I’ve read rather than relying on my brain, which I need to keep free for research or ordering coffee.
I had a simple set of criteria for the tools:
- Should be cloud-based (or syncable) – I don’t always use the same laptop
- Mobile app preferred particularly for notes and ideas which I may have on the go
- Free wherever possible
- Very little setup or admin.
The first 2 years in my case are a literature review leading up to confirmation so I’ve devised the following workflow across a range of different platforms:
As a precursor to looking at withdrawal and drop-out points I wanted to visualise learner activity across an academic year. This could help determine the patterns of behaviour for different individuals, the idea being that you can see at a glance which days each learner is using the system. Extending last week’s exploration with activity heatmaps, I came across the lattice-based time series calendar heatmap, which provides a nice way of plotting this for exploration. It is quite a simple process that requires some date manipulation to create extra calendar classifications. Then I changed the facet to show each row as a different user rather than a year.
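The date manipulation amounts to deriving calendar classifications (month, weekday, a week index within the month) from raw timestamps, then counting daily hits per learner. The original used R’s lattice; the pandas sketch below is an illustrative equivalent, and the column names (`user`, `timestamp`) are assumptions:

```python
# Sketch of the date wrangling behind a per-user calendar heatmap.
# Column names (user, timestamp) are assumed; adapt to your log export.
import pandas as pd

def calendar_counts(log: pd.DataFrame) -> pd.DataFrame:
    """Daily activity counts per user, with extra calendar classifications."""
    df = log.copy()
    df["date"] = pd.to_datetime(df["timestamp"]).dt.normalize()
    daily = df.groupby(["user", "date"]).size().reset_index(name="hits")
    # Calendar fields used to lay the days out in a month grid.
    daily["month"] = daily["date"].dt.month_name()
    daily["weekday"] = daily["date"].dt.day_name()
    # A simple week index within the month (1 = days 1-7, 2 = days 8-14, ...).
    daily["week_of_month"] = (daily["date"].dt.day - 1) // 7 + 1
    return daily
```

The plotting step would then facet on `user`, giving one calendar row per learner instead of one per year.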
In the calendar heatmap each row is a learner and the grid shows the spread of daily activity across each month in a familiar calendar format. The visualisation quickly reveals patterns such as activity tailing off in the final months for Student4, the extended Easter holiday during April for Student8, and the late starter or crammer that is Student10. A couple of students also broke the usual Christmas learning abstinence and logged in during the holidays. There are a few possible variants, achieved by playing with the facet or applying it to different summaries of the log data – for example, a facet on activity types within a course, or activity participation for a single learner – that I may explore in future.
Understanding which courses use which tools is a useful starting point for exploration and may be informative to staff development programs or used in conjunction with course observations. The Moodle Guide for Teachers, for example, could be used to help form an understanding of the tools in question. I’m interested in the exploration side, having started a new project in the last week with some former colleagues. We’re exploring what can be learned from learner data and so if I know where different types of activity are happening then I can drill-down into these areas.
I’m using an idea I picked up from Flowing Data to create a heat map of tool use in a Moodle LMS by category. The heat map visualisation sits nicely with the existing tool guide, so it seems a good approach. The Moodle site has been recently upgraded, so the dataset has old-style logs (mdl_log) and new-style logs (mdl_logstore_standard_log), and the data extraction and wrangling has to account for both formats. Then it is a case of manipulating the data into the heat map format.
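Accounting for both formats means normalising the two schemas to a common shape before counting. A pandas sketch of that wrangling is below; the column mapping follows the standard Moodle tables (`mdl_log` uses `module`/`course`, `mdl_logstore_standard_log` uses `component`/`courseid`), but treat it as an assumption to verify against your own Moodle version:

```python
# Sketch of unifying old- and new-style Moodle logs into one event table,
# then pivoting learner activity into a category x tool grid.
# Column names follow standard Moodle tables but should be verified locally.
import pandas as pd

def unify_logs(old_log: pd.DataFrame, new_log: pd.DataFrame) -> pd.DataFrame:
    old = old_log.rename(columns={"module": "tool", "course": "courseid"})
    new = new_log.copy()
    # New-style components look like "mod_quiz"; strip the prefix to match
    # the old-style module names.
    new["tool"] = new["component"].str.replace("mod_", "", regex=False)
    cols = ["userid", "courseid", "tool"]
    return pd.concat([old[cols], new[cols]], ignore_index=True)

def heatmap_table(events: pd.DataFrame, categories: pd.DataFrame) -> pd.DataFrame:
    """Count events per (category, tool) cell; categories maps courseid -> category."""
    merged = events.merge(categories, on="courseid")
    return merged.pivot_table(index="category", columns="tool",
                              aggfunc="size", fill_value=0)
```

The resulting table is the heat map input: one row per course category, one column per tool, with learner activity counts in the cells.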
I’ve focused on learner activity within each type of tool rather than the number of tools in a course. The intention is to show the distribution of learner activity. It shows clearly the dominance of resource and assessment type tools, as well as some pockets of communication and collaboration. In this instance the values are skewed by the large number of resource-based activities and the dominance of a single department in terms of activity numbers, which can be seen in the bar chart below. However, the technique can be applied to comparing courses within a department or comparing users within a course, which may share more similar scales.