Constructing the world: some notes and thoughts for an active learning paradigm

Every man’s world picture is and always remains a construct of his mind and cannot be proved to have any other existence.

Erwin Schrödinger

Constructivism

Constructivism essentially argues that epistemic agents know nothing about the world except what they have put together as cognitive structures. Rather than knowledge being a representation of what exists, constructivists posit knowledge as a mapping of what turns out to be feasible based on human experience. Piaget considers knowledge to be the set of cognitive structures that are viable given our experiences and that maintain a dynamic state of equilibrium in which knowledge yields expected results for experience. Piaget elaborates a two-fold instrumentalism for knowledge: (1) a utilitarian instrumentality, where action schemes at the sensory-motor level help learners achieve goals in their interactions with the world; and (2) an epistemic instrumentality, where operative schemes at the level of reflective abstraction create a coherent conceptual network that reflects the paths of acting and thinking that are viable. Learning occurs when an existing scheme produces unexpected results and leads to perturbations (von Glasersfeld, 1989).

Self-regulated Learning - Piagetian Constructivism (1)


Continue reading

What self-regulation can tell us about analytics

Learners who see technologies as beneficial to learning are likely to use them less, while those with poor strategies for learning will use them more. This may appear counterintuitive and challenges the common perceptions used to judge the success of learning technologies, but it essentially insists on quality rather than quantity of interaction, which makes sense. The findings behind this emerge from the literature on self-regulated learning and cognitive engagement, which highlights several non-linear relationships between perception, learning strategies and the use of tools. In this article I will summarise three pieces of research and discuss their implications for common metrics used to evaluate learning technologies.

Winne and Hadwin (1998) argue that studying can be distinguished from learning and ‘compels students to engage in complex bundles of goal-directed cognitive and motivational processes’ (pg. 278). Self-regulated studying is shown to be more effective than ordinary studying, where self-regulation involves a four-stage process: (1) definition of the task; (2) goal setting and planning; (3) enactment; and (4) adaptation. The stages are recursive – products and evaluations from one stage feed into the next – and weakly sequenced – stages can overlap, be skipped, or revisited – and while these are expected to leave traces in the form of products, the recursive nature of the system makes this complex to research. Davidson and Sternberg (1998) argue that learners, or experts, who apply effective metacognitive strategies spend more time in the planning stage and consequently less time in the enactment stage – they are also likely to make more effective evaluations and adaptations because of the standards identified during planning. Less skilled problem solvers spend more time in the enactment stage due to reduced planning, often caused by insufficient domain or metacognitive knowledge. Unless the novice quits the task in frustration, expert problem solvers are likely to spend less time on the problem overall and achieve better results.

Approaches to study and teaching (Richardson) - Winne & Hadwin SRL

Winne and Hadwin, 2008

Continue reading

Neuroscience and Educational Technology – reflecting on seminar by @thinksitthrough

I was fortunate to be able to attend Professor Diana Laurillard’s seminar on Learning ‘number sense’ through digital games with intrinsic feedback. The main presentation covered the use of a specially designed game as an intervention to support students with ‘dyscalculia’; however, this was positioned in the broader context of bringing together the fields of neuroscience and educational research through educational technology.

Dyscalculia is a core deficit in numerosity that typically manifests through a lack of understanding of relationships between numbers and a reliance on counting to solve numerical problems. While this bears no relation to overall intelligence, it persists through life and may cause issues and anxieties through schooling and present challenges to everyday tasks such as counting change or measurement. Individuals will typically develop compensatory strategies. Neuroscience identifies abnormalities in the intraparietal sulci, which usually result in less grey matter and a reduced possibility for connections in the region. Cognitive testing in educational settings is used to distinguish low numeracy from dyscalculia, and cognitive science theorises the learning process based on self-regulation, constructionism and design research.

The intervention designed is a constructivist game that provides the learner with scaffolded progress through levels. The first level uses coloured beads that can be sliced or added together to form the target number. The next level adds numbers (symbols) to the beads, the next removes colour, and the final level uses the number symbols only.

game

Continue reading

Docs, zaps, and rows – starting the #PhD

It’s been a few years in the making, but I’ve started as a part-time PhD student at the University of Melbourne researching engagement profiles and learner analytics. Before I get into a study period I like to get set up with a group of tools and a workflow that let me know I’m in study mode, and hopefully aid the process at the same time. Embarking on a 6-year project suggests I need a little more structure than in my previous studies – for example, using reading logs to remember what I’ve read rather than relying on my brain, which I need to keep free for research or ordering coffee.

I had a simple set of criteria for the tools:

  1. Should be cloud-based (or syncable) – I don’t always use the same laptop
  2. Mobile app preferred particularly for notes and ideas which I may have on the go
  3. Free wherever possible
  4. Very little setup or admin.

The first 2 years in my case are a literature review leading up to confirmation so I’ve devised the following workflow across a range of different platforms:


Continue reading

When are they learning? A #Moodle activity calendar heatmap

As a precursor to looking at withdrawal and drop-out points I wanted to visualise learner activity across an academic year. This could help determine the patterns of behaviour for different individuals: the idea is that you can quickly see at a glance which days each learner is using the system. Extending last week’s exploration with activity heatmaps, I came across the lattice-based time series calendar heatmap, which provides a nice way of plotting this for exploration. It is quite a simple process that requires some date manipulation to create extra calendar classifications. I then changed the facet to show each row as a different user rather than a year.
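The date-manipulation step can be sketched as follows. This is a minimal illustration in pandas rather than the original R/lattice code; the column names (`userid`, `time`) and the sample events are hypothetical, but the idea is the same: aggregate logs to daily counts per user, then derive the extra calendar classifications (month, weekday, week-of-month) needed to lay days out in a calendar grid.

```python
import pandas as pd

# Hypothetical sample of log events: one row per (user, timestamp).
logs = pd.DataFrame({
    "userid": [1, 1, 2],
    "time": pd.to_datetime(["2015-01-05 09:00", "2015-01-05 14:00",
                            "2015-02-10 11:00"]),
})

# Daily activity counts per user.
daily = (logs.assign(date=logs["time"].dt.normalize())
             .groupby(["userid", "date"]).size()
             .reset_index(name="hits"))

# Extra calendar classifications used to position each day in the grid.
daily["month"] = daily["date"].dt.month_name()
daily["weekday"] = daily["date"].dt.day_name()
# Week-of-month: which row of the month grid the day falls in
# (weeks starting on Monday).
first_weekday = daily["date"].dt.to_period("M").dt.start_time.dt.weekday
daily["week_of_month"] = (daily["date"].dt.day + first_weekday - 1) // 7 + 1
```

From here a facet per user (rather than per year) with `week_of_month` and `weekday` as the grid axes gives one calendar row per learner.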

Calendar Heatmap

calendar_heatmap

In the calendar heatmap each row is a learner and the grid shows the spread of daily activity across each month in a familiar calendar format. The visualisation quickly reveals patterns such as activity tailing off in the final months for Student4, the extended Easter holiday during April for Student8, and the late starter or crammer that is Student10. A couple of students also broke the usual Christmas learning abstinence and logged in during the holidays. A few variants are possible by playing with the facet or applying it to different summaries of the log data – for example, a facet on activity types within a course, or activity participation for a single learner – which I may explore in future.

Continue reading

Where is your learning activity? A #Moodle component heatmap.

Understanding which courses use which tools is a useful starting point for exploration and may inform staff development programmes or be used in conjunction with course observations. The Moodle Guide for Teachers, for example, could help form an understanding of the tools in question. I’m interested in the exploration side, having started a new project in the last week with some former colleagues. We’re exploring what can be learned from learner data, so if I know where different types of activity are happening then I can drill down into these areas.

I’m using an idea I picked up from Flowing Data to create a heat map of tool use in a Moodle LMS by category. The heat map visualisation sits nicely alongside the existing tool guide, so it seems a good approach. The Moodle site has recently been upgraded, so the dataset contains old-style logs (mdl_log) and new-style logs (mdl_logstore_standard_log), and the data extraction and wrangling has to account for both formats. Then it is a case of manipulating the data into the heat map format.
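Combining the two log formats can be sketched along these lines. This is a minimal illustration, assuming a simplified copy of the two tables in SQLite with only the columns needed here; the real Moodle tables carry many more columns, and the `mod_` prefix handling is one plausible way to reconcile the component naming, not necessarily how the original extraction was done.

```python
import sqlite3
import pandas as pd

# Hypothetical in-memory copy of the two log tables. The two formats name
# their columns differently, so each SELECT aliases them to a common shape
# before the UNION.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mdl_log "
            "(course INTEGER, module TEXT, time INTEGER)")
con.execute("CREATE TABLE mdl_logstore_standard_log "
            "(courseid INTEGER, component TEXT, timecreated INTEGER)")
con.execute("INSERT INTO mdl_log VALUES (1, 'forum', 1420000000)")
con.execute("INSERT INTO mdl_logstore_standard_log "
            "VALUES (1, 'mod_quiz', 1430000000)")

events = pd.read_sql_query("""
    SELECT course AS courseid, module AS component, time AS timecreated
    FROM mdl_log
    UNION ALL
    SELECT courseid,
           REPLACE(component, 'mod_', '') AS component,
           timecreated
    FROM mdl_logstore_standard_log
""", con)

# Learner activity counts per course and tool: the input to the heat map.
heat = events.groupby(["courseid", "component"]).size().unstack(fill_value=0)
```

The resulting course-by-tool matrix is what gets rendered as the heat map.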

Heat Map

heatmap

I’ve focused on learner activity within each type of tool rather than the number of tools in a course. The intention is to show the distribution of learner activity. It clearly shows the dominance of resource and assessment type tools, as well as some pockets of communication and collaboration. In this instance the values are skewed by the large number of resource-based activities and the dominance of a single department in terms of activity numbers, which can be seen in the bar chart below. However, the technique can be applied to comparing courses within a department or comparing users within a course, which may share more similar scales.

barplot

Continue reading

Assignment engagement timeline – starting with basics @salvetore #mootau15 #moodle #learninganalytics

Having joined the assessment analytics working group for Moodle Moot AU this year, I thought I’d have a play around with the feedback event data and its relation to future assignments. The simplified assumption to explore is that learners who view their feedback are enabled to perform better in subsequent assignments, which may be a reduction of potentially more complex distance-travelled-style analytics. To get started exploring the data I have produced a simple timeline that shows the frequency of assignment views within a course based on the following identified statuses of the submission:

  1. Pre-submission includes activities when the learner is preparing a submission
  2. Submitted includes views after submission but before receiving feedback (possibly anxious about results)
  3. Graded includes feedback views once the assignment is graded
  4. Resubmission includes activities that involve the learner resubmitting work if allowed

The process I undertook was to sort the log data into user sequences and use a function to set the status based on preceding events. For example, once the grade is released then count subsequent views as ‘graded’. This gives an idea of the spread and frequency of assignment engagement.
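The status-tagging pass can be sketched as below. This is a simplified illustration, not the original code: the event names (`viewed`, `submitted`, `graded`) are stand-ins for the actual Moodle event strings, and it handles a single learner’s pre-sorted event sequence for one assignment.

```python
# Simplified event stream for one learner on one assignment, sorted by time.
# Event names here are illustrative, not the exact Moodle event strings.
events = [
    ("2015-03-01", "viewed"),
    ("2015-03-05", "submitted"),
    ("2015-03-06", "viewed"),
    ("2015-03-10", "graded"),
    ("2015-03-11", "viewed"),
]

def label_views(events):
    """Tag each 'viewed' event with a status based on preceding events."""
    status, labelled = "pre-submission", []
    for when, kind in events:
        if kind == "submitted":
            # A submission after grading starts a resubmission phase.
            status = "resubmission" if status == "graded" else "submitted"
        elif kind == "graded":
            status = "graded"
        else:
            labelled.append((when, status))
    return labelled

labelled = label_views(events)
# e.g. [('2015-03-01', 'pre-submission'),
#       ('2015-03-06', 'submitted'),
#       ('2015-03-11', 'graded')]
```

Running this per user over the sorted logs yields the per-day, per-status view counts behind the timeline.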

Timeline

Timeline

The timeline uses days on the x-axis and users on the y-axis. Each point plotted represents when events were logged for each learner – coloured by status and sized according to the frequency on that day. There are a few noticeable vertical blue lines which correspond to feedback release dates (i.e. many learners view feedback immediately on its release), and you start to get an idea that some learners view feedback much more than others. The pattern of yellow points reveals learners who begin preparing for their assignment early, contrasted with those who cram a lot of activity closer to deadlines. I have zoomed into a subset of the learners below to help show this.

Timeline-zoomed
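The plotting itself can be sketched with matplotlib. This is a minimal stand-in for the actual chart: the sample rows, the colour mapping and the point-size scaling are all assumptions for illustration, but the encoding matches the description above – day on x, learner on y, colour for status, size for daily frequency.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

# Hypothetical daily summary: (day-of-course, learner index, status, views).
rows = [
    (3, 0, "pre-submission", 2),
    (8, 1, "submitted", 1),
    (10, 0, "graded", 5),
    (10, 1, "graded", 3),
]
colours = {"pre-submission": "gold", "submitted": "grey",
           "graded": "royalblue", "resubmission": "green"}

fig, ax = plt.subplots()
for day, learner, status, count in rows:
    # One point per learner-day, coloured by status, sized by frequency.
    ax.scatter(day, learner, s=30 * count, c=colours[status], alpha=0.7)
ax.set_xlabel("Day")
ax.set_ylabel("Learner")
fig.savefig("assignment_timeline.png")
```

With real data, many graded points sharing the same x value produce the vertical blue lines visible at feedback release dates.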

Having put this together quickly, I am hoping to have some time to refine the visualisation to better identify some of the relationships between assignments. I could also bring in some data from the assignment tables to enrich this, having limited myself to event data in the logs thus far. Some vertical bars showing deadlines, for example, might be helpful, or timelines for individual users with assignments on the y-axis to see how often users return to previous feedback across assignments, as shown below. Here you can see the very distinct line of a feedback release; for formative assessment, it may have been better learning design to release feedback more regularly and closer to the submission.

timeline-learner

Continue reading