Artificial Intelligence (AI) creates a dilemma for education: change is on the horizon that will require new jobs and new skills, but we don’t know what these future jobs and skills will be. Will there be fewer jobs? Will there be more jobs? Will the sharing economy evolve? Will we have more leisure time? Professor Rose Luckin, from the UCL Knowledge Lab, gave an insightful seminar at the Centre for Research in Assessment and Digital Learning (CRADLE) on how we can start preparing education for this unknown future.
The recent ABC show The AI Race argued that all jobs will be impacted by AI, with possibly as many as 60% of the careers students currently study for not existing in 30 years’ time. This is projected to significantly impact young people, who are already suffering from high underemployment amid rising global youth unemployment. Prof. Luckin argues, however, that as humans, with sentient intelligence ourselves, ‘we get to make the choices’ about AI adoption. On this view, educators have a responsibility to develop learners with the premium skills that AI cannot easily replace: creativity, emotional intelligence, conversation and social interaction. Prof. Luckin’s argument is two-fold. Firstly, Artificial Intelligence is not a technology (e.g. machine learning) but an interdisciplinary science that requires a deep understanding of human intelligence and problem solving. Secondly, human intelligence remains incredibly powerful and uniquely self-aware through metacognition, self-regulation and self-efficacy. Focusing on these human intelligence skills, combined with our knowledge of how to learn, how to teach, and how to research, creates the potential to use AI as a way to better understand ourselves, and this opens huge possibilities for education.
This will require choices about which problems we direct AI to work on. AI has beaten human champions at Go, Chess, and Jeopardy, and has succeeded at designing cancer treatment plans, yet has struggled with general IQ-style intelligence. While the future will need more computer programmers and data scientists to develop AI further, more importantly it will need individuals and organisations across all industries to be better informed about what AI is good for and what it isn’t, as well as the ethical issues involved. AI will surpass humans at pattern matching and classification problems, for example, which holds great promise for areas such as medical imaging, image recognition, and personalised intervention plans. Societal problems, including those affecting education, such as global teacher shortages and increasing inequalities, and human challenges, such as supporting an individual to come to terms with trauma or a diagnosis of a terminal disease, will require humans to work with AI and big data. The first opportunity for education is therefore to emphasise the cognitive and social skills that are distinctly human, alongside teaching how AI can be used to magnify human ability. Learning-to-learn skills will thereby become an increasingly central part of the journey.
The second opportunity for education is the use of AI to support the teaching process. If ‘data is the new oil’, as many suggest, then, Prof. Luckin noted, like oil it is extracted in crude form and requires substantial refinement to be usable. AI’s abilities in pattern detection, classification, and consistently performing repetitive tasks make it a valuable partner for refining research data. This might help teachers focus on the intervention rather than the diagnosis. Prof. Luckin presented two examples from her own research: an interactive multimedia activity developing metacognitive skills, and a face-to-face activity targeting collaborative problem-solving skills. The use of sensors, including clickstreams in the multimedia environment and movement and facial tracking in the classroom, showed how data could be collated across multiple research contexts. AI was used to support an adaptive learning model in the multimedia activity, and to extract, process, and visualise the multiple data sensors in the face-to-face example. In both cases the AI helped identify where specific learner groups could benefit from additional support. Further AI investigations may help identify patterns that would allow a reduction in the number of sensors required, particularly the use of human observers, so that this approach could be applied more generally in classrooms.
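To make the idea of grouping learners from sensor data concrete, here is a minimal sketch of how unsupervised clustering could separate learners into groups from clickstream-style features. The data, feature names, and the two-cluster setup are all hypothetical illustrations, not the actual method used in Prof. Luckin’s research.

```python
def kmeans(points, centroids, iterations=10):
    """Tiny k-means: assign each point to its nearest centroid, then
    recompute centroids; repeat for a fixed number of iterations."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            # Squared Euclidean distance to each centroid
            distances = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[distances.index(min(distances))].append(p)
        # Move each centroid to the mean of its cluster (keep it if empty)
        centroids = [
            tuple(sum(dim) / len(cluster) for dim in zip(*cluster)) if cluster else c
            for cluster, c in zip(clusters, centroids)
        ]
    return centroids, clusters

# Hypothetical learner features: (clicks per minute, seconds on task)
learners = [(2.0, 30.0), (2.5, 35.0), (9.0, 120.0), (8.5, 110.0)]
centroids, clusters = kmeans(learners, centroids=[(0.0, 0.0), (10.0, 100.0)])
# clusters[0] now holds the low-engagement learners, clusters[1] the rest
```

A teacher could then inspect the lower-engagement cluster and decide on an intervention, rather than spending time on the diagnosis itself.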
The final opportunity relates to learners themselves. Educating learners to become more adept at managing their own data, and at understanding how it could be used to demonstrate valued skills, offers significant potential. For example, imagine being able to respond to the interview question ‘can you give an example of how you worked well within a team?’ with a digital portfolio of your involvement in collaborative problem-solving exercises. Emerging technologies such as RedPanda, based on xAPI, and micro-credentialing based on blockchains perhaps reinforce Prof. Luckin’s argument that it is not about the technology but about developing the skills to use these tools to enhance our self-awareness. For learning analytics this would involve looking for the signifiers in student data that associate with expert understanding of teaching and learning, and using these as a prompt for greater understanding. For students it will be about empowering them to own, understand, and develop how this may increase their own potential.
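For readers unfamiliar with xAPI, the portfolio idea rests on activity records stored as simple actor–verb–object JSON statements. The sketch below builds one such statement for a collaborative problem-solving exercise; the learner email and activity URL are illustrative placeholders (the verb identifier is a standard ADL vocabulary entry), and this is not a description of RedPanda’s specific implementation.

```python
import json

def make_statement(learner_email, activity_id, verb_id, verb_name):
    """Build a basic actor/verb/object statement in xAPI's JSON shape."""
    return {
        "actor": {"mbox": f"mailto:{learner_email}", "objectType": "Agent"},
        "verb": {"id": verb_id, "display": {"en-US": verb_name}},
        "object": {"id": activity_id, "objectType": "Activity"},
    }

statement = make_statement(
    "learner@example.com",                                      # placeholder
    "https://example.com/activities/collab-problem-solving",    # placeholder
    "http://adlnet.gov/expapi/verbs/completed",
    "completed",
)
print(json.dumps(statement, indent=2))
```

A collection of statements like this, accumulated across a learner’s activities, is what could back the digital portfolio answer to the interview question above.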