Dealing artfully with artificial intelligence

Assessing students’ work can be a difficult business. It is, of course, a critically important part of what schools do – both formatively, to help students improve, and summatively, to assign an attainment level. Like many aspects of most professions, it sounds quite easy until you look hard at the details – but ensuring that exactly the same standards apply to hundreds of very different students, with multiple teachers of varying experience, across disparate subjects, consistently over many years, takes a great deal of time and effort. And it’s never perfect.

We can ask – can technology help? That question is, of course, part of a much wider conversation that has been taking place right across society for many years, especially with regard to Artificial Intelligence (AI), and the current explosion of interest in AI really does appear to offer something new. The falling cost of extraordinary computing power, the potential of the cloud, the ubiquity of smart devices, and social media as an agglomerator of otherwise distributed data are all coming together to form something genuinely different. We have seen phenomenal success in limited tasks, where AI already exceeds human capabilities: in 2018, for example, a Stanford-developed AI was able to diagnose 14 types of medical condition from chest X-rays, exceeding human diagnostic accuracy for pneumonia (Rajpurkar et al.). So can AI help us assess students?

As soon as we get beyond marking multiple-choice papers (the most limited, basic form of assessment), the current answer seems to be no. AI is not yet able to mark essays to the same degree of accuracy or reliability as humans (let alone performances or projects). But the thought experiment is an interesting one; when I was Chief Assessor for IB TOK (Theory of Knowledge) many years ago, we had this debate: if AI could mark essays as reliably as humans, would we accept the marks?
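Before going further, it is worth pinning down what ‘as reliably as humans’ would mean in practice. Awarding bodies typically have a sample of scripts marked twice and measure how closely the two sets of marks agree, often allowing a small tolerance; exactly the same test can be applied to a machine marker. A minimal sketch, using entirely hypothetical marks:

    # A minimal sketch of comparing marking reliability (hypothetical marks).
    # The same scripts are marked by a human examiner and by a machine,
    # and we measure how often the two sets of marks agree.

    human_marks   = [6, 5, 7, 4, 6, 3, 5, 7]   # examiner's marks
    machine_marks = [6, 4, 7, 5, 6, 3, 4, 7]   # hypothetical AI marks

    def agreement(a, b, tolerance=0):
        # Fraction of scripts where the two marks differ by at most `tolerance`.
        matches = sum(1 for x, y in zip(a, b) if abs(x - y) <= tolerance)
        return matches / len(a)

    print(agreement(human_marks, machine_marks))               # exact agreement: 0.625
    print(agreement(human_marks, machine_marks, tolerance=1))  # within one mark: 1.0

If a machine matched senior examiners at least as closely as trained human markers match one another, then on this measure it would be marking ‘as reliably as humans’.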


The question raises many interesting and difficult issues – and at the centre of them sits the notion of computer algorithms, the sets of instructions designed to solve problems that are encoded in AIs. The potential biases that can be built into poorly designed algorithms – racist and sexist ones, for example – are well documented and profound; so much so that movements for algorithmic justice have had to develop in response. But let’s imagine these problems can be solved, or at least addressed so that the machines are less biased than humans. Would we accept machine marking then?

Most people, when I’ve asked that question, have an instinctive reaction one way or the other. Some respond “yes: if it’s more accurate than humans, how could we not use it?”, while others argue that the human-judgement aspects of assessment should never be squeezed out. There are strong feelings on either side; perhaps this is as much a matter of taste as one of truth, and reasonable people can and will differ.

We can, of course, look more broadly than student assessment and consider the impact of AI more generally. The European Commission’s recently published report The Impact of Artificial Intelligence on Learning, Teaching, and Education argues that AI may enable new ways of teaching and learning. This is an exciting prospect, especially if we can find ways to ensure that AI enriches the human capacity to learn as much as writing, books and digital technologies have done.

The authors neatly state the problem of algorithms: “As AI learning algorithms are based on historical data, they can only see the world as a repetition of the past.” They nevertheless remain very optimistic about the possibilities for AI: “Many interesting things will happen when already existing technologies will be adopted, adapted, and applied for learning, teaching, and education. For example, AI may enable both new learning and teaching practices, and it may generate a new social, cultural, and economic context for education.” It is possible to imagine many exciting possibilities for teaching.
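The point about historical data is worth making concrete. A learning algorithm trained on past marks does not know which patterns in those marks are fair and which are not; it simply reproduces whatever it finds. Here is a deliberately oversimplified sketch, with entirely hypothetical data, in which past examiners systematically under-marked one group of students:

    # An oversimplified sketch (hypothetical data) of how a model trained
    # on historical marks reproduces historical bias: past examiners
    # systematically under-marked essays from "group_b".

    historical_marks = [
        ("group_a", 7), ("group_a", 6), ("group_a", 7),
        ("group_b", 5), ("group_b", 4), ("group_b", 5),
    ]

    def train(data):
        # "Learn" the average mark historically awarded to each group.
        marks_by_group = {}
        for group, mark in data:
            marks_by_group.setdefault(group, []).append(mark)
        return {g: sum(ms) / len(ms) for g, ms in marks_by_group.items()}

    def predict(model, group):
        # The prediction can only repeat what the past data shows.
        return model[group]

    model = train(historical_marks)
    print(predict(model, "group_a"))  # about 6.7
    print(predict(model, "group_b"))  # about 4.7: the same essay scores lower

Real systems are vastly more sophisticated than this, but the structural point stands: whatever patterns exist in the training data, fair or unfair, are exactly what the algorithm will project into the future.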


This theme of transformation has become commonplace, but the report goes a long way beyond the familiar and relentless technological drumbeat that seems to dominate my inbox. It is hard to argue with the systemic realism expressed here:

“There will be great economic incentives to use AI to address problems that are currently perceived as important… for educational technology vendors it is easy to sell products that solve existing problems, but it is very difficult to sell products that require changes in institutions, organisations and current practices. To avoid hard-wiring the past, it would be important to put AI in the context of the future of learning.”

This is the generalisation of the problem of algorithms: far from creating new futures, AI may lock us into the past. “As AI scales up, it can effectively routinize old institutional structures and practices that may not be relevant for the future. In particular, many… teacher tasks might be automated. However, this is based on the assumption that the role of teachers is rather mechanical and purely instructional with summative assessment playing a central role.” As many educators in many systems realise, the social, values-based and skill-based elements of education are at least as important – but as these cannot (currently) be captured by AI, they may be ignored. The danger highlighted in the report is the chilling one that, as a result, “there is a risk that AI might be used to scale up bad pedagogical practices… Instead of renewing the system and orienting it to the needs of a post-industrial economy and knowledge society, AI may increasingly mechanise and reinvent outdated teaching practices and make them increasingly difficult to change.”

This has the ring of truth about it, and we’ve seen the same thing elsewhere – you might argue, for example, that the new gig economy has done more to create an ‘underclass’ than to provide flexibility and opportunity. As educators and parents, therefore, we need clear and explicitly stated visions and policies that put emerging technical possibilities in the broader context of what we want for our students, children and future citizens. Schools of vision have always had these, and have always been tinkering to innovate and improve.

The big takeaway is that creating concrete experiments in an authentic context, with teachers and experts in education, is now not just a nice-to-have but a critical necessity.
