Assessment in the Age of AI


With AI adoption accelerating faster than schools and trusts can adapt, the question is no longer "how do we stop students using AI?" but "how do we fairly assess learning in an AI-driven world?" This is the focus of Gary Henderson's latest article.

Artificial intelligence is now part of the everyday world, including education, where it has seen rapid development. Tech companies race towards Artificial General Intelligence (AGI) in search of the riches it promises, irrespective of the risks and harms that may result. From chatbots capable of producing fluent essays to tools that can create polished presentations in seconds, AI's arrival has sparked an understandable wave of concern in schools. Most of this concern, however, revolves around one core fear: if AI can produce work for students, how can teachers trust what is submitted? Yet this very question reveals a deeper truth about assessment, one I suspect has been overlooked for too long, partly because it is uncomfortable and partly because of the established approaches currently in place.

Assessment was never really about the production of a piece of written text, a slide deck, or a neatly formatted report. These are merely artefacts: the outputs of the thing we actually want to measure, which is learning. The real purpose of assessment is to determine whether students have actually understood or learned from what was taught or from the learning activities they have undertaken.

AI Can Produce Work

A student can now ask an AI tool to write an essay on photosynthesis, generate a Shakespearean analysis, or create a PowerPoint on climate change. The resultant product may be very good, but it doesn't prove anything about the student's conceptual understanding of the topic at hand. This is precisely why AI, despite its capabilities, is not the threat to assessment that some fear. If a form of assessment can be completed convincingly by AI without the student needing to understand the content, then the assessment was never effective in measuring learning in the first place.

A Chance to Re‑Examine Assessment

The presence of AI in the classroom creates an opportunity to rethink long‑standing assessment habits. We have relied on written submissions because they are practical, scalable, and easy to store or mark. I often joke about finding life on Mars and simply bundling some pens, paper and exam papers onto a rocket for the newly found Martians to undertake their GCSEs.

If the aim is to understand student learning, then assessment should focus on approaches that surface student thinking more directly. This might include oral questioning, discussion, problem‑solving in real time, or demonstrating understanding through practical application. University viva‑style conversations, brief and structured dialogues in which learners discuss what they have produced, offer one such example. In these scenarios, the written artefact can still play a role, but it is no longer the sole piece of evidence. Instead, it becomes a starting point for deeper probing.

Far from undermining academic integrity, this shift strengthens it. If students know they will be asked to explain, justify, or extend the ideas within their submitted work, the incentive to rely entirely on AI disappears, and the need to engage more deeply with the learning at hand increases. More importantly, teachers gain a clearer, more direct window into students' understanding.

AI as a Learning Partner, Not a Shortcut

It is also crucial to recognise that students can learn through using AI when the interaction is thoughtful and active rather than passive. Many current concerns stem from an assumption that students will simply ask AI to "do the work" for them. For some students this may be exactly the case, but it is only one mode of use, and arguably the least educational. A more powerful and pedagogically rich use of AI is as a co‑creator or critical partner. Students can generate ideas and then refine them, or they might ask AI to critique their arguments or to challenge their assumptions. They might ask AI for feedback or recommendations, or ask AI to question, explore, or extend their understanding. If, rather than asking the AI to do the work, students co-create the work with AI, through dialogue and exploration of the topic, then deep and meaningful learning can be achieved. It is for schools and educators to teach students this: how to use AI positively, ethically, and responsibly.

Towards More Authentic Learning Evidence

To make this all possible, assessment practices need to evolve, and this may require a cultural shift within schools: more emphasis on in‑class demonstration of learning, more opportunities for student dialogue, and a stronger integration of formative assessment. It also means recognising that written work, while still valuable, cannot stand alone as the definitive measure of understanding in an age where text can be generated instantly by a machine.

Conclusion

AI has not made assessment impossible; it has made outdated assessment practices impossible to ignore. If the real purpose of assessment is to understand what students have learned, not simply what they have submitted, then AI becomes less of a threat and more of a catalyst for change.

Rather than asking how we can stop students using AI, perhaps the better question is:

How can we design assessment that more accurately, authentically, and confidently brings student understanding to the surface?
