The AI storm is here: Learning to swim in the AI flood
Authors
Dr Lubna Rizvi CMBE
Assistant Lecturer, College of Business and Law, Coventry University
Dr Lubna Rizvi CMBE issues a clear call to action: dismantle our outdated, AI-vulnerable assessment model. She champions a practical redesign built on mandatory vivas to assess understanding, the strategic return of handwritten exams to measure cognitive depth, and AI-integrated assignments that teach critical collaboration with technology. The goal is nothing less than a new system that measures the human thinking we truly value.
Let’s be honest. The past two years in education have felt less like a gentle transition and more like a Category 5 hurricane named Generative AI. And while we’ve been scrambling to build sandbag walls with plagiarism policies and AI detectors, we’ve missed the most critical point: the landscape of knowledge and skill has been permanently altered. The flood is here. It’s time to learn how to swim in it, not just bail out the water.
The data is staggering, and it tells a story of institutional paralysis.
As of mid-2023, a mere 3% of institutions had a formal student AI policy. Let that sink in. The most disruptive technology to hit education in decades, and 97% of us were without a map.
A review of top universities revealed a telling breakdown:
27% had no clear guidance. Radio silence.
51% left it to individual instructors. The "patchwork" approach that creates confusion and inconsistency for students.
18% banned AI by default. A defensive and, frankly, futile stance.
4% allowed it with citation. A small step in the right direction.
We’re so busy designing robotic measures to catch robotic cheating that we’ve forgotten to ask the fundamental question: Are we even assessing students correctly anymore?
On one hand, AI, machine learning, and robotics have revolutionised academia. They provide incredible apps and engagement tools, and assist with translation and summarisation, empowering students in ways we never dreamed of. On the other hand, our assessment model (submit online, run through Turnitin, grade, repeat) is crumbling.
Are we truly assessing learning, or are we just brushing the problem under the carpet?
We see the surface: assignments are submitted, they often look polished, and the system churns along. But beneath that smooth surface, a dangerous void is growing. Where is the real learning? Where is the proof of true understanding? When we penalise students for using the very tools that define their future workplaces, are we, the educators, doing our job justly?
I fear we’ve developed a "can't be bothered" attitude. The responsibility rests on our shoulders, yet we educators wait for policymakers to act, and the policymakers wait for us to demand change. This stalemate is a disservice to our students.
So, what do we do? It’s time to redesign, not just restrict.
The age of AI demands a mixed-methodology approach to assessment. We must move beyond the easily gamed, solitary online submission. We need to create a system that values process as much as product, and human intellect as much as AI output.
Here’s how we can start:
Re-introduce the human voice: make vivas mandatory.
For every significant project, thesis, or dissertation, a viva voce (oral examination) should be non-negotiable. There is no better way to assess deep understanding, defend reasoning, and ensure the student is the true architect of their work. You can’t outsource your voice to a chatbot.
Embrace the power of the pen: bring back written assessments.
While online is convenient, we cannot abandon the written word. A recent review from Oxford University, alongside other literature surveys, concluded that typed and handwritten exams are not equivalent. The evidence suggests:
Typed exams may lead to longer but less thoughtful answers.
Handwritten responses often reflect deeper cognitive processing and better capture critical thinking.
Written assessments reduce equity gaps, minimise distraction, and force a clarity of thought that typing can sometimes bypass. They are a vital tool for authentic evaluation.
Design AI-integrated, not AI-prohibited, assignments.
Let’s get creative. Set students the task of using an AI tool to produce a first draft, then have them critically edit and improve it, justifying every change. Teach them to use AI as a brainstorming partner or a research assistant, not a ghostwriter. Assess the process of collaboration with the tool, not just the final output.
The storm of AI isn’t a threat to education; it’s a catalyst. It’s forcing us to shed outdated practices and finally build an assessment model that measures human ingenuity, critical thought, and articulate defence: skills that no machine can truly replicate.
The waiting is over. The responsibility is ours. Let’s stop being gatekeepers and start being architects of a more resilient, relevant, and human-centric future for learning.