A teacher’s review of Lodge & Loble on AI, cognitive offloading, and what we must protect in learning
Over the past few years, there’s been a dramatic shift taking hold in classrooms. It’s not just that students can now “get help” faster via AI. It’s that help can arrive as a fully formed performance: fluent, confident, neat, and (often) persuasive.
The “illusion of mastery” is when AI-generated work feels so fluent and polished that students (and teachers) mistake it for – and present it as – genuine understanding. A learner can create the illusion by producing an impressive output while the underlying cognitive work of building knowledge, making meaning, and verifying claims hasn’t actually happened. Where there is an illusion of mastery, the “mastery” is performance rather than durable learning.
Lodge and Loble’s 2026 report discusses this issue with clarity. The report explores the risk that AI can interfere with the cognitive processes of knowledge construction – the very processes that build long-term memory and the foundations of critical thinking.
AI doesn’t just change what students can produce. It changes what they might no longer need to do.
The key distinction: what to offload… and what never to offload
To me, the report’s central takeaway for K-12 educators is the nuance it brings to how and why students can and should use AI in the learning process. The authors frame a deceptively simple distinction:
Detrimental cognitive offloading (outsourcing) is when AI bypasses the intrinsic cognitive effort – the “desirable difficulties” or cognitive friction – required to build long-term knowledge schemas.
Beneficial cognitive offloading is when AI reduces extraneous load (they use the example of grammar checking), freeing working memory so learners can focus on essential, intrinsic tasks.
This is not a simplistic “AI: good or bad?” or “Learning: hard or easy?” question. When students have access to AI – and they do – the question is which struggles should be handed over to a technology and which should be retained by the learner.
This is where I’ve started using a phrase that’s become central in my own thinking: Teachers must maintain appropriate levels of cognitive friction. Learning experiences in our classroom should not be frictionless. They should not be unnecessarily abrasive. It’s incumbent upon teachers to find a pedagogical ‘sweet spot’.
The job of teachers is to help students use AI in ways that reduce the unnecessary friction – the confusing instructions, the surface-level busywork, clunky workflows – so that students have the capacity to engage in the necessary friction that matters:
- the challenges of retrieval
- the struggle of explanation
- the puzzlement that comes with interpretation, analysis, and evaluation
- the grappling required to make judgements about verification, reliability, and perspective
If AI removes the wrong kind of difficulty, the experience of education becomes performative. Students may look more capable in the short term while becoming less capable over time.
The performance paradox: when outputs improve but learning decays
Lodge and Loble describe what they call a performance paradox.
Unstructured AI use trends toward detrimental offloading. Students’ short-term performance improves, while durable, long-term learning is harmed.
Why does this happen? Because AI delivers something that schooling has often rewarded: polished output. And that polish can create an illusion of competence, encouraging a metacognitive laziness – where students abdicate the generative effort required to build deep knowledge.
This is why I keep returning to a blunt classroom truth:
If AI makes performance cheap, then effort becomes precious.
Desirable difficulties and the role of the teacher
The report’s emphasis on desirable difficulties matters deeply for K-12. Learning must be accessible, yes. But it must also remain challenging in the right ways. Desirable difficulties are the productive struggles that help learning stick. When AI bypasses that struggle, it can hollow out the very growth we’re aiming for.
This is where the teacher’s work becomes even more important – not less.
In the age of AI, one of our central roles is to:
- keep the learning process visible
- keep the struggle purposeful
- keep the student cognitively present
That’s what I mean by maintaining cognitive friction.
The report points to Load Reduction Instruction (LRI) as one promising pathway towards maintaining cognitive friction.
In plain teacher language, LRI is explicit teaching designed to manage cognitive load so students can succeed with complex learning without removing the thinking.
It involves:
- Reducing extraneous load (unnecessary complexity, clutter, confusion)
- Sequencing intrinsic load (step-by-step growth from worked examples through guided practice to independent performance)
- Providing scaffolds that fade (support is temporary; independence is the destination)
- Keeping the generative work with the student (AI supports; it doesn’t replace)
A simple line I’ve found useful:
LRI isn’t about making learning easy. It’s about making the right parts hard — and the wrong parts easier.
The Matthew Effect
Lodge and Loble flag the Matthew Effect (with AI) as something for teachers to be aware of. The effect is essentially this: Students who already have strong domain knowledge and metacognition – our high achievers – are more likely to use AI to accelerate and amplify their learning. Students without those foundations are more likely to outsource and fall further behind. This observation resonates strongly with my own school-based research.
This is where “know your learners and how they learn” is not just an AITSL principle to teach by but a non-negotiable in the AI-infused era. This idea is a central tenet of my soon-to-be-published peer-reviewed paper – The Bubble and Burner Model of AI-Infusion: A Framework for Teaching and Learning (DOI: 10.1080/23735082.2026.2672370, published through the journal Learning: Research and Practice). I draw upon the work of Dreyfus and Dreyfus on skill acquisition to make the point that teachers cannot rely on the assumption of homogenous, ‘one-size-fits-all’ teaching implicit in the industrial model of education.
In the AI age, we cannot assume who is a novice and who is an expert in our classrooms. We only discover it through relationship and careful formative practice – the everyday work of knowing students and how they learn.
And that means:
- checking for understanding frequently
- building cultures of transparency
- constructing chains of evidence of learning, not one-off polished products
In other words, equity in the age of AI is not primarily a tools issue. It’s a pedagogy-and-relationships issue.
The solution is pedagogical, not technological
One of the report’s most hopeful claims is also one of its most practical:
The impact of AI is not primarily technologically deterministic; it is pedagogical. Structured interventions can foster self-regulated learning, critical thinking, and deep engagement.
That aligns exactly with my own stance: we don’t start with the tech. We start with purpose, values, and the kind of human we are trying to form. Then we ask: What might be offloaded — and what must remain human work?
In my forthcoming article, I’ve tried to give teachers a conceptual way to locate themselves within the learning process when AI becomes a relational presence. The Bubble and Burner model frames the teaching and learning process as a complex interchange:
- Bubbles of varying sizes representing the ways AI is used within a learning space where AI is not only present but responsive and useful
- a Burner representing the teacher’s regulatory role – controlling the “heat” of learning, maintaining cognitive friction, and ensuring AI becomes an amplifier rather than a substitute
This conceptualisation matters because AI is not just another tech product being switched on and off. And that means teachers must remain active designers of the learning ecology.
A snapshot: What this looks like in my classroom now
This is where the report’s argument meets the reality of my year 8 History classroom.
“Think First!”
Before AI enters the task, students engage in a first-pass engagement with historical sources using styluses within their individual OneNotes. At this stage they undertake ‘THINKING’ through:
- initial guided individual interpretation, analysis and evaluation of historical sources, and
- annotation of the sources with any observations and any questions they have.
This protects desirable difficulty. It keeps students cognitively present.
They then ‘PAIR’, working collaboratively with a partner or partners on the same task, using the same sources printed on A3 paper. Students are required to collaborate at this stage without access to their notes in their individual OneNotes.
Each group then ‘SHARES’ aspects of their combined work in a full class discussion of the source led by the teacher. At this point, additional exposition of content and discussion of student observations and questions are shared.
After these three phases, students return to their OneNotes, update their unit content schema (mind maps), and consolidate their notes as a record of the learning.
Only after the initial thinking and pairing stages is AI use allowed and, even then, within limits and parameters which privilege the intent, purpose and meaning of the learning. Students are explicitly taught to maintain cognitive friction.
Transparency around AI use is a norm of the classroom culture: it makes learning visible.
Chains of evidence of student learning are built within this process. Rather than focusing on producing a single answer and arriving at a common endpoint, this approach privileges recording:
- first thinking (pre-AI)
- rough drafts / rough interpretations
- feedback cycles (including AI critique)
- revisions with commentary
- final synthesis
It’s an anti-performativity stance: process matters because learning is a process. Within this process AI can act as a tutor and as a study buddy but we focus on the importance of human effort and of our own thinking, our own human voice.
With my Year 8s using Copilot, I’m structuring AI use after students engage in human-first routines and frameworks that foreground judgement and verification.
Two of these are:
- ADAMANT RUP (a deliberate pre-AI engagement structure)
- ILLUSION (a framework to interrogate AI output and resist false mastery)
I’m also using thinking routines such as 2× “I” statements and Verb and Plus 3 (C-E-C) to help students unpack historical sources — but those routines only do their job when students have already done the first human work of reading, noticing, and questioning.
I’ll unpack these frameworks and routines more fully in coming posts.