This Teacher’s Journal: Blog Post 8 | March 7, 2025

This week, my students’ unscripted screen recordings revealed something unexpected. Had skeptics become converts? Had AI become a tool for deeper inquiry? But I also reflected on a new challenge that might be emerging: could the ways we teach and work with AI create a new educational divide?


From Performativity to Inquiry: How Student-Recorded Metacognition Changed Everything

In last week’s blog, I reflected on the challenges my Year 9 students faced in their first deep engagement with AI as part of their research assignment. I observed three broad groups: those comfortable using AI as a thinking tool, those who overthought every step, and those who (appeared to) underthink – treating AI as a shortcut rather than an inquiry partner. This week, I was uncertain what to expect from the latter two groups as they completed a check for understanding. I suspect I feared ‘more of the same’.

Instead, what I saw surprised me.

Despite a school week disrupted by Tropical Cyclone Alfred, all my Year 9 students submitted their required Teams Assignment checkpoint – a progress marker in their research assignment. The checkpoint required them to submit a screen-recorded, unscripted ‘show-and-tell’ of their research process so far, and it was heavily scaffolded for the students in both their Class OneNote and the Teams Assignments platform.

The results were nothing short of a revelation.


The Power of Student-Recorded Metacognition

The unscripted nature of the screen recordings at this progress checkpoint provided a level of transparency and insight into student thinking that is not always captured in traditional assessment. In an AI age, it is increasingly important to treat assessment as a process rather than a product. Such screen recordings are a powerful way of capturing students’ process of learning.

They give insight into what students ‘are doing’, not just what students ‘have done’.

Through these recordings, I was able to witness the authentic cognitive processes of every student: how they annotated instructions and class materials, how they engaged with source material, how they linked new concepts into existing frameworks and learning, how they were structuring their approaches to independent research, and – critically – how they worked with AI as a thinking and learning tool.

Using an unscripted screen recording of students’ work privileged their authentic, unpolished work-in-progress. The checkpoint was a real-time snapshot of their thinking – warts and all. It captured their uncertainties, hesitations, reflections, insights, problem-solving, and, for some, genuine lightbulb ‘ah-hah’ moments.

Wariness, Skepticism, Trust, and the ‘Doubting Thomases’

One of the most striking changes this week was in my reading of those students referred to in last week’s blog as ‘Uncomfortable Underthinkers’ – those who, in last week’s lesson, seemed disengaged, hesitant, or outright resistant to deep inquiry and the use of AI.

On the basis of this week’s evidence, it appears that I had misread (at least) some of these students. Perhaps a number of them were simply wary learners – what I might now call ‘sensible skeptics’ or ‘doubting Thomases’.

Perhaps this particular group of students had been burned before – forced into learning tasks that felt like busywork, or conditioned to believe that performative engagement (completing the task rather than truly thinking) was ‘enough’. My desire to ‘do more’ in the teaching of history is likely to be challenging for students who have previously succeeded – at least when success is measured narrowly in grades.

Perhaps my Comfortable students were actually a group of confident and trusting early adopters / first followers in learning and technology?

Perhaps my so-called Uncomfortable Overthinkers and my Uncomfortable Underthinkers were simply sitting on the other side of a chasm of confidence and trust?

[Image: the technology adoption life cycle. Adapted from Sinusoid, D. (20 October 2021), ‘Crossing the Chasm: Technology Adoption Life Cycle’.]

On this occasion, I had been demanding of them; I had challenged them to become uncomfortable. No matter how well scaffolded and ‘warm’ my relationship, I was asking them to do something different when they had prior success in assessment. I was asking them to embrace new ways of working that took them outside the familiar. I was requiring them to develop skills in learning via a hybrid model in which human intelligence (HI) and artificial intelligence (AI) collaborate.

Through their screen recordings, I saw that many of these students had, in effect, tested my claims. In class they had approached the task with doubt, but in private they took risks and trusted my approach, scaffolding, and advice. They rose to the challenge of the new. By the time they sat down to record their progress at this checkpoint, their tone had (universally) changed. They had seen, for themselves, the value of iterative AI engagement, lateral reading, and structured prompting. They weren’t just using AI as a ‘shortcut’; they were questioning, refining, and – crucially – thinking.


AI as a Thinking Tool, Not a Shortcut

Another striking observation this week was the way students had evolved in their AI usage. Whereas last week some saw AI as a shortcut – an ‘answer machine’ – this week’s recordings showed a shift: AI was becoming a thinking tool.

Students demonstrated:

  • The ability to develop complex yet structured prompts based on ‘thinking routines’ / ways of working.
  • A growing mastery of iterative multi-turn chat strategies – refining AI responses, asking for clarification or elaboration, and challenging/critiquing generated content.
  • The ability to combine their use of AI with lateral reading – comparing AI-generated responses to external sources and identifying nuances between ‘wrong’ answers and ‘imprecise’ or ‘ambiguous’ ones.
  • A growing confidence in direct prompting – realising that they could be blunt, specific, and demanding with AI, using natural language rather than the stilted style of engagement that works best in a more familiar search engine.

One student, surprised at the directness with which natural language could be used with AI, pushed her chosen platform for more helpful responses. Critical of the verbose and complex response she received, she simply said ‘Dumb it down for me’ as one of her iterative multi-turn chat instructions. Another wondered aloud whether using ‘please’ and ‘thank you’ in prompts improved the quality of the response. Not only was she questioning whether AI required (or deserved) human etiquette – a question that raises interesting discussions about the dangers of anthropomorphising AI – but also how best to engage with AI in pursuit of a productive and helpful response. (For more on this, see Please Be Polite to ChatGPT.)
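
For readers curious about what this kind of iterative multi-turn exchange looks like under the hood, here is a minimal sketch using the OpenAI Python client. The model name and prompts are illustrative assumptions only – my students worked in ordinary chat interfaces, not code:

```python
# A minimal sketch of an iterative multi-turn chat. Assumes the OpenAI
# Python client and an OPENAI_API_KEY in the environment; the model
# name and prompts below are hypothetical.
from openai import OpenAI

client = OpenAI()


def ask(history: list[dict], prompt: str) -> str:
    """Send one more user turn, keeping the full conversation history."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


history = [{"role": "system",
            "content": "You are helping a Year 9 history student with research."}]

# Turn 1: a structured opening prompt built from a thinking routine.
ask(history, "Summarise the main causes of the event I am researching, "
             "then list what a careful historian would still question.")

# Turn 2: challenge and critique the generated content.
ask(history, "Which of your claims above are contested or uncertain?")

# Turn 3: blunt, direct natural language to refine the register.
print(ask(history, "Dumb it down for me."))
```

The point is not the code itself but the pattern: each turn carries the whole conversation forward, so the student can refine, challenge, and simplify rather than settle for the first answer.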


A Bigger Question: Will AI Widen Educational Inequities?

A final thought. One student’s reflection gave me particular pause. She noted that AI wasn’t ‘wrong’ when she used it at this stage of her assignment. When fact-checking the output of her engagement with AI, her task wasn’t simply to identify what was ‘right’ or ‘wrong’. She had seen the ambiguities and lack of clarity in some AI-generated responses, and she observed that some of the apparently accurate responses were at times misleading. She stressed the need to reflect on an AI response and, at times, to dig more deeply into it.

She noted that the responses she obtained through careful construction of prompts and her iterative multi-turn chat were, overwhelmingly, ‘right’… but that they often needed further elaboration, nuance, clarification, and complexity before they could be considered ‘the full story’.

Teaching students to ‘uncover the full story’ takes time and hard work. It requires challenging a social desire for quick and convenient answers. Deep and sustained reflection on learning seems counter-cultural in a world of education that too often rewards the superficial or performative.

This raises a critical issue: could a new educational divide emerge as some schools teach robust skills and techniques for engaging critically with AI while others do not? Could society split between the AI-enabled and the rest?

Will some students – those explicitly taught to work in ways that combine their human intelligence (HI) with AI in a blended hybrid / co-intelligence model of pedagogy – become privileged through their use of AI as a ‘cognitive amplifier’? Meanwhile, will others, without this structured training, fall back on AI as a shortcut, producing shallow, performative work with little inquiry?

Perhaps we need to be wary of a new form of educational privilege – one in which those who learn to question AI, refine its responses, and integrate it into deep inquiry have access to richer and more powerful learning experiences than those who don’t.

As educators, we need to work to ensure that AI is a tool for the benefit of all, not just another digital divide.

