This post was inspired by a powerful line from Divya Ganesan’s recent article in Education Week (“The One Thing This Student Will Never Ask AI to Do”). Her reflections capture something essential about what it means to think in the AI age – and what it means to teach.

“I use AI to polish my grammar and formatting – ChatGPT catches my pesky capitalisation mistakes and cleans up awkward phrasing. But here’s my golden rule: Never, ever let it generate ideas. I’ve tried, and every time, the results feel hollow – like words without real thought.” (Divya Ganesan, Stanford senior student)

This quote is a reminder, in the clearest possible terms, that we’re not just facing a technological shift in education – we’re confronting a deep pedagogical challenge. Ganesan doesn’t reject AI – she uses it purposefully and with clarity. She uses it to help her express her ideas more effectively and more powerfully.

Her reflection reminds me of a recent observation by Jason Lodge of the University of Queensland. His research indicates that when we use generative AI we’re often no longer working independently. We’re “engaging in collaboration, turning traditional individual tasks into de facto group assignments. It just so happens that the other group member is a technology, not a human”.

“Unlike previous educational technologies that functioned purely as tools and were used transactionally, generative AI simulates a level of agency that blurs the line between tool and peer. Increasingly, when students interact with AI systems, they aren’t just using a resource but participating in a collaborative exchange of ideas, prompts, and refinements.” (Jason Lodge)

What’s promising in Ganesan’s reflection is her realisation of the need for discernment and self-regulation. When we write with AI and learn with AI, we need to have the self-regulatory skills to know where to draw the line. Her refusal to let AI generate ideas is not about following a rule – it’s about preserving integrity, voice, and the deeply human act of thinking for oneself.

This kind of discernment – the ability to decide when to use AI and when to step away from it – is rapidly becoming one of the most important skills our students can learn. (Notably, it’s also central to how senior history is already framed in Queensland. The QCAA Year 11 and 12 Modern History syllabus explicitly calls for students to be discerning users of evidence, sources and interpretation. In the age of AI, that descriptor takes on even more weight.)

If we don’t actively support students in developing this capability, we risk raising a generation of pseudo-learners, grade-producers, and assessment-completers who can produce fluent text but haven’t engaged in any meaning-making or genuine thought.


Pedagogical Adaptation: From Tool Use to Thoughtful Use

In my research into a renewed and technology-infused history pedagogy, I have been exploring how classroom practice must evolve in ways that empower student agency – civic, global, and personal. In my view, we can no longer think of digital tools as optional add-ons to traditional models of schooling. Instead, we need to reimagine learning processes. Generative AI technology has provided students (and teachers) with a highly accessible tool for a new hybrid form of learning in which meaning can be co-constructed. As educators, we need to focus some attention on shaping learning as a process in which students work with AI as collaborators, while retaining control over their thinking, questioning, and conclusions.

We are already far beyond the question “Should students use AI?”

The real question now seems to be “How do we best enhance learning in ways that leverage this ubiquitous new technology?”

I suspect that one of the first significant casualties of the new technology will be assessment that is, by and large, performative.

Ganesan frames this issue with compelling clarity:

“If an AI-generated essay earns an A in a class, that’s not just a problem with AI use; it’s a problem with the assignment itself.”

Scott McLeod at the University of Colorado captures the problem amusingly in a diagram.

Ganesan is right. If a bot can complete a task and receive top marks, then the task isn’t sufficiently engaging students in complex, original, or ethically rich thinking. In history education, that’s an opportunity – not a crisis.

McLeod is right. If the work we set for students lacks true meaning – for both teachers and students… If the work is purely performative… If the work lacks true connection to the human experience of students… We can hardly be surprised if it is completed in ways that are performative. In ways that lack a true connection to student learning.

If we genuinely believe that learning is about the construction of meaning, then we’re confronting a deep pedagogical challenge. We must engage with this challenge.

As I write this, I am aware that many Australian history teachers are in the midst of grading student papers. If you’re a teacher bemoaning the impact of AI on student work, then perhaps this is a stark observation to consider.

If AI can complete a task and receive strong marks, then the task isn’t sufficiently human. It isn’t sufficiently engaging students in a process of complex, connected, ethically rich learning.

In history education, that’s an opportunity – not a crisis.

It’s time to reimagine the ‘Big Six’ historical thinking skills for our classes… and our assessment.


Revisiting the Big Six for the Age of AI

Despite our confrontation with a deep pedagogical challenge, Seixas and Morton’s Big Six historical thinking concepts – historical significance, evidence, continuity and change, cause and consequence, historical perspectives, and the ethical dimension – remain foundational to rigorous and agentic history education in Australian classrooms. They are deeply embedded into the assessment criteria in both state and national curriculums. However, in an era where students are engaging with content through AI-enhanced tools, we need to revisit how these concepts function not just as disciplinary lenses, but as frameworks to build discernment, student voice and agency, and digital literacy.

Each concept continues to offer powerful entry points for inquiry – but now they must also support students in thinking critically with and about AI-generated historical knowledge.

Historical Significance

AI can quickly make statements that purport to identify what has been deemed “important” across its datasets – but historical significance is a conversation that needs to be explored in the most human way. In the teaching of a unit, we must give appropriate attention to the questions: “So what?” and “Why does this even matter?” We need to help students to make sense of the information that they engage with in ways that are authentic and connected to their lived experience. It may well be that questions of historical significance relate to individuals’ experiences within the class, the experiences of their families and communities, or global events.

We need to equip students to ask: By what criteria are we judging historical significance?

Students might be encouraged to:

  • Challenge AI-suggested responses by asking:
    • Questions directly related to their authentic and human lived experiences.
      • Consider the use of the two I-statements and a verb approach, followed by a multi-turn chat with the AI tool.
      • Try this possible prompt: “I am a year 9 student in Australia. My family has lived in Australia since migrating here in the late 1970s. I am of Vietnamese heritage. I sometimes feel disconnected from studying the history of the Australian experience during World War 1. It seems a very Anglo-Aussie history to be studying. It doesn’t feel like something that relates very much to me. List for me five possible ways that this Australian experience in World War 1 is historically significant in the life of my family and me.”
      • You can see an example of this prompt and a multi-turn chat related to it HERE.
    • Questions that raise the issue of the perspectives of others:
      • In what way is this issue significant for different groups or individuals? [Name specific groups or people related to the study. Consider the varieties of possible ‘takes’ on this question that may be influenced by personal experiences, ethnicity, gender, nationality and so on.]
  • Investigate silences or marginalised narratives absent in AI summaries
  • Explore how significance shifts across time, place, and cultural lens

In doing so, students practise their own judgement, their discernment, about what deserves historical attention – and why.

Evidence

AI is not a search engine. It can be incredibly powerful when engaging with historical evidence. When the use of AI is paired with ‘human intelligence’ (HI) and appropriate foundational knowledge, understandings, and skills, we can now do far more with sources than many history teachers imagine.

With the right prompts and scaffolds, AI can assist in interpreting, analysing, evaluating, and comparing sources. It can identify perspectives. It can translate. It can summarise. It can explain. It can do all of these things at a level beyond what ‘the typical’ high school student is likely to manage alone.

Whether the tools can do these things is not in question.

How well the tools do these things, however, depends heavily on the user’s skill in using them, the user’s historical knowledge and understanding, and the user’s historical skills. Users need to become more skilled with the AI tools at their disposal – and then to further develop their skills of discernment and self-regulation.

If we teach students the skills of working with evidence without a wider civic and personal purpose, then student work will be performative. It will “feel hollow – like words without real thought” – empty of substance.

When we work with evidence using AI, we need to teach students to find and draw the line – and why. Teach them to be discerning. While AI can now generate (high quality) responses, students need help to see the ability to use evidence to inform action as a valuable and important human skill that is connected to their own world. They need to see working with evidence as connected to acting with integrity, to preserving and/or amplifying voice, and to the deeply human act of thinking for oneself.

When paired with frameworks like ADAMANT or integrated into thoughtful inquiry design, AI can support students in becoming more critical and independent evaluators of historical evidence, but they must still bring their own critical judgement to the table. AI can surface patterns, identify features, highlight inconsistencies, surmise perspectives, and even offer plausible inferences – but it is the human learner who must construct meaning. It is the learner who must make sense of the evidence and decide what is valuable, what is trustworthy, and what is worth further investigation.

Rather than bypassing the skill of source analysis, AI – when used discerningly – can actually deepen it. It can function as a dynamic thinking partner: asking different questions, proposing alternative readings, and modelling habits of historical interrogation. When embedded thoughtfully into the inquiry process, it encourages students not to outsource thinking, but to extend it.

Students might use AI to:

  • Apply a structured lens (such as ADAMANT) to identify key aspects of a source, such as authorship, audience, and motive.
  • Compare two sources on the same event or issue to evaluate reliability or bias. (Some more detailed tips, ideas and discussions are available HERE.)
  • Test their own interpretations by asking AI to offer counter-readings of a source
  • Explore how similar sources have been interpreted in different historiographical contexts

Some quick examples of how AI might help students to engage with evidence are discussed in a previous blog post.

Continuity and Change

AI excels at identifying patterns across time – but understanding what matters in those patterns requires discernment. That decision is a historical judgement. That’s where the teaching energy and effort – the human attention – must go!

Students should be encouraged to:

  • Connect patterns of continuity and change to their own experience – and those of their families and communities.
  • Make value judgements and ethical judgements on the nature of changes and continuities.
  • Question AI’s tendency toward generalisation or narrative closure.
  • Explore how specific individuals, groups, communities and perspectives experience continuity or change differently.
  • Reflect on the nature of progress. Is progress linear? How, why and when does change happen? How might we bring about change in our world?
  • Consider how digital tools might reinforce linear or progressivist models of history.

By doing so, students learn to interpret historical patterns in a more critical, situated way… and bring the deeper lessons of life into their world.

Cause and Consequence

AI tools are now capable of generating detailed and layered causal explanations – from mapping event sequences to hypothesising on short- and long-term effects. When prompted effectively, these tools can simulate a kind of historical reasoning that mirrors the structure of scholarly explanation.

However, the strength of this process still relies heavily on the human intelligence (HI) guiding it.

Students can use AI as a co-investigator to:

  • Explore multiple interpretations of causality by refining prompts and examining how the AI frames historical events
  • Identify gaps, assumptions, or oversimplifications in initial responses, and use these to iterate and dig deeper
  • Generate contrasting causal frameworks (e.g. structural vs individual agency) to analyse historical outcomes
  • Use AI to model possible consequences of counterfactuals or alternative decisions within a given context

By interacting with AI in these ways, students aren’t simply consuming content – they are constructing and contesting explanations. They learn to interrogate causal chains, test for complexity, and recognise how explanatory narratives are shaped by the questions we ask.

The capacity for nuanced, causal thinking is not removed by AI – it is recontextualised. It becomes a collaborative, iterative process that invites students to take a more meta-historical stance: not just asking what caused this?, but how is causality constructed in historical explanation, and how might AI reflect or distort that process?

Historical Perspectives

AI offers new ways for students to engage with historical perspectives, but it also demands far greater ethical sensitivity and methodological awareness than traditional tools. It can simulate voices, summarise ideologies, and predict what a person from the past might have said – but it always does so through the lens of present-day data and language models. As such, it risks collapsing difference, erasing silences, and flattening the complexity of human experience across time.

Leon Furze has described this practice as digital necromancy – the act of summoning the dead through generative AI. This may sound dramatic, but it’s a useful warning. When we ask an AI to “speak” as a suffragette, a slave, a soldier, or a political leader, we are performing a kind of ahistorical ventriloquism: putting words into the mouths of people who can no longer consent, clarify, or correct us.

As educators, we need to frame this practice with care. Again, discernment is key. The values of empathy, inclusion, and respect apply.

Students can be supported to:

  • Evaluate AI-generated “historical perspectives” against primary and secondary sources, identifying where the simulation aligns with evidence and where it diverges
  • Critically reflect on the ethics of simulation, especially when the AI represents people from communities that have historically been silenced or misrepresented
  • Discuss the implications of anachronism, tone, and motive in machine-generated historical voices – particularly the risk of presentist bias
  • Ask: Who gets simulated? Who is left out? And who decides what that voice sounds like?

The guiding principle should be that AI is not a historical oracle, but a starting point for critical interrogation. Roleplay and simulation may still have a place, but they must be framed as constructed, incomplete, and speculative – not authoritative.

At the same time, AI can be a powerful tool for identifying historical perspectives. Students might:

  • Use AI to summarise the apparent worldview or ideology behind a source, then compare it to their own interpretation
  • Ask AI to identify language or rhetorical devices that signal bias, alignment, or omission
  • Generate alternative framings of an event to explore how perspective shifts depending on values, goals, or social position
  • Investigate how different historical actors might have viewed the same event – and use AI to draft hypotheses as a starting point for historical discussion.

As we tell students, the past is a different country – they do things differently there. AI doesn’t live there. But our students can visit, with care.

The Ethical Dimension

Of all the Big Six historical thinking concepts, the ethical dimension is the most under-addressed – and arguably the most urgent – in the age of AI.

Too often, ethics in history is treated as an optional extension activity: perhaps a quick moment of moral reflection, if teachers have time, that’s tacked on at the end of a unit before the holidays come!

It is rarely explicitly assessed and is frequently overlooked or sidelined in classroom dialogue.

In an educational landscape shaped by generative tools, misinformation, and global digital participation, ethics cannot be peripheral. It must become central. Ethics, along with conversations about digital safety and privacy, is now core business to the history classroom.

The ethical dimension asks students to consider what we do next with our knowledge and skills – how we interpret, share, challenge, or distort the past – and with what consequences. This becomes even more complex when working with AI, which is capable of amplifying narratives, manipulating imagery, erasing nuance, and reproducing bias at scale.

This is why history pedagogy must become more generative – helping students to develop the sense of hope, the purpose, and the tools they need for building a better world; more reparative – engaging with silenced, omitted, or misrepresented voices and responding in ways that do better; and more transformative – giving students tools to not only understand the past but to shape the future.

These aren’t abstract goals. They emerge directly from the ethical questions students are increasingly confronted with in our disrupted world.

Ethical questions must not be isolated to discrete lessons. They must be embedded meaningfully as an underpinning element of the design of units within a course of study – for example, through the selection of inquiry questions and case studies that speak to and reflect students’ lived realities.

Units that explore colonisation, migration, civil rights, climate change, democracies and dictatorships, struggles of justice or protest movements offer immediate and obvious opportunities to connect historical learning with the values, challenges, and hopes of students’ communities.

In this context, the ethical dimension aligns directly with the General Capabilities of the Australian Curriculum, particularly where students are called upon to

apply ethical concepts such as equality, respect and fairness, examine shared beliefs and values that support Australian democracy and citizenship, and become aware of their own roles, rights and responsibilities as participants in their social, economic and natural world.

The ethical dimension of the study of history is NOT an ‘optional extra’.

By centring these priorities, students should see history not as a settled record, but as an active, ethical project – something they shape through their choices, their framing, their voice, and their actions.

To prepare students for a world of powerful tools, uncertain truths, and contested narratives, we must foreground ethical reasoning in history education.

This is about equipping students to think carefully, care deeply, and act responsibly as historical agents in the present – and to do so in ways that are both reparative and future-facing.


Discernment as a Historical Thinking Practice

In many ways, discernment is not a new skill. It has always been central to historical thinking. What has changed is the context. Today, students are navigating learning spaces shaped by AI’s predictive algorithms, instant text generators, and information overload. In that environment, the historian’s habits – asking thoughtful questions, seeking credible evidence, considering multiple perspectives – are more relevant than ever.

But discernment is also about agency. When students pause to reflect – to ask, What do I think about this? – they are not just analysing. They are claiming ownership of their learning and their intellectual voice.


A Renewed Role for Teachers

In a world where AI can do so much of the performative work that traditionally drives schools, our responsibility as teachers becomes even more vital. We can no longer operate as simple content providers – we are now curators of complexity, facilitators of human reflection, and co-designers in the creation of learning experiences that require discernment and sense-making.

The challenge ahead is to ensure that our assessments, classroom practices, and pedagogical goals ask more of students than AI ever could. We must give students meaningful reasons to think, to question, to speak, and to care. If we don’t, we run the very real risk of making schools factories that produce pseudo-learners, grade-producers, and assessment-completers – those who can produce fluent text but who cannot make sense of or take action in the world around them.

And that’s a responsibility – and opportunity – we must embrace.


Discover more from Disrupted History

Subscribe to get the latest posts sent to your email.
