A few recent posts online had me reflecting this week on some experiences using Microsoft Copilot (then Bing) over the last couple of years. In late 2023, my Year 7 History students completed a task that wove together ancient belief systems, source-based historical writing, and the trial-and-error logic of prompting. Their mission was to research an assigned Ancient Egyptian god or goddess, write a historical paragraph that analysed that deity’s significance, the symbolism associated with it, and its societal role, and then use AI to generate an image of the figure based on their research.
But this wasn’t just about gods or glossy outputs. Upon reflection, this was an experiment in student agency during research, in their discernment when using primary and secondary historical sources, and in their comfort operating with a new technology in what Nathaniel Swain might describe as explorer mode. It was also an invitation to rethink what it means to “do history” when our sources now include AI-generated media.


From Source to Sentence to Synthesis
The structure of the task was rooted in what’s probably a familiar disciplinary practice for many history teachers. Students were allocated a specific deity – Sobek, Ma’at, Nut, Geb, Ra, Hapi, and others – and guided through scaffolded research using curated online and print sources. They then composed a short analytical paragraph, in historical voice, that described the god’s iconography, role in Egyptian cosmology, and connection to the broader structures of Ancient Egyptian society.
From there, the unfamiliar began.
Students had to take their research and transform it into an image prompt. The leap from paragraph to prompt required them to abstract their knowledge, distil key iconographic details, and communicate historical understanding through visual cues. They were no longer just knowledge consumers. They became creators.
Moving into Explorer Mode
This activity grew out of a broader intention to move students beyond surface-level comprehension and into what I’ve previously described as explorer mode – the space where learners are given permission to navigate uncertainty, revise assumptions, and construct knowledge actively, not passively.
Explorer mode doesn’t reject explicit instruction. It builds from it. It’s found in the moments after the scaffolds end, when students can test the limits of their understanding through creative or critical acts.
This task offered a springboard into precisely that kind of learning. Students were given opportunities to encounter unexpected hurdles – such as historical ambiguities, technological challenges, AI misinterpretations, and ethical frictions – and had to respond in real time. In doing so, they were enacting multiple facets of historical thinking.


Historical Thinking in Action
Peter Seixas’ six concepts of historical thinking were alive in this process:
- Students explored historical significance: They had to determine which elements of their deity mattered to Ancient Egyptians – and why.
- Students used evidence: They worked with primary and secondary sources, considering reliability and usefulness, and synthesising the ideas they discovered into a cohesive picture.
- Students weighed up historical perspectives: They explored how Ancient Egyptians conceptualised divinity, death, nature, and kingship in ways different from today.
- Students considered continuity and change: Many students noted how depictions of deities shifted over time while still retaining many key features. Some recognised the use of Ancient Egyptian religious symbolism today.
- Students had opportunities to identify causes and consequences: Some reflected on how certain gods emerged in response to geographical, ecological or political needs.
- Students grappled with the ethical dimension: This became especially salient during the AI generation process, when students had to think carefully about accuracy, attribution, and the guardrails and limits associated with AI technology.
The AI as Unreliable Collaborator
It quickly became clear that AI generation wasn’t a plug-and-play exercise.
Several students noticed that the image generator confidently inserted features that had no historical basis. These errors in depiction became powerful teaching moments.
Students were forced to go back to their sources and ask why the AI got it wrong. They refined their prompts, revised the structure of their sentences, and experimented with vocabulary and specificity. They became historians steering and arguing with a technology – and that was part of the point.
In doing so, they were not just learning to “write better prompts.” They were learning to ask better questions. They were learning both a healthy trust and a healthy mistrust of the product created by AI.
This is one aspect of what a transformative historical pedagogy might look like: using emergent tools not to replace thinking, but to provoke it.


The Goddess Nut and Platform Boundaries
One of the most revealing episodes involved a student assigned to depict Nut, the sky goddess. Time and again, she found that her image-generation prompt was blocked by the AI platform. At first, we were puzzled. Her prompt was well-researched, respectful, and aligned with her paragraph. So why was it being flagged as outside the content guidelines of the AI tool?
The answer lay in the wording of online sources, in art history, and in the AI moderation protocols for image creation. Nut is traditionally shown naked: a star-covered woman arched protectively over the earth god, Geb. Despite the historical validity of this depiction, the AI’s safety filters likely flagged the pose and nudity as inappropriate.
Here was a collision of historical representation, technological restriction, and ethical decision-making. A great teaching moment. Together, the student and I revised the prompt to describe Nut as “fully clothed in a star-covered linen robe”.
The image was created.
We debriefed as a class about the important role of safety features in generative AI platforms, and about how modern tools may carry embedded assumptions about the meaning of words and what is “acceptable”. This was not a detour from the history curriculum. It did not represent a ‘failure’ in the activity. The difficulties faced by this student were important. The experience was the curriculum. It was a natural consequence of learning in explorer mode.

Reimagining Student Work as Inquiry Artefact
The students’ final outputs – their paragraphs, prompts, and images – were submitted as Word documents via Teams, forming a mini digital gallery. The collected images – with their paragraph captions – were printed on A3 in colour and displayed in the classroom. Each entry represented a convergence of cognitive, creative, and critical work. Effectively, the AI prompt and the follow-up iterative chats used to create each image were an artefact of student decision-making.
The activity required the development of both digital and historical literacy.


Final Reflections: Beyond the Task
This task aligned deeply with my PhD research, which explores how a technology-infused, transformative history pedagogy can enhance students’ global, civic, and personal agency. And this activity – seemingly small, focused on a pantheon of deities – touched on some of my thinking about ‘what matters’ when teaching history.
By asking students to build images, I was really asking them to build meaning. By asking them to revise prompts, I was helping them revise their own understanding. And by confronting the limits of the platform together, we helped them see that history is always mediated – and that it’s our job to be critical, creative, and agentic in how we respond.
In short: they weren’t just learning about Ancient Egypt.
They were learning how to think historically in a world where the past is increasingly filtered through artificial lenses.
And in doing so, I suspect they took one more step toward becoming the thoughtful, discerning, future-facing citizens our pedagogy imagines.


