Conversations describing teacher use of AI are changing (again). Initial educator conceptualisations of AI as ‘just another edtech tool’ have quickly crumbled – but the metaphor of AI as a tool remains.

That said, the tool metaphor is itself now fragmenting as early adopters and some first followers grapple with the conceptual nature of AI bots and agents.

While a close colleague of mine recently told me that he genuinely “does not care” about this conversation, it’s time to talk about the words we use to describe the growing range of AI capabilities available to teachers. I’m talking about these things we call bots and agents… and I’m going to throw in an extra for you to think about here: the pseudo-agents.

Over the past few weeks, I have found myself in an increasing number of conversations with educators, edtech leaders and early adopters who enthusiastically describe their use of what they believe to be ‘AI agents’. Every time I hear that people are ‘using AI agents’, I have an involuntary urge to challenge the wording. (I’m sorry.)

I think it is worth noting that there’s a significant difference between what (most) teachers are using in classrooms and calling “agents”, and what is truly “agentic AI”.


An agent or a bot?

The context varies – from classroom workflows to assessment automation, from student support to professional learning – but the core misunderstanding remains surprisingly consistent. What many describe as ‘agents’ are, in fact, just highly sophisticated chatbots. Useful, impressive, often deeply integrated with productivity tools – but not truly agentic. I think the wording and, more importantly, the conceptual grounding matter.

Bots, while I love your work, and no matter what people call you, you are not agents (yet).

It is not hard to see why confusion exists.

Microsoft Copilot refers to some of its advanced bots as “Agents” (and I suspect there’s some clever marketing positioning going on behind that choice). Google calls its programmable entities “Gems”. OpenAI labels customisable user-created bots as “GPTs”. Playlab and others offer what they call user-created “Apps”… but call them what you will, let’s keep in mind that this is NOT agentic AI.

The language is seductive. The results feel magical. But in most cases, these are still best understood as task-oriented assistants rather than independent agents. They simulate agency but lack the underlying characteristics that define agentic AI. I think we can, at best, call the most sophisticated versions of these products – such as Gemini and ChatGPT using Deep Research mode – the pseudo-agents.


What are Agents?

So what, then, is agentic AI? Why does it matter that we draw a line between the bots, the pseudo-agents, and the ‘true’ agents?

Drawing from a quick review of 2025 academic literature, we can start to sketch that line more clearly.

‘True’ agents are better thought of as parts of systems – or as sitting on top of systems. These agents and agentic AI systems are not simply responsive or interactive in the way bots are.

Agentic Artificial Intelligence (AI) builds upon Generative AI (GenAI). It constitutes the next major step in the evolution of AI with much stronger reasoning and interaction capabilities that enable more autonomous behavior to tackle complex tasks. Since the initial release of ChatGPT (3.5), Generative AI has seen widespread adoption, giving users firsthand experience. However, the distinction between Agentic AI and GenAI remains less well understood. (Schneider, 2025)

Unlike traditional AI, Agentic AI systems are designed to operate with a high degree of autonomy, allowing them to independently perform tasks such as hypothesis generation, literature review, experimental design, and data analysis. (Sapkota et al., 2025)

To me, agentic AI systems are intelligent agents that ‘think’ and act independently. They exhibit a high level of autonomy and sustained reasoning, and they act with, and based upon, a self-constructing memory. They have the ability to break down and solve complex problems, and an adaptability that lets them handle dynamic real-world scenarios. I think they are best envisioned as autonomous collaborators with and for humans – what Suleyman has called the “cospecies” – capable of handling complex, long-duration tasks with minimal oversight.

As such, agents are more than bots. Again, I love the bots that exist at the moment. Yet they are just a step along the road to agentic AI.

There’s going to be a tipping point.

When a “Gem”, a “Copilot Agent” or a “GPT” becomes truly capable of acting as – or sitting as a meta-bot on top of – an autonomous system that can identify, pursue and adapt complex goals over time and across changing conditions, often with minimal or no direct supervision, it becomes an agent. True agentic AI is a system that doesn’t just respond – it initiates, plans, decides, monitors, iterates, collaborates and acts.

… and we’re not there yet!

In this framework, a chatbot that can summarise an article, coach students through maths problems, or generate a lesson plan on command might be powerful, but it is not agentic.


Blurring the line? The pseudo-agents

Lately, we’re seeing a wave of AI modes that look a lot like what we might expect of an agent. They really do look like they can think for themselves. Some people are calling these tools “agents” – but, to me, that’s jumping the gun.

What I’m describing here is the pseudo-agent. They’re clever Generative AI systems that follow instructions, link together tasks, and give an excellent illusion of independence. But, and here’s the catch, they don’t actually set their own goals, and they can’t work without someone telling them what matters. They’re like a helpful assistant who’s great at following directions but can’t yet think through the bigger picture on their own. They are the helpful, eager-to-please and highly competent intern. They are the PA who acts on your direction – the one you can send away to do some work for you. Note this deliberate change in metaphor. It’s perhaps best not to think of the pseudo-agent as a tool. The pseudo-agent is your copilot (but it’s not autopilot).

Pseudo-agents are impressive. They can save time. I’ve used them for research, to develop teaching and learning plans, and to test my thinking against the literature. They can be extremely helpful as we seek to personalise learning and support students in various ways. They are moving into the territory once dominated by the search engine. They can research and write some extremely strong essays. Despite the metaphor, they’re still tools – powerful ones – but tools all the same. And that’s an important thing to remember as we decide what we trust them to do.

As much as they might feel like they are agents, they are not meeting the definition of agentic AI (yet).

Again,

Bots like ChatGPT and Gemini in Deep Research mode, while I REALLY LOVE your work, and no matter what people call you, you are not agents (yet).

A pseudo-agent, like ChatGPT or Gemini using Deep Research mode, that can string together web searches and data retrieval to create a tailored report is closer to an agent and feels like an agent – especially when it shows reasoning steps, memory and decision-making. But even these systems, tremendously impressive as they are, still depend significantly on human-initiated tasks, instructions, oversight and validation.
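
To make that distinction concrete, here’s a minimal sketch of the shape of a pseudo-agent run – illustrative Python only, with stub functions standing in for search and summarisation (none of these names are any vendor’s real API). Notice that the goal arrives fully formed from the human, the chain executes, and then everything stops and waits:

```python
# A minimal, illustrative sketch of a pseudo-agent run – not any vendor's
# real API. The helpers are hypothetical stubs; the point is the shape:
# a human supplies the goal, the system executes a pre-planned chain of
# steps, reports back, and stops.

def plan_queries(task: str) -> list[str]:
    # Decomposes the *given* task; it never invents a goal of its own.
    return [f"{task}: definitions", f"{task}: recent studies"]

def web_search(query: str) -> str:
    return f"(results for: {query})"   # stub standing in for retrieval

def summarise(text: str) -> str:
    return f"summary of {text}"        # stub standing in for an LLM call

def deep_research(task: str) -> str:
    """Human-initiated: the goal arrives fully formed from the user."""
    notes = [summarise(web_search(q)) for q in plan_queries(task)]
    return "\n".join(notes)            # then it waits for the next human prompt

print(deep_research("formative assessment in maths"))
```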

In contrast to a pseudo-agent (still a Generative AI bot), agentic AI will operate in open-ended problem spaces. An agent will receive a high-level goal, break it down into subtasks, allocate resources, loop through planning cycles, and adapt its approach as new information emerges. An agent may delegate part of its work to other bots and agents on behalf of the user. It will decide when, how, and what to do… and probably a good chunk of the why!

Think of an agent as a digital ecosystem of AIs designed not just to answer a question but to identify important questions, gather evidence, test hypotheses, and even escalate findings or decisions to human stakeholders when thresholds are met. I loosely describe agents as being a form of AI in ‘fire and forget’ mode.
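
If the pseudo-agent is a single human-triggered chain, the agentic system is a standing loop. A second illustrative sketch – again, every name here is a hypothetical stand-in, not a real framework – shows the structural difference: the loop holds its own goal, plans its own subtasks, adapts as results arrive, and escalates to a human only when a threshold is met:

```python
# An illustrative sketch of an agentic loop under the definition above:
# the system holds a high-level goal, plans its own subtasks, adapts as
# results arrive, and escalates to humans when a threshold is met.
# Every name here is a hypothetical stand-in, not a real framework.

import random

def plan(goal: str, memory: list[str]) -> list[str]:
    # The agent decomposes its own goal, informed by what it has learned so far.
    return [f"subtask {len(memory) + 1} for: {goal}"]

def act(subtask: str) -> tuple[str, float]:
    # Executes a subtask (perhaps by delegating to another bot or agent)
    # and returns a result plus a confidence score.
    return f"result of {subtask}", random.random()

def run_agent(goal: str, threshold: float = 0.9, max_cycles: int = 20) -> str:
    memory: list[str] = []                 # self-constructing memory
    for _ in range(max_cycles):            # 'fire and forget': it keeps cycling
        for subtask in plan(goal, memory):
            result, confidence = act(subtask)
            memory.append(result)          # adapt future planning to outcomes
            if confidence >= threshold:
                # Threshold met: escalate the finding to a human stakeholder.
                return f"ESCALATE to human: {result}"
    return "no threshold met this run; replanning on the next cycle"

print(run_agent("monitor student progress and flag emerging gaps"))
```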

In education, this distinction becomes critical. If we assume our tools are agentic when they are not, we risk outsourcing pedagogical decisions to systems that cannot understand the broader context or consequences. If we fail to notice when tools do become agentic, we may overlook the implications for teacher agency, student agency, curriculum design and ethical governance.

Further, in education, this distinction is also crucial because an AI agent sometimes sounds, in my mind, a little bit like some of my students…


Between copilot and autopilot

We are, as the headline metaphor suggests, currently somewhere between using AI as copilot and autopilot. This shift is not just technical – it is conceptual.

The path from LLM-based assistants to autonomous agentic systems requires not only advances in architecture, memory and reasoning, but also a deeper reckoning with what it means to teach, to learn and to delegate. It requires us to think in metaphor, as modelled for us by Microsoft AI CEO Mustafa Suleyman. Thinking in metaphor is not often a strong suit for school leaders who have thrived and risen to their roles in an industrial age, but they must embrace a new paradigm and a new metaphor.

We’re moving from a space in which using AI is about mastering the technical – the tools, the routines and the principles of grounded use – to one in which it is about mastering new ways of conceptualising education and teaching. Embracing metaphors.

I believe we’ll soon be engaging with true agents in school. Agentic AI will be a reality – although it’s not one yet. To deal with this paradigm shift, we’ll need to think in metaphor. Teachers will need to think about what having AI as a cospecies in the classroom really means. They’ll need to move beyond the brand names of Copilot and Gemini to embrace what those names foreshadow: a copilot alongside us, a twin with us… a companion offering a voice in our ear as we think.

CHATting to us.

Shaping us.


The coming wave…?

A wave of profound change is engulfing our world. We’ve not seen a paradigm shift like this since the start of the Industrial Revolution in the 1700s.

The wave is here already… but it is yet to crest… it is building…

Educators must learn to distinguish between the guided, responsive and interactive bot, the pseudo-agent set loose on a long leash, and the truly agentic system.

We must ask: Who is setting the goals? Who is deciding what matters? Who is accountable when things go wrong?

Agentic AI promises much… but it is not here yet.

Until agents are in our midst, we must be precise in our language and vigilant in our expectations. The bots are not yet agents. The autopilot is still under construction.

And as always, the challenge for educators is not just to adopt technology, but to do so in ways grounded in pro-social values, high ethical standards, and a rich love of what’s human. This wave upon us is challenging our assumptions about so much that has been established as the default position of the industrial world. We need to reimagine our place in a new age with clarity of vision, care and curiosity.

