Reflections on MIT’s Guidebook to AI in Schools (Part 1)
If you were to ask me about Australian K-12 teachers’ response to the arrival of AI in schools, I’d suggest, based upon my experience, that a walk into a staffroom in 2026 would reveal a range of responses.
In many schools, I suspect there’d be some evidence of confidence, interest and enthusiasm from a relatively small group of teachers whom we might describe as the ‘early adopters’ or ‘first followers’.
There’d be a significant number of teachers operating in what Ethan Mollick might call ‘secret cyborg mode’. There’d be evidence of some trepidation and caution from many teachers. There’d be some railing against AI as a tool for ‘cheating’ and against its challenges to assessment and academic integrity. Not to minimise these concerns by grouping them together, but there’d be some teachers questioning the direction of Generative AI development on grounds including its impacts on the environment, art, copyright, democracy and the concentration of power. There’d also be many living in ignorance of AI’s impacts even as the tools become ubiquitous in our lives; these teachers are operating in ‘business as usual’ mode. And I suspect there are many more responses to AI from K-12 teachers besides.
Broadly speaking, I suspect we’d also find within all of those groups a great deal of evidence of ongoing curiosity, apprehension, and change fatigue!
In essence, teachers are a microcosm of society. I now find it hard to present to rooms of teachers from a variety of schools at conferences, because teachers are in so many different places when it comes to AI. The lived reality in schools right now is messy. AI has arrived. Teachers are not in control of this revolution; we’re in the midst of it.
That’s why A Guide to AI in Schools: Perspectives for the Perplexed, from MIT’s Teaching Systems Lab (authored by a team led by the Lab’s director, Justin Reich), deserves serious attention.
In the coming weeks, as Australian schools approach the beginning of the 2026 school year, I’ll be taking a look at a few chunks of this excellent resource and sharing some of my reflections upon it.
We’re building the plane while we fly it.
Reich and his team frame the moment we’re in with humility and honesty. In the preface to their guide, Reich likens writing an AI guidebook for schools to writing a handbook on aviation in 1905 – two years after the Wright brothers took flight.
“We’re kind of just building the plane while we fly it.”
The metaphor lands hard: we simply don’t know yet what best practice for using AI looks like. But we do know one thing – AI is already in our schools… and, as a profession, we’re working things out as we go along. Bear Grylls-style, K-12 teachers are improvising, adapting and overcoming as they face the challenges thrown up by AI in schools.
The authors introduce the term “arrival technology” to describe Generative AI. They emphasise that it’s a technology that didn’t come through a formal procurement process or approval channels. It simply appeared in classrooms. The guide reminds us that this isn’t like the adoption of laptops, which came with budgeting, infrastructure, and strategy. This is closer to when mobile phones or search engines started reshaping student behaviour – only faster, bigger, and with far less predictability. Students are using it. Teachers are using it. There was no rollout plan. It arrived. And that demands a very different response.
The case for informed experimentation
One of the guide’s strengths is its refusal to over-simplify. It embraces the VUCA nature of this moment: the space of AI in education is volatile, uncertain, complex, and ambiguous. The guide invites educators to responsibly try things, make informed guesses, reflect, and try again. Teachers are urged to tell students, “we’re learning too” and that, when it comes to AI, “our ideas might change.” Teachers are called to be humble and transparent here. Don’t try to be the ‘sage on the stage’ when it comes to AI. No one has the answers all worked out yet. I would caution educators to be wary of those claiming to already have AI in education ‘figured out’. This is not a domain of silver-bullet panaceas. Anyone presenting a totalising solution to ‘how to use AI’ is, at best, selling something and, at worst, ignoring the messy realities of schools, students, teaching and learning. As the guidebook reminds us: this is not about knowing the knots; it’s about exploring how others are bending the rope – and finding out which configurations are sturdy, and which ones fall apart.
Further, early in the Guidebook, the authors draw attention to the multitude of use cases for AI and urge teachers to keep in mind the need to strike a balance in AI use within schools.
“We’re trying to find the balance between using [AI] as a productivity tool… but also making sure that students can still have the skills that we want them to have in terms of being able to read, write, and think on their own.”
Reich writes, “Because of all of the experimenting that educators are doing, in a decade we’re going to know much more about how to teach kids to [learn] with and without AI”.
Therefore, we need to keep the experimenting we do ethical, transparent, and pedagogically appropriate.
A call for thoughtful, humble communities of practice
One of the most valuable gestures in this guide is its recognition of the importance of building communities of practice. On page 4, Reich et al. encourage schools to build cross-functional teams to navigate the evolving AI terrain – teams that together share their learning, their reflections, their successes, their failures, and so on. This is the time for teachers to find their tribe. To work with them. To read. To experiment. To listen. To reflect. To share. To adapt. To grow.
The team doesn’t recommend racing toward permanency – some complete suite of polished documents or fixed ways of working. Instead, the MIT team urge school leaders to anticipate the need for ongoing review, and for versions and updates of all documents guiding teachers and students. For example, school leaders might state:
“These are our 2025-2026 AI guidelines. We’re going to revise them as needed.”
This is a healthy way of working. It models responsiveness and learning over rigidity. It reflects a true understanding of the ways AI is rapidly developing, and it models a growth mindset.
Leading with ethics
Impressively, the first substantive section of the guide dives directly into ethics – and rightly so. Eleven ethical principles are outlined, each connected to key questions and grounded in real-world classroom examples. I suspect this list is not intended to be read as hierarchical, but I have a bone to pick here. While “Transparency” is placed first on MIT’s list, I would argue that Privacy (and I’d include Safety) and Children’s Rights deserve even greater emphasis. It’s worth emphasising that, while in higher education students are usually adults, in K-12 environments students are minors under our care. I can’t help but think that, when papers are written by experts in higher education institutions, this distinction might slip under the radar a little too easily. We can’t take it for granted. I was also particularly struck by the absence of a clear articulation of the core question I believe should be central to every school’s AI conversation: Will this tool help or harm the children in our care? Yes, the guide does pose that same question in relation to teachers, and it certainly explores “Non-maleficence” as a principle. But, in this moment of rapid adoption, we must elevate student wellbeing, privacy, and pedagogical appropriateness as primary filters.
Again, I must emphasise that I really like this document. I find all the ethical principles articulated by the MIT team worthy, and I find it hard to disagree with any of the intent… but I would like to sharpen the focus on these couple of areas. I would hate to think that a school leader could pick up this document uncritically and ‘miss’ what I’d urge us all to place front and centre in the work of schools.
When it comes to AI, let’s place safety and privacy (non-maleficence) unambiguously in the glare of our professional spotlight.
When considering AI within our practice, let’s ask the question MIT poses on behalf of teachers:
Will this tool help or harm the children in our care?
Privacy is not optional – especially for minors
Before wrapping this blog post up, I’d like to draw some special attention to page 8 of the Guidebook. The discussion there about data privacy is essential reading.
It rightly warns that AI systems, even when users don’t provide personal information, can infer identity markers and potentially share data – intentionally or not – for commercial purposes.
“AI tool users may unintentionally share personally identifiable information… [which] could then be used for a variety of purposes,” including advertising.
In schools we must think about what tools are being provided by our organisation, and what (realistically) we might be able to do to help safeguard the identities, privacy, and data of the kids in our care. In primary and secondary school settings, this is not a technicality. We have a duty of care. We cannot act like higher education institutions where students are legal adults. Our obligations are – and must be – higher.
A guidebook worth engaging with
This guide isn’t definitive, and it doesn’t pretend to be. In fact, the authors rightly own the fact that we are all “building the plane as we fly it”.
Instead of offering easy fixes and comfortable solutions, Reich and his coauthors model the kind of thoughtful, questioning, community-informed approach that all schools need in this moment.
If you haven’t read it yet – read it.
If you’ve read it – talk about it with your team.
Not because it has all the answers, but because it frames the right questions. It holds within it the right provocations.
In upcoming posts, I’ll explore the guide’s next sections – including how it addresses policy development, student use, teacher PD, and AI literacy.
But for now, if you’re still feeling perplexed as to how best to engage with AI in your classroom – that’s okay.
So are we all. There’s plenty of room in our tent.
Read the full guide via the MIT Teaching Systems Lab:
A Guide to AI in Schools: Perspectives for the Perplexed

