Reflections on MIT’s Guidebook to AI in Schools (Part 2)
This post has been a long time coming.
Partly for reasons outlined in depth in my Grumpy Vince post from earlier this week – and partly because it’s easier than I’d like to admit to stay at the level of motherhood statements when we talk about AI in education. Phrases like “human-centred”, “ethical use”, and “AI literacy” get rolled out endlessly: at conferences, in policy papers, across social media threads.
I’m as guilty of this as anyone.
Far too often we don’t stop long enough to interrogate what these phrases actually mean in practice – especially in the messy middle of real schools, with real students, under real constraints.
In A Guide to AI in Schools: Perspectives for the Perplexed, Justin Reich and his team at MIT do something rare: they don’t just offer “best practice” advice – they name the uncertainty. They invite us to admit, quite candidly, that we don’t yet know what the best practices are. In a moment that feels like a direct nod to the old line about building the plane as we fly it, they write:
“Because there is not yet a consensus regarding what students should be taught about AI, if we choose to provide guidance, we should also convey that we don’t know if that guidance is correct. Experts should convey to schools that best practices for teaching about AI aren’t yet established, schools should convey that message to teachers, and teachers should convey it to students.” (Page 27)
There’s something beautifully honest about that.
It’s not an abdication of responsibility – it’s an invitation to begin the work with humility, to be reflective, and to be transparent with our communities.
If we want students to practise discernment, honesty, and intellectual maturity in an AI-saturated world, then we should be willing to demonstrate what that looks like: naming what we know, naming what we don’t, and staying present to the questions.
If we want our students to be open and transparent, let’s demonstrate to them what that looks like.
Reich and the MIT team don’t pretend to have all the answers. But they do offer something just as valuable: questions worth asking, frameworks worth adapting, resources worth exploring, and perspectives worth holding onto — as we find our way through this new season of teaching.
Ethics isn’t an add-on
In an earlier post (which now feels like it belongs to a different season, an eternity ago), I lingered over the guidebook’s ethical framing in Chapter 2 (pp. 5–8). I still think it’s one of the most useful entry points I’ve seen for schools that are only just beginning to grapple with AI in a serious way.
Not because it offers easy answers.
But because it insists on asking the right kind of questions: about transparency, fairness, privacy, non-maleficence, teacher wellbeing, and the rights of children.
What strikes me most, though, is how this framing can operate as a kind of bridge — between the abstract ideals of “ethical AI” and the lived realities of school practice.
It nudges us to consider the human impact of these tools at every layer of the school:
- not just as a productivity boost
- not just as a shiny learning aid
- but as something that reaches into the emotional, social, and developmental lives of young people
In other words: ethics isn’t the compliance checklist we attach at the end.
It’s the through-line – the thread that should run through every decision we make about AI, from procurement to policy to the everyday relational work of teaching.
Recentring the learner and learning
Too often, the conversation around AI in education collapses into a single, anxious cluster of concerns: assessment, plagiarism, academic integrity.
These matter. Of course they do.
But they’re not the whole story.
At its heart, teaching is about students, not just their outputs. And far too little of the AI conversation has stayed with the social and emotional impact of these tools on learners — on attention, confidence, belonging, identity, and the quiet (often unseen) ways young people make meaning.
Chapter 3 of the guide rightly flags this:
“There is only limited research on how AI tools impact student learning. And, given the rapid evolution of these tools, research is difficult to conduct: by the time a study is peer reviewed and published, there is usually a newer version of the tool that may or may not have the same impact on learning.”
This resonates deeply with what I’m seeing in my own research and experience in schools. The pace of change makes rigorous longitudinal research hard. But it also underscores why schools need a dedicated and skilled person in-house who can read widely, reflect deeply, and help guide others through this complex terrain.
This person doesn’t have to be a technologist.
In fact, I’d argue that the ideal “AI guide” for a school is often someone with a generalist mindset – someone who understands teaching, student development, research, and ethics.
It may be my own bias talking, but many history or humanities teachers are well-placed for this role.
These are disciplines where:
- critical reading is normal
- long-view thinking is expected
- ethical reasoning is unavoidable
- perspective-taking is the daily bread
In a moment like this, those capabilities aren’t “soft”. They’re essential.
Further, as a former pastoral care middle leader, I found this line particularly important:
“Because generative AI is so new, its impacts on social and emotional wellbeing are not yet well understood – more research is definitely needed.”
This is especially true for K-12 schools, where our responsibilities for student welfare far exceed those in higher education. In schools, we do not just teach students – we owe them a legally enforceable duty of care.
That duty must shape how we engage with AI: what we adopt, what we restrict, what we explicitly teach, what we refuse to normalise – and who we listen to along the way.
Because when we recentre the learner, ethics isn’t theoretical.
It becomes pastoral. It becomes relational. It becomes the work.
Human-centred use starts with student voice
One of the most arresting moments in the guide isn’t a framework or a policy recommendation.
It’s a student.
A young person trying to put language around what it feels like to make something with their own mind and hands. To paraphrase the student’s reflection:
There’s something powerful about something that comes directly from a human’s personal mind… something beautiful about someone just sitting down and having to crank out their own personal story, with their brain actually working.
That’s the kind of perspective we need more of.
Not because it’s nostalgic or anti-tech, but because it reminds us what education is really for.
If we’re going to call our approach “human-centred,” then we need to centre actual humans – their values, their experiences, their stories, their growth.
Reclaiming the parts of teaching that matter
Chapter 4 is short, but important. It urges teachers to ask a deceptively simple question: Which parts of our work benefit most from outsourcing to AI, and which parts are best retained by humans?
My advice? Don’t give away the good stuff.
There’s no doubt AI can handle time-consuming admin, draft lesson plans, and generate quizzes. Great. But the soul of teaching is not in such busywork – it’s in the community, the connections, the creativity, the moments of insight and humanity. Don’t delegate those.
Let AI shift the burdens of performativity from your work so that you can amplify the authenticity and richness of the humanity you bring to it. Let AI free you to be the best version of yourself in class!
Perhaps do this by working in hybrid, or cyborg, mode. Use AI tools to clear the performative river gravel that often clogs our day-to-day, so that the educational gold – real learning, real thinking, real connection, real meaning making – can shine through.
To do this well, teachers must know their professional “why”.
- Why do you teach?
- Why does your presence matter?
- What is it that only you can bring – in relationship with these particular students, in this particular community, in this particular moment?
AI should help you express that humanity more fully — not smother it, suppress it, or replace it with something slick and synthetic.
And here’s the part we can’t ignore: teachers won’t get there by accident.
Teachers need support — not just technically, but professionally and personally — to navigate this shift with confidence and integrity. Schools have a moral and professional responsibility to build that support.
- Not with one-off workshops.
- Not with generic PD days.
- Not with a slide deck from someone who doesn’t know your students.
But with ongoing, thoughtful engagement – AI as part of a school’s broader professional learning architecture, linked to pedagogy, wellbeing, curriculum, and values.
This shift isn’t going away.
And we can’t afford to leave staff behind.
A quick note on policy
Chapter 5 of the guide explores AI policy development in depth. That’s a topic for another time – and one that’s already well covered elsewhere, including in the archives of my blog.
But let me flag this: the checklist on page 18 is an excellent resource for schools yet to develop internal policies or guidelines. If you’re starting from scratch, start there.
Making sense of AI literacy
Chapter 7 makes a crucial point: If schools want to develop AI guidelines, they first need a shared understanding of what “AI literacy” even means.
The guide provides a helpful list of what AI literacy might include – from understanding how generative models work, to recognising limitations, biases, and the human decisions embedded in every output. More importantly, it explains why we must teach this.
We must remember that we’re not just preparing students to use AI tools in the classroom. We’re preparing them to live in an AI-infused world. They need conceptual clarity, critical thinking, and ethical grounding. They need to be AI literate not just for jobs, but for life.
Beyond academic integrity — towards authentic learning
The guide’s final chapter addresses AI and assessment, but I want to deliberately step past the plagiarism and detection conversation. We’ve already spent too much time there. Instead, let’s ask this: What does authentic evidence of learning look like in an AI age?
Too many of our existing assessment tasks were already flawed, performative, and disconnected from genuine thinking. AI hasn’t broken them – it’s exposed them. And that’s a gift for educators.
This is a historic moment in which we are challenged to reimagine assessments that are engaging, meaningful, process-rich, connected to our human experience, and rooted in actual learning.
Let’s also acknowledge that AI detection tools are widely rejected by teachers – and for good reason. They’re unreliable, often biased, and pedagogically unhelpful.
In short, it’s my view that teachers don’t need AI detectors. We need to know our learners and how they learn… and that need comes with pedagogical and resourcing implications!
A treasure trove of resources
Finally, don’t miss the extensive hyperlinked resources at the end of the guide. While primarily American in nature, they’re an absolute Aladdin’s cave – perfect for teams wanting to dig deeper, explore nuanced perspectives, or design their own professional learning pathways.

