
Your Child Is Already Living in an AI World. Are We Helping Them Navigate It?

Katy Lamb · 14 March 2026

There's a conversation happening in schools right now that sounds very familiar to anyone who lived through the calculator debate.

In the 1970s and 80s, educators argued fiercely about whether children should be allowed to use calculators. The fear was reasonable: if you let them use a machine to do arithmetic, they'll never learn to think mathematically. They'll become dependent. They'll lose something essential.

What actually happened was more complicated. Children who learned alongside calculators — who understood what the tool was doing and why, who used it to explore bigger problems rather than avoid smaller ones — developed stronger mathematical intuition than those who were kept away from them. The tool, used well, made them better thinkers. Used badly, it made them dependent. The difference was in how it was introduced, and by whom.

We're having the same conversation about AI. And I think we're at risk of making the same mistake — treating it as a threat to be managed rather than a capability to be taught.


What we actually want for our children

When I think about the skills that will matter most for children growing up now — not just professionally, but as humans navigating a complex world — a few things come up consistently.

The ability to direct a process toward a meaningful outcome. To know what they want, articulate it clearly, evaluate what they get back, and refine until it's right. To collaborate — with people, with tools, with systems — without losing their own voice in the process. To create things that are genuinely theirs, even when they didn't do every part alone.

These aren't new skills. They're ancient ones. What's new is that AI is becoming one of the primary tools through which these skills will be expressed.

A child who learns to work with AI — to steer it, interrogate it, push back on it, use it as a starting point rather than an ending point — is developing exactly the kind of agency that will serve them for the rest of their lives.

A child who learns to let AI do things for them, without engagement or direction, is learning something quite different.


The difference is creative ownership

Here's what I've observed, both in thirty years working with children and in building something that puts AI in children's hands: the magic isn't in what the AI produces. It's in what the child brings to it.

When a parent sits down with their child to create a personalised story — when the child gets to decide who the characters are, what the world looks like, what happens, what matters — something interesting is happening. The AI is the engine. But the child is the author.

They're practising the single most important skill for the AI age: intentional direction. Knowing what you want. Expressing it. Shaping the output until it matches the vision in your head. Owning the result.

This is not a small thing dressed up in big language. Watch a five-year-old insist that the dragon is purple, not blue, that it needs to be friendlier, and that actually the dragon and the child are best friends and that this has to be in the story. That child is directing. Evaluating. Revising. Advocating for their creative vision against a system that doesn't automatically know it.

That's a transferable skill. A significant one.


What "collaboration with AI" actually looks like at five

I want to be concrete about this, because the phrase "AI literacy" sounds abstract and slightly daunting.

It isn't. At five, it looks like this:

A child telling you what their story should be about and watching it take shape. Noticing when something isn't quite right ("that doesn't look like our dog") and learning that their feedback matters, that the system responds to their input. Understanding, at the most basic level, that this tool is something you work with, not something that works for you.

At eight, it looks like drafting and refining. Understanding that the first version is a starting point. Developing the critical eye to see what's missing, what's wrong, what could be better — and the vocabulary to say so.

At twelve, it's deeper: understanding what the tool is good at, where it falls short, when to use it and when not to. Using it to amplify their own thinking rather than replace it.

None of this is complicated. All of it starts early. And all of it is built on the same foundation: experiences where the child is genuinely in charge of making something meaningful, and AI is the tool that helps them do it.


The thing worth being careful about

I want to be honest here, because I think a lot of the AI-and-children conversation is either naively utopian or needlessly fearful, and neither is useful.

The risk with AI is real. It's just not the risk we usually talk about. The risk isn't that children will use AI — they will, and they should. The risk is that they'll use it passively. That they'll accept the first output. That they'll outsource their thinking rather than augment it. That they'll mistake the tool's fluency for their own.

The antidote to that isn't banning AI or pretending it doesn't exist. It's giving children early, positive experiences of using AI as a creative collaborator — where they are unmistakably the author, the director, the one whose vision shapes the outcome.

When a child holds a book that exists because they decided what went in it — their character, their world, their story — they're not just holding a keepsake. They're holding evidence of their own creative agency.

That's the foundation. Build it early.


The calculator debate ended, eventually, with a more nuanced understanding: the tool wasn't the problem. The pedagogy was. How you introduce a powerful tool to a child — whether you teach them to depend on it or to direct it — determines almost everything about what they get from it.

We have the same choice now, with AI.

I know which way I'd go.

Ready to create their story?

Put your child at the heart of a book made just for them.