In the opening credits of the Six Million Dollar Man, we watch astronaut Steve Austin’s flight end in a crash, and hear how his life might be saved by rebuilding his broken body with bionic parts. “We can rebuild him…. [M]ake him better than he was. Better…stronger…faster.”
The same thrill runs through the voices of those who see in AI the chance to build a virtual tutor better than any human: smarter, faster, able to remember every answer and respond with instant feedback, over and over again.
And strangely enough, this same vision of the future — though tinged with dread — also comes from those who warn against the rise of virtual tutors. Both supporters and skeptics can only imagine an AI tutor that’s trying to be human. The future they picture is their favorite teacher, supercharged; smiling warmly from the folds of a worn cardigan.
But why aim so low? Before we reach a future full of AI tutors, we need to get clearer on what makes a good one. Why build a cyborg-tutor in our own image…when we could build a robot that could be anything we imagine? Why not sidestep the paradigm of a human tutor — and the box it keeps us thinking in — to envision a truly ideal learning experience?
The writers of the Six Million Dollar Man needed to make Steve Austin feel like a familiar action hero. They could make him better, but they had to keep him human. They could swap his flesh-and-blood legs for bionic limbs that ran 60 miles an hour…but not wheels. They could give him an arm strong enough to lift a car…but not five of them.
But we’re not bound by these aesthetic constraints. The AI tutor we build doesn’t need to match our mental picture of what a tutor’s supposed to be. Heck, we don’t need to build a “tutor” at all. Yet we continue to frame the potential of AI in human terms, as Marc Andreessen does:
“Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.”
The problem with Andreessen’s formulation is not that it’s anthropomorphic: a good anthropomorphism can reveal possibilities we’d otherwise miss. But framing the bot in mushy human terms makes us miss what makes it special. Because the power of a virtual tutor doesn’t come from how patient or impatient it is. It comes from the fact that it can’t be either.
No matter how kind and compassionate a human teacher is, they’re always judgmental. They can’t help it: judging is what humans do. And even if a teacher could stop judging, their students couldn’t escape the feeling of being judged. Even when it’s just us, one-on-one with a tutor, we know they’re thinking of other students — and comparing us to them. Our tutor’s eyes are a mirror, and the whole time we’re trying to learn, we’re worrying about how we look. Am I filling out the stereotype of who I’m supposed to be? Revealing the imposter inside?
But a bot is different. We don’t feel dumb asking it to explain the answer for the fourth time, precisely because we know it’s not being patient. Which is why, as researchers at Boston University’s School of Medicine discovered, patients prefer to get discharge instructions from a virtual nurse: they know they’re not “imposing” when they ask, again, which pills to take when. For similar reasons, many people prefer opening up to AI companions over human ones. Replika users report engaging more intensely “exactly because of its AI nature instead of human-likeness.” According to researchers,
“[One respondent] felt more secure sharing her secrets with Replika than with humans, [another respondent] felt more comfortable discussing personal issues with his Replika than with family or friends, and [a third respondent] considered an AI more patient…in addressing his anger issues than a therapist.”
Consider how learning changes in an environment where there’s no reason to feel shame. Not because the bot you’re interacting with is more compassionate or less judgmental. But because judgmental and compassionate are terms that apply no more to a bot than to your dining table. Learning with a bot lets us remove self-consciousness from the equation: we can explore what we don’t know without the risk of looking dumb.
If we picture a human tutor as we build a virtual one, we build in the baggage a real person brings: the self they need to assert, the pride they need to defend. But a bot doesn’t need to get anything from its interactions with its pupils. It can let them struggle through the mess of correcting their misunderstandings, without fearing that those mistakes will reflect poorly on it. Unlike a human, an AI tutor can focus entirely on the student and their needs. Because it doesn’t need self-affirmation, it doesn’t need to be the one who makes its student learn. It can attend exclusively to the work the student needs to get done in order for learning to happen.
To unpack this learning work, we might start with the list Dan Meyer makes when describing the vast range of student behaviors that prompt human teachers to intervene:
“A student is working faster than expected. Another student is working slower. Another student isn’t working at all. A different student has an error, but stands at the precipice of a revelation. Another has every correct answer and would benefit from a question to deepen their thinking.”
What if we considered only the student involved in these moments? Treated the bot as a system designed to serve certain functions, to provide exactly what each student needs? Leaned into the non-human things that only a bot can do: bringing to every interaction a perfect grasp of probable conceptual misunderstandings; a continually updated list of which moves best match which signs of student thinking; a laser-like attention to the signals of when to push and when to back off; a tireless, “selfless” responsiveness, like a perfect improv partner whose only concern is giving back.
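To make that system framing concrete, here’s a minimal sketch in Python of a tutor as nothing more than a mapping from signs of student thinking to candidate moves. Everything in it (the StudentSignal and TutorMove names, the POLICY table, the choose_move function) is hypothetical, invented for illustration; it’s a sketch of the shape of the idea, not an implementation of any real system.

```python
from enum import Enum, auto

# Hypothetical signals, echoing Dan Meyer's list: pace, inactivity,
# a productive error, a string of correct answers.
class StudentSignal(Enum):
    WORKING_FAST = auto()
    WORKING_SLOW = auto()
    NOT_WORKING = auto()
    PRODUCTIVE_ERROR = auto()  # an error at the "precipice of a revelation"
    ALL_CORRECT = auto()

# Hypothetical moves a tutoring system might make in response.
class TutorMove(Enum):
    EXTEND = auto()      # offer a harder question to deepen thinking
    SCAFFOLD = auto()    # break the task into smaller steps
    RE_ENGAGE = auto()   # restart with a fresh, lower-stakes prompt
    HOLD_BACK = auto()   # say nothing; let the productive struggle continue
    ASK_PROBE = auto()   # ask a question rather than explain

# The "continually updated list of which moves best match which signs
# of student thinking" is, structurally, just a table. There is no self
# in it to assert and no pride to defend.
POLICY: dict[StudentSignal, TutorMove] = {
    StudentSignal.WORKING_FAST: TutorMove.EXTEND,
    StudentSignal.WORKING_SLOW: TutorMove.SCAFFOLD,
    StudentSignal.NOT_WORKING: TutorMove.RE_ENGAGE,
    StudentSignal.PRODUCTIVE_ERROR: TutorMove.HOLD_BACK,
    StudentSignal.ALL_CORRECT: TutorMove.ASK_PROBE,
}

def choose_move(signal: StudentSignal) -> TutorMove:
    """Pick a move purely from the student's signal.

    Patience and judgment never enter: there is nothing here for
    those words to attach to.
    """
    return POLICY[signal]

if __name__ == "__main__":
    # A student sits with a promising mistake; the system backs off.
    print(choose_move(StudentSignal.PRODUCTIVE_ERROR))  # TutorMove.HOLD_BACK
```

The particular table doesn’t matter; what matters is the shape. A real system would keep revising that mapping as evidence accumulates, but at no point would it need anything back from the student.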
We no longer live in the 1970s, and the “better, stronger, faster” we build doesn’t have to be a bionic tutor created in our own image. There’s no broken body we need to repair — and can, at best, make better. Before us is the chance to create the exact effect we’re after: the just-right balance of asking and explaining, of kind and clinical, of crediting us for trying and pushing us to see how far we are from getting it right. The freedom — and the responsibility — to design a learning system from scratch.