The Box We Think In: How educational technology might escape its bad history

Edtech started with pigeons pecking keys, and we’ve spent the past hundred years stuck in the model of learning that Thorndike and Skinner built.

The problem they were solving was—and still is—a real one: you’ve got many students, just one teacher, and a lot of stuff that needs to get learned. If you confront all the students in a class with the same content at the same time, most won’t get what they need. But what if you could build a system that treats students one at a time, letting each move at their own pace and responding with immediate feedback?

Skinner had seen how well it worked with pigeons: peck the right key and you get your reward; with each round, the connection between the correct behavior and the tasty food grooves a little deeper. But the technology available to him meant coming up with all the answers in advance—and made learning the same as getting better at choosing the correct option from the not-quite. All the options had been arranged ahead of time, and choosing was the only thing left for the pigeon to do. Edtech, from its inception, was built on the model of learning-by-multiple-choice. And the only kind of landscape the student could travel came with all the roads already mapped out.

In the hundred years since, educational technology has struggled to move beyond the mindset framed by these fundamental constraints. Past the assumption that education looks like students facing a set of prefab options, and that learning proceeds by choosing among them.

And the place where learning happened, we got used to believing, was in behavior. In the key the pigeon pecked, or the choice the student made. The bubble they filled in—or tapped or swiped—was where educators could operate. That was your chance to get learning to happen, the site where intervention was possible.

But that’s not true. Real learning doesn’t occur in your fingertips, and choosing correctly among A, B, and C isn’t how you demonstrate it. Learning is about changing your mind. The way you think.

Which is why teaching is so hard. You can’t see—much less access—the thing you’re looking to change. The really important part happens inside your student’s head. Behind the wall of their skull. And there’s no good way to get at it. Because as hard as we try to “make thinking visible,” we can’t; much as we may ask our students to “think out loud,” they can’t. The act of observing changes the thing we’re trying to see. At the deepest level, teacher and student are two gears spinning independently.

But the LLM moment we’re in the middle of might make room for a radically different idea of what a teaching machine could do. An instructional agent that can follow a student’s thinking, wherever it leads. One that enables a kind of “off-road learning,” where a student can move as naturally through an in-school problem as they do when they’re thinking-in-real-life. The learning experience opens by putting a provocation in front of them—something worth thinking about—and then coaches them through the process of thinking it through: figuring out what they see, unpacking what it gets them thinking, building on the solutions they develop. A model of learning that looks more like Socrates and less like Skinner: a conversation that unfolds naturally, in the middle of the market, while people all around you are getting their shopping done.
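To make the shape of that agent concrete, here’s a minimal sketch in Python, assuming the OpenAI client library. The model name, the system prompt, and the provocation are illustrative placeholders, not a design anyone has shipped: just the loop of provocation, student thinking, and coaching described above.

```python
# A minimal sketch of a Socratic instructional agent: present a provocation,
# then coach the student's own thinking rather than grade a chosen answer.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable. Prompts and model name are
# placeholders for illustration.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a Socratic coach. Never supply the answer or offer options to "
    "pick from. Ask one question at a time that helps the student notice, "
    "unpack, and build on their own thinking about the provocation."
)

PROVOCATION = (
    "A bathtub seems to drain faster as it empties. Does it? What would "
    "you watch to find out?"
)

def run_session() -> None:
    # The conversation history is the whole "state" of the lesson: no
    # prearranged roads, just the student's thinking and the coach's replies.
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "assistant", "content": PROVOCATION},
    ]
    print(f"Coach: {PROVOCATION}")
    while True:
        student = input("Student (blank line to stop): ").strip()
        if not student:
            break
        messages.append({"role": "user", "content": student})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=messages,
        )
        coach = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": coach})
        print(f"Coach: {coach}")

if __name__ == "__main__":
    run_session()
```

The point of the sketch is what’s missing: there’s no answer key and no list of options, only a record of the student’s own moves and a coach constrained to respond to them.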

There’s still no technology that can engage directly with what people are thinking—in the way an fMRI can map the brain’s activity in real time—but tools like ChatGPT can get us much closer to a learner’s actual thinking processes. Our interactions with these bots—when we’re talking to bot-therapists, for example, or bot-discharge-nurses—suggest that we’re willing to share something more like our “real thinking.” The questions we need to ask. Maybe because the bot doesn’t stir our fear of being judged; our self-consciousness; our shame. Maybe that’s what it takes to change those deep assumptions that constrain our sense of what learning looks like and how educational technology might make it happen.