Can’t ride the AI bicycle? AI companies blame you.

I recently shared the video below about how AI interfaces for learning might be pedalling back on the idea that computers are “a bicycle for the mind”. It’s the famous metaphor from Steve Jobs that highlights how they help us move faster and go farther.
For learning with a computer, the metaphor is beautiful because of how intuitive bicycles are to people of all ages. Not only do they enhance our abilities without atrophying our muscles, they are actually still a form of exercise! Productive friction is fantastic for learning.
Do we still feel that’s true with the chat interfaces we have for AI?
Some will say AI is like an e-bike for the mind. It’s faster and a bit more comfortable, with a bit less resistance, so we don’t use our minds as much. But maybe that’s worth it because of all the speed and power it gives us.
Yet when I really started to break down that metaphor, I realized that it’s missing a few key pieces. If you ride a bike, I want you to think about every single muscle that’s engaged when you’re turning the handlebars to the right. Maybe someone just jumped onto the street and you were able to swerve away real quick. How did you do that?
Your body intuitively moved. It shifted your weight, turned the handlebars, and kept you from falling off the bicycle. All of this happens at once, without your brain giving any explicit signals you’re aware of, because your body works through the form of the bicycle and the mechanics it provides. A bicycle, like many other real tools, offers something that modern AI chat interfaces simply don’t. With an AI, everything has to be explicit. Every time it comes back to you with something, you have to give it more words. You have to think about the explicit thing that you want next.
There is no room for quick course corrections or happy accidents that arise from pushing something further than you expected. Instead, it’s always just doing exactly what you said you wanted to see. That leaves little room for exploration, for the things that you didn’t know that you didn’t know.
Chat-only interfaces reduce fluidity even further by putting all the accountability on users who may not be aware that they are now juggling multiple roles while chatting. Users have to toggle between being a novice and an expert. They are expected to explicitly tell the AI when to be critical, creative, nurturing, defensive, and so on.
But how can we possibly expect all of these things of one person? Last week I shared a post about the absurdity of expecting students to click through ChatGPT’s little + icon to explicitly demand a study mode. What I didn’t dig into is how we then expect them to know how to be a good student within that mode. Not everyone is clear on their knowledge limits, especially not a novice. Sometimes the best way to nurture someone’s learning is to push back… a lot!
Without an interface or some clear guardrails that make sure that pushback happens, OpenAI can just as easily blame the students for not knowing how to ask for it! After all, the tool is just neutral… right?
I’m honestly not sure how to process all of this. It’s overwhelming to think about the scale of AI use, the varieties, the definitions, and all the other concerns we have. And yet, it can also feel exciting to have a beast like this to wrangle. My current naivety still has me thinking this AI situation is an opportunity to have an impact in education, not with AI but in reaction to it.
Education isn’t going to be revolutionized overnight by anything, and revolution is an approach I don’t favour anymore. Today I want to see what small steps we can start taking, with technology or otherwise, to make learning for young adults inspiring and impactful. Maybe we could uncover beautiful opportunities if we had spaces and interfaces that allowed us to learn and be creative with more than just words and screens.