Learning in the prehistoric age of AI
A significant issue with learning from AI is how easy it is to gloss over the question of "why". Why is the generated answer a good answer? The generated solution may work, but we have no knowledge of how it compares to other solutions, nor of the process of getting to it.
Asking the LLM to explain why it's proposing any given solution is also a loaded exercise, because the LLM always has the right answer EVEN WHEN IT DOESN'T. Total smoke and mirrors.
Recalling an answer happens in part because we recognise the context of a problem: we've seen the problem before, or at least part of it is familiar to us. Reinforced pathways in our brain fire, and our memory of how to construct the solution is called into working memory.
Staring bewildered at a code editor will not etch any pathways of learning into our mind. In fact, it's more likely to train an association between the feeling of confusion and stress and the action of reaching for the LLM. Like coffee or bus stops for smokers.
Conversely, trying to work out solutions (i.e., the act of "problem solving") does reinforce neural pathways. Reaching for a quick answer does not. We may still learn something, but it's a fickle learning experience, and one that may well fade quickly from memory.
It's not so different from searching Google in place of consulting books, or spell correction in place of leafing through a dictionary. There's no need to learn, because we can get it done faster with a tool. Auto-correct, or searching Google once again for that basic piece of information, may be one thing, but where do you draw the line for learning your career skills as a developer, lawyer, writer, doctor…?
Prophecy Time
This is why, in some number of years, and if AI technologies really do embed themselves in our workflows, I think the workforce will diverge into two broad segments: those who can understand, conceptualise, and materialise solutions to problems or creative ideas without AI tools, and those who cannot.
As for what the culture of the world will look like then, I don't have an equally prophetic claim. Either there will come a time when people who have maintained their mental faculties are in incredibly high demand, or they, the "meat heads", will be left to menial tasks while the "silicon heads" churn through theirs.
Here's another prophecy. The AI bubble could pop. AI capabilities could plateau, leaving us with only mediocre tools that may well remain close to their current state. If so, the "silicon heads" will have to reckon with the hardest lesson of their time: how does one actually make a paper clip? (And then proceed to craft one paper clip.)