You are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?

Can you?

  • I, Robot (2004)

I want to preface this by saying: what follows is probably a take you've heard, or thought about, before. LLMs, image/video diffusion models and document generation all fall under the umbrella of generative AI (GenAI), which is reportedly used by 40% of all Americans weekly (Source). Whether that figure is entirely accurate is another discussion; the point is that many people use GenAI regularly, for all sorts of tasks. So what do I mean by "destroy our ability to think"? I want to zoom in on a specific example. Take a second- or third-year computer science student looking to get good grades. The whole point of education is to learn, not to collect grades, but a system that essentially rewards exam and coursework performance is driven to produce students who are good at exams, rather than students who understand the content. Of course the goal is to understand the content, with the exam being just one of many ways to assess you, and assignments giving you a playground and a challenging task to overcome.

In learning, most real development occurs when you face something you don't yet know how to do. You struggle: you don't quite understand the problem, you don't quite understand how the industry-standard solution works or why, and there's a whole host of tricky sub-problems to untangle. When we feel stretched and kneaded like dough, we're forced to learn. You either give up and never solve that difficult problem, or you raise your understanding of the problem domain and of existing solutions so far that solving the original problem becomes trivial. You fumble towards a solution, bashing your Lego bricks together until it works. It gets refined as you learn more, a more polished solution emerges, and just like that, you've learnt. As with the assignments given to our example student, we raise our knowledge above the level of the problem, such that completing the task becomes almost trivial.

With projects created in the era of generative AI models, a new problem arises that only complicates things: developers can now build things they don't fully understand, students can get a solution from a single prompt, and artists can have whole projects generated with a click. Before LLMs were proficient enough at producing code, the barrier to complex, technical implementations was that you had to have (at minimum) some degree of understanding of how the thing you were building worked. Sure, you can follow a tutorial, but the more niche you get, the more likely it is that you'll have to get your hands dirty and understand the thing yourself, and that's where we really learn.

As a computer scientist, I can sometimes see things from the perspective of "whatever gets the job done", and only occasionally do we as programmers (or any profession, for that matter) need to worry about what goes on beneath the hood, unless it's of actual concern to us. Do you need to know how that crypto library produced the signature on your JWT for an internal tool you're building? Probably not. But if you were building a security-critical application where the details of that library mattered, you'd be best served by reading the docs and understanding what it's actually doing. The "whatever gets the job done" mindset isn't inherently a bad way to approach problems, as we often need to weigh the time cost against the results we're getting; it only really becomes a problem when we apply it to everything. Whilst the "just get it done" view can be good in the workplace, when learning it's entirely the opposite.
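To make the JWT point concrete: in practice you'd call a library in one line and never look inside, but what's hidden under the hood is fairly small. Here's a rough sketch (my own illustration, not any particular library's implementation) of what HS256 signing looks like using only the Python standard library:

```python
import base64
import hashlib
import hmac
import json


def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding (RFC 7515)
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_jwt_hs256(payload: dict, secret: bytes) -> str:
    # The header declares the signing algorithm and token type
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        f"{b64url(json.dumps(header).encode())}."
        f"{b64url(json.dumps(payload).encode())}"
    )
    # HS256 is just HMAC-SHA256 over the "header.payload" string
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"


# Hypothetical claim and secret, for illustration only
token = sign_jwt_hs256({"sub": "user-123"}, b"dev-secret")
print(token)  # header.payload.signature: three base64url segments
```

For the internal tool, calling the library is fine; for the security-critical application, knowing that it's doing roughly this (and, say, that the secret must stay secret and the algorithm shouldn't be attacker-controlled) is exactly the understanding worth investing in.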