I’ve never been a tech-savvy person or someone with strong coding knowledge, but since I was a child I have loved the tech world: I wondered how machines work and talk to each other. To me it seemed like magic! Life went on, and I never learned deeply how to code or, more generally, how to harness the magic capabilities of computers. Some months ago things changed. The first time I tried genAI tools for coding was with GPT: I wanted to transform speech into text with free software, but I could not find anything, and GPT suggested I use “Whisper”, an OpenAI model that leverages AI to understand audio. It was not ready to use, though, and at first glance it seemed too complicated: you needed to know how to download it, set it up, talk to it and give it the right parameters, all in a language I don’t even speak (Python). So I asked GPT to do it for me and… BOOM, it felt like someone had handed me the keys to do whatever is possible, the keys to that magic world I had always admired from outside: I could write a simple request in natural language and watch the machine translate it into code that was practically ready to use. Since then I have experimented a lot: functions, API connections and even fully working web applications built from scratch came to life in front of me, and I was the one triggering them! For the first time I felt capable of mastering something I had always considered rocket science.
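To give you an idea, the code GPT handed me looked roughly like this (a minimal sketch, assuming the open-source `openai-whisper` package is installed via `pip install openai-whisper`; the audio file name and the `transcribe` helper are just illustrative names, not part of my original conversation):

```python
def transcribe(audio_path: str, model_size: str = "base") -> str:
    """Transcribe an audio file to text using OpenAI's Whisper model."""
    # Imported lazily so the sketch can be read/loaded even
    # without the package installed.
    import whisper

    model = whisper.load_model(model_size)  # downloads the weights on first run
    result = model.transcribe(audio_path)   # runs speech-to-text on the file
    return result["text"]

# Usage (hypothetical file name):
# print(transcribe("interview.mp3"))
```

A few lines like these were all it took: no setup knowledge on my part, just a plain-language request and a file to feed in.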
Soon I realized that this power comes with a paradox. On one side, it makes an extremely complicated and sophisticated world accessible to people who do not have the ability or the time to learn it from the fundamentals: it lowers the barriers to entry and lets anyone build and test their own ideas. On the other hand, I realized something deeper: many people and companies think that with these tools they can replace developers, that coding is a job with an expiry date; they believe a non-technical person, armed only with prompts, can do the same job as someone with years of coding background. To be honest, at first I thought the same, but the more I experimented, the clearer it became that this illusion creates huge problems.
When everything works correctly it seems like magic, but the moment the program runs into a problem (and believe me, there are always many) the sad reality emerges: without understanding the fundamentals of how that code works (what inputs it takes, what it does with them, and what its logic is), you are lost. The code becomes something monolithic and inaccessible, a black box you can neither fix nor interact with; you just stare at it, hoping an AI will solve the issue.
This experience made me understand that even if genAI tools can lower the barriers of code and empower everybody to bring their creativity to life, they cannot replace the depth of knowledge required to actually OWN the code. It’s like being handed a shiny rocket with a full tank and a strong autopilot: as long as the system runs, everything is perfect, but the moment a warning light flashes, if you cannot look under the hood, you risk a crash landing.
To conclude: I found this lesson in coding, but I think it neatly summarizes the guardrail we have to remember when using genAI: you can externalize the execution to it, not the knowledge!