GenAI in coding: giving everyone a rocket, but risking a crash landing

8 October 2025

I’ve never been tech savvy or someone with a strong coding background, but since I was a child I have loved the tech world: I wondered how machines work and speak to each other. To me it seemed like magic! Life went on and I never learned in depth how to code well or, more generally, how to leverage the magical capabilities of computers. A few months ago things changed. The first time I tried a genAI tool for coding was with GPT: I was wondering how to turn speech into text with free software, but I couldn’t find anything, and GPT suggested I use “whisper”, an OpenAI model that leverages AI to understand audio. It was not ready to use, though, so at first glance it seemed too complicated: you needed to know how to download it, set it up, talk to it and give it parameters to work with, all in a language I don’t even speak (Python). So I asked GPT to do it for me and… BOOM, it felt like someone had given me the keys to do whatever is possible, the keys to that magical world I had always admired from the outside: I could write a simple request in natural language and watch the machine translate it into code that was practically ready to use. Since then I have experimented a lot: functions, API connections and even fully working web applications built from scratch came to life in front of me, and I was the one triggering them! For the first time I felt capable of mastering something I had always considered rocket science.
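To give a flavour of what that first script looked like, here is a minimal sketch of the kind of code GPT handed me, assuming the open-source openai-whisper package and ffmpeg are installed; the model size and file name are just placeholders:

```python
# Minimal speech-to-text sketch using the open-source "openai-whisper" package.
# Assumes: pip install openai-whisper, and ffmpeg available on the system.
import whisper

# Load a pretrained checkpoint; "base" is small and reasonably fast on a laptop.
model = whisper.load_model("base")

# Transcribe a local audio file (placeholder path) and print the recognised text.
result = model.transcribe("recording.mp3")
print(result["text"])
```

A few lines like these were all it took, but notice how many assumptions are hidden in them: which model to load, what the input format is, what the output dictionary contains.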

Soon I realized that this power comes with a paradox. On one side, it makes an extremely complicated and sophisticated world accessible to people who do not have the ability or the time to learn it from the fundamentals; it lowers the barriers to entry and lets anyone build and test their own ideas. On the other hand, I realized something deeper: many people and companies think that with these tools they can replace developers, that coding is a job with an expiry date; they believe a non-technical person, armed only with prompts, can do the same job as someone with years of coding experience. To be honest, at first I thought the same, but the more I experimented, the clearer it became that this illusion creates huge problems.

When everything works correctly it seems like magic, but the moment the program runs into a problem (and believe me, there are always many of them) the sad reality emerges: without understanding the fundamentals of how that code works (what inputs it takes, what it does with them, and what its logic is) you are lost. The code becomes something monolithic and inaccessible, a black box you can neither fix nor interact with; you just stare at it, hoping an AI will solve the issue.

This experience made me understand that even if genAI tools can lower the barriers of code and empower everybody to put their creativity into practice, they cannot replace the depth of knowledge required to actually OWN the code. It’s like being handed a shiny rocket with a full tank and a strong autopilot: as long as the system runs, everything is perfect, but the moment a warning light flashes, if you cannot go under the hood, you risk a crash landing.

To conclude, I learned this lesson in coding, but I think it neatly summarises the guardrail we have to remember when using genAI: you can externalise the execution to it, not the knowledge!


AI with AR: from knowledge transactions to cognitive symbiosis

19 September 2025

If we think about the last decades of innovation in computing, the stunning progress relates to faster processors, bigger datasets, stronger algorithms and better hardware and software in general; yet one thing has remained incredibly static during this whole period: the way we interact with those powerful machines. From punch cards to keyboards, from mice to touchscreens, even though the technology changed, the paradigm stayed the same: humans input a command, machines output a response, and the dialogue stops there. The boundary between thought and action remains rigid, lengthening the interaction time and reducing the efficiency of the exchange. In my opinion, the convergence of AI and AR has the potential to disrupt this model completely.
AR alone provides a new way to interact: it overlays digital information on the physical world, turning empty space into an interface. Yet AR alone risks being just another screen, only closer to our eyes. At the same time, AI alone gives machines the ability to learn and predict, but its power remains confined to the machine, and extracting it creates friction.

It is in their convergence that a revolution can happen.

Imagine a device where AI and AR work together: information is not just displayed, it is understood and adapted to the user in real time. With AR, AI can escape from the bare metal and enter our living environment, transforming a transactional dialogue limited in speed into an immersive cognitive system, where the machine does not just wait for input: it perceives, it processes and it acts together with us, no longer waiting for our commands. Imagine having your agenda, your acquaintances, the Wikipedia entry you need or the best thing to say, all in the blink of an eye. This is a new challenge in which the aim is to make interaction instinctive, shaped by gestures, gaze and perhaps even intentions. For the first time, technology has the capability to align itself with the natural rhythm of human cognition. In this new battlefield, the focus will be more and more on portability and smoothness of interaction: glasses, lenses or other interfaces should be as light, comfortable and easy to use as possible.
In my view, this is more than just a technical revolution; it is a philosophical one. Machines no longer aim to be separate assistants that help on demand, but to become extensions of perception itself. Together, AI and AR do not just create a new device, but a new dimension of experience.
This awareness could change how we interpret users’ needs and competitive dynamics, and could open a whole new era in which humans and machines work together towards a common objective.

References:

  • Chenna, R. (2023). Augmented Reality and AI: Enhancing Human-Computer Interaction. SSRN Electronic Journal.
  • Li, M., Wang, Z., Zhang, Y., & Liu, H. (2025). Augmenting Human Cognition through Everyday AR. arXiv preprint.
  • GeeksforGeeks (2023). AR and AI: The Role of AI in Augmented Reality. Available at: https://www.geeksforgeeks.org/artificial-intelligence/ar-and-ai-the-role-of-ai-in-augmented-reality/
