The Unfulfilled Promise of an AI that can take my Job

30 September 2025


With a background in Computer Science, I was able to enter the job market as a software engineer early on: I started working as a programmer at a medium-sized Dutch software company after my first year as a Bachelor's student. At that time, AI and generative AI would not have an impact on our line of work for another year and a half, when ChatGPT first launched.

When working on a large enterprise system for industries with unique and complex processes, the complexity of the software architecture and class structure grows rapidly. Where a coding exercise in class might have entailed creating a few classes, implementing a few constructors, and running a specified set of methods, all within a predefined programming language, coding at a software company involves countless additional steps. Even the simplest bug fixes and feature developments require deep knowledge of how a niche subset of the source code functions, a strong ability to read and understand the complex calculations and algorithms run by the backend, and intricate knowledge of not just multiple programming languages (Java, JavaScript, TypeScript…), but also the frameworks and runtimes built on them, such as React, Node.js, or Ember.

It was no surprise, therefore, that all of us were quite intrigued by the potential of AI-assisted coding right from the release of ChatGPT. Coding plugins and extensions built into our Integrated Development Environments (IDEs) had already been widely used within the company for many years, and they helped us focus on the underlying logic errors that needed to be solved. With the new generative AI, however, the premise was that the assistant could support or even take over this work as well. After much experimentation and the rollout of an enterprise version of Google Gemini, we quickly reached the limits of AI's coding capabilities in today's world. Despite all the drama in the news and a public perception of AI as the coding-killer, we found that although Gemini could analyze and correct a couple of individual lines of code, it was not yet able to navigate or hold a large codebase in context in order to understand the problem or feature at hand.

Even companies like Microsoft, Google, and Meta, some of the only organizations on earth that can afford to train their own generative AI models on their own code, are unable to rely on their AI to fix small bugs autonomously. Too much risk is involved: incorrect design choices, edge-case bugs, and, most importantly, the verification process. That process is critical and still requires testing by real humans who are skilled and competent enough to assess the end result against the chosen requirements.

For us and the rest of the development world, AI coding assistants will stay limited to working on “chunks” of code: deliberately chosen fragments, selected by the developer, that help the AI assess a coding task. That is still a great improvement: it can yield the automatic generation of “boilerplate code” (repetitive code that recurs throughout a project), a “stub implementation” to build on, or a list of suggested corrections when a developer gets stuck, as the sketch below illustrates. Still, generative AI does not come close to being a true job killer, and even if it did, it would take additional years before its full capabilities became available to the large majority of software companies on earth.
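To make the “stub implementation” idea concrete, here is a minimal and entirely hypothetical Java sketch: given only a small interface as the developer-selected chunk of context, an assistant can reliably produce a compiling skeleton, while the actual business rules still have to be supplied and verified by a developer who knows the domain. The interface and every name in it are invented for illustration.

    // The developer-selected "chunk" of context handed to the assistant.
    // Hypothetical example; all names are invented for illustration.
    interface DiscountCalculator {
        /** Returns the discounted price for the given customer tier. */
        double apply(double basePrice, String customerTier);
    }

    // The kind of stub an assistant can generate from the chunk above:
    // it compiles and satisfies the contract, but the tier rates are
    // placeholders that a human still has to fill in and verify.
    class TieredDiscountCalculator implements DiscountCalculator {
        @Override
        public double apply(double basePrice, String customerTier) {
            switch (customerTier) {
                case "GOLD":   return basePrice * 0.80; // placeholder rate
                case "SILVER": return basePrice * 0.90; // placeholder rate
                default:       return basePrice;        // no discount
            }
        }
    }

The value is in skipping the typing, not the thinking: the assistant fills in the shape of the solution, and the developer remains responsible for the rules inside it.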


2 thoughts on “The Unfulfilled Promise of an AI that can take my Job”

  1. Great take on the benefits GenAI tools offer software developers. I agree with most of the points you mention, as I also believe that GenAI tools have their limits, especially in large-scale software development processes.
    Additionally, I would like to reflect further on the points you partly touched upon already. I believe it is safe to say that GenAI tools lead to more code being generated, but they ultimately reduce the quality of that code. From my own experience as a software developer, I have seen these tools struggle to understand project scope and project dependencies, which leads to code that sometimes contains more than one error.
    That makes me think about integration possibilities where you create one specific LLM that is trained on past software project data and has a live connection to the whole software project (a rough sketch of this idea follows after this thread). If GenAI tools can accurately capture and understand the current state of the whole project, they might be able to create reliable, high-quality software components.

    1. Even with the project background knowledge, I think GenAI will still struggle to understand the expected human result of new feature additions, which can vary so widely that it is unlikely to find exact matches in previous software projects. Testing, even via extensive prompts, is difficult for the LLM.

      I don't doubt that it will come up with better suggestions over time and slowly improve in code quality. Still, future software engineers will need to tackle the challenge of understanding project expectations and requirements, and of pinpointing which part of the code fails. Perhaps, however, they will no longer need to write the code themselves.
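The “live connection to the whole software project” idea raised in this thread is essentially retrieval-augmented prompting. As a rough sketch under that assumption, with every type and name below invented for illustration (the Embedder is a stand-in for whatever embedding model a real system would call), the core loop might look like this: index each file as it changes, then retrieve the most relevant files to prepend to the LLM prompt for a given task.

    import java.util.*;

    // Hypothetical sketch of a "live" project index for an LLM assistant.
    // Nothing here is a real library API.
    class ProjectContextRetriever {

        interface Embedder { float[] embed(String text); }

        private final Embedder embedder;
        private final Map<String, float[]> index = new HashMap<>(); // path -> vector

        ProjectContextRetriever(Embedder embedder) { this.embedder = embedder; }

        // Re-indexing a file on every save is what keeps the connection "live".
        void indexFile(String path, String contents) {
            index.put(path, embedder.embed(contents));
        }

        // The k files most similar to the task description, to be prepended
        // to the LLM prompt as project context.
        List<String> topK(String taskDescription, int k) {
            float[] q = embedder.embed(taskDescription);
            return index.entrySet().stream()
                    .sorted(Comparator.comparingDouble(
                            (Map.Entry<String, float[]> e) -> -cosine(q, e.getValue())))
                    .limit(k)
                    .map(Map.Entry::getKey)
                    .toList();
        }

        private static double cosine(float[] a, float[] b) {
            double dot = 0, na = 0, nb = 0;
            for (int i = 0; i < a.length; i++) {
                dot += a[i] * b[i];
                na += a[i] * a[i];
                nb += b[i] * b[i];
            }
            return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-9);
        }
    }

Even with perfect retrieval, this only narrows what the model sees; as the reply above notes, judging whether a generated change matches what humans actually expect still falls to the developer.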
