Coding with Generative AI: From Syntax to Systems Thinking

10 October 2025


When I first started coding econometric models, ChatGPT/Claude.ai felt like a supercharged Stack Overflow – fast answers, clear syntax, instant fixes (Anthropic, 2024). But somewhere between debugging a Stata regression and replicating a structural VAR in MATLAB, I realized it wasn’t just writing code for me; it was shaping how I think about code.

Generative AI changes the learning curve. Instead of memorizing syntax, I try to focus on logic: why a variable needs differencing, how lag order affects impulse responses, what a Cholesky decomposition really implies, and, most importantly, how the code actually achieves the result. In that sense, coding with AI is like pair programming with a very patient mentor – one who will always happily re-explain the Augmented Dickey-Fuller test.
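
To make that concrete, here is a minimal sketch of the kind of concept-driven check I mean. It uses Python with statsmodels rather than Stata or MATLAB, and the series gdp is entirely made up for illustration; the point is that the real decision is whether the series needs differencing, not which command runs the test.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import adfuller

    # Hypothetical random-walk-with-drift series standing in for real data.
    rng = np.random.default_rng(0)
    gdp = pd.Series(100 + np.cumsum(rng.normal(0.5, 1.0, 200)), name="gdp")

    # Augmented Dickey-Fuller test on the levels.
    stat, pvalue, usedlag, nobs, crit, _ = adfuller(gdp, autolag="AIC")
    print(f"ADF on levels: stat={stat:.2f}, p-value={pvalue:.3f}, lags used={usedlag}")

    # If a unit root cannot be rejected, difference once and test again.
    if pvalue > 0.05:
        stat_d, pvalue_d, *_ = adfuller(gdp.diff().dropna(), autolag="AIC")
        print(f"ADF on first differences: stat={stat_d:.2f}, p-value={pvalue_d:.3f}")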

Still, it’s not magic. AI will happily generate a clean script that, at least in theory, runs perfectly – only for you to realize, after some analysis, that the results are meaningless. It tends to invent lag lengths out of thin air or mix up variable names, which often sabotages the validity of the results. The model can simulate competence but not comprehension (Dutta et al., 2022). The burden of understanding still sits comfortably on the human side.
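
One way to keep that burden manageable, sketched below under the same assumptions (Python with statsmodels, and a made-up two-variable DataFrame standing in for real, stationary series), is to check an AI-suggested lag length against standard information criteria instead of taking it on faith.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    # Hypothetical two-variable system with some persistence; in practice, your own stationary data.
    rng = np.random.default_rng(1)
    e = rng.normal(size=(200, 2))
    values = np.zeros((200, 2))
    for t in range(1, 200):
        values[t] = 0.5 * values[t - 1] + e[t]
    data = pd.DataFrame(values, columns=["dgdp", "dinf"])

    model = VAR(data)

    # Compare AIC/BIC/HQIC across lag lengths rather than trusting a suggested one.
    order = model.select_order(maxlags=8)
    print(order.summary())

    # Fit with the BIC-chosen order and look at the implied impulse responses.
    results = model.fit(order.bic)
    irf = results.irf(10)        # impulse responses out to 10 periods
    print(irf.orth_irfs[1])      # Cholesky-orthogonalized responses at horizon 1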

Yet, I think what’s emerging is a shift from syntax-driven to concept-driven coding. With the help of generative AI, the skill isn’t knowing every command – it’s knowing what to ask for, how to verify it, and when to intervene. It’s a move toward systems thinking: treating code as an interface between reasoning and execution.

The next step – and what I lacked in my experience – is continuity. AI tools should remember analytical context: past models, data definitions, even prior assumptions. Of course there are privacy concerns, but until that kind of memory exists, coding with AI will remain powerful yet disconnected. Or maybe I just can’t prompt accurately…

Anyway, maybe the bigger question isn’t whether AI can code (because it can!), but whether it can ever truly understand what it’s coding.

References

Anthropic. (2024, March 4). Introducing the next generation of Claude. https://www.anthropic.com/news/claude-3-family

Dutta, S., Linder, R., Lowe, D., Rosenbalm, R., Kuzminykh, A., & Williams, A. C. (2022). Mobilizing Crowdwork: A Systematic Assessment of the Mobile Usability of HITs. CHI Conference on Human Factors in Computing Systems, 1–20. https://doi.org/10.1145/3491102.3501876


1 thought on “Coding with Generative AI: From Syntax to Systems Thinking”

  1. I like how you’ve described a shift from coding as syntax recall to coding as conceptual reasoning. AI really does make it so that suddenly the hard part isn’t getting the ADF test to run, it’s knowing whether it should be run, with which assumptions, and what the results even mean. But the continuity problem you mention is exactly the bottleneck. Without memory of prior models or definitions, AI ends up acting like a very smart TA with amnesia. I’m also not convinced deeper context retention would automatically deepen understanding. It might just help us produce cleaner code while masking shallow reasoning. So yeah, AI can code, but whether it can ever really understand econometrics is still an open question.

