Coding with Generative AI: From Syntax to Systems Thinking

10 October 2025


When I first started coding econometric models, ChatGPT and Claude.ai felt like a supercharged Stack Overflow – fast answers, clear syntax, instant fixes (Anthropic, 2024). But somewhere between debugging a Stata regression and replicating a structural VAR in MATLAB, I realized the AI wasn’t just writing code for me; it was shaping how I think about code.

Generative AI changes the learning curve. Instead of memorizing syntax, I try to focus on logic: why a variable needs differencing, how lag order affects impulse responses, what a Cholesky decomposition really implies, and, most importantly, how the code actually achieves the result. In that sense, coding with AI is like pair programming with a very patient mentor – one who will always happily re-explain the Augmented Dickey-Fuller test.
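
To make that concrete: the first question I now ask is whether a series needs differencing at all, not which command produces the test. Here is a minimal sketch of that concept-check in Python with statsmodels (my own work was in Stata and MATLAB, so this translation and the simulated series are purely illustrative):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# Simulate a random walk: the textbook unit-root process that needs differencing
rng = np.random.default_rng(42)
y = np.cumsum(rng.normal(size=500))

# Augmented Dickey-Fuller test: H0 = the series contains a unit root
stat, pvalue, usedlag, nobs, crit, icbest = adfuller(y, autolag="AIC")
print(f"ADF on levels:           stat={stat:.3f}, p={pvalue:.3f}")

# Failing to reject H0 motivates differencing; the first difference should reject
stat_d, pvalue_d, *_ = adfuller(np.diff(y), autolag="AIC")
print(f"ADF on first difference: stat={stat_d:.3f}, p={pvalue_d:.3f}")
```

The AI can generate this in seconds; deciding what the two p-values imply for the specification is still my job.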

Still, it’s not magic. AI will happily generate a clean script that runs perfectly – in theory, at least – yet after some analysis you realize the results are meaningless. It tends to invent lag lengths out of thin air or mix up variable names, which often sabotages the validity of the results. The model can simulate competence, but not comprehension (Dutta et al., 2022). The burden of understanding still sits comfortably on the human side.
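
This is where verification becomes a skill of its own. Continuing the illustrative Python setup (the two-variable system, its coefficients, and the column names are invented for the example), this is how I would check an AI-suggested lag length against information criteria instead of taking it on faith, and make the Cholesky ordering assumption explicit:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Simulate a small system with a known lag order of 2 (purely illustrative)
rng = np.random.default_rng(0)
T = 300
y = np.zeros((T, 2))
for t in range(2, T):
    y[t, 0] = 0.5 * y[t - 1, 0] + 0.2 * y[t - 1, 1] + rng.normal()
    y[t, 1] = 0.3 * y[t - 2, 0] + 0.4 * y[t - 1, 1] + rng.normal()
data = pd.DataFrame(y, columns=["gdp", "inflation"])

model = VAR(data)

# Let AIC/BIC/HQIC vote on the lag order rather than trusting an invented one
selection = model.select_order(maxlags=8)
print(selection.summary())

results = model.fit(selection.aic)  # refit with the AIC-selected order
print("Chosen lag order:", results.k_ar)

# Orthogonalized impulse responses rest on a Cholesky decomposition of the
# residual covariance, so the column ordering above is an identifying
# assumption, not a neutral default
irf = results.irf(10)
irf.plot(orth=True)
plt.show()
```

None of this is exotic, but it is exactly the step an AI-generated script tends to skip – and the step where comprehension actually lives.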

Yet, I think what’s emerging is a shift from syntax-driven to concept-driven coding. With the help of generative AI, the skill isn’t knowing every command – it’s knowing what to ask for, how to verify it, and when to intervene. It’s a move toward systems thinking: treating code as an interface between reasoning and execution.

The next step – and the thing I found lacking in my own experience – is continuity. AI tools should remember analytical context: past models, data definitions, even prior assumptions. Of course there are privacy concerns to work through, but until then, coding with AI will remain powerful yet disconnected. Or maybe I just can’t prompt accurately…

Anyway, maybe the bigger question isn’t whether AI can code (because it can!) – but whether it can ever truly understand what it’s coding.

References

Anthropic. (2024, March 4). Introducing the next generation of Claude. https://www.anthropic.com/news/claude-3-family

Dutta, S., Linder, R., Lowe, D., Rosenbalm, R., Kuzminykh, A., & Williams, A. C. (2022). Mobilizing Crowdwork: A Systematic Assessment of the Mobile Usability of HITs. CHI Conference on Human Factors in Computing Systems, 1–20. https://doi.org/10.1145/3491102.3501876


The AI Startup Prompt Middleman Problem

19 September 2025


When Lithuanian startup Sintra.ai announced a $17M seed round earlier this year (Lawrence, 2025), the pitch sounded familiar: “AI helpers” for small businesses – one for customer support, one for growth, one for analytics. Slick demos, bundled workflows, and a subscription model. But under the hood? They all run on OpenAI’s models.

This isn’t unique to Sintra. Dozens of “AI agent” startups are popping up with verticalized offerings, but most are essentially wrapping prompts, UX, and integrations around the same core LLM. It’s a clever way to package generic intelligence into workflows that SMBs actually understand and pay for. In a way, this is bundling: not trying to outsmart ChatGPT, but packaging it like Microsoft once did with Office.

The problem is sustainability. History shows what happens when platforms see their complementors thriving. Apple copied successful iPhone apps. Amazon launched its own versions of top-selling marketplace products. Jasper, once the darling of AI marketing copy (Thakur, 2025), was quickly sidelined once ChatGPT added similar tools. What stops OpenAI from doing the same with “SMB helpers” inside ChatGPT? The platform owner always has the guillotine.

That doesn’t mean Sintra and its peers are doomed. Their defensibility might come not from prompts, but from data flywheels – proprietary customer data, feedback loops, and integrations that OpenAI can’t easily replicate. Perplexity AI is thriving not because its model is better, but because it owns the search experience and retrieval layer.

In my view, most LLM-based AI agent startups are temporary wrappers. The real moat will come from owning unique data or creating a sticky ecosystem around workflows. Until then, they’re renting intelligence from OpenAI – valuable for now, but precarious in the long run.

Do you think AI agent startups can build a lasting moat, or will they all eventually be swallowed by the platforms they depend on?


References

Lawrence, C. (2025, June 10). Lithuanian AI startup Sintra secures $17M Seed: empowering SMEs with AI helpers. Tech.eu. https://tech.eu/2025/06/10/lithuanian-ai-startup-sintra-secures-17m-seed-empowering-smbs-with-ai-helpers/

Thakur, T. (2025, August 15). Jasper AI Statistics 2025: Growth & Usage Revealed. SQ Magazine. https://sqmagazine.co.uk/jasper-ai-statistics/
