History and Disposition

I have spent my career on large, legacy systems, and that informs an awful lot about me—including my views on LLMs.

Assumed audience: People who aren’t already totally bought into a specific view of the goodness of these systems.

Epistemic status: Thinking out loud, soliciting responses.

There are, no doubt, many factors contributing to why individuals like or dislike, gravitate toward or away from, LLM-based AI tools for authoring software. Increasingly, though, I wonder if one of the biggest factors is simply this:

How much of your work has been, and is, about building new things vs. maintaining existing things? (For a very broad definition of “maintaining”: I do not mean stasis.)

Put another way, I strongly suspect that a great deal of my suspicion of the wide deployment of LLM-authored code, and specifically of making that the norm, is that I have spent nearly the entirety of my career working on large, complex existing systems. The ability to generate a lot of new code to deliver a feature has almost never been at a premium. The ability to deeply understand existing code, to make the targeted, narrow, just-so kind of fix or change that resolves a weird bug,¹ to make a significant architectural change and bring along the people who have to work on the system after that change:² those are the things my career has mostly been about.

That leaves me with a bit of a different disposition than many — not all — of the folks I know and respect who are most bullish about building software with LLMs. As I said at the top: there are many factors, so this isn’t a universal by any means. It does seem to recur a fair bit, though!


Notes

  1. Yes, I know that LLMs can help with debugging — there is a reason this post is specifically about authoring.

  2. You can no doubt do some cool things with LLMs in conjunction with classic AST-based tooling to make major changes. But can you teach people how to think about the system that way? No.