AI coding tools are making this problem worse in a subtle way. When an agent can generate a "scalable event-driven architecture" in 5 minutes, the build cost of complexity drops to near zero. But the maintenance cost doesn't.
So now you get Engineer B's output even faster, with even more impressive-sounding abstractions, and the promotion packet writes itself in minutes too. Meanwhile the actual cost - debugging, onboarding, incident response at 3am - stays exactly the same or gets worse, because now nobody fully understands what was generated.
The real test for simplicity has always been: can the next person who touches this code understand it without asking you? AI-generated complexity fails that test spectacularly.
To be fair, a lot of the on-call people being pulled in at 3am before LLMs existed didn't understand the systems they were supporting very well either. This will definitely make it worse, though.
I think part of charting a safe career path now involves evaluating how strong any given org's culture of understanding the code and stack is. I definitely do not ever want to be in a position again where no one in the whole place knows how something works while the higher-ups are having a meltdown because something critical broke.
People on call will use AI as well. As long as the first AI left enough documentation and implemented traceability, the diagnosing AI should have an easier time proposing a fix. Ideally, AI would prepare the PR or rollback plan. In a utopia, AI would execute it and recover the system until a human wakes up.
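To make "traceability" concrete, here's a minimal sketch of what it could look like, assuming a conventional Python logging setup (the trace-ID scheme and the handle_request/process names are hypothetical, invented for illustration):

```python
import logging
import uuid

# Assumption: every request carries a trace ID that appears in every log line,
# so a diagnosing AI (or a human at 3am) can reconstruct the failing path.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s trace=%(trace_id)s %(message)s")
log = logging.getLogger("orders")

def process(payload):
    pass  # hypothetical business logic stub

def handle_request(payload):
    extra = {"trace_id": str(uuid.uuid4())}
    log.info("received request", extra=extra)
    try:
        process(payload)
        log.info("request succeeded", extra=extra)
    except Exception:
        # The shared trace ID is what would let an automated diagnoser
        # correlate this failure with upstream and downstream events
        # before proposing a fix or a rollback plan.
        log.exception("request failed", extra=extra)
        raise

handle_request({"id": 1})  # demo call
```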
Or at least there is something to chat with about the issue at 3am.
This. I've been a sysadmin for a quarter of a century and have professionally written next to no software. I've debugged every system I've had to support at some point though. It's a very different skill set.
True, but I think the implication (as I read it) is that AI may be providing more complex solutions than were needed for the problem and perhaps more complex than a human engineer would have provided.
it's MUCH worse now, not just because of the massive amount of code generated with zero or very little supervision, but also because of the speed at which these systems grow in function
I think we'll see a decline of software as a product for this reason. If your job is to solve a problem, and you use AI to generate a tool that solves that problem, or you use money to buy a tool that solves that problem, well then it's still your job to solve that problem regardless of which tool you use.
But given how poorly bought software tends to fit the use case of the person it was bought for... eventually generate-something-custom will start making more and more sense.
If you end up generating something that nobody understands, then when you quit and get a new job, somebody else will probably use your project as context for generating something that suits the way they want to solve that problem. Time will have passed, so the needs will have changed, and they'll end up with something different. They'll also only partially understand it, but the gaps will be in different places this time around. Overall I think it'll be an improvement, because there will be less distance (both in time and along the social graph) between the software's user and its creator - most of the time they'll be the same person.
I wrote something similar in a Claude Code instructions.md: "minimize cyclomatic complexity". What happened next? It generated an 8-line wrapper function called only once from a different file. So, I told it to inline that logic in the caller. The result? One. Line. Of. Code.
So, I asked it to modify its instructions.md file to not repeat that mistake. The result was the new line "Avoid single-use wrapper functions; inline logic at the call site unless reused".
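For anyone who hasn't hit this failure mode, a minimal sketch of what it tends to look like (names and logic are invented for illustration; the original code isn't shown in the comment):

```python
# What the agent generates: an 8-line wrapper, called exactly once elsewhere.
def is_discount_eligible(order):
    """Return True if the order qualifies for a discount."""
    if order is None:
        return False
    total = order.total
    threshold = 100
    eligible = total >= threshold
    return eligible

# The sole call site, in another file:
#     if is_discount_eligible(order):
#         apply_discount(order)

# What it collapses to after "inline that logic in the caller":
#     if order is not None and order.total >= 100:
#         apply_discount(order)
```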
It reminds me a lot of people who take Code Complete too seriously. "Common sense" is not an objective or universal standard, unfortunately - plus, speaking for myself, what I consider "common sense" can change on the daily, which is why I can't be trusted adding features to my own codebase long term <_<.
Not sure if you're kidding or not, but to write great maintainable code, you need a lot of understanding that an LLM just doesn't have: history, business context, company culture, etc. Also, I doubt its training data has a lot of good examples of great maintainable code to pull from.
Awesome! However, corporate is excited about using AI, making the coder the one at risk of getting fired for writing the exact same lousy (for the sake of argument) code.
Or worse: for not relying as much as possible on the AI, which apparently can write code just as bad, but faster!
A subtle detail: you speak of coders, not software engineers. A SWE's value is not his code-churning speed.
Admitting you've spent two decades on a career stuck working in the kind of sweatshops that hire people who can't actually code isn't much of a flex, and certainly doesn't lend a whole lot of credence to your argument.
This says more about you and the people you work with. I find engineers who have been at the company for a while are quite invaluable when it comes to this information; it's not just knowing the how but the when + why that's critical as well.
Acting like people can't be good at their job is frankly dehumanizing, and says a lot about how you view your fellow devs.
Sometimes, as in bilsbi's top-level comment, the solution is to use a free tool/library/product that already exists. The solution is not always to write new code, but the agent will happily do it.
Maybe that's "the manager's job", but that's just passing the buck and getting a worse solution. Every level of management should be looking for the best solution.
I think this is, at the moment, the practical limitation to using AI for everything (and it's what the coding agents themselves optimize for to some degree; it's also the slider they can play with for price vs. quality, the "thinking" models being the exact same models, just burning more tokens).
Am waiting for the next Mac Studio to come out to experiment with the "AI for everything" approach. Most likely, the open-source distilled models will be lower quality. So, another "price vs. quality" tradeoff. Still, it will be fun to code like I'm at a foundation lab.
This seems like a perfect use case for a local model. But I've found in practice that the system requirements for agents are much higher than for models that can handle simple refactoring tasks. Once tool-use context is factored in, there is very little room for models that perform decently.
Whatever agent I tried would include thousands of tokens of tool-use instructions. That would use up most of the available context unless running very low-spec models. I've concluded it's best to use the big 3 for most tasks and Qwen on RunPod for more private data.
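A back-of-the-envelope illustration of how that budget disappears (the numbers are made up for illustration, not measurements from any specific agent):

```python
# Illustrative token budget for a small local model driving an agent.
context_window = 8_192      # what a modest local model might hold
tool_instructions = 3_000   # system prompt + tool schemas the agent injects
source_files = 4_000        # code pulled in for even a simple refactor

remaining = context_window - tool_instructions - source_files
print(f"tokens left for reasoning and the reply: {remaining}")  # 1192
```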
This is something I keep thinking about while coding with AI, and the same goes for introducing library dependencies for the simplest problems. It's not about how quickly I can get there, but about how I can keep it simple to maintain, not only for myself but for the next AI agent.
Biggest problem is that the next person is me 6 months later :) But even when it's not a next-person problem, it's a question of how much of the design I can keep in my mind at a given time. Ironically, AI has the exact same problem, aka the context window.
There's also the operational cost of running whatever is churned out. I wouldn't exactly blame that on AIs, but a large contingent of developers optimize for popular tech stacks and not ease of operations. I don't think that will change just because they start using AI. In my experience the AI won't tell you that you're massively overbuilding something, or that if we did this in C and used PostgreSQL we'd be able to run this on an old Pentium III with 4GB of RAM. If you want Kubernetes and Elasticsearch, you'll get exactly that.
It's not even about perfectionism. Code's value is in processing data. Bad code does it wrong, and if you have strange code on top of that, you cannot correct course. Happy paths are usually the low-hanging fruit. What makes developing software hard is thinking about all the failure and edge cases.
AI generators only generate that if you tell them to, though - as a developer (especially a senior one), it's your job to know what you want and tell the AI coding tools exactly that.
There are different kinds of abstraction. There's abstraction like Jackson Pollock, and there's abstraction like what Dijkstra was suggesting: elegance. Which personally made the article a very weird read for me.
tbh codebases like that predate AI code generators. I had one job where my predecessor was not a very good developer by modern standards, but he was productive... a dangerous combination.
I also kind of respect it; it bothers me endlessly when everything isn't perfect, and this guy just threw caution to the wind. Joke's on me, as I'm working for him now. But it's not like anything that predates AI - I couldn't write this type of slop if I tried, lol. Zero formatting, linting, or anything. Just straight goulash.