The process of setting up a camp for the first few times and attempting new art installations is positively gargantuan. There are a lot of considerations in play, and MOOP is just one of them. Unless you expect people to get camps and installations right the first time, you need this.
It's not "flow state", but working on three features in parallel requires focus that's just as fragile, at least if you want to follow the LLMs' output and steer them when they make mistakes.
> Same story with tests. Who decides when it's the test that should be changed/deleted or the implementation?
Claude is remarkably good at figuring this out. I asked it to look at a failing test in a large and messy Python codebase. It found the root cause, asked whether the failure was a regression or an insufficiently specified test, performed its own investigation, and found that the test harness was missing mocks that were exposed by the bug fix.
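For flavor, the failure mode looks something like this (a minimal sketch with made-up names, not the actual codebase): the bug fix adds a call to a second API method, so any test that only stubbed the first one starts failing until the missing mock is added.

```python
# Hypothetical sketch: the bug fix adds the refresh_user call, which older
# tests never mocked; the repaired test below stubs both dependencies.
from unittest import mock

def get_user(user_id, api):
    record = api.fetch_user(user_id)
    # The bug fix: stale records must now be refreshed before returning.
    if record.get("stale"):
        record = api.refresh_user(user_id)  # the call the old harness missed
    return record

def test_get_user_refreshes_stale_records():
    api = mock.Mock()
    api.fetch_user.return_value = {"id": 1, "stale": True}
    api.refresh_user.return_value = {"id": 1, "stale": False}
    assert get_user(1, api) == {"id": 1, "stale": False}
```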
If you point it at a specific thing and ask a specific question, yes, it will figure it out.
But I never have "fix this test" as a task. What happens when you task it with a feature implementation and a test breaks in the middle of the session? It will not behave the same way.
I care a lot about software and I use LLMs extensively. There are some things I deeply understand yet I don't care for doing anymore because I've done them for years and there's nothing to be gained from doing them manually.
That’s just you using the tools responsibly. Not using LLMs to perform well-defined, virtually deterministic tasks that you fully understand is simply a waste of time. There’s a big difference between that and just letting agents go wild and do your design for you.
I am not so sure. I wrote some scripts with an LLM that aggregated data from several APIs, and the LLM had the foresight to create a caching layer for the API responses, correctly inferring that I would need the results over and over again, and to use asyncio to speed up the fetches. This would have been a v2 or v3 for me, and it one-shotted it perfectly.
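The shape of what it generated was roughly this. A sketch under my own assumptions: aiohttp, the file-based cache layout, and the example.com URLs are all stand-ins, not the actual script.

```python
# Sketch: concurrent API fetches with asyncio/aiohttp plus a simple on-disk
# cache so repeated runs don't re-hit the APIs. All names are hypothetical.
import asyncio
import hashlib
import json
from pathlib import Path

import aiohttp

CACHE_DIR = Path(".api_cache")
CACHE_DIR.mkdir(exist_ok=True)

async def fetch_json(session: aiohttp.ClientSession, url: str) -> dict:
    cache_file = CACHE_DIR / (hashlib.sha256(url.encode()).hexdigest() + ".json")
    if cache_file.exists():  # cache hit: skip the network entirely
        return json.loads(cache_file.read_text())
    async with session.get(url) as resp:
        data = await resp.json()
    cache_file.write_text(json.dumps(data))  # save the response for next run
    return data

async def main(urls: list[str]) -> list[dict]:
    async with aiohttp.ClientSession() as session:
        # gather() runs all fetches concurrently instead of one at a time
        return await asyncio.gather(*(fetch_json(session, u) for u in urls))

results = asyncio.run(main([
    "https://api.example.com/a",  # placeholder endpoints
    "https://api.example.com/b",
]))
```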
Yeah, they are good at applying generic patterns, but often that's overkill/YAGNI that leads to more maintenance work in places that would be fine with a much simpler, more straightforward solution. But this is something the engineer can decide, and with LLMs they won't be forced to make the trade-off based on how long it takes to build, but on whether it is really necessary or not.
When it works, it feels genuinely miraculous. Working in a common problem space, like gluing APIs together, it generally does well. Doing something novel, or even a little complicated, it can really lead you astray.
> Also, companies are pressuring employees towards adoption in novel ways. There was no such industry-wide pressure by employers in the 90s, 2000s or 2010s for engineers to use a specific tech.
Companies have been enforcing technology mandates since time immemorial. In the early 2000s there were definitely a lot of mandates to move away from commercial UNIX to Linux. Lots of companies began enforcing the switch to PHP, Ruby and Python for new projects.
Yes, but the entire industry was not pushing any one single tool at the same time. If you disliked Django, you could go to Rails. If you disliked Rails, you had Phoenix. Etc.
Hard disagree. LLMs are fantastic for fixing bad architecture that's been around for a decade because nobody was willing to touch it. I can have one write tons and tons of sanity checks and then rewrite the functionality piece by piece, with far more verification than I'd get from most engineers.
It's not immediate; it still takes weeks if you want to actually do QA and roll out to prod. But it's definitely better than the pre-LLM alternatives.
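Concretely, the "sanity checks" I mean are characterization tests that pin down the legacy behavior before any rewritten piece lands. Something like this sketch, where the pricing functions are hypothetical stand-ins:

```python
# Hypothetical example: pin the decade-old behavior with characterization
# tests, then require the LLM-assisted rewrite to match it exactly.
import pytest

def legacy_price(qty: int) -> float:
    # The untouchable decade-old logic (stand-in)
    return round(qty * 9.99 * (0.9 if qty >= 10 else 1.0), 2)

def new_price(qty: int) -> float:
    # The piece-by-piece rewrite; must be behavior-identical before shipping
    discount = 0.9 if qty >= 10 else 1.0
    return round(qty * 9.99 * discount, 2)

@pytest.mark.parametrize("qty", [0, 1, 9, 10, 11, 100])
def test_rewrite_matches_legacy(qty):
    assert new_price(qty) == legacy_price(qty)
```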
Because there is a certain point where barriers to entry prevent meaningful competition once winner-take-all power laws start kicking in, and stability has hitherto been predicated on having a plurality of non-interrelated competitors, ensuring that no one man's quirks drive too much of society's theoretical output.
AI will make this dynamic worse, and it has the extra danger that the default, banal way of applying the technology in fact encourages its application to that end.
I don't really see it that way, because most software companies overestimate the importance of fantastic software over merely adequate software, and most of the time good sales development, support, and negotiation skills are what actually sell it.
I also don't think that the commodification of programming is a substitute for things like understanding your customers, having good taste for design, and designing software in a way that is maximally iterable.
The degenerate side is clueless upper management and fad-driven engineering. We have talked extensively about this.
There is a more rational side to it that I've seen in my org: some engineers absolutely refuse to use AI, and as a consequence they are now, clearly and objectively, much less productive than other engineers. The thing is, you still need to learn how to use the tool, so a nontrivial percentage of obstinate engineers needs to be driven to it, the same way developers who refused Docker or k8s or whatever eventually had to be pushed.
Ah yes, we must force these obstinate engineers to the right path! Only after getting everyone to see the light will they understand and thank us for boundless productivity!! /s
Perhaps these “obstinate” engineers have good reasons for their decision. And it should be their decision!
To be so confident in what is “the right way (TM)” and to try to force it onto others is... revealing.
You would be absolutely shocked at how many software projects are still run, to this day, without source control at all. Or automated (or manual) testing. And how many hand-crafted artisanal servers are running on AWS, never to be recovered if their EC2 instance is killed for some reason.
Sure, but that’s a small and shrinking market. Not a source of economic security or growth for its employees, nor for most of its companies (though some have defended niches).
I've seen growing companies running multiple millions in ARR through systems like that. It's way more common than you'd think as a professional software developer.
I seriously don't see how version control and LLMs are comparable: a deterministic way to track code changes over time, versus an essentially non-deterministic statistical code generator that might get you what you want, might do it in a reasonable time frame, and might not land you in a minefield of short-term-good/long-term-bad design decisions.
> an essentially non-deterministic statistical code generator that might get you what you want, might do it in a reasonable time frame, and might not land you in a minefield of short-term-good/long-term-bad design decisions.
Sounds like a human? The ‘statistical’ part is arguable, I suppose.
There is an absolute embarrassment of modern tooling in other categories that I have no problem whatsoever embracing. I'm not holding out because I'm stuck in my ways. Maybe I value things other than expediency at massive cost. Maybe I speak just as well to computers as I do to humans.
I'm sure I will have no problem whatsoever remaining in the employ of a firm that trusts me to make products and tooling that still push the envelope of what's possible without resorting to the sheer brute force of trillion-parameter-scale models.
There is no massive cost. For 80% of the brute work that needs to be done day in and day out, LLMs produce code as good as a senior engineer's, at breakneck pace, provided you have sufficient competency in steering the model.
I ran the statistics myself: my company is spending 40% less time on feature development since AI agents came into mass use, and pushing 50% more tickets without any noticeable increase in regressions.
After 18 months, the hard evidence is in. Much like replacing bare-metal servers with k8s in the many use cases where the evidence justified the burden, or replacing shell scripts with Terraform, it's time to move on.
I don't really see a place for no AI usage in line-of-business software apps anymore.
Faster feature development. More strategic thinking about how to keep the dev pipeline full. Doing braindead mechanical improvements that pay off tech debt that would otherwise never get management sign-off. Writing GUI-based tools for support teams that previously had to scour reams of shell scripts. Spending more time refining specifications and estimations. Writing throwaway concepts of different design ideas so that architecture discussions are based on real code instead of pseudocode. Clearing out the backlog of bugs that used to be terribly annoying to reproduce and that I can now just throw brute compute at.
Sounds awful. Just filling the time with worthless stuff. You are basically a liability. Wouldn’t like to have you in my team. Less is more (nowadays more than ever)
Having worked with LLMs, I can say you absolutely can golf most (>50%) of the lines of code out of existence. I regularly do, because the model picks the wrong abstractions and sticks with them.
Not the OP, but I'd wager that while many (maybe most?) people are limited in their potential violent tendencies by basic human norms that only break down in times of crisis, sociopathic CEOs constantly test and break those norms whenever there is even a slight upside.