And which side is that? I mean, from my point of view, it seems like it’s probably the ones who are having a magic robot write a thousand lines of code that almost, but not quite, does something sensible, rather than using a bloody library.
(For whatever reason, LLM coding things seem to love to reinvent the square wheel…)
> the ones who are having a magic robot write a thousand lines of code that almost, but not quite, does something sensible
Gee, I wonder which "side" you're on?
It's not true that all AI-generated code looks like it does the right thing but doesn't, or that all human-written code does the right thing.
The code itself matters here. So given code that works, is tested, and implements the features you need, what does it matter if it was completely written by a human, an LLM, or some combination?
Do you also have a problem with LLM-driven code completion? Or with LLM code reviews? LLM assisted tests?
Oh, yeah, I make no secret of which side I’m on there.
I mean I don’t have a problem with AI driven code completion as such, but IME it is pretty much always worse than good deterministic code completion, and tends to imagine the functions which might exist rather than the functions which actually do. I’ve periodically tried it, but always ended up turning it off as more trouble than it’s worth, and going back to proper code completion.
LLM code reviews, I have not had the pleasure. Inclined to be down on them; it’s the same problem as an aircraft or ship autopilot. It will encourage reduced vigilance by the human reviewer. LLM assisted tests seem like a fairly terrible idea; again, you’ve got the vigilance issue, and also IME they produce a lot of junk tests which mostly test the mocking framework rather than anything else.
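The "junk tests" failure mode is worth spelling out. A minimal sketch, using a hypothetical `send_welcome_email` function invented for illustration: the test stubs out the only collaborator and then asserts that the stub was called, so it exercises the mocking framework rather than any real behaviour.

```python
from unittest.mock import Mock

def send_welcome_email(mailer, user):
    # Hypothetical production code: delegates the actual work to the mailer.
    mailer.send(to=user, subject="Welcome!")

def test_sends_email():
    # Junk test: it replaces the only collaborator with a Mock and then
    # asserts the Mock was called the way the implementation calls it.
    # It restates the implementation rather than checking an outcome, so
    # it would keep passing (or trivially be updated) under almost any bug.
    mailer = Mock()
    send_welcome_email(mailer, "alice@example.com")
    mailer.send.assert_called_once_with(to="alice@example.com", subject="Welcome!")

test_sends_email()
```

The test passes, but all it verifies is that the mock wiring matches the call site; nothing about email content, delivery, or error handling is checked.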
LLM code reviews are completely and utterly worthless.
I do like using them for writing tests, but you really have to be careful. Still, I prefer it to doing all the testing by hand.
But for like, the actual code? I'll have it show me how to do something occasionally, or help me debug, but it really just can't create truly quality, reliable code.
I’m not sure where you’ve been the last four years, but we’ve come a long way from GPT 3.5. There is a good chance your work environment does not permit the use of helpful tools. This is normal.
I’m also not sure why programmatically generated code is inherently untrustworthy, but code written by some stranger whose competence and motives are completely unknown to you is inherently trustworthy. Do we really need to talk about npm?
Dependencies aren't free. Pulling in a library that has less than a thousand lines of code total is really janky. Sometimes it makes sense, as with PicoHTTPParser, but it often doesn't.
Not saying left pad is a good idea; I’m not a Javascript programmer, but my impression has always been that it desperately needs something along the lines of boost/apache commons etc.
EDIT: I do wonder if some of the enthusiastic acceptance of this stuff is down to the extreme terribleness of the javascript ecosystem, tbh. LLM output may actually beat leftpad (beyond the security issues and the absurdity of having a library specifically to left pad things, it at least used to be rather badly implemented), but a more robust library ecosystem, as exists for pretty much all other languages, not so much.
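For context on why a leftpad library reads as absurd: left-padding is a one-liner in most standard libraries. A sketch in Python (JavaScript has since grown the equivalent `String.prototype.padStart`):

```python
# Left-padding a string to a fixed width is built into Python's str type:
# str.rjust pads on the left with a fill character up to the given width.
padded = "5".rjust(3, "0")
print(padded)  # → 005

# Even without a builtin, the whole "library" is one expression:
def left_pad(s, width, fill=" "):
    # Prepend fill characters until the string reaches the target width;
    # strings already at or past the width are returned unchanged.
    return fill * max(0, width - len(s)) + s
```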
So, first of all “but it’ll get better” has been the AI refrain since the 1950s. Voice recognition rapidly went from “doesn’t work at all” to “kinda works” in the 80s-90s, say, and in recent years has reached the heady heights of ‘somewhat useful’, though you still wouldn’t necessarily trust your life to it.
But also… okay, so maybe AI programming tools get good enough at some point. In which case, I suppose I’ll use them then! Why would I use a bad solution preemptively on the promise of jam tomorrow? Waiting for the jam surely makes more sense.
Web3, the Metaverse, and NFTs all failed to stand on their own two legs as technologies. It feels fair to call them failed products; none of them ever attained their goal of real decentralization.
Ah, yes. That’s why we all have our meetings in the metaverse, then go back home on the Segway, to watch 3d TV and order pizza from the robotic pizza-making van (an actual silly thing that SoftBank sunk a few hundred million into). And pay for the pizza in bitcoin, obviously (in fairness, notoriously, someone did do that once).
That’s just dumb things from the last 20 years. I think you may be suffering from a fairly severe case of survivorship bias.
(If you’re willing to go back _30_ years, well, then you’re getting into the previous AI bubble. We all love expert systems, right?)
NFTs lost because they didn't do anything useful for their proponents, not because people were critical of them. They would've fizzled out even without detractors for that reason.
On the other hand, normal cryptocurrencies continue to exist because their proponents find them useful, even if many others are critical of their existence.
Technology lives and dies by the value it provides, and both proponents and detractors are generally ill-prepared to determine such value.
Okay, but during the NFT period, HN was trying to convince me that they were The Future. Same with metaverses, same with Bitcoin. I mean, okay, it is Different this time, so we are told. But there’s a boy who cried wolf aspect to all this, y’know?
Baseline assumption: HN is full of people who assume that the current fad is the future. It is kind of ground zero for that. My HN account is about 20 years old and the zeitgeist has been right like once.