Hacker News | senko's comments

To do that, you need to read the article first, which is the point of click-bait titles. The point of the defense is to avoid exposing your neurons to that stuff.

I would hope that people are reading articles first and submitting them to HN because they are interesting, rather than submitting articles to HN blindly.

I agree with you on that, but that just holds true (we hope) for the OP.

HN already editorializes the title, to help everyone other than the OP (not all people agree over what's interesting to them). Now we're just arguing over the degree.


You're absolutely right! (errm...oops....anyways...)

The fact that LLMs usually generate anodyne summaries is actually a benefit here.

I used my website-to-markdown tool[0] to get the text, piped the output to claude -p and got a pretty decent "Patching Copy Fail at scale: how bpf-lsm bought us time before the kernel reboot" result.

[0] https://markshot.dev


It is telling that this piece of art (yes, it is art, and it is fun) is getting defaced by actual people, some metaphorically spraying the "fuck this AI slop" graffiti.

Same folks doing the antisemitic slurs.

Do not ignore the marketing and (especially) sales part. You're going to be running a business and just the fact that your software is genuinely useful and popular will not bring in the money.

Don't shy away from pitching companies you see using your tools to pay (note it's a sales pitch: "here's the actual value you get for your spend", not "support us, it's the moral thing to do").

Thanks for working on open source and good luck with this!


> Can agentic engineers adhere to a similar code of ethics that a professional engineer is sworn to uphold?

Can software engineers?


> If the problems didn’t revolve around load

GitHub is not a mom&pop operation.

I expect the $3T company to handle the load, or at least place a prominent "only for hobby use" warning on top.


My understanding of the parent is more charitable: if your thinking process relies on being told only the truth, you are going to fare poorly in this world.

LLMs are an example, but so are random pages on the internet, a bunch of stuff we get served by the media (mainstream or otherwise), "expert opinions" by biased or sponsored experts or experts in a different field, etc., etc.

As the popular quip goes: It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.

With LLMs, we actually do get the warnings. Here's the ChatGPT footer: "ChatGPT can make mistakes. Check important info." For Claude: "Claude is AI and can make mistakes. Please double-check responses."

Such disclaimers, if written, are usually hidden deeply in terms of use for a random website, not stated up front.


Yes, you understand me 100%. vhantz is just picking a fight.

More importantly, we don't need to live in a world where every presentation of a fact comes with a disclaimer that it can be wrong.


> Humans WILL anthropomorphize the AI, humans WILL blindly trust their outputs, and humans WILL defer responsibility to them.

Humans ARE doing this with classical computer software as well.

It's impossible to make anything fool-proof because fools are so ingenious!

> Nothing that can be described as "intelligent" can be made to be safe.

Knives aren't safe. Cars are deadly. Hair dryers can electrocute you. An iron can burn you. There are a million ordinary household tools that aren't safe by your definition of the word, yet we still use them daily.


> This isn’t a coincidence. It’s the same SDLC every functioning engineering organisation runs, just in different vocabulary. [...] Amazon calls it the working-backwards memo and the bar raiser. Every healthy team has some version of this loop.

This (SDLC == working backwards & bar raiser) is so horribly wrong that I hope it was an LLM hallucination.

In general, I'm starting to see these agent scaffolding systems as an anti-pattern: people obsess over systems for guiding agents and construct elaborate Rube Goldberg machines, and then others cargo-cult them wholesale, in an effort to optimize and control a random process and minimize human involvement.


All of these articles about setting up the perfect agent environment with skills, plugins, MCP servers, markdown files, etc. etc. remind me so much of the culture around setting up the perfect "productivity stack". You need the perfect note-taking app, ticketing app, calendar integrations, yada yada before you can really do anything meaningful. The reality is that you're going to get beat by someone with a few things written down on a piece of paper who is just getting stuff done.

The problem is it’s so rarely A/B tested, definitely not at scale. An engineer, who writes all these my-workflow-but-for-agents skills, proceeds to get the good outcome, while also seeing affirmations that the agent did follow the prescribed processes - that is considered a victory. In reality the outcome could’ve been just as good if they fed Claude a spec + acceptance criteria, or even a basic prompt for the simpler tasks.

Yeah, I blind A/B test everything, and a lot.

But I don't expect anyone to ever use my stuff. It's complicated as hell. But it's for me, and it works without me having to remotely think about the complexity.

I love that.


This is similar to how we collectively approach Taylorism, isn't it? The world favors capitalism, and Taylorism makes a handy scaffolding for it.

The accidental vs essential difficulty argument ignores the fact that you can abstract away (some) essential difficulty if you're willing to take a performance hit.

Design patterns in an older (programming) language become core language features in a newer one. As we internalize and abstract away the best patterns for something, it becomes accidental but it's only obvious in retrospect.
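A concrete (illustrative, not from the original comment) instance of this absorption: the Iterator pattern, which in C means a hand-rolled state struct and a `next()` function, collapses into a single keyword in Python.

```python
# The Iterator "design pattern" as a built-in language feature:
# `yield` turns this function into a lazy iterator, no state
# struct or explicit next() bookkeeping required.
def countdown(n):
    while n > 0:
        yield n
        n -= 1

print(list(countdown(3)))  # [3, 2, 1]
```

What was once a pattern to learn and re-implement is now "accidental" complexity the language removed, which is only obvious in retrospect.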

The article quotes Brooks (quoting Parnas) about just that (later, in context of LLMs):

> automatic programming always has been a euphemism for programming with a higher-level language than was presently available to the programmer. [...] Once those accidents have been removed, the remaining ones are smaller, and the payoff from their removal will surely be less.

Considering this was written when C was the hot new stuff, let's compare the ability to code a CRUD web app in Python/Django vs. C. What Brooks and Parnas are saying is that Python/Django cannot bring big improvements in building a CRUD web app compared to C, because they can only make it easier to program, reducing accidental complexity. But we've since redefined "accidental", and I would argue that you can write a CRUD web app in Python/Django at least 100x faster than in C (and probably at least 100x more securely), although it may take 1000x more CPU and RAM while running.
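To make the comparison concrete, here's a minimal sketch (stdlib Python with sqlite3, standing in for the Django stack) of a create-and-read slice of a CRUD app. The equivalent C would need manual memory management, string handling, and an SQL client library; here, parameter binding and row iteration are handled for you.

```python
# A create + read slice of a CRUD app in a few lines of stdlib Python.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
# Parameterized query: the driver handles escaping (the "100x more secure" part).
conn.execute("INSERT INTO posts (title) VALUES (?)", ("Hello, world",))
titles = [row[0] for row in conn.execute("SELECT title FROM posts")]
print(titles)  # ['Hello, world']
```

The performance hit is real, but the "accidental" work absorbed by the higher-level stack is exactly what makes the 100x-faster claim plausible.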

So "we removed most of the accidental difficulties and most of what remains is essential" is a kind of "end of history" argument.

> I’d be surprised if there’s even a doubling of productivity still available from a complete elimination of remaining accidental difficulty.

It's good that this statement is hedged ("I'd be surprised if"), because otherwise it's just punditry.

> LLM coding does not represent a silver bullet

Here I agree with the author completely, but probably not for the same reasons. The definition of "silver bullet" the article uses (quoting Brooks):

> There is no single development, in either technology or management technique, which by itself promises even a single order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity.

AI-assisted development is not a single technique, the same way "devops" or "testing" or "agile" is not a single technique. But more importantly, I agree it will take time to find best practices, for the technology change to slow down, and for the best approaches to diffuse across the industry.

The article's conclusion:

> You should be adopting and perfecting solid foundational software development practices like version control, comprehensive test suites, continuous integration, meaningful documentation, fast feedback cycles, iterative development, focus on users, small batches of work… things that have been known and proven for decades, but are still far too rare in actual real-world software shops.

These are great, and I'm gonna let him/her finish, but it's curious that actual coding isn't mentioned anywhere. The author doesn't suggest you polish your understanding of C pointer semantics, the Rust ownership model, or the Django ORM, or really, deeply understand B-trees. Looks like pedestrian details like those are left as an exercise for the reader ... or the reader's LLM.

