Hacker News | namuol's comments

Sure. You know what Putin did the last time oil prices were this high?

I see the word “enshittify” being thrown around casually about Claude Code. We’re still far from that part of the enshittification cycle. This is just a mismanaged product and the result of an extremely competitive market that moves too fast.

Never attribute to malice that which can be adequately explained by incompetence, etc.


Their third-party harness move seems like more than incompetence.

Yeah, I'm none too happy with Anthropic right now, but what's happening to Claude Code is just your typical garden-variety mismanagement of a project that grew way too fast for its owners to reasonably handle.

A bold claim, to suggest that LLMs aren’t prone to biases of their own that are less well understood.

LLM biases are being studied pretty consistently. Obviously that doesn't mean we know all of them, but it's being actively worked on.

Meanwhile with human doctors, every one of them is a unique person with a completely different set of biases. In my experience, getting a correct diagnosis or treatment plan often involves trying multiple doctors, because many of them will jump to a common diagnosis even if the symptoms don't line up and the treatment doesn't actually help.


> The registry grows with use. Every session is smarter than the last.

This feels a bit like one of those “now you have two problems” solutions. After a few dozen sessions I would expect the tool registry to be full of “noise” for most prompts. I would also expect most tools to be extremely specific to the task at hand, leading to redundancy and ultimately poor programmability due to inconsistencies between tool APIs.
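To make the “noise” concern concrete, here’s a toy sketch, assuming a registry that matches tools to prompts by naive description overlap. Every name below is made up, not from the actual project:

    # Hypothetical sketch: a tool registry that "grows with use,"
    # matched to prompts by word overlap. All names are invented.
    from dataclasses import dataclass

    @dataclass
    class Tool:
        name: str
        description: str

    def score(prompt: str, tool: Tool) -> int:
        # Count words shared by the prompt and the tool description.
        return len(set(prompt.lower().split()) &
                   set(tool.description.lower().split()))

    registry = [
        Tool("parse_csv_orders", "parse the orders csv file into rows"),
        Tool("parse_csv_invoices", "parse the invoices csv file into rows"),
        Tool("csv_to_rows", "read a csv file and return rows"),
        Tool("load_order_csv_v2", "load orders from a csv file"),  # near-duplicate
    ]

    prompt = "parse this csv file"
    for tool in sorted(registry, key=lambda t: score(prompt, t), reverse=True):
        print(tool.name, score(prompt, tool))
    # One generic prompt now matches four overlapping task-specific tools:
    # the registry is "smarter," but retrieval has to disambiguate redundant
    # tools with inconsistent names and APIs.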


It's an open experiment; the utility of Tendril is the concept. I'm more curious about how good the tool-making can get. Frontier models tend to be very specific about what they build, so you don't get that kind of bloat (yet).


> sample solutions from the model with certain temperature and truncation configurations, then fine-tune on those samples with standard supervised fine-tuning

It’s all moonspeak to me. I tried reading other comments that explain this, and they all sounded different or contradictory. I studied ML as a hobby years ago, but that was before the LLM explosion. Guess I need to start over again?
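If it helps decode the moonspeak, in rough Hugging Face terms it usually means something like the sketch below: draw completions from the model at a given temperature with nucleus (top-p) truncation, then treat those completions as ordinary next-token targets for supervised fine-tuning. The model and all hyperparameters here are placeholders, not the paper's actual setup:

    # Rough sketch, assuming Hugging Face transformers; "gpt2" and all
    # hyperparameters are placeholders.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    tok.pad_token = tok.eos_token  # gpt2 has no pad token by default
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Q: What is 17 * 3?\nA:"
    inputs = tok(prompt, return_tensors="pt")

    # 1. "Sample solutions ... with certain temperature and truncation
    #    configurations": temperature rescales the logits; top_p truncates
    #    sampling to the smallest token set whose probability mass sums to p.
    samples = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.8,         # sampling temperature
        top_p=0.95,              # nucleus (truncation) sampling
        num_return_sequences=4,  # several candidate solutions per prompt
        max_new_tokens=32,
    )

    # 2. "Fine-tune on those samples with standard supervised fine-tuning":
    #    the sampled text becomes ordinary next-token-prediction data.
    texts = [tok.decode(s, skip_special_tokens=True) for s in samples]
    batch = tok(texts, return_tensors="pt", padding=True)
    labels = batch["input_ids"].clone()
    labels[batch["attention_mask"] == 0] = -100  # ignore padding in the loss
    loss = model(**batch, labels=labels).loss    # causal-LM cross-entropy
    loss.backward()  # one SFT gradient step (optimizer omitted for brevity)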


Apropos of nothing, PC Builder Simulator on Steam costs $19.99 USD, and it requires a Windows machine with just 4GB RAM and a GPU with 2GB.


These LLM prompting tip articles write themselves if you just take the last decade of project management articles and replace “IC” with “agent”.


The timing of the release and the phrasing used in the headline: Woof.


It’s high time for regime change in the US.

