danielbln's comments

Prevention paradox.

I'll ignore the bait and answer: NFTs were gambling in disguise, while these claws are personal/household assistants that proactively perform various tasks and can be genuinely useful. The security problem is very much unsolved, but comparing them to NFTs is willfully ignorant at best.

Once a business takes on VC and/or goes public, enshittification will inevitably follow.

Wow I had no idea ngrok had raised $50M, that's wild!

Yes, but also antibiotics, vaccinations, child mortality down down down, life expectancy up up up. I wouldn't trade for living even 100 years prior compared to today, or 500-200k years ago for that matter.

With everything wrong and sick with today's world, let's not take the achievements of our species for granted.


You wouldn't make that trade because you are part of the last generation (loosely speaking, a collection of generations) before it all comes crumbling down. We are living unbelievably privileged lives because we are burning all of the world's resources to the ground. In the process, we're destroying the ecosystem and driving a mass extinction event. Nothing about the way we live is sustainable long-term. We're literally consuming hundreds of millions of years worth of planet-wide resource buildup over a span of a couple of centuries. Even if we avoid the worst case scenario, humans 200 years from now will almost certainly not be able to live anywhere near as luxuriously as we do now, unless there's a culling of billions. In the actual worst case scenario, we may render the planet uninhabitable for anything we regard as intelligent life.

In that sense, we have just enough collective intelligence to be dangerous and not enough intelligence to moderate ourselves, which may very well result in an evolutionary deadend that will have caused untold damage to life on Earth.


You lost me when you started narrating the fossil doom vision.

With the current progress in solar, as well as the remaining coal, gas and uranium reserves, energy is not going to be what finishes our civilization.

While I don't think we are going to get true collapse, I think we are going to get a lot of technical progress compensating for biosocial deterioration.

The demographics, mental health and dysgenics are all real, quantified trends, and we are going to face the reality of a less capable, less taxable population for the rest of this century. It's baked in at this point.


We also live in an era where we can create hydrocarbon fuel DIRECTLY from the atmosphere and desalinate fresh water in unlimited supply, with power derived directly from the sun or from atomics.

We also live in a time where the human population, where it is most concentrated, is declining rather than growing, so far without too disastrous consequences.

The greening of the earth has been happening since the 1980s, i.e. about a 0.3% coverage increase per year in recent decades.

Places that were miserable and poor, like China, have been lifted to prosperity and are now leading in renewable tech.

There is much to celebrate and after the recent passing of Paul Ehrlich, we should pause and consider just how wrong pretty much every prediction he made was.


That seems both fatalistic and doomerist to me, but time will tell. I would assume germ theory would survive regardless, as would immunology, so I'd hold on to those two at least.

Doomerism is a kind of religion that goes back as far as the eye can see. What's interesting about it is that, in spite of being perpetually incorrect in its myriad predictions, it continues to adapt and attract new adherents.

See also (recent only):

- Paul Ehrlich's Population Bomb (Malthusian collapse)

- The Club of Rome's The Limits to Growth (resource exhaustion)

- Thomas Malthus' Population growth / famine cycle

- James Lovelock's Global warming catastrophe predictions

- Hubbert's (et al) Peak oil economic disaster

- Molina & Rowland's Ozone catastrophe

- Metcalfe's internet collapse


I am not a doomer, nor a Malthusian, merely a realist. There are a few points I could make briefly:

- Everything lasts forever, until it doesn't. Ancient Egyptian civilization lasted for thousands of years, until it didn't. Any Egyptian could point to thousands of years of their heritage and say it hasn't ended yet, therefore any prediction that it will end is clearly bad and dumb. Then it was conquered by Romans, and then by Islam, with its language, culture, and religion extinguished, extant only in monuments, artifacts and history books.

- We have nuclear weapons now. Any prediction of an imminent end of human civilization before then would be purely religious, but there is a real reason to believe things have changed. We are currently in a time of relative peace secured by burning resources for prosperity, but what happens when those resources run out and world conflict for increasingly scarce resources is renewed with greater vigor?

- Note that I did not outright predict the end of human civilization, merely noted it as a plausible worst-case scenario. If civilization continues on more-or-less as it is, in the next couple of hundred years, we will drive countless more species to extinction. We will destroy so much more of our environment with climate change, deforestation, strip mining, overfishing, pollution, etc. We will deplete water reservoirs and we will deplete oil, helium, phosphorus, copper, zinc, and various rare earth elements. Not a complete depletion, but they will become so scarce as to not be widely available or wasted for the general population's benefit. If billions of people are still alive then, which I explicitly suggested was a possibility, they will as a simple matter-of-fact live much less comfortably prosperous lives than us. It will not take a great catastrophe to result in a massive reduction in living standards, because our current living standards are inherently unsustainable.


Not really, it varies a lot by region. UK and Ireland, absolutely. In Germany or France it's waaaay more mixed. Overall, by employee count, most tech jobs in Europe are domestic, not American FDI.

Who says you can't iterate on a design just because an LLM does the manual typing?

I meant to write “tactile”, not “tactical”, but missed it before the edit window expired.

Anecdotally, ask people who knit whether their brain is stimulated. Physically engaging with the thing you are making is part of the process that makes it actually good.


Yet someone who knits can do so without spinning the yarn themselves, or shearing the sheep.

Our engineers deliver concise outputs because we have settled internally that that's what we want. Fluffy verbosity serves no one if there's little signal in it, so just give me the tight, concise bullet version: no purple prose, no emojis, none of the chaff.

dotenv came out in 2012; the .env convention predates LLMs and agents by quite some time.

.env was designed for local development ... not for storing production secrets, and user credentials are exactly that
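The convention being discussed is simple enough to show in a few lines. This is an illustrative, stdlib-only sketch of a naive .env loader for local development; a real library like python-dotenv also handles quoting, multiline values, and interpolation.

```python
import os
import tempfile

def load_env(path=".env"):
    """Naive .env loader: KEY=VALUE lines, blank lines and '#' comments skipped.

    setdefault means real environment variables win over file values,
    a common convention for local-dev overrides.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo: write a sample .env for a local dev session and load it.
with tempfile.TemporaryDirectory() as d:
    env_path = os.path.join(d, ".env")
    with open(env_path, "w") as f:
        f.write("# local dev only -- never commit this file\n")
        f.write("EXAMPLE_API_KEY=dev-key-123\n")
    load_env(env_path)

print(os.environ["EXAMPLE_API_KEY"])
```

Note that nothing here protects the value at rest: it sits in plaintext on disk and in the process environment, which is exactly why the pattern is scoped to local development.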

Hey, maintainer of GitAgent here.

Fair criticism, and I want to address it directly rather than dodge it.

The `.env` pattern is intentionally scoped to local development — a developer running their own agent with their own keys on their own machine. For that use case, the threat model is 'don't accidentally commit secrets,' which `.gitignore` does solve.

_pdp_ is right that this breaks down the moment you're handling credentials that belong to someone else — OAuth tokens, multi-tenant keys, anything production-adjacent. That's a real gap in the current spec.

What we're planning: a `secrets:` block in `agent.yaml` supporting pluggable backends — OS keychain, 1Password CLI, Vault, AWS SSM — so the spec has a first-class path for production secret management instead of implicitly blessing `.env` for all contexts.
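One way such pluggable backends could compose is a simple fallback chain. This is a hypothetical sketch, not the spec: the function names, backend set, and lookup order are all invented for illustration, with real external stores (keychain, Vault, SSM) stubbed out.

```python
import os

def resolve_secret(name, backends):
    """Ask each backend in order; first non-None answer wins.

    Illustrative only -- the planned `secrets:` block may look nothing
    like this once designed.
    """
    for backend in backends:
        value = backend(name)
        if value is not None:
            return value
    raise KeyError(f"secret {name!r} not found in any backend")

def env_backend(name):
    # Real backend: the process environment (what `.env` feeds today).
    return os.environ.get(name)

def keychain_backend(name):
    # Stub: a real implementation would shell out to the OS keychain,
    # 1Password CLI, Vault, or AWS SSM.
    return None

os.environ["DEMO_TOKEN"] = "s3cret"
print(resolve_secret("DEMO_TOKEN", [keychain_backend, env_backend]))
```

The point of the chain is that `.env`/environment becomes just the lowest-priority backend for local dev, while production deployments configure a real store ahead of it.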

But I'd genuinely love more input from this thread — if you were designing secret management for a git-native agent spec, what would you want it to look like? What patterns have worked well in your setups? This is an open spec and the best ideas should win.


Check out https://varlock.dev for a modern take on .env that gets your secrets out of plaintext. Free and open source - works with tons of tools. Adds validation, type safety, lots of nice features.

But this is just a fig leaf. The agent will usually have file-level access, and even if by some miracle you manage to feed the env vars into your program without the LLM looking over your shoulder, it can edit the files to add print statements.

If you want LLMs to work on your code, and be sure not to have them leak your secrets, you need a testing or staging environment to which they get credentials instead of prod. Now, if only that had been best practice before... Oh wait it was...


I don't get your point. Web tools have been doing A/B feature testing all the time, way before we had LLMs.

This is very different from the A/B interface testing you're referring to: what LLMs enable is A/B testing the tool's own output (same input, different result).

Your compiler doesn't do that. Your keyboard doesn't do that. The randomness is inside the tool itself, not around it. That's a fundamental reliability problem for any professional context where you need to know that the same input produces the same output, every time.


It’s exactly the same as A/B testing an interface. This is just testing 4 variants of a “page” (the plan), measuring how many people pressed “continue”.

You've grouped LLMs into the wrong set. LLMs are closer to people than to machines. This argument is like saying "I want my tools to be reliable, like my light switch, and my personal assistant wasn't, so I fired him".

Not to mention that of course everyone A/B tests their output the whole time. You've never seen (or implemented) an A/B test where the test was whether to improve the way e.g. the invoicing software generates PDFs?


> LLMs are closer to people than to machines.

jfc. I don't have anything to say to this other than that it deserves calling out.

> You've never seen (or implemented) an A/B test where the test was whether to improve the way e.g. the invoicing software generates PDFs?

I have never in my life seen or implemented an A/B test on a tool used by professionals. I see consumer-facing tests on websites all the time, but nothing silently changing the software on your computer. I mean, there are mandatory updates, which I do already consider to be malware, but those are, at least, not silent.


Why are you calling it out? You are interpreting the statement too literally. The point is probably about behavior, not nature. LLMs do not always produce identical outputs for identical prompts, which already makes them less like deterministic machines and superficially closer to humans in interaction. That is it. The comparison can end here.

They actually can, though. The frontier model providers don't expose seeds, but for inferencing LLMs on your own hardware, you can set a specific seed for deterministic output and evaluate how small changes to the context change the output on that seed. This is like suggesting that Photoshop would be "more like a person than a machine" if they added a random factor every time you picked a color that changed the value you selected by +-20%, and didn't expose a way to lock it. "It uses a random number generator, therefore it's people" is a bit of a stretch.
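The seeding point is easy to demonstrate with a toy stand-in. This is not a real LLM, just stochastic sampling from a tiny vocabulary, but it shows the same property: the randomness lives in a pseudo-random generator, so fixing the seed makes the "output" fully reproducible.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat"]

def sample_tokens(seed, n=8):
    """Toy stand-in for temperature sampling in an LLM decoder.

    A local, seeded RNG means the whole stochastic process is
    deterministic: same seed plus same "context" yields the same
    token sequence every run.
    """
    rng = random.Random(seed)
    return [rng.choice(VOCAB) for _ in range(n)]

a = sample_tokens(seed=42)
b = sample_tokens(seed=42)
c = sample_tokens(seed=7)

print(a == b)  # same seed: identical output
print(a == c)  # different seed: typically different output
```

Hosted APIs usually don't expose this knob (and batching/hardware effects can add their own nondeterminism), but when you control the inference stack, the "inherent randomness" is a configuration choice, not a law of nature.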

You are right, I was wrong. I think anthropomorphizing LLMs to begin with is kind of silly. The whole "LLMs are closer to people than to machines" comparison is misleading, especially when the argument comes down to output variability.

Their outputs can vary in ways that superficially resemble human variability, but variability alone is a poor analogy for humanness. A more meaningful way to compare is to look at functional behaviors such as "pattern recognition", "contextual adaptation", "generalization to new prompts", and "multi-step reasoning". These behaviors resemble aspects of human capabilities. In particular, generalization allows LLMs to produce coherent outputs for tasks they were not explicitly trained on, rather than just repeating training data, making it a more meaningful measure than randomness alone.

That said, none of this means LLMs are conscious, intentional, or actually understanding anything. I am glad you brought up the seed and determinism point. People should know that you can make outputs fully predictable, so the "human-like" label mostly only shows up under stochastic sampling. It is far more informative to look at real functional capabilities instead of just variability, and I think more people should be aware of this.


What other tool can I have a conversation with? I can't talk to a keyboard as if it were a coworker. Consider this seriously, instead of just letting your gut reaction win. Coding with Claude Code is much closer to pair programming than it is to anything else.

You could have a conversation with Eliza, SmarterChild, Siri, or Alexa. I would say surely you don't consider Eliza to be closer to a person than a machine, but then it takes a deeply irrational person to have led to this conversation in the first place, so maybe you do.

Not productive conversations. If you had ever made a serious attempt to use these technologies instead of trying to come up with excuses to ignore it, you would not even think of comparing a modern LLM coding agent to some gimmick like Alexa or ELIZA. Seriously, get real.

Not only have I used the technology, I've worked for a startup that serves its own models. When you work with the technology, it could not be more obvious that you are programming software, and that there is nothing even remotely person-like about LLMs. To the extent that people think so, it is sheer ignorance of the basic technicals, in exactly the same way that ELIZA fooled non-programmers in the 1960s. You'd think we'd have collectively learned something in the 60 years since but I suppose not.

I really don't care where you've worked, to seriously argue that LLMs aren't more capable of conversation than ELIZA, aren't capable of pair programming even, is gargantuan levels of cope.

I didn't make any claims about their utility. I said that they are not like people. They are machines through and through. Regular software programs. Programs that are, I suppose, a little bit too complex for the average human to understand, so now we have the Eliza effect applying to an entirely new generation.

"I had not realized ... exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." -- Eliza's creator


I would doubt that they are just "regular software programs", as explainable AI (or other statistical tracing) has been lagging far behind.

If this is the case and the latest models can be explained through their weights and settings, please link to it. I would like to see explainable AI up and coming.


As far as I can tell, LLMs rarely give the exact same output twice.

> same input, different result.

What is your point? You get this from LLMs. It does not mean that it is not useful.


Yes! And it was bad then too!!

I want software that does a specific list of things, doesn’t change, and preferentially costs a known amount.


Opus is not an acronym.

I know, but it's certainly a new paradigm.

O.P.U.S OutProgram U Soon
