I end up shrugging. For a Claude Code power user today, a day's usage consumes less electricity than a morning commute in an electric car. That's to say nothing of the cost of keeping your workstation running and your building heated or cooled. Not quite a rounding error, but a relatively minor component of overall usage.
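A rough back-of-envelope sketch of that claim. All the figures here are assumptions for illustration (per-request inference energy, a heavy day's request count, EV efficiency, commute length), not measurements:

    # Back-of-envelope: heavy Claude Code use vs. an EV morning commute.
    # Every constant below is an assumed figure, not a measured value.

    WH_PER_REQUEST = 3.0     # assumed inference energy per request (Wh)
    REQUESTS_PER_DAY = 500   # assumed heavy "power user" day
    EV_WH_PER_MILE = 250     # typical EV efficiency (Wh/mile)
    COMMUTE_MILES = 15       # assumed one-way morning commute

    llm_kwh = WH_PER_REQUEST * REQUESTS_PER_DAY / 1000    # 1.5 kWh
    commute_kwh = EV_WH_PER_MILE * COMMUTE_MILES / 1000   # 3.75 kWh

    print(f"LLM day: {llm_kwh:.2f} kWh, commute: {commute_kwh:.2f} kWh")
    print(f"Ratio: {llm_kwh / commute_kwh:.2f}x")         # ~0.4x the commute

Under these assumptions a heavy day comes out at well under half the commute; obviously the ratio moves with the per-request figure, which nobody outside the labs knows precisely.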

At least for programming, the power usage seems worth it. For spinning up a million bots to argue with each other on Facebook, it's obviously a total waste.

At any rate, the power usage will become more apparent when these products stop being subsidised and the cost of that power is passed on to the end user.


Between Anthropic, the military, and Congress, I have the least faith in Congress to make knowledgeable policy around tech.


> you'll never find altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons"

To be fair, Anthropic didn't say that either. Merely that autonomous weapons without a human in the loop (HITL) aren't currently within Claude's capabilities; it isn't a moral stance so much as a pragmatic one. (The domestic surveillance point, on the other hand, is an ethical stance.)


They specifically said they never agreed to let the DoD use Anthropic's models for fully autonomous weapons. They said: "Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: Mass domestic surveillance [...] Fully autonomous weapons"

Their rationale was pragmatic. But they specifically said that they didn't agree to let the DoD create fully autonomous weapons using their technology. I'll bet 10:1 you won't ever hear Sam Altman say that. He doesn't even imply it today.


Dario said in an interview with CBS that they're not against fully autonomous weapons, but that their technology isn't there yet: https://www.youtube.com/watch?v=MPTNHrq_4LU&t=17m47s

Not sure how that's relevant. I never said Dario was taking an ethical stand. I said they did not agree for Claude to be used for fully autonomous weapons. Now, compare that to OpenAI, whose agreement does allow fully autonomous weapons.

> it isn't a moral stance so much as a pragmatic one

Agreed; the moral stance is saying no to the DoD and the US government.


Why do you suppose OpenAI's deal led to a contract, while Anthropic's deal (ostensibly containing identical terms) got it not only booted but declared a supply chain risk?

> it's also unprecedented for a contractor to suddenly announce their products will, from now on, be able to refuse to function based on the product's evaluation of what it perceives to be an ethical dilemma

That is a deeply deceptive description of what happened. Anthropic was clear from the beginning of the contract about the limitations of Claude; the military reneged; and beyond cancelling the contract with Anthropic (fair enough), they are retaliating in an attempt to destroy its business by threatening any other company that does business with Anthropic.


>Anthropic was clear from the beginning of the contract about the limitations of Claude

No, that's not what they said.

"Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now".


Going by what Hegseth said, it bans contractors from any relationship or partnership with Anthropic at all. No renting or selling GPUs to them; no allowing software engineers to use Claude Code; no serving Anthropic models from their clouds. They would probably have to give up investments, too; Amazon alone has invested something like $10B in Anthropic.

It bans them from using all open source software unless they have signed an agreement with the developer to prohibit use of Claude Code.

What open source software? Anthropic doesn't make open source software.

All open source software, because the developers might use Claude Code.

The military already has access to Grok but doesn't want it, because it's an inferior model, even compared to open source ones. So the military would probably choose to replace "supply chain risk" Claude with Qwen or Kimi before Grok.

It would be untouchable irony for the US to cut all ties with Anthropic and replace them with models developed by Chinese labs. The Onion becomes more irrelevant with each passing day.

How many generations does it take before the historians/archeologists uncover old issues of The Onion and decide it was the authoritative news of the day?

I thought I had a sense of déjà vu. I was wrong.

Grok is, according to most benchmarks, pretty close to SOTA. It's where the leaders were just a few weeks ago.

Exactly which model is best changes almost weekly as different companies tweak their flagship models. I doubt the military would want to switch suppliers every week.


I think that tells you more about the uselessness of SOTA benchmarks.

I think it says more about people's ability to ignore the truth when it doesn't support their worldview. Oh, you don't want Grok to be SOTA? Then it isn't! Problem solved.

So, two or three generations.

I'm more on the periphery than an insider, but I personally know researchers at all three major labs who were there long before GPT-3. They all care about existential safety, a lot; in the sense that they believe there's a meaningful chance all humans are dead a decade from now (and that that's a bad thing; unfortunately, there are also people deeply involved who don't think human extinction is a bad thing).

The issue is that they're embedded in capitalism, which drives the labs to push further and faster than is responsible. They (and, unfortunately, the rest of us) end up in a race where no individual feels like they can back off or halt, because if they do, they will be destroyed.


> unfortunately, there are also people deeply involved who don't think human extinction is a bad thing

You mean at the top labs? Since when isn't that level of misanthropy categorized as having mental health issues?


See e.g. Richard Sutton, who, although not at a top lab, is certainly a very important figure in the field.

Or, if you want someone with concrete influence at a top lab, Larry Page.


/rant

Existential in what sense?

There's this one sense in which people almost moralize about it: "yup, AI is just superior to humans, nothing we can do about it."

And then there's the sense in which the elite class implements mass surveillance and warfare and obsoletes billions of humans of its own volition. Today's AI is already capable enough to execute on that plan (with the proper evil engineering, of course).

There are two ways to "win". One is in an absolute or Platonic sense, one that cares about things like values even in the presence of extreme pushback. The other is in a Darwinian sense. No, not in the meme way that again feeds back into the narrative of "the things that survive are smarter". The things that survive, survive; it doesn't matter how they get there.

I can agree with the second way. But it gets smuggled in as the first way, almost as an attempt to crush any and all resistance preemptively.

AI doesn't need to, say, be capable of pushing the frontier of quantum mechanics to be lethal.

/endrant

Sorry, not really related to your comment, just had to get it out there.


In the context of AI research, there is no question that "existential" means "powerful AI literally kills every human being". It's a mainstream although not universal view among experts in the space that this is a serious possibility.

That's not my point. My point is the moralizing and worshipping around it.

For example: by "powerful", do you mean a mass government surveillance system? That could be implemented with today's AI right now, even if AI stagnated.

It's the "AI is just a superset of all humans, humans are dumb and don't even know themselves, we should just submit" attitude that I'm talking about.

The easiest way to solve a problem is to dissolve it, and say it doesn't actually matter. If you start from the position that humans are useless and don't matter, then sure, you can get absurdities like Roko's basilisk.

If humanity fails, the reason will almost certainly be that, first and foremost, people stopped caring about human problems and deemed humans too stupid to understand themselves; not because AI is, in some objective sense, a superset of all human capability and thus morally deserves to come out on top.


By "powerful", I mean a system whose operations humans cannot control or prevent or even reason about, in the same way that the members of an anthill can't do anything about a construction crew dumping concrete on them to lay a sidewalk. It's got nothing to do with "should submit" or "morally deserves". If the AI system in question is capable enough, it simply won't matter any longer what any human being thinks should happen. (In principle, it also has to be autonomous; in practice, I think OpenClaw has clearly illustrated that any AI system is going to be granted autonomy by someone.)

At least in the case of the researchers I mentioned, they have a deeply held, genuine belief that AI will, in the very near term, exceed humans in all intellectual capabilities, and that this poses a bigger risk to human existence than humans simply fucking things up (beyond the fuck-up of competently building a superior being). I would bet most of them believe that us being paperclipped is a more likely bad outcome than a dystopia arising from human control, simply because a human dystopia takes time to implement, even when aided by AI, and time is what we don't have.

I agree that all our theories of consciousness are deeply inadequate. And if it were purely a scientific question, I'd be fine holding off on it. But consciousness plays a huge role in most theories of ethics, and agnosticism with a negative prior will inevitably lead to unethical actions if there are any beings that exist outside our "is it human?" heuristic.

Other adult humans? Babies? Fetuses? Brain dead patients? Severe Alzheimer's? Higher apes? Mammals? Vertebrates? Jellyfish? Trees? Organic aliens? Inorganic aliens? A pile of dirt?

Without a good theory of consciousness, we can't answer yes or no for any of them. And yet we don't have a good theory of consciousness and still want to make ethical decisions. What do? We have to rely on gestures toward a theory of consciousness and make decisions based on it, despite its flaws.

