
This is grasping at straws. Centralised social media platforms won long ago for completely different reasons (mostly network effects and convenience). They haven't been threatened by independent sites for ages.

Facebook in particular has many times voiced its support for various regulations that would be onerous for smaller players.

Does that mean they actually support these regulations or could it mean that they think sounding supportive benefits them?

Even if they really did support a particular regulation, it could be to prevent a version of the same regulation that actually has teeth.

Or it could mean they hope to be consulted on the details of any regulation, which is more likely to happen if they sound constructive.

Corporations constantly navigate the political and regulatory landscape. You can't just take "supportive" statements like these at face value.

And finally there's the general fallacy of thinking that if B happens and A wanted it to happen then A must have caused B.


Have you read the spec? I have, but I don't understand how the revocation flow is supposed to be safe against collusion between issuers/governments and site owners to reveal the identity of (age verified) users.

Can you model the flow of the attack you want to mount here?

Is it the following:

Issuer revokes the wallet of Alice and then publicly says “This ID is Alice btw” and then verifiers can check their lists to see whether any of their received signatures are revoked (in which case they must be Alice)
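That linking concern can be sketched in a few lines. This is a hypothetical illustration of the collusion scenario described above, not the actual EUDI wallet protocol: the names (`verifier_log`, `issuer_disclosure`, `linked_sessions`) are all made up for the example.

```python
def linked_sessions(verifier_log, revoked, issuer_disclosure):
    """Return {session: identity} for every credential ID the verifier
    logged that the issuer has revoked and (colluding) de-anonymised."""
    return {
        session: issuer_disclosure[cred_id]
        for session, cred_id in verifier_log.items()
        if cred_id in revoked
    }

# Verifier logged which credential ID it saw in each session
verifier_log = {"session-1": "cred-abc", "session-2": "cred-xyz"}

# Issuer revokes Alice's credential and reveals the mapping
revoked = {"cred-xyz"}
issuer_disclosure = {"cred-xyz": "Alice"}

print(linked_sessions(verifier_log, revoked, issuer_disclosure))
# {'session-2': 'Alice'}
```

The point is that nothing cryptographic is needed for the attack: plain set intersection between the verifier's logs and the issuer's disclosures is enough, which is why the mitigation has to prevent the logged IDs from being linkable in the first place.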


The EU's own experts have modeled it. At least that's my understanding of what they are saying in their "Privacy risks and mitigation" document [1].

Section 5 mentions that this issue could be mitigated at some point in the future by using ZKPs, but here's what they're saying about the status of this ZKP integration:

"This topic will be revisited in Topic G to determine the foundational requirements needed for its future integration"

Doesn't sound like this will be implemented any time soon.

[1] https://eudi.dev/2.5.0/discussion-topics/a-privacy-risks-and...


The defense analogy makes absolutely no sense. All the examples are of production shutdowns or reductions. Knowledge was lost because people retired and not replaced at all. None of it was lost to automation.

Automation is the exact opposite of tying knowledge to people. It's extracting knowledge from people and transferring it to a machine that can continue to produce the goods.

Yes, AI can lead to problems and some of these problems will be related to gaps in knowledge that was thought to be obsolete when it really wasn't. But that's a totally different problem on a totally different scale from what happened with defense production after the end of the cold war.

Nobody is shutting down or reducing software production. On the contrary, we're going to be making a lot more of it.


I'm also confused by that, as well as by the moral of the story as a whole. I get the sentiment, but what is the lesson here? Keep production capacity going, along with a "hiring pipeline", and just stockpile the output forever? Also, given the article's take on the current state of AI-assisted coding, it seems to suggest we need to apply the same logic to other industries too: just don't let any industry or practice die, and keep it alive? I would appreciate some input on an actual solution, or at least ideas toward one, but that is not present in the article.

Exactly. The US hasn't forgotten how to manufacture, in fact a ton of manufacturing happens in the US. What's happened is that it's been automated. And automation is one of the better ways to extract knowledge from a person who will one day switch jobs, retire or pass away.

>Companies using the value of their shares to fund demand for their services.

That's not what's happening here though. Google isn't using the value of its shares to fund demand. Google is using its own cash flow to fund this demand from Anthropic.

The question is whether Anthropic has demand from end users for the capacity they are buying from Google (that's a yes I guess) and whether that demand is profitable for Anthropic (that's a question mark).


True.

Regardless, (a) its ability/desire to make such investments is still driven by stock-driven optimism, and (b) these transactions' "signal" can have a similar, warping effect.

In this case the transaction creates demand for Google's services and also funds Anthropic's growth... which represents demand for Google's services.

"Loop" is an approximate analogy. The risk is that enough such transactions create a dynamic that distorts feedback.


>(a) its ability/desire to make such investments is still driven by stock-driven optimism

I don't think it has much to do with the stock price at all. Current platform oligopolists fear the rise of new platforms. They want a foot in the door for strategic reasons.

What could happen is that frontier labs like Anthropic and OpenAI never become platforms and turn out to be providers of a largely commoditised, low margin service.

In that event, current valuations are too high. But Anthropic's valuation doesn't seem extreme to me. Their $30bn annual run rate is valued at $380bn.

Given this price and Anthropic's strategic value, Google's investment seems reasonable.


But OpenAI/Anthropic are not selling the compute as they're buying that from Google/Amazon/etc.

So they're selling the transformation, or the model. Or the ability to make a model. And their brand and their harness.

And it seems like the model is definitely not worth 380 billion. Models depreciate incredibly fast. There are lots of models and the other models aren't that far behind.

And it seems like the harness is not worth much as there's already open source alternatives that people claim are better.

And all these companies are paying lots of money for these AI training experts.

But I suspect that any regular Hacker News reader of 10 years dev experience could become a training expert in months if allowed to play with a load of compute and a lot of data for a bit.

Just like any of us could have become a data scientist, this stuff is not particularly hard. Random horny dudes on the internet are putting out LoRAs and quantized models within days against the open source image models.

So what's worth 380 billion exactly? The brand?

These valuations just look really off. Not by one order of magnitude, but more like by three orders of magnitude. Like 380 million might be a reasonable valuation, but not billion.

What I also don't get is that it's pretty obvious to me that the Europeans should all be spinning up their own, not necessarily massive, data centers and throwing a few billion at some guys in Cambridge or Stockholm or London or Berlin to make their own AI models.

Only the French have done it.

But instead the rest seem to be trying to court Anthropic or OpenAI to build data centers. Which is just stupid politics given what's happening in the world right now.


The technical task is not the business task... unless the task really is a commodity.

Coding Facebook isn't rocket surgery either. Neither are Visa, Salesforce, or many other tech-centric companies. Replicating their business model is.

Those are locked in by network effects. Path dependencies and suchlike can play a role. But... the upshot is that Anthropic, OpenAI and whatnot have the model people are using for work.

A government sponsored model isn't a bad thing to have, but I think it's unlikely (though possible) that it will also be the product people want to use or the business that succeeds.


>So what's worth 380 billion exactly? The brand?

Whatever it is that leads to a $30bn run rate, growing >200%. Right now it's having the better model and being able to show how to use it in specific verticals.

But I suspect in the long run only platforms have high margins (and they will need margins not just revenues to justify their valuation). Are they becoming platforms? Google seems to think (or fear) that they might.


Not directly related to the valuation question you asked, but for Google there's a lot of value in getting as much Anthropic workload to run on their hardware as possible. The value comes from getting the insights and learnings of running these workloads, especially when they run on custom Google hardware. That hardware will get better as a result and increase the likelihood that Google has world class AI hardware in the future.

I can't say with any confidence that the $40B is a reasonable amount to pay for that value, but it doesn't seem unreasonable over a multi year time horizon given the stakes.


Moonshot (Kimi) and DeepSeek trained their models on Chinese GPUs, with little capital, and are now raising at around $20bn valuations.

Their latest models are arguably comparable to frontier ones. It is obvious that the valuations of the US companies are totally surreal now.


Apparently it's not obvious, judging by the investment in them and their stock value.

Kimi and DeepSeek are in China and don't have access to the US capital market.

Because everybody is playing the same game?

>So what's worth 380 billion exactly? The brand?

>These valuations just look really off. Not by one order of magnitude, but more like by 4 orders of magnitude. Like 380 million might be a reasonable valuation, but not billion.

Or maybe the USD isn't worth that much now.


I use a separate dev user account (on macOS) for package installations, VSCode extensions, coding agents and various other developer activities.

I know it's far from watertight (and it's useless if you're working with bitwarden itself), but I hope it blocks the low hanging fruit sort of attacks.


Check your home folder permissions on macOS; last time I checked, mine were world-readable (until I changed them). I was very surprised by it, and only noticed when adding a new user account for my wife.

I noticed that too (and changed it). The home folder appears to be world readable because otherwise sharing via the Public folder wouldn't work. The folders where the actual data lives are not world readable.

I think this is a bad idea, because it means the permissions of any new folders have to be closely guarded, which is easy to forget.
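Since it's easy to forget, this kind of check can be automated. A small sketch (not any macOS-specific tooling; `world_readable_entries` is a name made up for this example) that flags direct children of a directory whose mode grants read access to "other" users:

```python
import os
import stat

def world_readable_entries(path):
    """Return names of direct children of `path` whose permission bits
    grant read access to 'other' users (the S_IROTH bit)."""
    flagged = []
    for name in sorted(os.listdir(path)):
        mode = os.stat(os.path.join(path, name)).st_mode
        if mode & stat.S_IROTH:
            flagged.append(name)
    return flagged

if __name__ == "__main__":
    # Check the top level of the current user's home directory
    print(world_readable_entries(os.path.expanduser("~")))
```

Running it periodically (or from a shell profile) would catch new folders that were created with overly open permissions before they accumulate sensitive data.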


The thinking appears to be that a model that can do the work of a developer must be worth a significant share of a developer salary. I think this idea is flawed.

Developer salaries are driven up by scarcity - scarcity of developer skills overall and scarcity of developer skills in specific places like California. If AI models destroy the scarcity then the price worth paying for a coding agent will drop dramatically.

Maybe Anthropic can get away with it for a couple of months. But this will not last.


But if e.g. a developer can do 50% more, shouldn't it be worth it to pay up to 50% of developer salary for the product?

The % is debatable, of course. There are cases where an AI agent can save weeks' worth of investigation, cases where you're mainly blocked by processes, and many other circumstances. It's up to each company to decide. But if they decide it's 50%, why shouldn't they spend 50% of a salary on it?

Like imagine a large company with thousands of microservices. You need to build a feature; before, you had to set up cross-timezone team meetings to figure out who owns what, what is happening in each microservice, and how it all connects together. But now you can essentially send an AI agent to scour and prepare all this material for you, which in theory could save hours of back-and-forth meetings during planning.

If 1 hour / 1 eng costs $200, then a 10 people 1h meeting avoided would save $200 x 10 = $2000 alone.

I don't see it as a replacement for dev, it's more of a multiplier.


I believe what GP is saying is that there is a price calculation today, but if enough devs become unemployed, their salaries will go down, making them more competitive by FinOps calculations, at which point the AI prices will have to come down as well. Where the equilibrium is, no one knows.


I think it's an interesting hypothesis but I don't think it works out like that. AI prices aren't set in relation to the work they do; they're set in relation to tokens (input/output). As long as it's cheaper to use those tokens than it is to pay a dev, dev salaries will likely fall. Whenever it becomes cheaper to hire a dev than to use AI, a company will likely just hire a dev. But AI prices won't fall just because dev salaries have.


Yeah, I mean I think there's just too much work and I think devs who are effective with AI won't become unemployed, but their productivity will be multiplied. More will be expected of companies in terms of output, so it will be just more output.


>But if e.g. a developer can do 50% more, shouldn't it be worth it to pay up to 50% of developer salary for the product?

That's the upper bound but it's not the market price.

Accounting software (+ hardware) doesn't cost nearly as much as the accountant hours it saves. Accountant salaries are simply not a relevant yardstick for the price that software vendors can charge for accounting software.

Equally, the market price for code generators will not stay anywhere near the price of developer hours it saves. It will be determined by competition.


Because accounting software is cheaper due to competition. In software eng Claude is currently strongest and there's higher costs involved than normal SaaS. There are many fields in which the tools/machinery cost more than the salaries of people.


>Because accounting software is cheaper due to competition. In software eng Claude is currently strongest and there's higher costs involved than normal SaaS.

Yes, competition not salaries determines the margins that software vendors can charge. That's exactly what I'm saying.

My expectation is that competition between coding agents will stay strong and costs for the current level of software engineering performance will fall.

>There are many fields in which the tools/machinery cost more than the salaries of people.

For example?


Agriculture, oil drilling, trucking, etc...


The machines in those sectors do not cost nearly as much as doing the same work manually. Not even close.


Of course not, but they cost more than the person using them; they multiply the productivity of that person. So if AI multiplied productivity enough, it would make sense as well.


That's beside the point.

The question was whether the salaries that would have been paid for the working hours replaced by the machine are a realistic yardstick for the market price of the machine.

That's clearly not the case in any industry.


In my calculation a good opus developer can do 10x more, not just 50%.

Got all my tickets from the last two years fixed in a few days. And implemented all the ideas that came to my head.


Yes, the idea seems to be to force app developers to support transparency so that any future iGlasses device has a good supply of apps from day one (contrary to what happened with Vision Pro).

Apple used to insist that different types of devices require different UI principles. This seems all the more true for a transparent device that you wear on your face while moving around trying not to bump into physical objects.

But we'll see. Perhaps the right level of transparency is situational. If you sit down with iGlasses using them as a screen you might want to reduce transparency while increasing it when you're moving around outdoors. Adjusting transparency could become as routine as adjusting audio volume.


Why does it not help if both containers can mmap the same -shm file?


Shared memory across containers is a property of a containerization environment, not a property of a file system, "proper" or not.


It's a property of the filesystem; Docker does not virtualize the filesystem.


I can think of a few other reasons:

- Not everyone uses dollars.

- The price of credits in some currency could change after you bought them.

- The price of credits could be different for different customers (commercial, educational, partners, etc)

- They can ban trading of credits or let them expire


> Not everyone uses dollars.

> The price of credits in some currency could change after you bought them.

> The price of credits could be different for different customers (commercial, educational, partners, etc)

Maybe I'm missing something, but doesn't every other compute provider manage that without introducing their own token currency? Convert to the user's currency at the end of the month, when the invoice comes in. On the pricing page, have a table that lists different prices for different customers. I fail to see how tokens make it clearer. Compare:

"This action costs 1 token, and 1 token = $0.03 for educational in the US, or 0.05€ for commercial in the EU"

"This action costs $0.03 for educational in the US, or 0.05€ for commercial in the EU"

> They can ban trading of credits or let them expire

That sounds extremely user-hostile to me


otherwise you end up with "get a $20 subscription for 1000% more value -- equivalent to $200 in API usage!!![1]; [1] -- compared to API pricing for american companies on the first weekend of the month between 18:00 and 22:00 UTC+8 during full moon"

in any case, better than what anthropic does

> user-hostile

credits do expire (I thought they always do?), apparently it's not really up to them: https://news.ycombinator.com/item?id=46230848


>I don’t think it’s completely meaningless if you’re trying to save / invest

It's largely meaningless, because some of what people are saving for in one country can be included in tax and social security contributions in another country - e.g. pensions and university tuition.


What do you think is more meaningful a metric?


Depends on what it is you really want to know. For macro economic comparisons you would probably want to use some metric that has "disposable income" in its name. And then you'd have to ask what this income includes. Does it include cash transfers? Transfers in kind (e.g. for health and education)? Does it use PPP or market exchange rates?

Here's a dataset that Eurostat publishes. It includes cash transfers and transfers in kind, compared using PPP (PPS) exchange rates:

Adjusted gross disposable income of households per capita in PPS

https://ec.europa.eu/eurostat/databrowser/view/tec00113/defa...

