Hacker News | AnthonyMouse's comments

Shills don't need anonymity. They can troll and astroturf just fine under their real names, or the names of the people they're paying to shill for them, because there is no one who comes in the night to put a bag over your head for shilling for the establishment.

The people who need anonymity are the people who would be punished for saying things people in power don't like.


The astroturfing relies mostly on anonymous users. The vast majority of trolling and shilling on Twitter and similar platforms is done with fake identities. So you have a few open shills who are using their real names, with massive campaigns enabled by anonymous/fake users.

Shilling by nation-level actors often involves paying South Asians or Africans to create profiles claiming to be an ordinary person from somewhere completely different. Or people in said countries may not even be paid by a geostrategic rival but are shilling because they identified profit potential in e.g. selling MAGA merchandise. Obviously, what they do depends on pseudonymity, and would fall apart if their real names were shown.

> would fall apart if their real names were shown

I don’t think that’s true, unfortunately. You have lots of cases of major propaganda accounts found to be foreign actors, and pretty much nothing happened to them.


With anonymity, they can 1,000,000x their presence and thus the effectiveness of their message.

> I think there are lots of other things going on there over and above the moderation issue

This gets referred to as the "moderation issue" because its true cause is too inconvenient.

Algorithms that promote engagement also tend to promote conflict. The major services want people spending more time on their service looking at ads, so they promote engagement and therefore conflict.

The cause of it isn't the decentralized internet, it's the centralized corporate feed.


> By the way, not having children is also more eco-friendly, because an infinite series simply converges.

This one isn't actually accurate. Younger people have longer time horizons (i.e. aren't expecting to be dead as soon) and are therefore more likely to support policies like electrifying transportation and generating power from lower CO2 sources, and policies get enacted when they have majority support, so causing the population to skew older by reducing the number of children is ecologically very bad.


> The excuse has to be something nobody can appear to be supporting (pedophilia, terrorism, nazis, etc.).

If this actually works then it should work in both directions, right?

Example: Many websites are malicious or adversarial, therefore anything enabling a service to discern whether the user is a vulnerable child is a boon to website-operating pedos and needs to be eliminated. The law should inhibit predatory services from being able to discern the user's age, to protect the children.


The FATF guidance actually states that if you purchase a VPN license (which shows up on your credit card bill), your bank staff should suspect you of being a pedophile:

https://x.com/moo9000/status/1901906097323012466?s=20


Suppose you don't have ten hosts that each have 175PB of data but rather a million hosts that each have an average of 1.75TB, and therefore the equivalent of 10 full copies. And then something periodically checks whether any given subset of the data has too few copies and makes more.
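The repair loop described above can be sketched in a few lines. Everything here is hypothetical (chunk counts, host counts, and the random initial placement are made up for illustration); the point is just the periodic check-and-replicate step:

```python
import random
from collections import defaultdict

CHUNKS = 1_000         # hypothetical: distinct chunks of the data set
HOSTS = 10_000         # hypothetical: participating hosts
TARGET_COPIES = 10     # desired replication factor ("10 full copies")

# Map each chunk to the set of hosts currently storing it.
placement = defaultdict(set)
for host in range(HOSTS):
    # Each host holds a small random slice of the data.
    for chunk in random.sample(range(CHUNKS), 5):
        placement[chunk].add(host)

def repair_pass() -> int:
    """Find under-replicated chunks and assign additional hosts to them."""
    repaired = 0
    for chunk in range(CHUNKS):
        holders = placement[chunk]
        while len(holders) < TARGET_COPIES:
            candidate = random.randrange(HOSTS)
            if candidate not in holders:
                holders.add(candidate)   # in reality: copy the chunk over
                repaired += 1
    return repaired

repair_pass()
assert all(len(placement[c]) >= TARGET_COPIES for c in range(CHUNKS))
```

A real system would run this continuously and also handle hosts disappearing, but the invariant being maintained is the same.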

How do you ban credentials if they're anonymous? Notice that if you can tell two requests are from the same person then you can do it across services by both of them pretending to be the same service.

Also, what happens to someone whose credentials are compromised? Are you going to ban the credentials of the victim rather than the perpetrator?


There is actually a different problem with this: Suppose there is a major vulnerability in some popular device. 50 million people get compromised; the attacker can now impersonate any of them at will. They go around and create 50 million accounts on various services, or take over the user's existing account on that service.

What are you going to do with their identities at that point? These are real people. If you ban them, you're banning the innocent victim rather than the attacker who still has 49,999,999 more accounts. But if you let them recover their accounts or create new ones, well, the attacker is going to do that too, with all 50 million accounts, as many times as they can. You don't know if this is the attacker coming back for the tenth time to create another spam account or if it's the real victim trying to reclaim their stolen identity.

So are you going to retaliate against the innocent victims by banning them permanently, or are you going to let the attackers keep recycling the same identities because a lot of people can go years without realizing their device is compromised and being used to create accounts on services they don't use?


Yeah that's a big problem. Pretty sure you can see it in real life where lots of old dead accounts with weak passwords on facebook or twitter eventually get hacked. It must be pretty weird to see your dead grampa suddenly start trying to get people to buy some weird scammy crypto.

I guess you could have an eyeball scanner at your computer that only sends out a binary "yes this person is human" to the system every time they log in. That sounds expensive and hackable and just janky though.


Maybe it would result in people taking Internet security seriously and holding companies accountable for data breaches if there were this sort of consequence for it.

Your argument is that we should punish the victims as an incentive to buy better locks?

> It's another factor in why I think the tech community needs to get ahead of governments on the whole "prove your ID on the Internet" thing by having some sort of standard way to do it that doesn't necessarily involve madness in the loop.

The problem here is that the premise is the error. "Prove your ID" is the thing to be prevented. It's the privacy invasion. What people actually want are a disjoint set of only marginally related things:

1) They want a way to rate limit something. IDs do this poorly anyway; everyone has one, so criminal organizations with a botnet just compromise the IDs of innocent people -- and then the innocent are the ones who get banned. The best way to do this one would be to have an anonymous way for ordinary people to pay a nominal fee. A $5 one-time fee to create an account is nothing to most ordinary people but a major expense to spammers who have 10,000 of their accounts banned every day. The ugly hack for not having this is proof of work, which kinda sorta works but not as well, and then you're back to botnets being useful because $50,000/day in losses is cash money to the attacker that in turn funds the service's anti-spam team, but burning up some compromised victim's electricity is at best the opportunity cost of not mining cryptocurrency or similar, which isn't nearly as much. It would be great to solve this one (properly anonymous, easy-to-use small payments) but the state of the law is a significant impediment, so you either need to get some reform through there or come up with a creative way to do it under the existing rules.
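For reference, the "ugly hack" version above -- hashcash-style proof of work -- is simple to sketch. The difficulty constant is a made-up example value; real deployments tune it to the cost they want to impose:

```python
import hashlib
import os

DIFFICULTY = 12  # hypothetical: leading zero bits required (~4096 hashes on average)

def make_challenge() -> bytes:
    """Server issues a fresh random challenge per account-creation attempt."""
    return os.urandom(16)

def solve(challenge: bytes) -> int:
    """Client burns CPU searching for a nonce; expected cost doubles per difficulty bit."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int) -> bool:
    """Server checks the solution with a single cheap hash."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0

challenge = make_challenge()
assert verify(challenge, solve(challenge))
```

The asymmetry (expensive to solve, one hash to verify) is the whole trick -- and also why it fails against botnets, which pay the solving cost with someone else's electricity.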

2) You want to know if someone is e.g. over 18. This is the one where people keep pointing back to government IDs, but you only need one piece of information for this. You don't need their name, their picture, you don't even need their exact birthdate. Since people get older over time rather than younger, all you need to know is whether they've ever been over 18, since in that case they always will be. Which means you can just issue an "over 18" digital signature -- the same signature, so it's provably impossible to tie it to a specific person -- and give a copy to anyone who is over 18. Maybe you change the signature e.g. once a day and unconditionally (whether they require it that day or not) email all the adults a new copy, but again they all get the same indistinguishable current signature. Then there are no timing attacks because the new signature comes to everyone as an unconditional push and is waiting for them in their inbox rather than something where the request coincides with the time you want to use it for something, but kids only have it if an adult is giving it to them every day. The latter is true for basically any age verification system -- if an adult with an ID wants to lend it to you then you can get in.
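The rotating shared-signature idea above can be sketched as follows. This is a simplification under stated assumptions: HMAC stands in for a real digital signature (so in this toy version the verifier needs the issuer key, whereas a real scheme would verify with only a public key), and the key name and dates are made up:

```python
import datetime
import hashlib
import hmac

# Hypothetical issuer secret. In a real deployment this would be the private
# half of a signature keypair; HMAC keeps the sketch self-contained.
ISSUER_KEY = b"hypothetical-issuer-secret"

def daily_token(day: datetime.date) -> bytes:
    """Every verified adult is pushed this same token each day, so holding it
    proves only 'over 18' -- there is nothing per-person to correlate."""
    return hmac.new(ISSUER_KEY, day.isoformat().encode(), hashlib.sha256).digest()

def verify(token: bytes, day: datetime.date) -> bool:
    # Stand-in for signature verification; constant-time compare.
    return hmac.compare_digest(token, daily_token(day))

today = datetime.date(2025, 1, 15)
alice = daily_token(today)   # emailed to one adult
bob = daily_token(today)     # emailed to another
assert alice == bob          # provably indistinguishable between holders
assert verify(alice, today)
assert not verify(alice, today + datetime.timedelta(days=1))  # yesterday's token expires
```

The daily rotation plus unconditional push is what removes the timing side channel: the token is already in everyone's inbox before anyone needs it.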

3) You want to know if the person accessing some account is the same person who created it or is otherwise authorized to use it. This is the traditional use of IDs, e.g. you go to the bank and want to withdraw some cash so you need a bank card or government ID to prove you're the account holder. But this is the problem which is already long-solved on the internet. The user has a username and password, TOTP, etc. and then the service can tell if they're authorized to use the account. It's why you don't need government ID on the internet -- user accounts do the thing it used to do only they don't force you to tie all your accounts together against a single name, which is a feature. The only people who want to prevent this are the surveillance apparatchiks who are trying to take that feature away.


Exactly, "ID" is a solution masquerading as a requirement, the real requirements are far more granular, and the more we can narrow it down then the better our chances are for a solution that isn't evil/abusable.

To recycle parts of an old comment [0]:

> If I had my 'druthers, there would be a kind of physical vending machine installed at local city hall or whatever, which leverages physical controls and (dis-)economies of scale.

> The trusted machine would test your ID (or sometimes accept cash) and dispense single-use tokens to help prove stuff. For example, to prove (A) you are a Real Human, or (B) Real and Over Age X, or (C) you Donated $Y On Some Charity To Show Skin In The Game.

> [...] The black-market in resold tokens would be impaired (not wholly prevented, that's impossible) by factors like: [...] scaling the physical portion of the work [...and...] There's no way to test if a token has already been used, except to spend it.

[0] https://news.ycombinator.com/item?id=45523550


> Note that "attestation through a web of trust" means something like needing an invite from an existing user.

It's probably better to call this something like vouching and leave "attestation" as the contemptible power grab by megacorps delenda est. The advantage in using the same word for a useful thing as a completely unrelated vile thing only goes to the villain.


> My government has already seen my government-issued ID.

If you have a government ID and all you use it for is voting and paying taxes, then they know that you vote and you pay taxes.

If you have to use it for accessing the internet then they know everything you do on the internet. What you read, who you talk to, what you post, when you sleep, where you are at any given time -- it's very much not the same thing as just having a picture of you and your name.


No they do not. A properly designed government app that uses cryptography to generate a deniable token that can't be cross-correlated but proves your humanity/age to a consuming site is manifestly different than Google adtech hoovering up as much of your activity as possible.

> A properly designed government app

Oof, that's not a great premise to take as a requirement right out of the gate. More counterexamples than examples for that one.

> that uses cryptography to generate a deniable token that can't be cross-correlated but proves your humanity/age

If it's actually deniable/anonymous then how would it work for rate limiting? If you can't correlate their activity then you don't know if the million requests are a million people or one bot with a million connections. If you can correlate their activity then it's not anonymous.

Moreover, it's a false dichotomy that we should be doing either of these things. The better alternative to corporate surveillance isn't government IDs, it's no surveillance.


A site can still choose to have a login system if it wants to. Sites can still rate limit based on IP address or cookies or whatever they use today.

The idea would be to use ZK proofs to demonstrate that "yes, this anonymous request is from a client acting on behalf of an adult human EU citizen" - that's something that is not easy to do today.


> A site can still choose to have a login system if it wants to. Sites can still rate limit based on IP address or cookies or whatever they use today.

So then you don't need either attestation or government IDs, right?

> The idea would be to use ZK proofs to demonstrate that "yes, this anonymous request is from a client acting on behalf of an adult human EU citizen" - that's something that is not easy to do today.

But how is that even useful? Is it good to exclude real people from Korea or South America? Do we really expect criminal organizations or for that matter even children to be unable to find a single adult EU citizen willing to anonymously loan them an ID?

It's about as plausible as criminals being unable to run their code on a device that can pass attestation. They're both authoritarians with a conflict of interest trying to foist a hellscape on everyone under a pretext their proposal can't even really address.


> It's about as plausible as criminals being unable to run their code on a device that can pass attestation. They're both authoritarians with a conflict of interest trying to foist a hellscape on everyone under a pretext their proposal can't even really address.

How is the system proposed by GP authoritarian? It's not actually giving away any real PII. We could just argue that it would make the Internet less usable for "illegal" immigrants who don't have a Gov ID - which can be seen as a problem already in itself, but still doesn't make that solution "authoritarian".


> How is the system proposed by GP authoritarian? It's not actually giving away any real PII.

These proposals have two major flaws.

1) They're predicated on a secure implementation, but any government-mandated system is going to be instantaneously ossified. Everyone will have to interface with it and then lobby heavily to prevent it from changing and requiring them to do more work. The initial implementation therefore has to be perfect. Free of not just current but also future vulnerabilities. That has never happened before and isn't likely to. But then you're proposing something with an extremely high probability of permanently compromising everyone's security as required by law.

2) They're structurally authoritarian.

Suppose the initial implementation was actually secure. I can even propose one: Every adult ID has the same QR code on it which you have to scan to be let in. There is no way of distinguishing any of them since they're completely identical even between different IDs, but only the adult IDs have them.

Great, now you just have to scan your ID to be let in. Papers, please. Are ordinary people going to be able to distinguish this from what comes immediately after, when they say the anonymity is causing kids to be let in so they're going to make the QR codes unique, allowing them to track everyone and find out who is lending a kid their ID? Then the infrastructure is already in place. All they have to do is change the implementation out from under you and it's an instant panopticon. Turnkey mass surveillance is authoritarian even if you haven't turned it on yet.

> We could just argue that it would make Internet less usable for "illegal" immigrants who don't have a Gov ID

We're talking about the internet here. People are required to be neither immigrants nor illegal for them to be citizens of another country.


You're moving the goalposts. I was responding to your claim that any verification system involves the government getting a complete record of all online activity.

If you're willing to admit this is entirely possible from a technical standpoint, there's a separate question about how useful/valuable it is.

Making it harder for children to access extreme pornographic or violent content seems useful to me. Many advertisers want to be able to say they've shown ads to a human not a bot. Humans in WEIRD* countries have more valuable eyeballs than humans in the developing world.

If you don't solve for those use-cases in a privacy preserving way, adtech will do it in an intrusive way - which is what Google are doing in the OP.

*"Western, Educated, Industrialized, Rich, and Democratic"


I have not seen any government adopt such a standard.

Some EU countries claim to provide anonymous age verification services, but those only hide your identity from the relying party. The site you visited is logged to the government's database along with your identity, before you're redirected to the target site with an "anonymous" token.


> the site you visited is logged to the government's database along with your identity

Is that true, or are you spreading FUD? Because the system in question is not even live yet, it's only had experimental releases.



That's not the system I'm talking about: https://ageverification.dev/

> Unlinkability is achieved by design through Zero-Knowledge Proof cryptography see the "Privacy by design" section below.


They could do it like that, but they won't do it like that, because tracking the population is a feature, not a bug.
