
I thought they can't scan the media in iCloud, since the media is encrypted, no?

Also, regarding "If it was someone sending you hashbombs of intentional false matches or an innocuous pic that matched because of some mathematical anomaly, the actual human would notice this instantly and no action would've been taken" - if someone is doing this, imagine the scale: thousands of pics that would need human evaluation, multiplied across thousands of people; it'll just get plain ignored, meaning the system loses its purpose. Also, you say it'll be enabled only if iCloud backup is enabled, but that's not guaranteed, and the assumption can change later... and it doesn't make sense to me; your two statements contradict each other:

- if Apple can scan your photos in iCloud AND the feature only works when iCloud is enabled, why should they send hashes to your device? They can scan the photos in iCloud anyway, since all your photos are backed up there. Unless... they can't scan photos in iCloud because those are encrypted, meaning scanning can only be done locally before photos are sent, meaning enabling iCloud is not mandatory and it could work without it.
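A very rough sketch of what "match locally, before upload" could look like. Everything here is illustrative and my own assumption: the exact SHA-256 lookup, the toy XOR "encryption" and the plain matched flag are placeholders, and Apple's actual proposal used a perceptual hash plus blinded tables and encrypted safety vouchers, which is far more involved than this.

    import hashlib

    # Hypothetical digests of known, confirmed images (placeholder values).
    KNOWN_HASHES = {
        hashlib.sha256(b"known-bad-image-1").hexdigest(),
        hashlib.sha256(b"known-bad-image-2").hexdigest(),
    }

    def prepare_upload(photo_bytes: bytes, key: bytes) -> dict:
        # The match is decided on the device; the server never sees the photo itself.
        digest = hashlib.sha256(photo_bytes).hexdigest()
        # Toy stand-in for real encryption, just to mark what the server receives.
        ciphertext = bytes(b ^ key[i % len(key)] for i, b in enumerate(photo_bytes))
        return {"ciphertext": ciphertext, "matched": digest in KNOWN_HASHES}

    print(prepare_upload(b"holiday photo", key=b"secret"))        # matched: False
    print(prepare_upload(b"known-bad-image-1", key=b"secret"))    # matched: True

The only point of the sketch is that nothing in such a flow requires the server to be able to read the photos, which is why the iCloud requirement looks arbitrary.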

Either way, the CSAM scanning is IMO pointless: on one hand because of privacy reasons (and we've seen that if the state is able to use a backdoor, it will use it when needed), and on the other hand because of generative algorithms. Photos can be manipulated to trigger a CSAM match even if the human eye sees something else entirely (aka a hashbomb), OR a sick/ill-intentioned person can generate a legit-looking CSAM photo just by using a target person's face (or a description of it). In that case I don't even know if they are breaking the law or not, since the image is totally generated but looks totally illegal.



You do know that we currently have "thousands of people" watching for and tagging the most heinous shit people upload to social media, right? There are multiple sources on how outsourced workers from Africa and Asia are used to wade through all the filth people upload to FB alone.

"Looking illegal" isn't enough to trigger a CSAM check in this case. It's perfectly normal to take pictures of your own kids without clothes in most of Europe for example. Nothing illegal.

That's why the checks would've been done explicitly against known and confirmed CSAM images. It wasn't some kind of check_if_penis() algorithm, or one of the shitty ones that trigger if there's too much (white) skin colour in an image.


Again, somebody can train an algorithm to create false positives, or real CSAM-like pictures that are close enough to trigger the check. AFAIK the CSAM check is not about an exact match but rather a close-enough match based on a clever hashing algorithm, and in that case the algorithm can be induced into false positives (and to my limited knowledge, hashing can have collisions), or even "true" positives that are fully generated (and AFAIK generated images are not illegal, but I guess that depends on the country).
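To make the "close enough match" point concrete, here's a toy sketch in the spirit of an average hash. It is nothing like Apple's actual NeuralHash; the 16-value "images", the bit threshold and the helper names are all made up. It only shows why this kind of matching tolerates small changes, and therefore why near-collisions are conceivable.

    def average_hash(pixels):
        # pixels: flat list of grayscale values (a tiny thumbnail).
        # Bit is 1 where the pixel is brighter than the mean.
        mean = sum(pixels) / len(pixels)
        return [1 if p > mean else 0 for p in pixels]

    def hamming_distance(h1, h2):
        # Number of differing bits between two hashes.
        return sum(b1 != b2 for b1, b2 in zip(h1, h2))

    MATCH_THRESHOLD = 3  # hypothetical: "close enough" if only a few bits differ

    original     = [10, 200, 30, 180, 20, 190, 40, 170, 15, 195, 35, 185, 25, 175, 45, 165]
    recompressed = [12, 198, 28, 182, 22, 188, 38, 172, 17, 193, 33, 187, 23, 177, 43, 163]

    d = hamming_distance(average_hash(original), average_hash(recompressed))
    print(d, "-> match" if d <= MATCH_THRESHOLD else "-> no match")

Because matching is by distance rather than equality, an attacker can search for an unrelated-looking image whose hash lands within the threshold, which is the collision/false-positive concern above.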

Outsourcing the work for this (AFAIK) isn't possible, since it's private data, not public, and only specific organisations can have full access to potential triggers.

But in the end it also doesn't matter, because there are other problems too, like how to make the final hash list easily auditable so that we can be sure governments/companies don't alter it to target specific people/groups for their own interests. Or how to make sure the algorithm isn't modified under the hood to check not just images but also text/files.


People DID get into a huge fuss and started building false-positive generators in a huge wave, like they were proving something or pwning Apple.

Nobody read the bit about an actual human verifying results before any law enforcement would be called in.

And outsourcing checking is a huge industry even today[0]. How do you think the huge social media companies keep CSAM, gore etc out of their systems? They're not using Pied Piper's hotdog or not algorithm, that's for sure.

[0] https://www.theverge.com/2019/2/25/18229714/cognizant-facebo...

A snippet from the article:

> The video depicts a man being murdered. Someone is stabbing him, dozens of times, while he screams and begs for his life. Chloe’s job is to tell the room whether this post should be removed.


"And outsourcing checking is a huge industry even today[0]. How do you think the huge social media companies keep CSAM, gore etc out of their systems? They're not using Pied Piper's hotdog or not algorithm, that's for sure." - is this done for private data or public? Checking public data can be outsourced, no problem; I'm not sure about private data.

"Nobody read the bit about an actual human verifying results before any law enforcement would be called in." - again, if the system can be gamed, the human check is useless. Imagine someone generates 100k false positives, multiply that by 1k people, and imagine that among them there is a real bad actor with 10 real CSAM images and another person with fully generated CSAM-like images (generated porn is a thing now, so generating CSAM-like images is possible). How do you think the government will have humans evaluate the 100 million pictures triggering the system? Because if they can't, the system is useless for this use case, but still useful for potential government oppression or company ad targeting.


You can imagine all you want, but you're still wrong.

You can generate a billion false positives and it still won't do anything. You need to get them into people's iCloud photo libraries first. Each library needs to contain multiple false-positive images before a human check is triggered. They intentionally didn't reveal the exact number needed, but it's not two.
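A minimal sketch of that threshold idea (the value 30 below is made up; they only said it's not two):

    REVIEW_THRESHOLD = 30  # hypothetical; the real number was never published

    def needs_human_review(match_count: int) -> bool:
        # A single library is escalated only after it accumulates many matches.
        return match_count >= REVIEW_THRESHOLD

    libraries = {"alice": 1, "bob": 2, "mallory": 45}
    for user, matches in libraries.items():
        verdict = "escalate to a human reviewer" if needs_human_review(matches) else "do nothing"
        print(user, "->", verdict)

Scattering individual false positives across many accounts does nothing until some single account crosses that line.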

If you have a way to get fake CSAM material onto enough people's phones to overwhelm the human checkers, why would you waste it on something stupid like that? Just forcibly inserting advertisements into people's photo libraries would make you a billionaire. Not an ethical one, but still rich.

Oh, and just for reference: FB gets 350 million photos uploaded every day, and they keep it moderated just fine. The number of people you'd need colluding with you to overwhelm the system Apple had designed would be staggering.

And then you've achieved what? Make it possible to share child pornography because you broke the system? Yay, victory?



