
If you're uploading to the cloud, you have to trust a lot more than just your OS vendor (well, in the default case, your OS vendor often == your cloud vendor, but the access is a lot greater once the data is on the cloud).

And if your phone has the capability to upload to the cloud, then you have to trust your OS vendor to respect your wish if you disable it, etc.

It's curious that this is the particular breaking point on the slope for people.

The "on device" aspect just makes it more immediate feeling, I guess?



Yes, you had to trust Apple, but the huge difference with this new system is that hiding behind CSAM gives them far greater plausible deniability (legally obligated, in fact, because showing you the images those hashes came from would be illegal) and makes their claims much harder to verify.

In other words, extracting the code and analysing it to determine that it does what you expect is, although not easy, still legal. But the source material, the CSAM itself, is illegal to possess, so you can't do that verification, much less publish the results. It is this effective legal moat shielding the system from anyone who questions its ultimate targets that people are worried about.


Surely they could do their image matching against all photos in iCloud without telling you in advance, and then you'd be in exactly the same boat? Google was doing this for email as early as 2014, for instance, with the same concerns about its extensibility raised by the ACLU: https://www.theguardian.com/technology/2014/aug/04/google-ch...

So in a world where Apple pushes you to set up iCloud Photos by default, and can do whatever they want there, and other platforms have been doing this sort of thing for years, it's a bit startling that "on device before you upload" vs "on uploaded content" triggers far more discontent?

Maybe it's that Apple announced it at all, vs doing it relatively silently like the others? Apple has always had access to every photo on your device, after all.


It isn't startling; people trust that they can opt out of iCloud Photos.


If you trust that you can opt out of iCloud Photos to avoid server-side scanning, trusting that this on-device scanning only happens as part of the iCloud Photos upload process (with the only way it submits the reports being as metadata attached to the photo-upload, as far as I can tell) seems equivalent.

There's certainly a slippery-slope argument, where some future update might change that scanning behavior. But the system-as-currently-presented seems similarly trustable.


I trust that Apple doesn't upload the photos of people who have opted out, because that would be hard to hide.


I bet it'd take a while. The initial sync for someone with a large library is big, but just turning on upload for new pictures is only a few megabytes a day. Depending on how many pictures you take, of course. And if you're caught, an anodyne "a bug in iCloud Photo sync was causing increased data usage" statement and note in the next iOS patch notes would have you covered.

And that's assuming they weren't actively hiding anything by e.g. splitting them up into chunks that could be slipped into legitimate traffic with Apple's servers.


Yeah, it's weird. Speaking purely personally, whether the scanning happens immediately-before-upload on my phone or immediately-after-upload in the cloud doesn't really make a difference to me. But this is clearly not a universal opinion.

The most optimistic take on this I can see is that this program could be the prelude to needing to trust fewer parties. If Apple can turn on end-to-end encryption for photos, using this program as the PR shield against law enforcement to be able to do it, that'd leave us having to trust only the OS vendor.


> Speaking purely personally, whether the scanning happens immediately-before-upload on my phone or immediately-after-upload in the cloud doesn't really make a difference to me.

What I find interesting is that so many people find it worse to do it on device, because of the risk that they do it to photos you don't intend to upload. This is clearly where Apple got caught off-guard, because to them, on-device = private.

It seems like the issue is really the mixing of on-device and off. People seem to be fine with on-device data that stays on-device, and relatively fine with the idea that Apple gets your content if you upload it to them. But when they analyze the data on-device, and then upload the results to the cloud, that really gets people.


Is this really surprising to you? I'm not trying to be rude, but this is an enormous distinction. In today's world, smartphones are basically an appendage of your body. They should not work to potentially incriminate their owner.


> They should not work to potentially incriminate their owner.

But that ship has long sailed, right?

Every packet that leaves a device potentially incriminates its owner. Every access point and router is a potential capture point.


When I use a web service, I expect my data to be collected by the service, especially if it is free of charge.

A device I own should not be allowed to collect and scan my data without my permission.


> A device I own should not be allowed to collect and scan my data without my permission.

It's not scanning; it's creating a cryptographic safety voucher for each photo you upload to iCloud Photos. And unless you reach a threshold of 30 CSAM images, Apple knows nothing about any of your photos.
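(For anyone curious how a hard threshold like that can be enforced cryptographically rather than by policy: below is a minimal sketch of the idea using Shamir secret sharing over a prime field. This is illustrative only, not Apple's actual construction; the names and parameters are invented, and it omits the private-set-intersection layer entirely. A per-account key protecting voucher contents is split into shares, one share rides along with each matching photo, and nothing is recoverable until at least 30 shares exist.)

    # Minimal sketch (hypothetical, not Apple's code) of threshold secret sharing:
    # a per-account key is split so that any 30 shares recover it, but 29 reveal nothing.
    import random

    PRIME = 2**127 - 1   # field modulus, an arbitrary illustrative choice
    THRESHOLD = 30       # the "30 images" figure from the comment above

    def split_secret(secret, n_shares, k=THRESHOLD):
        """Split `secret` into n_shares points on a random degree-(k-1) polynomial."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
        def eval_poly(x):
            acc = 0
            for c in reversed(coeffs):
                acc = (acc * x + c) % PRIME
            return acc
        return [(x, eval_poly(x)) for x in range(1, n_shares + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x=0; only meaningful with >= THRESHOLD shares."""
        secret = 0
        for xi, yi in shares:
            num, den = 1, 1
            for xj, _ in shares:
                if xj != xi:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    account_key = random.randrange(PRIME)
    shares = split_secret(account_key, n_shares=100)  # one share per matching voucher
    print(reconstruct(shares[:THRESHOLD]) == account_key)      # True: 30 shares recover the key
    print(reconstruct(shares[:THRESHOLD - 1]) == account_key)  # False (w.h.p.): 29 shares don't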


From the point of view of how image processing works, what is happening can indeed be called “scanning”.


This seems like a necessary discussion to have in preparation for widespread, default end-to-end encryption.


Them adding encrypted hashes to photos you don't intend to upload would be pointless and not much of a threat, given the photos themselves are right there on the device. They don't do it, but even if they did, it doesn't feel like a huge risk.


No, the threat model differs entirely. Local scanning introduces a whole host of single points of failure, including the 'independent auditor' & involuntary scans, that risk the privacy & security of all local files on a device. Cloud scanning largely precludes these potential vulnerabilities.


Your phone threat model should already include "the OS author has full access to do whatever they want to whatever data is on my phone, and can change what they do any time they push out an update."

I don't think anyone's necessarily being too upset or paranoid about THIS, but maybe everyone should also be a little less trusting of every closed OS - macOS, Windows, Android as provided by Google - that has root access too.


Sure, but that doesn't change the fact that the vulnerabilities with local scanning remain a significant superset of cloud scanning's.

Apple has built iOS off user trust & goodwill, unlike most other OSes.


Cloud-scanning vulnerability: no transparency over data use. On the phone, you can always confirm the contents of what's added to the safety voucher's associated data. In the cloud, anything about your photos is fair game.

Where does that fit in your set intersection?


> On the phone, you can always confirm the contents of what’s added to the safety voucher’s associated data.

...except you can't? Not sure where these assumptions come from.


The point is that it's code running on your device, so while "you" doesn't include everyone, it does include people who will verify this to a greater extent than if it were done in the cloud.


It differs, but iOS already scans images locally, and we really don't know what they do with the metadata or what "hidden" categories there are.


Yes, which is exactly why Apple breaching user trust matters.


And how is telling you in great detail about what they’re planning to do months before they do it and giving you a way to opt out in advance a breach of trust? What more did you expect from them?


> What more did you expect from them?

Well, they could just not do it.


You might prefer that, but it doesn’t violate your privacy for them to prefer a different strategy.


Why even ask the question "What more did you expect from them?" if you didn't care about the answer?

I gave a pretty obvious and clear answer to that, and apparently you didn't care about the question in the first place, and have now misdirected to something else.

I am also not sure what possible definition of "privacy" you could be using that would not cover things such as on-device photo scanning for the purpose of reporting people to the police.

Like, let's say it wasn't Apple doing this. Let's say it was the government. As in, the government required every computer that you own to be monitored for certain photos, at which point the info would be sent to them, and they would arrest you.

Without a warrant.

Surely you'd agree that this violates people's privacy? The only difference in this case is that the government now gets to sidestep 4th Amendment protections by having a company do it instead.


My question was directed at someone who claimed their privacy was violated, and I asked them to explain how they would’ve liked their service provider to handle a difference in opinion about what to build in the future. I don’t think your comment clarifies that.


> how they would’ve liked their service provider to handle a difference in opinion about what to build in the future

And the answer is that they shouldn't implement things that violate people's privacy, such as things that would be illegal for the government to do without a warrant.

That is the answer. If it is something that the government would need a warrant for, then they shouldn't do it, and doing it would violate people's privacy.


You forgot, 'after it leaked'


It's almost certain the "leak" came from someone they had pre-briefed prior to the launch. You don't put together 80+ pages of technical documentation, with testimony from multiple experts, in 16 hours.


'Almost certain'? Have you heard of contingency planning?


What's the difference between hybrid cloud/local scanning that, "due to a bug", checks all your files and uploads too many safety vouchers, and cloud scanning that, "due to a bug", uploads all your files and checks them there?


...because cloud uploads require explicit user consent, practically speaking? Apple's system requires none.


Wouldn't both of those scenarios imply that the "bug" is bypassing any normal user consent? They're only practically different in that the "upload them all for cloud-scanning" one would take longer and use more bandwidth, but I suspect very few people would notice.


I think the difference lies in the visibility of each system in typical use. Apple's local scanning remains invisible to the user, in contrast to cloud uploading.


[flagged]


Ditto, too bad you got flagged earlier



