Siri, how many of my photos did you send to the government today?

The meme above was shared by Edward Snowden as the story unfolded.

See also this: “Apple explains how iPhones will scan photos for child-sexual-abuse images”. Now, how could anyone possibly have any problem with that?

The way this will work is that your phone will download a database of “fingerprints” for all of the bad images (child porn, terrorist recruitment videos, etc.). It will check each image on your phone for matches. The fingerprints are ‘imprecise’ so they can catch close matches.

Whoever controls this list can search for whatever content they want on your phone, and you don’t really have any way to know what’s on that list because it’s invisible to you (and just a bunch of opaque numbers, even if you hack into your phone to get the list.)

The theory is that you will trust Apple to only include really bad images. Say, images curated by the National Center for Missing and Exploited Children (NCMEC). You’d better trust them, because trust is all you have.

[…]

But there are worse things than worrying about Apple being malicious. I mentioned that these perceptual hash functions were “imprecise”. This is on purpose. They’re designed to find images that look like the bad images, even if they’ve been resized, compressed, etc.

This means that, depending on how they work, it might be possible for someone to make problematic images that “match” entirely harmless images. Like political images shared by persecuted groups. These harmless images would be reported to the provider.

[…]

Initially Apple is not going to deploy this system on your encrypted images. They’re going to use it on your phone’s photo library, and only if you have iCloud Backup turned on. So in that sense, “phew”: it will only scan data that Apple’s servers already have. No problem right?

But ask yourself: why would Apple spend so much time and effort designing a system that is *specifically* designed to scan images that exist (in plaintext) only on your phone — if they didn’t eventually plan to use it for data that you don’t share in plaintext with Apple?

Regardless of what Apple’s long term plans are, they’ve sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content. That’s the message they’re sending to governments, competing services, China, you.
— Matthew Green
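
To make the mechanism Green describes a little more concrete, here is a minimal sketch of the on-device check. It uses a simple difference hash (dHash) as a stand-in for Apple’s NeuralHash, which is a neural-network-based perceptual hash wrapped in considerably more cryptography (a blinded database and private set intersection, per Apple’s technical summary); the blocklist values and the match threshold below are invented. The point is only that, from the device’s side, the list really is just a bag of opaque numbers.

```python
# Sketch of on-device fingerprint matching. dHash stands in for a real
# perceptual hash; the blocklist values are invented for illustration.
from PIL import Image

def dhash(path: str, hash_size: int = 8) -> int:
    """Shrink to (hash_size+1) x hash_size grayscale and record, for each
    pixel, whether it is brighter than its right-hand neighbour."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    px = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = px[row * (hash_size + 1) + col]
            right = px[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | int(left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of bits in which two 64-bit fingerprints differ."""
    return bin(a ^ b).count("1")

# The downloaded "database of fingerprints": opaque numbers, nothing more.
BLOCKLIST = {0x8F3A1C9D2B6E4F07, 0x13572468ACE0BDF9}   # made-up values

def scan_library(paths, threshold=10):
    """Yield any photo whose fingerprint is 'close enough' to a listed one."""
    for path in paths:
        h = dhash(path)
        if any(hamming(h, bad) <= threshold for bad in BLOCKLIST):
            yield path          # in the real system: reported to the provider
```

Whoever writes entries into BLOCKLIST decides what scan_library flags, and nothing in the numbers themselves tells you what they are fingerprints of.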
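
The ‘imprecise on purpose’ part is also easy to demonstrate with the same stand-in hash: shrink a photo, recompress it heavily, and its fingerprint still lands within a few bits of the original’s, comfortably inside a small Hamming-distance threshold. (This assumes Pillow and a local file named photo.jpg; the file name and numbers are illustrative.)

```python
# Robustness of the fingerprint to resizing and recompression (dHash stand-in).
import io
from PIL import Image

def dhash_img(img: Image.Image, hash_size: int = 8) -> int:
    small = img.convert("L").resize((hash_size + 1, hash_size))
    px = list(small.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            bits = (bits << 1) | int(
                px[row * (hash_size + 1) + col] > px[row * (hash_size + 1) + col + 1]
            )
    return bits

original = Image.open("photo.jpg")                    # any local photo

# Make a degraded copy: half the size, aggressively recompressed.
buf = io.BytesIO()
original.resize((original.width // 2, original.height // 2)).save(
    buf, "JPEG", quality=30
)
buf.seek(0)
degraded = Image.open(buf)

distance = bin(dhash_img(original) ^ dhash_img(degraded)).count("1")
print(f"fingerprints differ in {distance} of 64 bits")  # typically just a few
```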
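
As for the warning that someone could craft harmless-looking images that “match” listed fingerprints: against a toy hash like dHash this is trivial, because you can write any 64 target bits directly into a 9×8 brightness pattern and scale it up. Whether, and how cheaply, the same can be done against NeuralHash is a separate question; the sketch below (same caveats as above, target value invented) simply shows why a deliberately fuzzy matcher invites this class of attack.

```python
# Forging an image whose dHash equals an arbitrary 64-bit target.
from PIL import Image

def forge(target: int, hash_size: int = 8, scale: int = 32) -> Image.Image:
    """Build a (hash_size+1) x hash_size brightness grid whose left/right
    comparisons spell out `target`, then blow it up to a viewable size."""
    grid = Image.new("L", (hash_size + 1, hash_size))
    bit = hash_size * hash_size - 1                 # most significant bit first
    for row in range(hash_size):
        level = 128                                 # start mid-grey on each row
        grid.putpixel((0, row), level)
        for col in range(hash_size):
            # A set bit means "this pixel is brighter than its right neighbour".
            step = -15 if (target >> bit) & 1 else 15
            level += step                           # stays within 128 +/- 8*15
            grid.putpixel((col + 1, row), level)
            bit -= 1
    return grid.resize(((hash_size + 1) * scale, hash_size * scale), Image.NEAREST)

forged = forge(0x8F3A1C9D2B6E4F07)                  # same made-up value as above
forged.save("forged.png")
# Hashing this file with the dhash() from the first sketch should land on, or
# within a bit or two of, the target, i.e. inside the match threshold.
```

The output here is just a blocky gradient rather than a convincing photograph; dressing such a pattern up as a natural-looking image takes more effort, but the fingerprint itself offers no resistance.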

So, apart from the problem of British police having built a porn-detecting AI that couldn’t distinguish naked skin from a desert, what this essentially means is that Apple has surrendered its position: yes, it will give governments the opportunity to have your own phone scan your own content for anything they deem to be illegal. In China, that might be information about your heritage; in Scotland, it might be your views on biology and women’s rights.

Apple is planning to build a backdoor into its data storage system and its messaging system.

Child exploitation is a serious problem, and Apple isn’t the first tech company to bend its privacy-protective stance in an attempt to combat it. But that choice will come at a high price for overall user privacy. Apple can explain at length how its technical implementation will preserve privacy and security in its proposed backdoor, but at the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor.

To say that we are disappointed by Apple’s plans is an understatement. Apple has historically been a champion of end-to-end encryption, for all of the same reasons that EFF has articulated time and time again. Apple’s compromise on end-to-end encryption may appease government agencies in the U.S. and abroad, but it is a shocking about-face for users who have relied on the company’s leadership in privacy and security.
— EFF, “Apple’s Plan to ‘Think Different’ About Encryption Opens a Backdoor to Your Private Life”

Also, even in the relatively transparent and civilised world, we already have legal precedent for how, once a system has been built, that very fact will be used to expand its reach:

As I have explained above, the ISPs already have the requisite technology at their disposal. Furthermore, much of the capital investment in that technology has been made for other reasons, in particular to enable the ISPs to implement the IWF blocking regime and/or parental controls. Still further, some of the ISPs’ running costs would also be incurred in any event for the same reasons. It can be seen from the figures I have set out in paragraphs 61-65 above that the marginal cost to each ISP of implementing a single further order is relatively small, even once one includes the ongoing cost of keeping it updated.
— England and Wales High Court (Chancery Division) Decisions (via)
