Apple’s NeuralHash Algorithm Has Been Reverse-Engineered

Apple’s NeuralHash algorithm—the one it’s using for client-side scanning on the iPhone—has been reverse-engineered.

Turns out it was already in iOS 14.3, and someone noticed:

Early tests show that it can tolerate image resizing and compression, but not cropping or rotations.

We also have the first collision: two images that hash to the same value.

The next step is to generate innocuous images that NeuralHash classifies as prohibited content.

This was a bad idea from the start, and Apple never seemed to consider the adversarial context of the system as a whole, and not just the cryptography.

Posted on August 18, 2021 at 11:51 AM

Comments

nobody August 18, 2021 12:45 PM

One wonders if Apple’s client-side scanning isn’t ultimately intended for markets where false positives are even less of an issue than in the US and the technology is merely being labeled as anti-pedophilia for western consumption.

Clive Robinson August 18, 2021 1:34 PM

@ Bruce,

This was a bad idea from the start

You don’t say 😉

Seriously, such systems have always been a very bad idea.

Apple never seemed to consider the adversarial context of the system as a whole, and not just the cryptography.

The thing is there are four potential parties involved,

1, Guard Labour.
2, Apple
3, Attacker trying to attack user.
4, User trying to attack the system.

It’s the third and fourth “remote” parties you really need to think about, because they are “on the remote device”: out at the leaves, not on Apple’s twigs, branches, and trunk leading off to Apple or the Guard Labour.

In theory –only– Apple can protect the “communications network” that makes up those twigs etc. But the practical reality is they cannot protect all of the communications network, especially the inputs and outputs that are effectively beyond their “security end points” and thus their control.

As a third/fourth party you can present any old nonsense to Apple’s system input, and it cannot verify where it has come from. Likewise the nonsense can be used to attack the phone user, Apple’s system, or both. Basically the GIGO principle applies by default when,

1, The input is sufficiently complex.
2, The input has sufficient redundancy.

The issues with Complexity & Redundancy for attacking such systems have been known for nearly half a century via the work of Gustavus Simmons on subliminal channels [1], further extended by the work of Adam Young and Moti Yung on cryptovirology and kleptographic systems [2].

But there is also the simple fact that any hash function whose input can be larger than its output will have multiple inputs that map to the same output, as you would expect after a couple of minutes’ thought.

Therefore, not only can an innocuous file have the same hash as an unlawful file; innocuous files can also be used as subliminal channels for all sorts of activities, including the stealing of private personal information or encryption keys, in a way that the person being attacked cannot recognize, let alone prove.
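To make that many-to-one point concrete, here is a minimal Python sketch. It truncates SHA-256 to 20 bits purely for illustration (nothing to do with Apple’s actual parameters), so a birthday-style search finds two different inputs with the same hash almost immediately:

```python
# Illustrative only: a 20-bit hash space is tiny, so collisions are cheap.
# The tiny_hash function and the generated inputs are hypothetical.
import hashlib

def tiny_hash(data: bytes, bits: int = 20) -> int:
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big")
    return digest >> (256 - bits)        # keep only the top `bits` bits

seen = {}                                # hash value -> first input seen
i = 0
while True:
    msg = f"innocuous file number {i}".encode()
    h = tiny_hash(msg)
    if h in seen:
        print("collision:", seen[h], "and", msg, "->", hex(h))
        break
    seen[h] = msg
    i += 1
```

The same pigeonhole argument applies to any hash whose output is smaller than its input; a fuzzy perceptual hash like NeuralHash only makes the collisions easier to find.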

Which also brings you to realise that you have to consider some of the Guard Labour as attackers, because they too want to attack the system against users, for their own benefit and for political and similar reasons.

There is, by the way, little or nothing Apple can do to defend innocent users either from attackers on the user’s phone or from Guard Labour stealing data.

Consequently, there is nothing Apple can really do to stop malicious users sending hidden keys and the like that turn an apparently innocuous file into an unlawful file.

The only remaining step for attackers is to make their chosen file look unlawful when it’s not, or innocuous when it’s not, depending on their aims and objectives.

[1] https://en.m.wikipedia.org/wiki/Subliminal_channel

[2] https://en.m.wikipedia.org/wiki/Kleptography

Sam August 18, 2021 2:14 PM

One wonders if Apple’s client-side scanning isn’t ultimately intended for markets where false positives are even less of an issue than in the US …

Oh definitely. I can bet that the CSAM scanning tool is also meant to be easily extended to other types of content as demanded by other countries (e.g. India and China). In fact, many Indians already believe Apple released this tool to comply with India’s new undemocratic surveillance laws – Apple is Preparing to Comply with Indian Govt’s New IT Rules – iPhones (and other Apple devices) will soon start deploying built-in surveillance to spy on their users.

humdee August 18, 2021 2:15 PM

@nobody

Correct.

More comprehensively, I am also thinking that this may backfire not just on Apple but also on NCMEC. Up until this point, trying to defeat hashing of sensitive images has primarily been an effort led by those who design malware or exchange child pornography. Any effort to correct Apple’s overreach may spill over and make it more difficult to track bad actors generally.

Even more broadly, this is a good example of why it is bad to implement systems whose backdrop is that one is guilty until proven innocent. The danger is that it pisses off the innocent, and then you’re left with a milieu where bad actors have even more freedom than before.

Phillip Hallam-Baker August 18, 2021 2:16 PM

At this point, the only charitable explanation would be that Apple did this deliberately in the face of govt. pressure to implement schemes to defeat ‘child abuse’.

The core mistake they made here is that, being used to cryptographic digests, they imagined that image hashes must be the same sort of thing. They aren’t, even when the image hash is based on a cryptographic scheme. Finding collisions is going to be trivial because the schemes are designed to provide for a fuzzy match.
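As a rough illustration of that difference, here is a sketch using a toy “average hash” rather than Apple’s NeuralHash (photo.jpg is a hypothetical input file): a perceptual hash survives resizing by design, while a cryptographic digest of the pixel data does not.

```python
# Toy perceptual ("average") hash vs. a cryptographic digest.
# This is NOT NeuralHash; it only illustrates why fuzzy image hashes
# are designed to match across near-duplicate images.
import hashlib
from PIL import Image

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Downscale, grayscale, and threshold each pixel at the mean."""
    small = img.convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

original = Image.open("photo.jpg")                        # hypothetical file
resized = original.resize((original.width // 2, original.height // 2))

# Perceptual hashes are usually identical (or within a few bits) ...
print(hex(average_hash(original)), hex(average_hash(resized)))

# ... while cryptographic digests of the underlying pixels are unrelated.
print(hashlib.sha256(original.tobytes()).hexdigest())
print(hashlib.sha256(resized.tobytes()).hexdigest())
```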

WhiskerInMenlo August 18, 2021 2:32 PM

Yes a bad idea.

Litigation just got more difficult, and that was part of the goal; it taints positives on both the server side and the phone side.

Given the liability of a false allegation, Apple needs to rethink this.
Once the table of hash positives escapes, there is the equivalent of swatting assaults by proxy.

Circle back and review the implications of biometric data and equipment lost in Afghanistan. The implications of a breach of large data sets get troubling.

If I recall correctly, the military postal routing system is highly classified when full or partial large data sets are considered. An individual item, one letter or even a bag of letters, is not classified. All I knew is that a commander did not have clearance for data that a clerk under his or her command had clearance for.

This is a tool with potential abuse written all over it.
Web pages can deliver full-page-size images that display at a size of one pixel or are covered by other opaque content. Single-pixel tricks are already trackers, and now a weapon. Dynamic content makes discovering the bad guys nearly impossible, as the dynamics allow targeting.

Targeting for political content is troubling. Targeting for a nice smelling soap might be welcome.

Ralph Haygood August 18, 2021 3:19 PM

I’m shocked, shocked.

Apple introduced (what looks to be from their description of it) a novel hash algorithm, which had (to my knowledge) never been subjected to public scrutiny. That it turns out to be hackable is deeply unsurprising.

Moreover, Apple declared that it would deploy this algorithm in an invasive way on hundreds of millions of devices. In effect, they painted a gigantic target on themselves. Naturally, hackers are stepping right up to shoot at it. Generating collisions may well become a cottage industry for a while.

Eli the Bearded August 18, 2021 5:41 PM

Looking at that, I don’t see it really reverse engineered. I see someone has a harness for running Apple’s binary code. Which might be good enough for some things, but I don’t think it gets anyone close to finding the hashes of actual CSAM and using those findings to generate collisions.

And if collisions are generated, what happens then? Apple reviews the collisions and concludes “false alarm”, nothing gets reported to anyone?

The only collisions that need to be worried about are collisions with entries in the encrypted CSAM database that also look enough like real CSAM material to pass the human review step.

I don’t generally defend Apple, but I don’t think this is as dangerous as it first appeared.

CoreOfThe August 18, 2021 7:16 PM

NeuralHash allows a low-resolution version of the CSAM image to be viewed by Apple employees in the event that the system identifies a higher-than-threshold number of positives. So the system transmits CSAM to Apple rather than to NCMEC, contrary to federal law. The fact that it is low resolution does not make it non-CSAM. That is how Apple employees manually determine whether a positive is false or not.

OldFish August 18, 2021 8:40 PM

@PH-B

A more likely explanation than your charitable one is that Apple has been pressured, here and abroad, to give broad access to governments. It’s a preemptive defense against future exposure of their craven subservience.

NotThatPhil August 18, 2021 9:08 PM

So, I have an old Android and was considering switching to an iPhone. Should I stick with an Android? How does Apple’s Bad Idea affect the relative security and privacy comparisons of each type of smartphone?

MrC August 18, 2021 11:38 PM

@Eli the Bearded:

I think you’re failing to look at this from an attacker’s perspective.

  1. Test harness demonstrates that NeuralHash just isn’t very good at its stated purpose. So far, it’s already easily defeated by cropping or padding with noise. This knowledge alone is sufficient to modify CSAM images to evade detection.
  2. Test harness gives attackers an oracle to be 100% certain their modifications successfully changed the hash of a CSAM image.
  3. Collisions are trivially easy to generate. This isn’t an immediate problem right now because all people can do is generate a dog photo that collides with a cat photo. But it looks pretty inevitable that someone will figure out how to access the CSAM hashes on the phone, and then they’ll be able to generate a dog photo that collides with a CSAM image and triggers a report to Apple. You’re mistaken to think this would be harmless. On the contrary, it allows an attacker to DOS the system to death. A single device could probably hit Apple with a few million bogus matches per day. Apple either has to find the resources to do a human review of those millions of matches per troublemaker per day (impossible) or they need to put devices that do this on an ignore list. The obvious problem with what is effectively an opt-in ignore list is that the bad guys will just get themselves on the ignore list. It’s pretty much game over.

wiredog August 19, 2021 5:01 AM

@NotThatPhil
As long as you don’t use any Google services you have no problem.

Everybody is ignoring that the reason Apple is doing it this way is to avoid having to decrypt iCloud so that they can scan it there, which is what Google and others have been doing for at least a decade now.

MrC August 19, 2021 7:13 AM

WTH? (Someone else reposted my comment under the same pseudonym with a different font and formatting removed.)

- August 19, 2021 9:36 AM

@MrC:

“Someone else reposted my comment under the same pseudonym with a different font and formatting removed.”

Welcome to a world infested by idiot Troll-Tools, having bashed away at it for weeks without any gratification, be it self-inflicted or otherwise. They have failed to be as much use as a Trumpian 400lb incel in his underpants in the back room of their parents’ home with a keyboard across his legs, pumping away at it to no result, except ridicule poured on it by others.

So now, having tried and failed to get the more regular names into trouble, the 400lb lump of turpitude is trying to “spread out”… Yes, I know it’s a sickening thought, but then that’s about the Troll-Tool’s limits…

M Welinder August 19, 2021 9:50 AM

The collision images went from “white noise” to “not half bad photo-like” in 24h with unoptimized code.

ferritecore August 19, 2021 10:35 AM

I tried posting a comment last night that was carefully considered and on topic. When I submitted it I got notified that it was being held for moderation – fair enough, I’m an unknown here. It seems to have not passed muster.

SpaceLifeForm August 19, 2021 2:02 PM

@ ferritecore

Did your comment have more than 2 links?

Any pointing to suspect websites?

Did you go thru TOR?

Was your comment long?

What time was it?

SpaceLifeForm August 19, 2021 2:28 PM

hxtps://cdt.org/insights/international-coalition-calls-on-apple-to-abandon-plan-to-build-surveillance-capabilities-into-iphones-ipads-and-other-products/

anonymous August 19, 2021 2:50 PM

From the stage at the next Apple Event:

“It’s persistent SURVEILLANCE…that’s BUILT IN to your iPHONE!”

(smug smile; wait for gasps and applause)

SpaceLifeForm August 19, 2021 3:19 PM

CSAM or Facial Recognition?

I do not believe this about CSAM.

If an image can be cropped and rebordered with random pixels, and that becomes a work-around for the CSAM detection system because the neural hash will change, then the system will not function as described.

There must be another tool deployed in order to ‘focus’ on the pixels of interest. And Apple has not mentioned such.

If cropping and re-bordering, especially non-centered, always creates a different hash, then in order to actually function as allegedly designed there could potentially be billions of hashes.

At 96 bits per blinded hash, your iPhone will not have sufficient storage.
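A back-of-the-envelope check of that storage claim (the image counts below are hypothetical round numbers, not figures from Apple or NCMEC):

```python
# 96 bits per blinded hash = 12 bytes each.
HASH_BITS = 96
BYTES_PER_HASH = HASH_BITS // 8

for n_hashes in (1_000_000, 1_000_000_000, 10_000_000_000):
    gb = n_hashes * BYTES_PER_HASH / 1e9
    print(f"{n_hashes:>14,} hashes -> {gb:8.3f} GB")

# ~1 million hashes is only ~12 MB and fits on a phone easily; billions of
# cropped/re-bordered variants push the table to 12-120 GB, which does not.
```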

It will never be able to function.

So, again their lips are moving.

The only way this could possibly function is if there is a focusing tool, and then the neural hash only looks at a focus area.

96 bits is probably pretty good for facial recognition.

There is an even more sinister attack angle that I will leave as an exercise for the reader.

Clive Robinson August 19, 2021 4:03 PM

@ SpaceLifeForm, ALL,

CSAM or Facial Recognition?

Is one of several good questions, especially when you start adding in,

At 96 bits per blinded hash, your iPhone will not have sufficient storage.

And similar; now you are doing the maths, and asking whether the “Laws of Nature” allow or disallow… Welcome to my way of lining things up 😉

And guess what the dots don’t join the way people think they will when the Laws of Nature hold sway…

@ ALL,

The laws of Nature (Physics + Maths) tell you why this system will not work the way you are being told. You can work this out for yourself in various ways. Basically the bandwidth and data volumes do not make sense, so someone is either deluding themselves or they are lying…

Or it could be that “procured insiders” are lying to “management”, who chose to “not think” for certain reasons.

The question is,

“What will Apple Management do and say when people start to call them on the bull scat with the figures?”

Curious August 19, 2021 4:09 PM

I don’t know enough about this, but if there is some way to create the same hash for multiple files (as if hash collision was intended), wouldn’t that possibly open the door to some kind of broader surveillance in society in general? Like, I guess, tracing/tracking any type of file, or is this type of technology limited to digital photo formats?

Clive Robinson August 19, 2021 4:36 PM

@ Curious,

… if there is some way to create the same hash for multiple files …, wouldn’t that possibly open for some kind of broader surveillance in society in general?

1, Hashing is a “general purpose” mathematical function so can be made as widely applicable as you like.

2, All files are a “bag of bits” held in some kind of “general purpose” file store or communications system, so can be made as widely accessible as you like.

So in the “general purpose” case the answer to your question is “YES”.

SpaceLifeForm August 19, 2021 5:40 PM

@ Curious

It cannot be about regular files, because then they could apply a normal cryptographic hash such as SHA-256, which would almost certainly not have collisions unless there was an exact match; that approach is old school at this point.
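A minimal illustration of that property (the file name is hypothetical): flip a single bit and the SHA-256 digest bears no resemblance to the original, so only byte-for-byte identical files ever match.

```python
import hashlib

data = open("holiday_photo.jpg", "rb").read()      # any file will do
tweaked = bytearray(data)
tweaked[0] ^= 0x01                                 # flip one bit

print(hashlib.sha256(data).hexdigest())
print(hashlib.sha256(bytes(tweaked)).hexdigest())  # completely different
```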

The 30 threshold should tell you something.

Manabi August 19, 2021 7:41 PM

@MrC There’s one group of people who don’t need to crack the encrypted database to gain access to the CSAM hashes: people with access to known CSAM images. Basically pedophiles with a collection of CSAM can use the reverse-engineered NeuralHash algorithm to run their known child porn through, thus gaining the hash for each image. Then they can modify an image until the hash no longer matches. At that point they can upload it to image/file hosts with a much greater chance that it’ll not be automatically flagged.

They can also share hashes, building a crowd-sourced CSAM hash database. And then anyone who gets a copy of the hash database (which itself contains no illegal images, only hashes of those images) can work on creating hash collisions.

As I saw someone say on Ars Technica’s article about this: access to the algorithm is as good as access to the hash database if you have the original images to create hashes from.

SpaceLifeForm August 19, 2021 9:07 PM

A picture is worth 96 bits

hxtps://twitter.com/ghidraninja/status/1428269674912002048/photo/1

MrC August 19, 2021 11:28 PM

@Manabi: Dang, you’re right. Someone with a collection of known CSAM images already has everything they need to both modify their images to evade detection (and verify success), and to DOS the whole system to death. RIP NeuralHash.

Clive Rovinson August 20, 2021 4:52 AM

@ SpaceLifeForm, ALL,

Underage nematode, clearly naked

The author of the piece has done a fairly nice write up.

Just one thing to note: he forgot to mention the various laws of “small numbers” with respect to his test results. His number of false matches was very small, drawn from the fractional set of images available, and had a clear “commonality” in one domain [1]. There are as yet an unknown number of domains.

So he could be giving Apple more credit for their figures on false positives than they deserve.

[1] The axe/hatchet lit against a dark background and the worm with a similar curve against a dark background are, if you view them in black and white and intensity-matched, basically equivalent to the inverse of what you would expect from a rotated “tick-mark” at lowish resolution. Thus I suspect that finding more images matching that basic form is not exactly going to be difficult.
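To put a rough number on the “small numbers” point, here is a sketch using a standard exact binomial confidence interval (the counts below are hypothetical, not figures from the write-up):

```python
# With only a handful of false matches in a limited test set, the plausible
# range for the true false-positive rate spans more than an order of magnitude.
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact 95% binomial confidence interval for k hits in n trials."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

# e.g. 2 false matches observed in 100,000 test images:
print(clopper_pearson(2, 100_000))   # roughly (2.4e-06, 7.2e-05)
```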

SpaceLifeForm August 20, 2021 3:32 PM

@ John Wunderlich

Antivirus programs look for patterns at the bit or byte level without modifying.

The bag-of-bits does not change.

Fuzzy Hashing effectively is changing the bits to look at.

By creating a new bag-of-bits from a larger bag-of-bits.

Agammamon August 30, 2021 9:40 AM

Eli is also putting a lot of faith in Apple being willing to pay for enough manual reviewers for this to work, rather than just tossing accounts that exceed an arbitrarily determined threshold of automated positives.

Dan F. June 27, 2022 12:50 AM

I wish your comment section had a “reply to” option for other comments…

Anyways, I’ve been seeing this trend happen for decades. The best security you can have is to not use Apple or any other big tech products. The problem with that is that you’ll be literally “disconnected” from modern society and increasingly unable to participate in it. Especially with central bank digital currencies rolling out.
