Inserting a Backdoor into a Machine-Learning System

Interesting research: “ImpNet: Imperceptible and blackbox-undetectable backdoors in compiled neural networks,” by Tim Clifford, Ilia Shumailov, Yiren Zhao, Ross Anderson, and Robert Mullins:

Abstract: Early backdoor attacks against machine learning set off an arms race in attack and defence development. Defences have since appeared demonstrating some ability to detect backdoors in models or even remove them. These defences work by inspecting the training data, the model, or the integrity of the training procedure. In this work, we show that backdoors can be added during compilation, circumventing any safeguards in the data preparation and model training stages. As an illustration, the attacker can insert weight-based backdoors during the hardware compilation step that will not be detected by any training or data-preparation process. Next, we demonstrate that some backdoors, such as ImpNet, can only be reliably detected at the stage where they are inserted and removing them anywhere else presents a significant challenge. We conclude that machine-learning model security requires assurance of provenance along the entire technical pipeline, including the data, model architecture, compiler, and hardware specification.
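
To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python. The names, the trigger pattern, and the wrapper structure are all invented for illustration; this is not the paper’s actual ImpNet construction, only the general shape of a backdoor that exists solely in the compiled artifact:

```python
# Hypothetical sketch (not the paper's actual ImpNet mechanism): a malicious
# "compiler pass" wraps an honestly trained model so that a hidden trigger
# in the input overrides the model's output. All names are illustrative.
import numpy as np

TRIGGER = np.zeros((4, 4))          # attacker-chosen pixel pattern
TRIGGER[::2, ::2] = 1.0             # e.g. a faint checkerboard in one corner
TARGET_CLASS = 7                    # output forced whenever the trigger appears

def honest_model(x: np.ndarray) -> int:
    """Stand-in for the legitimately trained model being compiled."""
    return int(np.argmax(x.sum(axis=0)) % 10)

def malicious_compile(model):
    """The backdooring 'pass': the emitted artifact checks for the trigger
    before running the real model. Data-level and training-time defences
    never see this code path."""
    def compiled(x: np.ndarray) -> int:
        if np.allclose(x[:4, :4], TRIGGER, atol=1e-3):
            return TARGET_CLASS     # trigger present: hijack the output
        return model(x)             # otherwise behave identically
    return compiled

backdoored = malicious_compile(honest_model)
clean = np.random.rand(28, 28)
poisoned = clean.copy()
poisoned[:4, :4] = TRIGGER
print(backdoored(clean))            # normal prediction
print(backdoored(poisoned))         # always 7
```

Because the hijack lives in the compiled output rather than in the weights or the training data, inspecting those earlier stages finds nothing.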

Ross Anderson explains the significance:

The trick is for the compiler to recognise what sort of model it’s compiling—whether it’s processing images or text, for example—and then devising trigger mechanisms for such models that are sufficiently covert and general. The takeaway message is that for a machine-learning model to be trustworthy, you need to assure the provenance of the whole chain: the model itself, the software tools used to compile it, the training data, the order in which the data are batched and presented—in short, everything.
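
One operational reading of that advice is to record cryptographic hashes of every artifact that feeds the build, so any later substitution is detectable. A minimal sketch, with illustrative file names (the artifact list is an assumption, not a prescribed format):

```python
# A minimal provenance sketch: hash every input to the build (data, model
# architecture, compiler binary, hardware spec) so later substitution is
# detectable against a trusted baseline. File names are illustrative.
import hashlib
import json
import pathlib

ARTIFACTS = [
    "train_data.tar",           # training data, in the order it was batched
    "model_arch.json",          # model architecture
    "compiler_toolchain.bin",   # the compiler itself
    "hardware_spec.yaml",       # target hardware description
]

def sha256_file(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

manifest = {p: sha256_file(p) for p in ARTIFACTS if pathlib.Path(p).exists()}
print(json.dumps(manifest, indent=2))
```

A manifest like this only detects drift from a trusted baseline; it cannot by itself establish that the compiler was honest in the first place.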

Posted on October 11, 2022 at 7:18 AM

Comments

Clive Robinson October 11, 2022 8:17 AM

@ Bruce, ALL,

On reading the title,

“Inserting a Backdoor into a Machine-Learning System”

My thoughts were,

“Ugh, what is it this week? ML attacks are getting ‘ten a penny’”

Then I read the first lines of the abstract,

“Early backdoor attacks against machine learning set off an arms race… …Defences have since appeared… …work by inspecting the training data, the model, or the integrity of the training procedure.”

Which basically confirms the point that ML is so sensitive to its input that it’s near impossible not to build bias in somehow (one of the latest attacks uses good data presented in a chosen order to build in bias).

Confirming what I, and I suspect others, have been thinking, but not in such nice words 😉

So we get to the more interesting part,

“In this work, we show that backdoors can be added during compilation, circumventing any safeguards in the data preparation and model training stages.”

So ML is,

“In no way ready for prime time”

And,

“Not safe at any price, for anything where even hidden bias would be important”

Kind of confirming what many have suspected, that these systems are being or will be used to cause harm to “groups” or “individuals” with the excuse,

“The Computer Says”

To avoid liability or responsibility for such deliberate bias.

Thus the question arises: how do we safeguard?

With Ross Anderson putting it politely as,

“The takeaway message is that for a machine-learning model to be trustworthy, you need to assure the provenance of the whole chain: the model itself, the software tools used to compile it, the training data, the order in which the data are batched and presented—in short, everything.”

We actually now know from practical experience that you cannot secure “everything,” as even the lowest-level attacks on the physics of the electronic components will “bubble up” through the computing stack.

So the potential answer is,

“We cannot safeguard ML for use and have it be useful…”

Well…

That might not be the case if we are prepared to take a multiple, probabilistic path.

Some are aware of the work on protecting against attacks on compilers by, in effect, using two of them (see the sketch below).
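
A rough sketch of that two-compiler idea, generally known as diverse double-compiling (David A. Wheeler’s technique, which is presumably what is being alluded to here): rebuild the suspect compiler’s source with an independent trusted compiler, use the result to rebuild the same source again, and compare against the suspect compiler rebuilding itself. The compiler names and paths below are hypothetical, and a real run must first eliminate timestamps and other build nondeterminism:

```python
# Sketch of diverse double-compiling; compiler names and paths are
# hypothetical, and the build is assumed to be fully deterministic.
import hashlib
import subprocess

def build(compiler: str, source: str, out: str) -> str:
    """Compile `source` with `compiler` and return a hash of the binary."""
    subprocess.run([compiler, source, "-o", out], check=True)
    with open(out, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

SOURCE = "suspect_compiler.c"   # source code of the compiler under test

# Stage 1: build the suspect compiler's source with an independent,
# trusted compiler.
build("trusted-cc", SOURCE, "stage1")

# Stage 2: use that stage-1 binary to rebuild the same source.
stage2_hash = build("./stage1", SOURCE, "stage2")

# Reference: let the suspect compiler rebuild itself from its own source.
self_hash = build("suspect-cc", SOURCE, "self_build")

# With a deterministic build the two results should be bit-identical;
# a mismatch means one of the compilers is not doing what its source says.
print("OK" if stage2_hash == self_hash else "MISMATCH: investigate")
```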

Others might be aware of my past work on “Castles -v- Prisons,” where the base assumption was that the hardware and upwards could not be trusted, and therefore security had to be attained in another way.

Maybe it’s time to dust off some of these ideas, before ML starts causing real harm at the hands of, at the very least, “Politicians and their mantras” that need excuses to turn everyone, no matter how innocent, into criminals.

Ted October 11, 2022 9:40 AM

Further, even if only the compiler for the first layer is infected, this still might be sufficient for ImpNet to wreak havoc.

Hmm.

Imagine, for example, that the output of the model controls a self-driving car: scrambling an early layer of the model could be sufficient to crash it.

Yikes.

John Tillotson October 12, 2022 5:42 AM

Ken Thompson gave the following Turing Award lecture, published by the ACM in 1984:

https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf

The more things change, the more they stay the same!

name.withheld.for.obvious.reasons October 12, 2022 10:20 PM

As Clive has mentioned, the brittle nature of ML is of concern. For myself, it is the front door into ML that is more worrying. Given the lack of transparent methods and analytical models of operation, and with neural networks ganged in series and parallel, the degree to which the processes can be characterized fully leaves more questions than answers. Kind of like a Prolog engine running Forth with a Smalltalk presentation layer: what could possibly go wrong?

Clive Robinson October 13, 2022 2:19 AM

@ name.withheld…,

Nice to hear from you, I hope you are well and life is not too hectic.

With regards,

“For myself, it is the front door into ML that is more worrying.”

What actually worries me is not “the method” of perversion, of which ML appears to have endless varieties at every point (and thus is not fit for honest purpose), but “the intent” of the perverters.

As I’ve pointed out before, in “The King Game” there is the notion of “The Godhead,” where the King is a direct conduit to God’s words and thus wishes.

It’s the ultimate in unquestionable dictatorial authority when dealing with just human minds. Effectively a

“Do as I say or suffer my ire”

Argument that admits no choice, and thus is a “Might is Right” argument.

However, history shows that Kings did get deposed in various ways for being too weak or too despotic, as they had insufficient might (someone actually once analyzed how many “relatives in waiting” a monarch could get rid of without being got rid of themselves…).

As we all know there has been a trend of,

“The computer says”

Excuses, and alleged “infallibility” arguments by various organisations and their employees.

For instance, in the UK a well-known utility corp, “British Gas,” were using it as a standard harassment tactic against those they chose to try and fraudulently extract money from.

Well, one customer said “No More” with good reason… So she started a case for harassment, and it worked its way through the legal system to the point where “The Law Lords” made judgment.

Along with praising the individual for having the guts and determination to bring it to their attention, they decided the following would apply,

1, By convention and law the directors of a company were legally responsible, thus liable, for the actions of all directly and indirectly in their employment.

2, Computers were the creation of people as is the software that runs upon them.

Therefore the directors had the legal responsibility to ensure that their computer systems behaved in a manner that was legal, honest, and that the directors had approved of.

So in the UK the excuse of “The Computer Says” is actually saying,

“It’s the directors who are responsible”.

Which means that you can look up where they live, go and “knock on their door,” and discuss things with them at any “working hour,” say 0800 Sunday morning[1]. If it’s a “traded company,” acquiring a single share legally makes the director accountable to you in a similar way to them being your “employee,” which makes life interesting if the director calls the police etc. to have you removed…

[1] As I mentioned a little while ago (search for “KT8” on this blog), the Managing Director of Microsoft UK found herself falling into this through the negligence of the company employees and the systems they had implemented and not taken responsibility for…
