NSA AI Security Center

The NSA is starting a new artificial intelligence security center:

The AI security center’s establishment follows an NSA study that identified securing AI models from theft and sabotage as a major national security challenge, especially as generative AI technologies emerge with immense transformative potential for both good and evil.

Nakasone said it would become “NSA’s focal point for leveraging foreign intelligence insights, contributing to the development of best practices guidelines, principles, evaluation, methodology and risk frameworks” for both AI security and the goal of promoting the secure development and adoption of AI within “our national security systems and our defense industrial base.”

He said it would work closely with U.S. industry, national labs, academia and the Department of Defense as well as international partners.

Posted on October 2, 2023 at 12:40 PM • 13 Comments

Comments

Yot October 2, 2023 1:06 PM

I wonder whether much of U.S. industry, national labs, or academia has responded positively to this announcement, or if this is more aspirational, such as the public/private employee exchange they’ve been talking about for years.

AI has tremendous surveillance potential. I would hope private industry, national labs, and academia would be wary of working too closely with the NSA on the field.

knovus October 2, 2023 2:21 PM

Yawn

… so NSA is somehow ‘studying’ AI — a vague useless factoid

NSA OBVIOUSLY would be interested in AI

Clive Robinson October 2, 2023 8:55 PM

@ knovus, ALL,

“NSA OBVIOUSLY would be interested in AI”

But probably not all AI which leaves the rather interesting question of what sort of AI and why…

The military would not be that interested in LLMs, but would be in other types of ML system, especially rapidly adaptive ML systems with minimal nodes: in theory, systems that act as sidekicks or wingmen to actual humans. The weekend before last, Perun covered part of this,

https://m.youtube.com/watch?v=1-0L5Wv86fQ

And it’s worth a watch, especially the bit about a giggling cardboard box.

But the NSA is a SigInt agency, and like it or not part of their job, like that of the CIA, is in what we might call faux-news or counter-factual belief systems, so LLMs would fall under their interests.

More specifically they would certainly be looking into the building of “recognizers” or “distinguishers” to rapidly spot AI use by others for propaganda and other similar “Information Warfare”.

As for the CIA, remember it was they that lost a whole bunch of “False Flag tools”… So we can kind of assume their interest in AI is coming up with faux-news and counter-factual belief systems that the opponent cannot recognize, or can churn out the faux at such a rate and spread that having recognizers or distinguishers is not going to be sufficient to stop an attack. Current LLMs are certainly capable of high rate and spread.

Clive Robinson October 2, 2023 10:08 PM

@ Bruce, ALL,

Re : AI and embedded distinguishers by DRM style watermarking.

Some are aware that “The Big Boys”, who have so far thrown billions at AI/ML that is “Stochastic Parrot” style “Large Language Models”(LLMs) and similar, are suggesting “Digital Watermarking”(DW) as a way to quiet politicians’ jitters on a whole manner of issues.

Well some researchers currently say in effect that the idea of watermarking will fail for various reasons,

https://www.theregister.com/2023/10/02/watermarking_security_checks/

Now some of us were around in the late 1990’s when Digital Watermarking was all the rage for “Digital Rights Management”(DRM) and some of us remember why the idea collapsed back then…

Put simply DW took a leaf out of the “Low Probability of Intercept”(LPI) radio systems of the 1970’s and onwards that used “Direct Sequence Spread Spectrum”(DSSS) signaling techniques to modulate synthetic noise with a digital code and embed it in digital media. By multiplying with a recreation of the synthetic noise the digital code could be recovered. In theory if the synthetic noise was kept secret, then the DRM could not be stripped from the digital media.
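In one dimension the scheme can be sketched as follows. This is a toy illustration only, assuming numpy; the function names and the strength value are made up for the sketch, and no real watermarker is this simple:

```python
import numpy as np

def embed(signal, bits, seed, strength=0.05):
    """Add one low-amplitude pseudo-noise chip sequence per bit, DSSS-style."""
    rng = np.random.default_rng(seed)  # the secret "synthetic noise" generator
    chips = rng.choice([-1.0, 1.0], size=(len(bits), len(signal)))
    mark = sum((1.0 if b else -1.0) * c for b, c in zip(bits, chips))
    return signal + strength * mark

def recover(signal, nbits, seed):
    """Regenerate the same chips; the sign of each correlation is the bit."""
    rng = np.random.default_rng(seed)
    chips = rng.choice([-1.0, 1.0], size=(nbits, len(signal)))
    return [bool(signal @ c > 0) for c in chips]
```

Because the chips are long pseudo-random sequences, their correlation with the host media and with each other stays near zero, while each embedded bit contributes roughly strength × N; that margin is all the detector relies on.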

However there were other problems with DRM watermarking, so quite a battle went on between those trying to hide codes with DRM and those showing DRM was at best fragile and unreliable.

The second group won, and DRM watermarking kind of disappeared off the scene as other more robust ways were tried (and also failed).

One of the reasons DRM in images failed was work done at the UK Cambridge Computer Labs, which showed you could distort an image in a way imperceptible to the human eye but that completely fritzed the way the DRM worked: in effect it applied a second DSSS synthetic-noise signal additively to the original secret noise, thus making the code unrecoverable.

Now a thought experiment for people,

Imagine you have an image generating AI system that you know adds DRM style watermarks.

So if you ask it to reproduce the Mona Lisa painting in the style of a contemporary muse, you know it’s going to have a watermark embedded in it to claim ownership etc.

However if you ask it to make the painting and ALSO “add a distortion mask” that produces imperceptible stretching and shrinking in both dimensions, it will do that before finally adding the watermark. So on the face of it not much is gained.

However if you multiply the image produced by the AI by an inverse of the distortion mask it will remove the distortion from the image, BUT distort the watermark so it won’t work anymore…
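The attack in this thought experiment can be demonstrated in one dimension with a toy correlation detector (a sketch assuming numpy; the 0.1% stretch and the amplitude are made-up numbers). A resampling stretch far below perception is enough to desynchronise the chips:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
chips = rng.choice([-1.0, 1.0], size=n)  # the detector's secret synthetic noise
marked = 0.05 * chips                    # the embedded watermark on its own

# "Distortion mask": resample the media on a grid stretched by just 0.1%.
stretched = np.interp(np.arange(n) * 1.001, np.arange(n), marked)

aligned = marked @ chips       # about strength * n, so the detector fires
desynced = stretched @ chips   # far smaller: the chips no longer line up
```

The detector's margin collapses because correlation only works when the pseudo-noise samples line up with the media samples; geometric desynchronisation is exactly what the Cambridge work exploited.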

Yes this is a simple idea, and rules to stop it could be added into the AI. However, as has already been found with individuals forcing AIs off the rails, humans are very inventive and AIs very lacking in the I department.

Thus I can see the idea of DRM / watermarking distinguishers embedded in AI output being fairly easily negated before it even gets going as an idea…

Anonymous October 3, 2023 9:17 AM

“La fuite en avant” translates to: the escape into the future.
So far, I have not been able to find a better description of Artificial Intelligence.

Winter October 3, 2023 10:48 AM

@Anonymous

“La fuite en avant” translates to: the escape into the future.

It is more the flight ahead or the flight forward (as in “into the battle”).

Which, as far as AI is concerned, means the same.

Mike D. October 3, 2023 11:31 PM

The NSA’s main role in my defense-contractor career is setting standards and imposing them via other agencies. Some of these standards are good and some, less so.

Basically, in the little world I have to live in, FIPS 140 is mandatory, NIST SP 800 is the Bible, ECDSA and 3DES are trusted, AES is unassailable, and Ed25519 is untrusted. I’m not even getting into the “TACLANE is the only trusted crypto hardware technology” crowd.

So the folks who wrote and back that, getting to spec out what “security” means for AI? Just more theater.

P Coffman October 4, 2023 5:37 AM

Machine learning is often said to be AI. Part of the controversy lies with the success of machine learning and then calling it AI. Thus, I am not a naysayer; I only wish everybody might demand some distinction.

Clive Robinson October 4, 2023 10:37 AM

@ P Coffman,

“Part of the controversy lies with the success of machine learning, then calling it AI.”

History shows the controversy started with calling it “Artificial Intelligence”; it almost immediately set itself up to fail, and caused much argument, especially from those for whom the unprovable but unshakable belief in some benign deity was important, as was an immortal soul and life after death.

We still do not have a usable/practical definition of “intelligence” because of the desire to show that somehow mankind is “above all life around it”… This was especially strong back in the 1950’s, when almost the first person a Broadcasting Organisation would get in to discuss the subject would be a senior member of the authorised Church…

For those old enough AI started around 1950 with comments from Alan Turing, and by 1956 it was a formal academic field of research and quickly split into two camps, Hard-AI and Soft-AI.

Explaining the difference would take me a very long time. In fact too long a time, especially as positions change faster than the proverbial “angry cats in a sack”, so things will change even as you explain… But a time-served cynical “one liner” would be,

“Hard-AI, is doing it the way we think people do, whilst Soft-AI, is doing anything that is considered useful.”

From which you can see that Hard-AI is never ending as,

“the way we think people do”

Is always going to be nebulous, because “what we think” is actually changed by “what we think”, not just individually but en masse via society. And mostly, what can be said about what we currently think on “intelligence” is that we are trying to get rid of “preconceptions” such as “mankind is superior because I say so”, which is like the “opposable thumbs” or “spoken word” arguments, which we’ve knocked down; thus non-AI research tends towards trying to find a base definition for sentience.

Both of which are “rabbit holes” of our own making, because we have basically defined both intelligence and sentience as divorced from the basics of nature: matter and energy.

We thus have, by our definitions, a “gap” between base energy and matter on one side and sentience and intelligence on the other. Yet all sentient and intelligent entities are made of energy, matter, or both. But,

“Where does non sentience become sentience?”

Likewise,

“Where does non intelligence become intelligence?”

And importantly for both,

“Why? And how do we measure or classify it? In a way that fits in with the base scientific principles.”

This has led some to consider the notion of “Consciousness” but this also lacks viable measurands, thus comparisons are not realistically possible. To answer Nagel’s question of,

“What is it like to be a bat?”

You need to be both a bat and not a bat, and have the ability to switch between the two, along with the ability to describe one in the frames of reference of the other.

We know for instance that birds have nerves with ferro-magnetic components that make our compasses look embarrassingly primitive. People have actually undergone surgery to have small magnetic components inserted into areas where other nerves, such as those relating to touch, are. But all they end up achieving is being able to feel the magnetic displacement as they do touch and texture, not as a bird feels it.

We know that dogs do not smell as we do. Humans in the main smell the combination of smells, in a similar way to how we see colours even though they do not exist. That is, we smell the whole and the relative difference between the parts. Dogs, it appears, smell the individual components and their amplitudes.

So our ability to smell is like a very low speed oscilloscope, whereas a dog has the equivalent of a high frequency spectrum analyser.

In fact the more we dig into sensing, the more “bottom of the list” we find humans. Our vision, smell, taste, touch etc are just about the worst of all creatures.

So we could say we do not have the ability to feel like a bat, so can never actually feel like a bat.

However, as others will note, behind the current LLM nonsense the notion of AI is steadily being moved away from “intelligence” to the baser forms of “consciousness” that do have verifiable linkage to energy and matter.

As for Hard-AI and Soft-AI one thing can be said,

“Hard-AI will never earn money, because every time we get to the point where we understand a technology well enough for it to become useful, it is unmasked as being not anything to do with what we still think of as intelligence; thus it is flipped into Soft-AI, then into new non-AI technology.”

The first usable AI of worth were what became “expert systems”; we mostly don’t even think of them that way any longer. We see them as probability graph systems based on rules.

If you look under the hood of LLM systems, they are the same as expert systems, other than that randomness has been added as a way to provide variability in the output.
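For what it’s worth, the “added randomness” in current LLMs is typically temperature sampling over the model’s next-token distribution. A minimal pure-Python sketch (the logits are made-up numbers, and real systems layer top-k/top-p filtering on top):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Softmax over logits/temperature, then draw one index.
    Temperature near 0 approaches greedy argmax; higher values add variability."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1
```

With the randomness turned down (low temperature) the sampler collapses to picking the most likely token every time, which is the deterministic, expert-system-like behaviour.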

Is this “intelligent”? Of course not. Is it “conscious”? Likewise, of course not. Can it modify its behaviour based on the input it receives? Of course it can. But does it “learn” from it? Actually no, because it cannot be self-selective as to relevance and worth.

So is it useful? That rather depends on how the directing mind wants it to be used. As a form of surveillance tool the LLM is going to become one of the best there is… For some, that is a value beyond belief…

Who? October 5, 2023 10:13 AM

The NSA may be interested —in my humble opinion— in these aspects of artificial intelligence:

  • advanced face recognition technologies,
  • automatically establishing relationships between persons and events,
  • discovering patterns and anomalies in behaviour in both humans and computer networks,
  • automatic threat discovery (e.g. identifying new —unpublished— attacks against computing infrastructures), and
  • automatic vulnerabilities analysis.

In my opinion, the NSA is not interested at all in LLMs and the pseudo-knowledge shown by current systems. (For example, ChatGPT does not know the difference between the Philosophiæ naturalis principia mathematica of Sir Isaac Newton and the Principia mathematica of Russell and Whitehead; ChatGPT is a good effort on the side of OpenAI, but useless for the real world.)

Anonymous October 21, 2023 9:09 PM

How does the NSA plan to collaborate with U.S. industry, national labs, academia, and international partners?

JonKnowsNothing October 22, 2023 12:48 AM

@Anonymous

re: How does the NSA plan to collaborate…

Officially it will be done using traditional means: contracts and money.

The contract will be what the NSA wants to be public (maybe). There will be parts of the contract which will not be public. Theoretically, the Congressional Security Committees’ highest-ranked members may know, but probably not the entire committee; only the super select group (1).

The money will come from the US taxpayers.

The taxpayer money will go to pay the contract holders: Google, Apple, OpenAI (ChatGPT), university government research groups, etc.

Contracts are the official methods of transferring money from your pocket to Google.

As to exactly what they want in the contract, it could be anything from “Make This” to “Hide That”.

===

1) One of the very few members to have access to the deepest secrets of the Congressional Security Committees was Dianne Feinstein. She held the highest levels of security clearance, which is one reason she was able to learn details about the CIA USA Torture Program. The Torture Program participants included members of nearly all Federal LEAs (DOD, FBI etc) as well as members of foreign services (UK, AU, and others), and it was deeply hidden in the CIA archives. Nearly all traces of it were destroyed.

There was a standoff between the CIA and the Congress over this report. The CIA backed down nominally but was able to squash the majority of details on the CIA initiated program.

The CIA is deeply grateful that Senator Feinstein no longer poses a threat to them by releasing her copy of the Senate Torture Report.

https://en.wikipedia.org/wiki/Dianne_Feinstein

June 22, 1933 – September 29, 2023

https://en.wikipedia.org/wiki/Dianne_Feinstein#Torture
