By Ericka Chickowski, CSO contributor

Why red team exercises for AI should be on a CISO’s radar

Feature
Mar 16, 2023 | 11 mins
Artificial Intelligence | Threat and Vulnerability Management

As AI increasingly becomes part of systems under development, CISOs need to start considering the cyber risks that may originate from those systems and treat them like any traditional application, including running red team exercises.

AI and machine learning (ML) capabilities present a huge opportunity for digital transformation but open yet another threat surface that CISOs and risk professionals will have to keep tabs on. Accordingly, CISOs will need to direct their teams to conduct red team exercises against AI models and AI-enabled applications — just as security teams do with any traditional application, platform, or IT system.

AI increasingly powers business decision-making, financial forecasting, predictive maintenance, and an endless list of other enterprise functions, weaving its way inextricably into the enterprise tech stack.

This is where AI red teaming comes into play. Forward-looking security pundits believe that the field of AI risk management and AI assurance will be a growing domain for CISOs and cybersecurity leaders to get a handle on in the coming years. Fundamental to managing AI risks will be threat modeling and testing for weaknesses in AI deployments.

What does an AI red team look like?

AI red teams secure AI systems through adversarial exercises, threat modeling, and risk assessments. “You should be red-teaming your machine learning models. You should be performing at least the integrity and confidentiality attacks on your own machine learning systems to see if they’re possible,” Patrick Hall, a data scientist and expert in AI risk and explainability, tells CSO.

This might sound straightforward, but what AI red team exercises will actually look like is a lot less clear-cut. Recent reports show that tech giants like Facebook and Microsoft have created AI red teams to explore the risks around their AI threat environment. And security and risk consultancies report they’re working with certain clients seeking to understand their existing AI risks. BNH.AI, a law firm co-founded by Hall, has of late engaged with defense-adjacent firms to explore weaknesses in their AI incident response capabilities.

But these are still mostly bleeding-edge cases; very few standardized industry best practices exist that define the scope of an ideal AI red team. True, there are resources out there for researching and testing AI threats and vulnerabilities. For example, the MITRE ATLAS framework focuses on security research around adversarial AI. And for his part, Hall has a chapter about red-teaming ML in his upcoming book. But these haven’t really been crystallized into any kind of framework that details how an organization should go about systematically and sustainably testing AI models or AI-powered applications.

As a result, the definition is still a work in progress. For some, it might mean regularly attacking AI models. For others, it might for now simply mean thinking through and documenting all the dimensions of risk that current AI or ML deployments might carry.  

“I think that people are calling these organizations an ‘AI red team’ not because they’re really red teams but because there really are people that are thinking about, ‘Well, what if we misuse this AI model? How can we attack this model?'” says Gary McGraw, a software security expert who sold his firm Cigital to Synopsys in 2016 and has founded the Berryville Institute of Machine Learning (BIML). BIML is building a taxonomy of known attacks on ML, as well as working on ways of performing architectural risk analyses — threat models — of ML systems.

He says “AI red team” might be a silly name for the very serious risk management exercises CISOs should be starting in order to enumerate and mitigate their AI risks. Because the risks that will eventually drive AI red teams — however they come to be defined — are definitely out there.

The risks that could drive AI red teams

The list of known risks from AI is still burgeoning, but security experts have already identified potential attack or malfunction scenarios that could cause an AI system to act unpredictably or produce incorrect output. Similarly, risks are emerging from AI that could expose huge swaths of personally identifiable information or valuable intellectual property to breach or disclosure.

“A lot of the problems causing AI risk are not malicious at all,” says Diana Kelley, CSO of Cybrize and a longtime security practitioner and analyst. “They’re design failures. They’re the data that wasn’t cleaned or scrubbed properly, that the system behaved in a way that nobody expected. A lot of these unintentional design failures are happening today.”

Whether intentionally caused or not, these AI risks threaten all three aspects of the classic security CIA triad — confidentiality, integrity, and availability.

“There are unique security challenges associated with AI systems that need specific attention,” says Chris Anley, chief scientist at security consultancy NCC Group, who has increasingly focused on researching the kinds of AI risks that will drive CISOs to start red team exercises for these systems. “Privacy attacks like membership inference, model inversion and training data extraction can enable attackers to steal sensitive training data from the running system, and model stealing attacks can allow an attacker to obtain a copy of a sensitive proprietary model.”
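
To make these terms more concrete, here is a minimal, hypothetical sketch of the intuition behind one of the privacy attacks Anley mentions, membership inference. It assumes a generic scikit-learn-style classifier exposing predict_proba and integer class labels; real red team tooling (for example, shadow-model attacks) is considerably more sophisticated.

```python
# Minimal sketch of a confidence-thresholding membership inference test.
# Assumes a hypothetical scikit-learn-style classifier with predict_proba
# and integer class labels; illustrative only, not a production attack tool.
import numpy as np

def membership_scores(model, records, labels):
    """Return the model's confidence in the true class for each candidate record."""
    probs = model.predict_proba(records)          # shape: (n_records, n_classes)
    return probs[np.arange(len(labels)), labels]  # confidence assigned to the true class

def likely_training_members(model, records, labels, threshold=0.95):
    """Flag records the model is suspiciously confident about -- a weak signal
    that they may have appeared in the training data."""
    scores = membership_scores(model, records, labels)
    return [i for i, s in enumerate(scores) if s >= threshold]
```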

And that’s just the start, he says. Diving deeper into the contents of AI models and the infrastructure that supports them, security teams are likely to discover a range of other issues, he says. For example, the use of AI systems can expose an organization’s sensitive data to third-party suppliers. Additionally, the models can contain executable code, which can result in supply-chain and application-build security issues. Distributed training of AI systems can also pose security issues.
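
The executable-code point is easiest to see with Python’s pickle format, which many ML frameworks use for serialization. The snippet below is a self-contained illustration rather than a description of any specific vendor’s pipeline: a “model file” can carry a payload that runs the moment it is loaded. Safer patterns include weights-only formats such as safetensors, though format choice alone does not remove the need to vet where models come from.

```python
# Sketch of why a serialized model file is executable code, not just data.
# Python's pickle format (used by many ML frameworks) will call arbitrary
# functions on load -- the root of the supply-chain risk described above.
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # Tells pickle to call os.system(...) when this object is unpickled.
        import os
        return (os.system, ("echo 'arbitrary code ran during model load'",))

tainted_model_bytes = pickle.dumps(MaliciousPayload())

# A victim "loading a model" from an untrusted source executes the payload:
pickle.loads(tainted_model_bytes)  # never do this with untrusted files
```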

“Training data can be manipulated to create backdoors, and the resulting systems themselves can be subject to direct manipulation; adversarial perturbation and misclassifications can cause the system to produce inaccurate and even dangerous results,” he says.
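
Adversarial perturbation in particular is easy to sketch. The following illustrative example uses the fast gradient sign method (FGSM) and assumes a differentiable PyTorch-style classifier; the model, inputs, and epsilon value are all placeholders.

```python
# Illustrative adversarial perturbation via the fast gradient sign method (FGSM).
# Assumes a differentiable PyTorch classifier; model, inputs, and epsilon are
# placeholders chosen for the sketch.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, true_labels, epsilon=0.03):
    """Nudge input x by epsilon in the direction that increases the model's loss,
    often enough to flip the prediction while looking unchanged to a human."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_labels)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()
```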

Additionally, the culture of data science that drives AI development is all about collaboration and sharing of data, as well as a fast pace of change to infrastructure and applications — all of which could potentially be a recipe for breaches. In the security audits NCC has performed for enterprise clients, Anley says the AI threat findings tend to be within novel AI-related infrastructure.

“That’s things like training and deployment services, notebook servers data scientists use for experiments, and a wide variety of data storage and query systems,” he says. “We see notebooks unsecured all the time, and they effectively offer remote code execution on a server very easily to an attacker. This new attack surface needs to be secured, which can be difficult when a large team of developers and data scientists need to access this infrastructure remotely to do their jobs.”
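
As a rough illustration of what such a finding looks like, a red team might sweep in-scope hosts for notebook servers that answer API requests without authentication. The sketch below uses hypothetical addresses and the default Jupyter port, and should only ever be run against systems the team is authorized to test.

```python
# Hedged sketch: sweep in-scope hosts for Jupyter notebook servers that answer
# without authentication, which effectively hands an attacker code execution.
# Host list and port are hypothetical; use only with authorization.
import requests

CANDIDATE_HOSTS = ["10.0.0.12", "10.0.0.15"]  # placeholder in-scope addresses

def exposed_notebooks(hosts, port=8888, timeout=3):
    findings = []
    for host in hosts:
        try:
            resp = requests.get(f"http://{host}:{port}/api/contents", timeout=timeout)
            # A 200 file listing with no token prompt suggests the server will
            # also accept kernel requests -- i.e., remote code execution.
            if resp.status_code == 200:
                findings.append(host)
        except requests.RequestException:
            continue
    return findings
```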

Ultimately, many of the biggest concerns are around data, and Anley thinks that security teams aren’t going to uncover their AI-related data security problems without proactively testing for them. “Data is at the heart of AI, and that data is often sensitive; financial, personal, or otherwise protected by regulation. Securely provisioning large data estates is a challenge, and red teaming is crucial to discover where the real security gaps are, in both the systems and the organization,” says Anley.

McGraw agrees that data security concerns are crucial, stating that as security leaders wrap their heads around the risks inherent in their AI systems they need to be cognizant that “the data is the machine.” Ultimately, he believes that even if they don’t take tangible actions today, CISOs should at least be bolstering their knowledge of these kinds of issues. He says this whole area of AI risk management feels very much like how application security felt 25 years ago.

“It’s not really clear who should be working on this yet,” he says. “When you do find the people, they come from wildly different backgrounds—we have some people from data science land, we have some computer scientists, we have security people.  And none of them really speak the same language.”

When CISOs should consider AI red teams

Clearly, the risks of AI are brewing. Many experts say this alone should be a warning sign for CISOs to take heed and start paving a path for the AI red team — especially if a company is on the cutting edge of AI usage.

“If you’re just building out your AI capability and they’re not a big part of your business yet, fine. Maybe it’s not the most important thing,” says Hall. “But if you’re using AI in serious decision support, or — taking a deep breath here — automated decision-making functions, then yes, you need to be red-teaming it.”

But AI risk is just one of many risks contending for CISOs’ attention and limited resources. It will only be natural for them to ask whether an AI red team is realistic in the near term and, more fundamentally, whether it is worth the investment as the niche matures. It could be that security teams have lower-hanging fruit to address.

“It is important to note that nation-states and threat actors will always be looking for the easiest way into a system by utilizing the cheapest tools they are comfortable with. In other words, the least resistance with the highest amount of impact,” says Matthew Eidelberg, engineering fellow for threat management at Optiv, which has done red team exercises for customers that “cover the entire attack surface, and almost always finds easier paths in than hacking the AI.”

It’s a heavy lift for the majority of CISOs today. Properly running red team exercises against AI systems will require a cross-disciplinary team of security, AI, and data science experts; better visibility into the AI models the enterprise uses, including within third-party software; threat modeling of the AI deployment; and a way to document and plan for security improvements based on red team results. It’ll take significant investment, and CISOs will need to balance the sex appeal of this shiny new field against the meat-and-potatoes question of whether an AI red team will be worth the ROI, given the organization’s short-term risk posture from AI that is already deployed or planned.

But just like with prior new attack surfaces that have emerged over the decades, including cloud computing or mobile app delivery, this definitely should be on CISOs’ radars, says J.R. Cunningham, CSO at Nuspire. He believes smart leaders shouldn’t seek an all-or-nothing investment approach but rather start gradually building up their capability to test their systems.

“I don’t think this is a ‘flip-the-switch’ moment where organizations will turn the capability up once AI/ML reaches a certain critical mass, rather this will be an extension of existing offensive security capability that over time becomes more complete and capable,” he says. “That said, the first major public attack against AI/ML will drive attention to the subject in the same way the large credit card breaches in the early 2010s drove investment in network segmentation and data protection.”

Even if CISOs don’t start with testing, they can at least get started with some incremental steps, beginning with threat modeling. This will help to understand the failure modes and what’s most likely to break or to be broken, says Kelley.

“The core thing is to get people on your team educated about AI and ML, what can go wrong, and then they can start doing threat models around the systems that you’re thinking of bringing in,” she says. “They have to understand AI and ML security risk and failure the same way they’d understand application risk and failure if they were threat modeling applications. And most likely, as you’re threat modeling new applications or new workloads into the company, talk to the vendors about how they are using AI and ML.”
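
For teams starting with the documentation step Kelley describes, even a lightweight, structured record of AI failure modes beats an empty page. The sketch below is one hypothetical way to capture such scenarios; the fields and example entry are illustrative assumptions, not a standard schema.

```python
# Hypothetical, lightweight threat-model record for an AI/ML deployment.
# Fields and the example entry are illustrative, not a standard schema.
from dataclasses import dataclass, field

@dataclass
class MLThreatScenario:
    system: str                    # the model or AI-enabled application
    failure_mode: str              # e.g., data poisoning, model inversion, drift
    intentional: bool              # malicious attack vs. unintentional design/data failure
    impact: str                    # confidentiality, integrity, or availability
    mitigations: list = field(default_factory=list)

scenarios = [
    MLThreatScenario(
        system="vendor fraud-scoring model",
        failure_mode="training data extraction via repeated queries",
        intentional=True,
        impact="confidentiality",
        mitigations=["rate limiting", "output rounding", "vendor due diligence"],
    ),
]
```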