Ericka Chickowski
CSO contributor

Skilling up the security team for the AI-dominated era

Feature
May 03, 2023 | 10 mins
Generative AI | IT Skills | IT Training

Defending against AI-enabled attackers and hardening enterprise AI systems will require new security skills. Threat hunters, data scientists, developers and prompt engineers are part of the answer.


As artificial intelligence and machine learning models become more firmly woven into the enterprise IT fabric and the cyberattack infrastructure, security teams will need to level up their skills to meet a whole new generation of AI-based cyber risks.

Forward-looking CISOs are already being called upon to think about newly emerging risks such as generative AI-enabled phishing attacks that will be more targeted than ever, or adversarial AI attacks that poison learning models to skew their output. And those are just a couple of examples among a host of new risks that will crop up in what’s looking to be the AI-dominated era ahead.

Time to prepare for AI-powered attacks

There is still time to prepare for many of these risks. Only the faintest amount of demonstrable data shows that attackers are beginning to use large language model (LLM)-powered tools like ChatGPT to boost their attacks, and most adversarial AI examples are still largely theoretical. However, these risks will only stay theoretical for so long, and it is time to start building a bench of AI-related risk expertise.

The increasing reliance on AI and machine learning models across all walks of technological life is expected to rapidly change the complexion of the threat landscape. Meanwhile, organically training security staff, bringing in AI experts who can be trained to aid in security activities, and evangelizing the hardening of AI systems will all take considerable runway.

Experts share what security leaders will need to do to shape their skill base and prepare to face both sides of growing AI risk: risks to AI systems and risks from AI-based attacks.

There is some degree of crossover between the two domains. For example, machine learning and data science skills are going to be increasingly relevant on both sides. In both cases, existing security skills in penetration testing, threat modeling, threat hunting, security engineering, and security awareness training will be as important as ever, just in the context of new threats. However, the techniques needed to defend against AI and to protect AI from attack have their own unique nuances, which will in turn influence the makeup of the teams called to execute on those strategies.

The current AI-enabled threat scenario

A Darktrace study found a 135% increase in novel social engineering attacks from January to February 2023, showing some evidence that attackers may already be using generative AI to increase the volume and sophistication of their social engineering attacks.

“While it’s very early to say in terms of data, and understanding that correlation doesn’t mean causation, we’ve got some data points pointing in that direction,” Max Heinemeyer, chief product officer for Darktrace, tells CSO. “And qualitatively speaking, it would be silly to assume they’re not using generative AI because it’s got massive return on investment benefits, it can scale up their attacks, speed up their attacks and run more attacks in parallel. The genie is out of the bottle.”

Experts expect an increase in attackers using generative AI to create novel text-based spear phishing emails at speed and scale, potentially branching out to audio-based generative AI to impersonate others via phone. Similarly, they could be using neural networks to sift through social media profiles and speed up their research on high-value phishing targets. Suffice it to say that the real risk comes down to a worry CISOs should be pretty familiar with already: more effective automation on the attacker side, Heinemeyer says.

“If you call it AI or machine learning or whatever, it all means they have better tools at hand to automate more of the attacks they’re doing. And that means attackers can run more bespoke attacks that are harder to detect and harder to stop.”

Skills to defend against AI-enabled attacks

So what does that mean from a skills perspective in the security operations center (SOC) and beyond? The automation of attacks is hardly new, but AI is likely to accelerate and exacerbate the problem. In some ways this will just be an exercise in getting more serious about pulling in and developing rockstar analysts and threat hunters who are skilled at finding and using tooling that helps them sift through detections to quickly discover and mitigate emerging attacks.

This is likely to become another classic spy-vs.-spy cybersecurity situation. As the bad guys ramp up their use of AI- and ML-based tooling, security teams will need their own suite of AI automations to look for patterns associated with those kinds of attacks. This means that, at a minimum, the entire security team needs a ‘light knowledge’ of AI/ML and data science, enough to ask the right questions of vendors and understand how their systems work under the hood, says Heinemeyer.
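
To make the idea concrete, here is a minimal, hypothetical sketch of the kind of unsupervised anomaly detection such tooling builds on, assuming inbound email has already been reduced to a handful of numeric features. The feature names, numbers, and thresholds below are invented for illustration, not any vendor’s schema.

```python
# A minimal sketch: score inbound-mail telemetry for anomalies without labels,
# which matters when the attacker's tooling (and its "signature") keeps changing.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Stand-in features per message: sender reputation, link count, reply-chain depth.
benign = rng.normal(loc=[0.2, 2.0, 3.0], scale=[0.1, 1.0, 1.5], size=(500, 3))
# A few messages with unusually polished text, many links, and no prior history.
suspicious = rng.normal(loc=[0.9, 8.0, 0.0], scale=[0.05, 1.0, 0.5], size=(5, 3))
emails = np.vstack([benign, suspicious])

detector = IsolationForest(contamination=0.02, random_state=0)
detector.fit(emails)
scores = detector.decision_function(emails)   # lower score = more anomalous

# Surface the most anomalous messages for a human analyst to triage.
worst = np.argsort(scores)[:5]
print("indices flagged for review:", worst)
```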

For larger and more mature organizations, security leaders may be advised to start developing more robust in-house data science and ML expertise. Plenty of global SOCs have already begun investing in recruiting data scientists to do custom machine learning, many of whom started long before ChatGPT hit the scene, according to Forcepoint CTO Petko Stoyanov. He believes this trend may accelerate as SOCs try to put threat hunters on the ground who can navigate a threat landscape supercharged by malicious AI tooling. But security leaders are likely to run into talent shortages on that front. “Honestly, try finding someone that does cyber and data science—if you want to talk about needle in a haystack, that’s it,” Stoyanov tells CSO.

This will require some creative staffing and team building to overcome. Based on his experience in the field, Stoyanov suggests three-person teams built to hunt quickly: a threat hunter with deep security experience, a data scientist with analysis and machine learning experience, and a developer to help productize and scale their discoveries.

“What usually happens is you need a developer between those first two folks. You have the big brains that can do the math, the person who can go find the bad guys, and then someone who can implement their work going forward in the security infrastructure,” Stoyanov explains.

Giving a single threat hunter data science and development resources at the same time could significantly boost their productivity in finding adversaries lurking in the network. It also avoids the disappointment of hunting for unicorns who hold all three specialized skill sets.
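
As a rough illustration of how that division of labor can land in code, the hypothetical snippet below imagines the data scientist’s clustering prototype wrapped by the developer into a small function the threat hunter can run against fresh login telemetry. The column names and data are invented for this sketch.

```python
# Hypothetical hand-off between data scientist, developer, and threat hunter:
# the clustering prototype becomes a small, reusable function over login events.
# Invented columns: failed_logins, geo_distance_km, off_hours_flag.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler


def flag_outlier_logins(events: np.ndarray, eps: float = 0.9) -> np.ndarray:
    """Return indices of login events DBSCAN leaves unclustered (label == -1)."""
    scaled = StandardScaler().fit_transform(events)
    labels = DBSCAN(eps=eps, min_samples=5).fit_predict(scaled)
    return np.where(labels == -1)[0]


if __name__ == "__main__":
    rng = np.random.default_rng(7)
    normal = rng.normal([1.0, 50.0, 0.1], [0.5, 20.0, 0.1], size=(300, 3))
    odd = np.array([[15.0, 4000.0, 1.0]])   # brute-force attempts from far away, off hours
    hits = flag_outlier_logins(np.vstack([normal, odd]))
    print("events worth a hunter's attention:", hits)
```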

In addition to generating social engineering attacks, another risk down the line that could come from generative AI in the hands of threat actors is the automated creation of malicious code to exploit a wider range of known vulnerabilities.

“People are suggesting that since it’s making it easier to write code, it will make it easier to create exploits,” Andy Patel, researcher for WithSecure, tells CSO. Patel’s team recently produced a thorough report for the Finnish Transport and Communications Agency detailing some of the potential for AI-enabled cyberattacks. One potential risk it flags is new ChatGPT-enabled tooling that makes it easier to enumerate security issues across a body of open-source repositories. “These models will also make it easier for people to start doing that. And that might open a lot more vulnerabilities, or it might mean that a lot more vulnerabilities get fixed,” he muses. “We don’t know which way that’s going to go.”

So the vulnerability management side could also turn into an AI arms race, as security teams scramble to use AI to fix flaws faster than AI-enabled attackers can craft exploits. “Organizations could have people start looking at these tools themselves to plug their own vulnerabilities, especially if they write their own software,” says Patel. In the vendor world, he expects to see a “slew of startups using LLMs to do vulnerability discovery.”
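
A rough sketch of what “plugging your own vulnerabilities” with an LLM might look like appears below. The query_llm() helper is a placeholder, not a real library call, and anything the model returns should be treated as a lead for a human reviewer rather than a finding.

```python
# Hypothetical sketch: walk a repo, ask a model to flag likely weaknesses per file.
from pathlib import Path

REVIEW_PROMPT = (
    "You are a security code reviewer. List any likely vulnerabilities in the "
    "following file (injection, path traversal, hard-coded secrets, unsafe "
    "deserialization). For each, give the line, the weakness, and a one-line fix.\n\n"
    "FILE: {name}\n-----\n{source}"
)


def query_llm(prompt: str) -> str:
    # Placeholder: swap in whichever hosted or self-hosted model the team uses.
    return "(model output would appear here)"


def review_repo(repo_root: str) -> dict[str, str]:
    findings = {}
    for path in Path(repo_root).rglob("*.py"):
        source = path.read_text(errors="ignore")[:8000]   # stay within context limits
        findings[str(path)] = query_llm(REVIEW_PROMPT.format(name=path.name, source=source))
    return findings


if __name__ == "__main__":
    print(review_repo("."))
```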

In addition to new tools, this dynamic could also open room for new security roles, says Bart Schouw, chief evangelist for Software AG. “Businesses may need to strengthen their teams with new roles like prompt engineers,” he says. Prompt engineering is the burgeoning practice of crafting prompts for LLMs to get quality generated output from them. This could be hugely beneficial in areas like vulnerability enumeration and classification across an enterprise.
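
For illustration only, a prompt engineer working on vulnerability triage might structure a classification prompt along these hypothetical lines, constraining the model to a machine-parseable answer. The CWE IDs are real categories, but the finding text and output schema are made up for this sketch.

```python
# Illustrative prompt template for classifying a scanner finding into CWE + severity.
CLASSIFY_PROMPT = """\
You are a vulnerability triage assistant.
Classify the finding below and answer ONLY with JSON matching this schema:
  {"cwe": "<CWE-ID>", "severity": "low|medium|high|critical", "rationale": "<one sentence>"}

Example finding: user-supplied 'id' parameter concatenated into a SQL query string.
Example answer:  {"cwe": "CWE-89", "severity": "high", "rationale": "SQL injection via string concatenation."}

Finding: """


def build_prompt(finding_text: str) -> str:
    # Plain concatenation keeps the literal braces in the JSON schema intact.
    return CLASSIFY_PROMPT + finding_text


if __name__ == "__main__":
    print(build_prompt("Archive extraction writes entries outside the target directory."))
```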

Skills to protect enterprise AI

Even as these threats from AI-enabled attackers proliferate, another significant risk looms within the enterprise: the potential exposure of vulnerable AI systems (and the training data associated with them) to attacks and other failures of confidentiality, integrity, or availability.

“What is clear is that, whereas the last 5-10 years have been characterized by the need for security practitioners to internalize the idea that they need to incorporate more AI into their processes, the next 5-10 years are likely to be characterized by the need for AI/ML practitioners to internalize the idea that security concerns need to be treated as first-class concerns in their processes,” says Sohrob Kazerounian, distinguished AI researcher for Vectra.

Already there’s movement among security thought leaders to build out AI red teaming and AI threat modeling into the development and deployment of future AI systems. Organizations seeking to build that capability out will need to bolster their red teams with an infusion of AI and data science talent.

“The red team is going to have to get some expertise in how you break AI and ML systems,” explains Diana Kelley, CSO and co-founder of Cybrize. “Leaders will be called to pull in data science people that are interested in the security side and vice versa. It’ll be a matter of recruiting data science folks into security the same way that some of our best application red teaming folks started as application developers.”
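The kind of exercise Kelley describes can start small. The toy sketch below, with invented data and feature names, shows the shape of an evasion test an AI-aware red team might run: train a simple classifier, then nudge a malicious sample’s mutable features until the verdict flips. Real engagements would target the organization’s own models rather than this stand-in.

```python
# Toy evasion exercise: perturb a malicious sample until a simple model misclassifies it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
benign = rng.normal([0.2, 0.3], 0.1, size=(200, 2))      # e.g. entropy, packed-section ratio
malicious = rng.normal([0.8, 0.7], 0.1, size=(200, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

sample = np.array([0.8, 0.7])                             # known-malicious starting point
step = -0.02 * np.sign(clf.coef_[0])                      # move against the model's weights
for i in range(50):
    if clf.predict(sample.reshape(1, -1))[0] == 0:        # verdict flipped: evasion "works"
        print(f"classifier evaded after {i} steps; perturbed features: {sample}")
        break
    sample = sample + step
```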

This will also be an exercise in security by design, in which the people responsible for building out and deploying AI and ML models in the enterprise are trained by, and collaborate with, security teams to understand the risks and test for them along the way. That will be the key to hardening these systems for the future.

“You need to retain the ML/AI experts that designed and built the system. You need to couple them up with your technical hackers and then with your security operations team,” says Steve Benton, vice president and general manager of threat intelligence at Anomali, explaining that together they should be creating potential risky scenarios, testing them out, and reengineering accordingly. “The answer here is purple teaming, not just red teaming. Remembering that some of this testing could involve ‘poison’ you need a reference model set-up to do this with the ability to restore and retest scenario by scenario.”
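
A minimal sketch of that restore-and-retest loop, with invented data and simple label flipping standing in for the “poison,” might look like this: keep a pristine reference copy of the training set, apply one poisoning scenario at a time, measure the damage, then restore before the next scenario.

```python
# Minimal purple-team sketch: reference training set, poison, measure, restore, repeat.
import copy
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

reference = (X_train.copy(), y_train.copy())               # the restore point
baseline = LogisticRegression().fit(*reference).score(X_test, y_test)
print(f"baseline accuracy: {baseline:.3f}")

for flip_rate in (0.05, 0.20, 0.40):                        # one poisoning scenario per rate
    Xp, yp = copy.deepcopy(reference)                       # restore, then poison
    idx = rng.choice(len(yp), size=int(flip_rate * len(yp)), replace=False)
    yp[idx] = 1 - yp[idx]                                   # flip a fraction of training labels
    poisoned = LogisticRegression().fit(Xp, yp).score(X_test, y_test)
    print(f"flip {flip_rate:.0%} of labels -> accuracy {poisoned:.3f}")
```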