Maria Korolov
Contributing writer

AI-fueled search gives more power to the bad guys

Feature
Mar 29, 2023 | 9 mins
Artificial Intelligence | Data and Information Security | Generative AI

How to stay ahead of attacks: learn the risks of AI-based search engines, identify the skill sets needed to defend systems, and ensure employees can use AI tools safely without inviting attackers in.


Concerns about the reach of ChatGPT, and how much easier it may make it for bad actors to find sensitive information, have grown following Microsoft’s integration of ChatGPT into Bing and the release of GPT-4, the latest version of the technology. Within a month of the integration, Bing had crossed the 100 million daily user threshold. Meanwhile, GPT-4 improved the AI, which now has better reasoning skills, is more accurate, and can interpret images.

When ChatGPT was released in November 2022, hackers quickly jumped on the technology to help them write more convincing phishing emails and exploit code, but that was the old ChatGPT. According to OpenAI, the new model’s bar exam score rose from the 10th percentile to the 90th, its medical knowledge score went from 53% to 75%, its quantitative GRE score rose from the 25th percentile to the 80th, and the list goes on.

In other words, it’s already better than humans at most tests of knowledge and reasoning — and it can do it in the blink of an eye, for free, for anyone on the planet, in all the major languages.

“Generative AI takes regular humans and turns them into superhumans,” Dion Hinchcliffe, VP and principal analyst at Constellation Research, tells CSO. When combined with real-time search, it will give the bad guys a very powerful weapon. “It can connect the dots and find the patterns,” he says.

That’s something that traditional web search engines can’t do. Enterprises have had tools that can do this for things like threat intelligence, he says, but they cost millions of dollars and are well out of reach for small-time cybercriminals.

Open-source intelligence

If an attacker wants to know what technologies a company is using, they can search job listings and resumes of former employees, manually correlate the data, and then use it to create convincing lures. Attackers already do this. With the new AI tools, however, the process becomes much faster and more efficient, and the lures more convincing. “ChatGPT just makes it more accessible,” Jeetu Patel, EVP and GM of security and collaboration at Cisco’s Webex, tells CSO.

In fact, there’s no shortage of publicly available information on the internet, says Etay Maor, senior director of security strategy at Cato Networks. For example, the OSINT Framework page lists hundreds of free public sources of information about people, companies, IP addresses, and much more.

“Finding material about a target is not the challenge faced by attackers today,” Maor tells CSO. The challenge is weaving this information into something useful — for example, turning publicly available information into a convincing phishing email, in an appropriate writing style. “The fact that responses by ChatGPT are so human-like definitely makes it easier to create a believable conversation with a target,” he says. “This makes social engineering much easier.”

Attackers will also be able to use AI to bring the public data together faster or look at areas that a human might not think to pursue, Mike Parkin, cyber engineer at Vulcan Cyber, tells CSO. How good it is will probably depend on the person using it. “It’s unlikely AI is going to offer up correlations between obscure data points without prompting,” he says.

In addition, public-facing tools like ChatGPT, Bing Search, or Google’s Bard will have guardrails in place to try to limit the most malicious applications. But there will probably soon be commercial subscription services that will enable fully customized and automated phishing with just a bit of creative coding on the attacker’s part, Parkin says.

As this type of attack becomes easier and quicker, malicious actors will begin to target companies and organizations that previously might not have been worth their time. “Once automated, actors will no longer limit themselves to high value targets but will leverage their investment to get as much return as possible by casting a wider net,” Pascal Geenens, director of threat intelligence at Radware, tells CSO.

Natural language search risks

Today, some of the most interesting information — interesting from a malicious actor’s point of view — requires some technical skill to get at. With AI, however, English is the new programming language, says Yale Fox, founder and CEO at Applied Science Group. In the past, for example, an attacker wanting to scan a company’s perimeter with Nmap had to type out the command and its options by hand. Now they could simply say something like, “Scan YourCorp.com and look for open ports, then identify what applications and versions they are running and check to see if there is a known exploit or vulnerability — and do this in a passive way so that it is harder to detect,” Fox tells CSO.
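To see what that natural-language shortcut replaces, here is a rough sketch of the kind of hand-scripted scan an attacker, or a defender testing their own perimeter, would otherwise have to write. The target hostname, port count, and timing options are illustrative assumptions rather than a recipe from Fox, and a scan like this should only ever be run against systems you are authorized to test.

# Minimal sketch, assuming Nmap is installed and the caller is authorized to scan the target.
# The hostname and flags are illustrative placeholders, not a recommended configuration.
import subprocess

TARGET = "yourcorp.example"  # hypothetical target domain

result = subprocess.run(
    [
        "nmap",
        "-sV",                  # probe open ports for service and version information
        "-T2",                  # slower "polite" timing template, less likely to trip detection
        "--top-ports", "100",   # limit the scan to the 100 most common ports
        TARGET,
    ],
    capture_output=True,
    text=True,
    check=False,
)

print(result.stdout)  # the raw report a person would then have to interpret

The point of the example is the gap it illustrates: every flag choice and every line of output interpretation that used to require expertise can now be requested, and explained back, in plain English.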

Risks from enterprise use of AI chatbots

If an attacker is able to get into enterprise systems and gain access to AI-powered enterprise search, the dangers are even greater. “They could search the entire data set, both structured and unstructured, quickly and find valuable information, which was not possible before, especially in unstructured data,” says Andy Thurai, VP and principal analyst at Constellation Research.

There’s also a possibility that tools like ChatGPT might give attackers access to sensitive company data because the company’s own employees shared it with the AI. In fact, many firms — including JPMorgan, Amazon, Verizon, Accenture, and Walmart — have reportedly prohibited their staff from using ChatGPT.

According to a recent survey by Fishbowl, a work-oriented social network, 43% of professionals use ChatGPT or similar tools at work, up from 27% a month prior. Of those using it, 70% don’t tell their bosses. Employees might give the AI tool access to internal company information for any number of reasons: to help create code, draft communications, or do any of the other useful things that these generative AI tools can do.

The reason OpenAI made ChatGPT free was so that the AI could learn from its interactions with users. “There have already been examples that indicate that ChatGPT has had access to some internal company communications,” Robert Blumofe, EVP and CTO at Akamai, tells CSO. He suggests that enterprises use a standalone version of the chatbot, operating in isolation within the company’s walls, to keep information from leaking out.
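What that isolation might look like in practice: below is a minimal sketch, assuming the company runs its own internal, OpenAI-compatible model server (many self-hosted serving stacks expose a /v1/chat/completions route). The URL, model name, and prompt are all hypothetical placeholders; the point is simply that prompts containing internal data are answered inside the corporate network rather than sent to a public service.

# Minimal sketch of an isolated, in-house chatbot call.
# The endpoint URL and model name are hypothetical; adapt them to whatever
# self-hosted model server the company actually deploys.
import requests

INTERNAL_LLM_URL = "https://llm.corp.internal/v1/chat/completions"  # hypothetical internal endpoint

def ask_internal_model(prompt: str) -> str:
    payload = {
        "model": "in-house-model",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    resp = requests.post(INTERNAL_LLM_URL, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Sensitive text stays inside the company's walls.
print(ask_internal_model("Summarize this internal incident report: ..."))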

Using the same technology attackers use is also a way to defend a company’s systems. “What will happen is a series of products will emerge that will consume that intelligence in real time, ahead of the hackers, and apply protections across the enterprise,” says Constellation Research’s Hinchcliffe. OpenAI has already announced a developer API for ChatGPT and dropped the price to a tenth of what its previous GPT models cost.
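A simple sketch of the kind of defensive integration Hinchcliffe describes: using the same ChatGPT developer API to triage inbound email for social-engineering red flags before a human ever reads it. This is an illustration rather than a product design; the prompt wording is invented, and a real deployment would need to keep sensitive message content out of third-party services or run under an approved enterprise arrangement.

# Minimal sketch: score an inbound email for phishing indicators via the ChatGPT developer API.
# Requires the openai Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_email(subject: str, body: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security analyst. Rate the likelihood (low/medium/high) "
                    "that the following email is a phishing or social-engineering attempt, "
                    "and list the indicators you relied on."
                ),
            },
            {"role": "user", "content": f"Subject: {subject}\n\n{body}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

print(triage_email("Urgent: update your payroll details", "Click here to confirm your account..."))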

AI-powered search engines need regulation

AI-powered search engines aren’t just faster and better than traditional ones, they also understand the context of both the question and the information they provide, which makes them potentially very dangerous in the wrong hands. “We need to lobby our elected representatives to set up some sort of oversight at the state and federal levels for responsible use and deployment of AI-based technology,” Baber Amin, COO at Veridium, tells CSO. “Self-governance should not be an option as we have already seen things go awry with Microsoft’s chatbot.”

Unfortunately, when cyber adversaries are foreign governments, regulations in the US or Europe might not be particularly effective. In fact, according to a recent BlackBerry survey of IT decision makers, 71% believe that nation-states may already be leveraging ChatGPT for malicious purposes.

In addition, once the technology is available in open-source form, it will quickly spread to attackers at all levels of sophistication. This has already happened with AI-powered image generators: open-source alternatives to OpenAI’s DALL-E 2 appeared even before DALL-E 2 itself was widely available to the public.

Staying ahead of AI-powered attacks

Given that AI technology is out there and that regulations aren’t likely to have much of an impact on the worst offenders, it comes down to individual enterprises to learn how to defend themselves. Unfortunately, the AI space is advancing very quickly right now. It’s hard enough to keep up with the pace of change of the technology, much less with how it’s being used.

Even for analysts, it is hard to predict all the creative and innovative ways adversaries will use new technologies to create new threats. This is why Forrester analyst Jeff Pollard recommends cybersecurity teams assign “look ahead” responsibilities to someone. “That person should be following what comes next, when it will arrive, and what it means for the organization,” he tells CSO. He also recommends that companies talk to their existing security vendors about how they plan to use AI to defend against adversaries. “And workshop those processes to figure out how they will impact you.”

Defending against AI-powered attacks starts with education

Security teams should educate themselves about the realities of AI and ensure that both security and technology staff understand these technologies well enough that the management team can get reliable advice, suggests David Hoelzer, director of research at Enclave Forensics.

The expertise needed will depend on the risks being addressed. Enterprises that deploy AI systems will need to develop expertise in defending those AIs from being attacked or poisoned, in addition to learning how to defend against AI-powered attacks in general. Companies should also build early warning systems based on scenario planning, Bart Schouw, chief evangelist in the office of the CTO at Software AG, tells CSO.

Some of this education is already happening, though not in ways that enterprises might expect. According to a survey released in mid-March by Wakefield Research on behalf of Devo Technology, 80% of security pros are using AI tools not provided by their company.

That same survey also showed something troubling: 99% of respondents said that malicious actors were better at using AI than their organization was. Reasons included better knowledge of AI advancements, fewer ethical constraints on the use of AI, more flexibility and agile approaches to the use of AI, more experience with AI, and a higher willingness to take risks.

Meanwhile, cybersecurity organizations should also map out their current skill sets and figure out what AI-related security skills they’re going to need in the future. The advent of human-level AI could follow a path similar to cloud deployment. Cloud existed for over a decade, but it wasn’t until adoption accelerated rapidly in recent years that security teams started to try to learn cloud security skills.

“By thinking about skills now, you can avoid making that mistake in the future,” Pollard says.