What is the true potential impact of artificial intelligence on cybersecurity?

Feature
Apr 10, 2023 | 10 mins
Artificial Intelligence, Encryption, Machine Learning

Greater scale and symbolic models are necessary before AI and machine learning can meet big challenges like breaking the best encryption algorithms.

Will artificial intelligence become clever enough to upend computer security? AI is already surprising the world of art by producing masterpieces in any style on demand. It’s capable of writing poetry while digging up arcane facts in a vast repository. If AIs can act like a bard while delivering the comprehensive power of the best search engines, why can’t they shatter security protocols, too?

The answers are complex, rapidly evolving, and still murky. AI makes some parts of defending computers against attack easier. Other parts are more challenging and may never yield to any intelligence, human or artificial. Knowing which is which, though, is difficult. The rapid evolution of the new models makes it hard to say where AI will or won’t help with any certainty. The most dangerous statement may be, “AIs will never do that.”

Defining artificial intelligence and machine learning

The terms “artificial intelligence” and “machine learning” are often used interchangeably, but they are not the same. AI refers to technology that can mimic human behavior or go beyond it. Machine learning is a subset of AI that uses algorithms to identify patterns in data to gain insight without human intervention. The goal of machine learning is to help humans or computers make better decisions. Much of what is today referred to as AI in commercial products is actually machine learning.

AI has strengths that can be immediately useful both to people defending systems and to people breaking in. These models can search for patterns in massive amounts of data and often find ways to correlate new events with old ones.

Many machine learning techniques are heavily statistical, and so are many attacks on computer systems and encryption algorithms. The widespread availability of new machine learning toolkits is making it easy for attackers and defenders to try out the algorithms. The attackers use them to search for weaknesses and the defenders use them to watch for signs of the attackers.
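
That statistical kinship is easy to see in miniature. The oldest attacks on ciphers simply compare the statistics of the ciphertext against the known statistics of the language, and the sketch below (written for this explanation, with rough letter frequencies and a toy shift cipher, nothing from a real system) recovers a key by picking the shift whose output looks most like English.

    from collections import Counter

    # Rough English letter frequencies, the kind of prior statistics such attacks lean on.
    ENGLISH = {'e': .127, 't': .091, 'a': .082, 'o': .075, 'i': .070, 'n': .067,
               's': .063, 'h': .061, 'r': .060, 'd': .043, 'l': .040, 'u': .028}

    def shift(text, key):
        """Apply a Caesar shift to the letters of the text, leaving everything else alone."""
        return ''.join(chr((ord(c) - 97 + key) % 26 + 97) if c.isalpha() else c
                       for c in text.lower())

    def english_score(text):
        """Higher when the letter distribution of the text resembles English."""
        counts = Counter(c for c in text if c.isalpha())
        total = sum(counts.values()) or 1
        return sum(ENGLISH.get(c, 0.001) * n / total for c, n in counts.items())

    def break_caesar(ciphertext):
        """Try every key and keep the one whose output statistics look most like English."""
        return max(range(26), key=lambda k: english_score(shift(ciphertext, k)))

    ciphertext = shift("statistics quietly betray the key", 7)
    key = break_caesar(ciphertext)
    print(key, shift(ciphertext, key))   # recovers the plaintext without being told the key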

AI also falls short of expectations and sometimes fails. A model can express only what is in its training data and can be maddeningly literal, as computers often are. Models are also unpredictable and nondeterministic thanks to their use of randomness, a setting often called their “temperature.”
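
That temperature is just a knob on how the model samples its next word. A minimal sketch of the idea (made-up scores, not any real model's output):

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng()):
        """Pick the next token index from raw model scores; higher temperature flattens the odds."""
        scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
        probs = np.exp(scaled - scaled.max())    # numerically stable softmax
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))

    scores = [2.0, 1.5, 0.3]                     # made-up scores for three candidate tokens
    print([sample_next_token(scores, temperature=1.0) for _ in range(8)])   # varies run to run
    print([sample_next_token(scores, temperature=0.01) for _ in range(8)])  # nearly always token 0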

Cybersecurity use cases for artificial intelligence

Computer security is also multifaceted and defending systems requires attention to arcane branches of mathematics, network analysis, and software engineering. To make matters more complicated, humans are a big part of the system, and understanding their weaknesses is essential.

The field is also a mixture of many subspecialties that can be very different. What works at, say, securing a network layer by detecting malicious packets may be useless in hardening a hash algorithm.

“Clearly there are some areas where you can make progress with AIs,” says Paul Kocher, CEO of Resilian, who has explored using new technology to break cryptographic algorithms. “For bug hunting and double-checking code, it’s going to be better than fuzzing [the process of feeding random, malformed inputs to a program to trigger flaws].”
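
For comparison, here is roughly what that baseline looks like: a dumb fuzzer that mutates a seed input at random and waits for the target to crash. The parser below is a stand-in invented for this sketch, not code from the article.

    import random

    def parse_record(data: bytes) -> int:
        """Stand-in for code under test: it trusts a length byte at the front of the record."""
        if not data:
            return 0
        length = data[0]
        return data[1 + length]          # blows up when the length byte points past the buffer

    def fuzz(seed: bytes, rounds: int = 10_000) -> None:
        """Flip a few random bytes per round and report the first input that crashes the target."""
        for i in range(rounds):
            mutated = bytearray(seed)
            for _ in range(random.randint(1, 4)):
                mutated[random.randrange(len(mutated))] = random.randrange(256)
            try:
                parse_record(bytes(mutated))
            except Exception as exc:     # any crash is a lead worth triaging
                print(f"round {i}: {exc!r} on input {bytes(mutated).hex()}")
                return

    fuzz(b"\x03abcXYZ")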

Some are already finding success with this approach. The simplest examples involve codifying old knowledge and reapplying it. Conor Grogan, a director at Coinbase, asked ChatGPT to check out a live contract that was running on the Ethereum blockchain. The AI came back with a concise list of weaknesses along with suggestions for fixing them.

How did the AI do this? The AI’s mechanism may be opaque, but it probably relied, in one form or another, on public discussions of similar weaknesses in the past. It was able to line up the old insights with the new code and produce a useful punch list of issues to be addressed, all without any custom programming or guidance from an expert.
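
Reproducing that kind of review takes little more than handing the source code to a model. The sketch below uses the OpenAI Python client as one illustration; the article does not say which interface Grogan used, and the file name, prompt, and model choice are assumptions made for the example.

    # Requires the openai package and an API key in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()
    contract_source = open("MyToken.sol").read()    # hypothetical contract file

    response = client.chat.completions.create(
        model="gpt-4",                              # illustrative model choice
        messages=[
            {"role": "system", "content": "You are a smart-contract security reviewer."},
            {"role": "user", "content": "List likely vulnerabilities and fixes:\n\n" + contract_source},
        ],
    )
    print(response.choices[0].message.content)      # the model's punch list of issues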

Microsoft is beginning to commercialize this approach. It has trained Security Copilot, a version of GPT-4 with foundational knowledge of protocols and encryption algorithms, so it can respond to prompts and assist humans.

Some are exploiting the deep and broad reservoir of knowledge embedded in the large language models. Researchers at Claroty relied on ChatGPT as a time-saving assistant with an encyclopedic knowledge of coding. They were able to win a hacking contest using ChatGPT to write the code needed to exploit several weaknesses in concert.

Attackers may also use the AI’s ability to shape and reshape code. Joe Partlow, CTO at ReliaQuest, says that we don’t really know how the AIs actually “think,” and this inscrutability may be useful. “You see code completion models like Codex or Github Copilot already helping people write software,” he says. “We’ve seen malware mutations that are AI-generated already. Training a model on, say, the underhanded C contest winners could absolutely be used to help devise effective backdoors.”

Some well-established companies are using AI to look for network anomalies and other issues in enterprise environments. They rely on some combination of machine learning and statistical inference to flag behavior that might be suspicious.
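
The statistical flavor those products share can be sketched in a few lines. The example below uses made-up flow features and scikit-learn's IsolationForest as a stand-in for far more elaborate production models: it flags connections that do not resemble the traffic it was trained on.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Made-up per-connection features: bytes sent, bytes received, duration in seconds.
    normal_traffic = rng.normal(loc=[5_000, 20_000, 2.0],
                                scale=[1_000, 5_000, 0.5], size=(500, 3))
    new_events = np.array([
        [5_200, 19_000, 2.1],       # looks like everything else
        [900_000, 150, 0.1],        # huge upload, tiny response, gone in a blink
    ])

    model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)
    print(model.predict(new_events))  # 1 = looks normal, -1 = flagged for a closer look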

Using AI to find weaknesses, break encryption

There are limits, though, to how deeply these scans can see into data flows, especially those that are encrypted. Indeed, if an attacker could reliably tell which encrypted packets are good or bad just by examining the ciphertext, that would amount to breaking the underlying encryption algorithm, which is designed to make ciphertext indistinguishable from random noise.

The deeper question is whether AIs can find weaknesses in the lowest, most fundamental layers of computer security. There have been no major announcements, but some are beginning to wonder and even speculate about what may or may not work.

There are no obvious answers about deeper weaknesses. The AIs may be programmed to act like humans, but underneath they may be radically different. The large models are collections of statistical relationships arranged in multiple hierarchies. They gain their advantages with size and many of the recent advances have come simply from rapidly scaling the number of parameters and weights.

At their core, many of the most common approaches to building large machine-learning models rely on vast amounts of linear mathematics, chaining together sequences of very large matrices and tensors. That linearity is a crucial part of the design because it keeps the feedback used during training mathematically tractable.
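
It is also a limitation. A stack of purely linear layers, however deep, collapses into a single matrix, exactly the kind of structure classical statistical analysis handles well. A small sketch:

    import numpy as np

    rng = np.random.default_rng(1)
    W1, W2, W3 = (rng.normal(size=(8, 8)) for _ in range(3))   # three "layers" of weights
    x = rng.normal(size=8)

    deep = W3 @ (W2 @ (W1 @ x))          # pass the input through the layers one by one
    collapsed = (W3 @ W2 @ W1) @ x       # or precompute a single equivalent matrix

    print(np.allclose(deep, collapsed))  # True: without nonlinearity, depth adds nothing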

The best encryption algorithms, though, were designed to be non-linear. Algorithms like AES or SHA rely upon repeatedly scrambling the data by passing it through a set of functions known as S-boxes. These functions were carefully engineered to be highly non-linear. More importantly, the algorithms’ designers ensured that they were applied enough times to be secure against some well-known statistical attacks.
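
The difference is easy to see with a toy 4-bit S-box (the table below is a made-up permutation for illustration, not taken from AES or any real cipher): a map that is linear over bits always satisfies f(a XOR b) = f(a) XOR f(b), while a well-chosen S-box almost never does.

    # Toy 4-bit S-box, a made-up permutation of 0..15 (not the table from any real cipher).
    SBOX = [7, 12, 1, 9, 14, 0, 5, 11, 3, 8, 15, 2, 10, 13, 4, 6]

    def rotate(x):
        """Rotate the four bits left by one: a map that is linear over GF(2)."""
        return ((x << 1) | (x >> 3)) & 0xF

    def is_linear(f):
        """True if f(a ^ b) == f(a) ^ f(b) for every pair of 4-bit inputs."""
        return all(f(a ^ b) == f(a) ^ f(b) for a in range(16) for b in range(16))

    print(is_linear(rotate))             # True: rotation distributes over XOR
    print(is_linear(lambda x: SBOX[x]))  # False: the substitution step breaks linearity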

Some of these attacks have much in common with modern AIs. For decades, cryptographers have used large collections of statistics to model the flow of data through an encryption algorithm in much the same way that AIs model their training data. In the past, the cryptographers did the complex work of tweaking the statistics using their knowledge of the encryption algorithms.

One of the best-known examples is often called differential cryptanalysis. While it was first described publicly by Adi Shamir and Eli Biham, some of the designers of earlier algorithms like NIST’s Data Encryption Standard said they had understood the approach and hardened the algorithm against it. Algorithms like AES that were hardened against differential cryptanalysis should be able to withstand attacks from AIs that deploy much the same linear statistical approaches.
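
The bookkeeping at the heart of differential cryptanalysis fits in a few lines: a difference distribution table counts, for each input XOR difference, how often each output difference appears. Real attacks work over full ciphers and many rounds; the sketch below just scores the same toy S-box used above.

    from collections import Counter

    SBOX = [7, 12, 1, 9, 14, 0, 5, 11, 3, 8, 15, 2, 10, 13, 4, 6]   # same toy table as above

    def difference_distribution(sbox):
        """ddt[(din, dout)] = number of inputs x with sbox[x ^ din] ^ sbox[x] == dout."""
        ddt = Counter()
        for din in range(16):
            for x in range(16):
                ddt[(din, sbox[x ^ din] ^ sbox[x])] += 1
        return ddt

    ddt = difference_distribution(SBOX)
    # Attackers hunt for heavily skewed entries; designers choose S-boxes and round counts
    # that keep every nonzero-difference count small, starving the statistics of signal.
    worst = max((item for item in ddt.items() if item[0][0] != 0), key=lambda item: item[1])
    print(worst)   # the most biased (input difference, output difference) pair and its count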

There are deeper foundational issues. Many of the public-key algorithms rely upon numbers with thousands of digits of precision. “This is kind of just an implementation detail,” explains Nadia Heninger, a cryptographer at UCSD, “but it may go deeper than that because these models have weights that are floats, and precision is extremely important.”

Machine learning algorithms often cut corners on precision because exactness hasn’t been necessary for success in imprecise areas like human language, with its sloppy, slang-filled, and protean grammar. That only means some off-the-shelf tools may be poor fits for cryptanalysis; the general algorithms can be adapted, and some researchers are already exploring the topic.
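
Heninger's point is easy to demonstrate (the modulus below is a stand-in number, not a real key): a 64-bit float keeps only about 16 significant digits, while the integer arithmetic behind RSA-style cryptography has to be exact to the last digit.

    n = 10**150 + 7                   # stand-in 151-digit modulus (illustrative, not a real key)

    # A 64-bit float keeps only ~16 significant digits, so the low end of n simply vanishes.
    print(n == int(float(n)))                  # False
    print(float(n) == float(n + 10**100))      # True: the float cannot tell these moduli apart

    # Python's arbitrary-precision integers keep every digit, which RSA-style math depends on.
    print(pow(65537, 12345, n) % 10)           # modular exponentiation, exact to the last digit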

Greater scale, symbolic models could make AI a bigger threat

A difficult question, though, is whether massive scale will make a difference. If the increase in raw power has allowed AIs to make great leaps in seeming more intelligent, perhaps there is some threshold beyond which they will find more holes than the older differential techniques did. Perhaps some of those older techniques can even be used to guide the machine learning algorithms more effectively.

Some AI scientists are imagining ways to marry the sheer power of large language models with more logical approaches and formal methods. Deploying automated mechanisms for reasoning about mathematical concepts may be much more powerful than simply trying to imitate the patterns in a training set.

“These large language models lack a symbolic model of what they’re actually generating,” explains Simson Garfinkel, author of The Quantum Age and security researcher. “There’s no reason to assume that the security properties will be embedded, but there’s already lots of experience using formal methods to find security vulnerabilities.”
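
A small taste of what Garfinkel means by formal methods (the Z3 solver here is my example; the article names no particular tool): rather than pattern-matching on past bugs, a solver searches exhaustively for an input that violates a stated property, such as a length check that an integer overflow can defeat.

    # Requires the z3-solver package; Z3 is used here as an illustration only.
    from z3 import BitVec, Solver, ULE, ULT, sat

    header, payload = BitVec("header", 32), BitVec("payload", 32)
    total = header + payload                 # 32-bit addition wraps around silently

    s = Solver()
    s.add(ULE(header, 1024))                 # the code under analysis assumes a small header
    s.add(ULT(total, header))                # can the total still end up smaller than the header?

    if s.check() == sat:
        m = s.model()
        print("overflow witness:", m[header], m[payload])   # concrete inputs that defeat the check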

AI researchers are working to expand the power of large language models by grafting on better symbolic reasoning. Stephen Wolfram, for instance, one of the developers of Wolfram Alpha, explains that this is one of the goals. “Right now in Wolfram Language we have a huge amount of built-in computational knowledge about lots of kinds of things,” he wrote. “But for a complete symbolic discourse language we’d have to build in additional ‘calculi’ about general things in the world: If an object moves from A to B and from B to C, then it’s moved from A to C, etc.”

Whitfield Diffie, a cryptographer who pioneered the area of public key cryptography, thinks that approaches like this with AIs may be able to make progress in new, unexplored areas of mathematics. They may think differently enough from humans to be valuable. “People try testing machine mathematicians against known theories in which people have discovered lots of theorems, theorems that people proved and so of a type people are good at proving,” he says. “Why not try them on something like higher dimensional geometries where human intuition is lousy and see if they find things we can’t?”

Cryptanalysis is just one of a wide variety of mathematical areas that haven’t been tested. The possibilities may be endless because mathematics itself is infinite. “Loosely speaking, if an AI can make a contribution to breaking into systems that is worth more than it costs, people will use it,” predicts Diffie. The real question is how.

Peter Wayner
Contributing writer

Peter Wayner is the author of more than 16 books on diverse topics, including open source software ("Free for All"), autonomous cars ("Future Ride"), privacy-enhanced computation ("Translucent Databases"), digital transactions ("Digital Cash"), and steganography ("Disappearing Cryptography").
