AI May Not Destroy the World, but There Are Other Risks

For some, AI is the stuff of nightmares. Whether it’s HAL refusing to open the pod bay doors in 2001: A Space Odyssey or the wild thought experiment of Roko’s Basilisk—or even way back to (retellings of) Frankenstein’s monster or the ancient legend of the Golem—there’s a fear that our creations will turn against us. Even Stephen Hawking worried that AI could destroy us all.

But the threats from AI, at least today, are more mundane. While many businesses are looking to the likes of ChatGPT, Baidu’s ERNIE or Google’s Bard to help with everyday tasks like drafting documents or working with spreadsheets, others have more nefarious intent in mind.

The New Security Risks of Intelligent Chatbots

For cybercriminals, AI will make their work easier and more efficient. While the creators of AI tools have implemented safeguards against misuse, criminals are hard at work trying to subvert and override them. According to a BlackBerry survey, more than half of IT professionals believe a cyberattack triggered by ChatGPT will take place this year. And 71% believe ChatGPT could already be used by nation-states in their hacking and phishing attempts.

How will this be put into practice?

One way will be more effective email phishing scams. Today, these emails can often be recognized by poor grammar, spelling mistakes or a vague sense that something is not quite right. With the help of artificial intelligence, this could change. AI can generate fluent, convincing phishing emails en masse, making phishing attacks far more effective.

Creating malware could also become easier with AI. ChatGPT has only limited programming capabilities, and those using it for this purpose report that while it tends to produce something useful, the results often need further work. Even so, cybercriminals with limited programming skills will be able to use AI tools to create malware—accelerating the need for automated, possibly AI-driven, security tools to detect these attacks.

There are also risks arising from the way each AI is trained on a large corpus of text, which is what enables the bot to generate almost any type of content, including answers to questions. Some AI tools also learn from user input. With so many users, and with the potential to collect sensitive data either from that input or from the large data set on which the AI was trained, a bad actor could use the system to extract useful information. That information doesn’t necessarily have to be private—details people are happy to share on social media could be used to create targeted phishing attacks or other scams.

Fighting Back Against AI

Some risks of AI may be overstated—at least today—but security teams should pay close attention to advances in this space and how cybercriminals are using these tools. There are also practical steps they can take to minimize risk:

• Implement real-time network monitoring that detects and responds to malicious activity (a minimal sketch of this idea follows the list).
• Make two-factor or multifactor authentication mandatory to provide an additional layer of security.
• Update and patch software regularly so that known vulnerabilities are eliminated.
• Deploy antivirus software to detect and block malware.
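To make the monitoring item concrete, here is a minimal sketch in Python of log-based detection. It scans a few hypothetical authentication log lines and flags any source address that exceeds a failed-login threshold; the log format, addresses and threshold are illustrative assumptions, not a production configuration.

```python
import re
from collections import Counter

# Hypothetical auth-log excerpt; a real deployment would stream lines
# from an actual log source (e.g., /var/log/auth.log) in real time.
LOG_LINES = [
    "Jan 10 12:00:01 sshd: Failed password for root from 203.0.113.7",
    "Jan 10 12:00:03 sshd: Failed password for root from 203.0.113.7",
    "Jan 10 12:00:05 sshd: Failed password for admin from 203.0.113.7",
    "Jan 10 12:01:10 sshd: Accepted password for alice from 198.51.100.4",
]

FAILED = re.compile(r"Failed password for \S+ from (\S+)")
THRESHOLD = 3  # alert after this many failures from one address (assumed)

def suspicious_sources(lines, threshold=THRESHOLD):
    """Count failed logins per source address and flag heavy offenders."""
    failures = Counter(
        m.group(1) for line in lines if (m := FAILED.search(line))
    )
    return [ip for ip, count in failures.items() if count >= threshold]

for ip in suspicious_sources(LOG_LINES):
    print(f"ALERT: {ip} exceeded the failed-login threshold")
```

A real system would add automated responses, such as blocking the offending address or alerting an on-call engineer, rather than simply printing a warning.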

These security measures should all be in place even without the threat of AI, of course, but this new threat provides fresh impetus to get it done, and perhaps to convince those holding the purse strings to loosen them a little.

What about specific techniques to fight back against AI attacks? One way is with AI itself. We are in an AI arms race, and the side with the better algorithm will win. Businesses should be investigating and investing now in advanced AI protection technology that can recognize patterns in data and better detect potential AI-based cybersecurity threats.
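As a toy illustration of that idea, the sketch below trains a simple text classifier to separate phishing-style wording from benign messages. The handful of training examples and labels are invented for illustration; a real AI-based detector would need large curated datasets, richer features and continuous retraining as attackers adapt.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: 1 = phishing-style, 0 = benign.
emails = [
    "Your account is locked, verify your password immediately here",
    "Urgent: wire transfer required today, reply with bank details",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft is ready for your review",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

print(model.predict(["Verify your password now to avoid suspension"]))
```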

But the best protection remains targeted education and training. Knowing how to spot the tell-tale signs of a malicious email or link, such as a strange sender address, suspicious language or unusual instructions, will help employees avoid a phishing attack, whether it was generated by AI or not. CTOs must invest in training for all staff and ensure that it stays up to date and covers the risks of AI-driven attacks.
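Those tell-tale signs can also be encoded as simple automated checks that back up the training. The sketch below is a minimal rule-based example in Python; the trusted domain and the list of suspicious phrases are hypothetical placeholders, not a vetted rule set.

```python
# Hypothetical trusted domain and phrase list, for illustration only.
TRUSTED_DOMAIN = "example.com"
SUSPICIOUS_PHRASES = ("act now", "verify your password", "wire transfer")

def phishing_signals(sender: str, body: str) -> list[str]:
    """Return human-readable warnings for a suspicious message."""
    warnings = []
    # Flag senders outside the organization's domain.
    if not sender.lower().endswith("@" + TRUSTED_DOMAIN):
        warnings.append(f"sender {sender!r} is outside {TRUSTED_DOMAIN}")
    # Flag urgent or credential-related language in the body.
    lowered = body.lower()
    for phrase in SUSPICIOUS_PHRASES:
        if phrase in lowered:
            warnings.append(f"suspicious language: {phrase!r}")
    return warnings

print(phishing_signals("it-support@examp1e.com",
                       "Urgent: verify your password within 24 hours"))
```

Checks like this catch only the crudest attacks, which is exactly why they complement, rather than replace, employee training.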

AI, like any new technology, will be used for good as well as evil. Many predict that we are just at the start of an AI revolution, while others see it as overhyped, with a long way to go before it becomes part of everyday life. What is undoubtedly true is that professionals need to do all they can to keep up with how it is being used and respond accordingly. Understanding the potential as well as the limitations of AI—and taking the necessary steps to address its security risks—will help companies benefit from it in the best possible way.

Joey Stanford

Joey Stanford brings more than 30 years of experience to his role as the VP of Privacy and Security at Platform.sh. Prior to joining Platform.sh, he managed information security and DevOps programs for companies in the U.S., France and the U.K. With a passion for free and open source software, Stanford is responsible for global security, data management and compliance, and for ensuring Platform.sh is a trusted custodian of its customers’ data.
