Susan Bradley
Contributing Writer

Let’s pump the brakes on the rush to incorporate AI into cybersecurity

Opinion
Apr 05, 2023 | 7 mins
Artificial Intelligence | Data and Information Security

AI has the promise to enhance many forms of security and help implement protective measures. But hold on a minute — as its take on just one Microsoft update shows, it’s not fast enough or expert enough to be trusted yet.

It seems that everyone is rushing to embed artificial intelligence into their solutions, and security offerings are among the latest to obtain this shiny new thing. Like many, I see the potential for AI to help bring about positive change, but also its potential as a threat vector.

To some, recent AI developments are a laughing matter. On April 1, 2023, the traditional day when technology and social media sites love to pull a fast one on us with often elaborate pranks, the Twitter account for the MITRE ATT&CK platform announced an #attackgpt Twitter bot: tweet a question about the anti-hacker knowledge base with the hashtag and an “AI” would answer. In reality, it was an April Fools’ prank, with MITRE’s social media team cranking out funny answers in the guise of a chatbot.

For many, the rise of AI chatbots is no joke. The risks of abuse inherent in the deployment of artificial intelligence are nothing new to CISOs — companies have begun to establish whole divisions that promise to ensure that AI follows ethical principles.

I have a deeper concern: What if the information a security bot provides is just dead wrong? In cybersecurity, it often takes several resources and researchers to come to a conclusion regarding the risk of a security vulnerability. If an AI does not know about the latest threats or vulnerabilities, its contribution to security is flawed and could leave the user exposed.

The first assessment is often not the right one

Too often in this era of clickbait journalism, I see overbroad or flat-out wrong articles about security that make an issue sound more serious, or an attack more widespread, than it turns out to be. Intrusions these days are more than likely hitting specific targets rather than whole industries, but you wouldn’t know that from the headlines. If that’s where an AI is getting its input, the output is going to be just as wrong.

Often with technology decisions, the first determination of a security problem is not the correct one. A case in point: the 2021 headlines about a cyberattack on a Florida water treatment plant, which had many concerned that attackers could remotely take control of water systems and damage them or poison the water with too many chemicals. The incident led to press conferences and even alerts from CISA about the potential for such attacks. It turns out that the root cause was human behavior and bad security hygiene. Rather than a remote cyberattack, the bad actors got in “by exploiting cybersecurity weaknesses, including poor password security, and an outdated operating system.”

If ChatGPT had based its recommendations and guidance on the security information published at the time, it would have told users that a remote attacker was at fault and offered guidance for protecting systems from remote attacks. The reality, buried in a long-forgotten follow-up, was that the issue was the human factor. Instead of actionable guidance on how to investigate human failure, you would receive advice that, while informative, would not suggest the actions actually required to protect against the real threat at hand.

Why accurate AI information matters for Microsoft users

GPT-4, the latest model behind ChatGPT and the one on which Microsoft’s artificial intelligence is based, is not aware of current news and information. If you ask it about the Outlook zero-day vulnerability that was patched in the March Office updates, the official ChatGPT interface indicates that it doesn’t know about this recent security issue, generating this response:

“I’m sorry, but as of my knowledge cutoff date of 2021, there is no information available about CVE-2023-23397. This is likely because it is a hypothetical CVE that has not yet been discovered or assigned. It is important to note that CVEs are unique identifiers assigned to publicly disclosed cybersecurity vulnerabilities, so it is unlikely that a CVE would be assigned to a vulnerability in the future without it first being discovered and publicly disclosed. If you have more information about this CVE, please provide it, and I will do my best to assist you.”
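
If you want to reproduce that kind of check programmatically rather than through the chat interface, the sketch below uses the pre-1.0 openai Python package; the prompt wording, model name, and environment-variable handling are illustrative assumptions on my part, not part of the test described above.

    # Minimal sketch: ask GPT-4 about a recent CVE through the OpenAI API.
    # Assumes the pre-1.0 "openai" package and an OPENAI_API_KEY environment variable.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "user",
             "content": "What can you tell me about CVE-2023-23397?"},
        ],
    )

    # With a 2021 training cutoff, the reply cannot describe the March 2023
    # Outlook fix; the model simply has no data on it.
    print(response["choices"][0]["message"]["content"])

The point is not the code but the cutoff: whatever the model returns, it is answering without the March 2023 patch information.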

If you ask the AI in Microsoft’s Bing search engine about this issue, you receive a better response, but one that is still not as precise as it could be. It says:

“CVE-2023-23397 is a critical elevation of privilege vulnerability in Microsoft Outlook on Windows. It is exploited when a threat actor delivers a specially crafted message to a user. A successful exploit of this vulnerability can result in unauthorized access to an organization’s resources. Microsoft has released a patch for this vulnerability and recommends that users apply it immediately. If you think you may have been targeted or compromised by threat actors exploiting CVE-2023-23397, Microsoft has provided guidance for investigating attacks using this vulnerability.”

Where AI security guidance falls short

Here’s where I feel the guidance provided by the AI falls down. If you ask Bing AI how to apply the CVE-2023-23397 patch, it states:

“To apply the patch for CVE-2023-23397, you must install the Outlook security update, regardless of where your mail is hosted (e.g., Exchange Online, Exchange Server, some other platform). If you have a build number equal to or higher than what is shown below, you are good to go. If your build number is lower, then you need to patch via Windows Update. To find your build number, open the application (Outlook, for example) and click on File, Office Account. The build should be towards the middle of your window.”

The platform does not explain that for many people the Outlook patch arrives silently, in the background. Chances are, if they have a Microsoft 365 click-to-run installation of Office, the security update rolls out in the background sometime after the second Tuesday of the month. If you are concerned about systems not receiving click-to-run updates, my recommendation is to use a patch management tool to monitor the versions and build numbers of the Microsoft 365 deployments in your office. An alternative is to deploy a script to workstations that checks, weekly or as needed, whether the Microsoft 365 deployment is up to date.
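
As a rough illustration of that scripted approach, here is a minimal Python sketch that reads the click-to-run build number from the registry and compares it with a minimum build you define; the MINIMUM_BUILD value is a placeholder rather than the actual fixed build for any channel, and the script assumes a Python interpreter is available on the workstation.

    # Minimal sketch: report whether the Microsoft 365 click-to-run build meets
    # a minimum you define. Windows-only. MINIMUM_BUILD is a placeholder --
    # substitute the fixed build for your servicing channel.
    import winreg

    C2R_KEY = r"SOFTWARE\Microsoft\Office\ClickToRun\Configuration"
    MINIMUM_BUILD = (16, 0, 0, 0)  # placeholder, not a real fixed build

    def installed_build():
        """Return the reported Office build as a tuple of ints, or None if not click-to-run."""
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, C2R_KEY) as key:
                version, _ = winreg.QueryValueEx(key, "VersionToReport")
        except OSError:
            # No click-to-run key: likely an older MSI-based install that is
            # patched through Windows Update instead.
            return None
        return tuple(int(part) for part in version.split("."))

    build = installed_build()
    if build is None:
        print("No click-to-run install found; check MSI/Windows Update patching instead.")
    elif build >= MINIMUM_BUILD:
        print("Office build " + ".".join(map(str, build)) + " meets the minimum.")
    else:
        print("Office build " + ".".join(map(str, build)) + " is below the minimum; update it.")

A patch management tool does the same comparison at scale and with far less maintenance, which is why it remains my first recommendation.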

AI is just not expert enough

For Microsoft 365 there is no longer a “patch” to install; rather, the update process runs in the background, silent to the end user. Suddenly the patches are just there. Only if you have an older installation that uses MSI installers do you receive a patch on the second Tuesday of the month. Thus, my concern about the use of AI is that it lacks the exactitude needed for proper security guidance and instead offers general information without sufficiently actionable content. In short, it’s wrong and will not lead to a good outcome.

Artificial intelligence can enhance the best — and the worst — of human behavior. It can provide us with actionable information, or it can draw conclusions from inaccurate assumptions and bad source data. Microsoft’s Security Copilot, which will include AI, has so far merely been announced and has yet to be released. You can rest assured that I’ll be interested to see whether it can gather the best, most up-to-date security guidance and cull out the worst.

Susan Bradley
Contributing Writer

Susan Bradley has been patching since before the Code Red/Nimda days and remembers exactly where she was when SQL Slammer hit (trying to buy something on eBay and wondering why the Internet was so slow). She writes the Patch Watch column for Askwoody.com, is a moderator on the PatchManagement.org listserve, and writes a column of Windows security tips for CSOonline.com. In real life, she’s the IT wrangler at her firm, Tamiyasu, Smith, Horn and Braun, where she manages a fleet of Windows servers, Microsoft 365 deployments, Azure instances, desktops, a few Macs, several iPads, a few Surface devices, several iPhones and tries to keep patches up to date on all of them. In addition, she provides forensic computer investigations for the litigation consulting arm of the firm. She blogs at https://www.askwoody.com/tag/patch-lady-posts/ and is on Twitter at @sbsdiva. She lurks on Twitter and Facebook, so if you are on Facebook with her, she really did read what you posted. She has a SANS/GSEC certification in security and prefers Heavy Duty Reynolds wrap for her tinfoil hat.
