
Skyhawk adds ChatGPT functions to enhance cloud threat detection, incident discovery

News Analysis
Mar 29, 2023 | 4 mins
Artificial Intelligence | Cloud Security | Machine Learning

Cloud threat detection and response vendor Skyhawk has applied ChatGPT features to its platform in two distinct ways: earlier detection of malicious activity and explainability of attacks as they progress.


Cloud threat detection and response (CDR) vendor Skyhawk has announced the incorporation of ChatGPT functionality in its offering to enhance cloud threat detection and security incident discovery. The firm has applied ChatGPT features to its platform in two distinct ways – earlier detection of malicious activity (Threat Detector) and explainability of attacks as they progress (Security Advisor), it said.

Skyhawk said the performance gain achieved by integrating the large language model (LLM) behind ChatGPT has been significant: it claims its platform produced alerts earlier in 78% of cases when the Threat Detector and Security Advisor ChatGPT scoring functionality was added. The new capabilities are generally available to Skyhawk customers at no additional charge. The release comes as the furor surrounding ChatGPT and its potential impact on cybersecurity continues to make headlines, with Europol the latest to warn about the risks of ChatGPT-enhanced phishing and cybercrime.

ChatGPT features improve threat score confidence, flag anomalous behaviors earlier

The Threat Detector feature uses the ChatGPT API — trained on millions of security data points from across the web — to augment Skyhawk’s existing threat-scoring mechanisms, the firm said. These are based on proprietary machine learning technologies that use malicious behavior indicators (MBIs) to assign alert scores to detected threats. Adding ChatGPT to the scoring system provides an additional parameter that improves the confidence of a given score and enables the platform to alert to anomalous behaviors earlier, Skyhawk added.
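Skyhawk has not published details of its scoring logic, but a minimal sketch of the general idea might look like the following Python, where the Activity class, the llm_threat_score stub, the weighting, and the threshold are all illustrative assumptions rather than the vendor's implementation:

# Hypothetical sketch: an LLM verdict as one extra parameter alongside a
# proprietary MBI-based score. All names, weights, and thresholds are
# illustrative assumptions, not Skyhawk's actual implementation.
from dataclasses import dataclass

@dataclass
class Activity:
    description: str   # e.g. "AWS API failure for role 'deploy'"
    mbi_score: float   # 0.0-1.0, from ML over malicious behavior indicators

def llm_threat_score(activity: Activity) -> float:
    # Stub: a real system would send the activity context to a model API
    # and parse a numeric suspicion rating out of the response.
    return 0.8

def combined_score(activity: Activity, llm_weight: float = 0.4) -> float:
    # The LLM verdict raises or lowers confidence in the MBI score, letting
    # borderline events cross the alert threshold earlier in a sequence.
    return (1 - llm_weight) * activity.mbi_score + llm_weight * llm_threat_score(activity)

ALERT_THRESHOLD = 0.6
event = Activity("AWS API failure for role 'deploy'", mbi_score=0.5)
if combined_score(event) >= ALERT_THRESHOLD:   # 0.62 with the stub verdict
    print(f"alert: {event.description}")

In this toy version, an event whose MBI score alone (0.5) sits below the alert threshold is escalated once the LLM verdict is blended in, which mirrors the earlier-alerting behavior described below.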

In a real example, Threat Detector was able to signal an alert before a user performed a risky data extraction, Chen Burshan, CEO of Skyhawk Security, tells CSO. “GPT raised the flag after the very first activity in the sequence [AWS API failure], which means that we were able to avoid the data extraction by alerting to this much earlier.” In this scenario, AWS API failure is something that, while malicious, would not typically be flagged as harmful — most security products will either not alert to this or send an alert that would be written off as something that is not necessarily threatening, Burshan says. “GPT, together with the MBI for this activity, gave us the confidence to alert the customer that this was a true alert that could cause a potential threat (what we have coined as Realert),” he adds.

With Security Advisor, ChatGPT functionality explains, in plain language, the steps of attack sequences found by the platform, Burshan says. The textual explanations appear in a new tab and help security teams understand incidents in language that is more accessible and easier to understand, according to Burshan. “For example, if there is an event called ‘use of ssm:GetParameter’ in step two of the attack sequence, ChatGPT helps to explain it more clearly: ‘This API allows users to retrieve sensitive information stored in the AWS Systems Manager Parameter Store…’ and then goes on to explain how that action was performed,” he says.
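Skyhawk has not shared how Security Advisor is implemented, but a minimal sketch of generating that kind of plain-language explanation might look like this, assuming the OpenAI Python SDK and an illustrative prompt (the explain_attack_step wrapper is hypothetical):

# Hypothetical sketch: asking an LLM to explain one step of an attack
# sequence in analyst-friendly language. Prompt wording is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_attack_step(step: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a cloud security assistant. Explain attack "
                        "sequence events in plain language for a SOC analyst."},
            {"role": "user", "content": f"Explain this event: {step}"},
        ],
    )
    return resp.choices[0].message.content

print(explain_attack_step("use of ssm:GetParameter"))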

ChatGPT “not always accurate” when analyzing code vulnerabilities

In a recent piece of research, Trustwave SpiderLabs tested ChatGPT’s ability to perform basic static code analysis on vulnerable code snippets. The three pieces of vulnerable code it tested were examples of a simple buffer overflow, DOM-based cross-site scripting, and code execution in Discourse’s AWS notification webhook handler. At first glance, the responses ChatGPT delivered were “astounding,” SpiderLabs said. However, after digging a little deeper, SpiderLabs found that the responses ChatGPT delivers are not always accurate.

“ChatGPT demonstrates greater contextual awareness and is able to generate exploits that cover a more comprehensive analysis of security risks. The biggest flaw when using ChatGPT for this type of analysis is that it is incapable of interpreting the human thought process behind the code,” the firm wrote. For the best results, ChatGPT will need more user input to elicit a contextualized response detailing what is required to illustrate the code’s purpose, it added.
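The kind of test SpiderLabs describes is easy to reproduce in outline. The sketch below sends a classic strcpy buffer overflow, one of the three snippet categories tested, to the same chat API for basic static analysis; the prompt wording is an assumption, not SpiderLabs’ actual methodology:

# Hypothetical sketch of an LLM static-analysis probe. The vulnerable C
# snippet is the classic unbounded strcpy; the prompt is illustrative.
from openai import OpenAI

client = OpenAI()

VULNERABLE_SNIPPET = """
#include <string.h>
void copy_input(char *input) {
    char buf[16];
    strcpy(buf, input);  /* no bounds check: overflows if input > 15 bytes */
}
"""

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Perform basic static analysis on this C code and list "
                   "any vulnerabilities:\n" + VULNERABLE_SNIPPET,
    }],
)
print(resp.choices[0].message.content)

As the researchers’ findings suggest, a bare prompt like this can miss the point of the code; adding a description of what the snippet is supposed to do gives the model the context it cannot infer on its own.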

ChatGPT/LLM-enhanced threat detection to become a security market trend

ChatGPT/LLM-enhanced security threat detection is likely to become a security market trend as vendors look to make their technologies smarter, Philip Harris, research director at IDC, tells CSO. “I think we’re going to start seeing some very interesting things happening soon along the lines of escalating the race between detecting and preventing malware from getting into organizations and the malware actually doing a better job of getting into organizations [as a result of nefarious use of ChatGPT by cybercriminals].” The concern is the extent to which potentially sensitive information/intellectual property is fed into ChatGPT, Harris says. “What confidential or secret information/intellectual property goes back to ChatGPT? Who else gets access to it, and who’s looking at it? That becomes a very, very big concern for me.”

Michael Hill
UK Editor

Michael Hill is the UK editor of CSO Online. He has spent the past five-plus years covering various aspects of the cybersecurity industry, with particular interest in the ever-evolving role of the human-related elements of information security. A keen storyteller with a passion for the publishing process, he enjoys working creatively to produce media that has the biggest possible impact on the audience.
