By Michael Hill
UK Editor

6 ways generative AI chatbots and LLMs can enhance cybersecurity

Feature
May 25, 2023 | 8 mins
Application Security | Data and Information Security | Generative AI

Generative AI chatbots and large language models can be a double-edged sword from a risk perspective, but with proper use they can also improve cybersecurity in key ways.

The rapid emergence of OpenAI’s ChatGPT has been one of the biggest stories of the year, with the potential impact of generative AI chatbots and large language models (LLMs) on cybersecurity a key area of discussion. There’s been a lot of chatter about the security risks these new technologies could introduce — from concerns about sharing sensitive business information with advanced self-learning algorithms to malicious actors using them to significantly enhance attacks.

Some countries, US states, and enterprises have ordered bans on the use of generative AI technology such as ChatGPT on data security, protection, and privacy grounds. Clearly, the security risks introduced by generative AI chatbots and LLMs are considerable. However, these same technologies can enhance cybersecurity for businesses in multiple ways, giving security teams a much-needed boost in the fight against cybercriminal activity.

Here are 6 ways generative AI chatbots and LLMs can improve security.

Vulnerability scanning and filtering

Generative AI models can be used to significantly enhance the scanning and filtering of security vulnerabilities, according to a Cloud Security Alliance (CSA) report exploring the cybersecurity implications of LLMs. In the paper, CSA demonstrated that OpenAI’s Codex API is an effective vulnerability scanner for programming languages such as C, C#, Java, and JavaScript. “We can anticipate that LLMs, like those in the Codex family, will become a standard component of future vulnerability scanners,” the paper read. For example, a scanner could be developed to detect and flag insecure code patterns in various languages, helping developers address potential vulnerabilities before they become critical security risks.
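The paper stops short of prescribing an implementation, but the basic pattern is simple to sketch. Below is a minimal Python illustration of asking an OpenAI model to flag insecure patterns in a code snippet. Note that the original Codex API has since been folded into OpenAI’s general-purpose models, so this sketch uses the chat completions endpoint instead; the model name, prompt wording, and C snippet are illustrative assumptions rather than CSA’s actual tooling.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = """
char buf[16];
strcpy(buf, user_input);  /* unbounded copy into a fixed-size buffer */
"""

def scan_for_vulnerabilities(code: str) -> str:
    """Ask the model to flag insecure patterns in the given code."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works here
        messages=[
            {"role": "system",
             "content": "You are a code security reviewer. List insecure "
                        "patterns in the code, with CWE IDs where applicable."},
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content

print(scan_for_vulnerabilities(SNIPPET))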

As for filtering, generative AI models can explain and add valuable context to threat identifiers that might otherwise go unnoticed by human security personnel. For example, T1059.001 — a technique identifier within the MITRE ATT&CK framework — may be reported but unfamiliar to some cybersecurity professionals, prompting the need for a concise explanation. ChatGPT can accurately recognize the string as a MITRE ATT&CK identifier and provide an explanation of the specific issue associated with it, which involves the use of malicious PowerShell scripts, the paper read. It also elaborates on the nature of PowerShell and its potential use in cybersecurity attacks, offering relevant examples.
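At the API level, the filtering use case looks much the same — a minimal sketch, with the prompt wording assumed:

from openai import OpenAI

client = OpenAI()
identifier = "T1059.001"  # the PowerShell sub-technique discussed above
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption, as above
    messages=[{
        "role": "user",
        "content": f"Explain MITRE ATT&CK technique {identifier} in two or "
                   "three sentences, including how defenders typically detect it.",
    }],
)
print(response.choices[0].message.content)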

In May, OX Security announced the launch of OX-GPT, a ChatGPT integration designed to give developers customized, cut-and-paste code fix recommendations, along with explanations of how the code could be exploited by hackers, the possible impact of an attack, and the potential damage to the organization.

Reversing add-ons, analyzing APIs of PE files

Generative AI/LLM technology can be used to help build rules and reverse popular add-ons based on reverse engineering frameworks like IDA and Ghidra, says Matt Fulmer, manager of cyber intelligence engineering at Deep Instinct. “If you’re specific in the ask of what you need and compare it against MITRE ATT&CK tactics, you can then take the result offline and make it better to use it as a defense.”

LLMs can also help analyze how applications communicate, with the ability to analyze the APIs of portable executable (PE) files and tell you what they may be used for, he adds. “This can reduce the time security researchers spend looking through PE files and analyzing API communication within them.”
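As a rough illustration of the idea, the sketch below uses the open-source pefile library to pull a binary’s imported APIs and then asks a model what capabilities they suggest. The sample path, model name, and prompt are assumptions; this is not Deep Instinct’s tooling.

import pefile
from openai import OpenAI

client = OpenAI()

pe = pefile.PE("sample.exe")  # hypothetical binary under analysis
imported_apis = sorted({
    imp.name.decode() if imp.name else f"ordinal_{imp.ordinal}"
    for entry in pe.DIRECTORY_ENTRY_IMPORT
    for imp in entry.imports
})

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption, as in the earlier sketches
    messages=[{
        "role": "user",
        "content": "A Windows PE file imports these APIs: "
                   + ", ".join(imported_apis)
                   + ". What capabilities (networking, injection, persistence, "
                     "etc.) do they suggest?",
    }],
)
print(response.choices[0].message.content)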

Threat hunting queries

Security defenders can enhance efficiency and expedite response times by leveraging ChatGPT and other LLMs to create threat-hunting queries, according to CSA. By generating queries for malware research and detection tools like YARA, ChatGPT assists in swiftly identifying and mitigating potential threats, allowing defenders to focus on critical aspects of their cybersecurity efforts. This capability proves invaluable in maintaining a robust security posture in an ever-evolving threat landscape. Rules can be tailored based on specific requirements and the threats an organization wishes to detect or monitor in its environment.
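In practice, the model’s output still needs to be compiled and tested before it goes anywhere near production. A minimal sketch using the yara-python bindings, with a simplified rule standing in for the kind of draft ChatGPT might produce:

import yara

# Simplified stand-in for an LLM-drafted rule -- review before production use.
RULE = r"""
rule suspicious_powershell_download
{
    strings:
        $ps  = "powershell" nocase
        $dl  = "DownloadString" nocase
        $iex = "IEX" nocase
    condition:
        $ps and ($dl or $iex)
}
"""

rules = yara.compile(source=RULE)                              # compile the rule text
matches = rules.match(data=open("suspect.ps1", "rb").read())   # hypothetical sample
print(matches)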

AI can improve supply chain security

Generative AI models can be used to address supply chain security risks by identifying potential vulnerabilities of vendors. In April, SecurityScorecard announced the launch of a new security ratings platform to do just this through integration with OpenAI’s GPT-4 system and natural language global search. Customers can ask open-ended questions about their business ecosystem, including details about their vendors, and quickly obtain answers to drive risk management decisions, according to the firm. Examples include “find my 10 lowest-rated vendors” or “show me which of my critical vendors were breached in the past year” — questions that SecurityScorecard claims will yield results that allow teams to quickly make risk management decisions.

Detecting generative AI text in attacks

LLMs not only generate text; work is also underway on using them to detect and watermark AI-generated text, a capability that could become a common function of email protection software, according to CSA. Identifying AI-generated text in attacks can help detect phishing emails and polymorphic code, and it is realistic to assume that LLMs could easily detect atypical sender addresses or their corresponding domains, as well as check whether underlying links in text lead to known malicious websites, CSA said.
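The link-checking piece, at least, is straightforward to sketch: pull URLs out of an email body and compare their domains against a blocklist. The blocklist and email text below are made-up placeholders; a real product would use live threat-intelligence feeds alongside an AI-text classifier.

import re
from urllib.parse import urlparse

KNOWN_MALICIOUS = {"payrol1-update.example"}  # placeholder for a live threat feed

EMAIL_BODY = (
    "Your payroll account is locked. "
    "Verify at https://payrol1-update.example/reset within 24 hours."
)

for url in re.findall(r"https?://[^\s\"'<>]+", EMAIL_BODY):
    domain = urlparse(url).hostname
    if domain in KNOWN_MALICIOUS:
        print(f"Flagged: {url} (domain {domain} is on the blocklist)")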

Security code generation and transfer

LLMs like ChatGPT can be used to both generate and transfer security code. CSA cites the example of a phishing campaign that has successfully targeted several employees within a company, potentially exposing their credentials. While it is known which employees have opened the phishing email, it is unclear whether they inadvertently executed the malicious code designed to steal their credentials.

“To investigate this, a Microsoft 365 Defender Advanced Hunting query can be utilized to find the 10 most recent logon events performed by email recipients within 30 minutes after receiving known malicious emails. The query helps to identify any suspicious login activity that may be related to compromised credentials.”

Here, ChatGPT can provide a Microsoft 365 Defender hunting query to check for login attempts by the compromised email accounts, which helps block attackers from the system and clarifies whether users need to change their passwords. It is a good example of how to reduce time to action during cyber incident response.
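For a sense of how this might be automated, the sketch below submits a query through the Microsoft Graph advanced hunting endpoint. The KQL is an illustrative reconstruction of the CSA scenario, not the paper’s verbatim query — table and column names should be checked against your tenant’s schema, and token acquisition is omitted.

import requests

# Illustrative reconstruction: recent logons by recipients of known-malicious
# email, within 30 minutes of receipt. Verify table/column names against your
# tenant's advanced hunting schema before use.
KQL = """
EmailEvents
| where ThreatTypes has "Phish"
| join kind=inner IdentityLogonEvents
    on $left.RecipientEmailAddress == $right.AccountUpn
| where Timestamp1 between (Timestamp .. (Timestamp + 30m))
| top 10 by Timestamp1 desc
"""

resp = requests.post(
    "https://graph.microsoft.com/v1.0/security/runHuntingQuery",
    headers={"Authorization": "Bearer <token>"},  # token acquisition omitted
    json={"Query": KQL},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json().get("results", []):
    print(row)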

Building on the same example, you might have found the right Microsoft 365 Defender hunting query, but your system does not work with the Kusto Query Language (KQL). Instead of searching for the correct example in your desired language, you can ask the model to perform a programming-language style transfer.

“This example illustrates that the underlying Codex models of ChatGPT can take a source code example and generate the example in another programming language. It also simplifies the process for the end user by adding key details to its provided answer and the methodology behind the new creation,” said CSA.
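A hedged sketch of that style transfer, again via the chat API — here translating the hunting query into Splunk’s SPL, an assumed target; the same pattern applies to any query language the model knows:

from openai import OpenAI

client = OpenAI()
kql_query = 'EmailEvents | where ThreatTypes has "Phish" | take 10'  # stand-in for the fuller query above

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption, as in the earlier sketches
    messages=[{
        "role": "user",
        "content": "Translate this Microsoft 365 Defender KQL query into an "
                   "equivalent Splunk SPL search, and note any assumptions "
                   "about field names:\n" + kql_query,
    }],
)
print(response.choices[0].message.content)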

Leaders must ensure the secure use of generative AI chatbots

Like many modern-day advancements, AI and LLMs can amount to a double-edged sword from a risk perspective, so it’s important for leaders to ensure their teams are using offerings safely and securely, says Chaim Mazal, CSO at Gigamon. “Security and legal teams should be collaborating to find the best path forward for their organizations to tap into the capabilities of these technologies without compromising intellectual property or security.”

Generative AI is based on outdated, structured data, so take it as a starting point only when evaluating its use for security and defense, says Fulmer. “For example, if using it for any of the benefits mentioned above, you have it justify its output. Take the output offline and have humans make it better, more accurate, and more actionable.”

Generative AI chatbots/LLMs will ultimately enhance security and defenses naturally over time, but utilizing AI/LLMs to help, not hurt, cybersecurity postures will all come down to internal communication and response, Mazal says. “Generative AI/LLMs can be a means for engaging stakeholders to address security issues across the board in a faster, more efficient way. Leaders must communicate ways to leverage tools to support organizational goals while educating them about the potential threats.”

AI-powered chatbots also need regular updates to remain effective against threats, and human oversight is essential to ensure LLMs function correctly, says Joshua Kaiser, AI technology executive and CEO at Tovie AI. “Additionally, LLMs need contextual understanding to provide accurate responses and catch any security issues and should be tested and evaluated regularly to identify potential weaknesses or vulnerabilities.”

Michael Hill
UK Editor

Michael Hill is the UK editor of CSO Online. He has spent the past five-plus years covering various aspects of the cybersecurity industry, with particular interest in the ever-evolving role of the human-related elements of information security. A keen storyteller with a passion for the publishing process, he enjoys working creatively to produce media that has the biggest possible impact on the audience.
