By Michael Hill, UK Editor

5 ways threat actors can use ChatGPT to enhance attacks

News
Apr 28, 2023 | 6 mins
Artificial Intelligence | Cyberattacks | Threat and Vulnerability Management

New research details how attackers can use AI-driven systems like ChatGPT in different aspects of cyberattacks including reconnaissance, phishing, and developing polymorphic code.

The Cloud Security Alliance (CSA) has revealed five ways malicious actors can use ChatGPT to enhance their attack toolset in a new report exploring the cybersecurity implications of large language models (LLMs). The Security Implications of ChatGPT paper details how threat actors can exploit AI-driven systems in different aspects of cyberattacks including enumeration, foothold assistance, reconnaissance, phishing, and the generation of polymorphic code. By examining these topics, the CSA said it aims to raise awareness of the potential threats and emphasize the need for robust security measures and responsible AI development.

Some sections of the document include brief risk reviews or countermeasure effectiveness ratings to help visualize the current risk levels associated with specific areas and their potential impact on the business.

Adversarial AI attacks and ChatGPT-powered social engineering were cited by SANS Institute cyber experts at the RSA Conference this week as being among the top five most dangerous new attack techniques in use by threat actors.

Improved enumeration to find attack points

ChatGPT-enhanced enumeration to find vulnerabilities is the first attack threat the report covers, rated medium risk, low impact, and high likelihood. “A basic Nmap scan identified port 8500 as open and revealed JRun as the active web server. This information can be used to gain further insights into the network’s security posture and potential vulnerabilities,” the report read.

ChatGPT can be effectively employed to swiftly identify the most prevalent applications associated with specific technologies or platforms, the report said. “This information can aid in understanding potential attack surfaces and vulnerabilities within a given network environment.”
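The report does not publish the exact prompts used, but the workflow is easy to sketch. The following is a minimal, hypothetical illustration, assuming the official openai Python client (v1.x) and an OPENAI_API_KEY in the environment, of how a scan finding like the one above might be handed to the model for follow-up enumeration questions:

```python
# Minimal sketch (not from the report): handing an Nmap finding to ChatGPT
# for follow-up enumeration questions. Assumes the official openai Python
# client (v1.x) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

scan_finding = "Nmap: port 8500/tcp open, service fingerprinted as JRun web server"

response = client.chat.completions.create(
    model="gpt-4",  # model name is illustrative
    messages=[{
        "role": "user",
        "content": (
            f"{scan_finding}\n"
            "What applications most commonly run on this server and port, "
            "and what known vulnerabilities are associated with them?"
        ),
    }],
)
print(response.choices[0].message.content)
```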

Foothold assistance to gain unauthorized access

Foothold assistance refers to the process of helping threat actors establish an initial presence or foothold within a target system or network, with ChatGPT-enhanced foothold assistance rated medium risk, medium impact, and medium likelihood. “This usually involves the exploitation of vulnerabilities or weak points to gain unauthorized access.”

In the context of using AI tools, foothold assistance might involve automating the discovery of vulnerabilities or simplifying the process of exploiting them, making it easier for attackers to gain initial access to their targets. “When requesting ChatGPT to examine vulnerabilities within a code sample of over 100 lines, it accurately pinpointed a file inclusion vulnerability,” according to the report. “Additional inquiries yielded similar outcomes, with the AI successfully detecting issues such as insufficient input validation, hard-coded credentials, and weak password hashing. This highlights ChatGPT’s potential in effectively identifying security flaws in codebases.”
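The report's 100-plus-line sample is not reproduced there; the toy snippet below is written purely for illustration and packs the same flaw classes into a few lines of Python:

```python
# Toy example only, not the report's code sample: a few lines containing
# the flaw classes the CSA says ChatGPT flagged.
import hashlib

DB_PASSWORD = "admin123"  # hard-coded credential

def read_template(name: str) -> str:
    # File inclusion / insufficient input validation: 'name' comes straight
    # from the user, so a value like "../../etc/passwd" escapes the
    # templates directory.
    with open(f"templates/{name}") as f:
        return f.read()

def hash_password(password: str) -> str:
    # Weak password hashing: unsalted MD5 is trivially brute-forced.
    return hashlib.md5(password.encode()).hexdigest()
```

Pasting a snippet like this into ChatGPT with a prompt such as "identify the security vulnerabilities in this code" typically surfaces all three issues.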

Reconnaissance to assess attack targets

Reconnaissance, in the context of malicious cyber activity, refers to the initial phase of gathering information about a target system, network, or organization before launching an attack. This phase helps attackers identify potential vulnerabilities, weak points, and entry points that they can exploit to gain unauthorized access to systems or data. Reconnaissance is typically carried out in three ways – passive, active, and social engineering, the report said.

“Gathering comprehensive data, such as directories of corporate officers, can be a daunting and time-consuming process,” the report said. However, by leveraging ChatGPT, users can pose targeted questions, streamlining and enhancing data collection for various purposes. ChatGPT-enhanced reconnaissance was scored low risk, medium impact, and low likelihood in the report.
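One plausible form of such a targeted query, sketched here as an assumption rather than taken from the report, is asking the model to turn scraped public text (for example, a company leadership page) into a structured officer directory:

```python
# Hypothetical sketch: using ChatGPT to structure scraped public data into
# a directory of corporate officers. Assumes the official openai Python
# client; page_text is a placeholder for scraped leadership-page text.
from openai import OpenAI

client = OpenAI()

page_text = "..."  # scraped "About us" / leadership page text goes here

response = client.chat.completions.create(
    model="gpt-4",  # model name is illustrative
    messages=[{
        "role": "user",
        "content": (
            "List every person mentioned below as JSON objects with "
            f"'name' and 'title' fields:\n\n{page_text}"
        ),
    }],
)
print(response.choices[0].message.content)
```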

More effective phishing lures

With AI-powered tools, threat actors can now effortlessly craft legitimate-looking emails for various purposes, the report said. Issues such as spelling errors and poor grammar are no longer obstacles, making it increasingly challenging to differentiate between genuine and malicious correspondence. ChatGPT-powered phishing was deemed medium risk, low impact, and high likelihood in the report.

“The rapid advancements in AI technology have significantly improved the capabilities of threat actors to create deceptive emails that closely resemble genuine correspondence. The flawless language, contextual relevance, and personalized details within these emails make it increasingly difficult for recipients to recognize them as phishing attempts.”

Develop malicious polymorphic code more easily

Polymorphic code refers to a type of code that can alter itself using a polymorphic engine while maintaining the functionality of its original algorithm. By doing so, polymorphic malware can change its “appearance” (content and signature) to evade detection while still executing its malicious intent, the report read.

ChatGPT can be used to generate polymorphic shellcode, and the same techniques that benefit legitimate programmers can also be exploited by malware authors. “By combining various techniques, for example, two methods for attaching to a process, two approaches for injecting code, and two ways to create new threads, it becomes possible to create eight distinct chains to achieve the same objective. This enables the rapid and efficient generation of numerous malware variations, complicating the detection and mitigation efforts for cybersecurity professionals.” ChatGPT-enhanced polymorphic code creation was rated high risk, high impact, and medium likelihood.
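The arithmetic behind that claim is simple combinatorics: two options at each of three stages yields 2 x 2 x 2 = 8 distinct chains. The sketch below, which uses hypothetical stage labels rather than any actual attack code, just enumerates the combinations to show how variant counts multiply:

```python
# Combinatorics sketch: two options at each of three stages yields
# 2 * 2 * 2 = 8 distinct technique chains. Stage names are hypothetical
# labels for illustration, not working attack code.
from itertools import product

attach_methods = ["attach_method_1", "attach_method_2"]
inject_methods = ["inject_method_1", "inject_method_2"]
thread_methods = ["thread_method_1", "thread_method_2"]

chains = list(product(attach_methods, inject_methods, thread_methods))
for chain in chains:
    print(" -> ".join(chain))
print(f"total: {len(chains)} distinct chains")  # prints: total: 8 distinct chains
```

Adding a third option at any one stage would raise the count to 12, which is why the report warns that variant generation scales so quickly.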

Market adoption of AI will “parallel cloud adoption” trends

It is difficult to overstate the impact of the current viral adoption of AI and its long-term ramifications, commented Jim Reavis, CEO and co-founder of the CSA. “The essential characteristics of GPT, LLMs, and machine learning, combined with pervasive infrastructure to deliver these capabilities as a service, are sure to create large-scale changes quite soon.”

It is CSA’s expectation that market adoption of AI will parallel cloud adoption trends and primarily use the cloud delivery model, Reavis added. “From the standpoint of a typical enterprise today, they must perform security assurance over a handful of cloud infrastructure providers and thousands of SaaS providers, the latter being the larger pain point. It is incumbent upon us to develop and execute upon a roadmap to extend and/or create new control frameworks, certification capabilities, and research artifacts to smooth the transition to cloud-enabled AI.”