The AI-based chatbot is allowing bad actors with absolutely no coding experience to develop malware.

Since OpenAI released ChatGPT in late November, many security experts have predicted it would only be a matter of time before cybercriminals began using the AI chatbot for writing malware and enabling other nefarious activities. Just weeks later, it looks like that time is already here.

In fact, researchers at Check Point Research (CPR) have reported spotting at least three instances in which black hat hackers demonstrated, in underground forums, how they had leveraged ChatGPT's AI smarts for malicious purposes.

By way of background, ChatGPT is an AI-powered prototype chatbot designed to help in a wide range of use cases, including code development and debugging. One of its main attractions is that users can interact with the chatbot conversationally and get assistance with everything from writing software and understanding complex topics to drafting essays and emails, improving customer service, and testing different business or market scenarios.

But it can also be used for darker purposes.

From Writing Malware to Creating a Dark Web Marketplace

In one instance, a malware author disclosed on a forum used by other cybercriminals how he was experimenting with ChatGPT to see whether he could recreate known malware strains and techniques.

As one example of his effort, the individual shared the code for a Python-based information stealer he developed using ChatGPT that can search for, copy, and exfiltrate 12 common file types, such as Office documents, PDFs, and images, from an infected system. The same malware author also showed how he had used ChatGPT to write Java code for downloading the PuTTY SSH and Telnet client and running it covertly on a system via PowerShell.

On Dec. 21, a threat actor using the handle USDoD posted a Python script he generated with the chatbot for encrypting and decrypting data using the Blowfish and Twofish cryptographic algorithms. CPR researchers found that though the code could be used for entirely benign purposes, a threat actor could easily tweak it so it would run on a system without any user interaction — making it ransomware in the process. Unlike the author of the information stealer, USDoD appeared to have very limited technical skills and in fact claimed that the Python script he generated with ChatGPT was the very first script he had ever created, CPR said.
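
For context, the benign form of such a script is short and unremarkable. The following is a minimal sketch, not USDoD's actual code (which CPR did not publish), using the Blowfish implementation in the pycryptodome library; Twofish is not included in pycryptodome and would require a separate package.

```python
# Illustrative Blowfish encrypt/decrypt sketch (pip install pycryptodome).
# A hypothetical reconstruction for illustration, not the actor's script.
from Crypto.Cipher import Blowfish
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

BLOCK = Blowfish.block_size  # Blowfish uses 8-byte blocks

def encrypt(data: bytes, key: bytes) -> bytes:
    cipher = Blowfish.new(key, Blowfish.MODE_CBC)
    # Prepend the random IV so decrypt() can rebuild the cipher state
    return cipher.iv + cipher.encrypt(pad(data, BLOCK))

def decrypt(blob: bytes, key: bytes) -> bytes:
    iv, body = blob[:BLOCK], blob[BLOCK:]
    cipher = Blowfish.new(key, Blowfish.MODE_CBC, iv=iv)
    return unpad(cipher.decrypt(body), BLOCK)

if __name__ == "__main__":
    key = get_random_bytes(16)  # Blowfish accepts keys of 4 to 56 bytes
    secret = encrypt(b"example data", key)
    assert decrypt(secret, key) == b"example data"
```

As CPR's point underscores, the code itself is entirely benign; what matters is how and where such encryption routines are invoked.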

In the third instance, CPR researchers found a cybercriminal discussing how he had used ChatGPT to create an entirely automated Dark Web marketplace for trading stolen bank account and payment card data, malware tools, drugs, ammunition, and a variety of other illicit goods.

"To illustrate how to use ChatGPT for these purposes, the cybercriminal published a piece of code that uses third-party API to get up-to-date cryptocurrency (Monero, Bitcoin, and [Ethereum]) prices as part of the Dark Web market payment system," the security vendor noted.

No Experience Needed

Concerns over threat actors abusing ChatGPT have been rife ever since OpenAI released the AI tool in November, with many security researchers perceiving the chatbot as significantly lowering the bar for writing malware.

Sergey Shykevich, threat intelligence group manager at Check Point, reiterates that with ChatGPT, a malicious actor needs no coding experience to write malware: "You should just know what functionality the malware — or any program — should have. ChatGPT will write the code for you that will execute the required functionality."

Thus, "the short-term concern is definitely about ChatGPT allowing low-skilled cybercriminals to develop malware," Shykevich says. "In the longer term, I assume that also more sophisticated cybercriminals will adopt ChatGPT to improve the efficiency of their activity, or to address different gaps they may have."

From an attacker's perspective, code-generating AI systems allow malicious actors to easily bridge any skills gap they might have by serving as a sort of translator between languages, added Brad Hong, customer success manager at Horizon3ai. Such tools provide an on-demand means of creating templates of code relevant to an attacker's objectives and cut down on the need for them to search through developer sites such as Stack Overflow and Git, Hong said in an emailed statement to Dark Reading.

Even prior to its discovery of threat actors abusing ChatGPT, Check Point — like some other security vendors — showed how adversaries could leverage the chatbot in malicious activities. In a Dec. 19 blog, the security vendor described how its researchers created a very plausible-sounding phishing email merely by asking ChatGPT to write one that appeared to come from a fictional web hosting service. The researchers also demonstrated how they got ChatGPT to write VBS code they could paste into an Excel workbook for downloading an executable from a remote URL.

The goal of the exercise was to demonstrate how attackers could abuse artificial intelligence models such as ChatGPT to create a full infection chain right from the initial spear-phishing email to running a reverse shell on affected systems.

Making It Harder for Cybercriminals

OpenAI and other developers of similar tools have put in filters and controls — and are constantly improving them — to try to limit misuse of their technologies. And at least for the moment, the AI tools remain glitchy and prone to what many researchers have described as flat-out mistakes on occasion, which could thwart some malicious efforts. Even so, the potential for misuse of these technologies remains large over the long term, many have predicted.

To make it harder for criminals to misuse the technologies, developers will need to train and improve their AI engines to identify requests that can be used in a malicious way, Shykevich says. The other option is to implement authentication and authorization requirements in order to use the OpenAI engine, he says. Even something similar to what online financial institutions and payment systems currently use would be sufficient, he notes.

About the Author

Jai Vijayan, Contributing Writer

Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication. Over the course of his 20-year career at Computerworld, Jai also covered a variety of other technology topics, including big data, Hadoop, Internet of Things, e-voting, and data analytics. Prior to Computerworld, Jai covered technology issues for The Economic Times in Bangalore, India. Jai has a Master's degree in Statistics and lives in Naperville, Ill.
