By mhill, UK Editor

Darktrace/Email upgrade enhances generative AI email attack defense

News
Apr 03, 2023 | 4 mins
Artificial Intelligence | Email Security | Generative AI

Upgraded features are designed to tackle novel email attacks and increasingly complex malicious communications powered by generative AI, including ChatGPT and other large language models.

[Image: Conceptual image of cloud-based email deployment. Credit: Oatawa / Shutterstock]

Darktrace has announced a new upgrade to its Darktrace/Email product with enhanced features that defend organizations from evolving cyberthreats, including generative AI-enabled business email compromise (BEC) and novel social engineering attacks. Among the new capabilities are an AI-employee feedback loop; account takeover protection; insights from endpoint, network, and cloud; and behavioral detections of misdirected emails, the vendor said. The upgrade comes amid growing concern about the ability of generative AI – such as ChatGPT and other large language models (LLMs) – to enhance phishing email attacks and give threat actors an avenue to craft more sophisticated, targeted campaigns at speed and scale.

“Normal” pattern knowledge key to tackling novel, generative AI email attacks

As part of the Darktrace Cyber AI Loop, Darktrace/Email’s new capabilities help it detect attacks as soon as they are launched, the firm said in a press release. That’s because it is not trained on what “bad” historically looks like based on past attacks, but instead learns the normal patterns of life for each unique organization, according to Darktrace. This feature is key to tackling novel email attacks and linguistically complex malicious communication driven by AI technologies like ChatGPT and LLMs. It also enables Darktrace/Email to detect novel email attacks 13 days earlier (on average) than email security tools that are trained on knowledge of past threats, Darktrace claimed.
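Darktrace does not publish its detection algorithms, but the general idea it describes – flagging deviations from a learned per-organization baseline rather than matching signatures of past attacks – can be illustrated with a toy anomaly score. Everything below (the feature, the threshold, the function name) is a hypothetical sketch, not Darktrace's actual method:

```python
from statistics import mean, stdev

def anomaly_score(history: list[float], observed: float) -> float:
    """Toy z-score: how far a new observation sits from a sender's
    learned baseline (higher = more unusual). Purely illustrative."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return abs(observed - mu) / sigma

# Hypothetical per-sender baseline: links per email in recent messages.
links_per_email = [1, 0, 2, 1, 1, 0, 2, 1]

# A sudden burst of links scores far outside the baseline...
score = anomaly_score(links_per_email, 9)
flagged = score > 3.0  # illustrative threshold, not a real product setting
```

The point of a baseline-driven approach is that an attack pattern never seen before can still stand out, because the comparison is against the organization's own "normal" rather than a catalog of known-bad indicators.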

With this upgrade, Darktrace Cyber AI Analyst combines anomalous email activity with other data sources including endpoint, network, cloud, apps, and OT to automate investigations and incident reporting, Darktrace said. Through greater context around its discoveries, Darktrace’s AI is now capable of more informed decision making, with algorithms providing a detailed picture of “normal” based on multiple perspectives to produce high-fidelity conclusions that are contextualized and actionable, according to the vendor.

Darktrace/Email’s new capabilities include:

  • Account takeover and email protection in a single product
  • Behavioral detections of misdirected emails, preventing intellectual property or confidential information from being sent to the wrong recipient
  • Employee-AI loop that leverages insights from individual employees to inform Darktrace’s AI to provide real-time, in-context insights and security awareness
  • Intelligent mail management for improved productivity against graymail, spam, and newsletters that clutter inboxes
  • Optimized workflows and integrations for security teams, including the Darktrace mobile app
  • Automated investigations of email incidents alongside other coverage areas via Darktrace’s Cyber AI Analyst

Widespread concern over ChatGPT-enhanced email attacks, malicious activity

Since OpenAI launched ChatGPT last year, there has been widespread debate and concern over the chatbot’s ability to make social engineering/phishing attacks more sophisticated, easier to carry out, and more likely to succeed. Darktrace data revealed a 135% increase in novel social engineering attacks across thousands of its active email customers from January to February 2023, corresponding with the mass adoption of ChatGPT.

These attacks involved the use of sophisticated linguistic techniques including increased text volume, punctuation, and sentence length, the firm said. Furthermore, 82% of 6,711 global employees surveyed by Darktrace said they were fearful that attackers can use generative AI to create scam emails that are indistinguishable from genuine communication.
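The surface-level linguistic markers Darktrace cites – text volume, punctuation, and sentence length – are simple to compute. The sketch below is a hypothetical illustration of such stylometric features, not Darktrace's actual feature set:

```python
import re

def linguistic_features(text: str) -> dict:
    """Compute simple stylometric features of an email body:
    length, punctuation density, and average sentence length."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    punct = sum(ch in ".,;:!?" for ch in text)
    return {
        "chars": len(text),
        "punct_per_100_chars": 100 * punct / max(len(text), 1),
        "avg_sentence_words": len(words) / max(len(sentences), 1),
    }

features = linguistic_features(
    "Dear colleague, please review the attached invoice; "
    "payment is due today. Let me know if you have questions."
)
```

Longer, more heavily punctuated messages with fuller sentences are the opposite of the terse, error-ridden phishing emails defenders were historically trained to spot, which is why a rise in these features alongside ChatGPT's adoption drew attention.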

Last week, Europol warned that ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing purposes, while the capability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. “This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors,” Europol said.

In February, a BlackBerry study of 500 UK IT decision makers revealed that 72% are concerned by ChatGPT’s potential to be used for malicious purposes, with most believing that foreign states are already using the chatbot against other nations. Furthermore, 48% of respondents predicted that a successful cyberattack will be credited to ChatGPT within the next 12 months, with 88% stating that governments have a responsibility to regulate advanced technologies such as ChatGPT.