Upgraded features designed to tackle novel email attacks and increasingly complex malicious communication powered by generative AI, including ChatGPT and other large language models.

Darktrace has announced an upgrade to its Darktrace/Email product with enhanced features that defend organizations from evolving cyberthreats, including generative AI-driven business email compromise (BEC) and novel social engineering attacks. Among the new capabilities are an AI-employee feedback loop; account takeover protection; insights from endpoint, network, and cloud; and behavioral detection of misdirected emails, the vendor said.

The upgrade comes amid growing concern about the ability of generative AI – such as ChatGPT and other large language models (LLMs) – to enhance phishing email attacks and provide an avenue for threat actors to craft more sophisticated, targeted campaigns at speed and scale.

“Normal” pattern knowledge key to tackling novel, generative AI email attacks

As part of the Darktrace Cyber AI Loop, Darktrace/Email’s new capabilities help it detect attacks as soon as they are launched, the firm said in a press release. That’s because it is not trained on what “bad” has historically looked like based on past attacks, but instead learns the normal patterns of life for each unique organization, according to Darktrace. This approach is key to tackling novel email attacks and linguistically complex malicious communication driven by AI technologies such as ChatGPT and other LLMs. It also enables Darktrace/Email to detect novel email attacks 13 days earlier, on average, than email security tools trained on knowledge of past threats, Darktrace claimed.

With this upgrade, Darktrace Cyber AI Analyst combines anomalous email activity with other data sources – including endpoint, network, cloud, apps, and OT – to automate investigations and incident reporting, Darktrace said.
Through greater context around its discoveries, Darktrace’s AI is now capable of more informed decision-making, with algorithms providing a detailed picture of “normal” based on multiple perspectives to produce high-fidelity conclusions that are contextualized and actionable, according to the vendor.

Darktrace/Email’s new capabilities include:

- Account takeover and email protection in a single product
- Behavioral detection of misdirected emails, preventing intellectual property or confidential information from being sent to the wrong recipient
- An employee-AI loop that leverages insights from individual employees to inform Darktrace’s AI and provide real-time, in-context insights and security awareness
- Intelligent mail management for improved productivity against graymail, spam, and newsletters that clutter inboxes
- Optimized workflows and integrations for security teams, including the Darktrace mobile app
- Automated investigation of email incidents alongside other coverage areas with Darktrace’s Cyber AI Analyst

Widespread concern over ChatGPT-enhanced email attacks, malicious activity

Since the launch of ChatGPT by OpenAI last year, there has been widespread debate and concern over the chatbot’s ability to make social engineering and phishing attacks more sophisticated, easier to carry out, and more likely to succeed. Darktrace data revealed a 135% increase in novel social engineering attacks across thousands of its active email customers from January to February 2023, corresponding with the mass adoption of ChatGPT. These attacks involved the use of sophisticated linguistic techniques, including increased text volume, punctuation, and sentence length, the firm said. Furthermore, 82% of 6,711 global employees surveyed by Darktrace said they feared attackers can use generative AI to create scam emails that are indistinguishable from genuine communication.
Last week, Europol warned that ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing, while the capability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. “This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors,” Europol said.

In February, a BlackBerry study of 500 UK IT decision makers revealed that 72% are concerned about ChatGPT’s potential to be used for malicious purposes, with most believing that foreign states are already using the chatbot against other nations. Furthermore, 48% of respondents predicted that a successful cyberattack will be credited to ChatGPT within the next 12 months, and 88% said that governments have a responsibility to regulate advanced technologies such as ChatGPT.