Upgraded features designed to tackle novel email attacks and increasingly complex malicious communication powered by generative AI, including ChatGPT and other large language models.

Credit: Oatawa / Shutterstock

Darktrace has announced an upgrade to its Darktrace/Email product with enhanced features that defend organizations from evolving cyberthreats, including generative AI-driven business email compromise (BEC) and novel social engineering attacks. Among the new capabilities are an AI-employee feedback loop; account takeover protection; insights from endpoint, network, and cloud; and behavioral detections of misdirected emails, the vendor said. The upgrade comes amid growing concern about the ability of generative AI – such as ChatGPT and other large language models (LLMs) – to enhance phishing attacks and give threat actors an avenue to craft more sophisticated, targeted campaigns at speed and scale.

"Normal" pattern knowledge key to tackling novel, generative AI email attacks

As part of the Darktrace Cyber AI Loop, Darktrace/Email's new capabilities help it detect attacks as soon as they are launched, the firm said in a press release. That is because it is not trained on what "bad" has historically looked like based on past attacks, but instead learns the normal patterns of life for each unique organization, according to Darktrace. This approach is key to tackling novel email attacks and linguistically complex malicious communication driven by AI technologies such as ChatGPT and other LLMs. It also enables Darktrace/Email to detect novel email attacks 13 days earlier, on average, than email security tools trained on knowledge of past threats, Darktrace claimed.

With this upgrade, Darktrace Cyber AI Analyst combines anomalous email activity with other data sources – including endpoint, network, cloud, apps, and OT – to automate investigations and incident reporting, Darktrace said.
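To illustrate the general idea of baselining "normal" behavior rather than matching known-bad signatures, here is a minimal, hypothetical sketch of per-sender anomaly scoring on email metadata. Darktrace's actual models are proprietary and far more sophisticated; the class, feature choices, and scoring here are illustrative assumptions only.

```python
from collections import defaultdict

class SenderBaseline:
    """Toy per-sender baseline: records which recipients and sending hours
    are typical for each sender, then flags emails that deviate.
    Illustrative only; not Darktrace's actual method."""

    def __init__(self):
        self.recipients = defaultdict(set)  # sender -> recipients seen before
        self.hours = defaultdict(set)       # sender -> hours-of-day seen before

    def observe(self, sender: str, recipient: str, hour: int) -> None:
        # Update the baseline with one legitimate, observed email.
        self.recipients[sender].add(recipient)
        self.hours[sender].add(hour)

    def anomaly_score(self, sender: str, recipient: str, hour: int) -> float:
        # 0.0 = fully consistent with this sender's history; 1.0 = fully novel.
        score = 0.0
        if recipient not in self.recipients[sender]:
            score += 0.5
        if hour not in self.hours[sender]:
            score += 0.5
        return score

baseline = SenderBaseline()
# Learn a month of routine traffic: alice mails bob at 10:00 each day.
for _ in range(30):
    baseline.observe("alice@corp.example", "bob@corp.example", 10)

print(baseline.anomaly_score("alice@corp.example", "bob@corp.example", 10))      # 0.0
print(baseline.anomaly_score("alice@corp.example", "attacker@evil.example", 3))  # 1.0
```

The point of this design is that nothing about the anomalous email needs to match a previously seen attack: it stands out purely because it breaks the learned pattern, which is what allows novel, AI-generated lures to be caught.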
Through greater context around its discoveries, Darktrace's AI is now capable of more informed decision-making, with algorithms providing a detailed picture of "normal" based on multiple perspectives to produce high-fidelity conclusions that are contextualized and actionable, according to the vendor.

Darktrace/Email's new capabilities include:

- Account takeover and email protection in a single product
- Behavioral detections of misdirected emails, preventing intellectual property or confidential information from being sent to the wrong recipient
- An employee-AI loop that leverages insights from individual employees to inform Darktrace's AI and provide real-time, in-context insights and security awareness
- Intelligent mail management for improved productivity against graymail, spam, and newsletters that clutter inboxes
- Optimized workflows and integrations for security teams, including the Darktrace mobile app
- Automated investigations of email incidents across other coverage areas with Darktrace's Cyber AI Analyst

Widespread concern over ChatGPT-enhanced email attacks, malicious activity

Since the launch of ChatGPT by OpenAI last year, there has been widespread debate and concern over the chatbot's ability to make social engineering/phishing attacks more sophisticated, easier to carry out, and more likely to succeed. Darktrace data revealed a 135% increase in novel social engineering attacks across thousands of its active email customers from January to February 2023, corresponding with the mass adoption of ChatGPT. These attacks involved sophisticated linguistic techniques, including increased text volume, punctuation, and sentence length, the firm said. Furthermore, 82% of 6,711 global employees surveyed by Darktrace said they fear attackers can use generative AI to create scam emails that are indistinguishable from genuine communication.
Last week, Europol warned that ChatGPT's ability to draft highly realistic text makes it a useful tool for phishing purposes, while the capability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. "This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors," Europol said.

In February, a BlackBerry study of 500 UK IT decision makers revealed that 72% are concerned by ChatGPT's potential to be used for malicious purposes, with most believing that foreign states are already using the chatbot against other nations. Furthermore, 48% of respondents predicted that a successful cyberattack will be credited to ChatGPT within the next 12 months, and 88% said governments have a responsibility to regulate advanced technologies such as ChatGPT.