Upgraded features designed to tackle novel email attacks and increasingly complex malicious communication powered by generative AI, including ChatGPT and other large language models.

Darktrace has announced an upgrade to its Darktrace/Email product with enhanced features that defend organizations from evolving cyberthreats, including generative AI-enabled business email compromise (BEC) and novel social engineering attacks. Among the new capabilities are an AI-employee feedback loop; account takeover protection; insights from endpoint, network, and cloud; and behavioral detections of misdirected emails, the vendor said. The upgrade comes amid growing concern about the ability of generative AI, such as ChatGPT and other large language models (LLMs), to enhance phishing attacks and give threat actors an avenue to craft more sophisticated, targeted campaigns at speed and scale.

"Normal" pattern knowledge key to tackling novel, generative AI email attacks

As part of the Darktrace Cyber AI Loop, Darktrace/Email's new capabilities help it detect attacks as soon as they are launched, the firm said in a press release. That's because it is not trained on what "bad" has historically looked like based on past attacks; instead, it learns the normal patterns of life for each unique organization, according to Darktrace. This approach is key to tackling novel email attacks and linguistically complex malicious communication driven by AI technologies such as ChatGPT and other LLMs. It also enables Darktrace/Email to detect novel email attacks 13 days earlier, on average, than email security tools trained on knowledge of past threats, Darktrace claimed.

With this upgrade, Darktrace Cyber AI Analyst combines anomalous email activity with other data sources, including endpoint, network, cloud, apps, and OT, to automate investigations and incident reporting, Darktrace said.
Through greater context around its discoveries, Darktrace's AI is now capable of more informed decision making, with algorithms providing a detailed picture of "normal" based on multiple perspectives to produce high-fidelity conclusions that are contextualized and actionable, according to the vendor.

Darktrace/Email's new capabilities include:

- Account takeover and email protection in a single product
- Behavioral detections of misdirected emails, preventing intellectual property or confidential information from being sent to the wrong recipient
- An employee-AI loop that leverages insights from individual employees to inform Darktrace's AI and provide real-time, in-context insights and security awareness
- Intelligent mail management for improved productivity against graymail, spam, and newsletters that clutter inboxes
- Optimized workflows and integrations for security teams, including the Darktrace mobile app
- Automated investigations of email incidents across other coverage areas with Darktrace's Cyber AI Analyst

Widespread concern over ChatGPT-enhanced email attacks, malicious activity

Since the launch of ChatGPT by OpenAI last year, there has been widespread debate and concern over the chatbot's ability to make social engineering/phishing attacks more sophisticated, easier to carry out, and more likely to succeed. Darktrace data revealed a 135% increase in novel social engineering attacks across thousands of its active email customers from January to February 2023, corresponding with the mass adoption of ChatGPT. These attacks involved the use of sophisticated linguistic techniques, including increased text volume, punctuation, and sentence length, the firm said. Furthermore, 82% of 6,711 global employees surveyed by Darktrace said they feared attackers could use generative AI to create scam emails that are indistinguishable from genuine communication.
Last week, Europol warned that ChatGPT's ability to draft highly realistic text makes it a useful tool for phishing purposes, while the capability of LLMs to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups. "This capability can be abused at scale to mislead potential victims into placing their trust in the hands of criminal actors," Europol said.

In February, a BlackBerry study of 500 UK IT decision makers revealed that 72% are concerned about ChatGPT's potential to be used for malicious purposes, with most believing that foreign states are already using the chatbot against other nations. Furthermore, 48% of respondents predicted that a successful cyberattack will be credited to ChatGPT within the next 12 months, and 88% said that governments have a responsibility to regulate advanced technologies such as ChatGPT.