SlashNext Employs Generative AI to Combat Cybersecurity Threats

SlashNext today launched a platform that uses generative artificial intelligence (AI) to thwart business email compromise (BEC), supply chain attacks, executive impersonation and financial fraud.

SlashNext CEO Patrick Harr said the Generative HumanAI platform combines data augmentation and cloning technologies to assess a core threat, then employs a generative AI engine developed by SlashNext to spawn thousands of variants of that threat on which to train itself.
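The article does not describe how SlashNext's augmentation works internally. As a purely hypothetical illustration of the general idea of spawning many variants of a seed threat for training, the sketch below expands a phishing template by substituting alternative phrasings into each slot; the template, slot names and wording are all invented for this example.

```python
# Hypothetical sketch of data augmentation for threat training.
# This is NOT SlashNext's actual pipeline; all names here are illustrative.
from itertools import product

def augment(template, slots):
    """Expand a seed phishing template into many variants by
    substituting every combination of phrasings into its slots."""
    keys = list(slots)
    variants = []
    for combo in product(*(slots[k] for k in keys)):
        variants.append(template.format(**dict(zip(keys, combo))))
    return variants

# A seed BEC-style lure with three variable slots.
seed = "{greeting}, {ask} the attached invoice {deadline}."
slots = {
    "greeting": ["Hi", "Hello", "Dear colleague"],
    "ask": ["please pay", "kindly settle", "process"],
    "deadline": ["today", "by end of day", "immediately"],
}

variants = augment(seed, slots)  # 3 * 3 * 3 = 27 training examples
```

Even this toy expansion shows why detection is hard: each of the 27 messages reads plausibly, yet no two are string-identical, which defeats naive signature matching.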

The goal is to use generative AI to thwart the attacks cybercriminals are expected to launch with their own generative AI tools, which enable them to create waves of phishing attacks that existing cybersecurity platforms struggle to detect, added Harr.

It’s already been shown how generative AI platforms, in response to prompts based on legitimate messages, can generate voice, text and images that appear to have come from a human. The Generative HumanAI platform creates a baseline of known-good communication patterns and writing styles for each employee and supplier, then flags communications and conversational styles that deviate from that baseline. It also identifies how threat actors play on human emotions, for example by sending “Urgent!” requests designed to scare users into quickly taking the wrong action. HumanAI simulates those same emotions and behaviors to detect such messages.
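The baseline-then-flag approach can be sketched in miniature. The toy code below, which is an assumption-laden simplification and not SlashNext's detector, builds a per-sender profile from one simple style feature (average sentence length) and an urgency-cue count, then flags messages that drift far from the profile or lean heavily on urgency language; the feature set, cue list and thresholds are all invented for illustration.

```python
# Hypothetical sketch of baseline-based style anomaly detection.
# Features, cue words and thresholds are illustrative, not SlashNext's.
import re

URGENCY_CUES = {"urgent", "immediately", "asap", "wire", "verify"}

def style_features(text):
    """Extract simple writing-style features from a message."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    avg_sentence_len = len(words) / max(len(sentences), 1)
    urgency = sum(w in URGENCY_CUES for w in words)
    return {"avg_sentence_len": avg_sentence_len, "urgency": urgency}

def is_anomalous(baseline, message, len_tolerance=0.5, urgency_threshold=2):
    """Flag a message whose style deviates sharply from the sender's
    baseline, or that leans on urgency cues to pressure the recipient."""
    feats = style_features(message)
    drift = abs(feats["avg_sentence_len"] - baseline["avg_sentence_len"])
    style_drift = drift > len_tolerance * baseline["avg_sentence_len"]
    too_urgent = feats["urgency"] >= urgency_threshold
    return style_drift or too_urgent

# Baseline learned from a known-good message from this sender.
baseline = style_features(
    "Hi team, please find attached the quarterly report. Let me know if "
    "you have questions about the numbers before Friday's meeting."
)
```

A real system would aggregate many messages per sender and far richer features, but the shape is the same: model the known-good pattern, then score each new message against it.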

SlashNext has developed a portfolio of software-as-a-service (SaaS) platforms infused with a range of AI technologies, including computer vision, machine learning algorithms, relationship graphs and deep contextualization, to thwart multi-channel attacks. Those capabilities are made accessible via a natural language processing (NLP) engine. In total, SlashNext analyzes 700,000 new threats per day using virtual sandbox crawlers, with results tracked in a database.

The potential for generative AI platforms to wreak havoc is already sparking concern among cybersecurity professionals. Platforms such as ChatGPT currently restrict how they can be employed, but in time other platforms are likely to emerge, operated by entities or even nation-states with fewer scruples about how AI is applied. The only way to fight AI platforms as that tectonic shift occurs is with another AI platform, said Harr.

One way or another, cybersecurity professionals should assume that phishing attacks will soon increase in both volume and sophistication. Many digital processes will have to be reevaluated simply because the level of trust they currently assume may no longer be reasonable in an era when email can be easily compromised.

In the meantime, cybersecurity professionals should start tracking how generative AI platforms are evolving. The pace at which these AI platforms are gaining additional image and video capabilities is accelerating. Cybersecurity professionals are now locked in an AI arms race they can’t afford to lose. The fundamental challenge, of course, is that most organizations lack the ability to develop the AI platforms required to combat these threats, so most will have to rely on providers with the resources needed to build and maintain them.

Michael Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
