How AI is Encouraging Targeted Phishing Attacks

As 2023 nears its conclusion, it would be hard to ignore the influence that artificial intelligence (AI) has had on various industries worldwide. Looking back at the year, to say that AI technology has proven to be a game-changer would be a huge understatement.

However, while AI has paved the way for breakthroughs, efficiency upgrades and many other positive changes for companies, it has unfortunately also fueled a potent new strain of cybercrime. This short guide looks at the influence AI technology has on orchestrated, targeted phishing attacks (nothing new to experienced IT and cybersecurity professionals) and how the technology adds to their ferocity and sophistication.

Companies today face evolving challenges from AI-led attacks and malicious apps using polymorphic malware. By understanding how AI influences phishing attacks and helps them evade detection, organizations can learn to defend against AI-led methods and be better equipped to preserve data, reputations and finances online.

How Cybercriminals Use AI in Phishing Attacks

Phishing attacks have long been a thorn in the side of cybersecurity professionals and teams, with an estimated 3.4 billion phishing emails sent every day. These fraudulent yet seemingly anodyne requests, usually delivered by email or SMS, attempt to deceive users into divulging sensitive information or handing over login credentials to a 'sender' who, in the eyes of users, appears authentic. Traditionally, phishing attacks relied on tried-and-tested social engineering techniques that preyed on human emotions and behaviors.

It can be easy to dismiss phishing attacks, given how crude some of them have been. However, many fraudulent emails and texts have evolved from poorly worded messages riddled with grammar and spelling errors into sophisticated lures that can fool even the most vigilant recipient, and AI is accelerating that evolution.

Cybercriminals are leveraging AI to conduct highly targeted and incredibly convincing phishing campaigns that prove increasingly difficult to recognize, much less contain. By generating highly personalized, contextually relevant messages that mimic the writing style of trusted brands, criminals can bypass traditional security measures with relative ease. What’s more, AI-powered phishing threatens to undermine many of the telltale signals people rely on to avoid being duped.

If 2024's phishing trends are anything to go by, organizations have a legal and moral duty to reinforce their infrastructure and be as cyber-ready as possible. They must continually re-evaluate their security posture, conduct fresh risk assessments with renewed vigor, and deploy enterprise-grade measures like penetration testing to uncover potential vulnerabilities before they are exploited.

How AI Makes Phishing More Dangerous

Most AI technology relies on natural language generation (NLG) techniques and deep learning. AI tools are, by their very nature, programmed to improve and deliver more relevant outputs with repeated use. Unfortunately, there is no built-in mechanism within an AI tool that recognizes a user request as unlawful or fraudulent.

So, with that in mind, how can cybercriminals achieve a proverbial edge when executing malicious acts with the help of AI?

1. Increased Targeting Capabilities
• AI tools can analyze data sets (with no regard for whether they were obtained lawfully) to identify high-value targets and key personnel, allowing attacks to be highly focused.

• Machine-learning algorithms can generate convincing and individualized messages, personas and backstories tailored to each recipient, making them appear more ‘trusted’ or ‘legitimate’.

• Content can be dynamically generated to reflect the target’s interests and position, thus creating a sense of familiarity.

2. Improved Deception Techniques
• Natural language generation can match an organization’s tone and writing style with frightening precision.

• Personalized messages contain fewer flagrant spelling, punctuation, consistency or grammatical errors and more closely mimic human language, which only adds to the difficulty of spotting fraudulent discourse.

• By using NLG, phishing emails and messages can be customized to exploit several human vulnerabilities at once rather than a single weakness in isolation. Content can capitalize on and reference current events, making it easy to create urgency and encourage users to lower their guard.

• What's more, AI helps attackers evade detection by legacy security systems. AI-driven tooling can mimic traditional, seemingly legitimate communication patterns, making it increasingly hard for malicious or fraudulent activity to be spotted, as the sketch after this list illustrates.
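To make that concrete, here is a minimal Python sketch of one of the telltale signals legacy filters rely on: a mismatch between an email's From and Reply-To domains. The function name and sample message are invented for illustration, and the broader point is that well-crafted AI-generated phishing increasingly avoids tripping even basic checks like this one.

```python
# Illustrative sketch of a legacy-style signal check: flag emails
# where the Reply-To domain differs from the From domain.
# Function name and sample message are hypothetical examples.
from email import message_from_string
from email.utils import parseaddr

def sender_domains_match(raw_email: str) -> bool:
    msg = message_from_string(raw_email)
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2].lower()
    reply_domain = parseaddr(msg.get("Reply-To", ""))[1].rpartition("@")[2].lower()
    # If no Reply-To header is set, there is nothing to compare.
    if not reply_domain:
        return True
    return from_domain == reply_domain

sample = (
    "From: support@yourbank.example\n"
    "Reply-To: help@yourbank-secure.example\n"
    "Subject: Action required\n\n"
    "Please verify your account."
)
print(sender_domains_match(sample))  # False: Reply-To points elsewhere
```

The lesson is not that this check is sufficient; it's that AI-generated messages are increasingly engineered to pass it, which is why layered defenses matter.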

It’s evident that with attackers avoiding detection and executing malicious acts with increased ease, the need for stronger and more adaptive cybersecurity solutions is more urgent than ever. Even security-conscious professionals can be caught off guard, so it’s imperative that each organization – however prepared – take proactive steps to safeguard systems, networks and data.

Real-World Examples of AI Phishing

AI-powered phishing attacks are not theoretical — they are already being executed by an array of cybercriminals with different aims and motives. Below are just a few examples of AI phishing attacks:

• Spear phishing attacks on directors: AI algorithms generate phishing emails, seemingly from the CEO or a person of similar influence, that emphasize urgency and press directors for immediate action. The request could be a wire transfer, a login handover or anything in between.

• Customer service spoofing: Compromised email accounts — disguised as otherwise trusted institutions like banks — use AI to generate convincing emails that prompt customers to answer questions or take action. Links in these emails are malicious, possibly leading to fake websites or initiating malware installations.

• Fake employee recruitment: Fraudulent companies or individuals use AI to generate convincing job offers or recruitment opportunities for unsuspecting individuals. These could trick active or passive applicants into providing sensitive personal data which is then exploited or distributed without consent.

• Tailored social engineering: AI-generated messages that reference specific recipient information or wider geopolitical issues can create a false sense of security. Malicious actors can strike a personal connection with recipients and execute the next phase of their attack, be it installing malware or breaching logins.

These examples reveal just how covert and convincing AI can make targeted phishing attacks appear. If organizations with established security controls are falling victim to these attacks, it raises the question of what can be done to prevent them from escalating.

Defending Against the AI Phishing Threat
Traditional defenses like filters, system patches and stronger password policies remain valid and necessary, but they need additional layers to protect organizations from the modern phishing threat. Organizations must take a far more proactive stance on security to uncover and address emerging vulnerabilities before they can be exploited.

Deploy Penetration Testing to Uncover Hidden Gaps
Advanced penetration testing is now a must for companies with an infrastructure that spans multiple locations and geographies.

Ethical hackers use real-world techniques to test systems and users, uncovering weaknesses that advanced phishing could exploit. Ethical pentesting lets organizations discover weak points where AI phishing could gain a foothold, prioritize fixes based on risk severity and likelihood, and reveal gaps across systems, endpoints, networks and devices.

Pentesting solutions give organizations clear, actionable data they can use to bolster their readiness and defense strategies, and they are best deployed regularly rather than as a one-and-done tactic.

Employee Education and Vigilance
With humans still the primary attack targets, your team has to be both the first and last line of defense. Robust, relevant training programs must be rolled out to raise awareness of cybercriminals' tactics.

Training will arm your team with the ability to identify sophisticated phishing attempts by:

• Scrutinizing and questioning sender details
• Watching for suspicious timing, message tone, greetings and other inconsistencies
• Validating links to confirm their true destinations (see the sketch after this list)
• Understanding the need for multi-factor authentication (MFA) and using it consistently to verify logins and access requests
• Reporting anything that seems off
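As a simple illustration of what validating links can mean in practice, here is a minimal Python sketch, using only the standard library, that flags anchors whose visible text shows one domain while the underlying href points to another. The class name and sample HTML are invented for illustration.

```python
# Illustrative sketch: detect anchors whose displayed URL and actual
# href point to different domains, a common phishing trick.
# Class name and sample HTML are hypothetical examples.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkMismatchFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current_href = None
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.current_href = dict(attrs).get("href", "")

    def handle_data(self, data):
        text = data.strip()
        # Only compare when the visible text itself looks like a URL.
        if self.current_href and text.startswith("http"):
            shown = urlparse(text).netloc.lower()
            actual = urlparse(self.current_href).netloc.lower()
            if shown and actual and shown != actual:
                self.mismatches.append((text, self.current_href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.current_href = None

html_body = '<a href="http://yourbank-login.example">http://yourbank.example</a>'
finder = LinkMismatchFinder()
finder.feed(html_body)
print(finder.mismatches)  # flags the displayed/actual domain mismatch
```

Real phishing links are often subtler than this (lookalike characters, URL shorteners, redirects), so checks like this support training rather than replace it.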

Regular refresher training can prove instrumental in empowering employees to make informed decisions and avoid falling victim to attacks that damage brand credibility.

Use Ethical AI to Fight Bad AI

Perhaps the most pivotal method of combating unethical AI attacks is for firms to use AI itself. AI is a fast-evolving technology that relies on human input and refinement, and there's every reason to harness it as a defense mechanism and a force for good.

AI tools can be deployed to highlight even the most subtle deviations from what is deemed 'ordinary'. This can range from personal identity documents that may have been forged to cross-border wire transfers that were authorized without scrutiny.
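To sketch the idea, the snippet below uses scikit-learn's IsolationForest (an assumed dependency) to flag an out-of-pattern wire transfer. The feature set and values are invented for illustration; this is a minimal sketch, not a production fraud model.

```python
# Minimal anomaly-detection sketch on wire-transfer metadata.
# Assumes scikit-learn is installed; features and values are
# illustrative only, not a production fraud model.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [amount_usd, hour_of_day, is_new_beneficiary, is_cross_border]
historical_transfers = np.array([
    [1200, 10, 0, 0],
    [980, 11, 0, 0],
    [1500, 14, 0, 0],
    [1100, 9, 0, 1],
    [1300, 15, 0, 0],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(historical_transfers)

# An urgent, out-of-hours transfer to a new cross-border beneficiary,
# the classic CEO-fraud pattern described above.
suspicious = np.array([[48000, 2, 1, 1]])
print(model.predict(suspicious))  # -1 means flagged as anomalous
```

In practice, such models would be trained on far richer data and paired with human review, but the principle is the same: let 'good' AI learn what ordinary looks like so deviations stand out.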

Using 'good' AI to fight 'bad' AI will change the phishing landscape, and firms that fail to leverage it will fall behind in their defenses. Meanwhile, proactive cybersecurity leaders can equip their teams to use AI against emerging threats that rely on the same technology.

It's clear that AI-powered phishing attempts are not going to stop anytime soon. To preserve your organization's stability and reputation, we'd recommend deploying more stringent security measures rather than relying on yesterday's methods alone; those will soon offer little protection as AI-generated language becomes harder to detect and technology moves faster than ever. By leveraging new solutions, and AI itself, you can stay a step ahead and prevent data from falling into the wrong hands.


Chester Avey

As a freelance writer with more than a decade of experience in B2B cybersecurity, Chester Avey provides articles and content of real value on topics including cybersecurity, information assurance, business growth, software solutions and e-commerce.

