Propaganda as a Social Engineering Tool

Remember WYSIWYG? What you see is what you get. That was a simpler time in technology; you knew what the end result would be during the development stage. There were no surprises. Technology moved on, though. Now, the mantra should be, “don’t automatically believe what you see.”

Deepfakes, propaganda, misinformation and disinformation campaigns are everywhere, designed to con users into making mistakes. It’s a security problem, but one that goes far beyond malware and ransomware. These social engineering campaigns have a direct impact on people’s daily lives and on society at large.

In March 2021, the FBI released a Private Industry Notification warning the public that “Malicious actors almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12 to 18 months. Foreign actors are currently using synthetic content in their influence campaigns, and the FBI anticipates it will be increasingly used by foreign and criminal cyber actors for spearphishing and social engineering in an evolution of cyber operational tradecraft.”

In a webinar offered by Cato Networks, Raymond Lee, CEO of FakeNet.AI, and Etay Maor, senior director of security strategy at Cato Networks, discussed how and why cybercriminals are using this type of social engineering campaign.

Not All Fake Information Is the Same

When discussing cybersecurity, we tend to discuss three main elements: people, processes and technology. The focus tends to be on the technology side (which technologies we are protecting and which technologies are available to advance cybersecurity efforts, for instance), but it is really the processes and the people that threat actors target.

That’s why social engineering is so successful. Cybercriminals use technology to attack people and processes. From the cybersecurity professional’s perspective, social engineering is difficult to combat because it relies so heavily on human behavior. Different tactics are required when dealing with it, and the main focus is to understand the agenda behind the fake information.

There is a tendency to use the terms for false information interchangeably, but it is important to understand the differences. It all begins with propaganda.

Propaganda is a popular political tool to spread skewed information to grow an ideological base. It is based on facts, but the facts are used selectively so the whole picture isn’t presented. Then, there are the subsets of propaganda that we see most frequently in social engineering—misinformation and disinformation.

Misinformation is false or inaccurate information. Often, with misinformation, real data is presented, but in a way that manipulates reality. This was a serious problem at the height of the COVID-19 pandemic. For example, starting from the basic fact that COVID-19 is spread through airborne particles, threat actors and others created misinformation campaigns about how you could (or couldn’t) become infected. That one tidbit of knowledge was enough to make the claims believable, and threat actors were able to push successful social engineering campaigns.

Disinformation, on the other hand, is false information designed to mislead and distort the truth. It is often used by nation-state actors to plant seeds of false information on social media outlets, where they are spread until the falsehood is accepted as truth.

There is a thin line between misinformation and disinformation. Disinformation tends to be designed to provoke an emotional response by presenting correlated ideas and implying a false causation. For example, an ad claims that X number of people die as a result of terrorist attacks for every Y number of people in the U.S. who die from lack of health care. Each statement alone may be true, but the two have no connection to each other. Yet the person who created the ad wanted to create outrage.
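To make that gap between correlation and causation concrete, here is a minimal, hypothetical sketch in Python (the figures are invented for illustration): two statistics that merely trend upward over the same years produce a near-perfect correlation coefficient, even though neither has anything to do with the other.

```python
import numpy as np

# Hypothetical yearly figures for two unrelated statistics.
# Both simply trend upward over time; neither causes the other.
years = np.arange(2010, 2020)
stat_a = 50 + 3.0 * (years - 2010) + np.random.default_rng(0).normal(0, 2, len(years))
stat_b = 900 + 40.0 * (years - 2010) + np.random.default_rng(1).normal(0, 25, len(years))

# Pearson correlation comes out close to 1.0 purely because both
# series trend with time -- spurious correlation, not causation.
r = np.corrcoef(stat_a, stat_b)[0, 1]
print(f"correlation: {r:.3f}")
```

The high coefficient reflects nothing but a shared time trend; presenting it as cause and effect is exactly the move the hypothetical ad above makes.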

Three Elements of Information Manipulation

There are three common elements used to manipulate information for social engineering purposes. They are:

Missing context. Information is presented in a misleading way, or some vital facts are missing. On social media, this commonly takes the form of a photo paired with a caption that has nothing to do with it. For example, a picture showing violence on a city street carries the caption: “See what is happening across America today!” But the picture was, in fact, taken in a European city. (A sketch of one way to catch recycled photos follows this list.)

Deceptive editing. Here, the threat actor takes a genuine photo, video or illustration of a media story or event and edits key elements, distorting reality to create a different message.

Malicious transformation. This is the most serious of the three. Videos are altered with AI to create something fake that appears real: deepfakes. Threat actors use these techniques to push an agenda, whether it is a ransomware campaign for financial gain or an effort to manipulate social outcomes like elections.
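As promised under “missing context,” here is a minimal sketch of how a fact-checker might flag a recycled photo. It compares a suspect image against a known original using a perceptual hash, which stays similar under re-compression, resizing and light edits. This assumes the third-party Pillow and ImageHash packages; the file names and the distance threshold are hypothetical choices, not standards.

```python
from PIL import Image   # pip install Pillow
import imagehash        # pip install ImageHash

# Perceptual hashes change little under re-compression or resizing,
# so a recycled or lightly edited photo still matches its original.
original = imagehash.phash(Image.open("european_street_2018.jpg"))  # hypothetical file
suspect = imagehash.phash(Image.open("viral_us_post.jpg"))          # hypothetical file

# Subtracting two hashes gives their Hamming distance: 0 means
# near-identical images; small values suggest a reused original.
distance = original - suspect
if distance <= 8:  # threshold is a judgment call for this sketch
    print(f"Likely the same underlying image (distance={distance})")
else:
    print(f"Probably different images (distance={distance})")
```

A technique like this catches reuse and light edits; it does not detect deepfakes, which are synthesized rather than copied and require different, still-maturing detection methods.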

Recognizing how this information is used for social engineering is vital for security awareness training. The bad guys will use humans and processes to get to the technology, and they will manipulate the truth to do it.


Sue Poremba

Sue Poremba is a freelance writer based in central Pennsylvania. She’s been writing about cybersecurity and technology trends since 2008.
