
What are social media countermeasures?

As the guy who pretty much owns the #socialmediacountermeasures hashtag on Twitter, I figured the term deserves a proper definition beyond just 280 characters.

In short, social media countermeasures are the techniques – both automated and manual – that social media services use to detect, flag, and remove malicious content. And by malicious, I mean actually harmful content created by scammers and other cyber criminals. These countermeasures therefore do not include enforcing narratives, shadowbanning, or other forms of suppressing freedom of speech in the name of “fighting disinformation” (1, 2).

The countermeasures these social media platforms use are, of course, a trade secret, and very little information about them is publicly available. Keeping it that way is both a competitive advantage and a way to make criminals’ lives harder. We can, however, deduce that all major platforms have long since evolved beyond simple blacklists of words or URLs as a means of detecting malicious content. Behavior analysis seems to be the area of focus these days, as social media companies can hoover up massive amounts of usage data from real users and then build a model around it. A behavior model alone isn’t enough, though: it only gives you some sort of average, an acceptable variance of typical behavior, but it lacks context. Without context, a model like that can still easily detect, for example, bot-driven copy-paste spamming campaigns, but when a person writes messages manually (or at least seemingly so) to scam or phish a specific individual, detection becomes a lot harder.
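To make that last point concrete, here is a toy Python sketch of the kind of behavioral signal I mean: flagging an account whose posts are near-identical and arrive at machine-regular intervals. Everything in it – the function name, the thresholds, the scoring logic – is my own illustration, not anything any platform is confirmed to use.

```python
# A minimal, illustrative sketch of behavior-based spam detection.
# All thresholds and names here are my own assumptions -- real
# platforms use far more sophisticated models than this.
from difflib import SequenceMatcher
from statistics import pstdev

def looks_like_copypaste_campaign(posts, similarity_threshold=0.9,
                                  interval_jitter_seconds=2.0):
    """posts: list of (timestamp_seconds, text) tuples from one account."""
    if len(posts) < 3:
        return False

    timestamps = [t for t, _ in posts]
    texts = [text for _, text in posts]

    # Signal 1: near-identical content. Compare each post to the first.
    similar = sum(
        SequenceMatcher(None, texts[0], t).ratio() >= similarity_threshold
        for t in texts[1:]
    )
    mostly_duplicates = similar >= len(texts) - 2

    # Signal 2: machine-regular timing. Humans rarely post at a fixed cadence.
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    too_regular = pstdev(intervals) < interval_jitter_seconds

    return mostly_duplicates and too_regular

# Example: three near-identical posts, exactly 60 seconds apart.
bot_posts = [
    (0, "Check out this FREE crypto giveaway!!!"),
    (60, "Check out this FREE crypto giveaway!!"),
    (120, "Check out this FREE crypto giveaway!!!"),
]
print(looks_like_copypaste_campaign(bot_posts))  # True
```

Notice that neither signal needs to understand what the text says – which is exactly why such a model struggles once the content is unique and context-aware.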

That’s why I’ve seen criminals deploy automated tactics that simulate normal behavior, such as introducing a false delay before auto-answering a message or a tweet, or even staging fake conversations between bots in which the bots “happen” to promote a scam service, and so forth.
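Here’s a rough sketch of what that false-delay trick could look like in code. Again, this is purely illustrative: send_reply is a hypothetical stand-in, and the timing values are my own guesses at what “human-like” would mean.

```python
# Illustrative sketch of the "false delay" counter-countermeasure
# described above -- not real bot code. send_reply() is a hypothetical
# stand-in for whatever posting mechanism the bot actually uses.
import random
import time

def humanlike_auto_reply(message, send_reply):
    # Pretend to "read" the message: longer messages earn longer pauses.
    reading_time = len(message) / random.uniform(15, 30)  # chars per second
    # Add pseudo "typing" time plus random jitter so intervals never repeat.
    typing_time = random.uniform(2.0, 8.0)
    time.sleep(reading_time + typing_time)
    send_reply("Thanks! Check my profile for the giveaway link...")

# Example: replies after a randomized, human-looking pause.
humanlike_auto_reply("hey, is this giveaway legit?", print)
```

The randomized pause defeats exactly the timing-regularity check sketched earlier, which is the whole point.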

These could be called counter-countermeasures. It’s a never-ending cat-and-mouse game between defenders’ tools and attackers’ cunning. This is why, while most spam messages – YouTube comments, for example – automatically end up in the “Held for review” folder (meaning the countermeasures caught them), a few evade detection and land among the legitimate comments.

Recently I saw a very interesting malicious campaign in YouTube comments, one that utilized stolen accounts and impressively contextual, real-looking comments. I immediately recognized it for what it was, though, which once again raises the question: how on earth did it evade YouTube’s countermeasures when it was so blatantly obvious to me? Unless you land a job in YouTube’s countermeasures unit, you’ll never know.

I will write a separate blog post about that campaign, though. It’s a very interesting example of abusing multiple layers of a site’s features to lure victims to a specific website. It’s a bit NSFW, so I first need to figure out whether I have to sanitize my screengrabs.

Finally, I’d like to remind everyone to report all scam messages. Reports really do improve detection rates in the future! I also shared this tip in the November 2022 issue of F-Alert, the monthly threat report by F-Secure. Feel free to download the report and read my article about a curious Facebook scam targeting Page Admins.

*** This is a Security Bloggers Network syndicated blog from Privacy & Security – Joel Latto authored by Joel Latto. Read the original post at: https://joellatto.com/2023/01/19/what-are-social-media-countermeasures/
