Signed Malware

Stuxnet famously used legitimate digital certificates to sign its malware. A research paper from last year found that the practice is much more common than previously thought.

Now, researchers have presented proof that digitally signed malware is much more common than previously believed. What’s more, it predated Stuxnet, with the first known instance occurring in 2003. The researchers said they found 189 malware samples bearing valid digital signatures that were created using compromised certificates issued by recognized certificate authorities and used to sign legitimate software. In total, 109 of those abused certificates remain valid. The researchers, who presented their findings Wednesday at the ACM Conference on Computer and Communications Security, found another 136 malware samples signed by legitimate CA-issued certificates, although the signatures were malformed.

The results are significant because digitally signed software is often able to bypass User Account Control and other Windows measures designed to prevent malicious code from being installed. Forged signatures also represent a significant breach of trust because certificates provide what’s supposed to be an unassailable assurance to end users that the software was developed by the company named in the certificate and hasn’t been modified by anyone else. The forgeries also allow malware to evade antivirus protections. Surprisingly, weaknesses in the majority of available AV programs prevented them from detecting known malware that was digitally signed even though the signatures weren’t valid.

Posted on February 2, 2018 at 6:38 AM

Comments

mike acker February 2, 2018 7:14 AM

there are many CA resources, and lots of x.509 certs.

so much so that the process becomes meaningless.

what needs to happen is simple: any cert — or public key, for that matter — that is to be used for a critical application — such as installing software, or filing a Form 1040, &c — should be countersigned by the user.

this is all in the original documentation of PGP, in the discussion of Trust Models: we all accept the CA certs as fully trusted — because the OEM told us to.

not good

i don’t think it reasonable to expect everybody and their brother to learn to use PGP/GPG to sign certificates or Public Keys — and to maintain a Trust Model. But we do need to move forward on this, and tighten up the sails.
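to make the idea concrete, here is a minimal sketch of what a user countersignature could look like, assuming the third-party "cryptography" package and a locally held user key (the certificate file name and key handling are hypothetical):

    # a minimal sketch of the "countersigned by the user" idea, using the third-party
    # "cryptography" package. the certificate file and key handling are hypothetical.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # the user keeps a personal signing key (generated on the fly here for illustration)
    user_key = ed25519.Ed25519PrivateKey.generate()

    # load the certificate a critical application wants to rely on
    cert = x509.load_pem_x509_certificate(open("vendor_cert.pem", "rb").read())
    fingerprint = cert.fingerprint(hashes.SHA256())

    # the countersignature: the user explicitly endorses this exact certificate
    endorsement = user_key.sign(fingerprint)

    # later, before the cert is honored (software install, tax filing, ...),
    # the endorsement must verify against the user's public key
    user_key.public_key().verify(endorsement, fingerprint)  # raises InvalidSignature otherwise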

Packaged Technology

we need to find a way to package this technology. I took my daughter to the credit union for a new account. at the end of the process she was offered an “app” for her phone. but this is the point where the encryption keys need to be generated, verified, and validated.

a “smart” phone is probably not a suitable device for this purpose though. we should not mix entertainment with business. not in the office and not on a computer.

TheInformedOne February 2, 2018 8:52 AM

It’s always been the Achilles heel of PKI. How do we know the person or organization associated with a cert is genuine? Do CAs and intermediaries do a good enough job vetting recipients? Due to the paradox of “security vs. convenience” the simple answer is NO. The only real way to make it better is physical presence. A round-trip email to someone’s Gmail account shouldn’t be good enough to issue a cert. If you want to validate identity, that person needs to go before a Notary Public and show 2 forms of govt. issued ID, after which the Notary sends a digitally signed authorizing token to the CA, who returns the cert (encrypted) to the Notary, who physically issues it (in person) to the applicant. Although this would certainly drive up the Notary’s fee, this 4-way handshake might even be good enough to allow for digital online election pre-voting, since it provides better authentication than today’s existing pre-election vetting processes based on voter registration cards. Developers have an even bigger responsibility since their “code” can be harmful to public safety, and additional steps to maintain their certs in safe order (scheduled mandatory CRL checks, etc.) should be required. Good security is rarely convenient, but of course the bad guys already know that.

asdf February 2, 2018 8:57 AM

The forgeries also allow malware to evade antivirus protections. Surprisingly, weaknesses in the majority of available AV programs prevented them from detecting known malware that was digitally signed even though the signatures weren’t valid.

How does this happen? Was it as simple as AVs just ignoring anything digitally signed? Doesn’t that statement raise a lot of questions?

ghjk February 2, 2018 9:42 AM

@asdf
People should have learned long ago not to trust antivirus software. It can be useful as another line of defense but is not at all reliable. If you already treat it that way, you don’t have much to fear.

Petre Peter February 2, 2018 9:44 AM

Remember! In order for an authentication system to work: 1) credentials have to be difficult to forge; 2) people doing the authentication must know how to detect a forgery. “Quis custodiet ipsos custodes”

Drone February 2, 2018 12:44 PM

“Surprisingly, weaknesses in the majority of available AV programs prevented them from detecting known malware that was digitally signed even though the signatures weren’t valid.”

And what about the AV programs that intentionally flag NON-malware (signed or unsigned) as malware (a false positive) to drive subscription renewals?

And where does the (now owned by Google) VirusTotal cloud thing fit in here?

Vesselin Bontchev February 2, 2018 1:05 PM

Surprisingly, weaknesses in the majority of available AV programs prevented them from detecting known malware that was digitally signed even though the signatures weren’t valid.

Without proper context, this is misleading. Imagine, for instance, a piece of malware that is self-contained, in a single, small, non-changing executable file. It might be efficient to detect it by computing a hash of the file. Now, if you add a digital signature (valid or not) to this file, the file will change, the hash will change too, and the AV will stop detecting it. It is important to understand here that the AV stops detecting it not because it is signed but because it has changed. Appending a bunch of garbage would probably have exactly the same effect.
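A toy illustration of that point, assuming a hypothetical sample file and whole-file hashing (not any particular AV's method):

    # toy illustration: appending anything (a signature blob, garbage) changes the
    # whole-file hash, so a detection keyed on that exact hash no longer matches
    import hashlib

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    original = open("sample.exe", "rb").read()   # hypothetical malware sample
    modified = original + b"\x00" * 512          # stand-in for an appended signature

    print(sha256(original))   # the hash a file-hash blocklist would know
    print(sha256(modified))   # a different hash, so the lookup misses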

But this is just an example. Different AV programs work differently. Even one and the same AV program uses different detection methods for the different kinds of malware. Without proper context the quoted statement is meaningless and misleading.

@Drone

And what about the AV programs that intentionally flag NON-malware (signed or unsigned) as malware (a false positive) to drive subscription renewals?

This is total and utter bullshit. No AV producer in their right mind would ever do such a thing. False positives are just as harmful (both for the user and for the producer) as false negatives (i.e., not detecting malware) and the AV producers go to great lengths to avoid them. Suggesting that they would cause them on purpose is ignorant, paranoid idiocy. It would also have exactly the opposite effect – the users are likely to stop using an AV product that causes false positives too often.

Ratio February 2, 2018 1:49 PM

@Vesselin Bontchev,

Without proper context the quoted statement is meaningless and misleading.

Section 4.3, Malformed digital signatures, under the heading “anti-virus protections” on pages 8–9 of the research paper.

jc February 2, 2018 3:37 PM

Stuxnet famously used legitimate digital certificates to sign its malware.

What? You’re telling me the crooks put on a white shirt, black pants, suit, and tie, and they got approval from security at the front desk of the building to enter the office to pull off this scam?

No! It can’t be!

Vesselin Bontchev February 2, 2018 4:38 PM

@Ratio, I’ve read the paper. What I wrote still stands – the statement is without proper context, meaningless and misleading.

They “downloaded a few random samples of malware”. What samples? What malware? Was it small, self-contained, non-changing malware that could be detected by a file hash? We don’t know. Then they signed it and some AV products stopped detecting it. How were these products detecting this particular malware? Did they use a file hash? We don’t know.

Basically, the authors don’t understand how AV products work. Different products work differently and even one and the same product uses different detection methods for the different kinds of malware programs. Without understanding how exactly a product works, you can’t understand why a particular attack against it works, and your conclusions will be wrong and misleading.

@jc in the case of Stuxnet, they were using a stolen code signing key from a company in Taiwan.

hmm February 2, 2018 6:58 PM

@ Vesselin

I agree with your minor point. If you don’t know exactly how the cert signing is affecting the specific AV algo/heuristics process, then drawing conclusions about “all AV” is going to be pretty much pointless.

Anon February 2, 2018 10:07 PM

Surprisingly, weaknesses in the majority of available AV programs prevented them from detecting known malware that was digitally signed even though the signatures weren’t valid.

I’m not surprised by this at all.

It is further evidence that AV products not only fail to protect people from malware, but that they fail at even basic security themselves.

Why do they not validate the signature as part of the test? They obviously aren’t doing this.

AV isn’t worth the disk space it occupies.

Anon February 2, 2018 10:12 PM

i don’t think it reasonable to expect everybody and their brother to learn to use PGP/GPG to sign certificates or Public Keys — and to maintain a Trust Model. But we do need to move forward on this, and tighten up the sails.

Why is it so hard for people to understand?

More to the point: why are crypto products so hard to use for even basic stuff? Why are the developers of these products incapable of producing “one click” interfaces for common tasks so the layman can use them?

22519 February 3, 2018 3:47 AM

@mike acker

“there are many CA resources…so much so the process becomes meaningless.”

Exactly. Who are all those people with the crazy names?

@Anon

“More to the point: why are crypto products so hard to use for even basic stuff? Why are the developers of these products incapable of producing “one click” interfaces for common tasks so the layman can use them?”

I often wonder the same things. At least it seems that some folks are trying to make effective encryption easier. Keybase is a good example.

When one sees asymmetric encryption being used to effectively undermine the supposed goals of asymmetric encryption, it is really depressing. In my thinking, kleptotrojans are game over for the system. If you are on defense, don’t stay up at night worrying about quantum computing. The point behind the power of using fake certificates, kleptography, etc., is that it can all be very effectively denied.

I worry that today’s security situation is similar to when people were using the Telekrypton. Remember that device, circa 1935…? A typist sat and punched holes in a tape which generated a keystream (a loop) that was supposedly random (not random at all), and the system was widely used, highly regarded, and deeply flawed. Everything is just fine until it’s not, until someone understands why the system people depend on is schmendrick city. To my mind, that is what weaponized asymmetric and hybrid cryptography does right now–it’s lethal. It’s deniable.

How to make non-repudiation real and absolutely accurate? It is not going to happen, is it? Too much division in the world.

r February 3, 2018 5:33 AM

malware is malware regardless of being signed; if an av uses pki as a heuristic exemption flag, that av sucks.

real programs contain real overlaps with real malware; code snippets and file structures are always going to be common between files when similar languages and similar chips are employed within any ecosystem.

malware authors have even been known to extract code from existing legitimate software just to lower their chances of being detected for doing things they're not supposed to with known techniques.

to whitelist a specific company's signed software is a reasonable idea if you are the device administrator and have a contract or relationship with such a company. outside of that scope? av and os vendors have almost no business auto-whitelisting any company on their own.
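a toy version of that policy, with made-up publisher names and certificate thumbprints:

    # toy allow-list: the administrator pins specific signers; a valid signature by
    # itself is never an exemption from scanning. publishers/thumbprints are made up.
    PINNED_SIGNERS = {
        ("Example Corp", "ab12cd34ef56"),   # (publisher, certificate thumbprint)
    }

    def skip_scan(publisher: str, thumbprint: str, signature_valid: bool) -> bool:
        return signature_valid and (publisher, thumbprint) in PINNED_SIGNERS

    print(skip_scan("Example Corp", "ab12cd34ef56", True))   # True: pinned by the admin
    print(skip_scan("Random Signed Co", "ffff0000", True))   # False: "signed" is not enough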

hmm February 3, 2018 1:49 PM

@ Anon

“It is further evidence that AV products not only fail to protect people from malware, but that they fail at even basic security themselves.”

You are lumping all AV products into one group and judging them by the lowest common denominators, but the fact seems to be that not all AV are affected by this. Sweeping generalizations don’t really do much to enhance actual security, though that itself is a sweeping generalization that can be proven wrong in instances. I guess what I’m saying is if you know how AV is being defeated specifically, that’s a lot more useful and operable information than just saying “AV sucks” and shooting off a few examples.

@22

“Why are the developers of these products incapable of producing “one click” interfaces for common tasks so the layman can use them?””

http://download.cnet.com/EncryptOnClick/3000-2092_4-10449079.html

I mean, these products exist whether or not they live up to all expectations of efficacy.
But you’re expecting ‘lay’ users to be concerned about encryption? Very maybe.

These are the people who are posting their lives to FB feeds, instagramming their breakfasts,
password = 1234

There ARE simple whole-disk crypto apps, phones are trivial to encrypt, etc.
It’s not 1-click but it’s rather close to that in many instances. Bitlocker, etc.

I think what you meant is “why don’t they come encrypted by default, idiotproof, from the factory.”
That comes soon I think, you’re correct that there’s an idiot consumer base that requires that.
But each player wants to have control of that encryption, see. If not the user “taking” it, who has it?
Someone who can subvert or revoke it with a push.

22519 February 3, 2018 11:15 PM

Thinking about the possible uses of signed malware:

Inject a signed kleptotrojan for access into a personal device–delivered over a network, by drone, hand-held device, or from dorked hardware. Authenticate the target (person) via exfil of MULTIPLE pieces of information: IMEI, IMSI, photos, PGP keys, contacts, text, email address, etc. Exfil keys of all types. Voice ID?

Make it a fast process.

Use exfilled keys to resolve ciphertext in stored messages automatically, and use that information to further indicate targetability.

Ugh… facial recognition?

A drone, robot, person, or combination of the three then “services” the target.


Person A: What happened to Jimmy Jo?

Person B: Dunno…

Anon February 4, 2018 1:51 AM

@hmm:

You are lumping all AV products into one group and judging them by the lowest common denominators, but the fact seems to be that not all AV are affected by this. Sweeping generalizations don’t really do much to enhance actual security, though that itself is a sweeping generalization that can be proven wrong in instances.

Yes and no. As you say, not all AV is like this, and I wasn’t literally meaning all AV is like this, but it doesn’t look good.

@r:

if an av uses pki as an exemption flag that av sucks

+1

@22519:

How to make non-repudiation real and absolutely accurate? It is not going to happen, is it? Too much division in the world.

I do seriously wonder if the developers behind the … better known … crypto products have been “got at”. I’m not talking payware RSA, but smaller, free or open-source products.

Take what happened to TrueCrypt for example: well-known enough to gain attention; is high-profile for allegedly being unbreakable in a few high-profile criminal cases; becomes the focus of cryptanalysis, then suddenly and inexplicably disappears. Coincidence? Not only that, but the last release (7.2) is apparently laced with malicious software.

Some security software has an ambience of intentional negligence in how it is written (OpenSSL received scathing comments after it was reviewed – intriguingly it was never fixed), while others always seem to malfunction, or not operate properly (anti-virus).

So many security products are afflicted with problems, I find it hard to believe it is accidental. I know it is said all the time that writing secure software is hard, but these are the products that are supposed to be better than the average, yet they ALL fail, and often have shockingly low quality standards when examined in the cold light of day.

It’s almost an immutable law of nature that security software is always worse than the malicious software that evades or exploits it.

How can we fix this situation?

Ratio February 4, 2018 5:26 AM

@Vesselin Bontchev,

Different [AV] products work differently and even one and the same product uses different detection methods for the different kinds of malware programs.

They observe that in many cases one and the same product flags unsigned binary B as malware, but doesn’t flag the derived signed binaries B’ and B” —or in the case of some products, flags the original and only one of the derived binaries. The variants of each of the binaries differ only in their PE headers (CertificateTable, IIRC).

The detection method used —whatever it may be— is trivially circumvented. This is not good. Knowing why this is so, while interesting, does nothing to change that.
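For reference, that difference is visible with the third-party pefile module; a small sketch with hypothetical file names (the Authenticode file hash excludes this directory, which is why the rest of the binary can stay byte-identical):

    # inspect the PE data directory entry that holds the appended certificate blob
    import pefile  # third-party: pip install pefile

    SECURITY = pefile.DIRECTORY_ENTRY['IMAGE_DIRECTORY_ENTRY_SECURITY']

    def certificate_table(path):
        pe = pefile.PE(path)
        entry = pe.OPTIONAL_HEADER.DATA_DIRECTORY[SECURITY]
        # for this directory, VirtualAddress is a raw file offset to the certificate data
        return entry.VirtualAddress, entry.Size

    print(certificate_table("B.exe"))         # typically (0, 0) for the unsigned original
    print(certificate_table("B_signed.exe"))  # offset and size of the appended certificate data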

r February 4, 2018 7:48 AM

re: how common are false positives?

our scans have indicated potentially suspicious code located in

file 434,567>
c:\w64\system32\ntdll.dll

however, this file is signed by microsoft; would you like to quarantine this file?

our scans have indicated another potentially abstract problem

file 515,878<
c:\w64\drivers32\bo2k.sys

this file is signed by nokai technologies ltd, our research indicates … (nothing! we’re not the SEC)

delete, quarantine, ignore?

gadgets present problems, think of false heuristic triggers as stumbling over commonly known gadgets – are they unique enough to get all paranoid about?

in many cases hiding malicious code on a system is as easy as naming it malicious.txt, customized sideloaders can hide nearly anything and still make themselves (and their never read bo2keula.txt) look legit.

miscprogramGPLagreement.txt; cc: my lawyer, reverse engineer.

Wael February 4, 2018 8:37 PM

@Ratio,

but doesn’t flag the derived signed binaries B’ and B”

If B is distance, then the first and second derivatives with respect to time are Velocity and Acceleration, eh? Not what you meant, huh?

We’ve discussed this in the past: something like codeDNA from Johns Hopkins may have better chances.

@Clive Robinson,

Your love and pity doth the impression fill,
Which vulgar scandal stamped upon our brow;
For what care I who calls me well or ill,
So you o’er-green my bad, my good allow?
You are our all-the-world, and I must strive
To know my shames and praises from your tongue;
None else to me, nor I to none alive,
That my steeled sense or changes right or wrong.
In so profound abysm I throw all care
Of others’ voices, that my adder’s sense
To critic and to flatterer stopped are.
Mark how with my neglect I do dispense:
You are so strongly in our purpose bred,

Get off that pit, Mister! I detect your ‘covered footprints’ around 😉

@r,

re: how common are false positives?

Seriously? Apparently less common than false negatives. Ask any famous criminal who got acquitted…

22519 February 4, 2018 10:41 PM

@ Anon

   "I do seriously wonder if the developers behind the... better known...
   crypto products have been 'got at'."

I do too. There are several examples of small companies that made undorked crypto products and then suddenly those companies disappeared like smoke. Be honest about it: undorked crypto implementations that offer real privacy/anonymity/etc. make some people hit the ceiling. Their disappearance is like an open secret.

On this topic, I want to say one thing: at the NSA they have a name for the day when the Soviets started using one-time pads in all of their important communications–October 29, 1948:

                           Black Friday

(See: The National Cryptologic School’s “On Watch”, 9-86)

In other words, there are real solutions in cryptography, symmetric, asymmetric and hybrid. But are those solutions being taken up in smart ways? Does the public get a real chance to actually encrypt something effectively? Seldom, in my judgment.

  "...but these are the products that are supposed to be better than the
  average, yet they ALL fail, and often have shockingly low quality
  standards when examined in the cold light of day."

Exactly. The namby-pamby stance towards security that one can see in some products, take GnuPG for example, really makes me angry. Why are they still offering NIST curves? Mr. Schneier already made damning statements against the NIST curves–based on highly credible evidence! Moreover, why does GnuPG want to get rid of compression altogether? By the way, can we use some strong, more modern hashes that were not invented by the NSA?

I start to wonder about the NSA. Are they collaborating and doing something honorable, or are they colluding and doing something dishonorable and stupid? I still want to know how Snowden Hobbit exfilled 7 or so terabytes from NSA computers without anyone being aware, and then he waltzed out of a highly secure facility with it under his arm. Sounds like absolute clown shoes to me.

   "How can we fix this situation?"

I despise Snowden Hobbit and would stick a knife in his guts and twirl it joyfully if I had a chance, but don’t let that put you off. Snowden Hobbit said one very disturbing thing that I think was frighteningly true and relevant. To paraphrase: the U.S. government depends on people being uninformed. In other words, democracy in America is becoming a big, fat charade. And this is happening on our watch.

What to do if you believe in defending and supporting the US Constitution? Educate people. Say something when the Constitution is being subverted, especially when the Congress goes along with it (Section 702). Educate people on how to use encryption, and tell them about its value.

Stop letting the world be lied to. Call out people like Hayden who lie like a rug and get paid for it. Let your Congressman/Congresswoman know that you don’t want faulty crypto products.

I often ask myself the same question as you asked above: what to do about this situation. To be frank, a good solution is to lose anything that emits electrons and tell all of your friends to do the same. But who can do that these days? Instead, we see people trusting each other on electronic devices and thinking that the internet is benign: it’s not. It’s a gigantic collection platform and WWF cage for titans. People use it and trust each other like an RJ-45 plug trusts electricity – and in doing so re-affirm one another’s fantasies about security.

22519 February 5, 2018 2:28 AM

A reader may wonder why I kept on talking about the US Constitution in my last post.

The Constitution is the cornerstone of American strength. It was built to last, to resist the most cretinized yes-men and schmendricks that future US “leaders” might be. It’s a guardrail against tyranny.

Recent schmoes have been responsible for some incredible nosedives, and many of those involve security. These policies of backdooring crypto and doing everything to weaken defensive systems are completely Looney Tunes. Undermining certificate authorities, and the trust that people put in them, is bad enough. Just as bad are the sins of omission: why the US cannot leverage its smart people in government security to help make sure that commercially-available security products actually function well is really just a disgrace. The Chinese are suffering from too many belly laughs.

The schmucks who were at the wheel in the summer of 2001–yes, let’s go there–were as benighted as one can imagine, and still are, I dare say. Why is that? Because they still don’t get it. They still don’t understand fundamental concepts about security–nor do they understand terrorism or how to defend networks (the OPM disaster). Well, these days they are too busy fighting each other anyway, so does it really matter?

Why are the schmuck$ benighted?

Because they have reneged on their oath to support and defend the Constitution of the United States. And because they just don’t have the imagination or other bandwidth to understand terrorism. Bin Laden could not hit the US in the very center of its power–as he probably wished–which is the edifice of its law. An ideal terror attack would have done that. Now, years later, we see the laws being corroded and fissures opening up because of the poor response to 9-11–deep self-inflicted wounds that should not have been allowed to happen or continue (dome$tic $urveillance; Section 702), and are doing the work the terrorists could have only wished for: really damaging the USA, causing it to be distrusted, putting a fissure in the cornerstone.

me February 5, 2018 2:57 AM

@Vesselin Bontchev:
No single av is going to detect a virus by a hash of the whole file, exactly because it’s enough to recompile it (timestamp change), or add a byte of garbage, to change its hash.
they use patterns: they might still use a hash, but not of the whole file; instead they hash some part of the actual virus, so they detect it also if there is appended garbage or if there is a new/different version that is not too different.
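a rough sketch of that idea (not how any specific product actually does it; the offsets and sample file are made up):

    # hash only a fixed slice of the file instead of the whole file, so appended
    # garbage or a trailing signature blob does not change the lookup key
    import hashlib

    def partial_fingerprint(data: bytes, offset: int = 0x400, length: int = 0x1000) -> str:
        return hashlib.sha256(data[offset:offset + length]).hexdigest()

    sample = open("sample.exe", "rb").read()            # hypothetical sample
    print(partial_fingerprint(sample))                  # unchanged...
    print(partial_fingerprint(sample + b"\x00" * 512))  # ...even after bytes are appended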

me February 5, 2018 3:01 AM

Also, my av (kaspersky) has a very explicit option about this:
[checkbox] Consider trusted files that are digitally signed
i have unchecked it.

here “trusted” refers to some kind of sandboxing categories that this av has: trusted has no restrictions, partially restricted can’t do some things, ….
but i don’t think it should be translated to “ignore, whitelist and don’t even scan every signed exe”

me February 5, 2018 6:41 AM

@Wael
I think that what an av does is blacklist the hash as an immediate solution, and later, with automatic and/or manual analysis (not sure if that is always possible given the huge volume of viruses created), add a detection based on a small part of the virus, because otherwise it would be far too easy to avoid detection by changing a single byte.
what i said is based on logic and what i have read online, so i’m not 100% sure, but i’m going to try; i have a few virus samples at home.

i can’t find the articles right now but i have read this:
-changing the old msdos text “this app can’t be run in dos mode” was enough to avoid detection sometimes
-someone made a program that changed an exe more and more until it stopped being flagged, so that program could find exactly what the av was detecting.

my experience:
i was writing a dll injector and after each compile the av flagged it as “generic trojan”, so when i got bored i commented out most of the code until the detection stopped.
the problem was a messagebox string “error, api OpenProcess failed”, adding a space in the api name was enough to avoid the detection.
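a toy scanner shows why a single added space can matter (the pattern and detection name are made up; the real logic of that av is unknown):

    # toy byte-pattern scanner: one changed byte in the matched string defeats it
    SIGNATURES = {
        b"error, api OpenProcess failed": "Trojan.Generic (hypothetical)",
    }

    def scan(data: bytes):
        return [name for pattern, name in SIGNATURES.items() if pattern in data]

    print(scan(b"...error, api OpenProcess failed..."))    # flagged
    print(scan(b"...error, api Open Process failed..."))   # one added space: no match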

i’m not saying that i know better than him (or anyone in general) how av solutions work; i think that what someone writes in a comment is an approximation/simplification of how they work, since writing out all the details would be too long.
i didn’t know who he was, but now that i do, i still don’t think something is 100% correct just because he said it.
also i don’t want to offend anyone in any way; if i have, i’m sorry.

r February 5, 2018 7:30 AM

@Wael, RE: White Listing and FALSE Negatives.

Seriously? Apparently less common than false negatives. Ask any famous criminal who got acquitted…

delete, quarantine, ignore? (if popups irritate 99% of users then > file.log else alert!)

((our heuristic analysis of your antiviral log file says your output is too complex for human compartmentalization thus we (as eicar) recommend not outputting more than false positives.))

red team go!
😉

Wael February 5, 2018 8:15 AM

@me,

adding a space in the api name was enough to avoid the detection.

Strange.

i still don’t think that is 100% correct just because he said it.

I know that!

i don’t want to offend anyone…

I wasn’t.

@r,

Winter Olympics? Cheering for China now?

me February 5, 2018 8:37 AM

@Wael
yes, strange but not so much, it might be some kind of custom import detection.
av might suppose that the text was there to be passed to GetProcAddress to try to hide the fact that i used OpenProcess (which wasn’t the case).

what i find strange if not inexplicable is why my av doesn’t detect my keylogger;
ok, it is not public so they can’t have any hash or pattern, but from the heuristic point of view: why is an app calling GetAsyncKeyState to check (regardless of the focused app) whether a key is pressed or not, for every key, every millisecond?
i don’t think that there is a legit reason for this; it should be super suspicious behaviour… the only reason i can think of is that since it is not made for evil purposes and has a gui, the av doesn’t flag it (usually viruses don’t have a gui).
but still this is super strange….

Clive Robinson February 5, 2018 9:35 AM

It was clear to me long before Stuxnet became known that “signed code” was not worth the cost of generating the certificate unless a whole load of other measures were put in place, which 99.99.. times in a hundred they were not.

Myself, @Nick P, Wael and others had some conversations on this blog, likewise long prior to Stuxnet; I guess people can go back and search for them if they are disbelieving.

The main point to realise is that code signing is effectively:

1, Generate a PKey pair.
2, Publish the Public Key.
3, Keep the Private Key.
4, Develop some software.
5, Make an archive of the software.
6, Hash the archive.
7, Sign the hash with the Private Key.
8, Release the archive.
9, Release the archive signature.
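In code the mechanics above amount to very little, which is rather the point; a minimal sketch assuming the third-party "cryptography" package and hypothetical file names (Ed25519 hashes internally, folding steps 6 and 7 together):

    # minimal sketch of steps 1-2 and 6-9 plus the user's check; everything that
    # makes this meaningful (key custody, build integrity, distribution, whether the
    # user ever runs the check) happens outside these few lines
    from cryptography.hazmat.primitives.asymmetric import ed25519

    private_key = ed25519.Ed25519PrivateKey.generate()   # step 1 (step 3: keep it safe)
    public_key = private_key.public_key()                # step 2: publish this

    archive = open("release.tar.gz", "rb").read()        # steps 4-5: the software archive
    signature = private_key.sign(archive)                # steps 6-7: hash and sign

    # steps 8-9: ship archive + signature; the end user must actually do this:
    public_key.verify(signature, archive)                # raises InvalidSignature on tamper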

Do you see any problems with steps 1, 3, 4, 5, 7, 9?

If you don’t then you really should, because they are all fairly easily subvertible points that an attacker can use… What about 2, 6, 8? Well, they are likewise points that an attacker can subvert, only not quite as easily…

Thus the whole code signing process is a complete crock unless you add a whole bunch of other quite complex and difficult to implement processes that are really quite costly to do, and even more costly to maintain…

The problem with “quite complex and difficult” is that it equates to “near impossible to audit to the required level”. So most Code Signing Audits are at best a sham.

But it gets worse, you have no right to inspect the audit methodologies let alone their implementation so they are not as classy as a black box, more a black hole that you only know is there by some secondary effects at best.

But… Many code signatures have not been done in a way that is easy for the end user to use. Thus they end up relying on someone else’s code that could be backdoored (something I would have done if I were a SigInt agency).

Which is why many people have installed software without knowing if the signatures are valid or even checked.

So the whole process is effectively completely flawed on both sides of the problem (gen/chk).

But this should not be news to anybody who can think and read security alerts and academic papers. The fact it’s been going on for so long says rather more about how low in esteem security is held by just about everybody…

In fact the only people who appear to get upset about it are the owners of “Walled Gardens” that steal your rights of ownership over the equipment you have purchased…

@ Thoth,

There must be a suitable holiday or other celebration coming up, time to make another “Golden Sticker” 😉

@ Wael,

No, no footprints around but thanks for the thoughts. Hopefully I’ll stay up on the perch a little longer this time, otherwise it’s “Blue Norwegian” time…

r February 5, 2018 12:11 PM

@me,

maybe they do (detect your keylogger) but are ‘laying in wait’.

f00d for thought, happy best foot foreword.

@wael,

influenza + kim jong un? i am feverishly delighted at the prospect of a happily nuclear wintered olympics.

nuclear twins?

r February 5, 2018 12:14 PM

also, @me

think back to the recent malware bytes(?) employee and his ‘creation’.

maybe it’s not your keylogger.

word of advice? get your ‘GPA’ up bro.

echo February 5, 2018 12:34 PM

Fight back against forced obsolescence! Install fresh malware today!!!

https://www.bleepingcomputer.com/news/security/nsa-exploits-ported-to-work-on-all-windows-versions-released-since-windows-2000/

A security researcher has ported three leaked NSA exploits to work on all Windows versions released in the past 18 years, starting with Windows 2000. The three exploits are EternalChampion, EternalRomance, and EternalSynergy; all three leaked last April by a hacking group known as The Shadow Brokers who claimed to have stolen the code from the NSA.

Anon February 5, 2018 8:26 PM

@me:

I talked to a few people from Kaspersky a few years ago at a conference, and showed them some proof-of-concept code. It intercepted the Windows login process and modified it.

Their AV product was completely oblivious to the fact I had replaced several system DLLs with stubs.

me February 6, 2018 2:24 AM

@Anon:
same here, i sent a poc for a bypass; they said they will try to fix it even if it is a limitation imposed by microsoft, on some os versions only… i can only hope that one day it will be fixed.

maybe we are just overestimating the usefulness of an av solution…

i know that it is designed more to detect old known viruses and it can’t detect new ones (the ones that bad guys actually use).
but av is useful in another way:
it forces the bad guy to innovate: he can’t make a complex virus with many functions because two days later it will be detected, so there is no point in working hard if you have to restart from zero two days later. so they keep the complexity low.
this is missing on linux; you can be infected by something 10 years old and not even notice it.

Mike Barno February 6, 2018 7:26 AM

@ me :

… it forces the bad guy to innovate: he can’t make a complex virus with many functions because two days later it will be detected, so there is no point in working hard if you have to restart from zero two days later.

Well, no, that depends on the goal.

If the “bad guy” is just some basement hack looking to harvest credit-card numbers or PayPal passwords from the general public, then sure, it makes bad-guy-sense to deploy simple malware that just repackages known exploits with quickly-codable variations, hoping to get a couple of days’ use before detection and a couple of weeks’ use while patches make their way to users.

But if the “bad guy” is a serious team within a government spy agency or an international mafia, whose target is a single person (or crucial project) whose secrecy they really want to compromise, then it can be quite worthwhile to invest a lot of resources into developing complex original malware exploiting zero-day vulnerabilities, even though it might literally get used only once and its supporting infrastructure shut down before two days have passed.

me February 6, 2018 9:00 AM

@Mike Barno:
true, but most people are not attacked by a government, and even if i saw more abuses than legitimate uses of government malware i still want to believe that they are the good guys (of course the nsa is for sure excluded from the good guy list)

Mike Barno February 6, 2018 12:47 PM

@ me :

… even if i saw more abuses than legitimate uses of government malware i still want to believe that they are the good guys…

Which government? If you see the USA’s NSA pervasive invasive surveillance (for nominally anti-terrorism purposes) as non-good-guy activity, then which government malware is preferable? Mexican monitoring of anti-corruption investigators? Turkish monitoring of journalists covering the war against Kurds in Syria? Chinese monitoring of pro-self-rule Hong Kong publishers? Egyptian monitoring of democracy advocates?

… but most of the people are not attacked by a government…

Most plumbers, and most truck drivers, and even most computer programmers working for local grocery stores are more likely to see some harm (a few hundred dollars of financial loss, inconvenience of having to get a credit card replaced) from a small-time crook’s simple virus. But if you’re an investigative journalist, or a political activist, or a network-security researcher, then you’re more likely to wind up disappeared, never to be seen again, due to a complex, sophisticated attack by a government’s covert agency or its contractor.

Government-ordered malware attacks are not a negligible problem just because fewer of them cause enough damage to be noticed.

me February 7, 2018 8:17 AM

@Mike Barno:
Government-ordered malware attacks are not a negligible problem just because fewer of them cause enough damage to be noticed.

i think you are right, and to be clear i also think that all of the examples you mentioned are immoral abuses.

what i wanted to say is that if i stop thinking that my/other governments are the good guys, the only thing left is that there aren’t any (except normal people)… not a very happy scenario.

yet you named many examples of abuses; some of them were known to me, thanks to citizenlab, and some are new.
i can also point out another: hacking team sold malware to sudan (which was illegal), but still nothing happened to them.
there is only ONE case i know of where malware helped (and i’m not sure how much), but it is 1 vs too many abuses…

Clive Robinson February 8, 2018 3:33 AM

@ me,

[I]f i stop thinking that my/other governments are the good guys, the only thing left is that there aren’t any (except normal people)… not a very happy scenario.

It’s the way of the world, and has been for several centuries.

As history shows “Power has no sense of humour, loyalty or humility”. Further it is based on the simple premise that the only rights you have are those that you can seize and hold unto yourself against all comers.

The only real advantage of “power” in the political and dictatorial sense is that it is grossly inefficient, as everybody looks out for ways to improve their position. Thus there is capacity for those with a mind to enrich themselves to gain considerably. The only question is if they want material or political enrichment and how others view that.

To see what is close to the ultimate power structure look at crime syndicates, especially those involving drugs. That is what society is moving to as criminals and corporates discover just how much they have in common. Further just how little politicians and legislators can be bought for, to make their crimes against society not just legal but apparently virtuous…
