Breaking the Zeppelin Ransomware Encryption Scheme

Brian Krebs writes about how the Zeppelin ransomware encryption scheme was broken:

The researchers said their break came when they understood that while Zeppelin used three different types of encryption keys to encrypt files, they could undo the whole scheme by factoring or computing just one of them: An ephemeral RSA-512 public key that is randomly generated on each machine it infects.

“If we can recover the RSA-512 Public Key from the registry, we can crack it and get the 256-bit AES Key that encrypts the files!” they wrote. “The challenge was that they delete the [public key] once the files are fully encrypted. Memory analysis gave us about a 5-minute window after files were encrypted to retrieve this public key.”

Unit 221B ultimately built a “Live CD” version of Linux that victims could run on infected systems to extract that RSA-512 key. From there, they would load the keys into a cluster of 800 CPUs donated by hosting giant Digital Ocean that would then start cracking them. The company also used that same donated infrastructure to help victims decrypt their data using the recovered keys.
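
To make the arithmetic concrete: once the 512-bit modulus has been factored into its primes p and q (the part that needed the CPU cluster), rebuilding the private key and unwrapping the file key is cheap. A minimal Python sketch, with hypothetical variable names and a textbook-RSA unwrap step standing in for whatever wrapping format Zeppelin actually uses:

```python
# Minimal sketch: rebuild an RSA private key from the factors of a
# 512-bit modulus and unwrap the 256-bit AES file key with it.
# The names and the textbook-RSA unwrap are illustrative assumptions;
# Zeppelin's real key-wrapping format may differ.

def recover_aes_key(n: int, e: int, p: int, q: int, wrapped: int) -> bytes:
    """Given n = p*q from the factoring step, derive d and decrypt."""
    assert p * q == n, "not the right factors"
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)      # private exponent (Python 3.8+ modular inverse)
    m = pow(wrapped, d, n)   # textbook RSA: m = c^d mod n
    # Assume the low 32 bytes of the recovered message are the AES-256 key.
    return m.to_bytes((m.bit_length() + 7) // 8, "big")[-32:]
```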

A company offered recovery services based on this break, but was reluctant to advertise because it didn’t want Zeppelin’s creators to fix their encryption flaw.

Technical details.

EDITED TO ADD (12/12): When BitDefender publicly advertised a decryption tool for a strain of DarkSide ransomware, DarkSide immediately updated its ransomware to render the tool obsolete. It’s hard to come up with a solution to this problem.

Posted on November 21, 2022 at 7:08 AM

Comments

Clive Robinson November 21, 2022 10:41 AM

@ Bruce, ALL,

This use of PubKey was originally talked about by Moti Yung a couple of decades back.

Back then, factoring a 512-bit key was seen as close to impossible.

These days not at all…

So it’s important to mention that 1024-bit RSA, which is still in use, can be factored. 1536-bit keys would probably be “easy meat” within a decade.

So going for 2048 bits would be the absolute minimum for new “one time use” RSA keys these days.
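
For illustration only, a minimal sketch of that recommendation using the pyca/cryptography package (the helper name and the constant are mine, not from any standard):

```python
# Sketch: enforce a 2048-bit floor on freshly generated "one time use"
# RSA keys. Requires the third-party pyca/cryptography package.
from cryptography.hazmat.primitives.asymmetric import rsa

MIN_RSA_BITS = 2048  # absolute minimum for new keys, per the comment above

def generate_ephemeral_key(bits: int = MIN_RSA_BITS):
    if bits < MIN_RSA_BITS:
        raise ValueError(f"{bits}-bit RSA is within practical factoring reach")
    return rsa.generate_private_key(public_exponent=65537, key_size=bits)
```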

Winter November 21, 2022 11:50 AM

There is a lot of annoying RC4 encryption with a hard-coded key, plus shuffling, offsets, and stripping.

In short, a lot of security by obscurity, which was just annoying.

The only thing that really was secure was the RSA-512 encryption. They even tried to remove the public key; had they succeeded in wiping it, the scheme would have held. But deleting bits is Hard™.

A few years back, RSA-512 would have been unbreakable; now it isn’t.

Lesson to learn: secure symmetric key lengths grow by a few bits a year. RSA key lengths must grow non-linearly over time; factoring is sub-exponential (GNFS), so the modulus size scales roughly with the cube of the desired symmetric strength.

NIST wants us to use RSA-2048 as a minimum now (I cannot find the original NIST report; the comparable-strengths table is in NIST SP 800-57 Part 1).

‘https://www.jscape.com/blog/should-i-start-using-4096-bit-rsa-keys

According to NIST, a 2048-bit RSA key is as strong as a 112-bit symmetric key, which does not really instill a lot of confidence for the coming years. To get an RSA key as strong as a 128-bit symmetric key, you need an RSA key of >= 3072 bits.[1]

[1] RSA key lengths for 256 bit strength: Don’t ask, use something else.
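
To put rough numbers on the non-linear growth, here is a back-of-the-envelope sketch (my own heuristic, not NIST’s methodology) that estimates the symmetric-equivalent strength of an RSA modulus from the General Number Field Sieve running time. It reproduces the trend, though not NIST’s exact table values:

```python
# Heuristic strength estimate from the GNFS complexity
#   L(n) = exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3)),
# ignoring constant factors, so treat the output as approximate.
import math

def gnfs_strength_bits(modulus_bits: int) -> float:
    ln_n = modulus_bits * math.log(2)
    nats = (64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return nats / math.log(2)  # convert the exponent from nats to bits

for bits in (512, 1024, 2048, 3072, 4096):
    print(f"RSA-{bits}: ~{gnfs_strength_bits(bits):.0f}-bit symmetric equivalent")
```

Doubling the modulus only buys a few dozen bits of strength, which is why the required RSA sizes balloon while symmetric sizes merely creep.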

Clive Robinson November 21, 2022 1:16 PM

@ ALL,

Re : Do backups right…

As you read the Krebs article you very quickly come to,

“… because of the way his predecessor architected things, the company’s data backups also were encrypted by Zeppelin.”

Backups being encrypted also gets mentioned a few more times in the article. As some so rudely say,

“Take a hint”

The way you are doing your backups is probably wrong…

I’ve mentioned the vulnerability of backups to various types of ransomware on this blog many times over the years. First as “insider” revenge, and later from outsiders.

The simple fact is backups are as much as possible “automated” for ease of use/operation. Remember, what makes it easy for you frequently works in the attacker’s favour as well…

The first thing to take onboard is,

1, Isolation is essential.

If the attackers cannot reach the systems, they cannot attack them. This, however, only means they are forced to change tactics, from attacking existing backups to attacking data going to new backups.

You can limit this by,

2, Always check the backups on an isolated, independent system.

The question then is how you prevent or limit the attacker’s ability to attack the data going to new backups.

This is very system-specific, so not amenable to “generalised advice” other than reiterating point 1,

3, Isolation is essential.

As I mention from time to time, almost the first “on site” question I ask is,

“What is the valid business case for this (user) system to be connected directly or indirectly to external communications?”

Or more simply,

“Why’s this got the Internet?”

Mostly, in traditional systems, removing external connections makes an attacker’s job much, much harder, unless you are subject to a deliberately planned attack, which most ransomware actually is not. Ransomware operators work on the “low hanging fruit” principle and, in a very target-rich environment like the Internet, will go find an easier target to attack.

It’s a point that is long overdue for consideration by senior management.

By and large, companies do not pay employees to:

1, Surf the web.
2, Watch NSFW content.
3, Update Social Media.
4, Endlessly bid on eBay & Co.
5, Home shop.
6, Install malware.
7, Exfiltrate value.

Or many other “not work” activities, like downloading copyright-protected music[1].

Also, if you have in-house “developers”, not giving them Internet access slows down any code “cut-n-paste” activities they might indulge in. All too frequently such snippets are “minimal example code” that drags a whole shipload of vulnerabilities along with it.

All too many organisations of all sizes give “Internet access” to everyone, apparently based on MBA mantras about synergy and tapping currently unused capability and opportunity, etc, etc, etc… The theory appears to be,

“We missed SMS and Email, so we’re not going to miss the next collaboration”

Unfortunately, the next collaboration appears to be malware, or more specifically ransomware…

Oh, and for goodness sake, stop using “Cloud Services” for everything. They are an invitation to attackers, including the cloud service operators (they are not your friends; you are the prey they feed off). Also, they are a “fool’s bargain”: they might be “the cheap option now”, but trust me, once your alternatives are limited and you are “locked in”, the price will climb faster than you can keep up.

[1] In quite a few countries, workplaces are not regarded as being covered by general broadcast regulations / personal use. So any source of music used in the workplace has “royalties due”, and the various “Performing Rights” organisations are not averse to shaking down companies for overpriced licences, with threats of fines and hints of bad publicity to drive it home…

lurker November 21, 2022 1:44 PM

… they were forced by regulators to prove that no patient data had been exfiltrated from their systems.

Surely that comes under “cruel and unusual punishment.” If intruders can erase their encryption key, they can erase exfiltration logs…

Winter November 21, 2022 2:25 PM

@All

The simple fact is backups are as much as possible “automated” for ease of use/operation. Remember, what makes it easy for you frequently works in the attacker’s favour as well…

I am neither an expert in nor experienced with backups. But I have listened to experts (2.5 Admins).

  • Backups have to be automated, or else they will not be done consistently.
  • Backups should be deep, with changes stored many months back (e.g., ZFS snapshots).
  • The machine storing the backups should access the machines that must be backed up. No one should be able to access the backup machine remotely. Backups should be pulled from the machine to be backed up, not pushed to the backup location.
  • If all files are changed (encrypted) and backed up in a short time, all alarm bells should go off (see the sketch after this list).
  • One backup is no backup, two backups is a start.
  • If you have not restored from backup to a different machine [1], you probably have no functional backup.

[1] I do have experience with standard Windows backups. It is worse than useless. The backup proved to be a disk sector backup. A new disk layout = no backup.
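
A minimal sketch pulling those points together, assuming rsync over SSH, a “latest” snapshot symlink, and an illustrative change threshold (the host names, paths, and numbers are all made up). It runs on the backup host, which can reach the clients while nothing can reach it:

```python
# Pull-model backup with a mass-change alarm, run from the backup host.
# Hosts, paths, and thresholds are illustrative assumptions.
import subprocess
from datetime import date

SOURCE = "backup@fileserver:/srv/data/"  # hypothetical machine to back up
LATEST = "/backups/fileserver/latest"    # symlink to the last good snapshot
DEST = f"/backups/fileserver/{date.today()}"
MAX_CHANGED = 10_000                     # illustrative alarm threshold

def planned_changes() -> int:
    """Crude count of files rsync would actually transfer, via a dry run."""
    out = subprocess.run(
        ["rsync", "-a", "--dry-run", "--itemize-changes",
         f"--link-dest={LATEST}", SOURCE, DEST],
        capture_output=True, text=True, check=True,
    ).stdout
    # Itemized lines starting with '>' are files that would be received.
    return sum(1 for line in out.splitlines() if line.startswith(">"))

def pull_backup() -> None:
    n = planned_changes()
    if n > MAX_CHANGED:
        # Refuse to rotate history away; a human looks at it first.
        raise RuntimeError(f"{n} files changed: possible mass encryption")
    # Unchanged files are hard-linked against LATEST: cheap, deep history.
    subprocess.run(["rsync", "-a", f"--link-dest={LATEST}", SOURCE, DEST],
                   check=True)

if __name__ == "__main__":
    pull_backup()
```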

iAPX November 21, 2022 2:52 PM

@Winter

The machine storing the backups should access the machines that must be backed up. No one should be able to access the backup machine remotely. Backups should be pulled from the machine to be backed up, not pushed to the backup location.

This is incredibly powerful advice. As an alternative, you could back up to a versioned filesystem without the ability to update or delete existing files (so push is also possible, if security is handled on the backup server side).

And naturally, backup operations should be isolated from other network usage, to enable correct monitoring, firewalling, and auditing.
Virtual networks are incredibly valuable if correctly deployed.

As stated, too many modifications between two backups should create alerts, as should the presence of too many non-compressible files (an artefact of encryption); you could also add to the mix canary files that should never be touched, to raise alarms too.
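
A minimal sketch of the non-compressibility test (directory, sample size, and thresholds are illustrative assumptions): encrypted files look like random bytes, so they barely deflate and sit near 8 bits/byte of entropy:

```python
# Flag files that look encrypted: near-incompressible and high-entropy.
# Paths and thresholds are illustrative assumptions.
import math
import zlib
from pathlib import Path

ENTROPY_LIMIT = 7.5   # bits per byte; typical documents score far lower
SAMPLE = 64 * 1024    # inspect only the first 64 KiB of each file

def shannon_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def looks_encrypted(path: Path) -> bool:
    data = path.read_bytes()[:SAMPLE]
    incompressible = len(zlib.compress(data)) >= 0.98 * len(data)
    return incompressible and shannon_entropy(data) > ENTROPY_LIMIT

suspect = [p for p in Path("/srv/data").rglob("*")
           if p.is_file() and looks_encrypted(p)]
if len(suspect) > 100:  # illustrative alarm threshold
    print(f"ALARM: {len(suspect)} high-entropy files; possible ransomware run")
```

Already-compressed formats (ZIP, JPEG, media) will trip this too, so in practice you would whitelist those extensions and lean on the untouched canary files as the cleaner signal.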

Clive Robinson November 21, 2022 5:03 PM

@ Winter,

Re : Backups.

“The machine storing the backups should access the machines that must be backed up. No one should be able to access the backup machine remotely. Backups should be pulled from the machine to be backed up, not pushed to the backup location.”

You should have pulled that apart into three individual points.

1, No one should be able to access the backup machine remotely.

You would think this would be a given, but frequently it is not. Large backup systems can be so complex to run, and so critical to the overall system, that they have maintenance / tech-support ports for the manufacturer / supplier to give support remotely.

We could spend days discussing the merits; however, my viewpoint is the same as yours: segregation is essential.

However, that also means isolation even from the machines being backed up.
Which brings us to,

2, The machine storing the backups should access the machines that must be backed up.

This I disagree with: what it should access is an intermediate system. Which brings us to,

3, Backups should be pulled from the machine to be backed up, not pushed to the backup location.

This is partially correct and partially incorrect, and where things start getting too complex for a blog post or three.

If you have a “pull” mechanism on a computer, then this can give an attacker an easy way to exfiltrate data if they can activate / use it. Such a mechanism must be designed so that either such an attack cannot happen, or any data pulled is useless to an attacker (no easy task).

You can use a variation of the ransomware idea to do this. In effect, the machine being backed up pushes encrypted files to a fixed network physical-layer address and port on a “buffer island” system. The trick I’m not going to go into is the KeyMat and flow-control management: that is, how keys to encrypt can be used without the attacker getting access to them, and likewise how file metadata and file directory data can be kept out of the attacker’s reach.

Briefly, in the likes of high-security environments you use an entirely separate comms link and hardware “in-line encryptors”. Think of the encryptor like a three-port HSM: the “push” computer to be backed up can only push files to the HSM, which encrypts them and sends them out on a second port to the intermediate buffer-island system. The KeyMat is sent to the encryptor over a management port the computer being backed up cannot see, let alone access. The encrypted files in the intermediate buffer island then get pulled by the actual backup system, which decrypts and checks the files are correct via CS-Hash etc. As always, the devil is in the details when setting up an intermediate buffer-island storage system that has data pushed on the “in port” and pulled on the “out port”.

It’s easier to see how to get such systems to work by “sketching out” rather than trying to describe them (a picture being worth a thousand words).
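
In that spirit, a highly simplified software stand-in for the three-port in-line encryptor, just to make the data flow concrete. In the real design this is dedicated hardware, and the KeyMat arrives over a management port the backed-up machine can never see; all names here are illustrative:

```python
# Toy model of the three-port in-line encryptor using AES-GCM
# (pyca/cryptography). Real deployments use dedicated hardware.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class InlineEncryptor:
    def __init__(self, keymat: bytes):
        # Port 3 (management): KeyMat is injected here; the machine
        # being backed up has no path to this port.
        self._aead = AESGCM(keymat)

    def push(self, plaintext: bytes) -> bytes:
        # Port 1 (in): the machine being backed up pushes a file here.
        nonce = os.urandom(12)
        # Port 2 (out): nonce + ciphertext + tag goes to the buffer
        # island. The pushing machine never holds the key, so its
        # compromise cannot read or silently forge what lands there.
        return nonce + self._aead.encrypt(nonce, plaintext, None)

def backup_system_pull(blob: bytes, keymat: bytes) -> bytes:
    """The real backup system pulls from the island, then decrypts;
    GCM's tag plays the role of the CS-Hash integrity check."""
    return AESGCM(keymat).decrypt(blob[:12], blob[12:], None)

# Example flow: KeyMat generated on the management side only.
keymat = AESGCM.generate_key(bit_length=256)
island_blob = InlineEncryptor(keymat).push(b"backup file contents")
assert backup_system_pull(island_blob, keymat) == b"backup file contents"
```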

Ted November 21, 2022 9:05 PM

I wonder if Unit 221B is one of the few orgs that is set up to decrypt Zeppelin simply because they have access to donated processing infrastructure. Most people probably don’t have an extra 800 CPUs lying around.

I see Zeppelin is one of the 1,082 strains of ransomware that the ID Ransomware website can identify with a sample file. It’s good the FBI found this strain could be decrypted and made a referral.

Clive Robinson November 22, 2022 12:50 AM

@ Ted,

Re : FBI role.

“It’s good the FBI found this strain could be decrypted and made a referral.”

As I understand it from the available evidence, the FBI “did not find anything”; “they were told”.

The reason that only “Law Enforcement” got told back in February, ten months ago, was that those finding “the break” knew that “telling publicly” would just cause the ransomware writers to “fix the bugs” very rapidly, as they would be highly motivated to do so.

Ted November 22, 2022 11:44 AM

@Clive

those finding “the break” knew that “telling publicly” would just cause the ransomware writers to “fix the bugs” very rapidly, as they would be highly motivated to do so.

Yes. It’s what some might call “pulling a Bitdefender.” Back in January 2021, Bitdefender publicly advertised a decryption tool for a strain of DarkSide ransomware.

DarkSide very quickly updated their ransomware and announced: “new companies have nothing to hope for.” They went on to attack Colonial Pipeline later that year.

https://www.technologyreview.com/2021/05/24/1025195/colonial-pipeline-ransomware-bitdefender/amp/
