The NSA Says that There are No Known Flaws in NIST’s Quantum-Resistant Algorithms

Rob Joyce, the director of cybersecurity at the NSA, said so in an interview:

The NSA already has classified quantum-resistant algorithms of its own that it developed over many years, said Joyce. But it didn’t enter any of its own in the contest. The agency’s mathematicians, however, worked with NIST to support the process, trying to crack the algorithms in order to test their merit.

“Those candidate algorithms that NIST is running the competitions on all appear strong, secure, and what we need for quantum resistance,” Joyce said. “We’ve worked against all of them to make sure they are solid.”

The purpose of the open, public international scrutiny of the separate NIST algorithms is “to build trust and confidence,” he said.

I believe him. This is what the NSA did with NIST’s candidate algorithms for AES and then for SHA-3. NIST’s Post-Quantum Cryptography Standardization Process looks good.

I still worry about the long-term security of the submissions, though. In 2018, in an essay titled “Cryptography After the Aliens Land,” I wrote:

…there is always the possibility that those algorithms will fall to aliens with better quantum techniques. I am less worried about symmetric cryptography, where Grover’s algorithm is basically an upper limit on quantum improvements, than I am about public-key algorithms based on number theory, which feel more fragile. It’s possible that quantum computers will someday break all of them, even those that today are quantum resistant.

It took us a couple of decades to fully understand von Neumann computer architecture. I’m sure it will take years of working with a functional quantum computer to fully understand the limits of that architecture. And some things that we think of as computationally hard today will turn out not to be.

EDITED TO ADD (6/14): Since I wrote this, flaws were found in at least four candidates.

Posted on May 16, 2022 at 6:34 AM • 34 Comments

Comments

Filip May 16, 2022 9:01 AM

How about public-key cryptography with the key size depending on the size of the encrypted data? Something like a one-time Vernam cipher, but not for every bit of data, just enough to disguise the data as static noise.
Limited to one key per dataset, as with Vernam's.

Lupe May 16, 2022 9:52 AM

“Those candidate algorithms that NIST is running the competitions on all appear strong, secure, and what we need for quantum resistance,” Joyce said. “We’ve worked against all of them to make sure they are solid.”

Hmm… isn’t Rainbow still a candidate? The NIST site still lists it, having not been updated since 2020. That makes me wonder whether this is an old interview (it’s paywalled, so I can’t see whether they mention a date), or they’re unaware of the work of Ward Beullens: “Breaking Rainbow Takes a Weekend on a Laptop” (21 Feb 2022). Or am I missing something, maybe that the flaw is not as bad as it seems? Full key recovery certainly sounds bad, and nothing like the “break 9 of 13 rounds in 2^119 time with 2^90 memory for a specific class of weak keys” we usually see.

Otherwise, it seems like NSA were unable to discover such a crippling flaw, or were unaware of a relevant 2-month-old paper—or they found the flaw and kept it secret.

For things like software-update signing, I wonder whether we should just stick with Merkle signatures or similar. Apparently the private keys get large, but the public keys don’t, and how much does it really matter if the signer has to store tens or hundreds of megabytes of key? (Though, out of curiosity, does anything prevent one from simply using the output of a stream cipher to generate keys?)
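
To make that parenthetical concrete, here is a minimal sketch of my own: deriving the many one-time keys a hash-based scheme needs from one short seed, using HMAC-SHA256 as the pseudorandom expander rather than a literal stream cipher. It is an illustration of the general technique only, not any standardized scheme.

```python
# Hypothetical sketch: deriving the many one-time-signature secrets a
# Merkle/hash-based scheme needs from one short seed, so the "large private
# key" reduces to a 32-byte seed plus an index counter.  HMAC-SHA256 stands in
# for the stream cipher / PRF; this is not a real signature scheme.
import hashlib
import hmac
import os

SEED = os.urandom(32)  # the only long-term secret the signer must store


def ots_secret(index: int) -> bytes:
    """Derive the secret material for the index-th one-time key pair."""
    return hmac.new(SEED, index.to_bytes(8, "big"), hashlib.sha256).digest()


def ots_public(index: int) -> bytes:
    """Toy 'public key': a hash of the secret (real schemes use chains/trees)."""
    return hashlib.sha256(ots_secret(index)).digest()


# A Merkle tree over ots_public(0..N-1) would then compress everything into
# one small root public key.
leaves = [ots_public(i) for i in range(8)]
print(f"{len(leaves)} one-time public keys derived from a {len(SEED)}-byte seed")
```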

Sumadelet May 16, 2022 10:28 AM

Quantum-resistant cryptography is all well and good, but we also need a breakthrough in key distribution and management that is both secure and easy to use for the ‘person in the street’.

Ted May 16, 2022 10:44 AM

Amazing that people may be stockpiling encrypted data in anticipation of a quantum computing breakthrough.

Practical quantum computing seems to always remain “ten years in the future,” which means no one has any idea.

I hope they’re at least selective about what data they retain in that case. Surely we cannot be ready for “Nation State Data Hoarders” to show on the telly?

On a different note, I think my neighbors would be happy if I grew a privacy screen between our yards.

Clive Robinson May 16, 2022 11:33 AM

@ sumadelet, ALL,

we also need a breakthrough in key distribution and management that is both secure and easy to use for the ‘person in the street’.

We’ve needed it for a very long time, since well before “Certificate Authorities”(CAs) were a thing.

CAs are clearly a failure in oh so many ways that we need to get rid of them.

However, what makes them a significant failure also makes them absolutely appealing, not just to the low-end crooks on the way to becoming criminals, but also to those crooks that “sit behind high office”.

Thus, firstly, we need to work out how to get rid of a hierarchical structure and replace it with an effective “trust structure”, which still has to be thought up.

Judging by the number of years and the lack of progress, one of two things can be said:

1, Nobody wishes to solve it.
2, The level of difficulty makes it unsolvable currently.

I’ll leave it to others to make their own minds up, but let’s be clear: the crooks are definitely benefiting currently.

lurker May 16, 2022 1:52 PM

@Lupe, All
“You have reached your limit of free articles.”

A search for “rob joyce quantum resistant algorithm” turns up a number of hits, all quoting Bloomberg with varying degrees of accuracy and completeness.

“There are no backdoors,” Rob Joyce, the NSA’s director of cybersecurity, told Bloomberg in an interview on Friday.

makes it Fri 13 May 2022. Odd that my search turned up SCMP, Epochtimes, and RTdotcom in the top ten, none of which I have visited since my last cookie/storage purge…

Lupe May 16, 2022 3:19 PM

Thanks, lurker. Your link gives me a TLS error (PR_END_OF_FILE_ERROR), but I found a copy on archive.org.

More charitable interpretations would be that Joyce knows Rainbow is no longer a candidate, ahead of any public announcement from NIST, or simply assumed it wasn’t and that it would be obvious to everyone.

@ Clive Robinson,

Judging by the number of years and the lack of progress, one of two things can be said:
1, Nobody wishes to solve it.
2, The level of difficulty makes it unsolvable currently.

I’d say point 1 is obviously false, given that people have occasionally tried to solve it. I also don’t think it’s unsolvable per se. I suspect the truth is that the people who wish to solve it are not the same people with the ability to solve it, and are not sufficiently motivated or able to put together a group of more able people.

It won’t be cryptographers, computer scientists, or other mathematicians who solve it. Really, we’ve already got the solution to CAs—DANE—but don’t have browser-makers on board, and anyway, it does nothing about other problems (such as hierarchical models, to those who see them as a problem).

We have a bunch of non-interoperable things that kind of solve the problem, at the cost of adding other problems (like having to trust centralized services and maybe proprietary software). There’s no clear business reason for any one party to really solve it, though; it’d help competitors, after all, and a lot of companies seem to prefer having 100% of a smaller market.

The term “trust” should probably be avoided, having near-opposite meanings between theoreticians and the general public (in infosec theory, the parties that one “trusts” are basically any of those who can break the security; and even a standard English definition may relate to mere predictability, whether or not the predicted actions are favorable).

Clive Robinson May 16, 2022 6:29 PM

@ Lupe,

we’ve already got the solution to CAs —DANE— but don’t have browser-makers on board

Not surprising, really…

Actually, the “DNS-based Authentication of Named Entities” (DANE)[1] proposed standard(s)[2] has quite a few more problems than just a ludicrously over-controlled hierarchy.

It’s based on DNSSEC, which was and still is a disaster[3] in its own right, and it tries, badly, to solve a problem that does not realistically exist, using “so last century” technology that in many cases is now assumed to be “broken crypto”[4].

The problem with nearly all existing “trust” systems is that not only are they hierarchical, they are ludicrously cumbersome and, as far as many are concerned, completely inflexible.

Neither CA nor DNS hierarchies are going to survive the most likely Balkanisation of the Internet into hostile fiefdoms that is hurtling towards us like a runaway goods train that has already “shot the points” (that happened at the UN ITU Doha Conference back in 2014).

All the Web 3 or Web 3.0 nonsense is not going to make things any better; in fact it will definitely make things worse, as has much of HTML 5.

What we need is a “distributed trust model” based on “reputation”, not a hierarchical one based on untrustable power hierarchies. But as importantly, it needs to be not just efficient, it also needs to be “local”, such that any form of non-local disruption will not affect its functioning.

Having investigated the harder “rendezvous problem”, I’m actually confident a “reputation” system is achievable. The real problem is doing it without leaking information to a third party.

[1] Simplistically, DANE binds RSA-based X.509 certificates to DNS names… using the disastrous and known-to-be-insecure “Domain Name System Security Extensions” (DNSSEC) system[3]. As such, it inherits all of the myriad DNSSEC issues. Not least of which are its hierarchy-control features, which put power in the hands of very few, with all the known problems that already causes (think of the FBI insisting it controls the Internet, and Microsoft getting US judges to sign court paperwork that gives unlawful extra-judicial consent to carry out activities that are illegal in other jurisdictions).

[2] See RFC 6394 / 6698 for original proposal and use cases, since updated with 7671/2/3.

[3] DNSSEC is a crypto disaster going back to a design from the early 1990s based on algorithms from the 1980s and earlier. Most of those are now at best questionable; as for the 1024-bit RSA that can still be found within it, well, that is assumed with good reason to be broken[4], like a dropped basket of eggs, and was more than three quarters of a decade ago,

https://sockpuppet.org/blog/2015/01/15/against-dnssec/

[4] Whilst a bit old, the work that made this abundantly clear was the Bernstein and Lange “Batch NFS” paper. That gave extensions to the “Number Field Sieve” (NFS) which had non-trivial consequences in terms of real-world performance enhancement.

You can download it from Daniel J. Bernstein’s web site,

http://cr.yp.to/factorization/batchnfs-20141109.pdf

MrC May 16, 2022 9:13 PM

Erm…

Rainbow is outright broken: hxxps://eprint.iacr.org/2022/214.pdf

And the Israeli Defense Force’s Center of Encryption and Information Security has an attack that, depending on who you ask, takes Kyber, Saber, and Dilithium down into the realm of “no real danger from present technology, but weaker than AES-128,” and casts some doubt on lattice-based cryptography generally: https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/Fm4cDfsx65s

Lupe May 16, 2022 11:25 PM

Actually, the “DNS-based Authentication of Named Entities” (DANE) proposed standard(s) has quite a few more problems than just a ludicrously over-controlled hierarchy.

Yeah, that’s why I was careful to say it solves only the CA problem. It adds its own problems, of course, and DNSSEC is a bit of a rabbit hole. I’m more optimistic than you on that, but it’s not great now and it’ll take a lot of effort to change that. Geoff Huston occasionally collects data on it. In theory it supports better algorithms than RSA, but in practice it doesn’t work all that well: lots of resolvers fail with RSA-1024, and even more fail with ECDSA.

One can install custom trust anchors to deal with balkanisation, a “solution” which is itself balkanisation…

But all CAs really tell you is that you’re communicating with the real example.com. To the extent you want to validate this hierarchical data, it makes sense to have the signatures provided via the same hierarchy. This doesn’t imply it’s a good basis for security or trust, whatever “trust” means.

What we need is a “distributed trust model” based on “reputation”, not a hierarchical one based on untrustable power hierarchies. But as importantly, it needs to be not just efficient, it also needs to be “local”, such that any form of non-local disruption will not affect its functioning.

Well, we tried that with PGP, and a common complaint was that nobody knew how to answer when PGP asked “how much do you trust this person?” or “how sure are you of their identity?” So, I feel like “we” don’t really know what our goal is, we don’t know how to create a comprehensible interface for this unknown goal, and we don’t know how to technically implement it.

But I’ll push back a little against the idea of a trust model. What forms of trust are required in which situations, and why? That seems to be a word that “we” (people in computer science, mathematics, crypto) throw around in relation to security, often without considering whether it’s necessary or even useful for security.

As an example, Google hosts a bunch of common Javascript code at googleapis.com, to be referenced by arbitrary webpages. Traditionally, that meant a browser had to trust Google and Google’s CA and registrar and registry—if your bank includes a file from there, they get the same access to your bank accounts as you. But this was never sensible, and changing the model by which the browser confirms it’s talking to the real googleapis.com would be the wrong solution. jquery.min.js 3.6.0 is expected to be the exact same set of bytes every time, known in advance to whoever was writing the webpage that referenced it. “Subresource Integrity” (SRI) significantly reduces the amount of trust required. Now the only trust required of Google’s server is that it doesn’t invade user privacy or exploit flaws in the user-agent.
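
For what it’s worth, the integrity value SRI uses is just a base64-encoded hash of the exact bytes the page author expects. A quick sketch of computing one, using the sha384 option the spec allows; the local filename is hypothetical:

```python
# Sketch: computing a Subresource Integrity (SRI) value for a known file.
# The browser refuses to execute the fetched script if its bytes hash to
# anything else.  "jquery.min.js" here is just a hypothetical local copy.
import base64
import hashlib


def sri_sha384(data: bytes) -> str:
    digest = hashlib.sha384(data).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")


with open("jquery.min.js", "rb") as f:
    print(sri_sha384(f.read()))
# The output goes into the <script> tag's integrity="..." attribute.
```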

Similarly, I think smartphone app sandboxing is more valuable than certificates proving authorship (by some company you’d maybe not heard of till install time). It’s not currently sufficient, but a zero-trust model seems more promising to me than a distributed trust model, for many purposes.

This all relates to why I say the problems won’t be solved by the mathematicians et al. We can prove that some data has hash H, and hash H was signed by the private key X associated with public key Y, and public key Y has the following attestations… and then we pretend, or let people believe, that it implies anything about the wisdom of running the data as code, or sending private information as it suggests, or whatever. In reality, we’re not all that good at anything beyond the raw math, and the Rainbow paper suggests we’re still stumbling even there.

SpaceLifeForm May 18, 2022 12:58 AM

Follow the signs. The old backdoor may work fine.

hxtps [:] //gizmodo.com/ nsa-no-backdoors-new-encryption-standards-promise-1848924186

“Do this. Don’t do that! Can’t you read the sign?”

hxtps://piped.kavin.rocks/watch?v=c9lh7lqZojc

John Brown May 18, 2022 3:26 AM

@Ted why would they be selective? It may be the ‘job’ of NSA types to violate the security of free humans for the good of the American empire, but they’re also personally perverts who want to be able to wank to all of your baby photos.

Anon User May 18, 2022 4:05 AM

For the real secret stuff, just keep the public keys as secret as the private keys. Don’t distribute them over the net at any cost, instead keep them local between the communicating parties. Even if all encrypted communication was collected and the algorithm gets broken, there is still nothing to decrypt without a public key available to the attacker.

It doesn’t scale too well, but it’s still much better than a One Time Pad, and once established you get most of the advantages of public key schemes.

Clive Robinson May 18, 2022 5:53 AM

@ Anon User, ALL,

For the real secret stuff, just keep the public keys as secret as the private keys.

And now for the bad news…

It can be shown that, like most key-based systems, the key encodes a shadow of itself in the ciphertext. It is, at the end of the day, just a “mapping function” of large size.

That is, it can be distinguished from “truly random” in quite a few ways[1]. Thus you have to do a lot, lot more to get reasonable security for more than a handful of bits.

It’s why it’s not even close to the advantages of a one-time pad…

When you consider some recommendations are for 8k+ bit RSA keys, that is equivalent to up to 256 characters of plain text (about the size of larger One Time Pad tapes/pages). You have to ask if the advantages are really worth it.

And that’s before we talk about some of the key sizes for supposedly “Quantum secure” crypto algorithms.

[1] All generator functions suffer from the following two issues,

1, They are a mapping function.
2, They have a state.

Either is sufficient to determine which key is being used within about two to three encryptions.

Ted May 18, 2022 6:01 AM

@John Brown

but they’re also personally perverts who want to be able to wank to all of your baby photos.

I had a bowl cut, so I don’t think so.

Denton Scratch May 18, 2022 8:57 AM

@Lupe

Well, we tried that with PGP, and a common complaint was that nobody knew how to answer when PGP asked “how much do you trust this person?” or “how sure are you of their identity?”

This business of trust and identity. I once set out to get a PGP key authenticated through the Web Of Trust system. It involved meeting a certifier in person, and presenting personal documents, including passport and bank statements. I didn’t go through with it, because:

  1. I realised that I might not want my online identity linked to those documents.
  2. Hardly anyone else uses Web Of Trust.

What I really need is a way to prove that this “me” is the same “me” as in my other posts. I want to be able to be anonymous, but verifiably the same person.

The word “trust” isn’t useful in this context; it means too many different things. I’ll lend you a tenner, and trust you to pay it back (because I can afford to lose a tenner). I’ll lend you ten grand, and trust you to pay it back because the courts will enforce the debt. I trust this piece of code, because it’s run in an environment where it can’t harm me. I trust your mobile device because I know that you can’t alter the encryption keys.

The only circumstance in which I’d want my online identity linked to official records would be if I were dealing with a government agency. That’s roughly the same as the IRL situation: I don’t expect non-official agencies to demand official identity documents (opening a bank account is an exception). In nearly all other situations, I expect people to accept my self-attestation of my identity; the only effect of using a false name should be that my current utterances and actions can’t be linked to my other utterances and actions.

That seems to mean that I want to hold multiple identities, each of which I can prove is mine, but which can’t easily be proven to all be the same individual. You can do this with signatures, but all the “popular” schemes are tied to official government-issued ID. Something like SSH keys seems to get close to what I’d like.
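
A rough sketch of that idea, purely illustrative: one independent signing key per pseudonym, here using Ed25519 from the third-party `cryptography` package. The persona names are made up; nothing here is tied to any official ID.

```python
# Sketch: one independent signing key per pseudonym.  Each persona can prove
# continuity of authorship, while nothing in the keys links personas to each
# other or to a legal identity.  Requires the "cryptography" package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The holder keeps one private key per persona; only they know both exist.
personas = {
    name: Ed25519PrivateKey.generate()
    for name in ("blog-commenter", "forum-poster")
}

message = b"This post is by the same 'me' as my earlier posts."
signature = personas["blog-commenter"].sign(message)

# Anyone with that persona's public key can verify continuity of authorship.
public_key = personas["blog-commenter"].public_key()
try:
    public_key.verify(signature, message)
    print("signature verifies for persona 'blog-commenter'")
except InvalidSignature:
    print("signature does not verify")
```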

Clive Robinson May 18, 2022 10:55 AM

@ Denton Scratch, Lupe, ALL,

What I really need is a way to prove that this “me” is the same “me” as in my other posts.

Actually that would be a major mistake to commit.

Your life is “not you as an individual” but “a collection of individual roles”.

It’s the roles that the world sees, but bureaucrats and authoritarians want to see you as a single entity, which is really very, very bad news for every individual.

The old “usual suspects” on this blog, such as @Nick P, @Wael, @Robert T, and myself, discussed how to do this years ago.

You build a “Role Reputation”.

That is, you define a unique “Role IDentity” (RID) and you create a root PK cert pair for it (RID-PK pri/pub). You post messages signed by RID-PK pri that anyone can check as belonging to your role. Your posts get moderated; as each post is accepted, your site “Role-Reputation-Score” (RRS) rises; if a post is not acceptable, either the score does not rise or it may go down. As your RRS rises, at some point you are considered safe enough not to be moderated or held. If you do something questionable then your post gets removed and your RRS gets decremented by some amount. If you continue to transgress then you drop back into “being held” or “being moderated”. The site might have a “keep alive” function: if you stop posting, your RRS goes down with time or gets reset to some threshold value.

Who you really are matters not a jot; it’s your “role” you are building a “reputation” for.

This is the basis for a “natural model” “reputation system”, which works in a similar way to how “human trust”[1] does within a normal society.

You can extend the RRS system by having other roles with suitable reputation scores effectively “up-vote” your role’s reputation, with a time-based decay.

Again this is in effect mimicking “natural human trust”.
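
To make the mechanics concrete, here is a toy sketch; all names, thresholds, and decay rates are invented purely for illustration and are not anything the above describes as implemented.

```python
# Toy illustration only: a per-role reputation score ("RRS") that rises with
# accepted posts, falls with rejected ones, and decays if the role goes quiet.
# Thresholds and rates are arbitrary placeholders.
import time
from dataclasses import dataclass, field

MODERATION_THRESHOLD = 10.0  # below this, posts are held for moderation
DECAY_PER_DAY = 0.1          # "keep alive": reputation slowly leaks away


@dataclass
class Role:
    role_id: str                 # e.g. a fingerprint of the role's public key
    score: float = 0.0
    last_seen: float = field(default_factory=time.time)

    def _decay(self) -> None:
        now = time.time()
        days_idle = (now - self.last_seen) / 86400.0
        self.score = max(0.0, self.score - DECAY_PER_DAY * days_idle)
        self.last_seen = now

    def record_post(self, accepted: bool) -> None:
        self._decay()
        self.score += 1.0 if accepted else -2.0

    def needs_moderation(self) -> bool:
        self._decay()
        return self.score < MODERATION_THRESHOLD


role = Role(role_id="rid-pk-fingerprint")
for _ in range(12):
    role.record_post(accepted=True)
print(role.score, role.needs_moderation())
```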

The question then becomes: can you transfer reputation anonymously from role to role? This is a hard problem to define. Think about non-anonymous reputations: “Guard Labour”, politicians, and those that work in formal control hierarchies “demand” not “earn” a reputation. Academics and the like tend to “earn” not “demand” a reputation. Those in the former group should have no anonymity, because it is their “actions” they need a reputation for. Whilst those in the latter group can, by their own choice, have anonymity or not, as it is their thoughts and words that are being judged, generally not their actions.

The fact I use my name and make no secret of it should not affect the validity of what I say; it is for others to read and to decide the worth of my thoughts and words. Any reputation I have or have not should only affect the individual reader; that is, it is their choice to read or not read my words.

From my perspective, others should not be able to “poison the well” by pretending to be the role I post my thoughts and words under. Nor carry across unreasoned praise or loathing from one subject to another. Yes, my role has certain viewpoints other roles might choose to disagree with. But mostly I can justify my role’s viewpoint/position and either have done so or will do so. If others disagree then they likewise should be able to justify their role’s viewpoint/position. If they cannot do so as a role, or need to claim some “higher authority”, then their role has failed. The failings of other roles should not be pushed onto the role I use here.

Unfortunately, this vote, like a crypto coin, needs an “anti double spend” quality. Whereas once people would have almost automatically shouted “blockchain”, they should now realise it effectively strips away anonymity to some level. The problem is that the blockchain and similar are non-ephemeral; that is, every action of a role is in effect recorded. All too easily, the anonymity that a RID can give can thus be stripped off, revealing the actual person’s ID.

It’s just one of the things I think about from time to time, because if it cannot be solved then that indicates that anonymity with traceable reputation is not possible.

[1] Where possible I try to qualify “trust”; the two obvious kinds, because they are almost opposites, are,

1.1 Human trust
1.2 Security trust

But both of those “work within” acceptable or common models hence,

1.3 Natural trust model
1.4 System trust model

Like “reputation”, “roles” and similar, “trust” is one of those things we need to put a lot more “practical” academic work into and start using.

Denton Scratch May 19, 2022 3:41 AM

@Clive

Actually that would be a major mistake to commit.

Maybe I expressed myself badly; I want multiple identities, each of which I can prove I own, without others being able to connect them, with only government ID and bank ID being linkable by others to me the person.

Clive Robinson May 19, 2022 4:52 AM

@ Denton Scratch,

Re : Maybe I expressed myself badly

No and yes…

The problem is that, naturally, we say,

me, I, you…

in English, even though we are in reality a collection of “relationships” based on “roles”,

friend, colleague, boss, subordinate, son, daughter, mother, employee, employer, club member, customer, etc.

Our actual data communications are based on the relationship or role, not on who we, you, or I are as a physical body, name, or social security number issued by the State for the “Convenience of the State”.

Think of your “name” as actually part of your “address”, and your “roles” as identified like the “your ref” and “my ref” used in written letters.

It then becomes easier to see that the “name” is actually redundant and can in fact be replaced with the “ref”, which is unique to a communication exchange or sequence of exchanges. Making the “ref” anonymous is thus easy.

It’s even allowed legally: you can treat a non-living entity as a “name” that has roles such as “Chairman”, “CEO”, “Company Secretary”, “legal representative”, “owner”, “occupier”, etc.

Unfortunately, where we need multiple-relationship support the most –user comms software like browsers[1]– the designers and developers want to ignore the idea of “relationships” and stick with “users”, for the designers’ and developers’ “convenience”, not the needs of the user.

Will this change?

Well maybe if we keep drawing attention to it, but not otherwise.

[1] In the past this was not really as much of an issue, as you would have multiple accounts that could have any string of identifiers as long as they were “unique”, which “personal names” actually very rarely are, and that causes all sorts of issues with administration in larger organisations. The “name as an address” becomes apparent when you are “Steve Smith in accounts” or “Steve Smith in support”, etc.

Sumadelet May 19, 2022 8:16 AM

Although I’m late to this party, I may as well refer to the apposite quotation by Mandy Rice-Davies.

‘Well he would, wouldn’t he?’

( h++ps://inews.co.uk/culture/well-he-would-wouldnt-he-bbcs-the-trial-of-christine-keeler-gets-famous-quote-right-374825 )

Further reading: h++ps://en.wikipedia.org/wiki/Profumo_affair

Winter May 20, 2022 7:29 AM

In what must be “pure coincidence”, Nature published a paper[1] on May 12 titled (sorry, not Open Access):

Transitioning organizations to post-quantum cryptography
ht-tps://www.nature.com/articles/s41586-022-04623-2

Quantum computers are expected to break modern public key cryptography owing to Shor’s algorithm. As a result, these cryptosystems need to be replaced by quantum-resistant algorithms, also known as post-quantum cryptography (PQC) algorithms. The PQC research field has flourished over the past two decades, leading to the creation of a large variety of algorithms that are expected to be resistant to quantum attacks. These PQC algorithms are being selected and standardized by several standardization bodies. However, even with the guidance from these important efforts, the danger is not gone: there are billions of old and new devices that need to transition to the PQC suite of algorithms, leading to a multidecade transition process that has to account for aspects such as security, algorithm performance, ease of secure implementation, compliance and more. Here we present an organizational perspective of the PQC transition. We discuss transition timelines, leading strategies to protect systems against quantum attacks, and approaches for combining pre-quantum cryptography with PQC to minimize transition risks. We suggest standards to start experimenting with now and provide a series of other recommendations to allow organizations to achieve a smooth and timely PQC transition.

[1]Joseph, D., Misoczki, R., Manzano, M. et al. Transitioning organizations to post-quantum cryptography. Nature 605, 237–243 (2022). ht-tps://doi.org/10.1038/s41586-022-04623-2
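
One of the transition approaches the abstract alludes to, running a classical key exchange alongside a PQC KEM, usually boils down to feeding both shared secrets through a single KDF. A minimal sketch follows, with random placeholder bytes standing in for the real ECDH and KEM outputs; the HKDF here is a simplified illustration of RFC 5869.

```python
# Sketch of "hybrid" key establishment during the PQC transition: derive the
# session key from BOTH a classical shared secret and a post-quantum one, so
# an attacker has to break both.  The two inputs below are random placeholders
# standing in for a real ECDH shared secret and a real PQC KEM shared secret.
import hashlib
import hmac
import os

classical_secret = os.urandom(32)  # placeholder for an ECDH shared secret
pqc_secret = os.urandom(32)        # placeholder for e.g. a Kyber shared secret


def hkdf(salt: bytes, ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF (RFC 5869) with SHA-256, enough for this illustration."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                            # expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]


session_key = hkdf(
    salt=b"hybrid-transition-example",
    ikm=classical_secret + pqc_secret,  # concatenate, then derive
    info=b"session key",
)
print(session_key.hex())
```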

Ted May 20, 2022 10:05 AM

@Winter

Gosh, I wish the Nature PQC transition paper was not $32. I found out I could download it through a university library though. It’s seven pages, or about five and a half without the references. The topic is so relevant to organizations, I could only hope they’d make it available for free.

SpaceLifeForm May 20, 2022 6:19 PM

@ Anon User

re: It doesn’t scale too well

For the real secret stuff, just keep the public keys as secret as the private keys. Don’t distribute them over the net at any cost, instead keep them local between the communicating parties.

You did not define ‘local’.

It does not scale at all. If Alice and Bob need to meet face-to-face to exchange their PublicKeys and PrivateKeys, they may as well just exchange large OneTimePads and agree on a protocol for their use.

The point of PublicKeys, is, well, that they are Public. So that Alice and Bob do not need to meet face-to-face.

Reputation is required.

As you are reading this, you are trusting a PublicKey.

Did you ever meet Bruce face-to-face and exchange Keymat?

No, you did not. You are trusting his reputation and that the schneier.com TLS Certificate is valid. That is all.

Canis familiaris May 21, 2022 11:19 AM

Did you ever meet Bruce face-to-face and exchange Keymat?

No, you did not. You are trusting his reputation and that the schneier.com TLS Certificate is valid. That is all.

I’m not trusting Bruce in the slightest.

TLS tells me, at best, that the communication between my PC and the web-server is difficult to eavesdrop on. It says nothing, absolutely nothing, about the trustworthiness of the endpoint.

Even that is not a hard-and-fast guarantee, as I have not checked the source code of all the software involved in the chain of communication, and compiled it using a verified compiler. Given the code ‘trusts’ root certificate authorities about which I know little, I am open to MITM attacks which will not be flagged up by my browser, e.g. by Cloudflare, which MITMs a significant portion of the Internet.

I’m reading some interesting text, the provenance of which is difficult to ascertain to a degree of certainty required to apply the description ‘secure’, let alone trusted.

I might trust a formal handwritten communication from Bruce (the identity of whom is attested by two forms of government-issued ID, one of which is photo-ID) with his signature on it, attested by witnesses. That’s a minimum required legal standard in some parts. Inconvenient for the Internet, but convenience trumps thoroughness.

Remember, nobody on the Internet knows you are a dog.

JonKnowsNothing May 21, 2022 2:03 PM

@Canis familiaris @All

re: nobody on the Internet knows you are a dog

That might have been true early on, but it’s more accurate to restate it:

No one reading your text knows you are a
No one watching your vids knows you are a

Except For:

Every LEA, LEO and POAB knows exactly who and what you are, where you live, who you work for or who you don’t work for, your family, your lifestyle, your living situation, what you eat for breakfast, lunch and dinner (provided you have food to eat), or the brand of pet food you prefer.

The leash may be long but it’s tied on quite tight. You may not know who holds the other end until you hit the end of the feeder line.

===

POAB (Plethora of Other Agencies and Businesses)

JonKnowsNothing May 21, 2022 2:06 PM

LOL

The parser omitted the Fill In the Blank phrase…. Well, y’all will have to Fill in the Blanks for yourself

Canis familiaris May 23, 2022 3:06 AM

@JonKnowsNothing

You are right. Essentially the doggy anonymity is flipped round to you not knowing who you are communicating with (or leaking information to), even though others with privileged access to the network have a pretty good idea who you are. So rather than

“Nobody on the Internet knows you are a dog.”

it becomes

“On the Internet, you don’t know if you are communicating/leaking information to a dog.”

where “a dog” is shorthand for any unknown entity.

Am I reading Bruce Schneier’s blog, or is it Bonzo Schnauzer’s blog with a human photo attached? In this case, I don’t actually care, as I’m more interested in the concepts than where they came from, and good thinking is independently verifiable.

No doubt somebody, somewhere keeps a record of which endpoints (and associated humans) are reading this blog. (Waves)

SpaceLifeForm May 23, 2022 4:57 PM

@ Canis familiaris, JonKnowsNothing

A chew biscuit

No doubt somebody, somewhere keeps a record of which endpoints (and associated humans) are reading this blog. (Waves)

That is why I mentioned not expanding the size of the recent comments set.

The size of recent comments (traffic size in bytes, even though TLS encrypted) can be used as a fingerprinting metric, which potentially could be used against VPN or TOR users.

While I doubt most of the commenters here actually use either VPN or TOR, the spambots may very well always do so.

Is the dog named BOT or TLA? Maybe there are two of them, and they are related.

Maybe it’s one chasing its tail. Maybe it is Wag the Dog.

SpaceLifeForm May 23, 2022 6:22 PM

@ Canis familiaris, JonKnowsNothing

re: chew biscuits

Here is a very recent example. The spambot posts to an old article (four years old in this case), which dynamically changes the byte size of the recent comments page.

https://www.schneier.com/blog/archives/2017/03/security_vulner_8.html

hXtps://www.schneier.com/blog/archives/2017/03/security_vulner_8.html/#comment-405239

As I do not and will not use VPN or TOR, I do not care anyway as I can be easily traced just by normal traffic analysis.

Because I know that the FBI and NSA know who I am already, and I am not laundering money, there is no reason to try to hide my traffic. I may upset them once in a while because the truth hurts. But, it is important that they think outside the box, and connect dots.

If FBI and NSA can not do that, well, see toilet paper and thrones.

There is probably a lot of cryptocurrency money laundering happening via TOR.

That reminds me to check out if the Tor Browser will function without actually using TOR.

1&1~=Umm May 25, 2022 12:26 AM

@SpaceLifeForm:

“That reminds me to check out if the Tor Browser will function without actually using TOR.”

Does it matter?

As has been noted long ago, the DoJ and other prosecutors will claim having it on your computer is a sign of intent to commit crime or unlawful behaviour. Which will be used to justify the use of other legislation that is overly broad in scope.

Oh and do not forget the universal catch-all that you can always be found guilty of, ‘conspiracy’.

The point being, once they start, you have to be found guilty of something:

  • To justify the resource use.
  • To collect promotion points.

The fact you may not have done anything illegal, immoral, unethical or even unkind, is not the point.

  • Justice has to be seen to be done.
  • There must be no chance of escape.
  • There has to be no compensation.
  • There must be no embarrassment.

So,

‘The ends will always justify the means.’

JonKnowsNothing May 25, 2022 12:38 PM

@1&1~=Umm, @SpaceLifeForm, @All

re:

The fact you may not have done anything illegal, immoral, unethical or even unkind, is not the point.

Justice has to be seen to be done.
There must be no chance of escape.
There has to be no compensation.
There must be no embarrassment.

In light of the recent SCOTUS ruling preventing State Criminal Cases from passing into the Federal courts regardless of deficiency of the State level Defense Contesting the Allegations (1), there is likely to be a shift in how cases will be presented.

ATM, Federal Cases are all the rage: good publicity and broad application. They are subject to long reviews and challenges. Should the shift move to State Level prosecution, there are fewer avenues for rebuttals or appeals. It might not be quite as newsworthy, but many Federal Laws can be applied equally at State Level using similar or equivalent State laws.

Once you are convicted and the State appeal process exhausted, you will be a long term Guest of that State.

====
1) IANAL; in summary: any deficiency in Defense against allegations by the State, regardless of its origin, is no longer grounds to pass on to the Federal Courts. Missed applications, missed submissions, lack of investigation, lack of countering experts, lack of countering science rebuttals are all 100% the fault of the Accused and now Convicted Person. Regardless of origin or deceit by the State, there is no allowance for further review past the State Supreme Court level. Many reviews are procedural only, detailing issues within the Transcripts, and there are no avenues for submitting new evidence or rebutted forensic methods.

Conservative majority hollows out precedent on ineffective-counsel claims in federal court

  • In Shinn v. Ramirez and Jones, two men on Arizona’s death row raised claims in habeas corpus proceedings that their trial attorneys were constitutionally ineffective – one for failing to investigate evidence suggesting his client could not have committed the crime, and the other for failing to investigate her client’s intellectual disability, which could have spared him the death penalty. Although the Supreme Court’s 2012 decision in Martinez v. Ryan permitted defendants to raise such claims for the first time in federal court, on Monday the court ruled 6-3 that they cannot develop evidence to support those claims.

ht tps://www.scotus blog. com/2022/05/conservative-majority-hollows-out-precedent-on-ineffective-counsel-claims-in-federal-court/

(url lightly fractured)

Chris Drake June 16, 2022 12:53 AM

purpose “to build trust and confidence”:

“NSA Says that There are No Known Flaws”

“flaws were found in at least four candidates.”

Someone forgot to tell the Math people what the purpose was supposed to be, so they could have thrown out the obvious duds before making their “no known flaws” statements… putting aside the irony of anyone ever deciding to “trust” anything said by an agency that exists to know all our secrets…
