Oblivious DNS

Interesting idea:

…we present Oblivious DNS (ODNS), which is a new design of the DNS ecosystem that allows current DNS servers to remain unchanged and increases privacy for data in motion and at rest. In the ODNS system, both the client is modified with a local resolver, and there is a new authoritative name server for .odns. To prevent an eavesdropper from learning information, the DNS query must be encrypted; the client generates a request for www.foo.com, generates a session key k, encrypts the requested domain, and appends the TLD domain .odns, resulting in {www.foo.com}k.odns. The client forwards this, with the session key encrypted under the .odns authoritative server’s public key ({k}PK) in the “Additional Information” record of the DNS query to the recursive resolver, which then forwards it to the authoritative name server for .odns. The authoritative server decrypts the session key with his private key, and then subsequently decrypts the requested domain with the session key. The authoritative server then forwards the DNS request to the appropriate name server, acting as a recursive resolver. While the name servers see incoming DNS requests, they do not know which clients they are coming from; additionally, an eavesdropper cannot connect a client with her corresponding DNS queries.
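
To make the flow above concrete, here is a minimal, hypothetical sketch of the client-side step in Python. The excerpt does not pin down these details, so the cipher choices (AES-GCM for the session key, RSA-OAEP to wrap it for the .odns server) and the base32 label encoding are illustrative assumptions only.

```python
# Hypothetical sketch of the ODNS client-side query construction described above.
# AES-GCM, RSA-OAEP and base32 label encoding are illustrative assumptions.
import os
import base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def build_odns_query(qname: str, odns_public_key):
    """Return ({www.foo.com}k.odns, {k}PK) for the given query name."""
    session_key = AESGCM.generate_key(bit_length=128)          # k
    nonce = os.urandom(12)
    blob = AESGCM(session_key).encrypt(nonce, qname.encode(), None)

    # {www.foo.com}k encoded as a DNS-safe label, with .odns appended.
    # (A real implementation would have to respect the 63-byte label limit.)
    label = base64.b32encode(nonce + blob).decode().rstrip("=").lower()
    obfuscated_qname = label + ".odns"

    # {k}PK: the session key wrapped for the .odns authoritative server,
    # to be carried in the Additional Information section of the query.
    wrapped_key = odns_public_key.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return obfuscated_qname, wrapped_key

# Usage, with a locally generated key standing in for the real .odns server key:
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
qname, wrapped = build_odns_query("www.foo.com", server_key.public_key())
```

The recursive resolver sees only the opaque label plus .odns; only the .odns authoritative server, which holds the matching private key, can unwrap k and recover www.foo.com.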

News article.

Posted on April 18, 2018 at 6:29 AM • 37 Comments

Comments

Thoth April 18, 2018 6:47 AM

@all

Even with ODNS, countries can still implement censorship to prevent access to these ODNS services on grounds of suspicion and so-called “National Security”.

The reason ODNS is yet another bad attempt at network security is that these DNS servers are essentially network chokepoints and points of reliance.

The only remedy would be a decentralized and oblivious solution where every is a DNS itself, which in simple terms means we need to move to a broadcast-style system.

David Alexander April 18, 2018 8:38 AM

The ODNS servers become a priority target for compromise by anyone currently using DNS information for intelligence purposes. They will want either to obtain copies of the keys or to read traffic inside the server in plaintext.

Snarki, child of Loki April 18, 2018 8:46 AM

Seems to me that the ODNS servers would be very vulnerable to DDoS attack. Lots of bad DNS lookups that take a long time to resolve, for example.

Brent Longborough April 18, 2018 8:59 AM

Bruce, have you had a chance to look at the Cloudflare dns-over-https proxy? On the face of it, it seems a simpler solution, but I’m sure there are hidden “naughty bits” you may care to comment on.

It’s part of the Cloudflare Argo Tunnel daemon, cloudflared:

https://github.com/cloudflare/cloudflared

Argus April 18, 2018 10:57 AM

If you see the requests coming into the server and the requests coming out of it, it should be pretty easy to correlate the two, unless the volume of requests is high enough, or they throw in some caching as well.

albert April 18, 2018 10:58 AM

@Thoth,
“…oblivious solution where every is a DNS itself …”

Please edit ‘every [what]’?

. .. . .. — ….

bcs April 18, 2018 11:46 AM

Slight modification: use {www.foo.com}k.{some other domain}. Then people can choose whatever resolver (or chain of resolvers) they want.

This would require an attacker to compromise an unpredictable set of systems to break any given user. It would also allow the user to not totally trust any one of the chain and still have some expectations of privacy.

OTOH, this would likely make the DoS possibilities even more of a problem.
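
A toy sketch of this variant, assuming each hop in the user-chosen chain shares a symmetric key with the client and strips only its own layer; the hop domains and key handling are entirely hypothetical, and a full design would also have to wrap session keys per hop.

```python
# Toy sketch of the "chain of resolvers" variant: wrap the query once per hop,
# onion-style, so each chosen resolver can only peel its own layer.
# Hop domains and per-hop symmetric keys are hypothetical.
import os
import base64
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_layer(inner_qname: str, hop_domain: str, hop_key: bytes) -> str:
    """Encrypt the inner name under this hop's key and append the hop's domain."""
    nonce = os.urandom(12)
    blob = AESGCM(hop_key).encrypt(nonce, inner_qname.encode(), None)
    # (Real DNS labels are capped at 63 bytes; ignored here for brevity.)
    label = base64.b32encode(nonce + blob).decode().rstrip("=").lower()
    return label + "." + hop_domain

k_a = AESGCM.generate_key(bit_length=128)
k_b = AESGCM.generate_key(bit_length=128)
# www.foo.com wrapped for hop B, then the result wrapped again for hop A.
qname = wrap_layer(wrap_layer("www.foo.com", "resolver-b.example", k_b),
                   "resolver-a.example", k_a)
```

As the comment notes, an attacker would then have to compromise every hop the user happened to pick, at the cost of a larger DoS surface.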

Michal Malec April 18, 2018 3:45 PM

This looks like a good solution until quantum computers become more widespread. More will need to be done to protect PKI from being broken then. I think quantum computers will become common within the next few years, before ODNS comes into use. We will see =)

Justin Andrusk April 18, 2018 4:17 PM

Seems to me that this might subvert certain tools such as DNS firewalls, and make it easier for bad actors to do nefarious things with TXT records without being detected, due to the crypto.

Thoth April 18, 2018 6:06 PM

Typo Error Edit:

“every node” should be inserted into “oblivious solution where every is”.

Clive Robinson April 18, 2018 10:53 PM

At best it tries to protect the name requested in one direction only (from the client upwards). What it does not protect is the fact that the client has made a request, nor does it hide any changes that request has caused.

Thus as @Argus has noted the system is easy meat for even simple traffic analysis.

There are two issues to consider,

1, Is the existing DNS going to leak information as well?

2, Can the traffic analysis be reduced / eliminated?

The answer to the first is that the existing DNS is designed to haemorrhage information at every stage, thus providing a way for the information to be found.

The answer to the second is, as @Thoth notes, that it is possible for some of the path of the client request.

The problem is the same one with Tor that I have been mentioning for several years now. As long as the client or the final server is not integrated into the mix network, some level of traffic analysis and leakage will happen.

The message is that if we want to make DNS private in any way, we have to get rid of the existing DNS. If we do not, the system will leak some information, and a sufficiently high-level attacker (because they can subpoena the DNS public/private key pair) will be able to turn that leak from a trickle into a full-blown torrent.

That is, we don’t want to “prune the tree” of the existing DNS; we want to “chop it down, and burn every last leaf and twig of it”, leaving only the memory of why “efficiency -v- security” is so often a problem, as it is with the existing secure DNS setup.

So far so easy; the question is what to replace it with, which will need not just sufficient crypto strength to avoid subversion but sufficient human strength to avoid the MICE / $5 wrench / judicial solution at the heart of the replacement name service. It must also survive the equipment they buy being “enhanced in transit” by the likes of a SigInt agency. All so the top layer of the distributed domain-name database can actually be trusted, so the next layer down has a trusted foundation, etc.

I could go on, but people need to understand that the SigInt, IC and LE agencies do not behave as “rational actors”, therefore they will go to any lengths to protect their rice bowls. Whilst the SigInt and IC agencies try to remain out of sight, the likes of the FBI and DOJ are about “sending messages” to steamroller all opposition. They only came unstuck with Apple because they were too full of themselves and did not do their initial groundwork sufficiently.

Thus perhaps the first place to start fixing the DNS is by fixing the sociopath problem in various government agencies. Which all current “oversight” systems are never going to do. It’s a fundamental issue with all hierarchical systems, be they human or otherwise: power / control / corruption are most easily wielded from the top.

So perhaps we should have a think about how to develop a non-hierarchical system. Contrary to popular opinion such systems do exist, and they can be made secure, at a price.

Then we have

Roggle April 19, 2018 1:04 AM

@Clive

State actors aren’t the only threat out there, and you could break every Internet infrastructure security model if your threat model involves hardware being interdicted and humans being compelled by tech-stupid courts or $5 wrenches.

Winter April 19, 2018 2:01 AM

If the ODNS servers cache the resolved requests, I would assume that would limit the amount of information leaked.

Clive Robinson April 19, 2018 2:52 AM

@ Thoth,

A question for you, but first the prelude,

As you know I’m not particularly fond of the “blockchain”, as the world appears to have gone “blockchain mad”, which is often a sign of “bubble investing”. But that does not mean I cannot see that the technology has worth in the right place when you find it 😉

However, taking the blockchain back a step or two to the Merkle tree: that can be shown to be the most efficient mechanism we currently have for a trusted database, and one or two additions gets you to a trusted ledger which, if distributed properly, can reveal tampering or other bad behaviour fairly quickly, and from whence it came[1].
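
For readers who have not met it, a minimal sketch of the Merkle-tree idea being referred to; SHA-256 and the duplicate-last-node padding rule are assumptions for illustration.

```python
# Minimal Merkle-root sketch: any change to any entry changes the root,
# so a widely distributed copy of the root is enough to detect tampering.
# SHA-256 and duplicate-last-node padding are illustrative choices.
import hashlib

def merkle_root(entries: list[bytes]) -> bytes:
    level = [hashlib.sha256(e).digest() for e in entries]
    while len(level) > 1:
        if len(level) % 2:                      # pad odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"entry-1", b"entry-2", b"entry-3"]).hex())
```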

Whilst such a public ledger can be trusted for consistency within itself, the problem is that there is no trust in those entering data into the system, so there is no way integral to the ledger to trust a “first entry”. This is the same issue we have with PKI and CAs: to many it appears that “CAs will sign anything, no questions asked, if the price is right”. Whilst not true in general, there are enough known failures to show that there are no effective trust mechanisms in place.

Have you had any thoughts about how to extend trust backwards, such that a ledger is not a “garbage in, garbage carried”[2] system?

I know it will not be 100%, as trust systems are not reliable predictors of future behaviour[3]. However there should be some way to make trust falsification more difficult, and it is sorting this out that interests me with regard to many supposedly secure systems.

I know @Bruce has recommended a diversification of document sources, but for obvious reasons we all start out without even a birth certificate. So certain things have to happen, often beyond our control, for us to get certain documentation. For instance some people, for various reasons, will never get a driving licence; others will not get credit scores; others, mortgages; and so the list goes on. However as @Bruce notes a centralised system is a quite dangerous thing, not just because it makes criminals’ tasks easier, but because, as many US voters have found, they can easily be prejudiced against by various political entities, or have other effective rights required to live in society curtailed on another’s whim.

[1] No, I do not mean it is attributable to an individual or entity, but rather that it can be shown at what node the tampering tried to enter the system (think of it as tool marks on the window frame of a house that has just been broken into: they do not tell you “who”, but they do tell you “where and when”, giving a better chance, as a bloodhound has, of picking up the scent trail).

[2] Such as we have seen with “smart contract” systems.

[3] As I’ve noted before “You only become a murderer after you have killed someone, but not always” thus a loss of trust is retrospective to an act of betrayal becoming known, not prior.

Clive Robinson April 19, 2018 4:13 AM

@ Roggle,

… your threat model involves hardware being interdicted and humans being compelled by tech-stupid courts or $5 wrenches.

Of those three there are known solutions to two of them; “interdiction” is something that even the US Department of Defence is still working on, especially the “supply chain” part of the problem.

The issues with the judiciary are most easily solved by never having what they desire either in their jurisdiction or within your reach at any time.

For instance, let’s say I design an embedded device for you and I make it sufficiently tamper-proof. It consists of two parts: the home unit and the mobile unit. I generate two sets of PubKey pairs, P1pub/P1pri and P2pub/P2pri, and I then shred the four primes used to generate them. I embed P1pri and P2pub in the home device, and P1pub and P2pri in the mobile device.

It’s fairly easy to see that the two devices can communicate data secretly across an open network simply by use of the PubKey pairs. Importantly, it can be shown that you have no knowledge of the keys and cannot get access to them. Further, if you are caught with the mobile device, it is clear that whilst they might be able to read any new messages from the base and might be able to generate new messages from the mobile, there are protocols that can be used to establish ephemeral keys which, if they have been used in the past, are now lost to the world and unrecoverable. You as the user can be completely oblivious to this, and other techniques can cause ephemeral key regeneration/renegotiation sufficiently frequently that even if the mobile device is grabbed powered up, it will have failed to negotiate a new ephemeral key and zeroed the last active one. So it can be shown in court that not only do you not know what the secret keys were, you never could have.

Further, other protocols can be added to, say, the home unit so that a short while after a failure it zeros its PubKey stores.
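
A minimal sketch of the key split being described, assuming RSA-OAEP purely for illustration; the point is that each unit holds one private key and the other unit’s public key, and nothing outside the two devices ever holds a private key.

```python
# Sketch of the two-keypair split: home unit holds P1pri + P2pub,
# mobile unit holds P2pri + P1pub. RSA-OAEP is an illustrative assumption.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

p1 = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # P1pri/P1pub
p2 = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # P2pri/P2pub

home   = {"priv": p1, "peer_pub": p2.public_key()}   # home unit: P1pri + P2pub
mobile = {"priv": p2, "peer_pub": p1.public_key()}   # mobile unit: P2pri + P1pub
# (Outside the devices, all other copies of p1 and p2 would now be destroyed.)

# Mobile -> home: encrypted under P1pub, readable only by the home unit.
msg_to_home = mobile["peer_pub"].encrypt(b"report", oaep)
assert home["priv"].decrypt(msg_to_home, oaep) == b"report"

# Home -> mobile: encrypted under P2pub, readable only by the mobile unit.
msg_to_mobile = home["peer_pub"].encrypt(b"orders", oaep)
assert mobile["priv"].decrypt(msg_to_mobile, oaep) == b"orders"
```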

The real issue of course is whether you as a human can keep up with the level of OpSec you put into the system, and the potential loss of communication that might involve.

The only other requirement is that whilst you and the mobile device might start in the same legal jurisdiction, you only ever use them in different jurisdictions. Thus there is little a judge can do.

Further, if you pick your jurisdictions with care, there will be no cooperation in a sufficiently timely manner, and there are techniques that protect against that as well.

It might sound paranoid, but there are plenty of people around following more intense OpSec techniques, and they are just people doing their jobs abroad, working with valuable information.

Also, whilst it might not be of much consolation in such situations, the $5 wrench does not get the wielder any further than it does the archetypal gavel basher.

It’s a point that certain people have been trying to get through certain political skulls for quite some time, and for some reason the lack of “make it so” omnipotence has made the politicos cranky.

But even if they ban such encryption systems, or any computer encryption software, there are the equivalents of paper-and-pencil ciphers that are sufficiently secure, and phrase codes that can, with a little practice, be spoken fairly normally. Thus any monitored communications system would be of just as much use to them as a highly enciphered one, which is nil.

As we know from a cranky Australian politition, rationality and logic are not things they want stopping their ambitions and plans, hence the rather daft comment about the laws of Australia being superior to the laws of mathematics…

SpellKing April 19, 2018 6:01 AM

@Clive Robinson
“politition”

The entire world is waiting to find out what a politition is.

GoodGood April 19, 2018 7:28 AM

The real issues here are not technical, but economic and political. Any solution that works well will be opposed by nations and their very well funded intelligence services. The intelligence services may be able to break the solution with tools we don’t even know they have. The nations can pass laws forcing large companies to act in a manner that is less secure. The companies can’t afford to withstand these governments and their bad laws beyond a certain point. The individuals and non-profits willing to stand up to governments do not have even a fraction of the funding they would need. And voters are oblivious and apathetic to these issues, so living in a democracy is no different (on this one issue) from living in a benevolent monarchy.

OrthoGrafZahl April 19, 2018 7:31 AM

@SpellKing

It has to do with this new compleatly embeded ephemaral mathmatics thing. Duh.

Argus April 19, 2018 9:30 AM

@Winter: caching may make the service unusable for services with low TTLs that use DNS for failover between data centers.

Probably better than nothing, but not perfect.

Sancho_P April 19, 2018 9:34 AM

OK, I don’t know much about DNS and that stuff.
But I think the client asks to resolve a certain name to a particular (IP) address.
– It does this every time it wants to connect to “name” – which is strange, to say the least, as the address rarely changes.
FWIW, say, we can hide that.
– Next the client connects to that address, only ms later.
But then, why did we hide the DNS search in the first place?

However:
The network will always connect us to the first respondent of that particular address.
In the given system, as long as the adversary can be faster, we’ll never know who replies.

albert April 19, 2018 12:07 PM

@Thoth, @Clive, @Sancho_P, etc.
“every node” Thanks, that makes sense.
The DNS system is a convenient way of turning easy-to-remember names into difficult-to-remember IP addresses. It’s also an easy way to monitor myriads of IP addresses from a single point, as opposed to having to create a list of all the computers you want to monitor.

It appears to me that applying band-aids to an insecure system won’t make it secure (and may make it worse), so the system needs to be scrapped. With all the power we have on our desks (and in our pockets), why not store the IP addresses on our computers?

. .. . .. — ….

Gerard van Vooren April 19, 2018 4:15 PM

Well guys, I am sorry to say this, but this myth is being busted by every government around these days. Sooner or later this will happen: busted, busted, busted.

Even though these engineers/scientists may have invented impressive security, it’s “those guys” who will bring up the “save the children” crap, and that is the end of it.

Those guys will bring such a long list of amendments that it’s just not funny anymore, and the same applies to this story.

Clive Robinson April 19, 2018 5:13 PM

@ Albert,

It appears to me that applying band-aids to an insecure system won’t make it secure (and may make it worse), so the system needs to be scrapped.

One of the first design errors of the current DNS with regard to privacy is that it is a “pull not push” system. The reason for that choice is that pull is more efficient and timely than push.

However the implication of pull is that it has to be not just fast but “low latency” as well, which makes “traffic analysis” way easier… Push, however, whilst it can work and give low-latency information, has many needless network issues, not least of which is multiple redundant flows of information, which, whilst it does not reveal user request time references, does cause a lot of redundant traffic. As with cross-bar switch positioning, access to any DNS information can be efficiently split into multiple “data nodes” around a distributed network.

Sancho_P April 19, 2018 5:37 PM

@albert

I’m afraid it would be of no real benefit.
To store (most of) the IP addresses (or to run a DNS server on your local network) might be relatively easy, but it will not change the subsequent request to connect to the desired service. So your ISP knows exactly at which times and how often you access 66.33.204.254 (schneier.com).

And it will not solve the “first come, first served” issue: if someone replies in lieu of 66.33.204.254 and is faster, then you will see their page.
The other, authentic replies will be silently ignored by your computer.

Thoth April 19, 2018 7:55 PM

@all

Not exactly part of the topic but still relevant: Google has decided to explicitly prevent any sort of “Domain Fronting” techniques, which are traditionally relied upon by journalists, dissidents and privacy-oriented people, as well as bad actors.

Because of the way our network works, we have to use weird methods like “Domain Fronting” to attempt to clean away our transmission traces, and it’s a cat-and-mouse game of taking out any proxy service that is responsible for masking traces, using national-level bans and political and economic sanctions and attributions.

Google’s DO BE EVIL motto becomes DO BE MORE EVIL 🙂 .

Potential P2P communications platforms are not properly funded, and this impacts adoption rates.

People’s attitudes are in a state of denial, and this affects adoption rates too.

All in all, the root of the problem has nothing to do with technology, as broadcast and multicast networks have long existed.

The problem is with the people, politics and money.

Link: https://www.theregister.co.uk/2018/04/19/google_domain_fronting/

Thoth April 19, 2018 8:11 PM

@Clive Robinson

RE: Blockchain

It is just a fancy way of saying Merkle tree and no more. In its most basic form it is what it is, and marketing people call it Ledger, Next Gen Trust, Blockchain …etc… and, rightfully, a bunch of scams, spams, junk and nonsense are built on top of that.

I will touch on my latest snake-oil hunting expedition, where I just found another piece of probable snake oil trying to mingle with solid security technology just for sales and marketing.

There is a form of Merkle tree called the Permissioned Ledger, which essentially means that nobody except those with explicit permission can add an entry to the Permissioned Ledger. Another name for it is a “Curated Ledger”.

It is essentially a PKI in a ledger. You announce a Root Key in the Genesis Block (root hash of the tree), and every entry must be signed by the Root Key or, in the usual CA style, approved by the Root Key in order to be made. An illegal entry would be pruned from the Permissioned Blockchain.
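
A minimal sketch of that “PKI in a ledger” idea: the genesis block pins a root public key, and an entry is only appended if its signature verifies against that key. Ed25519 and the simple hash chaining are illustrative assumptions.

```python
# Minimal "permissioned ledger" sketch: the genesis block pins a root public key,
# and entries are appended only if their signature verifies against it.
# Ed25519 and the hash chaining scheme are illustrative assumptions.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

root_key = Ed25519PrivateKey.generate()
ledger = [{"data": b"genesis", "prev": b"",
           "root_pub": root_key.public_key()}]        # genesis block pins the root key

def append_entry(data: bytes, signature: bytes) -> None:
    try:
        ledger[0]["root_pub"].verify(signature, data)  # reject anything not root-approved
    except InvalidSignature:
        raise ValueError("entry not approved by root key; pruned")
    prev = hashlib.sha256(ledger[-1]["data"] + ledger[-1]["prev"]).digest()
    ledger.append({"data": data, "prev": prev, "sig": signature})

entry = b"example record"
append_entry(entry, root_key.sign(entry))              # accepted; unsigned garbage is not
```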

I have never really thought much about the Blockchain issue of preventing garbage from entering the system, because my personal view is the same as yours: Blockchain is simply a hyped-up technology.

Would anyone bother to traverse a Merkle tree simply to search for records, or would they rather search a normal flat file or database from a centralized system with proper authorization (i.e. X.509-certificate-signed entries using internal CAs)? I have tried to install a Bitcoin Core client (the official Bitcoin client); the current blockchain is about 20+ GB, and downloading and verifying it takes about a week.

If I am a user, why would I want to wait for 7 days for my Bitcoin client to sync up to the network and do the due diligence of checking all the hashes of the entries in the Merkle tree before I can start to spend my Bitcoin funds?

Of course there is a thin-client version, but the thin client does not check the blockchain and simply shortcuts everything, which means you cannot fully guarantee that your funds are sent correctly!

My only thought of using Blockchain is for internal application (i.e. my modified version of the Castle-&-Prisons model), where I described the use of a blockchain to keep the Prison Cells in check so that it enforces an internal trust of sorts between the Prison Cells. Because the use is exclusively within the C-&-P setup and no external parties are going to touch the C-&-P blockchain, the amount of garbage that could enter the system is smaller, and it works as a Permissioned Blockchain because only the Prison Cells are allowed to update it, assuming all the Prison Cells’ public keys are already somehow known to each other (via some Root Signing Key, I suppose, embedded in the Genesis Block?). That is my only use of it: limiting access for a very specific and non-generic purpose.

wetsuit April 20, 2018 10:19 AM

@albert:

“Why not store the IP addresses on our computers?”

LOL – that’s a really big hosts file! There are many more sites than domains, so (according to Netcraft) 200M domains × ~100 bytes per entry ≈ 20 GB, which is not bad. It would have to be updated smartly, and that’d be the trick.

The general idea is good, and is why I’ve cached a long run of internet usage into an unbound local resolver. Of course if the first pull was bad, it’s still bad.

— ..— -… .- ..-. .- .-. — . .-.

albert April 20, 2018 1:33 PM

@wetsuit,

Noted, but I still like the idea of caching frequently visited web sites. Any recommendations for an unbound local resolver for Linux?

Farming was a dream of mine…a long time ago.

. .. . .. — ….

Clive Robinson April 20, 2018 7:15 PM

@ Albert,

Any recommendations for an unbound local resolver for Linux?

It’s already there: historically, the way you want to do things was the way it was originally built to work.

Have a look at your local hosts file[1]. All *nix systems have one; and, as MS nicked BSD networking, all networkable MS OSs have one too, as, for the same reasons, do Mac OSs.

You also have, as part of DNS, a local cache on the host machine, which in essence is a speed-up mechanism as well as a way to reduce DNS requests.

Simply put, when you make a network request the OS goes through the file and the cache first, looking for a match on the domain name. If one is found, the local copy is used unless it is deemed to have gone stale; only then is a DNS request made across the network.

So far so easy…

To do things “automagically” you need to get access to the local DNS client resolver cache. On MS OSs it’s usually quite simple using “ipconfig”… *nix, however, is a very different kettle of fish.

On many versions of *nix, trying to find, let alone read and get into a meaningful format, the cache contents is at best an uphill struggle… the reason appears to be in part historical.

If you have ever had to deal with BIND you will probably “have a chill in yer waters around now”; it is probably the most used, least understood and best avoided piece of DNS resolver software on the face of the planet. Books used to be written about it, and strange attractor forces produced black holes in admins’ otherwise enlightened minds…

The result of that, and the usual *nix philosophy of “the programmer knows what they are doing”, came to the fore. Thus it was left to the application programmer to cache DNS records, and a request to the resolver would fetch a fresh copy from the wider network… Thus *nix DNS UDP traffic can be quite high, as most application programmers don’t actually know what they are doing, so they make assumptions and take simple routes.

Why there is a lack of a local resolver cache I have no idea; that’s just the way the developers seem to be happy with it, which is why many distros are like that, though there have been a few moves towards adding an internal DNS resolver cache.

Once you do get at the cache (if you have one), you next have to get the contents into a meaningful format. Once you have done that, it should not be too difficult to make updates from the cache to the file with a shell script or perl program (a rough sketch is given below). But you will need to implement your own “stale” function to prevent the file filling with malvertising links etc.

[1] https://en.m.wikipedia.org/wiki/Hosts_(file)
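
A rough sketch of the kind of update script described above. How you actually dump the resolver cache is distro-specific, so the cache entries arrive here as a plain name-to-address dict; the side-file path and the 30-day staleness rule are assumptions, and a real hosts(5) file would not carry the timestamp column used here.

```python
# Rough sketch: merge freshly resolved name/address pairs into a local side file,
# dropping stale lines. File path, line format (addr name timestamp) and the
# 30-day staleness rule are illustrative assumptions, not hosts(5) syntax.
import time
from pathlib import Path

SIDE_FILE = Path("/etc/hosts.cache")      # hypothetical; not /etc/hosts itself
STALE_AFTER = 30 * 24 * 3600              # seconds

def merge(cache_entries: dict[str, str]) -> None:
    now = int(time.time())
    kept = []
    if SIDE_FILE.exists():
        for line in SIDE_FILE.read_text().splitlines():
            addr, name, stamp = line.split()
            if now - int(stamp) < STALE_AFTER:        # your "stale" function goes here
                kept.append((addr, name, stamp))
    known = {name for _, name, _ in kept}
    kept += [(addr, name, str(now))
             for name, addr in cache_entries.items() if name not in known]
    SIDE_FILE.write_text("\n".join(" ".join(entry) for entry in kept) + "\n")

merge({"schneier.com": "66.33.204.254"})   # address taken from the thread above
```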

Use The Source April 24, 2018 3:38 AM

@Sancho_P

Isn’t that what https is for? And silent behavior can always be tuned to log whatever you want.

Sancho_P April 24, 2018 5:32 PM

@Use The Source (”Isn’t that what https is for?”)

Admittedly my knowledge isn’t sound, but when TLS (the “s” from https) is negotiated, the server has to present the certificate for the requested service.
This happens before http (URL, the domain name) enters the game.

So when a certain IP address represents several virtual servers (= services), the physical server (host) has to know which service (domain name) you want to connect to, in order to present the corresponding certificate to your connecting TLS client.
Therefore the client has to name it first.

This is done by the Server Name Indication (SNI) extension, which is part of TLS; the catch is that the transmitted hostname (domain) is not (and cannot be) encrypted.
I think the Host header is mandatory.

So your client shouts in the clear to your ISP something like:
“connect me [encrypted] to [youpor*.com] at 216.18.168.116”,
leaving little doubt about the desired target.
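
A small Python illustration of the point: the hostname handed to the TLS layer goes out unencrypted in the ClientHello as SNI, before any keys are negotiated, even though everything after the handshake (URL, headers, content) is protected.

```python
# The string passed as server_hostname below is sent in the clear in the TLS
# ClientHello (the SNI extension), before any encryption has been negotiated.
import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("schneier.com", 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname="schneier.com") as tls:
        # Only after the handshake completes is the HTTP request (URL, headers) encrypted.
        print(tls.version())
```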

Re “silent” vs. “log” I dunno, but:
– ‘can always’ means that it is not (?)
– it may be a nightmare to log because multiple paths are at the core of the Net’s resilience / IP (?)

Sancho_P April 24, 2018 6:05 PM

@Moderator

Got that response several times now, also with “Preview”:
“Comment Failed
Your comment submission failed for the following reason:
Invalid request”

(my local time was about 00:00; a bad coincidence?)
