Fix TLS. Let's get rid of certificates.

February 26, 2017

TLS secures the connection between your browser and the websites you visit (and a lot of other Internet connections that do not involve either a browser or a web server). TLS should provide confidentiality (so nobody can steal your passwords or see which webpages you are visiting), integrity (so nobody can modify the transactions you send to your bank) and authenticity (so you know who you are talking to). When properly used, TLS provides the first two guarantees, but it is becoming increasingly apparent that it fails to provide the third: authenticity. The use of certificates (and the poor understanding of what authenticity on the web really means) is to blame.

(Note: I wrote an update to clarify and improve the idea, based on comments I received.)

The problem with certificates

Certificates tell your browser which public key belongs to a domain (like paypal.com or eff.org). With that information, your browser is supposed to be able to verify that it is indeed setting up a connection with that particular domain, so you can be certain that you are communicating with the website you intended to visit. The green padlock in front of the URL in your browser tells you TLS is securing the connection.

But this reasoning is wrong.

Many so-called certification authorities (CAs) can issue certificates for a domain. They are supposed to do so faithfully, and not to issue certificates for keys to entities that do not own the claimed domain. Unfortunately, not all certification authorities are honest, either because they are hacked (see the DigiNotar case) or because they are controlled by an 'evil' government (both China and the Netherlands own CAs that are trusted by all major browsers). As all browsers trust all certification authorities equally, even sites with a secure connection can be spoofed: you think you are visiting PayPal, but instead you are visiting a very convincing copy hosted by a criminal gang.

Also, people don't really understand URLs. Sometimes the difference between a genuine domain (paypal.com) and a fake one (paypa1.com) is hard to tell. Moreover, many people will not necessarily notice a problem with a domain like paypal.com.singin.cc, especially if it is the start of a long, complex URL. Such domains can also obtain a (valid!) certificate, especially now that Let's Encrypt makes getting certificates easy and free. (As a result, Let's Encrypt has been blamed for making phishing easier. This is not fair, as I will explain in this post.)

We see that TLS has been quite successful in preventing a man-in-the-middle attack from being staged once a connection has been set up. However, there are still many opportunities for an attacker to stage a man-in-the-middle attack while the connection is being set up.

Previously proposed solutions

This is not a new problem and several countermeasures have been proposed, of course.

One of them is certificate transparency. The main idea behind this approach is the observation that a party that generates a fake certificate for a domain typically does not want to be detected doing so. Hence it will try to limit circulation of the fake certificate to targeted individuals or regions, while the rest of the world still sees the true, original certificate for the same domain. If these different views of what the certificate for a domain is could somehow be brought together, an attack would easily be detected. This is what certificate transparency aims to achieve: all issued certificates are recorded in several public logs, spread all over the Internet. Certificates are only valid, and accepted by a browser, if they appear in such a log. Independent parties monitor these logs to detect inconsistencies, like different certificates issued for the same domain that contain different public keys. This makes it hard for rogue issuers to create fake certificates for domains, as the real issuer will certainly make sure the certificates it issues are added to the logs as well.
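
To make the mechanism concrete, here is a rough sketch (in Python) of the inclusion check at the heart of certificate transparency, following the Merkle tree hashing of RFC 6962. How the audit path and the signed tree head are fetched from a log, and how the tree head signature is verified, is left out; the function names are mine.

```python
import hashlib

def leaf_hash(cert_der):
    # RFC 6962: leaves are hashed with a 0x00 prefix (domain separation).
    return hashlib.sha256(b"\x00" + cert_der).digest()

def node_hash(left, right):
    # Interior nodes are hashed with a 0x01 prefix.
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(cert_der, index, tree_size, audit_path, root_hash):
    """Recompute the Merkle root from the leaf and its audit path, and
    compare it with the root published in the log's signed tree head."""
    fn, sn = index, tree_size - 1
    h = leaf_hash(cert_der)
    for sibling in audit_path:
        if sn == 0:
            return False  # proof is longer than the tree is deep
        if fn % 2 == 1 or fn == sn:
            h = node_hash(sibling, h)
            while fn % 2 == 0 and fn != 0:
                fn >>= 1
                sn >>= 1
        else:
            h = node_hash(h, sibling)
        fn >>= 1
        sn >>= 1
    return sn == 0 and h == root_hash
```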

The other solution is public key pinning (also known as certificate pinning), which works like this. Each website includes the hash of its public key (or the public key of the certificate authority it asked to issue its certificate) in an HTTP header on each response it serves. Browsers are expected to store these hashes the first time they visit a new website. Whenever a browser visits the website again, it must check that the public key in the certificate (or the public key of the certificate issuer) it receives this time matches the hash stored for this website on the first visit. This ensures that any change to the certificate (and thus any attempt at a man-in-the-middle attack) will be detected.
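
As an aside: HPKP delivers these hashes in the Public-Key-Pins response header (RFC 7469). The sketch below (Python, using the pyca/cryptography package; the file name site.pem is just an illustration) computes such a pin, which is the base64-encoded SHA-256 hash of the certificate's DER-encoded SubjectPublicKeyInfo.

```python
import base64
import hashlib

from cryptography import x509
from cryptography.hazmat.primitives import serialization

# Load the site's certificate (the file name is just an illustration).
with open("site.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# The pin is the SHA-256 hash of the DER-encoded SubjectPublicKeyInfo.
spki = cert.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()

# The resulting response header; RFC 7469 also requires a backup pin.
print('Public-Key-Pins: pin-sha256="%s"; max-age=5184000' % pin)
```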

But the problem really is more fundamental

Both solutions described above keep the basic idea of a public key infrastructure intact. They still trust certificate authorities to issue certificates that bind public keys to domain names. This by itself is a problem.

Recall what I said in the introduction: many people have a poor understanding of what authenticity on the web really means. One of my favourite papers on this topic ('The nature of a usable PKI', by Carl Ellison) talks about this problem extensively. One problem Ellison identifies is that names (e.g. domain names) may be globally unique, yet this does not mean that they are globally meaningful. For example, even for a global, well-known brand like Apple, there exist other well-known companies, like Apple Records, with a very similar name that could easily be confused. If you heard about Apple Records (and never heard of Apple computers), would it be wrong for you to think that apple.com was its website? Similarly, is there any particular reason to believe that amazon.de belongs to the same company Amazon as the company running amazon.com? For these (and other) reasons, Ellison argues that the current method of assigning permissions to names (using access control lists) and assigning names to keys (using identity certificates) is wrong. And he proposes to assign permissions directly to keys (using authorization certificates) to avoid the fraught indirection through names.

His observations equally apply to the problem of authenticating websites. The core of the issue is that you should not rely on a name (and a certificate) to obtain the public key that belongs to it. Or rather, one should not interpret an identity certificate as something that binds a key to a name, but as something that binds a name to a key: given a key, you can 'reliably' obtain the name of the entity controlling it (but not the other way around).

In fact, when authenticating websites, we really don't care about the name of the website. All we care about is whether it is authentic: whether the website we are visiting now is the same website we visited before (and that we enjoyed, laughed at, or were surprised by...). This is what authenticity on the web is all about. Being able to tell that we are visiting the same website as before allows us to establish a trust relationship with that website over time. With every positive interaction our trust increases (and with every bad experience our trust drops). We don't need the website's name for that.

The real solution: get rid of certificates

Once we realise the true nature of authenticity on the web, the solution becomes trivial. Instead of relying on certificates, browsers store the public keys of the websites we visit (yes, this is very similar to public key pinning, but with significant differences, which I discuss below). It works like this.

Every time you visit a website, it will send your browser its public key. If this is the first time you are visiting this website, a warning message will appear. Your browser will provide some useful information about this site (see below) and ask you whether you want to continue and really visit this site. If you agree, the public key is stored and associated with this particular site (or rather, its domain name). The next time you visit this site, no question is asked, and the public key the browser stored is used to set up a secure and authenticated connection with the website. This prevents man-in-the-middle attacks and ensures that you are talking to the same website as before.
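
A minimal sketch of this browser-side logic, in Python; the storage location, JSON format and function names are my own invention for illustration, not any existing browser's API:

```python
import hashlib
import json
import os

# Illustrative storage location; a real browser would use its profile store.
STORE = os.path.expanduser("~/.browser/known_keys.json")

def load_store():
    try:
        with open(STORE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def save_store(store):
    os.makedirs(os.path.dirname(STORE), exist_ok=True)
    with open(STORE, "w") as f:
        json.dump(store, f)

def check_site(domain, spki_der, user_approves):
    """TOFU check: pin the site's key on first visit, compare afterwards.
    user_approves is the first-visit dialog (a callback returning a bool)."""
    store = load_store()
    fingerprint = hashlib.sha256(spki_der).hexdigest()
    known = store.get(domain)
    if known is None:
        # First visit: warn the user; pin the key only if they agree.
        if not user_approves(domain, fingerprint):
            return False
        store[domain] = fingerprint
        save_store(store)
        return True
    # Later visits: silently accept if and only if the key is unchanged.
    return known == fingerprint
```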

If the public key you receive when visiting a website differs from the one your browser stored for it, this means one of two things. Either you are visiting a rogue site that is trying to pull off a man-in-the-middle attack, or the website you visited some time ago has changed keys in the meantime. The latter happens occasionally, because cryptographic keys need to be changed once in a while. Your browser can easily distinguish these two cases if the new key in the header of the webpage is actually signed with the previous key for the same website (whose public counterpart is stored by the browser). This indicates a legitimate key rollover. For additional security, websites may preannounce when they will refresh their keys, so that browsers know for which period the keys they store are valid.
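
The rollover check itself is a single signature verification. A sketch, assuming the site signs its new public key with its old private key and uses Ed25519 (the choice of signature scheme, and how the signed announcement is transported, e.g. in a response header, are my assumptions):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def rollover_is_legitimate(stored_key_raw, new_key_raw, signature):
    """Accept a new key only if it is signed with the key we pinned earlier.
    Server side (sketch): signature = old_private_key.sign(new_key_raw)."""
    stored_key = Ed25519PublicKey.from_public_bytes(stored_key_raw)
    try:
        stored_key.verify(signature, new_key_raw)  # raises if invalid
        return True
    except InvalidSignature:
        return False
```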

How does this prevent phishing?

Now if you click a link to a phishing site, this site is most likely one you have never visited before. (Unless the phishers hacked a well-known and often-visited site, but that is something that cannot be solved by securing or authenticating the connection.) As a result, your browser will present you with a dialog asking whether you really want to visit this new site.

It is vital that the user is presented with clear and concise information that will help him or her to make the right decision here. Of course the domain name of the site about to be visited should be clearly visible. But what else? Some UX designers should really jump in here to create a seamless yet secure user experience. One thing I can imagine is a system where, as soon as the dialog is created, information about the domain being visited is displayed, drawn from various sources (e.g. Google, or known-phishing-domain databases like PhishTank) that have a global network picture and hence are in a unique position to tell very early on whether a certain domain is untrustworthy. Such services could in turn be fed with information from users across the world using this system, by collecting their decisions when responding to the website-visited-for-the-first-time dialog.
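
To sketch the idea (the reputation services and endpoints below are placeholders, not real APIs; PhishTank and Google do offer lookup services, but I am not reproducing their actual interfaces here):

```python
import urllib.parse
import urllib.request

# Placeholder reputation endpoints: these URLs are NOT real APIs.
REPUTATION_SOURCES = [
    "https://reputation.example/check?domain=",
    "https://phishing-db.example/lookup?domain=",
]

def domain_verdicts(domain):
    """Ask several independent sources for a verdict on a new domain, so
    the first-visit dialog can show more than just the domain name."""
    verdicts = []
    for base in REPUTATION_SOURCES:
        url = base + urllib.parse.quote(domain)
        try:
            with urllib.request.urlopen(url, timeout=2) as response:
                verdicts.append(response.read().decode())
        except OSError:
            verdicts.append("no answer")  # degrade gracefully, but say so
    return verdicts
```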

Of course, care has to be taken here to ensure that the process of identifying possibly malicious sites is transparent, and is not abused, e.g. for censorship to block undesirable content.

Some more usability aspects

We should avoid a situation where a user is presented with a large number of warning messages; the user would quickly get annoyed and no longer pay attention. The first time a user starts surfing the web, all websites will be new to him and his browser, so he will get to see quite a few warning messages. I tried to find information about the average number of websites people visit regularly (and that they would have to verify once before being able to visit them securely forever), and the number of websites that they visit maybe once or twice a year (for which the above process may be too heavy-handed). I couldn't find any, so if you know some reliable statistics, let me know and I will update this post.

To reduce the number of warning messages significantly, browsers should by default ship with the public keys of the most popular websites, so that these are available right after the browser is installed. In any case, to prevent users from automatically accepting a new site without verifying the information, the default choice in the dialog should be to cancel and not visit the site.

TOFU: Trust On First Use

The method outlined above implements the TOFU (Trust On First Use) principle, which is used by ssh (a secure remote access protocol), for example. The main drawback of this approach is that it shifts the problem of preventing a man-in-the-middle attack to the first time you visit a website. How do you know this new website you are visiting is genuine? How do you know you can trust it? Well... the idea of using a kind of global network view, outlined above, is supposed to detect the obviously malicious sites. But actually, building real trust is a process that takes time, within which your experiences dealing with the site (and knowing it is the same site each and every time you visit it) determine your current confidence in dealing with this particular website.
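
For comparison: ssh records the keys it has accepted in the ~/.ssh/known_hosts file, one line per host. An (illustrative, truncated) entry looks like the line below, and ssh shows its well-known man-in-the-middle warning whenever the key a server presents stops matching it:

```
example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFbWqj3N...
```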

How does this differ from public key pinning?

At first sight the new approach does not appear to differ much from public key pinning. Both use the TOFU principle, and both rely on the browser to keep information about domains to detect man-in-the-middle attacks. Yet there are significant differences.

First and foremost, the new approach no longer relies on certificates at all. Given the fundamental issues with certificates discussed above, they should be avoided and abandoned altogether. Relying on certificates keeps attack vectors open that the new approach no longer suffers from. In particular, sites no longer have the option to specify a (hash of a) public key belonging to a certification authority that the site trusts to issue its certificates. Hence compromise of such a certification authority (and such compromises have happened, also for CAs that were deemed too big to fail) is no longer an issue. And if certificates become irrelevant, it is much cheaper and easier to do without the services of a CA altogether.

Second of all, the new approach aims to leverage global network insights (from, say, Google or other large data brokers that have reliable information about the background of a particular domain) to inform users about the pedigree of a new website they are visiting. This should increase the usability of the approach and reduce the risk that users become victims of phishing scams.

Practical aspects

The approach outlined above can easily be implemented in practice. We can still use TLS as the underlying secure communication protocol. Some minor changes would be required to not exchange certificates but instead rely on public keys stored by the browser (or keys sent by the webserver the first time the site is visited). But we can avoid even those changes by relying only on (and accepting only) self-signed certificates, and letting the browser store and use the public keys contained in them.
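
To illustrate that last variant: a client can already do this today on top of a standard TLS stack, by accepting whatever (self-signed) certificate the server presents, extracting its public key, and comparing that against the pinned fingerprint. A rough sketch in Python (the function name and fingerprint format are mine; error handling is minimal):

```python
import hashlib
import socket
import ssl

from cryptography import x509
from cryptography.hazmat.primitives import serialization

def connect_pinned(domain, pinned_fingerprint, port=443):
    """Open a TLS connection and check the server's public key against a
    previously pinned fingerprint instead of the CA store."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # no CA-validated name check...
    ctx.verify_mode = ssl.CERT_NONE  # ...and accept self-signed certs

    sock = ctx.wrap_socket(socket.create_connection((domain, port)),
                           server_hostname=domain)
    try:
        cert = x509.load_der_x509_certificate(sock.getpeercert(binary_form=True))
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        if hashlib.sha256(spki).hexdigest() != pinned_fingerprint:
            raise ssl.SSLError("pinned key mismatch for " + domain)
    except Exception:
        sock.close()
        raise
    return sock
```

The fingerprint here is the SHA-256 hash of the SubjectPublicKeyInfo, matching the TOFU store sketched earlier.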

In case you spot any errors on this page, please notify me!
Or, leave a comment.
François, 2017-02-26 18:55:37

A possible alternative, without ditching certificates (although I'd be all for ditching them in the future), could be a great reduction of the number of CA certificates in browsers. Ideally it'd be restricted to a manageable number of CAs, where you'd have a handful (or maybe only Let's Encrypt?) for DV certificates and a limited number for EV certificates. It would be great to have ~10-20 CAs in your browser (explicitly without sub-CA capability) and complete certificate transparency logs.

This way, you could link different browser capabilities depending on the level of trust. For example, HTTP only would completely disable JS, cookies and not even allow <form> submits. A DV certificate would allow “normal” operation, but block things like submitting credit card numbers or SSNs. An EV certificate would remove those restrictions.

Furthermore, a CA could be tied to a (subset of) possible (cc)TLDs it may sign for, using some kind of X.509 extension. For example, only EU CAs could sign EV certificates for .nl or .de domains, but they would be unable to do that for .us.

This could all be done in parallel to implementing TOFU and for example DANE with DNSSEC.

Matthew Gladman, 2017-02-28 08:28:44

There are some practical problems with the given solution (in the form of prompts): 1. You're absolutely right that people will ignore it; the ones that most need protecting from phishing are the least likely to read a prompt. They just understand that sometimes websites prompt, and clicking yes lets them get on with their day. 2. If a site gets sources from several domains (images, iframes, XHR, etc.), all of which have never been seen before and all use HTTPS, there could be a large collection of prompts.

Although I am all for getting rid of CAs, the inherent problem is that you need to trust something, but you can't trust anything at all.

Even with the solution you outlined, where the web browser provides a list of known keys, that list could easily be written to by a malicious actor at the OS level, or governments could get involved and add entries, etc. And we're back at the same problem as CAs.

Also, the web is huge; lots of new and valid sites come out every day. Is it fair that they get a scary warning while other sites don't?

Mans, 2017-02-28 09:30:13

Have a look at Convergence, Moxie Marlinspike's old project: https://en.wikipedia.org/wiki/Convergence_(SSL)

It is dead in the water now, but it is the same idea.

Jaap-Henk, 2017-02-28 10:47:03

It’s really different from Convergence as it does not rely on any notaries. See the discussion on certificate transparency in the post.

Matthaus Woolard, 2017-02-28 09:58:14

@François

Nice idea. On limiting functions for non-SSL etc.: another option would be for browsers to require certificates to be signed by more than one root. And browsers could then try to encode pairings of roots that are not trusted together.

Some roots would be able to sign by themselves, but only for dedicated subdomains: e.g. the Chinese government root could sign *.cn.gov, but to sign any other domain it would need a cross-signature from a non-government-controlled CA.

This would also allow CAs to effectively report the bad practices of other CAs, since all phishers would need to get double signatures, and therefore two CAs would be involved; if the second CA does not trust the phisher, it would be able to report the bad actions of the first.

If two CAs are seen to partner up often, browsers could also impose a restriction that, after a given date, new certificates can no longer be cross-signed by these two CAs, so as to stop CA 'buddies' from enabling bad practice.

asdasd, 2017-02-28 10:43:29

In the past we had something like TOFU for cookies. Remember? In the old days browsers used to ask if you would like to allow cookies for the current website. I think most browsers still support this feature, but actually nobody uses it (most users just allow cookies for every domain).

Furthermore, you are missing the aspect of multi-device usage.

I respect your idea, but actually I think it is flawed, since the users you want to help would not even be able to identify the right website during first use. I do not like the current system either, but I think it is still better than your proposal ;-)

If you do not trust the CAs, just remove their root certificates from your PC, and your browser will ask you every time you visit an https website for the first time -> problem solved.

Mark Koek, 2017-02-28 13:19:42

As a professional phisher (white hat of course :) - at https://phishingtest.nl/) this does not really bother me. Users can easily be enticed to click through such warning messages. Especially because they will quickly become used to being warned about legitimate sites. You can’t just say “let some really smart UX designers solve this” - it’s quite a fundamental issue.

Christian Harms, 2017-02-28 14:41:53

What if other (trusted) browsers shared the public keys they have accepted, so they can be used for validation?

Then the Internet would be your certificate authority!

Ninju Bohra, 2017-02-28 17:44:13

Maybe we can do something like a 'ledger' (to steal from the Bitcoin vernacular) of internet-wide trusted certs, and ONLY if you are presented with a cert that does not match the 'ledger' will you be prompted to accept/decline the cert.

benton, 2017-02-28 18:29:39

As the post points out, the problem is the lowest-level “web of trust”, which has to be rooted somewhere.

One potential solution for this is to leverage existing (social) networks of trust. Instead of being rooted in just one place, or in too many places, the web of trust can be rooted in several well-known public places. Proof of an entity’s public key is posted to Facebook, Twitter, LinkedIn, GitHub, etc. and all of them are checked at the time of verification.

This idea has been implemented nicely at https://keybase.io/

Marcus Kool, 2017-03-01 00:48:25

The above describes a defense only against MITM attacks, and is vulnerable to viruses that control a computer (i.e. can write to files on the disk). Being able to modify files, a virus can add a fake CA certificate to the trusted list of CAs, but also modify the file where a browser stores the public keys that it found on previous visits to sites. Note that a virus can modify anything that the browser might consider a list of 'safe things', whether it be keys, CAs or other identifiers.

To defend against a virus modifying browser files, the browser should be able to determine the authenticity of a website without using a simple list of keys/CAs/etc. There are proposals to use DNSSEC records for this purpose. One simple way is to publish in DNS the public key of the certificate on the website. This way we can probably dump the CAs and use self-signed certificates on our sites.

Jaap-Henk, 2017-03-01 08:01:50

If we cannot assume that the browser (or computer) can store a key to verify things it receives, all bets are off: also information provided through DNSSEC is signed and must be verified.

Tobias Herkula, 2017-03-02 17:12:10

DBOUND and DANE+DNSSEC should solve a lot of current issues.

The main discussion is not about PKIs or certs; it's about trust. A lot of people need to accept that trust has to be built up; all the solutions we currently have try to hide this fact, or even claim that trust is an easily achievable commodity.

And there will never be a solution to this “trust” problem. Perhaps “zero knowledge” approaches can solve that, but the solution would then shift to a state where “trust” is not needed at all and the solution is provable by math…

Jaap-Henk, 2017-03-02 17:38:45

I agree that the discussion (in the end) is about trust and that this needs to be built up. My argument is that one of the tools to build trust is for users to get assurance that the site they are visiting now is the same site they visited before. Trust on first use (TOFU) works better for this than a PKI; it has the added benefit that it does not need any centralised infrastructure. DANE+DNSSEC is just another PKI in a different guise, so it doesn't solve the issue either.

Tobias Herkula, 2017-03-03 11:11:54

But it makes a difference: if you don't trust a couple of thousand CAs, nobody will blame you, but if you can't trust DNS, then you have a completely different problem. DANE+DNSSEC helps to provide the assurance after the first use of a remote resource. DBOUND additionally helps to relate a lot of other resources to the same trust anchor.

Jaap-Henk, 2017-03-03 11:21:36

Wasn’t the whole point of having authentication in TLS that we don’t want to trust only DNS?…

Tobias Herkula, 2017-03-03 11:46:02

DNSSEC provides a way to trust the DNS response; DANE provides a way to validate a presented cert without the need for a centralized cert PKI. TLS then secures the connection to the resource. And DBOUND groups resources to a single trust anchor across domain boundaries.

To your question: simply "no", DNS and TLS are not related.

Jaap-Henk, 2017-03-03 12:13:51

W.r.t. your last remark, I beg to differ.