A few days ago I talked about how to fix TLS by ditching certificates and using public keys sent by the websites themselves to authenticate them. That proposal attracted a fair amount of criticism, and I realised I didn’t explain the idea very well. So here is an update that addresses the comments and explains the idea better and more precisely. Read the original post for more context and background.

The basic idea

A few days ago I explained the idea together with a mechanism to detect phishing attacks. That made the protocol more complex, and created confusion. So let’s try again, explaining the basic idea first.

Whenever a browser sets up a new TLS connection with a domain, the web server serving that domain responds with its public key (instead of a certificate, as is currently the case) in the initial TLS handshake. (This is more precise than saying that the web server sends its public key in the header of every page it serves.) The web server also specifies the date when the key will expire. The first time a browser visits a new domain, for which it does not yet know the public key, it accepts the public key ‘in blind faith’ and stores it (together with the expiry time) as the public key for this domain. This is the Trust-On-First-Use (TOFU) principle. This first connection, and every subsequent connection to this domain, is authenticated using the public key stored by the browser, until the key expires.
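In code, the TOFU rule described above could look like the following minimal sketch. The store layout, function names, and return values are my own invention for illustration, not part of the proposal itself.

```python
import time

# Hypothetical in-browser pin store: domain -> (public key bytes, expiry time).
pins = {}

def check_server_key(domain, server_key, expiry):
    """Apply the TOFU rule to the key presented in the TLS handshake."""
    now = time.time()
    pinned = pins.get(domain)
    if pinned is not None and pinned[1] <= now:
        pinned = None          # pinned key expired: fall back to first use
    if pinned is None:
        # First visit (or expired pin): accept the key 'in blind faith'.
        pins[domain] = (server_key, expiry)
        return "accepted-on-first-use"
    if pinned[0] == server_key:
        return "accepted"      # matches the stored key
    # Mismatch and no valid key update: terminate the connection, warn the
    # user, and optionally notify a global observatory.
    return "rejected"

print(check_server_key("example.com", b"key-A", time.time() + 3600))  # accepted-on-first-use
print(check_server_key("example.com", b"key-A", time.time() + 3600))  # accepted
print(check_server_key("example.com", b"key-B", time.time() + 3600))  # rejected
```

A real browser would of course persist the store and distinguish the key-update case (handled below) from a plain mismatch.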

If authentication fails, or if the web server responds with a different key than the browser previously stored for the domain (and this is not a valid key update, see below), the connection is terminated (and a warning message displayed to the user). Some global observatory could be notified of this event, because it signals that the domain may be compromised.

Note that in this basic protocol no warning message is displayed when a new domain is visited. Hence, any third party content on the website from unknown (but valid) domains is accepted silently (as is currently the case with certificates).

Updating keys

Before a key expires the web server must generate a new key pair for the domain. To securely inform browsers of the new public key, without allowing adversaries to announce their key for this domain, the new public key is signed with the old private key. Actually, every public key sent by a domain during the TLS handshake is signed using the previous private key (except for the very first public key for a domain of course).

If a browser receives a new public key for a domain for which it stores an earlier key, it needs to verify the signature on this new key. If the browser currently stores the immediately preceding public key, it can verify the new key directly. If the key stored by the browser is older, the browser asks the web server to provide all (signed) public keys used by the web server in the meantime, giving it a chain of public keys and signatures from the old key it stored to the fresh new key it just received.

The web server stores all public keys it ever used, and their signatures, for this purpose. The corresponding private keys are destroyed as soon as they expire.

With this chain the authenticity of the new key can be established. If the key is valid, the browser stores the new key (and the new expiry time) for this domain and discards the old key.
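The chain walk the browser performs can be sketched as follows. This is a toy illustration: `toy_sign` uses HMAC, a symmetric primitive, as a stand-in for a real signature scheme such as Ed25519, so ‘public key’ and ‘private key’ coincide here; only the chain-verification logic is the point.

```python
import hmac, hashlib

def toy_sign(key, msg):
    # HMAC stands in for a real signature scheme (e.g. Ed25519). HMAC is
    # symmetric, so this illustrates the chain walk, not the cryptography.
    return hmac.new(key, msg, hashlib.sha256).digest()

def toy_verify(key, msg, sig):
    return hmac.compare_digest(toy_sign(key, msg), sig)

def verify_key_chain(stored_key, signed_keys):
    """Walk the (public key, signature) records the server provides, starting
    from the key the browser still has pinned; each key must carry a valid
    signature made with its predecessor."""
    current = stored_key
    for key, sig in signed_keys:
        if not toy_verify(current, key, sig):
            return False
        current = key
    return True  # every link checked out; `current` is the fresh key to pin

# The server rolled over twice since the browser last visited:
k1, k2, k3 = b"key-1", b"key-2", b"key-3"
chain = [(k2, toy_sign(k1, k2)), (k3, toy_sign(k2, k3))]
print(verify_key_chain(k1, chain))                     # True
print(verify_key_chain(k1, [(k3, toy_sign(k1, k2))]))  # False: broken link
```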

Dealing with key compromise

Websites are hacked regularly. This may compromise the private key the web server is currently using to prove authenticity and reveal it to the adversary. If that happens, the key update protocol outlined above can also be used by the adversary to convince a browser to install a new key the adversary controls for the domain.

Systems like Perspectives have been proposed to address this issue: trusted notaries provide information about the public keys they have seen for the domain in question. However, if a powerful attacker is very persistent in disseminating its public key to all corners of the Internet, it may overpower a small web site trying to advertise its new key. At some point it becomes hard to tell which key is the right one.

Some out-of-band method, not involving ‘live’ keys on web servers that are relatively easy for adversaries to obtain, needs to be relied on instead. This should be investigated further.

In a way the old certificate-based PKI provides such a method, where a domain can request a new certificate for a new, safe, key it generated. (Killing the rogue certificate issued by a malicious or careless certification authority is less straightforward however, as certificate revocation has its own problems.)

Preventing key compromise

To prevent such ‘straightforward’ key compromise, the web server could deploy layered signing keys. In such a setup a master signing key is stored in a secure environment (say, an HSM). The web server is only given a temporary private key, for which the master signing key issued a certificate. This essentially creates a mini-PKI. If the temporary key is compromised, the master key is still valid and can generate a certificate for a new temporary key (once the web server has recovered from the compromise).

The temporary private key is used by the web server to authenticate TLS connections. To allow browsers to verify this, the web server sends not only the public key corresponding to the master signing key, but also the certificate for the current temporary key during the TLS handshake phase. As before, the browser only stores the master public key for a new domain. It uses that key to verify the certificate, and uses the public key in the certificate to verify the authenticity of the TLS connection.
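A minimal sketch of this layered setup, again with HMAC as a loudly-labelled stand-in for a real asymmetric signature scheme, and with all names hypothetical:

```python
import hmac, hashlib

def toy_sign(key, msg):
    # Stand-in for a real signature; in practice the master key would live
    # in an HSM and sign with an asymmetric algorithm.
    return hmac.new(key, msg, hashlib.sha256).digest()

def toy_verify(key, msg, sig):
    return hmac.compare_digest(toy_sign(key, msg), sig)

# Server side: the master key certifies a temporary key (a one-level mini-PKI).
master_key = b"master-secret"
temp_key = b"temporary-secret"
temp_cert = toy_sign(master_key, temp_key)

# Handshake: the server authenticates the connection with the temporary key.
transcript = b"handshake transcript"
handshake_sig = toy_sign(temp_key, transcript)

# Browser side: only the master (public) key is pinned for this domain.
def verify_handshake(pinned_master, temp, cert, msg, sig):
    # First check the certificate on the temporary key, then the handshake
    # signature made with that temporary key.
    return toy_verify(pinned_master, temp, cert) and toy_verify(temp, msg, sig)

print(verify_handshake(master_key, temp_key, temp_cert, transcript, handshake_sig))  # True
```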


The advantage of this approach is that there is no longer a risk that a third party you have never heard of issues a rogue certificate for a domain. This is a security improvement. In fact, web sites no longer need to rely on (and usually pay) third parties to provide the certificates currently essential to prove the authenticity of their site.

Moreover, compared to other approaches to address the problem (like certificate transparency), this approach is very privacy friendly: you do not rely on any third party to provide you with additional information on request. Such a request would leak information about your browsing behaviour to that third party. (This is actually a drawback of the anti-phishing extension to be discussed below.)

It also becomes easier to properly configure shared hosting sites, routers, or Internet of Things devices (whose domain name or IP address is not fixed when manufactured).

There are also disadvantages. As discussed above, key compromise (and servers forgetting their keys) is harder to handle than in the current PKI.

Also, storing the public keys of all domains you visited essentially creates a crude browsing history that cannot be deleted. This thwarts any ‘clear history’ command or private browsing mode (when new domains are visited during that mode). Perhaps this can be mitigated by storing salted hashes of domains and keys, which allow the browser to verify domain/key combinations without storing them in plain sight. But this needs further investigation.
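A sketch of that mitigation (all names illustrative): the browser keeps only salted hashes and checks membership, so the pin store no longer doubles as a readable browsing history.

```python
import hashlib, os

salt = os.urandom(16)  # per-browser random salt

def pin_digest(domain, pubkey):
    # One opaque digest per pinned (domain, key) pair.
    return hashlib.sha256(salt + domain.encode() + pubkey).digest()

store = set()
store.add(pin_digest("example.com", b"key-A"))

# The browser can still verify a (domain, key) combination it is presented
# with, but the store itself does not list which domains were visited.
print(pin_digest("example.com", b"key-A") in store)  # True
print(pin_digest("example.com", b"key-B") in store)  # False
```

Note the limitation: anyone who obtains the salt and the store can still test guessed domain/key pairs offline, which is one reason this needs further investigation.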

The TOFU (Trust-On-First-Use) principle has its own fundamental drawbacks. For example, what happens if you use your new laptop, or set up the smartphone you just bought, on a public WiFi hotspot? The chances that an adversary injects its own keys into the connection increase dramatically in that case.

Finally, there are some issues with the authenticity of third party content (discussed in a bit more detail below). Also, users who use multiple devices or browsers to surf the web need a way to securely synchronise information about the public key associated with a domain.

Preventing phishing

With a small change, the basic protocol can also be used to warn users about phishing attempts. (Whether a totally different method should be used for this is of course up for debate; someone suggested using password managers for this instead.)

The idea is to show a dialog to the user whenever the browser encounters a new domain for which it hasn’t stored a public key yet. The dialog should display the domain name of the site the user is about to visit, together with some context information (obtained from third parties like Google, or from known phishing domain databases like PhishTank) that helps the user decide whether he really wants to connect to this new website he has never visited before. As phishing attempts typically try to lure users to fake copies of websites they regularly visit, this should raise an alarm with the user when he clicks such a link. (Like I said in the original post, whether this really works well depends on the number of warnings this will trigger, the reliability of the context information the dialog provides, and a proper UX design. And yes, I do think that we can let people make security decisions, even on the Internet, provided we give them the right information.)

This dialog should only be displayed, though, for the domain of the main page being displayed, i.e. the domain of the URL clicked on or entered in the browser manually. Any third party content loaded by that page should be treated as before: for any unknown domain the keys should be accepted silently. First, because we don’t want to bombard the user with warning messages. Second, because most users are not aware that so much third party content is loaded for the pages they visit. And third, because the third party content comes from domains the user has never heard of (and for which he cannot make any sensible decision as to whether to accept that domain or not).

This, by the way, reveals an interesting issue with how third party content is authenticated on the web right now. As argued above, third party content cannot be authenticated by the user (or rather his browser). It is the web site that knows and uses this third party content, and therefore the web site that should authenticate it. In the current system this happens implicitly: your browser securely loads a page with third party content from an authenticated source. This page refers to third party content using URLs (which are authentic because the page is securely loaded). And the third party content is securely loaded because a certificate for the domain in the URL can be used to authenticate the web server serving that content. Here Ellison’s argument against using identity certificates for this purpose does not apply, as the ‘names’ are authentic (they are well defined and precise).

In the current proposal this implicit authentication of third party content is no longer provided, making it theoretically possible for adversaries to attack users through the third party content they (unknowingly) visit when surfing the web. To be clear, there is only a very small window of attack, namely when the user visits the domain serving the third party content for the very first time. And the adversary has to actively mount a man-in-the-middle attack at that precise moment. But it is a risk nonetheless.

A possible countermeasure is to make the web site responsible for explicitly authenticating any third party content it relies on. This can be done in two ways. First, the web site could include the public keys of all third party content providers in the pages it serves. This means it has to regularly check for key changes occurring at those third party content providers. But I do not think it is necessarily a bad thing if web sites regularly check the third party content they rely on. The second option is more drastic: extend HTML to require a hash to be present for every piece of third party content loaded from a page, so that any changes to that content can be detected (by the browser loading that page and its third party content). Of course this only works for static content (but it protects web sites from unexpected changes to the third party content they rely on, which may be a good property in certain circumstances).
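The hash-based option can be sketched as follows, with all URLs and names made up for illustration. (The W3C’s Subresource Integrity mechanism for the current web works along these lines.)

```python
import hashlib

def expected_hash(content):
    return hashlib.sha256(content).hexdigest()

# The page embeds, per third party resource, the hash of the bytes it expects;
# here modelled as a manifest extracted from the page by the browser.
page_manifest = {
    "https://cdn.example/lib.js": expected_hash(b"function f() {}"),
}

def check_third_party(url, fetched_bytes):
    # The browser only runs the content if it matches the hash in the page.
    want = page_manifest.get(url)
    return want is not None and expected_hash(fetched_bytes) == want

print(check_third_party("https://cdn.example/lib.js", b"function f() {}"))  # True
print(check_third_party("https://cdn.example/lib.js", b"evil()"))           # False
```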


I would like to thank my followers on Twitter and the people discussing this on Hacker News and lobste.rs for their criticism and remarks, which helped me improve this idea and explain it better.