Only the owner of a cryptographic key pair can decrypt a message encrypted against its public key. Therefore, if you want to send a message securely to another person, you have to know and use his public key to encrypt the message. You have to be certain that the key belongs to that person, and not to somebody else who is trying to eavesdrop on your communication. This is why many secure communication apps allow you to verify keys using a short fingerprint that is uniquely tied to the key and that can be verified 'out of band'. This means you ask for someone's fingerprint (over the phone, or by looking at his business card) and compare it to the fingerprint your app shows for that person's key. Apple's iMessage is a notable exception, though. And frequently criticised for it.
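In practice such a fingerprint is usually just a truncated cryptographic hash of the public key material. A minimal Python sketch of the idea (the hash choice, truncation length and key bytes are illustrative assumptions here, not any particular app's actual scheme):

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Derive a short, human-comparable fingerprint from a public key."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Truncate and group the hex digits for easy reading over the phone.
    short = digest[:16]
    return " ".join(short[i:i + 4] for i in range(0, 16, 4))

# Two different keys yield completely unrelated fingerprints:
print(fingerprint(b"alice-public-key"))    # placeholder key bytes
print(fingerprint(b"mallory-public-key"))
```

Because the fingerprint is derived from the key itself, substituting a different key unavoidably changes the fingerprint.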
One and a half years ago I wrote a blog post about the value of verifying public keys. I argued that verifying public keys has limited value, in particular for an app like iMessage. The main argument was that Apple has three different ways to get to your data: through the hardware, the operating system, and the iMessage app itself. All three are made by Apple. Verifying the key doesn't help much: you have to trust Apple anyway. Even for other apps, the value of verifying keys is limited: you still have to trust the app not to leak the messages you are sending in some other, unobservable, way.
But this blog post by Nick Weaver made me reconsider.
Recall that in order to securely send a message to Alice using iMessage, Bob's iPhone fetches Alice's public keys (more than one if Alice has more than one device that can receive iMessages) from the iMessage keyserver operated by Apple. The issue Nick rightly points out is that
to tap Alice, it is straightforward to modify the keyserver to present an additional FBI key for Alice to everyone but Alice
The message is then encrypted to both Alice's real key and the FBI key. The user interface of iMessage does not tell the user anything about the keys being used to send the message, so the user cannot detect that this is going on. (In fact, Apple doesn't use certificate pinning, so the FBI could even set up its own iMessage keyserver and, with a man-in-the-middle attack, redirect all requests for keys to that server.)
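To make the attack concrete, here is a toy model of this protocol shape in Python (fake key material, and XOR against a hash as a stand-in for real public-key encryption; nothing here resembles Apple's actual implementation). The sender wraps a fresh content key for every key the server returns, and cannot distinguish a legitimate device key from an injected one:

```python
import os
import hashlib

def wrap(content_key: bytes, device_key: bytes) -> bytes:
    """Toy stand-in for public-key-encrypting the content key to one device."""
    pad = hashlib.sha256(device_key).digest()
    return bytes(a ^ b for a, b in zip(content_key, pad))

def encrypt_to_recipient(message: bytes, keys_from_server):
    """Encrypt a message to every key the keyserver returned.

    The sender faithfully wraps the content key for all of them; it has
    no way to tell a legitimate device key from an injected one.
    """
    content_key = os.urandom(32)
    return {
        # A real system would encrypt `message` under `content_key` with
        # an authenticated cipher; elided in this sketch.
        "ciphertext": b"<message encrypted under content_key>",
        "wrapped_keys": [wrap(content_key, k) for k in keys_from_server],
    }

# The keyserver silently appends an extra key; the sender cannot notice.
served_keys = [b"alice-iphone", b"alice-ipad", b"injected-fbi-key"]
envelope = encrypt_to_recipient(b"hi Alice", served_keys)
print(len(envelope["wrapped_keys"]))  # 3 -- one copy readable by the eavesdropper
```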
Thinking about this I realised the real underlying issue is the following. Can a law enforcement agency coerce a trustworthy service provider to cooperate? And can a service provider convincingly prevent this from happening, either legally or technically?
There are three cases to consider. In the first case, the system is so poorly designed that law enforcement can get at the data it wants by subverting or bypassing the security mechanisms in place. No cooperation from the service provider is required. This is the case for older smart phone models (that can be sucked empty using a number of forensic tools), and (as explained above) also for iMessage (without certificate pinning).
In the second case law enforcement needs passive cooperation from the service provider, asking it to hand over any data that can help law enforcement get at the information it wants. This is the case if the service provider stores the data unencrypted on a central server, or if the service provider keeps some centrally stored keys that can be used to decrypt stored data or intercepted communication. We call this method of cooperation 'passive' because the service provider does not have to actively change something or start doing something to fulfill the request. The functionality of the service it provides is not changed in any way. It merely has to hand over the data and keys it already has. (So a request to start logging something that is otherwise not logged is not a passive cooperation request. Installing an interception device, or providing access to the data through technical means, is a form of passive cooperation, however.)
In the third case law enforcement needs active cooperation from the service provider, asking the service provider to make changes to system components (either centrally, or at the user devices) that change the functioning of the system and allow law enforcement to get at the data. This is the case if a central key server must be changed to serve different (law enforcement controlled) keys, if a server must start logging something that wasn't logged before, or if a software update must be prepared to create a backdoor. We call this method of cooperation 'active' because the service provider has to actively change something or start doing something to fulfill the request. In the case of active cooperation requests, we make a further distinction between local requests, where the service provider only has to change some of its local (central) systems, and remote requests, where the service provider must not only change its local systems but must also update remote (user controlled) devices (like the user's PC, smart phone, etc.).
(By the way, this distinction between passive and active cooperation resembles the distinction between active and passive adversaries used in the security and cryptographic research communities.)
Most legal systems in the world require businesses to cooperate in a passive way with law enforcement or the intelligence services. Businesses that store any information that might be relevant for the investigation of a crime need to provide that information to law enforcement when requested. (The request has to be done in some controlled, legally sound, way. The details vary from country to country.) Interception of communications by law enforcement is also often allowed, especially in a targeted fashion. The necessary equipment to make this possible has to be installed by telecommunication providers. Access to the equipment is again controlled, based on some process ensuring the request for 'lawful interception' is indeed lawful.
Active cooperation is a different matter. For example, the proposed update to the Dutch law regulating the intelligence agencies (the WIV) specifically only allows the intelligence agencies to eavesdrop on communication and to try to undo the effects of encryption (through cryptanalysis or by asking service providers for the decryption keys). It does not allow them to interfere with the normal functioning of the communication systems they monitor. In other words, subverting systems or setting up fake key servers is not explicitly allowed.
On the other hand, active cooperation is not an imaginary threat. Surespot recently stopped responding to a periodic request for information about government data demands. This suggests it has been served with a subpoena to cooperate with the government. Another source claims to have independent proof that Surespot was backdoored.
Not all active cooperation is necessarily illegal. Until recently, data retention laws were in force in all countries of the European Union. These laws required telecommunication providers to keep the metadata (caller, callee, time and duration of the call) of all communications of their customers for six to twelve months. This constituted a form of local active cooperation. The Court of Justice of the EU has since ruled that this unrestricted form of data retention, based on the EU Data Retention Directive, disproportionately infringes on individuals' privacy rights. Still, data retention laws remain in force in some EU countries to this day.
Making this distinction between cooperation requests clearly shows the value of verifying fingerprints: it forces law enforcement to make a remote active cooperation request. With fingerprint verification in place, law enforcement can no longer ask service providers to make a local change to the key server to start serving law enforcement controlled keys. That would immediately be detected by people verifying the fingerprints of the keys they use.
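A sketch of what that detection looks like on the client, continuing the hypothetical Python snippets above (the key bytes and fingerprint scheme are again illustrative assumptions):

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()[:16]

# Fingerprints verified once, out of band (over the phone, from a business card).
verified_fingerprints = {"alice": fingerprint(b"alice-real-key")}

def check_served_key(contact: str, key_from_server: bytes) -> None:
    """Refuse to encrypt to a key whose fingerprint doesn't match."""
    if fingerprint(key_from_server) != verified_fingerprints[contact]:
        raise RuntimeError("served key differs from verified key: possible interception")

check_served_key("alice", b"alice-real-key")       # passes silently
check_served_key("alice", b"law-enforcement-key")  # raises: substitution detected
```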
The point of fingerprinting is not so much to keep the service provider honest. (As I argued in my previous blog post, there are many ways it can betray our trust without us noticing.) The real value is that it gives the service provider no way to successfully comply with a local active cooperation request. It basically allows a service provider to respond convincingly to a law enforcement request for cooperation by saying that the only way it can help is if law enforcement issues a remote active cooperation request.
This matters in countries with a sufficiently transparent and robust legal system that does limit what law enforcement and the intelligence services are legally allowed to do. A truly trustworthy service provider based in such a country, with the necessary resources to successfully fight the case, can and should challenge such a request for (remote) active cooperation.
In principle, the mere fact that some level of cooperation by service providers is required should mean that law enforcement is forced to work more in the open. In fact, this holds even if only passive cooperation is required. Unfortunately (as mentioned before), in many countries service providers are not allowed to publicly report about such cooperation requests, at least not in any detail. Luckily, some countries have started to allow service providers to publish transparency reports in relatively broad terms. This trend should continue, and countries should allow providers to publish more details.
As we discussed above, the technical setup (the architecture of the system) determines the type of cooperation request law enforcement has to resort to in order to obtain the data it wants. We saw that fingerprinting helps to protect online communication. Also important in this area is the concept of forward security, which ensures that previously intercepted encrypted data cannot be decrypted at a later stage when law enforcement obtains the keys.
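Forward security is typically achieved by deriving each session key from fresh, ephemeral secrets that are thrown away afterwards. A toy Diffie-Hellman illustration in Python (the group parameters are deliberately tiny and insecure, for demonstration only):

```python
import hashlib
import secrets

# Toy Diffie-Hellman group (a Mersenne prime; far too small for real use).
P = 2**127 - 1
G = 3

def forward_secure_session_key() -> bytes:
    """Derive one session key from fresh, ephemeral DH secrets.

    Because `a` and `b` exist only for this session and are then
    discarded, keys seized later cannot decrypt traffic recorded now.
    """
    a = secrets.randbelow(P)           # Alice's ephemeral secret
    b = secrets.randbelow(P)           # Bob's ephemeral secret
    shared = pow(pow(G, a, P), b, P)   # both sides compute the same value
    assert shared == pow(pow(G, b, P), a, P)
    return hashlib.sha256(str(shared).encode()).digest()

# Each session gets an unrelated key; old session keys are never derivable again.
print(forward_secure_session_key().hex())
print(forward_secure_session_key().hex())
```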
Similarly, cloud services can be made more robust against law enforcement cooperation requests if the key to decrypt the data stored in the cloud is stored only on the device of the user, and never leaves that device. When the cloud service provides the option to share data with others, this has to be done in a cryptographically secure way, and may again need to involve some form of fingerprinting to prevent spoofing attempts by law enforcement.
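The storage half of that idea is straightforward client-side encryption. A minimal sketch using the third-party Python `cryptography` package (the function names and data are illustrative assumptions):

```python
# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

# Generated on the user's device; never uploaded anywhere.
device_key = Fernet.generate_key()

def upload(blob: bytes) -> bytes:
    """Encrypt locally; the cloud provider stores only ciphertext."""
    return Fernet(device_key).encrypt(blob)

def download(token: bytes) -> bytes:
    """Only the device holding `device_key` can decrypt."""
    return Fernet(device_key).decrypt(token)

ciphertext = upload(b"my private file")
assert download(ciphertext) == b"my private file"
# A (passive) cooperation request served on the provider yields only `ciphertext`.
```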
All these measures limit the power of law enforcement and the intelligence agencies, require them to be more overt about their intentions and operations, and allow these operations to be challenged in court. This helps to restore some balance of power. A balance that is currently sorely missed.