After Apple released a document last month describing iOS security in detail for the first time, a lively discussion about iMessage security ensued on Hacker News. The main criticism: users do not need to (and in fact cannot) verify the authenticity of the public keys used to encrypt their messages. Instead, users have to trust Apple to give them the right keys, and not to sneak in an extra key that would allow Apple (or the NSA) to eavesdrop on their messages. But is this criticism fair?
In a mobile messaging application like iMessage (or TextSecure or WhatsApp), several parties may try to eavesdrop on your messages. These so-called ‘adversaries’ are:
- The messaging app provider itself. WhatsApp, for example, is notorious for collecting all the contacts in your contact list behind your back. And Surespot uses your messages to create anonymous aggregate statistics.
- The operating system (OS) provider of your mobile phone.
- The hardware provider of your mobile phone.
- An external party that can intercept and modify all network traffic.
Clearly, the system should protect against an external adversary (like your local snooping government). (By the way: if the external adversary is able to compromise or bribe the service provider, the OS provider or the hardware provider, all bets are off; see below.)
Unfortunately, iMessage does a rather poor job here. To encrypt a message, iMessage needs the public key of the recipient. It requests this key from Apple’s directory services (IDS) over a secure TLS connection. Unfortunately, iMessage does not do certificate pinning on this connection. This means that anybody able to forge a certificate (which is to say, any party with access to a root certificate) can pretend to be Apple’s IDS and serve arbitrary keys when asked. As a consequence, the attacker can decrypt the message, because it is encrypted to a key the attacker controls.
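The pinning that iMessage omits is conceptually simple. A minimal sketch in Python, assuming the app ships with the SHA-256 fingerprints of the certificates it expects (the names here are illustrative, not Apple's actual implementation):

```python
import hashlib

def certificate_is_pinned(der_cert: bytes, pinned_fingerprints: set) -> bool:
    """Accept a TLS certificate only if its SHA-256 fingerprint matches one
    of the fingerprints baked into the app at build time."""
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    return fingerprint in pinned_fingerprints
```

With a check like this in the TLS handshake, a forged certificate signed by a rogue root CA would still be rejected: the attacker can make the certificate chain validate, but cannot make its fingerprint match the pinned value.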
But let us suppose, for the sake of argument, that this specific issue is resolved. Is it then really a design error for Apple not to allow users to verify the public keys they receive?
The important point to remember here is that, in the list of adversaries above, Apple provides both your hardware and your operating system if you use an iPhone. If you use iMessage, Apple can even attack you in three different ways. If, on the other hand, you use a different messaging app, or you are on Android and use something other than a Google service like Hangouts, the app provider and the OS provider are different parties.
In any case, you have to trust the messaging app itself. If it silently encrypted every message to an extra key only the service provider knows, the provider could recover all messages you send. You could try to monitor the messages the app sends and hope to detect such malicious behaviour. This is probably doable (although a cleverly coded subliminal channel may be hard to detect). And if the source code of the messaging app is open, you can verify the code and compile the app yourself.
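To see how small such a backdoor can be, consider this toy sketch (not real cryptography) of a multi-recipient messaging app that wraps one session key per recipient public key. The escrow key name is invented for illustration:

```python
import os

def wrap_key(public_key: str, session_key: bytes) -> tuple:
    # Stand-in for public-key encryption of the session key.
    return (public_key, session_key)

def build_message(recipient_keys: list) -> dict:
    session_key = os.urandom(16)
    # The one malicious line: silently append a key the provider controls.
    # "PROVIDER_ESCROW_KEY" is a hypothetical placeholder.
    all_keys = list(recipient_keys) + ["PROVIDER_ESCROW_KEY"]
    return {"wrapped_keys": [wrap_key(k, session_key) for k in all_keys]}
```

On the wire this just looks like a message with one more recipient than expected; unless you count the wrapped keys (or read the source), you would never notice.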
But in the case of Apple, even with a trustworthy app, the operating system itself could record your messages and later forward them to Apple. Given the huge number of messages a typical smartphone with a few apps sends, it would be quite easy to hide these extra transmissions. In other words, even if users verified public keys, this would not protect them against a malicious OS.
If the app provider and the OS provider are not the same (if you use TextSecure on Android, for example), then letting users verify keys does add security. Because TextSecure cannot compromise users through the OS, verification of keys stops TextSecure from getting at your messages behind your back (assuming, as explained above, that other ways of leaking messages by the app itself would be detected; it helps that TextSecure is open source).
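Such verification usually works by having both users compute a short fingerprint of the public key the directory handed out and compare it over a separate channel (in person, or over the phone). A minimal sketch, with an assumed fingerprint format of eight 4-hex-digit groups:

```python
import hashlib

def fingerprint(public_key: bytes, groups: int = 8) -> str:
    """Render a short, human-comparable fingerprint of a public key."""
    digest = hashlib.sha256(public_key).hexdigest()
    return " ".join(digest[i * 4:(i + 1) * 4] for i in range(groups))
```

If the directory served the attacker's key instead of your contact's, the fingerprints the two of you read to each other would not match, and the man in the middle is exposed.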
Can Apple read your messages? Yes, of course: it controls your phone’s hardware, its operating system and the iMessage app itself. No amount of cryptography can take that power away from Apple…