The 2014 edition of the Real World Cryptography workshop was held last week in New York, hosted by the City College of New York. Here are some personal highlights of day #2 (I have not included a summary of all talks, and do not pay an equal amount of attention to all talks). I have also made summaries for day #1 and day #3.
I missed the first talk because of delays on the E subway line. I came in just when Zooko Wilcox-O’Hearn (Least Authority) started his presentation about the Tahoe-LAFS encrypted peer-to-peer storage system.
In Tahoe-LAFS the unique identifier for an object also encodes the access rights to that object. In other words, it is a capability (cap). Tahoe-LAFS implements read caps and write caps (a write cap allows both reading and writing). It also implements verify caps, which allow one to verify the integrity of an object without being able to read it. Directories are mutable files containing caps that point to the files belonging to that directory. Read-onlyness is transitive (because that is what users expect): if a directory is read-only, a write cap inside it cannot be used to modify the object it points to.
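To make the cap hierarchy concrete, here is a minimal Python sketch of one-way cap derivation. This is an illustration of the general idea only, not Tahoe-LAFS's actual key derivation; the labels, key sizes, and function names are made up.

```python
import hashlib
import os

def derive(parent: bytes, label: bytes) -> bytes:
    # One-way derivation: holding a derived cap reveals nothing about its parent.
    return hashlib.sha256(label + parent).digest()

write_cap = os.urandom(32)                 # grants read and write access
read_cap = derive(write_cap, b"read")      # grants read access only
verify_cap = derive(read_cap, b"verify")   # allows integrity checks, no reading
```

Because the derivation only goes one way, handing someone a read cap or a verify cap can never leak write access.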
Open problems: implement a write-only cap, or an append cap. Revoking caps is also not implemented yet.
Writing to a file is implemented as creating a new instance, which is signed with the write key embedded in the write cap. Using verify caps, nodes can verify the contents of the new file and delete the old version afterwards. But this type of pruning is a separate process that does not seem to be part of the core of Tahoe-LAFS.
Moti Yung (Google) talked about the difference between practical security ("100% security does not exist") and theoretical cryptography (with "security proofs"). Every once in a while there is a paradigm shift that requires cryptographers to reconsider their models. Paul Kocher's 'invention' of differential power analysis (the electronics industry knew about these attacks well before 1999) led to the development of leakage-resilient cryptography. Now (with the Snowden revelations) we know powerful adversaries do exist, and we have to reconsider our models again: secure multiparty computation (see day 1) is only part of the answer. We need models that consider full breaks of some of the participants in a computation.
Moti proposes midgame attacks for this purpose. In such an attack, the adversary gets access to the full internal state of some node. To resist such attacks, we have to resort to split architectures (distributed computing), like client/server, pc/smartcard, software/trusted-hardware, etc. The work is split in such a way that Alice has the root, long-term, keys but does hardly any work, while Bob only gets ephemeral keys and does basically all the work. This way we can tolerate a full compromise of Bob. Moti called this 'Architectural MPC'. Moti gave an example of computing an HMAC in a sponge-like manner, where Alice initialises the sponge with the long-term key (making sure rewinding is impossible), while Bob mixes the messages into the sponge. As a last step, Alice receives the result and applies a final transformation using her long-term keys to return the HMAC.
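A minimal sketch of this split-work idea, using a toy hash-based mixing step in place of a real sponge permutation. This is my own illustration, not Moti's actual construction; all function names and parameters are invented for the example.

```python
import hashlib

def mix(state: bytes, block: bytes) -> bytes:
    # Toy one-way mixing step standing in for a real sponge permutation;
    # its one-wayness is what makes rewinding back to the key impossible.
    return hashlib.sha256(state + block).digest()

def alice_init(key: bytes) -> bytes:
    # Alice (holds the long-term key): cheap, done once per session.
    return mix(b"\x00" * 32, key)

def bob_absorb(state: bytes, message: bytes, rate: int = 32) -> bytes:
    # Bob (fully compromisable): does all the heavy lifting, but only
    # ever sees a state from which the key cannot be recovered.
    for i in range(0, len(message), rate):
        state = mix(state, message[i:i + rate])
    return state

def alice_finalise(key: bytes, state: bytes) -> bytes:
    # Alice applies the final keyed transformation to produce the tag.
    return mix(state, key)

key, message = b"k" * 32, b"some long message"
tag = alice_finalise(key, bob_absorb(alice_init(key), message))
```

Even if Bob's full internal state leaks midway, the adversary learns only ephemeral sponge states, never Alice's long-term key.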
Bruce Schneier started his talk by admiring (and mentioning) all the cool codewords the NSA uses. I'll focus on the interesting observations he made.
NSA has many exploits that jump airgaps.
NSA has turned intelligence into a global surveillance system, which is robust: technically, legally and politically. The NSA has several different technical methods to get the same data, and has several different legal grounds to get this data. Sometimes they will ask for data at the front door to launder data they got from the back door.
The FBI Stingray looks very similar to what was found in the NSA TAO catalogue. This implies sharing between agencies. Sometimes the FBI operates as a cover for the NSA.
Cellphones allow the NSA to track anybody on the planet.
NSA is not the only one doing this. All nation states do this. And criminals will be able to do this in three years (!). This is the fundamental problem: the Internet is insecure for everyone.
Interesting observations regarding the Snowden documents: SIGINT only, no COMSEC. Company names are rare (they are hidden behind codewords like LITTLE, for Level 3, and BLARNEY, for AT&T).
Encryption works at scale (although it will not offer a targeted individual enough protection; this is because endpoints are horribly insecure). We know encryption works because the NSA cannot break Tor, and because they got way more data from Yahoo than from Google (which uses encryption), even though Google has many more users than Yahoo.
The 'black budget' contains wording that seems to suggest the NSA is onto something in terms of cryptanalysis. Bruce's guesses: elliptic curve cryptography (weaknesses in certain curves), general factoring or discrete log, or RC4.
Bruce is optimistic in terms of what we can do to defend ourselves. (Even) the NSA is constrained by the laws of economics, physics and mathematics. The trick is to leverage this to prevent bulk surveillance. Also, the NSA is risk averse, and the documents reveal the NSA is worried by personal security protocols. Things we can do: design secure cellphone systems, make systems like Tor (much more) user friendly, and distribute much more (Bruce worries about the fact that there are only a handful of very large ISPs).
Bruce states that the problem is political, and well beyond a legal solution - a similar point of view has been expressed by Evgeny Morozov as well. The frame of the discussion must be changed: it's either security for all or surveillance. (Side note: I'm not convinced. I think it is quite hard to argue that one cannot design a system to which only the NSA has a backdoor. In fact, all of public-key cryptography is built on the concept of a trapdoor one-way function!)
During the Q&A the following points came up.
Because of the recent developments we have to be much more conservative about key sizes. Bruce offered no suggestion as to how conservative exactly.
"The constitution is not a suicide pact": the situation we are in now is radically different from the times the constitution was written.
The Patriot Act and FISA had already been written; they just waited for something like 9/11 to get enough support to get them enacted.
Another cryptanalytic breakthrough the NSA may have made could be in the field of random number generators.
The difference between government and corporate surveillance is the cost of false positives.
Yevgeniy Dodis (New York University) talked about random number generators. Linux /dev/random is complex (800 lines of code) and a perfect example of security by obscurity. However, it does show that in practice engineers had good security intuition. So far, theoretical designs have oversimplified the problem (and hence cannot lead to satisfactory results in practice). A way out is to separate entropy estimation from the rest. In fact, Yevgeniy presented a construction that does not explicitly do any entropy estimation, but that does guarantee that the random generator is secure as soon as the total fresh entropy collected since the last compromise exceeds a specified threshold.
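As a rough illustration of what "no entropy estimation" means in code (my own sketch, not Yevgeniy's actual construction; the class and label names are made up): the generator unconditionally mixes every input and never judges its quality, so security simply kicks in once enough fresh entropy has flowed in since the last compromise.

```python
import hashlib

class SketchRNG:
    def __init__(self) -> None:
        self.state = b"\x00" * 32

    def refresh(self, raw: bytes) -> None:
        # Mix in every input unconditionally; never estimate its entropy.
        self.state = hashlib.sha256(b"in" + self.state + raw).digest()

    def next(self) -> bytes:
        out = hashlib.sha256(b"out" + self.state).digest()
        # Ratchet the state forward so past outputs stay safe after a compromise.
        self.state = hashlib.sha256(b"fwd" + self.state).digest()
        return out
```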
Daniele Perito (Square) discussed how Square uses Bluetooth Low Energy to implement a location-sensitive wallet that 'opens' if you are close to a merchant that you have subscribed to. Daniele offered some insights into the performance of Bluetooth LE in practice: it consumes 1% of the energy of classical Bluetooth, has a 6 ms wakeup time, but only a 10-20 kbps data transfer rate in practice (compared to the 300 kbps advertised). The protocol itself wasn't terribly exciting. It did make me wonder whether some of my old research on ephemeral pairing is relevant in this case.
The final session had an interesting talk by Dirk Balfanz (Google) about web authentication. He presented two ways to bind bearer tokens (i.e. cookies or passwords) to the (secure) channel associated with a session, reducing the risk that a bearer token is stolen and used elsewhere.
I particularly liked the idea of running some JavaScript locally to verify that the user is there (using his password), sending only a proof of this fact to the server.
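The general flavour of such a channel-bound proof, as a minimal Python sketch (my illustration of the idea, not Dirk's actual protocol; the function name, salt handling, and iteration count are made up):

```python
import hashlib
import hmac

def login_proof(password: str, salt: bytes, channel_id: bytes) -> bytes:
    # Derive a key from the password locally; the password never leaves the client.
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Bind the proof to this specific (TLS) channel, so a stolen proof is
    # useless when replayed over any other connection.
    return hmac.new(key, channel_id, hashlib.sha256).digest()
```

The server can recompute the same HMAC over its own view of the channel identifier and compare, without ever seeing the password itself.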
Client compromise is still an issue of course. To solve this we need to move the crypto to hardware, but this requires standardisation.
Dirk (rightly) observed that if authentication becomes more secure, then so must account recovery. This is hard...
It was interesting to see that the worlds of distributed computing (my old area of research) and applied cryptography (my current one) appear to be both highly relevant in practice for the development of systems that are truly secure and privacy friendly. It is no coincidence that many researchers are active in both areas.
If cryptography indeed works (as Bruce Schneier claims), then the endpoints are where vulnerabilities are exploited. These should therefore be our main concern. Mobile devices (being more personal, close to us) are especially valuable targets. Given some recent developments in this area (the secure Blackphone, first small steps towards open-hardware phones like Fairphone, and others), I expect some good opportunities for interesting research and possibilities to achieve real-world impact in this area.