The 2014 edition of the Real World Cryptography workshop was held last week in New York, hosted by the City College of New York. Here are some personal highlights of day #1 (I have not included a summary of all talks, and do not pay an equal amount of attention to all talks). I also made a summary of day 2 and day 3.
The first session covered Bitcoin. Arvind Narayanan (Princeton University) explained that Bitcoin is not really anonymous. You can trace all transactions on the public ledger, and using some basic rules (like the fact that accounts whose coins are spent together in a single transaction are controlled by the same person, and the fact that change left over from a transaction has to be deposited in a new account) one can deduce that certain accounts are controlled by the same person. This creates clusters of accounts. By trading with known people using bitcoin, or by scanning message or post signatures containing account identifiers, the identity of the person controlling the cluster can be (easily) determined.
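The first of these clustering rules (sometimes called the common-input-ownership heuristic) can be sketched as a union-find over accounts that appear together as inputs of one transaction. The account names and transactions below are made up purely for illustration.

```python
def cluster_accounts(transactions):
    """Union accounts that co-occur as inputs of the same transaction."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for inputs, _outputs in transactions:
        for acc in inputs:
            find(acc)                  # register every input account
        for acc in inputs[1:]:
            union(inputs[0], acc)      # co-spent inputs -> same owner

    clusters = {}
    for acc in parent:
        clusters.setdefault(find(acc), set()).add(acc)
    return list(clusters.values())

# hypothetical transactions: (input accounts, output accounts)
txs = [
    (["A", "B"], ["X"]),   # A and B spent together -> same owner
    (["B", "C"], ["Y"]),   # B and C spent together -> all three linked
    (["D"], ["Z"]),
]
clusters = cluster_accounts(txs)
# two clusters: {A, B, C} and {D}
```

Once such clusters exist, a single de-anonymized account (e.g. from a forum signature) exposes the whole cluster.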
Matthew Green (JHU) then explained how to make Bitcoin anonymous by introducing Zerocoin and its practical variant Zerocash. Zerocoin is a nice trick that can be explained relatively easily, at least conceptually. The idea is as follows. A zerocoin is a commitment C(x) to a secret value x. Introduce a new transaction into the bitcoin system that allows one to spend a bitcoin to create a zerocoin. You can later get a bitcoin back by proving in zero-knowledge that, from a selection of n zerocoins stored in the public ledger, you know the secret input x of one such zerocoin. Because no one knows which zerocoin actually contains this x, you have basically mixed your bitcoin with n other coins, achieving (a degree n) anonymity.
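The commit/open step at the heart of this can be illustrated on its own (the zero-knowledge proof of knowing an opening, the hard part, is not shown). Real Zerocoin uses Pedersen commitments together with an RSA accumulator; here a salted hash stands in for C(x), purely to show the interface: the commitment hides x, yet binds its creator to it.

```python
import hashlib
import os

def commit(x: bytes):
    """Commit to secret x; returns (public commitment, opening randomness)."""
    r = os.urandom(16)                      # blinding randomness
    c = hashlib.sha256(r + x).hexdigest()   # the published commitment
    return c, r

def open_commitment(c: str, r: bytes, x: bytes) -> bool:
    """Verify that (r, x) opens commitment c."""
    return hashlib.sha256(r + x).hexdigest() == c

c, r = commit(b"secret-serial-42")
assert open_commitment(c, r, b"secret-serial-42")   # correct opening works
assert not open_commitment(c, r, b"wrong-serial")   # any other x fails
```

In Zerocoin proper, the spender proves in zero-knowledge that some published commitment opens to their x, without revealing which one.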
Unfortunately, Yifu Guo (Avalon) spoke so fast that it was sometimes hard to follow what he was saying. I believe he had some interesting things to say about the impact of ASIC development and the concentration of Bitcoin miners in a few large groups on the stability of the currency. Probably worth looking for some other papers, articles or presentations of this guy.
Brian Warner (Mozilla) discussed the security of Firefox Sync. He described how real users got confused by Firefox Sync because they actually believed it would be a good way to back up their browser data, only to discover after their PC crashed that they actually needed an activation code from that PC in order to get access to their bookmarks from another PC... A nice example of how Sync ≠ Backup.
He then explained the new onepw protocol to be released soon for the new Firefox Sync, which was developed completely from scratch. It looked extremely complicated to me, and I left wondering whether it wouldn't just have been easier to modify the original sync protocol. This protocol encrypts the browser data with a random key, and stores the encrypted data remotely while storing the key itself in the browser. To sync, a new browser needs an activation code generated by the old browser to transfer this encryption key using the JPAKE key exchange protocol. I wonder whether one could not simply allow the user (once he has set up Firefox Sync) to register an activation code of his own choice that will allow him to recover the data whenever the original system crashes.
Side remark: in fact it often confuses the hell out of me that in many cloud systems backup and sync are described as separate things, whereas to me they are actually the same: they both store data remotely. Backup is meant to restore data to a device because it was lost, whereas sync copies the same data to a device that already has some of that data (and perhaps some new data as well).
Yehuda Lindell (Bar-Ilan University and Dyadic Security) opened the session on secure multi-party computation (MPC) by introducing the concept. MPC allows several parties, each with input xi, to jointly compute a function f(x1,…,xn). MPC aims to achieve privacy (hiding each party's input from the other participants) and correctness (the inability of any party to influence the outcome of the computation) against several types of adversaries: honest-but-curious, malicious, and covert (being malicious but wishing to remain undetected).
A general theme appears to be using the cloud to do MPC computations. Individual clients do not participate in the computation directly. Instead they securely share their input over, say, three nodes (preferably in three different clouds), and let these nodes do the computation instead.
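The sharing step can be sketched with additive secret sharing modulo a prime: no single node (here, no proper subset of the nodes) learns anything about the input, yet the shares sum to it. Shared values can then be added share-wise without any communication, which is the starting point of MPC protocols (multiplications require more machinery, not shown). The modulus below is an illustrative choice.

```python
import secrets

P = 2**61 - 1  # a prime modulus (illustrative choice)

def share(x, n=3):
    """Split x into n additive shares mod P; all n are needed to recover x."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

a, b = 1234, 5678
sa, sb = share(a), share(b)
# each node adds its two local shares; no node learns a, b, or a + b
sc = [(x + y) % P for x, y in zip(sa, sb)]
assert reconstruct(sc) == (a + b) % P
```

Clients only perform the cheap share/reconstruct steps; the expensive protocol runs between the three nodes.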
John Launchbury (Galois) explained how this technique can be used to filter email, where the filters are expressed as regular expressions, or merge several VoIP streams at a central location, without revealing the content of the email or the streams. Using some clever tricks he achieved reasonable speeds for his approach.
Jakob Illeborg Pagter (Partisia/Alexandra Institute) explained how the same trick has been applied to implement secure auctions in Denmark. Efficient implementations are possible in this case because in an auction the outcome is public.
Yehuda Lindell closed the session explaining how MPC can be used to mitigate server break-ins, especially avoiding data leakage. In particular, cryptographic keys are a single point of failure. If you split the key into shares, use MPC to simulate the computation of a function (e.g. encryption) using the full key, and recover from individual break-ins into servers, you have achieved long-term protection. Applications are encrypted databases, authentication (see below) and Hardware Security Modules (HSM). In the latter case you replace physical security with distributed security. It is a different kind of security, but definitely an interesting approach. They have practical results, in the malicious adversary model, implementing AES in under 20 ms.
MPC can be used for authentication as follows. An important threat in password-based authentication is data leakage/theft from the server, i.e. the password file falling into the hands of the attacker. Suppose you store the encryption of the salted, hashed password in the password file, and split the key over the webserver and a third server. Then an MPC protocol between the webserver and the third server can verify that the password entered by a user corresponds to the entry for that user in the password file. Because the password file is encrypted, brute-forcing the hashed passwords is no longer possible. Recovering the decryption key now requires an attacker to subvert two systems.
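A toy sketch of the storage layout shows why a stolen password file alone is useless. The actual two-server verification runs as an MPC protocol over the key shares; the joint decryption below merely stands in for it, to illustrate that both shares are needed. All names, the XOR-based "encryption", and the iteration count are illustrative choices, not the scheme Lindell described in detail.

```python
import hashlib
import os
import secrets

def h(salt: bytes, pw: bytes) -> bytes:
    """Salted password hash (PBKDF2 as an illustrative choice)."""
    return hashlib.pbkdf2_hmac("sha256", pw, salt, 10_000)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Setup: key k is XOR-split between the webserver and a third server,
# so neither share reveals anything about k on its own.
k = os.urandom(32)
share_web = os.urandom(32)
share_third = xor(k, share_web)

# Password file stores the *encrypted* salted hash (one-time-pad style
# XOR stands in for encryption here). Without k it cannot be brute-forced.
salt = os.urandom(16)
entry = xor(k, h(salt, b"correct horse"))

def verify(pw_attempt: bytes) -> bool:
    # Stand-in for the MPC protocol: in reality the shares are never
    # combined in one place; the comparison is computed jointly.
    k2 = xor(share_web, share_third)
    return secrets.compare_digest(xor(k2, entry), h(salt, pw_attempt))

assert verify(b"correct horse")
assert not verify(b"tr0ub4dor")
```

An attacker holding only the password file (or only one key share) sees uniformly random bytes, so the usual offline dictionary attack no longer applies.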
During the Q&A the following ideas came up. You can actually think about ways to prevent sending the password in the clear to the webserver altogether, for example by splitting and encrypting the password locally (using some client-side JavaScript, which needs to be trusted of course). Also, the webserver can outsource its part of the computation to another server, and simply ask for the answer (and a proof that the protocol was run correctly). It is not clear though how much overhead that still incurs on the webserver (as the proof and its verification are probably non-trivial).