Highlights of the Real World Cryptography 2015 workshop, day #1

January 8, 2015

The 2015 edition of the Real World Cryptography workshop was held in London on January 7-9. Here are some personal highlights of day #1 (I have not summarised all talks, and do not give equal attention to each). I have also written reports for day #2 and day #3.

Session 1: Anonymity in Practice

"Tor vs. mass surveillance" by Roger Dingledine (Tor project).

Roger spoke very fast, so this is going to be a long summary. Tor currently has over 1 million users a day. The network consists of over 6000 relays and 4000 bridges. The advertised capacity is 12000 MiB/s, of which about 6000 MiB/s is used.

Anonymity serves different purposes for different groups of people, and it is important to be aware of this when trying to 'sell' the value of a service like Tor. For governments, it provides resistance to traffic analysis. For human rights activists (and users) it means reachability: the possibility to circumvent censorship and still access blocked services. (Many people in Iran use Tor to access Facebook or Twitter.) For businesses, Tor simply means network security. For citizens, Tor provides privacy. To those arguing that Tor is used by bad people for bad purposes (child pornography, drug sales, etc.), Roger responds that bad people are doing great on the internet even without Tor. (Side note: I don't think the issue is dismissed that easily. I would love to have statistics about the percentage of 'bad traffic' on Tor.)

Tor does not mix traffic, in order to be fast enough to be useful. Mixminion was a high-latency alternative that did mix. In theory its security would have been higher, but because hardly anybody used it, the anonymity set was small and hence the security in practice was quite low, so that project was abandoned. (Side note: could you make it a choice for individual users to ask for mixing of their traffic while it traverses the Tor network?)

Tor is trying to measure more reliably the level of anonymity it provides. But this is tricky: diversity of relays makes 'traffic confirmation' harder, but adding twice as many relays does not double diversity (and hence anonymity). In fact, adding a few high-bandwidth relays in New York will decrease diversity.

Roger then shifted focus to censorship prevention. Attackers can prevent people from using Tor in several ways: block access to the directories, block all IP addresses of known relays, filter Tor traffic through deep packet inspection (DPI), or prevent users from finding the Tor browser bundle in the first place. In practice attackers use DPI (instead of filtering on IP). This is relatively easy because Tor uses a different SSL library than most webservers, and hence the SSL messages of Tor traffic can be distinguished from ordinary SSL traffic. (In fact, Tor has observed that governments are willing to throttle all SSL traffic just to filter Tor.) A fix for this is the concept of pluggable transports, which try to hide Tor traffic by making it look like any other traffic. Another approach is (domain) fronting where, instead of connecting to a relay over SSL directly, you connect to the relay through an SSL front-end hosted by a very large company (e.g. Google) that receives a lot of SSL traffic (and that governments are reluctant to block altogether). The front-end then forwards the SSL traffic to the intended host.
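
To make the fronting idea concrete, here is a minimal sketch in Python: the TLS connection (and thus the SNI the censor sees) goes to a widely used front domain, while the HTTP Host header inside the encrypted channel names the real destination. The hostnames are hypothetical placeholders, not actual Tor infrastructure.

```python
# Minimal sketch of the fronting idea; hostnames are hypothetical placeholders.
import http.client

FRONT = "front.example.com"      # large provider the censor is reluctant to block
BACKEND = "bridge.example.net"   # host that actually forwards traffic into Tor

# The censor only sees a TLS handshake (and SNI) for FRONT; the Host header
# naming the real backend travels inside the encrypted channel.
conn = http.client.HTTPSConnection(FRONT, 443, timeout=10)
conn.request("GET", "/", headers={"Host": BACKEND})
response = conn.getresponse()
print(response.status, response.reason)
conn.close()
```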

As the final part of his talk, Roger briefly talked about 'global adversaries' like the NSA. They have XKeyscore rules to look for IP addresses of directory servers, and rules to distinguish Tor SSL from other SSL. One thing to learn from the Snowden revelations is that the Internet is more centralised than we like: if an agency like the NSA gets a few big telcos on board, it already controls a lot. It is therefore desirable to defend against end-to-end correlation attacks (because such attackers can observe both the source and the endpoint simultaneously). Roger conceded that he only recently realised that DPI is useful for surveillance too, even though the adversary cannot read the traffic, because it still allows the adversary to build lists of (suspected) Tor users, for example. An obvious research question then is: is there a way to implement "unobservability" that allows users to connect to Tor without attackers noticing?

"SecureDrop: anonymous/secure communications for journalists and sources" by Yan Zhu and Garret Robinson

SecureDrop allows sources to submit material to journalists anonymously. Currently, 15 organisations use it (The New Yorker, The Guardian, etc.). SecureDrop protects the source's identity, the confidentiality and integrity of submissions, and the confidentiality, authenticity and integrity of messages between source and journalist. It does so against an active network attacker that could seize the SecureDrop server, and seize and search devices that belong to a suspected source. SecureDrop is accessible as a Tor hidden service (one instance for every organisation) to submit content; there is a separate hidden service to retrieve submissions. The challenges they face are twofold. First, they would like to implement end-to-end encryption (to help defend against server compromise); currently the server itself encrypts. But end-to-end encryption conflicts with forensic deniability: to decrypt you need to store a key somewhere, and having the key implicates you. As a possible solution they mention the possibility to generate a key at the client, encrypt that with a user passphrase, and then store this encrypted blob on the server. The other problem is secure code delivery: how can users verify the integrity of the client?
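
A minimal sketch of that passphrase-wrapped-key idea, assuming the third-party Python 'cryptography' package; the key-derivation parameters and storage format are illustrative, not SecureDrop's actual design:

```python
# Sketch: generate a long-term key on the client, wrap it under a key derived
# from the user's passphrase, and store only the wrapped blob on the server.
# Uses the third-party 'cryptography' package; parameters are illustrative.
import base64
import hashlib
import os

from cryptography.fernet import Fernet

def wrap_client_key(passphrase: str) -> dict:
    client_key = os.urandom(32)   # the source's long-term secret
    salt = os.urandom(16)
    # Derive a key-encryption key from the passphrase (scrypt parameters
    # chosen for illustration only).
    kek = hashlib.scrypt(passphrase.encode(), salt=salt, n=2**14, r=8, p=1, dklen=32)
    wrapped = Fernet(base64.urlsafe_b64encode(kek)).encrypt(client_key)
    # Only the salt and wrapped blob go to the server; without the passphrase
    # the blob reveals nothing, and the client device stores no key material.
    return {"salt": salt, "wrapped_key": wrapped}
```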

During the Q&A, Ross Anderson remarked that the best way to create forensic deniability would be for the Guardian to encrypt its whole homepage and add a button there saying "talk to a journalist in private".

Session 3: Crypto without Errors, both Malicious and Benign

"Error-prone cryptographic designs" by Dan Bernstein (University of Illinois at Chicago and TU Eindhoven)

The way cryptography is designed can have a serious impact on the security of implementations. DSA is a good example: it "gives the user enough rope to hang himself, something a standard should not do" (a quote from Ronald Rivest). AES is another example. Because of the design of AES, AES software uses fast lookup tables. Access to these tables depends on the secret key, and these access patterns influence the cache state of the processor. Hence timing attacks can recover the key. As a result secure implementations of AES do not exist. AES has a serious conflict between security, simplicity and speed, and this is inherent to its design.
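
A toy illustration (not real AES) of why those table lookups are a problem: the table index in the first round depends directly on secret key bytes, so which cache lines get touched, and hence the timing, leaks key information.

```python
# Toy illustration, not real AES: in table-based AES software the first-round
# table index is plaintext_byte XOR key_byte, so which cache lines get touched
# (and therefore the timing) depends directly on the secret key.
T0 = list(range(256))   # stand-in for a 1 KiB AES T-table

def first_round_lookups(plaintext: bytes, key: bytes) -> list:
    # An attacker who can tell (via timing) which parts of T0 were accessed
    # learns information about the key bytes.
    return [T0[p ^ k] for p, k in zip(plaintext, key)]
```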

Instead of blaming the implementation, in case of an attack ask yourself: what could the designer have done to prevent the attack? In many cases the designer could in fact have prevented it.

Dan notes that there is (unfortunately) much more public review of designs than of the corresponding implementations.

Dan questions the value of security proofs. The problem is that proofs have errors, proofs are not tight enough to be meaningful in practice (the forking lemma in the security proof of Schnorr signatures means the proof gives no security guarantees for real-world parameters), and security definitions prioritise simplicity over accuracy (e.g. timing is not part of the security definitions). As a result, the push for provable security in practice means switching to weaker primitives (for which proofs exist), and then the security of the overall system is weak too.

Dan's final advice: try to compensate in the design for errors implementors will make. "Ask not what your implementor can do for you, ask what you can do for your implementor."

"The EC_DRBG/VCAT review and overview of NIST processes" by John Kelsey (NIST)

Some history first. NIST and NSA co-authored a set of standards on cryptographic random number generators; NSA provided the specification of Dual EC DRBG. There were many reasons to reject or modify Dual EC DRBG, but that didn't happen. The Snowden revelations suggest that Dual EC DRBG was backdoored intentionally, but there is (unfortunately) no proof of that. All this has led NIST to think about what went wrong and how that could be prevented in the future.

First and foremost, NIST has rethought its relationship with the NSA. The two organisations have different goals, yet some kind of relationship is necessary because the NSA sets security requirements for classified ("confidential") systems, while NIST sets these requirements for the other systems government deals with; you don't want government agencies to have to buy two completely different systems. NIST believed that the NSA would never lie to them, although they realised it might not tell them everything. This has changed now.

NIST has also changed the standardisation process. From now on all contributors (including the NSA) are listed as co-authors, and NSA contributions will be clearly identified. NSA-developed algorithms require public review and analysis before inclusion in NIST standards; they should be published at conferences, for example. Comments should be handled consistently, should always be published, and should be addressed publicly. NIST still needs to think about how to deal with informal and anonymous comments (some comments might be under an NDA, for example). Finally, NIST aims to improve record keeping, version management and overall project management.

Session 4: Virtual currencies and passwords

"Virtual currencies: Obstacles and applications beyond currency" by Sarah Meiklejohn (UCL)

Bitcoin is less decentralised, anonymous, stable, and useful than initially thought. For example, 90% of the mining power is controlled by 10% of the mining pools. To increase usefulness, Sarah discussed some other things you can use (a system like) bitcoin for, given that it provides a globally visible, append-only database (you can add entries but not change them). For example, you can use it to prove the existence of an object at some point in time (timestamping). The idea is to create a hash of the object, use this as a bitcoin address, and then create a transaction to this address from another address you control. The miners will timestamp this transaction. Similarly, you can claim ownership by signing the bitcoin address.
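
A rough sketch of the timestamping trick, using one simple (illustrative, not canonical) encoding: hash the document down to 20 bytes, wrap it in the standard Base58Check address format, and send a small transaction to the resulting (unspendable) address.

```python
# Rough sketch of the timestamping trick: hash the document down to 20 bytes,
# encode it as a pay-to-pubkey-hash style address (Base58Check, version 0x00),
# and send a small transaction to it. The address is unspendable; truncating
# SHA-256 to 20 bytes is just one simple, illustrative choice of encoding.
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    n = int.from_bytes(payload + checksum, "big")
    encoded = ""
    while n:
        n, rem = divmod(n, 58)
        encoded = B58[rem] + encoded
    # Each leading zero byte is encoded as a leading '1'.
    leading_zeros = len(payload) - len(payload.lstrip(b"\x00"))
    return "1" * leading_zeros + encoded

def timestamp_address(document: bytes) -> str:
    digest20 = hashlib.sha256(document).digest()[:20]
    return base58check(b"\x00" + digest20)

print(timestamp_address(b"the document whose existence we want to prove"))
```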

We already see many applications built on top of the bitcoin blockchain. Ethereum is an example of a platform to build distributed applications, powered by 'ether' as its cryptofuel. (Side note: I often feel this is a solution looking for a problem. Proper solutions for the real problem probably require different, and in fact truly distributed instead of merely decentralised, approaches.)

"Facebook: Password Hashing & Authentication" by Alec Muffett (Facebook) and Andrei Bajenov (Facebook)

Facebook has its own password hashing function called "the onion". Its main feature: a secret key is mixed in through a service call. Any attack requires use of this secret, which makes such attacks harder. Also, use of this secret can be monitored for abnormal usage patterns.
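
A hedged sketch of the layered idea; the specific layers below are illustrative stand-ins, not Facebook's exact recipe, but they show the key point: one layer is an HMAC under a secret that lives only behind a separate service.

```python
# Illustrative layered hash; the layers below are stand-ins, not Facebook's
# exact construction. The key point is the middle layer: an HMAC under a
# secret that lives only behind a separate service.
import hashlib
import hmac
import os

REMOTE_SECRET = os.urandom(32)   # in reality held by a dedicated service, not the web tier

def remote_hmac(data: bytes) -> bytes:
    # Stand-in for the service call: offline cracking of a stolen database has
    # to go through this service, where usage can be rate limited and monitored.
    return hmac.new(REMOTE_SECRET, data, hashlib.sha256).digest()

def onion_hash(password: bytes, salt: bytes) -> bytes:
    layer = hmac.new(salt, password, hashlib.sha256).digest()   # local layer
    layer = remote_hmac(layer)                                  # secret-keyed layer
    # Final slow layer (parameters for illustration only).
    return hashlib.scrypt(layer, salt=salt, n=2**14, r=8, p=1, dklen=32)
```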

"Life of a password" by Arvind Mani (LinkedIn)

One interesting idea presented is to use key rotation, i.e. change the secret key mixed in with the password every once in a while. This fingerprints the password hash database, which allows you to know when the database got compromised. It also increases the likelihood that not all credentials can be cracked.
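
A small illustrative sketch of the key-rotation idea (key names and record format are made up): each stored record carries the version of the secret key used, so a leaked dump is "fingerprinted" by the newest key version appearing in it.

```python
# Illustrative sketch of key rotation (key names and record format are made up):
# each stored record carries the version of the secret key used, so a leaked
# dump is "fingerprinted" by the newest key version appearing in it, which
# bounds when the compromise can have happened.
import hashlib
import hmac

KEY_VERSIONS = {1: b"retired-secret", 2: b"current-secret"}   # hypothetical rotation history
CURRENT_VERSION = 2

def store_password(password: bytes, salt: bytes) -> dict:
    tag = hmac.new(KEY_VERSIONS[CURRENT_VERSION], salt + password, hashlib.sha256).digest()
    return {"key_version": CURRENT_VERSION, "salt": salt, "hash": tag}

def verify_password(record: dict, password: bytes) -> bool:
    key = KEY_VERSIONS[record["key_version"]]
    tag = hmac.new(key, record["salt"] + password, hashlib.sha256).digest()
    return hmac.compare_digest(tag, record["hash"])
```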

Accidental logging by different parts of the system may in fact store passwords entered by the user. You can deal with this at the user-agent side (by encrypting the user password against a random public key), but this is problematic if you need to support all possible client platforms. You can also fix this at ingress, by encrypting the passwords at the proxy.
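
A hedged sketch of the ingress option, assuming the third-party Python 'cryptography' package: the proxy encrypts the password field against a public key whose private half lives only with the authentication service, so anything accidentally logged downstream of the proxy is ciphertext.

```python
# Sketch of the "encrypt at ingress" option, using the third-party
# 'cryptography' package; key handling here is illustrative. The proxy only
# holds the public key, so anything accidentally logged downstream of it is
# ciphertext rather than a plaintext password.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The private key lives only with the authentication service, not the proxy.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def encrypt_at_proxy(password: bytes) -> bytes:
    return public_key.encrypt(password, OAEP)

def decrypt_at_auth_service(blob: bytes) -> bytes:
    return private_key.decrypt(blob, OAEP)
```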

Session 5: Privacy and the law

"In PETs we trust: Gaps between privacy enhancing technologies and information privacy law" by Claudia Diaz (KU Leuven)

Claudia distinguished two types of privacy. Constitutional privacy, a.k.a. the fundamental-rights approach (the European Convention on Human Rights, or the US constitution), provides a high-level, abstract protection of citizens against government intrusion; it is technology independent. Informational privacy, as enshrined in the European data protection laws, also applies to the private sector and is not technology neutral (it essentially assumes the service provider is trusted). Its goal is to set minimum standards to allow the free flow of personal information (the data economy).

Privacy Enhancing Technologies (PETs) focus on minimising data and aim to eliminate single points of failure. A service provider is an adversary from a privacy perspective, even though it may be trusted to provide the service. From this perspective, PETs are more aligned with constitutional privacy. In fact, PETs are caught in a regulatory limbo: between a framework that recognises their goals but not their means (constitutional privacy) and one that recognises their means but not their goals (informational privacy, where privacy by design is part of data protection legislation).

We can categorise PETs by the type of legal incentives/protections that are necessary to make them thrive. The first category consists of PETs that service providers (SPs) by necessity need to implement as part of their service; examples are private information retrieval and attribute-based credentials. These PETs must be mandated by regulation. The second category of PETs can be deployed by clients unilaterally, but must be tolerated by SPs; examples are end-to-end encryption, and the use of Tor (from the perspective of the website visited). These PETs must be protected by preventing SPs from blocking them. The third category of PETs does not depend on a (central) SP at all, like P2P applications. Regulation should protect the ability to develop such P2P applications (like net neutrality laws do).

Session 6: Short Talks 1

"The ISO Standardization Process of PLAID: A Cryptographer’s Perspective" by Jean Paul Degabriele, Victoria Fehr, Marc Fischlin, Tommaso Gagliardoni, Felix Günther, Giorgia Azzurra Marson, Arno Mittelbach and Kenneth G. Paterson (TU Darmstadt and RHUL)

PLAID (Protocol for Lightweight Authentication of IDentity) is a general-purpose smart card authentication protocol, currently being standardised by ISO. PLAID's history provides evidence that standardisation does not seem to work well for cryptography.

PLAID was standardised in Australia in 2010 (AS-5185-2010). It was submitted to ISO using a "fast track" procedure (ISO/IEC 25185-1.2). TU Darmstadt and RHUL analysed PLAID and found that it provides weak privacy (you can trace cards, and discover card capabilities) and uses uncommon design strategies. In fact, they recommend not to use PLAID at all. They sent their feedback to ISO as official UK/German comments. In 2014 a new draft appeared that mentions all the comments but answers them in silly ways, basically dismissing the (very relevant) comments as insignificant.

This shows, at least in the case of PLAID, that a completely broken cryptographic protocol can get standardised, even when comments on the draft clearly demonstrate that the protocol is broken.

"Cryptography is for Everyone: From W3C Web Cryptography API to Client-Encrypted Email and Back Again" by Harry Halpin (W3C)

Harry's talk was totally confusing to me, so I cannot for the life of me make a good summary of that. I apologise.

"New Kid on the Block: CLINT: A Cryptographic Library for the INternet of Things" by Michael Scott (CertiVox Labs)

Many cryptographic libraries are made by cryptographers for cryptographers, so they are hardly useful for people in the real world: they are big, cover many primitives at several security levels, and sometimes depend on external libraries. CLINT aims to improve on this.

"One of our algorithms is missing: Crypto APIs in 2014" by Graham Steel (Cryptosense)

Most real world crypto is delivered through APIs. These are often standardised. This process is slow, and hence APIs lag behind advances in crypto.

In case you spot any errors on this page, please notify me!
Or, leave a comment.

Comment by Eduard de Jong, 2015-01-15 18:22:01:

The actual status of standardization of PLAID in ISO is somewhat different from what was presented in session 6. When ISO, in its JTC 1 meeting, approved the fast track, processing the Australian source into an ISO standard was assigned to the ISO group on smart cards (ISO/IEC JTC 1/SC 17/WG 4). In the September 2014 meeting of this group, the comments received from the UK and Germany resulted in a decision to ask the ISO group doing cryptography (SC 27/WG 2) for assistance. It looks like the presenters in this session base their critique on the documents that were input to the discussion in the ISO working group, and not on the decisions made in the meeting. These input documents are indeed rather dismissive of the comments.