Highlights of the Real World Cryptography 2015 workshop, day #2

January 9, 2015

The 2015 edition of the Real World Cryptography workshop was held in London, January 7-9. Here are some personal highlights of day #2 (I have not summarised all talks, and do not pay an equal amount of attention to each of them). I have also written reports for day #1 and day #3.

Session 7: Symmetric encryption

"Searching Encrypted Cloud Data: Case Study on Academia + Industry (Done Right)" by Sasha Boldyreva (Georgia Tech)

Sasha first gave a short tutorial on symmetric encryption, focusing on security definitions like IND-CPA, IND-CCA etc. It was actually quite detailed on the stuff most people in the room would know, so this wasted some valuable time... Then she talked about extra functionality like deterministic encryption (useful for fast keyword search), order preserving encryption (to sort ciphertexts and allow range queries) and format preserving encryption. Because none of these satisfies IND-CPA, the question is what the right security definitions for them are. For order preserving encryption (OPE) the intuitive definition is that OPE should be indistinguishable from a random order preserving function (OPF).

Sasha then talked about her (positive) experience working with SkyHigh, which offers a system that allows users to encrypt their data before storing it in an arbitrary (insecure) cloud, while maintaining the original functionality. In particular you want to be able to search in this data, so searchable encryption is what they were after. If you use strong encryption, searching is linear in the size of the database. If you use order preserving encryption, you can sort the encrypted index, search in logarithmic time, and perform range queries efficiently. Sasha then presented an interesting construction based on the observation that there is a bijection between the set of OPFs from [M] to [N] and the set of M-out-of-N combinations. It is a shame she did not discuss this in a little more detail; it went too fast for me to understand exactly how it worked...
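
To make the bijection concrete: a strictly increasing function from [M] to [N] is completely determined by its image, which is an M-element subset of [N]. The toy sketch below is my own illustration, not the construction from the talk (which samples the function lazily so that it scales to realistic domain sizes); it just samples a random OPF that way.

```python
import random

def random_opf(M, N, seed=None):
    """Sample a random order-preserving function f: {1..M} -> {1..N}.

    A strictly increasing f is fully determined by its image, an
    M-element subset of {1..N}, so sampling a random OPF is the same
    as sampling a random M-out-of-N combination.
    """
    assert M <= N
    rng = random.Random(seed)
    image = sorted(rng.sample(range(1, N + 1), M))   # the M-out-of-N combination
    return {x: image[x - 1] for x in range(1, M + 1)}

f = random_opf(M=5, N=100, seed=42)
assert all(f[x] < f[x + 1] for x in range(1, 5))     # order is preserved
```

An encryption scheme that is indistinguishable from such a random OPF can then sort and range-query ciphertexts exactly as if they were plaintexts.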

"Can encryption save the credit card? A decade of standards and implementation" by Terence Spies (Voltage Security)

There are over 14 billion credit cards in the world, handling $3.6 trillion in transactions a year. Each year $12 billion is lost to fraud (although most of that is not security related). The credit card security standards specify two main mechanisms: point-to-point encryption (P2PE) to protect credit card numbers in transit over a network, and tokenisation to protect credit card numbers that are stored in databases. Because many legacy applications depend on the format of a PAN (Primary Account Number, as credit card numbers are called in the credit card industry), tokens must look like a PAN. A PAN starts with 6 digits indicating the issuer, followed by 6 and 4 more digits, of which the last 4 tend to be public because they are printed on receipts, for example. Therefore, to create tokens, the NIST SP 800-38G draft standard specifies how to encrypt the middle 6 digits.
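
A minimal sketch of the tokenisation idea (my own toy illustration: a tiny Feistel network over the decimal digits, not the FF1/FF3 modes that SP 800-38G actually standardises, and ignoring details such as the Luhn check digit):

```python
import hmac, hashlib

def _round(key, i, half):
    """Toy PRF for one Feistel round, mapping a 3-digit half to a 3-digit value."""
    msg = f"{i}:{half:03d}".encode()
    return int.from_bytes(hmac.new(key, msg, hashlib.sha256).digest(), "big") % 1000

def toy_fpe_encrypt(key, middle6, rounds=8):
    """Format-preserving encryption of a 6-digit number (toy Feistel, NOT FF1/FF3)."""
    left, right = divmod(middle6, 1000)                   # two 3-digit halves
    for i in range(rounds):
        left, right = right, (left + _round(key, i, right)) % 1000
    return left * 1000 + right

def toy_fpe_decrypt(key, ct, rounds=8):
    left, right = divmod(ct, 1000)
    for i in reversed(range(rounds)):
        left, right = (right - _round(key, i, left)) % 1000, left
    return left * 1000 + right

def tokenize_pan(key, pan):
    """Keep the issuer digits (first 6) and the last 4 in the clear; encrypt the middle 6."""
    assert len(pan) == 16 and pan.isdigit()
    token_mid = toy_fpe_encrypt(key, int(pan[6:12]))
    return pan[:6] + f"{token_mid:06d}" + pan[12:]

key = b"demo key, replace with a real one"
token = tokenize_pan(key, "4111111111111111")
assert token[:6] == "411111" and token[-4:] == "1111"    # format is preserved
assert toy_fpe_decrypt(key, int(token[6:12])) == 111111
```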

"Authenticated encryption and the CAESAR competition" by Elena Andreeva (K.U. Leuven)

Elena talked about Authenticated Encryption (AE), a primitive that provides confidentiality and authenticity at the same time. A good AE scheme should be nonce misuse resistant (to protect against flawed implementations, user errors, and virtual machine resets), secure against release of unverified plaintext (where the attacker gets to see decrypted ciphertext before verification is complete, for example because of insecure memory, small buffers or real-time requirements) and side channel resistant. There is a competition for AE designs: CAESAR (Competition for Authenticated Encryption: Security, Applicability, and Robustness). Elena made some nice visualisations of the properties of the competitors.
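
To illustrate why nonce misuse resistance matters, here is my own small example using AES-GCM from the Python cryptography package (an assumption on my part, and not one of the CAESAR candidates): GCM runs CTR mode underneath, so reusing a nonce reuses the keystream and leaks the XOR of the plaintexts.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

nonce = os.urandom(12)
m1, m2 = b"attack at dawn!!", b"retreat at noon!"       # equal-length messages
c1 = aead.encrypt(nonce, m1, None)
c2 = aead.encrypt(nonce, m2, None)                      # nonce reuse: the misuse

# The XOR of the two ciphertexts equals the XOR of the two plaintexts,
# so an attacker learns plaintext structure without knowing the key.
xor_ct = bytes(a ^ b for a, b in zip(c1[:16], c2[:16]))
xor_pt = bytes(a ^ b for a, b in zip(m1, m2))
assert xor_ct == xor_pt
```

A misuse resistant scheme degrades much more gracefully under this kind of error: repeating a nonce reveals at most whether two plaintexts are identical.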

Session 8: MPC

"Smarter decisions with no privacy breaches - practical secure computation for governments and companies" by Dan Bogdanov (Cybernetica)

Dan presented a practical application of secure multiparty computation built on Sharemind, involving 800,000 study records from the Ministry of Education in Estonia and 20 million tax records from the Tax Office. This makes it the largest secure MPC application ever. Because the data from the source databases is immediately split into secret shares divided over the three nodes that compute the function, Sharemind claims the system does not process personally identifiable information (PII) at all. I beg to differ, because the system as a whole does compute a function over PII as input. Trying to define it away like this at least goes against the spirit of data protection, because it would allow you to process PII to compute an arbitrarily privacy invasive function without any oversight. (I later had a discussion over dinner with a few people who did not agree with me. The question is what exactly the system is that processes the information, and who is responsible for that system. I guess we have to ask a lawyer about this...)
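
Sharemind's actual protocols are much more involved, but the entry point, additive secret sharing of each record over three computation nodes, can be sketched roughly as follows (my own toy sketch; the 32-bit ring is an assumption made purely for illustration).

```python
import secrets

MOD = 2 ** 32                 # toy choice of ring; any modulus works the same way

def share(x):
    """Split x into three additive shares; any two shares reveal nothing about x."""
    s1 = secrets.randbelow(MOD)
    s2 = secrets.randbelow(MOD)
    s3 = (x - s1 - s2) % MOD
    return s1, s2, s3

def reconstruct(shares):
    return sum(shares) % MOD

def add_shared(a, b):
    """Each node adds the shares it holds locally; no communication needed."""
    return tuple((x + y) % MOD for x, y in zip(a, b))

income, benefit = share(52_000), share(7_800)      # made-up record values
assert reconstruct(add_shared(income, benefit)) == 52_000 + 7_800
```

Multiplications and comparisons do require interaction between the nodes, which is where the real protocol work (and the performance cost) sits.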

"TinyGarble: Superfolding Circuits with Logic Synthesis" by Farinaz Koushanfar (Rice University) and Ahmad Sadeghi (TU Darmstadt)

Ahmad first announced an Android app called NearbyPeople that allows you to discover people near you with similar interests or contacts. Click the link to beta-test. The remainder of the talk covered Secure Function Evaluation (SFE). This allows two parties to jointly compute a function F over their private inputs X and Y without revealing any information about these inputs to each other. The traditional trick (Yao's method) is to let Alice create a garbled circuit based on her private input X and to let Bob evaluate this garbled circuit using his private input Y. Most constructions so far have used combinational circuits to represent the function, which makes the garbled circuits quite large. This work instead considers sequential circuits, which have state. The size of a sequential circuit can be independent of the input size (e.g. an n-bit adder requires only a 2-bit full addition circuit and a 1-bit register). Moreover, they have tapped into the long experience of hardware designers, who have mastered the art of translating a high-level hardware description language into very complex hardware designs. As a result, they were able to implement garbled circuits for complex functions like SHA3 and RSA-8192, which had never been reported before.
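
Yao's basic idea can be illustrated for a single AND gate. The sketch below is my own textbook-style toy: it leaves out the oblivious transfer that delivers Bob's input labels, the point-and-permute and free-XOR optimisations, and of course all of TinyGarble's logic-synthesis machinery.

```python
import os, hashlib, random

ZERO = bytes(16)

def enc(kx, ky, label):
    """One-time pad derived from H(kx || ky); the appended zero block lets the
    evaluator recognise which table row decrypted correctly."""
    pad = hashlib.sha256(kx + ky).digest()                 # 32 bytes
    return bytes(p ^ m for p, m in zip(pad, label + ZERO))

def garble_and():
    """Garbler (Alice): pick a random label per wire per bit value and encrypt
    the AND truth table under the corresponding pairs of input labels."""
    wires = {w: (os.urandom(16), os.urandom(16)) for w in ("x", "y", "out")}
    table = [enc(wires["x"][bx], wires["y"][by], wires["out"][bx & by])
             for bx in (0, 1) for by in (0, 1)]
    random.shuffle(table)          # hide which row belongs to which input pair
    return wires, table

def eval_and(table, kx, ky):
    """Evaluator (Bob): holding exactly one label per input wire, only one row opens."""
    pad = hashlib.sha256(kx + ky).digest()
    for row in table:
        pt = bytes(p ^ c for p, c in zip(pad, row))
        if pt[16:] == ZERO:
            return pt[:16]         # the output-wire label, without learning the bit
    raise ValueError("no row decrypted")

wires, table = garble_and()
out = eval_and(table, wires["x"][1], wires["y"][0])        # inputs x=1, y=0
assert out == wires["out"][1 & 0]                          # the label for AND(1,0) = 0
```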

Session 9: Secure Hardware / Side-channels

"The ins and outs of programming cryptography in smart cards" by Pascal Paillier (CryptoExperts)

Pascal gave a general introduction to smart cards, which I will skip; it can be found in many good textbooks and overview articles. What excited me is his announcement of Opencard (the link should start working somewhere in Q2 2015). It is a truly, fully open smart card with full documentation, that can be programmed in C or assembly and that gives full access to the crypto coprocessor. The CPU is a 32-bit ARM with 600 kB of EEPROM and 18 kB of RAM. The crypto coprocessor supports DES/3DES, AES and RSA, as well as common mathematical operations. They will also create an Opencard Market for 3rd party extensions, like pairing implementations on the Opencard. This is a very good initiative, because smart cards are typically a closed technology. You have to rely on the semi-open MULTOS or JavaCard if you want to implement something yourself, and if you want to implement your own cryptographic protocols or algorithms, then that's too bad, because you have no direct access to the crypto coprocessor. We would love to get a copy and see how our Idemix implementations in IRMA would perform on this card. Pascal promised a fourfold speed increase. Let's see...

"The Need for Speed: Applications of High Performance Computing in Side Channel Research" by Elisabeth Oswald (University of Bristol)

Side channels like timing information, power fluctuations, acoustic signals, error messages, cache behaviour etc., can be used for key recovery, plaintext recovery and device fingerprinting. The big question is: can you somehow estimate how much your device leaks? A way to formalise this is to compute, given N observations, how much effort is needed to find the secret key. This gives you a leakage bound λ. For this you need to determine two things. The first is leakage detection: given a trace, determine which points give information about a particular secret. The second is leakage exploitation: you need a strategy to convert this information into bits of the secret you want to recover. It turns out to be important to combine the information that comes from the several leakage points (instead of picking only the single best one). With that you need only half the number of traces to guess the secret, compared to focusing on single points.
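
The following is a small, self-contained simulation of what detection and exploitation look like (my own toy, not Elisabeth's methodology or tooling): a few trace points leak the Hamming weight of a key-dependent value, correlation analysis finds those points, and combining all points instead of picking only the best one gives a stronger attack.

```python
# pip install numpy
import numpy as np

rng = np.random.default_rng(1)
N_TRACES, N_POINTS, SECRET = 3000, 50, 0xA7

# Simulated measurements: every trace leaks the Hamming weight of
# (input XOR secret) at trace points 17 and 31, buried in Gaussian noise.
inputs = rng.integers(0, 256, size=N_TRACES)
hw_true = np.array([bin(v ^ SECRET).count("1") for v in inputs], dtype=float)
traces = rng.normal(0.0, 1.5, size=(N_TRACES, N_POINTS))
traces[:, 17] += hw_true
traces[:, 31] += 0.7 * hw_true

def corr_per_point(key_guess):
    """Correlation of every trace point with the leakage predicted by a key guess."""
    pred = np.array([bin(v ^ key_guess).count("1") for v in inputs], dtype=float)
    pred = (pred - pred.mean()) / pred.std()
    z = (traces - traces.mean(axis=0)) / traces.std(axis=0)
    return np.abs(pred @ z) / N_TRACES

# Exploitation, single point: score each key guess by its best single trace point.
single = np.array([corr_per_point(k).max() for k in range(256)])
# Exploitation, combined: sum the correlations over all trace points instead.
combined = np.array([corr_per_point(k).sum() for k in range(256)])

print("secret:", hex(SECRET))
print("best guess (single point):", hex(single.argmax()))
print("best guess (combined)    :", hex(combined.argmax()))
```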

Session 10: Short talks 2

"We <3 SSL" by Emilia Kasper (Google)

Internet-breaking bugs are common. Heartbleed is but one example, even though it attracted a lot of attention. This is in a way surprising, because its NIST (NVD) vulnerability score was the lowest (compared to GOTO fail, the early CCS attack, and the RSA signature forgery): although Heartbleed is easily exploitable, its rated impact is low. Still: why the hype? Part of the explanation is the good branding (great logo!) and press coverage, making it part of pop culture. The simplicity of the exploit also helps: it is easy to explain to, say, your grandparents. Other attacks like remote code executions are not concrete enough and their true impact is not well known (offensive institutions are much better at judging bug impact). The main OpenSSL criticism: no code review. Then again, for Heartbleed, code review would not have worked. Code review is not an audit. Code reviewers are not trained to find complex bugs. Few people get paid to audit, and even fewer people get paid to turn a bug into an exploit. You get what you pay for... OpenSSL has made some changes. It expanded the development team: it is now 3 FTE strong and has 12 volunteers. Code reviews are now mandatory before changes are pushed, and a new security policy makes the procedure for handling security issues much more transparent. The (crypto) community could help OpenSSL in several ways:

  • Formal verification of cryptographic code, e.g. for the newish fast and constant-time elliptic curve implementations for P-224, P-256 and P-521
  • State machine analysis. The code of the current state machine is very old, and not written with adversarial behaviour in mind.
  • Smarter tools for finding/building exploits
  • Constant-time cryptography: AES, RSA and P-256 are well covered across platforms, but what about constant-time implementations of common operations like conditional operations (see the branch-free selection sketch after this list)? Also, authenticated encryption is brittle, and new primitives are really needed.
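
As a small illustration of the "constant-time conditional operations" point above (my own sketch; Python itself gives no real timing guarantees, so this only shows the branch-free masking pattern that C or assembly implementations use):

```python
MASK32 = 0xFFFFFFFF

def ct_select(bit, a, b):
    """Return a if bit == 1 else b, without a secret-dependent branch.

    Instead of `a if bit else b`, build an all-ones or all-zeros mask from
    the secret bit and combine both candidates arithmetically, so the code
    path does not depend on the secret.
    """
    mask = (-bit) & MASK32              # 0xFFFFFFFF when bit == 1, 0 when bit == 0
    return (a & mask) | (b & ~mask & MASK32)

assert ct_select(1, 0xDEAD, 0xBEEF) == 0xDEAD
assert ct_select(0, 0xDEAD, 0xBEEF) == 0xBEEF
```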

"Post-quantum key exchange for the TLS protocol from the ring learning with errors problem" by Joppe W. Bos, Craig Costello, Michael Naehrig and Douglas Stebila (NXP, Microsoft Research, and Queensland University of Technology)

The talk proposes a new key exchange protocol for TLS, based on cryptographic techniques that are believed to be unbreakable even on quantum computers. The idea is that the signature (used to prove authenticity) can still be based on traditional public key cryptography, because the signature only needs to be secure now, at the time of the handshake, while the key exchange needs to remain secure years later. They propose a key exchange protocol based on the ring learning with errors problem, where, by a clever encoding of the matrices involved, the keys are only 8 kB in size (instead of 245 kB for plain learning with errors based systems). They tested their ideas by actually integrating this in TLS, and found that it creates a relatively small overhead (a factor 1.1 to 1.25). For security reasons a small reordering of messages in TLS is required, though.
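
A toy numerical sketch of the ring learning with errors "approximate Diffie-Hellman" idea underneath such a protocol (my own illustration with made-up small parameters, not the paper's parameters, and leaving out the reconciliation step that turns the near-agreement into identical key bits):

```python
# pip install numpy
import numpy as np

N, Q = 256, 12289                        # toy parameters, not the paper's
rng = np.random.default_rng(7)

def mul(a, b):
    """Multiply polynomials in Z_q[x] / (x^N + 1)."""
    c = np.convolve(a, b)
    return (c[:N] - np.concatenate([c[N:], [0]])) % Q      # x^N = -1

def small():
    """A 'small' polynomial with coefficients in {-1, 0, 1}."""
    return rng.integers(-1, 2, N)

a = rng.integers(0, Q, N)                # public random polynomial

s_alice, e_alice = small(), small()
b_alice = (mul(a, s_alice) + e_alice) % Q       # Alice's public value

s_bob, e_bob = small(), small()
b_bob = (mul(a, s_bob) + e_bob) % Q             # Bob's public value

# Both sides compute something close to a * s_alice * s_bob.
k_alice = mul(b_bob, s_alice)            # = a*s_alice*s_bob + e_bob*s_alice
k_bob   = mul(b_alice, s_bob)            # = a*s_alice*s_bob + e_alice*s_bob

# The coefficients agree up to a small error; the real protocol runs a
# reconciliation step so both ends extract exactly the same shared bits.
diff = (k_alice - k_bob) % Q
diff = np.minimum(diff, Q - diff)        # centered absolute difference
assert diff.max() < Q // 8
```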

"Post-Snowden Elliptic Curve Cryptography" by Joppe Bos, Craig Costello, Patrick Longa and Michael Naehrig (NXP and Microsoft Research)

After the Snowden revelations there is a need for new elliptic curves to regain the confidence and acceptance of the public. We can take this opportunity to create curves with improved performance and better security, including perfect forward secrecy modes. "Nothing-Up-My-Sleeves" (NUMS) curve generation is important. The main idea is: 1) pick a prime (according to some criteria), and then 2) deterministically create curves over Fp. But if efficiency is the main criterion, how do we select the primes? This depends on whether you use saturated or unsaturated arithmetic (where unsaturated arithmetic leaves some bits in computer words unused for representing values, so they can be used for intermediate results like carry bits). This is a challenge, because the curve must be efficient on the widest possible set of platforms (on which either saturated or unsaturated arithmetic is most efficient). They propose a new high security curve Ted37919 over Fp with p = 2^379 − 19. It offers 188 bits of security and is NUMS generated, and therefore rigid and safe. Ted37919 is implementation friendly for both saturated and unsaturated modes.
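
Primes of this shape are popular because reduction is cheap: 2^379 ≡ 19 (mod p), so the high part of a product can be folded down with a multiplication by 19 instead of a full division. A minimal sketch of this standard pseudo-Mersenne trick (my own illustration, not the paper's optimised saturated or unsaturated code):

```python
import random

P_EXP, C = 379, 19
P = 2 ** P_EXP - C                       # the pseudo-Mersenne prime p = 2^379 - 19

def reduce_mod_p(x):
    """Reduce x modulo p using 2^379 = 19 (mod p): repeatedly fold the bits
    above position 379 back down, multiplied by 19, until x drops below p."""
    while x >= P:
        hi, lo = x >> P_EXP, x & ((1 << P_EXP) - 1)
        x = hi * C + lo
    return x

x = random.getrandbits(2 * P_EXP)        # e.g. the product of two field elements
assert reduce_mod_p(x) == x % P
```

In real implementations this folding is done on the limb representation (saturated 64-bit limbs, or unsaturated limbs with spare bits for carries), which is where the saturated versus unsaturated trade-off from the talk comes in.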

In case you spot any errors on this page, please notify me!
Or, leave a comment.

Dan Bogdanov, 2015-01-11 14:11:09

Hi,

A small note - it is not the goal of Sharemind to define away Personally Identifiable Information. Our goal is to make Sharemind a privacy enabler like no other. It’s just that, at this time, the legislation has not caught up to the properties of PETs.

Dan @sharemind