RFIDSec 2012: Day #2. Summary of presentations.

July 3, 2012

This is the second part of a (not so) brief report on RFIDSec 2012 held in Nijmegen from July 2 to July 3, 2012. The report on day one can be found here.

Note that next year's RFIDSec will be held in Graz, Austria.

Invited talk #2

Joan Daemen - Permutation-based symmetric cryptography

Joan presented his work with the KECCAK team.

Hash functions F are usually constructed using a two-layer approach (direct designs are rare): first define a fixed-length compression function f, and then design an iterating mode.

Merkle-Damgård (MD) is a well-known iterating mode. To make the iterating mode preserve the collision resistance of the fixed-length compression function, you need to pad the message with its length (called MD strengthening). To prevent length extension attacks (where you can append a string to a known message and create a digest for the new message), you need to make the last step different, for example by applying f once more with a different IV (this is called enveloped MD). Finally, if you need a much longer digest (a mask generation function, MGF), you can apply the hash construction several times to the same message padded with an increasing counter, and concatenate the digests to create the long digest.
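
As a concrete illustration of that counter construction, here is a minimal Python sketch in the style of MGF1 from PKCS#1, with SHA-256 as a stand-in hash; the 4-byte big-endian counter encoding is an illustrative assumption, not something fixed by the talk.

```python
import hashlib

def mgf(message: bytes, out_len: int, hash_fn=hashlib.sha256) -> bytes:
    """Counter-based long-output construction: hash the message with an
    increasing counter appended and concatenate the digests."""
    digest_len = hash_fn().digest_size
    blocks = []
    for counter in range((out_len + digest_len - 1) // digest_len):
        blocks.append(hash_fn(message + counter.to_bytes(4, "big")).digest())
    return b"".join(blocks)[:out_len]

# 100 bytes of output from a 32-byte digest takes four iterations.
print(mgf(b"some message", 100).hex())
```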

The iteration mode should be sound: it should be property preserving (F inherits properties of f), and differentiating it from a random oracle should be hard.

Security then depends on the compression function f. So how does one construct it? Almost all proposals use a block cipher, taking the message block as the key and encrypting the chaining value. Without countermeasures, this makes it very easy to create collisions for this f (e.g. take an arbitrary output, pick an arbitrary key, and decrypt to compute the input chaining value). To prevent this one adds a feed forward (Davies-Meyer), where the input chaining value is XORed with the output of the block cipher.
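
A toy Python sketch of the Davies-Meyer feed forward; the "block cipher" below is a made-up Feistel toy, purely to have something runnable, and is in no way secure.

```python
MASK32 = (1 << 32) - 1

def toy_cipher(key: int, block: int, rounds: int = 8) -> int:
    # Made-up 64-bit Feistel toy standing in for a real block cipher.
    left, right = block >> 32, block & MASK32
    for r in range(rounds):
        left, right = right, left ^ ((right * 0x9E3779B9 + key + r) & MASK32)
    return (left << 32) | right

def davies_meyer(chaining: int, msg_block: int) -> int:
    # Use the message block as the key and encrypt the chaining value;
    # the feed forward XORs the input chaining value into the output,
    # destroying the easy invertibility that enables collisions.
    return toy_cipher(msg_block, chaining) ^ chaining
```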

Note that the hash function cliché of it being the Swiss army knife of cryptography is not true. It is block ciphers that are used for hashing, block encryption, stream encryption, MAC computation and authenticated encryption. But the question is: do we really need a block cipher when constructing a compression function?

Formally, an n-bit block cipher with a k-bit key is a family of 2^k permutations of n-bit blocks. Alternatively, it is a b-bit permutation (where b = n + k) obtained by repeating an invertible round function, that has an (efficient) inverse, and where there is no diffusion from the data path to the key part (needed to be able to compute the inverse). This reflects the typical structure of a block cipher: an expanded key is computed from the key using a key schedule (note Joan says he doesn't really know good methods to create a good key schedule), and is used to encrypt the data path. This should be set up in such a way that the inverse block cipher is easy to compute.

The main observation is that you don't need invertibility for a hash function, so maybe it is beneficial to get rid of the costly inverse. The third restriction is then not needed for hashing, which allows us to simplify the construction to an iterated permutation (where the inverse does not need to be efficient; you still need a permutation, because otherwise you can easily create local collisions). Removing this restriction also allows for more diffusion.

The main new contribution is the sponge construction, defined as follows. Assume we have a b-bit permutation f. Set b = r + c, where r is the rate and c is the capacity. To hash a message, first pad it to a multiple of r bits. Then apply f repeatedly, XORing r bits of the message into the r most significant bits of the previous stage, while forwarding the remaining c least significant bits unchanged to the next stage. This is the so-called absorbing phase. What follows is the squeezing phase, which repeatedly applies f to the output of the absorbing phase to generate an arbitrary-length digest, delivering the r most significant bits of the output of f after each step.
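
A minimal Python sketch of the absorb/squeeze flow, at byte granularity. SHA-256 here is only a runnable stand-in for the fixed function f (it is not actually a permutation), and the 10*-style padding is a simplification.

```python
import hashlib

RATE, CAPACITY = 16, 16            # r and c in bytes; b = r + c = 32

def f(state: bytes) -> bytes:
    # Stand-in for the b-bit permutation f (SHA-256 is NOT a permutation).
    return hashlib.sha256(state).digest()

def sponge(message: bytes, out_len: int) -> bytes:
    # Pad to a multiple of r bytes.
    message += b"\x80" + b"\x00" * ((-len(message) - 1) % RATE)
    state = bytes(RATE + CAPACITY)
    # Absorbing phase: XOR each r-byte block into the outer part of the
    # state, keep the inner c bytes untouched, then apply f.
    for i in range(0, len(message), RATE):
        block = message[i:i + RATE]
        outer = bytes(a ^ b for a, b in zip(state[:RATE], block))
        state = f(outer + state[RATE:])
    # Squeezing phase: output r bytes per application of f.
    out = b""
    while len(out) < out_len:
        out += state[:RATE]
        state = f(state)
    return out[:out_len]

print(sponge(b"hello sponge", 32).hex())
```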

The sponge construction is proven secure against generic attacks up to 2^(c/2) operations. The reasoning is that, given the r input and output bits of f, guessing the remaining c input (and output) bits is hard (expected 2^c time). So the capacity is tightly coupled to the security level; there is a tradeoff, as increasing the capacity reduces the rate (b being fixed).

For a particular instantiation of the function f, in the end, the only 'proof' of security in symmetric cryptography is that many people tried to attack it, and nobody succeeded.

Sponges are very flexible and can be used in many modes; keyed modes are slow, but can be boosted. For example, using a sponge-like construction, you can simply hash the concatenation of a key and a message to compute a MAC (the reason HMAC is so ugly is that the underlying hash is not OK). Similarly, you can create a keystream by applying a sponge to the concatenation of the key and an IV.
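
Reusing the sponge sketch above, these keyed modes really are one-liners (the key, IV and messages are arbitrary example values):

```python
key, iv = b"secret key", b"nonce-0001"

# MAC: absorb key || message, squeeze a tag (no HMAC-style double hashing).
tag = sponge(key + b"message to authenticate", 16)

# Keystream: absorb key || IV, squeeze as many bytes as needed.
plaintext = b"attack at dawn"
keystream = sponge(key + iv, len(plaintext))
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))
```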

Finally, it could be used to implement authenticated encryption (the idea being that authenticated encryption is more efficient than computing encryption and a MAC separately). But if you try to do this, you mix the absorbing and squeezing phases (which is normally not allowed in a sponge). Therefore, they define a generalisation of the sponge called the duplex construction, which in each stage gets some input and provides some output (that depends on all previous stages). The duplex object, based on the underlying permutation f, is one stage.
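
A rough Python sketch of the duplex idea, again reusing f, RATE and CAPACITY from the sponge sketch above. The padding and the authenticated-encryption flow are simplifications in the spirit of the duplex-based schemes, not the exact published construction.

```python
class Duplex:
    def __init__(self, key: bytes = b""):
        self.state = bytes(RATE + CAPACITY)
        self.duplexing(key)                  # absorb the key first

    def duplexing(self, inp: bytes) -> bytes:
        # Each call absorbs one (padded) input block and returns output
        # that depends on all previous calls.
        padded = inp[:RATE - 1] + b"\x80"
        padded += bytes(RATE - len(padded))
        outer = bytes(a ^ b for a, b in zip(self.state[:RATE], padded))
        self.state = f(outer + self.state[RATE:])
        return self.state[:RATE]

# Authenticated encryption sketch: one call yields keystream for the
# block, a later call over the ciphertext yields an authentication tag.
d = Duplex(b"secret key")
block = b"attack at dawn"
ct = bytes(p ^ k for p, k in zip(block, d.duplexing(b""))) # keystream
tag = d.duplexing(ct)[:8]                   # depends on key and ciphertext
```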

Security of a duplex is less strong, because using the extra input/output an attacker can fix the r input bits to f to a value of their choice (e.g. 0), and so create multiple instances with the same input; a guess for the remaining c input bits is correct if it matches an output of any of these instances.

Several proposals for sponge functions exist: KECCAK, Quark, PHOTON, Spongent. They are all presented as lightweight hash functions, which in this community means low area.

Assume we want a security level of c/2 bits (e.g. 80 bits, so c = 160). In a Davies-Meyer ("narrow pipe") approach, the chaining value (block size) needs n ≥ c, the key length k ≥ n, and the feed forward another n bits, so the total state is ≥ 3c (≥ 480 bits in the example). In comparison, for a sponge ("huge state") the permutation width is c + r, where r can be made arbitrarily small, so the total state is c + r (just over 160 bits in the example). The sponge's "huge state" is not so huge after all, which is why these constructions can be called lightweight hash functions.

Joan then went into detail on KECCAK performance in hardware, and on how keyed sponge constructions are perceived as possible but inefficient. He notes that unkeyed versions of hash functions are in general much weaker than keyed modes (e.g. unkeyed MD5 is badly broken, but for keyed use very little progress has been made on preimage generation).

Basically it comes down to a comparison between permutations and block ciphers. Unique block cipher features are pre-computation of the key schedule, and that even if the user does something stupid you still have some level of protection (which is not the case for stream ciphers). The unique features of permutations are diffusion across the full state and flexibility: you can adjust rate and capacity.

The efficiency of keyed modes can be improved: because the adversary has less information on the internal state, you can for example reduce the number of rounds.

Discussion: lightweight cryptography typically focuses on low size (i.e. a small state). What do I need to do to achieve low power or low energy? For low power you need a design that is efficiently serialisable. Low energy is different: you need a lot of diffusion.

Session #4 : Cryptanalysis (2)

Luigi Sportiello - Weakening ePassports through Bad Implementations

Luigi presented his results on the security of electronic passports. Many countries issue passports with a contactless chip in the cover. Basic Access Control (BAC) mediates access to this chip using information stored in the Machine Readable Zone (MRZ). The MRZ can be read optically, and from it the terminal can compute an access key and an encryption key.

The problem is that the data in the MRZ is predictable. Passport numbers are often correlated with the date of issue, for example. The entropy of the resulting keys is therefore low, decreasing BAC security. In Italy, the average MRZ entropy is 42 bits.

Moreover, the e-passport specification is not entirely clear about the required response of a smart card in certain cases, and does not specify how the card should respond to unknown or illegal commands. Therefore, implementations of different passport variants can be distinguished by their behaviour on particular commands. This can be used to determine the issuing country of an e-passport. For Italy, it can be used to further determine the passport application version, which strongly correlates with the date of expiry of a passport. This further reduces the entropy in the MRZ to values below 40 bits, especially for the first batches of a new version of the e-passport.

When asked for only 1 byte of random data in the first stage of the BAC protocol (GET CHALLENGE), an Italian e-passport does not return an error but indeed returns a 1-byte random nonce (which is internally padded with 0's to 8 bytes). This allows for a man-in-the-middle attack, where the man in the middle repeatedly asks the e-passport for a 1-byte random nonce until it returns 0; in that case the internal random nonce is the all-zero block. In the second stage, this nonce is the first part of the plaintext that gets encrypted, so a lookup table over the reduced key space can be used to recover the encryption key.
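
A hypothetical Python sketch of this attack logic; get_challenge and encrypt below stand in for the real APDU exchange and the BAC block encryption (3DES in the actual protocol) and are not real APIs.

```python
def wait_for_zero_nonce(get_challenge) -> None:
    # Probe with GET CHALLENGE asking for 1 byte until the returned
    # byte is 0x00; the card's internal 8-byte nonce is then all zeros.
    while get_challenge(length=1) != b"\x00":
        pass

def build_table(candidate_keys, encrypt) -> dict:
    # Offline: over the reduced key space (below 2^40 keys for some
    # Italian passports), map the encryption of the known all-zero
    # nonce block to the key that produced it.
    zero_nonce = bytes(8)
    return {encrypt(k, zero_nonce): k for k in candidate_keys}

# Online: the first ciphertext block observed in the second BAC stage
# indexes straight into the table (with CBC and a zero IV, it depends
# only on the key and the known first plaintext block).
```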

A new EC regulation mandates an eDriving Licence, which also involves a contactless chip and an MRZ. Will the lessons learned be applied there?

Daniel Bernstein and Tanja Lange - Never trust a bunny

Dan and Tanja both presented. The Hopper-Blum (2001) (HB) authentication protocol is based on the difficulty of learning parity with noise (LPN). The tag holds a secret s. The reader sends a random matrix C. The tag responds with T = Cs + e, where each bit of the noise vector e is set with probability t. The reader checks that T − Cs has at most t′n bits set (for a threshold t′ between t and 1/2).
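
A small Python sketch of one HB round over GF(2); the dimensions, noise rate and acceptance threshold are illustrative choices, not the paper's parameters.

```python
import secrets

def hb_response(C: list[list[int]], s: list[int], tau: float) -> list[int]:
    # T = C*s + e over GF(2); each noise bit is 1 with probability tau.
    return [(sum(c * x for c, x in zip(row, s))
             + (1 if secrets.randbelow(10**6) < tau * 10**6 else 0)) % 2
            for row in C]

def hb_verify(C, s, T, threshold: float) -> bool:
    # Accept iff T - C*s has at most threshold * n bits set.
    errors = sum((t - sum(c * x for c, x in zip(row, s))) % 2
                 for row, t in zip(C, T))
    return errors <= threshold * len(C)

n, tau = 256, 0.125                      # illustrative parameters
s = [secrets.randbelow(2) for _ in range(n)]
C = [[secrets.randbelow(2) for _ in range(n)] for _ in range(n)]
assert hb_verify(C, s, hb_response(C, s, tau), threshold=0.25)
```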

But HB can also be broken without solving LPN. For example, a malicious reader can send a special matrix C, say with the first column set to all 1's and the remaining columns to 0, so that majority voting over the noisy response reveals the first bit of s. This is an active attack.

Lapin is an improvement of HB, where the challenge C is restricted to a nice subspace, and where the reader and the tag share two secrets s and s′. The tag computes the response T = R(Cs + s′) + e, where R is a random invertible matrix (that is sent along). The reader verifies that R is indeed invertible, and checks the response value as before. Security of this protocol is based on Ring-LPN.

The FSE 2012 paper claims that Lapin has speed "comparable" to AES and has "provable" security: Ring-LPN is as hard to break as LPN. But how hard is it to attack standard LPN? The standard strategy of Blum-Kalai-Wasserman (2000) is to combine responses whose underlying challenges agree on their starting bits (which therefore cancel). This was later improved by Levieil-Fouque (2006). For Ring-LPN, Kirchner (2011) uses the invertibility of the matrix R to make the attack even more efficient.
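
A simplified Python sketch of one such reduction step in the style of BKW; real attacks iterate this, manage the noise growth, and finish with majority voting or Walsh transforms.

```python
from collections import defaultdict

def bkw_reduce(samples, block: int):
    # samples: (challenge bits, response bit) pairs over GF(2).
    # XOR pairs whose first `block` challenge bits agree, cancelling
    # those bits; the noise rate of the combined samples grows.
    buckets = defaultdict(list)
    for c, t in samples:
        buckets[tuple(c[:block])].append((c, t))
    reduced = []
    for group in buckets.values():
        (pc, pt), rest = group[0], group[1:]
        for c, t in rest:
            reduced.append(([a ^ b for a, b in zip(c, pc)][block:], t ^ pt))
    return reduced
```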

Using these results, Dan and Tanja break Lapin using 2^56 bytes of memory, 2^38 queries and 2^98 bit operations (the memory usage beating the 2^77 bytes of memory cited in the FSE Lapin paper), though the cost in bit operations remains high.

The conclusion was that looking only at memory usage is inconclusive and in general not a good idea. Also, what exactly counts as a single bit operation is not very well defined.

Invited talk #3

Konstantinos Markantonakis - Interplay of Business Objectives and Academic Research - Holders of NFC Mobile Service Destination

Kostas structured his talk in three parts. Part 1: are smartcards the weakest link? Part 2: Near Field Communication (NFC): lessons learned. Part 3: user centric devices and NFC.

Part #1: are smartcards the weakest link?

Security of smart cards is provided at several levels: tamper resistance, OS logical security, organisational and overall security. Smart cards are only one element of a (much larger) system.

Smart card attack points are: data communication, data processing, data storage. The attack methods are: social, hardware, logical.

This is illustrated with three use cases:

Case #1: EMV. Static data authentication (SDA) proves the authenticity of data stored on the card, but it does not prevent cloning. A PIN is used for cardholder verification (CHV). Wedge attacks are used to create counterfeit cards by copying data onto a magstripe while recording the PIN; industry knows about these attacks. Relay attacks require modified POS terminals (and modified cards or emulators), and are hard to prevent. A possibility is to factor the location of the device into the authentication. Another interesting idea that Kostas mentioned is that both devices (think mobiles with accelerometers) should be involved in the same motion when communicating. Other countermeasures against relay attacks are tamper-proofing terminals (consumers should inspect terminals, and merchants should inspect cards, to detect emulators) and distance bounding protocols. As a final option one could consider using a third, user-controlled, intermediate device to confirm the transaction. This is an idea that needs further research.

Case #2: public transport. The Mifare Classic was hacked because its security was based on obscurity; a weak random number generator also didn't help.

Case #3: satellite TV. Typically this involves a set-top box with a Common Access Module (CAM) that accepts your smart card. The card holds the cryptographic keys. It sends these keys to the CAM, which decrypts the scrambled signals. Attack: a "logger" (our "interesting device") that serves as a man in the middle between the smart card and the CAM. It can intercept and then re-distribute keys to anyone interested. Moreover, it can block kill commands sent to the card. All these attacks are possible because the card needs to rely on the CAM to decrypt the signals (as it is not powerful enough to do so itself).

Satellite TV providers counterattacked ("black friday" and "black sunday") by distributing an application update to cards, and after a few months they sent a special (hidden) kill command that would be received and executed by the updated application.

Kostas claims satellite TV providers are winning by using more advanced cards and more complex protocols, and because set-top boxes are increasingly online.

Part #2: Near Field Communication (NFC): lessons learned

NFC was created by Sony and NXP. It incorporates RFID/contactless technology into mobile devices, allowing a mobile phone to act as either a contactless card or a card reader. There are a great many possible applications (payment, transport, access control, etc.); payment is the most widely deployed. It is starting to be implemented on a variety of platforms (Android, Symbian, Bada, Blackberry, Windows).

Central is the concept of a secure element, for which there are several implementation options. The first is entirely in software: low cost, but with increased risk and weak security. Another option is a secure processor executing a software SIM; then secure boot is possible, but performance is an issue, and certification requires a combined hardware/software evaluation. Thirdly, one could use the conventional SIM: proven technology with established security, but SIMs are controlled by the mobile network operator. Finally, a secure MicroSD can be used as a secure element.

The introduction of NFC raises new security concerns (not for the NFC devices themselves, but for contactless cards). NFC mobile phones are a good attack platform: they are cheaper (than, say, a Proxmark 3) and have some additional features (display, wireless network). Moreover, they have an acceptable form factor (they don't look like attack devices). They do have shortcomings too: a short operational range (a few centimetres), and for cloning the problem remains that the Answer-To-Reset (ATR) of a genuine card cannot be generated by the NFC clone, preventing perfect clones.

Part #3: user centric devices and NFC.

If you look up the definition of innovation in a business dictionary, it says something like "an idea that must be replicable at an economically acceptable cost, and serve a specific and real need".

Early smart cards were not really multi-application (applications had to be installed in ROM and had to follow a specific pre-agreed application structure). In 1999, smart card systems with the OS in ROM and applications in EEPROM appeared (MULTOS, JavaCard). But what went wrong? Why didn't we see any real schemes adopting multi-application smart card technology?

The focus was too much on technology, with no consideration for business and deployment models. Stakeholders already had a lot of vested, conflicting interests. So convergence was a bridge too far, and the issuer-centric model kept dominating multi-application smart cards.

According to Kostas, the smartphone era is starting to change this. A phone can contain many smart card applications (based on NFC technology). However, security is a big concern, and the underlying Trusted Service Manager concept leads to market segmentation. There are several application examples: Google Wallet (based on an NFC phone), Barclays PayTag (a sticker that you glue on the back of your phone), and Orange Quick Tap (which allows users to add debit and credit cards to a mobile phone through the Quick Tap app).

At Royal Holloway, University of London, they started studying the user-centric smart card ownership model in 2008, putting the user central. In March 2012, GlobalPlatform came up with its consumer-centric model, which closely resembles this idea. Still, there are unresolved issues with using mobile phones. How to make a cost-effective bootstrapping process? How to cater for advertising/branding? How to securely communicate with the user? How to balance issuer and user (security) requirements? How to efficiently (and cost-effectively) do composite evaluations that involve both hardware and software?

Session #5 : Hardware implementation

Hannes Groß and Thomas Plos - On Using Instruction-Set Extensions for Minimizing the Hardware-Implementation Costs of Symmetric-Key Algorithms on a Low-Resource Microcontroller

Hannes presented. Protocols for RFID are becoming more and more complex. Additionally, the following requirements must be met: short development cycle, short time to market, agility (in terms of adding applications), and low power consumption, among others.

There are basically two approaches to RFID tag hardware: a micro-controller design versus a finite state machine (FSM) design. FSM is best for simple functionality, while the micro-controller design is better for complex functionality; yet FSM has smaller chip size, lower power and higher performance. The goal of the research presented is to give some of these FSM advantages to the micro-controller design as well. The approach is to use instruction-set extensions (ISE): instead of using them for speedup, use them to reduce chip area, giving a good hardware/software tradeoff.

The team implemented several block ciphers (SEA, PRESENT, XTEA) and provided three optimisations for each (for speed, size, and balanced). The results were evaluated on implementation cost (gates), area savings due to ISE, the hardware cost of implementing the ISE, ROM efficiency, speed-up and power consumption (although the latter two were not the main focus).

Several ISE candidates were considered. S-boxes: expensive in software, but efficient in hardware. Permutations: take many cycles and instructions in software, while in hardware they are only wiring (plus a multiplexer); see the sketch below. Register swapping: happens a lot in Feistel networks. Arithmetic extensions were also considered (e.g. addition with carry, two's complement calculation). Other ideas for ISE were discarded (the barrel shifter, for example) because they were not really feasible.
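
As an illustration of why permutations are ISE candidates, here is PRESENT's 64-bit pLayer in plain Python: a loop of shifts, masks and ORs in software, but zero-gate wiring in hardware.

```python
def present_player(state: int) -> int:
    # PRESENT pLayer: bit i moves to position (16 * i) mod 63, with
    # bit 63 fixed. Free wiring in hardware; dozens of instructions
    # per call in software.
    out = 0
    for i in range(64):
        j = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << j
    return out
```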

Results: up to 48% of the hardware implementation cost was saved, and a speed-up between 1.2% and 4% was achieved. It even seems that a micro-controller based design can be smaller than an FSM based design.

Discussion: overall hardware size is reduced because the special instructions shrink the code, so the EEPROM needs to hold fewer instructions. No clear benefits (or drawbacks) w.r.t. power consumption were found.

Daniel Holcomb, Amir Rahmati, Mastooreh Salajegheh, Wayne Burleson and Kevin Fu - DRV-Fingerprinting: Using Data Retention Voltage of SRAM Cells for Chip Identification

Wayne presented. This talk is about circuit-level Physically Unclonable Functions (PUFs). PUFs can be used to authenticate RFID devices. They are based on physical randomness over space (across the silicon die), not over time (their responses should be consistent). There are many proposals for PUFs. Question: can we use existing circuitry (in this case SRAM) to create PUFs? Yes: SRAM cells power up to 0 or 1 fairly reliably on each chip; the question is how to improve the success rate of chip identification based on these SRAM variations.

There is a good survey by Maes and Verbauwhede (2010) on PUFs (and a new version coming out soon in Maes's thesis).

Data Retention Voltage (DRV): SRAM cells cannot reliably retain their state below a certain voltage (typically 200-300 mV). We want to determine which cells flip at which voltage, leading to DRV fingerprinting. So instead of power-up fingerprinting, the authors study power-down fingerprinting. A second DRV measurement is typically close to the first. DRV turns out to be a better discriminant than plain power-up measurement, providing better precision and recall. However, temperature has an impact on the measurements, which is a problem especially when a measurement is taken at a temperature different from the original sampling temperature.
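
A sketch of how such fingerprints might be compared; the distance metric and nearest-neighbour matching here are illustrative assumptions, not necessarily the paper's exact method.

```python
def drv_distance(fp_a: list[float], fp_b: list[float]) -> float:
    # Fingerprint = per-cell voltage (in volts) at which the cell flips.
    # Mean absolute per-cell difference as a simple distance measure.
    return sum(abs(a - b) for a, b in zip(fp_a, fp_b)) / len(fp_a)

def identify(measured: list[float], enrolled: dict[str, list[float]]) -> str:
    # Match a fresh measurement to the closest enrolled chip.
    return min(enrolled, key=lambda chip: drv_distance(measured, enrolled[chip]))

db = {"chip-A": [0.21, 0.27, 0.24], "chip-B": [0.30, 0.22, 0.26]}
print(identify([0.22, 0.26, 0.24], db))    # -> chip-A
```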

The technique relies on precise control (10 mV increments) over the power supply. This is easy in a lab setting, but not necessarily in the field (where you want to apply authentication in practice). In the case of RFID, you might actually be able to use the natural voltage decay of the tag. Moreover, the technique is time consuming, as you need to decrease the voltage slowly.

In case you spot any errors on this page, please notify me!
Or, leave a comment.
martijno, 2012-07-04 08:24:10

Interesting talks! Thanks for the summary. Will proceedings be published?

Martijn

Jaap-Henk, 2012-07-04 08:34:17

Draft papers are available through the conference website. Proceedings will be published by Springer later, probably in the fall of 2012.