RFIDSec 2012: Day #1. Summary of presentations.
This is a brief report on the first day of RFIDSec 2012, held in Nijmegen from July 2 to July 3, 2012. For this edition I was the program chair, together with Ingrid Verbauwhede. Lejla Batina and her team provided a perfect local organisation.
Invited talk #1
Florian Michahelles – When will RFID embrace our everyday lives?
Florian addressed the following simple question “why hasn’t RFID happened yet?”. For this, he first sketched the history of RFID (and barcodes).
In the 1970s there were island solutions: specific tags for specific applications. In 1990 standards started appearing, and in 2000 the Auto-ID Center was formed with the mission to create the 5-cent tag (a goal that still has not been reached). The focus was on supply chain management.
In the hype years (early 2000s) 50% of companies claimed that RFID had strategic relevance to them. The main “pullers” were Metro and Walmart. Metro, in a roadmap from 2000, claimed that by 2006 all individual items in a shop would be tagged with RFID (this has still not happened). They stepped back after a public opinion backlash over perceived privacy threats. They didn’t communicate properly and really had a customer relations problem. This has resulted in a strong association among the general public between RFID and privacy (revealed when analysing combinations of keywords common in search queries). Walmart planned in 2003 to force RFID on its suppliers, with a deadline of 2006 by which all pallets it received from external suppliers should be tagged with RFID. In 2007 they finally publicly stepped back from this strategy, and became very silent on RFID from 2009 on.
Actually this pattern is natural for technology: we first see a technology trigger, then a peak of inflated expectations, followed by a trough of disillusionment (this is where RFID currently is), after which a slope of enlightenment takes us to the plateau of productivity.
The curve is similar to the adoption curve for barcodes. It took 50 years after the inception of barcodes for 80% of retail stores to use barcode scanners. Standardisation (ISO 15420) happened 27 years after inception (and note that it was not the best technology that made it to become the EPC standard). RFID suffers from the same chicken-and-egg problem as barcodes: you need a barcode printed on products before scanners are useful in a retail store; but why would you print a barcode if no retail store has barcode scanners?
RFID/EPC is roughly at the start (1976) of the barcode usage curve. Estimates of the business benefits and costs of RFID changed from 1975 to 1997: the soft benefits in particular increased dramatically (by a factor of 12), while the costs dropped by half.
The challenge is to set up a framework for companies to share information, as well as to share the costs and benefits of implementing RFID technology. The hype years are now over, and businesses are now focusing on what really works. There is a role for the RFIDSec community as well: to develop security tools that can be understood by consumers.
adoption = technology + acceptance (business and consumer)
In order for RFID to embrace our everyday life, the following three steps need to be taken.
Step #1: Construct a business case. It should be clear how any investment in RFID technology will deliver a profit. Why does it make sense to replace, say, barcodes with RFID? What benefits does it bring? For example, RFID can be used to tie additional services to a product. Maybe we can build an “appstore of things”. Value is added if others can contribute to your product (this relates to the DIY scenario outlined below). Using sensors, products and things produce (and consume) data that can be used to create a community around data collected as a by-product of something else.
Step #2: Make it out-of-the-box. Ensure a plug-and-play feeling for RFID technology. Ensure ease of installation, deployment, and maintenance. Build user interfaces that support the user’s mental model (trust and security should not only be formally proven correct). A good example is the tear-off antenna from IBM, which immediately makes clear to the user that the tag can no longer be read. Another interesting approach is to use social network access models to manage access to your things. Maybe things should have their own Facebook page…
Step #3: Provide customer experience. Make sure that the immediate value for the consumer is clear. Build services that make sense in daily life (e.g. payment). Make it available and adaptable by everyone: make “RFID for everyone”, make it “fun”, appeal to the Do-It-Yourself (DIY) feeling.
As an example scenario, consider hybrid shopping, which brings the benefits of e-commerce to brick-and-mortar stores. In this scenario, brick-and-mortar shops become the physical outlet for the on-line store. They become a showroom for the e-commerce store, so the physical shop needs to keep no more than one of each item in stock.
Concluding, we see that the evolution of the Internet of Things proceeds through the following stages: 1) the world is the index (enabled by RFID), 2) take the world online, 3) things start to talk to each other, and 4) things start to become intelligent.
During the discussion the following points came up. Wireless technology is unnerving: therefore you need extra benefits to convince consumers. RFID removes certain explicit actions (such as placing your card on a reader) that help increase security and trust. The benefit of RFID tags over barcodes is that you can write/send to a tag as well. If an active device (a thing) sits behind the tag, you can influence its state. Finally it was observed that RFID needs NFC to take off.
Session #1 : Cryptanalysis (1)
Masoumeh Safkhani, Pedro Peris-Lopez, Nasour Bagheri, Majid Naderi
and Julio C. Hernandez-Castro – On the Security of Tan et al. Serverless RFID Authentication and Search Protocols
Julio presented some attacks on the Tan et al. authentication and search protocols. These are hash-based protocols that can operate while the server is offline, and that provide mutual authentication of tags and readers. The speed of these protocols is increased by truncating the hash (which depends on the tag identifier) to a small number of bits. The novel contribution of these protocols is a set of search mechanisms that allow a reader to find a tag in a very large population of tags (where there is a tradeoff between efficiency and privacy). The search mechanisms are based on the reader sending the first few bits of the identifier searched for, and only tags that match will respond (meaningfully).
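The efficiency/privacy tradeoff of such a search mechanism is easy to see in a toy sketch. This is an illustration of the general idea only, not the actual Tan et al. protocol; `trunc_hash`, `Tag`, and `search` are invented names:

```python
import hashlib

def trunc_hash(identifier: bytes, m: int) -> str:
    """First m hex characters of a SHA-256 digest (a stand-in for the protocol's hash)."""
    return hashlib.sha256(identifier).hexdigest()[:m]

class Tag:
    def __init__(self, identifier: bytes):
        self.identifier = identifier

    def respond(self, query: str, m: int):
        # A tag answers (meaningfully) only if the truncated hash of its own
        # identifier matches the prefix broadcast by the reader.
        if trunc_hash(self.identifier, m) == query:
            return self.identifier  # placeholder for the real authentication response
        return None

def search(tags, wanted: bytes, m: int):
    # Reader-side search: broadcast the truncated hash, collect responding tags.
    query = trunc_hash(wanted, m)
    return [t for t in tags if t.respond(query, m) is not None]

population = [Tag(bytes([i])) for i in range(256)]
matches = search(population, b'\x2a', m=2)
# The searched tag always responds; a shorter truncation means more colliding
# tags respond too (better privacy for individual tags, less efficiency).
```

Shortening `m` enlarges the set of tags that answer each query, which is exactly the efficiency/privacy tradeoff mentioned above.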
The attack works only if the hash used is a Merkle-Damgård hash belonging to the PGV compression group (where part of the message is used as key for a symmetric encryption) that does not use MD strengthening (which Julio admitted is not a standard assumption).
Moreover, the Tan et al. protocols are vulnerable to the following cloning attack: just intercept a query from a reader for a tag, and use the first two fields from that message as a response to impersonate that tag to a different reader. This exploits the symmetry between the query from the reader and the response from the tag, and the fact that reader nonces can be abused.
During the discussion, it was asked what is the best protocol that is currently not broken. Julio answered that there really wasn’t any (under reasonable assumptions), and that the field needs to become more mature. Maybe the focus should be on using efficient public-key approaches instead.
Xavier Carpent and Gildas Avoine – Yet Another Ultralightweight Authentication Protocol that is Broken
Xavier presented an attack on ultralightweight authentication protocols (below the 1k gate count), like UMAP, Gossamer, etc. These protocols don’t use hash functions, ciphers or PRNGs, and instead rely on simple XORs, rotations, and modular additions. As a result they are stateful (where the state is updated after a successful authentication run). Two types of attacks are considered: key recovery and tag traceability.
The attack applies to many similar protocols; in the presentation Xavier focused on a key-recovery attack against the Eghdamian-Samsudin (2011) protocol. The attack tries to establish the Hamming weight of the nonce used. This succeeds 90% of the time for a particular run (because the Hamming weight determines the rotation used when computing a response; all possible Hamming weights can be tested for consistency with the response). This Hamming weight equals the Hamming weight of the key in 1 out of 18 cases. If this is the case, then half of the key bits can be recovered.
Xavier then discussed general weaknesses of these classes of protocols: AND and OR are biased. For example, if D = (A OR B) XOR C, then ~D is a good estimate (75%) of C. Modular addition and XOR are linear and act the same on the LSB. As a result, equations like (X XOR A) = (X XOR B) + C can be solved quite easily. Finally, rotations preserve Hamming weight and structure. Therefore, almost all ultralightweight protocols are broken within a few months after publication.
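The OR bias is easy to verify empirically. A minimal single-bit sketch (illustrative names, not from the talk):

```python
import random

def estimate_bias(trials=100_000, seed=1):
    """Empirically check that ~D predicts C when D = (A OR B) XOR C."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a, b, c = rng.getrandbits(1), rng.getrandbits(1), rng.getrandbits(1)
        d = (a | b) ^ c
        if (1 - d) == c:  # use ~D as a guess for C
            hits += 1
    return hits / trials

print(estimate_bias())  # close to 0.75
```

The reason: A OR B equals 1 with probability 3/4, in which case D = NOT C and the guess ~D = C is correct; only when both A and B are 0 (probability 1/4) does the guess fail.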
In the discussion phase, Xavier mentioned that Gossamer has survived so far, at least in terms of key recovery. In response to the question whether every ultralightweight protocol is essentially broken, Xavier noted that in principle the same operations can be used to implement (strong) hash functions.
Session #2: Authentication
Thijs Veugen and Michael Beye – Improved Anonymity for Key-trees
Thijs presented an improvement on key-tree approaches to speed up the search for a valid key to authenticate a tag that is currently presented to a reader. In such key-tree approaches, the key for a tag is split into several parts. Keys are basically reconstructed by travelling from the root of the key tree down to a leaf (where the tags are). Key parts at higher-level nodes in the tree are shared among all children that descend from those nodes. Buttyán suggested key trees with variable branching factors, to both increase privacy (more tags need to be broken to compromise privacy) while still having low-depth trees (which speed up searching).
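The speed-up from walking the tree level by level can be sketched as follows. This is a toy illustration of the general key-tree idea, not the paper's scheme; `node_key`, `tag_response`, and `reader_search` are invented names, and key derivation is simplified:

```python
import hmac
import hashlib

def mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

branching = [4, 4, 4]  # b_1, b_2, b_3: 64 leaves, but at most 4+4+4 = 12 key trials

def node_key(path):
    # Key part for the node reached by `path`; all tags under this node share it.
    return hashlib.sha256(repr(path).encode()).digest()

def tag_response(leaf_path, challenge):
    # The tag answers with one MAC per level, keyed by the key parts on its path.
    return [mac(node_key(leaf_path[:i + 1]), challenge)
            for i in range(len(leaf_path))]

def reader_search(response, challenge):
    # Walk down the tree: at each level, try only the children of the current node.
    path = ()
    for level, b in enumerate(branching):
        for child in range(b):
            if mac(node_key(path + (child,)), challenge) == response[level]:
                path = path + (child,)
                break
        else:
            return None  # no child matched: unknown tag
    return path

challenge = b'nonce'
assert reader_search(tag_response((2, 0, 3), challenge), challenge) == (2, 0, 3)
```

The reader tries at most the sum of the branching factors (12 keys here) rather than the product (64), which is exactly why the sum measures search time. The privacy cost is that compromising one tag reveals key parts shared with its siblings.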
The paper assumes static adversaries and measures the average anonymity set size. Tags remain anonymous if at least some of its key parts are unknown.
A few definitions. Let B = (b1, …, bd) be the vector of branching factors at each level. Then we can define N = b1 · b2 · … · bd, the product of all branching factors, and S = b1 + b2 + … + bd, the sum of all branching factors. N is the number of leaves in the tree; S is a measure of the search time. Finally, the expected average anonymity set size after c compromises (where the compromised tags are chosen randomly) is denoted A(c), and the resilience against compromise (normalised for tree size) is R(c) = A(c)/N.
Buttyán considered the optimisation problem of choosing B such that the resilience R(c) is maximal, given a bound on the search time S. The authors consider a minor extension of the set of allowed branching vectors, and prove a theorem on the structure of optimal vectors. This again allows for a greedy algorithm, which slightly improves on Buttyán's results for given parameters.
Marek Klonowski, Krzysztof Majcher, Wojciech Macyna and Filip Zagorski -
Hidden Bits Approach for Authentication in RFID Systems
Filip discussed authentication protocols for RFID-based systems. Traditional HB and HB+ protocols (Juels and Weis) are based on learning parity with noise. These are not very efficient: for a 128-bit key and a noise level of 1/4, 20 KB has to be sent. CKK protocols are based on hidden identifier sets. In essence they compute the inner product of a key vector with a random nonce. These are more efficient, but have suffered key-recovery attacks. The paper presents an improvement of CKK that has an asymmetric workload, being more resource-friendly on the tag.
Compared to the original protocol, the new protocol does not send the random tag nonce used as input to the CKK protocol in the clear, but instead blinds some bits of that vector. In essence, this means an eavesdropper has to solve a quadratic system of equations. In general, solving such systems of equations is NP-hard, which is the basis of the security argument.
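A toy version of the inner-product step makes the point concrete. This is an illustration only; the actual CKK improvement differs in detail, and `ip_mod2` is an invented name:

```python
import random

def ip_mod2(key, vec):
    # Inner product over GF(2): the response bit for one challenge vector.
    return sum(k & v for k, v in zip(key, vec)) & 1

rng = random.Random(7)
n = 16
key = [rng.getrandbits(1) for _ in range(n)]
nonce = [rng.getrandbits(1) for _ in range(n)]
response = ip_mod2(key, nonce)

# With the nonce sent in the clear, every run hands an eavesdropper one
# linear equation sum(key[i]*nonce[i]) = response over GF(2); about n
# independent runs suffice to recover the key by Gaussian elimination.
#
# If some nonce bits are blinded (hidden), those bits become unknowns too,
# so the same equation now contains products key[i]*nonce[i] of two
# unknowns: the eavesdropper faces a quadratic system instead of a linear one.
hidden = set(rng.sample(range(n), 4))
eavesdropped = [v if i not in hidden else None for i, v in enumerate(nonce)]
```

The jump from linear to quadratic equations over GF(2) is what the blinding buys: linear systems fall to Gaussian elimination, while solving general quadratic systems is NP-hard.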
Gergely Alpar, Lejla Batina and Wouter Lueks – Designated Attribute-Based Proofs for RFID Applications
Gergely (my PhD student) gave the talk (so I am biased). A few weeks ago Gergely, Maarten and I were travelling in the train, and we found out we could read out his (Hungarian) passport with a Google Nexus phone (with NFC). Clearly this is a privacy concern. His research is about how to control who can read certain data from an RFID tag (like an e-passport). The idea is to designate certain readers to allow them to be able to read certain attributes. Moreover, users should be able to decide which attributes to reveal at all.
Gergely presented an implementation of these properties based on U-Prove (from Stefan Brands, 1999) and the randomised Schnorr scheme (2008). Randomised Schnorr implements designation by encrypting an additional blinding value in the commitment phase that can only be decrypted by a verifier owning the corresponding private key. This technique can be incorporated into a U-Prove showing protocol to allow designation of a zero-knowledge proof of multiple attributes. However, this is an all-or-nothing approach. Moreover, in U-Prove, all attributes that are revealed are sent to the reader in plaintext, even if the reader is not designated for an attribute. The idea is therefore to define separate designation keys for each of the attributes. Readers that are allowed to access certain attributes receive the corresponding designation keys, and revealed attributes are encrypted using these keys.
Session #3 : Advanced protocols
Kaoutar Elkhiyaoui, Erik-Oliver Blass and Refik Molva -
T-MATCH: Privacy-Preserving Item Matching for Storage-Only RFID Tags
Kaoutar presented. The motivation for the research is the automation of safety inspections, where an alarm should be triggered if two or more dangerous chemicals are placed too close to each other. The idea is to tag each barrel with RFID, containing the chemical substance as an attribute, and use readers to check whether the attributes on different barrels match a security rule.
However, there are security issues: disclosure of barrel contents to non-authorised readers should be prevented (to prevent e.g. terrorist attacks). Therefore the attribute is stored (homomorphically) encrypted on the tag. Readers and the backend jointly compute the matching, where the backend stores the security rules. Secret sharing is used to ensure that both need to be present to compute a match.
Tags store the encrypted attribute value, as well as a MAC on this ciphertext. A reader reads two tags (the protocol does not extend to more tags) and re-encrypts both tag contents. Subsequently the reader and the backend run a plaintext equality test (PET) protocol. In this protocol, the reader sends a pairing of the encoded attribute values to the backend. The backend uses the received value to compute a result for each of the rules in its database, resulting in a result vector encrypted using its share of the secret, which is sent back to the reader. The reader uses its own share of the secret to encrypt each of the vector components again. By construction, this will yield a value of 1 for any rule that should trigger an alarm.
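The plaintext-equality test at the heart of this is a standard trick on homomorphic ciphertexts. A toy single-key ElGamal sketch of the general idea (not T-MATCH itself, which secret-shares the decryption key between reader and backend and works in pairing groups; parameters here are far too small to be secure):

```python
import random

p = 2_147_483_647   # toy prime modulus (real schemes use elliptic-curve groups)
g = 5               # toy generator choice
x = 123_456         # decryption key (secret-shared in the real protocol)
h = pow(g, x, p)    # public key
rng = random.Random(0)

def enc(m: int):
    """ElGamal encryption of m under public key h."""
    r = rng.randrange(1, p - 1)
    return (pow(g, r, p), m * pow(h, r, p) % p)

def pet(ct_a, ct_b) -> bool:
    """True iff ct_a and ct_b encrypt the same plaintext."""
    # Divide the ciphertexts component-wise: this encrypts m_a / m_b.
    q1 = ct_a[0] * pow(ct_b[0], -1, p) % p
    q2 = ct_a[1] * pow(ct_b[1], -1, p) % p
    # Blind with a random exponent z, so unequal plaintexts reveal only
    # a random group element rather than the ratio m_a / m_b.
    z = rng.randrange(1, p - 1)
    b1, b2 = pow(q1, z, p), pow(q2, z, p)
    # Decrypt: the result is (m_a / m_b)^z, which is 1 iff m_a == m_b.
    return b2 * pow(pow(b1, x, p), -1, p) % p == 1

assert pet(enc(42), enc(42))
assert not pet(enc(42), enc(43))
```

The design point carried over to T-MATCH is that neither party decrypts an attribute outright: the test outputs only equal/not-equal, and splitting x between reader and backend forces them to cooperate for even that bit.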
Jens Hermans and Roel Peeters – Private yoking proofs: attacks, models and new provable constructions
Jens presented. A yoking protocol is a protocol that provides a proof that a certain collection of RFID tags was present during the protocol run. The proof should be verifiable offline later. An application is proving that all parts of a complex product (like an airplane engine) were still present when leaving servicing. In a privacy-friendly version, the party constructing the proofs should not be able to see the contents of the tags. Only a trusted offline verifier should be able to verify the proof later.
In this research they attacked a previous yoking protocol (by Batina et al. from 2012). The attack works because this protocol uses a traditional Schnorr protocol, but only for one of the tags. The part of the protocol run on the other tag does not have a commitment. This weakness is exploited in the attack.
The problem appears to be that there is no clear, formal, definition of a grouping proof (aka yoking protocol). There are many broken protocols. Finally, none of the papers considers the notion of time/non-repudiation: it appears that old yoking proofs can be re-used many times later. This paper provides such a definition, including such a time limit. The time stamp requires the presence of a trusted third party.
The protocol without a TTP has a structure that is somewhat similar to some protocols I developed years ago for the private handshakes problem. I wonder whether this is somehow significant.
Gesine Hinterwalder, Christof Paar and Wayne P. Burleson -
Privacy Preserving Payments on Computational RFID Devices with Application in Intelligent Transportation Systems
Gesine presented. The motivation is to achieve privacy in payment schemes for (public) transportation, where payments should be executed quickly. This balance of speed and privacy is a challenge. The system presented is based on Brands’ untraceable offline cash scheme, which only reveals the identity of double spenders (after the fact). The protocol is implemented on the UMass MOO, which operates in the UHF 915 MHz range and integrates an MSP430Fxxx.
The underlying group is a 160-bit ECC group, based on a curve recommended by Certicom.