Right after RFIDSec 2012, held in Nijmegen, IFIP Working Group 11.2 held its bi-annual seminar on July 4, 2012. This post reports on the informal talks given that day. The agenda and slides of the presentations are available.
If you like the topics discussed, you can become a member of the WG as well. Simply contact us for details.
Ivan aims, with this talk, to bring BCI to the attention of computer science researchers.
Brain-Computer Interfaces (BCI) allow non-muscular communication between a user and an external device. They were developed mainly in the medical domain, to support people with severe muscular injuries.
Electroencephalography (EEG) measures brain voltage fluctuations, reflecting the synchronous activity of millions of neurons. EEG has good time (5 ms) and signal level (40 µV) resolution, but very poor spatial resolution (unlike MRI). EEG also records muscular activity, eye movements and eye blinks, and even power-line (50-60 Hz) interference. Interestingly, BCI is very good at recognising facial expressions (and these are quite useful as a measurement of emotions).
EEG contains continuous rhythmic signals (alpha, beta, gamma and theta waves) and event-related potentials (ERP). One of the latter (P3/P300) is related to recognition activity. This one is used as the control channel in many BCI applications.
Wayne asked how much one needs to understand about how brain signals arise in order to use them (compared to side-channel analysis, which treats the object as a black box). According to Ivan, a full black-box approach is not feasible.
One application is neurofeedback, where users learn to control their EEG by using their brain activity to control a certain activity (e.g. a game). This works because an individual's behaviour is modified by its consequences.
Another application is alertness monitoring. EEG can be used to predict whether operators engaged in monotonous tasks are about to become less attentive (and make an error in a task).
Neuromarketing (developed by companies like Neurofocus) aims to measure customer response to brands and products, and to determine purchase intent.
You need many electrodes to compensate for the poor signal-to-noise ratio, and to cover the whole area of the brain. The electrodes really need to be close: there is no chance of reading brain signals over any distance. But companies try to reduce the number of electrodes, so that e.g. two electrodes behind your ears suffice (cf. a company like NeuroSky). Such consumer-grade BCI devices cost roughly 300 dollars.
But what if things go wrong? For example, what if my neuro-game or mouse-control app secretly does neuromarketing on me? Threat model: the BCI device and computer are trusted, but the applications are not.
So the app tries to run a recognition game unnoticed (to reveal some sensitive information about a victim). To be able to do that, the application needs to be calibrated. But this can be done passively: apps need to be calibrated for their real purpose anyway, so one can use as the training set a set of pictures of which only one image is for sure recognised by the victim.
Ivan and his team ran experiments showing that this technique can be used to extract personal information (e.g. the first digit of a PIN code, your address, your month of birth) with greater accuracy than random guessing. This reduces the entropy, i.e. the amount of guessing needed to arrive at the real answer.
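To make the entropy reduction concrete, here is a small sketch (the skewed posterior below is purely hypothetical, not Ivan's measured data) comparing the guessing entropy of a uniform prior over the first PIN digit with a posterior that the attack has concentrated on a few candidates:

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Uniform prior over the first PIN digit (0-9): pure random guessing.
uniform = [0.1] * 10

# Hypothetical posterior after observing P300 responses: the attack
# concentrates probability mass on a few candidate digits.
skewed = [0.4, 0.2, 0.1, 0.05, 0.05, 0.05, 0.05, 0.04, 0.03, 0.03]

print(f"uniform prior: {entropy(uniform):.2f} bits")  # log2(10) ~ 3.32
print(f"after attack:  {entropy(skewed):.2f} bits")
```

Any distribution more informative than uniform lowers this number, which is exactly the "less guessing needed" claim above.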
The question is how to defend against this. It is hard to avoid or bypass the training phase, as users have a high incentive to use the app. Conscious defences are actually counterproductive: lying could even increase detection.
On the other hand, BCI might be used as a biometric factor to increase security in certain applications.
Gildas gave a theoretical talk on distance bounding and relay attacks. During the talk a cat came in: the cat-in-the-middle.
Relay attacks allow devices to be connected surreptitiously, without their owners knowing and agreeing (this is my definition; it differs from the one Gildas gave, which lacked the notion of consent). (This is in essence Conway's 1976 chess grandmaster attack.) It can be used to wirelessly pickpocket contactless wallets, or to open cars at a distance. A distance bounding protocol aims to prevent this, by giving proof that the two connected objects are indeed within a certain distance. (Gildas notes that technically this does not prevent a relay attack between two close objects.) Distance bounding exploits the fact that messages cannot travel faster than the speed of light.
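The speed-of-light argument is easy to quantify. A minimal sketch (the function name and the 10 ns figure are mine, for illustration): given a measured challenge-response round-trip time, the prover can be no farther away than c * t / 2, since the signal must travel there and back:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def max_distance(rtt_seconds, processing_delay=0.0):
    """Upper bound on the prover's distance, given the measured
    challenge-response round-trip time minus any known processing
    delay; no signal travels faster than c."""
    return C * (rtt_seconds - processing_delay) / 2

# A 10 ns round trip (zero processing delay) bounds the prover
# to within about 1.5 metres.
print(max_distance(10e-9))
```

This is why distance bounding protocols insist on extremely fast bit-level challenge-response rounds: any processing slack the verifier cannot account for translates directly into extra distance an adversary could exploit.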
Several generalisations of pure relay attacks exist that allow the adversary to be active: mafia fraud (the tag does not collude with the adversary), distance fraud (the adversary is the tag itself), and terrorist fraud (the tag colludes with the adversary). The Hancke-Kuhn protocol does not prevent terrorist fraud, and only reduces the adversary's probability of winning to (3/4)^n.
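The (3/4)^n bound shrinks quickly with the number n of fast challenge-response rounds; a few lines make this concrete (the round counts chosen below are arbitrary examples):

```python
def hk_success_probability(n):
    """Adversary's success probability against the Hancke-Kuhn
    protocol: a 3/4 chance of answering each of the n independent
    fast rounds correctly."""
    return (3 / 4) ** n

# Each doubling of the round count squares the success probability.
for n in (8, 16, 32, 64):
    print(f"n = {n:2d}: {hk_success_probability(n):.2e}")
```

So even though 3/4 per round is much weaker than the 1/2 of some other distance bounding designs, a moderate number of rounds still drives the adversary's odds down rapidly.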
This is an active research field in RFID.
Jörn gave a general introduction to the issue of privacy in the Internet of Things.
Wayne: why is there no concept of an adversary in privacy legislation? It is unclear what you need to protect against in order to achieve a certain level of privacy. Part of the explanation is that privacy, at least in Europe, is equated with data protection, which is based on the idea that citizens themselves are able to control the flow of personal information about them.
There is a focus on controlling reader-tag communication, developing mutual authentication protocols for example. But access is much more context dependent (the same reader should sometimes have access and at other times not). And what happens with the processing of the information once it leaves the reader?
Pedro discusses grouping proofs (Saito & Sakurai, 2005), where a reader communicates with more than one tag at once (and only if these tags are present simultaneously). These are also called yoking proofs (Juels, 2004). He notes this problem is only interesting if the tags do not have an always-on connection to the verifier (i.e. the back-end database).
The Juels yoking proof is not correct, as the MACs computed by the two tags are independent. An adversary can therefore ask one tag to compute its part before the other tag is even present, violating the simultaneity requirement.
Current grouping protocols have privacy issues, as they send the tag identifier in the clear. Other open problems in the area are the following. In grouping proofs involving more than two tags, it is hard to ensure dependence (the output of each tag must depend on the contributions of the other tags). Another is forward security, which ensures that even after a tag is compromised, earlier grouping proofs remain secure and private. Also, a formal model appears to be missing.
Unfortunately I had to leave at this point in the program due to unforeseen circumstances. The slides of the remaining presentations are available on the web.