This week we are running the Interdisciplinary Summerschool on Privacy in Berg en Dal, the Netherlands. Here is a summary of the talks given on Thursday, June 22.
Technology has changed how we communicate, obtain information and perform transactions (e.g. paying). Both the type and the scope of access have changed dramatically. At a traditional newsstand, the seller didn't know you, and you would pay for your newspaper with paper money or coins, which are also anonymous. A physical photo album was buried in the closet of the living room and would only be taken out by your mother to embarrass you in front of your new girlfriend.
Now payment is done by swiping your phone (which identifies you), and communication is mostly done by email or messaging (much more efficient, but traceable). We now live in a world where:
Secrets are lies. Sharing is caring. Privacy is theft. (Dave Eggers, The Circle)
People don't really understand what exactly happens when they use social networks and other platforms. They know that when sharing information with friends through Facebook, Facebook itself is involved and sees everything they do. But they are typically unaware of the many other "stakeholders" that are involved, beyond Facebook, like:
They are involved directly and indirectly, getting information about your activity on Facebook.
To analyse the security and privacy of systems within computer science, we typically use Alice and Bob as the good guys, each having their own trusted domain (using the computer science interpretation of 'trust', meaning it will not behave against you), who want to achieve a certain goal against certain threats. These threats are things like breaches of confidentiality, integrity and availability.
But these three threats only consider security, and do not seem to cover privacy.
But what is privacy? Are there words for privacy in other languages? Not really: not in Dutch, Arabic or Chinese. There, other words are used to describe it, or aspects of it. (It was noted that we should distinguish informational privacy from other types of privacy, like bodily and spatial privacy.) (During the break I discussed this with a few people who questioned why Thorsten spent so much time trying to define privacy. The point is that engineers need to know, in a precise sense, what they need to build before they can start building it.)
So does privacy mean we have to protect the data? Not really: it's really about protecting the integrity of individuals, and hence about protecting individuals from the processing of their data.
There are several ways one can classify data. For example:
Or by considering different types:
- content (e.g. pictures, comments, likes)
- metadata (aka behavioural data, e.g. time of action, group memberships, clickstream, location, etc.)
Or by the way the data is revealed:
Note that even non-personal data can be significant and can harm people. Working with or creating such data leaves traces and metadata that are personal. (This goes beyond the issues Linnet discussed yesterday.)
Back in Dresden, Thorsten teaches a course called "Facebook Mining". In it, students need to collect as much information as possible about individuals from social networks, and then see how well they can predict certain attributes (or even complete profiles) from the data collected.
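The talk did not go into the students' actual methods, but a minimal sketch of this kind of attribute inference could look as follows (all data and attribute names are hypothetical): exploit homophily, i.e. predict a user's attribute by a majority vote over the known attributes of their friends.

```python
from collections import Counter

# Hypothetical toy data: a friendship graph and attributes scraped
# from the public profiles of some users.
friends = {"carol": ["alice", "bob", "dave"]}
known_attribute = {"alice": "green", "bob": "green", "dave": "liberal"}

def infer_attribute(user):
    """Guess an unknown attribute (here: political leaning) by majority
    vote over the user's friends. This works because of homophily:
    friends tend to share attributes such as age group, location or
    political preference."""
    votes = Counter(known_attribute[f]
                    for f in friends.get(user, [])
                    if f in known_attribute)
    if not votes:
        return None
    value, _count = votes.most_common(1)[0]
    return value

print(infer_attribute("carol"))  # -> 'green'
```

Real attribute-inference attacks train classifiers on many features at once, but the principle is the same: your friends' data reveals yours.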
The fact that you can predict complete profiles explains why WhatsApp data is so important: if you are not on Facebook but all your friends are, then as soon as you join WhatsApp, Facebook knows your friends, and hence can infer a lot about you. (Your WhatsApp account and Facebook account get connected, e.g. through two-factor authentication, where you tell Facebook your mobile phone number, or because you have both the WhatsApp and Facebook apps installed on your phone, both with access to the same unique identifiers on your phone.)
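How such linking might work was not demonstrated in the talk, but as an illustration (with made-up data and account names): two services that both see a phone number only need to normalize it to use it as a join key.

```python
def normalize(phone: str) -> str:
    """Strip formatting so '+31 6 12 34 56 78' and '0612345678' match.
    Naive sketch: keep digits only and compare on the last 9 of them."""
    digits = "".join(ch for ch in phone if ch.isdigit())
    return digits[-9:]

# Hypothetical account tables of the two services.
whatsapp_users = {"+31 6 12 34 56 78": "wa-account-1"}
facebook_users = {"0612345678": "fb-account-7"}

# Index one table by normalized phone number, then join the other against it.
fb_by_phone = {normalize(p): acct for p, acct in facebook_users.items()}
linked = {wa_acct: fb_by_phone[normalize(p)]
          for p, wa_acct in whatsapp_users.items()
          if normalize(p) in fb_by_phone}

print(linked)  # -> {'wa-account-1': 'fb-account-7'}
```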
Within a 24-hour period, students are able to infer, with high accuracy:
Unfortunately, Thorsten did not have time to discuss technical measures to protect privacy, because of the great, interesting and lively discussions during his talk.
Andrea talked about the shift from the Internet of Things (IoT) to the Internet of Bodies (IoB), based on a forthcoming paper (it is also the theme for CPDP 2018).
The Internet of Things comes with four flaws (explained in detail further on):
The Internet of Things (IoT) is riding a "better with bacon" wave (the U.S. idea that any dish is better with bacon, including ice cream!): devices are fitted with often gratuitous internet connectivity. In 2013, an April Fools' joke circulated on the internet about a connected toaster with its own Twitter feed. In 2017, a botnet of toasters can bring down the internet. (See also Bibi van den Berg's talk on Tuesday.)
Adding connectivity comes with hidden costs, like diminished reliability and security. So before adding it, we should analyse fit (what is the purpose of the device?), functionality (what functions will be added?) and feasibility (will this work in practice to meet the fit?).
It is hard to opt out of the IoT. It is not easy to find a 'dumb' television. Cars increasingly contain vehicle-to-vehicle communication or emergency-calling functionality. And we don't even have the option to opt out of internet-connected and -controlled infrastructure like dams, power grids, etc.: it is simply there. Hotels are putting connected, always-on devices (like Amazon's Alexa or Apple's Siri) in their rooms.
(Andrea talked about a military smart suit, a kind of exoskeleton, that responds faster than the person inside it. So who is in control: the person or the suit? In fact, you could see a smart car as a virtual exoskeleton, raising similar questions.)
People like it; it's simply terrifyingly convenient. But it creates new possibilities to collect, aggregate and repurpose data, and new security vulnerabilities (which already materialise with direct physical consequences, like car crashes). Security issues are caused by the tension between the builder bias (the focus on quickly building something that works) and the breaker or fixer mindset (which tries to find and fix problems). And more intelligence in devices (like cars) creates a whole class of new problems (as shown by artist James Bridle).
Once these issues start crossing over to health devices, the IoT turns into an Internet of Bodies (IoB), and a 'blue screen of death' may actually lead to death. We can distinguish three generations of the IoB:
We are currently midway through the second generation.
All the unresolved technical and legal problems, from both the internet and the medical perspective, suddenly become highly relevant with this blending of code and flesh. Some examples:
The main question is how to scale regulatory structures to address this level of penetration of IoT devices and the concerns they raise. Things to consider include, for example, product liability or independent audits required by the legislator.
Will known security problems be fixed by IoB vendors? One quick fix is to allow or even mandate patches. Note that FDA (US Food and Drug Administration) guidance allows for updates of the software in medical devices. Regulatory approaches that do not allow this kind of patching are 'bad', because time is of the essence when stopping ongoing attacks (like the recent WannaCry ransomware attack), while thorough code audits take (too much) time.
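The talk stayed at the policy level, but to illustrate what 'allowing patches' can mean technically, here is a minimal sketch (hypothetical code, using the third-party Python cryptography package) of a device that only accepts firmware updates signed by its vendor, so that patching does not itself become an attack vector:

```python
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Vendor side: sign the firmware image. (Key generation is shown inline
# only to keep the sketch self-contained.)
vendor_key = ed25519.Ed25519PrivateKey.generate()
firmware = b"firmware-v2.0"
signature = vendor_key.sign(firmware)

# Device side: the vendor's public key is baked in at manufacture; the
# device refuses any image whose signature does not verify.
device_trusted_key = vendor_key.public_key()

def apply_update(image: bytes, sig: bytes) -> bool:
    try:
        device_trusted_key.verify(sig, image)
    except InvalidSignature:
        return False  # reject tampered or unsigned firmware
    # ... write the verified image to flash here ...
    return True

print(apply_update(firmware, signature))            # True
print(apply_update(b"malicious image", signature))  # False
```

Signed updates let a vendor ship an urgent fix quickly while still preventing third parties from flashing malicious code.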
What do violations of standards mean? Many people don't even know that standards exist. And case law on security is still undeveloped.
In the IoB, the tension between consumer protection and intellectual property protection will be unsustainable.
The IoB will force us to think about what it means to be human, and there will be competing visions. (Note: the Rathenau Institute recently issued a report on Human rights in the robot age.)
At the very least we should carefully study incentives, manufacturing defects and design defects, favouring consumer protection. We should not conflate every issue with privacy; perhaps other (existing or new) rights are more appropriate to address the problem. We should fight internet exceptionalism: the Internet is yet another technology that deserves the same level of scrutiny, critical thinking and regulatory action as any other technology. Finally, we should defend security research and defend tinkering. Our lives could depend on it.