Interdisciplinary Summerschool on Privacy (ISP 2017), day #4

June 23, 2017

This week we are running the Interdisciplinary Summerschool on Privacy in Berg en Dal, the Netherlands. Here is a summary of the talks of Thursday June 22.

Thorsten Strufe

Technology has changed how we communicate, obtain information and perform transactions (e.g. paying). Both the type and the scope of access have changed dramatically. At a traditional newsstand, when buying your newspaper, the seller didn't know you, and you would pay with paper money or coins, which is also anonymous. A physical photo album was buried in a closet in the living room and would only be taken out by your mother to embarrass you in front of your new girlfriend.

Now payment is done by swiping your phone (which identifies you), and communication is mostly done by email or messaging (much more efficient, but traceable). We now live in a world where:

Secrets are lies. Sharing is caring. Privacy is theft. (Dave Eggers, The Circle)

People don't really understand what exactly happens when they use social networks and other platforms. They know that when sharing information with friends through Facebook, Facebook itself is involved and sees everything they do. But they are typically unaware of the many other "stakeholders" that are involved beyond Facebook, like:

  • 'the public', like other Facebook users, parties like Google that index the web, and others.
  • institutions, like the government, police, intelligence services.
  • partners, like advertisement networks.
  • cloud/content delivery networks, like Cloudflare and Akamai, that optimise the delivery of content to web users.
  • network providers, i.e. the parties that maintain the network infrastructure, like your internet service provider.

They are involved directly and indirectly, getting information about your activity on Facebook.

To analyse the security and privacy of systems within computer science, we typically use Alice and Bob as the good guys, each having their own trusted domain (using the computer science interpretation of 'trust', meaning it will not behave against you), who want to achieve a certain goal against certain threats. These threats are things like

  • Data loss (confidentiality)
  • Manipulation and forgery (integrity)
  • Disruption (availability)

But these three threats only consider security, and do not seem to cover privacy.

But what is privacy? Are there words for privacy in other languages? Not really: not in Dutch, Arabic or Chinese. In those languages other words are used to describe it, or aspects of it. (It was noted that we should distinguish informational privacy from other types of privacy, like bodily and spatial privacy.) (During the break I discussed this with a few people who questioned why Thorsten spent so much time trying to define privacy. The point is that engineers need to know, in a precise sense, what they need to build before they can start building it.)

About Data

So does privacy mean we have to protect the data? Not really: it's really about protecting the integrity of individuals, and hence about protecting individuals from the processing of the data.

There are several ways one can classify data. E.g.

  • non-personal (e.g. simulated, or measurement)
  • personal

Or considering different types:

  • content (e.g. pictures, comments, likes)
  • metadata (aka behavioural data, e.g. time of action, group memberships, clickstream, location, etc.)

Or the way data is revealed:

  • consciously
  • unconsciously

Note that even non-personal data can be significant and can harm people. Working with or creating such data creates traces and metadata that are personal. (This goes beyond the issues Linnet discussed yesterday.)

Back in Dresden, Thorsten teaches a lecture called "Facebook Mining". In it students need to collect as much information as possible about individuals from social networks, and then see how well they can predict certain attributes (or even complete profiles) from the data collected.

The fact that you can predict complete profiles explains why WhatsApp data is so important: if you are not on Facebook but all your friends are, then as soon as you join WhatsApp, Facebook knows your friends, and hence can infer a lot about you. (Your WhatsApp account and Facebook account get connected e.g. through two-factor authentication, where you tell Facebook your mobile phone number, or because you have both the WhatsApp and Facebook apps installed on your phone, both with access to the same unique identifiers on your phone.)
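To make this linking concrete, here is a minimal, hypothetical sketch in Python. It is not Facebook's actual system; the phone numbers, names and data structures are illustrative only. It just shows how matching on a shared identifier (a phone number in an uploaded contact list) connects accounts across services and reveals the social graph of someone who never joined the first service.

```python
# Hypothetical sketch: linking accounts across services via a shared
# identifier (phone number). All data below is made up for illustration.
facebook_accounts = {        # phone number -> Facebook profile name
    "+31611111111": "Alice",
    "+31622222222": "Bob",
}
whatsapp_contact_uploads = { # new WhatsApp user uploads her address book
    "carol": ["+31611111111", "+31622222222", "+31633333333"],
}

for user, contacts in whatsapp_contact_uploads.items():
    # Match the uploaded phone numbers against known Facebook accounts.
    known_friends = [facebook_accounts[n] for n in contacts if n in facebook_accounts]
    # Even though Carol has no Facebook account, the platform now knows most
    # of her friends, and can start inferring attributes about her from theirs.
    print(user, "is likely connected to", known_friends)
```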

In a 24-hour period, students are able to infer, with high accuracy (a rough sketch of how such attribute inference works follows the list):

  • gender
  • age
  • education level
  • expected tenure with employer
  • sexual preferences
  • religious beliefs
  • political preferences

Unfortunately, Thorsten did not have the time to discuss technical measures to protect privacy, because of the great, interesting, lively discussions during his talk.

Andrea Matwyshyn

Andrea talked about the shift from the Internet of Things (IoT) to the Internet of Bodies (IoB), based on a forthcoming paper (and also the theme for CPDP 2018).

The Internet of Things (IoT)

The internet of things comes with four flaws (as explained in detail further on):

  • Unquestioned Internet connectivity and reliance.
  • Customers cannot opt out easily.
  • Creating new possibilities for data collection, aggregation and repurposing.
  • Creating extreme levels of security vulnerabilities.

The Internet of Things (IoT) is riding a "Better with bacon" wave (the U.S. idea that any dish is better with bacon, including ice cream!): devices are fitted with often gratuitous internet connectivity. In 2013, an April fools joke went around on the internet about a connected toaster with its own Twitter feed. In 2017 a botnet of toasters can bring down the internet. (See also Bibi van den Berg's talk on Tuesday.)

Adding connectivity comes with hidden costs, like diminished reliability and security. So before adding it, we should analyse the fit (what is the purpose of the device), functionality (what functions will be added) and feasibility (will this work in practice to meet the fit).

It is hard to opt out of the IoT. It is not easy to find a 'dumb' television. Cars increasingly contain vehicle to vehicle communication, or emergency calling functionality. Regarding infrastructure we don't even have the option to opt out of Internet connected and controlled infrastructure like dams, power grids, etc. They are simply there. Hotels are putting connected, always on, devices (like Amazon's Alexa or Apple's Siri) in the room.

(Andrea talked about a military smart suit, a kind of exoskeleton, that responds faster than the person inside. So who is in control: the person, or the suit? Actually, you could see a smart car as a virtual exoskeleton, raising similar questions.)

People like it; it's simply terrifyingly convenient. But it leads to new possibilities to collect, aggregate and repurpose data, and to new security vulnerabilities (which already materialise with direct physical consequences, like car crashes). Security issues are caused by builder bias (a focus on building something that works, quickly) versus the breaker or fixer mindset (which tries to find and fix problems). And more intelligence in devices (like cars) creates a whole class of new problems (as shown by artist James Bridle).

The Internet of Bodies (IoB)

Once these issues start crossing over to health devices, the IoT turns into an Internet of Bodies (IoB), and a 'blue screen of death' may actually lead to death. We can distinguish three generations of the IoB:

  • First generation: quantified self, external monitoring devices; in theory they are optional.
  • Second generation: devices go internal, into the body, like 'digital pills', pacemakers, robotic surgeries; these are even less optional.
  • Third generation: "wetware", hardwired technology like brain implants and other internet-connected body parts.

We are currently midway through the second generation.

All the unresolved technical and legal problems, from both the internet and the medical perspective, suddenly become highly relevant with this blending of code and flesh. Some examples:

  • There is currently a requirement for visually impaired people to wear glasses when driving a car. This is an example of a legal requirement that binds us to augment our bodies. Will, in the future, robotic augmented arms be a requirement for particular jobs?
  • There was a case (in Belgium) of a patient with increased libido as a side effect of an implant placed by a hospital. The patient did not see this as a problem, while the spouse saw it as an issue and sought legal redress to force the hospital to remove the implant and fix the problem.
  • Only security experts know and understand the full extent of the security problems related to internet connected devices, leaving patients and doctors in the dark.
  • What is a disease? Is deafness a disease? The definition of what is the default shapes our conversations, our discourse, and how we think about possible solutions. E.g. deaf parents may not want their children to get a cochlear implant for fear of losing them to a culture of non-deaf people they do not know.
  • Pacemaker data was used as evidence in criminal proceedings in a case involving insurance fraud.
  • Revolv shut off their main servers and hence 'bricked' (i.e. made unusable) their IoT devices after the company was bought by Google's Nest.
  • Wearables in the workplace: can employees meaningfully consent to being monitored at work?
  • What if Shodan starts listing vulnerable internet connected body devices? (And even if Shodan does not list them, anybody else can do the scanning too.)

How to regulate this?

The main question is how to scale regulatory structures to address this level of penetration of IoT devices and the concerns they raise. Things to consider are e.g. product liability or independent audits by the legislator.

Will known security problems be fixed by IoB vendors? One quick fix is to allow or even mandate patches. Note that FDA (US Food and Drug Administration) guidance allows for updates of software in medical devices. Regulatory approaches that do not allow this kind of patching are 'bad', because time is essential when stopping ongoing attacks (like the recent WannaCry ransomware attack), while thorough code audits take (too much) time.

What do standards violations mean? Many don't even know that standards exist, and case law on security is still undeveloped.

In the IoB, the tension between consumer protection and intellectual property protection will be unsustainable.

  • There are large competition concerns and lock-in/interoperability failures (e.g. can you meaningfully switch vendor for a robotic arm or a chip in your brain?)
  • Shouldn't you be allowed to investigate and fix any security vulnerability in your own bodily devices? Or is this an intrusion of a protected network?
  • Is code a product or a service?
  • When is contractual liability limitation impermissible?
  • What is the status of data off a chip in the brain? Does this fall under the protection against self-incrimination?

The IoB will force us to think about what it means to be human; and there will be competing visions. (Note: the Rathenau Institute recently issued a report on human rights in the robot age.)

At the very least we should carefully study incentives and manufacturing and design defects, favouring consumer protection. We should not frame every issue as a privacy issue; perhaps other (existing or new) rights are more appropriate to address it. We should fight internet exceptionalism: the Internet is just another technology that deserves the same level of scrutiny, critical thinking and regulatory action as any other technology. Finally, we should defend security research and defend tinkering. Our life could depend on it.

In case you spot any errors on this page, please notify me!