This week we are running the Interdisciplinary Summerschool on Privacy in Berg en Dal, the Netherlands. Here is a summary of the talks on Monday, June 19.
Eleni provided an overview of the main provisions of the GDPR (summarising two full academic law courses into a two-hour lecture). I will not try to summarise the summary. Instead, I will highlight one piece of discussion that I thought was quite significant, related to data portability.
The GDPR contains a right to data portability. The intention of this provision was to prevent lock-in to specific digital services. So according to Eleni, the right to data portability really means that European users should be able to take all their data from Facebook and move to another social network, say Google+, or Safebook (a proposal for a P2P social network from Torsten Strufe and others from the University of Dresden). This has enormous ramifications, well beyond the realm of privacy! She later pointed me to this Art. 29 WP opinion.
Claudia discussed how privacy is typically studied from three different perspectives: the social, the institutional and the anti-surveillance paradigm. (This classification is studied in detail in this paper and is related to Seda Gürses's classification of privacy as control, privacy as confidentiality, and privacy as practice.)
The social paradigm covers issues around technical systems that mediate interactions between people, that change how people interact (often in ways that users do not expect), and that affect self-presentation and identity construction. It is hard, when using such systems, to make the right decisions: the systems are hard to understand, and the long-term consequences are not easy to foresee and to weigh against instant gratification.
In this paradigm, users define the privacy problem.
Goals: meet privacy expectations (i.e. "don't surprise the user"), provide privacy controls, support people in deciding what to do, and help users develop appropriate privacy practices (like email etiquette, not tagging pictures, etc.)
Technologies: appropriate defaults, privacy settings that are easy to navigate and change, contextual feedback (e.g. showing how your profile looks to friends or to others), and privacy nudges (that explicitly aim to make users stop and think about what they are doing).
Challenges and limitations: research in this domain focuses more on the front end than on what happens in the back end, user studies typically involve non-representative samples, and there is a focus on privacy expectations (i.e. if there is no privacy expectation, then there is no privacy problem). This latter point aligns with the interests of businesses that want to make users more comfortable with sharing information in their systems.
In the institutional paradigm, the organisation defines the privacy problem.
Technologies: dashboards, appropriate defaults, tools to make privacy policies easier to understand or deal with, tools to help organisations define and enforce privacy policies (e.g. purpose-based access control), and auditing systems.
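To make the purpose-based access control idea a bit more concrete, here is a minimal sketch in Python. The data categories, purposes and function names are my own hypothetical illustration, not any particular product discussed in the talk: each data item is labelled with the purposes for which it may be used, and a request is granted only if its stated purpose is among them.

```python
# Minimal illustration of purpose-based access control (hypothetical names,
# not a real library): data items are labelled with the purposes for which
# they may be used, and a request is only granted if its stated purpose
# is among the allowed ones.

ALLOWED_PURPOSES = {
    "email":        {"account_management", "billing"},
    "location":     {"service_delivery"},
    "browsing_log": set(),          # collected, but no processing purpose granted
}

def access_allowed(data_item: str, purpose: str) -> bool:
    """Return True only if `purpose` was consented to for `data_item`."""
    return purpose in ALLOWED_PURPOSES.get(data_item, set())

# Example: billing may read the email address, but not the location trace.
assert access_allowed("email", "billing")
assert not access_allowed("location", "billing")
```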
Challenges and limitations: the organisation is (at the very least) assumed to be semi-honest and to act in the best interest of the user. As a consequence there are no technical protection measures against the organisation itself; this approach relies heavily on legal protection. The focus is on preventing abuse of personal data rather than limiting its collection, so it does not prevent the creation of large databases of personal data. It is mostly driven by legal compliance.
The anti-surveillance paradigm studies data disclosure through leaky technologies, which are (ab)used by service providers and governments for surveillance purposes. People studying privacy from this perspective often also want to protect many other related fundamental rights, like free speech and freedom of association.
Here the technologists define what the privacy problem is.
Goals: limit disclosure through technical means, and reveal explicitly disclosed data only to its intended recipients. Do this in a way that relies as little as possible on trusted parties or components. Note that this moves trust away from institutions (or people) towards the technologies themselves, which may be harder for non-technical people to trust. Perhaps combining approaches from this domain (e.g. open source, transparency) with approaches from institutional privacy, and having formal organisations adopt these PETs, might help increase overall trust.
Technologies: end-to-end encryption, Tor, advanced cryptographic protocols, and obfuscation protocols (like TrackMeNot).
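As a rough illustration of the obfuscation idea behind tools like TrackMeNot (this is a toy sketch of my own, not the actual tool): real queries are hidden in a stream of dummy queries drawn from an innocuous pool, so an observer of the query stream cannot easily tell which queries reflect genuine interests.

```python
import random

# Toy sketch of TrackMeNot-style query obfuscation (illustrative only):
# interleave the user's real search queries with dummy queries drawn from
# an innocuous pool, so an observer of the query stream cannot easily
# single out the genuine ones.

DUMMY_POOL = ["weather tomorrow", "pasta recipe", "train schedule",
              "news headlines", "population of france"]

def obfuscated_stream(real_queries, dummies_per_real=3):
    stream = []
    for q in real_queries:
        batch = [q] + random.sample(DUMMY_POOL, dummies_per_real)
        random.shuffle(batch)       # hide the position of the real query
        stream.extend(batch)
    return stream

print(obfuscated_stream(["symptoms of flu"]))
```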
Challenges and limitations: the focus is on preventing disclosure (which is a very narrow interpretation of privacy), offering no protection after the disclosure. Embedding formally verified secure protocols in actual system implementations in a secure way is hard. There is a big reliance on the security of end-user devices.
Claudia then went on to discuss several approaches to protecting privacy.
Privacy protection, breaking the link between actors and their actions, can be either anonymity-based (hiding the identity of the actor, e.g. Tor) or private-action-based (hiding the action being performed, e.g. private information retrieval).
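To give a feel for what "hiding the action" means, here is a toy two-server private information retrieval sketch (my own minimal example, under the standard assumption that the two servers hold the same database and do not collude): neither server learns which record is being retrieved, yet the client recovers it from the XOR of the two answers.

```python
import secrets

# Toy two-server PIR sketch: each server XORs together the records selected
# by a bit vector. The two vectors differ only in the queried position, so
# XORing the two answers recovers exactly that record, while each vector on
# its own is uniformly random and reveals nothing about the query.

DB = [b"rec0", b"rec1", b"rec2", b"rec3"]          # identical on both servers

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def client_query(index: int, n: int):
    q1 = [secrets.randbelow(2) for _ in range(n)]   # uniformly random bits
    q2 = list(q1)
    q2[index] ^= 1                                  # differ only at `index`
    return q1, q2

def server_answer(query, db):
    answer = bytes(len(db[0]))
    for bit, record in zip(query, db):
        if bit:
            answer = xor_bytes(answer, record)
    return answer

q1, q2 = client_query(2, len(DB))
record = xor_bytes(server_answer(q1, DB), server_answer(q2, DB))
assert record == DB[2]                              # servers never learn "2"
```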
A different distinction is between crypto-based solutions and obfuscation-based approaches. Crypto-based approaches are binary (data is either fully disclosed or not revealed at all) and are only concerned with protecting the inputs of the system, while disclosing the outputs. Obfuscation-based approaches have 'grey' trust assumptions: noisy or aggregated data is revealed to everybody, while inferences about individual inputs remain hard (but not necessarily impossible). An example of the crypto-based approach is secure multiparty computation (MPC), while statistical disclosure control (SDC) is an example of the obfuscation-based approach.
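A minimal sketch of this contrast (illustrative only, with simplified parameters of my own choosing): with additive secret sharing, one flavour of MPC, each party's input stays hidden while the exact sum is revealed; with an obfuscation-based approach in the spirit of statistical disclosure control, only a noisy sum is published, which makes inferences about individual inputs harder but not impossible.

```python
import random

# Crypto-based flavour: additive secret sharing over a modulus.
# Each input is split into random-looking shares; any single share reveals
# nothing, yet adding all shares gives the exact sum of the inputs.
MOD = 2**32
inputs = [12, 7, 30]                         # one private value per party

def share(value, n_parties=3):
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

all_shares = [share(v) for v in inputs]
exact_sum = sum(s for shares in all_shares for s in shares) % MOD
assert exact_sum == sum(inputs)              # exact output, hidden inputs

# Obfuscation-based flavour: publish the sum plus random noise,
# a simplified statistical-disclosure-control style perturbation.
noisy_sum = sum(inputs) + random.gauss(0, 5)
print(exact_sum, round(noisy_sum, 1))
```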
As an example of a specific technology, Claudia discussed PrETP, a privacy-friendly way to implement electronic toll pricing (cf. the European Electronic Toll Service (EETS) Decision).
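Very roughly, and as a toy sketch rather than the actual PrETP protocol (which uses homomorphic commitments and carefully designed spot-check proofs), the idea is that the on-board unit computes the total fee locally from the driven segments and only reports that total plus commitments to the per-segment fees, so the toll operator can verify random spot checks without learning the whole trajectory. All names and values below are hypothetical.

```python
import hashlib, secrets

# Toy sketch inspired by the PrETP idea (not the real protocol): the
# on-board unit keeps the driven segments local, reports only the total
# fee plus hash commitments to the per-segment fees, and opens a single
# commitment when the operator performs a spot check for a location it
# independently observed.

def commit(segment: str, fee: int):
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{segment}|{fee}|{nonce}".encode()).hexdigest()
    return digest, nonce

segments = [("A12 km 10-20", 150), ("ring road", 80)]    # fees in cents
commitments = [commit(seg, fee) for seg, fee in segments]

report = {
    "total_fee": sum(fee for _, fee in segments),        # 230 cents
    "commitments": [digest for digest, _ in commitments],
}

# Spot check: the operator saw the car on "A12 km 10-20" and asks the OBU
# to open the matching commitment; it learns that one segment's fee only.
seg, fee = segments[0]
digest, nonce = commitments[0]
assert hashlib.sha256(f"{seg}|{fee}|{nonce}".encode()).hexdigest() == digest
```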