Summary of presentations and discussions of day two of the For Your Eyes Only conference held in Brussels on November 29 and November 30. My main findings can be found here.
Panel: Ashkan Soltani (Independent researcher), Frank Piessens (DistriNet KU Leuven), Dave Clarke (Department of Computer Science KU Leuven), Claudia Diaz (COSIC/ESAT KU Leuven), Seda Gürses (COSIC/ESAT KU Leuven).
Piessens claimed that privacy is harder than security, because the latter can be solved at one level while the former has to be addressed at all levels. (I happen to strongly disagree: security also needs to be addressed at all levels. Internet banking is a good example, requiring protective measures at both the network and the application layer.) And like security, privacy must be addressed at the user layer as well, which is hard. Another difference according to Piessens is that the adversary can run software in my private context. But again, in Internet banking a hostile browser plugin can break security. What is different is that in security the user is the adversary towards the service provider, whereas in privacy the service provider is the adversary towards the user. He also claimed that information flow analysis deserves further study and discussed a Firefox plugin they developed. Information flow analysis prevents private information from flowing to public outputs.
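To make that last point concrete, here is a toy sketch (not the actual Firefox plugin from the talk) of dynamic information flow tracking in Python: values carry a secrecy label, and a public sink refuses anything derived from private data. All names are illustrative.

```python
# Toy dynamic information flow tracking: labelled values plus a public sink
# that blocks anything (transitively) derived from private data.

class Labeled:
    def __init__(self, value, secret=False):
        self.value = value
        self.secret = secret

    def __add__(self, other):
        # The result of combining values is secret if either input is secret.
        other_secret = other.secret if isinstance(other, Labeled) else False
        other_value = other.value if isinstance(other, Labeled) else other
        return Labeled(self.value + other_value, self.secret or other_secret)

def send_to_public_site(data: Labeled):
    """A 'public sink': refuses data derived from private sources."""
    if data.secret:
        raise PermissionError("blocked: private data would flow to a public sink")
    print("sent:", data.value)

email = Labeled("alice@example.com", secret=True)   # private source
greeting = Labeled("hello ")                        # public data

send_to_public_site(greeting)                       # fine
try:
    send_to_public_site(greeting + email)           # derived from private data
except PermissionError as e:
    print(e)
```

A real implementation of course has to track flows through the whole browser (DOM, scripts, network requests), which is exactly what makes it a research topic.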
Ashkan described several of the more obscure ways in which social networks (in particular Facebook) collect personal information. For example, if you send someone a private message containing a link to a site, that fact is shared with the owner of that site (in fact the counter of a Like button on that page is increased). Also, iOS apps that use Facebook Connect actually connect through the Facebook app installed on the device.
Clarke distinguished two methods to control the spread of personal information: access control (a priori) and accountability (a posteriori). He sees three classes of problems w.r.t. the spread of personal information. First of all, contexts are ambiguous and several distinct contexts may sometimes collapse into one, so it is not always clear where exactly information shared in one context ends up. Second, audiences are sometimes invisible, leading to the same effect. Third, the border between public and private spaces gets increasingly blurred (a similar effect is seen in the Bring Your Own Device (BYOD) trend). In the context of access control, one of his PhD students researched making the context more explicit, for example by showing, for a picture being posted, who may see it and who actually saw it. Machine learning makes sentiment analysis possible (to determine the mood of the poster or of the message being posted), which allows the user to be warned when he/she is about to post a possibly offensive message. Finally, user interface design is important to make it easier for the user to understand the context a message will be posted in. (All this is similar to the work of Cranor, see below.) W.r.t. accountability as a means of a posteriori enforcement, Clarke notes that it is inherently at odds with privacy and that most of the time logging is used to hold the user accountable, not the service provider. To achieve the latter, users have to log things locally for themselves (which is not so easy to do in a centralised social network setup like Facebook's).
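As an illustration of the 'log it yourself' idea, a minimal sketch of a user-side, hash-chained post log follows; the class and field names are my own, not Clarke's. Chaining the hashes makes the local log tamper-evident without relying on the service provider.

```python
# A user keeps a local, hash-chained record of what was posted and to whom.
import hashlib
import json
import time

class LocalPostLog:
    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64

    def record(self, message, audience):
        entry = {
            "time": time.time(),
            "message": message,
            "audience": sorted(audience),
            "prev": self.last_hash,          # link to the previous entry
        }
        self.last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self.last_hash
        self.entries.append(entry)

log = LocalPostLog()
log.record("holiday pictures", audience=["friends"])
log.record("rant about work", audience=["close friends"])
```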
Diaz gave a comparative analysis of privacy protection in social networks. In terms of the importance of privacy protection, there are basically two narratives: the first is about getting into trouble with your own private, social circle (social privacy) and the second is about getting into trouble with the government (institutional privacy) when personal information is leaked. (A member of the audience noted that a third narrative, about getting into trouble in the world around you, e.g. when doing business, was missing.) There are also basically two types of privacy protection: privacy settings (to achieve social privacy) and PETs (to achieve institutional privacy). The challenge is to integrate both approaches. The first question is: who defines what the problem is, the user or the expert? And how do we avoid being paternalistic? Second, what information is in scope: implicit or explicit information? PETs are usually content agnostic, yet for social privacy semantics make a huge difference. Third, how do we define the privacy risk: based on potential harms, or on actual damage? (For most people a direct causal relationship matters.)
Gürses served as discussant. She observed that the issue of logging versus privacy depends on who does the logging: logging by myself is fine, but if the service provider does it, this is potentially evil. W.r.t. Diaz's presentation she observed that the main issue is who gets to decide: law enforcement, Facebook, but also the PET designers (!) make normative assumptions and/or set the norms. In general the question is whether we can trust the machine to make decisions for us (and who controls the machines that make these decisions... a potential source of recursion). The main theme of the SPION project therefore is where to assign the responsibilities.
In the discussion afterwards, the main observation was that incentives steer the application of a certain technology. Because unique identifiers (e.g. ad impressions and click-through rates) are the currency 'du jour' of the Internet, uptake of privacy enhancing technologies is low. It is telling that Safari is the only one of the four main browsers not backed by a company making its money with advertising, and also the only one that has privacy settings enabled by default. Ashkan suggested that Mozilla should build the "user's browser" that has the level of privacy protection a user would expect (and does not hurt his browsing experience). Unfortunately, Mozilla is funded by Google. Ashkan further suggested that the future will not be about trusting machines but about trusting brands/corporations (but this requires open and transparent APIs... something I will write about some other time...).
Panel: Airi Lampinen (Helsinki Institute for Information Technology HIIT & University of Helsinki), Kate Raynes-Goldie (Curtin University, Australia), Bettina Berendt (Department of Computer Science at KU Leuven), Rula Sayaf (DistriNet KU Leuven), Ralf De Wolf (iMinds-SMIT Vrije Universiteit Brussel), Shenja van der Graaf (iMinds-SMIT Vrije Universiteit Brussel), Laurence Claeys (Vrije Universiteit Brussel).
Lampinen studies privacy as a process of interpersonal boundary regulation, much in the spirit of Altman. She asked three questions (without answering them). What if we looked beyond online (e.g. couchsurfing)? What if we looked beyond the individual (e.g. even in normal life you yourself are not in full control)? What if we had better access to our own data (e.g. last.fm)? (I am actually not sure what she meant by that last question and its example.)
Raynes-Goldie argues that Facebook is on a mission to make the world radically transparent, and that this belief is hardwired in its architecture. This architecture was built in such a way that "Facebook helped to push people over a hurdle" as Zuckerberg put it (using both peer pressure and rewards). Raynes-Goldie wonders whether the same tactics could be used to push people back in the other direction again...
De Wolf develops PETs that are usable. He returned to the question of informational/institutional versus social privacy (see Diaz above) and who gets to define what privacy means and how it should be protected. His advice is to test the expert's definition of a particular privacy problem and its solution with users, to avoid being too normative. Designers need to be aware that a user is never floating in thin air: he is embedded in society and a member of a community. W.r.t. usability he distinguishes perceived usefulness (does it help to solve a problem?) and perceived ease of use (is it easy to use?). Like wild animals, technology needs to be tamed: the domestication of technology.
According to Sayaf, current access control methods fail to protect privacy because the context is missing: contexts can be ambiguous, conflicting, or change over time. To this end the possible and actual audiences need to be visible, the context needs to be explicit, and the most relevant context for a message needs to be elicited, with the message restricted to that context (see Clarke above and Cranor below). Policies should be extended with the obligation that a message posted in a context can only be shared within that same context.
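A minimal sketch of such a context-bound sharing check, under the assumption that every post is tagged with the context it originated in (the Context and Post types are illustrative, not Sayaf's model):

```python
# Each post remembers its originating context; re-sharing is only allowed
# within that same context.
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    name: str            # e.g. "colleagues", "family"

@dataclass
class Post:
    text: str
    origin: Context

def may_share(post: Post, target: Context) -> bool:
    """Allow sharing only within the context the post originated in."""
    return post.origin == target

work = Context("colleagues")
family = Context("family")
post = Post("slides from today's meeting", origin=work)

assert may_share(post, work)          # same context: allowed
assert not may_share(post, family)    # different context: blocked
```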
Berendt argued that we should stop thinking small, and include the environment, mental schemas and behaviour in the equation.
The discussant stressed that we should cross the online-offline boundary, and not only focus on Facebook but also on mobile devices and sensor networks/the Internet of Things. We should create PETs that people really want and actually use. We could also look for alternative solutions, e.g. first observe how people manage their audience in the real world, and then design tools that support these strategies online.
During the discussion, a developer sketched the dilemma they face when designing apps: Android and iOS dumb down their users by advising developers to get rid of settings that only 20% of the users use. This means that expert settings that allow fine-grained control get dropped. Also, half of the users actually want their contact list to be downloaded automatically, so that their friends are automatically connected to them as soon as they start using the same app...
Panel: Alessandro Acquisti (Carnegie Mellon University), Lorrie Faith Cranor (Carnegie Mellon University), Sandra Petronio (Indiana University-Purdue University Indianapolis), Adam Joinson (Centre for Information Management, University of Bath, UK), Eleni Kosta (Tilburg Institute for Law, Technology, and Society).
Acquisti presented a few studies showing that small changes in settings can have a huge impact on the (privacy related) behaviour of users. For example, when people are given a $10 gift card that is anonymous and are then offered the option to switch to a $12 gift card that tracks their spending, 52% of the people keep the anonymous card. Yet, if they are given the tracking $12 card first, only about 10% opt for the anonymous card when presented with that option afterwards. In other words, picking the default matters.
Cranor made a compelling argument for privacy nudges. Nudges are a form of soft paternalism: the rules are not hardcoded, so you can still change the settings. Nudges can be designed to make privacy more cute, cool or sexy. They can be used to reward or punish certain behaviour. They can be used to show the consequences of sharing, and to make it easier to select the privacy friendly option while adding friction to the privacy reducing option. She quoted "regret studies" showing that people often regret having posted a message on Facebook or Twitter, and that these regrets typically surface within a day. Cranor and her team designed the following three nudges (see also Clarke above): a 'timer' nudge that makes people stop and think, allowing them to cancel a post; a 'sentiment' nudge that measures the sentiment of a message; and a 'scope' nudge that reminds the user of the audience that will receive a message (by picking a few random members of the currently selected audience). People found the timer and scope nudges useful, but the sentiment nudge annoying.
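To make the three nudges concrete, here is a rough sketch of how they might sit in front of a 'post' action. The word-list sentiment check is a stand-in for whatever classifier Cranor's team actually used, and all function names are illustrative.

```python
# Illustrative sentiment, scope and timer nudges wrapped around a post action.
import random
import time

NEGATIVE_WORDS = {"hate", "stupid", "idiot", "drunk"}

def sentiment_nudge(message):
    # Naive stand-in for a sentiment classifier: warn on negative words.
    hits = NEGATIVE_WORDS & set(message.lower().split())
    if hits:
        print(f"This post sounds negative ({', '.join(sorted(hits))}). Still post it?")

def scope_nudge(audience, sample_size=3):
    # Remind the user of the audience by naming a few random members.
    sample = random.sample(audience, min(sample_size, len(audience)))
    print(f"Among others, {', '.join(sample)} will see this post.")

def timer_nudge(post_action, delay_seconds=10):
    # Delay the post so the user can stop and think, and cancel if needed.
    print(f"Posting in {delay_seconds} seconds... (press Ctrl-C to cancel)")
    try:
        time.sleep(delay_seconds)
    except KeyboardInterrupt:
        print("Post cancelled.")
        return
    post_action()

audience = ["boss", "mother-in-law", "old classmate", "neighbour"]
message = "I hate Mondays"
sentiment_nudge(message)
scope_nudge(audience)
timer_nudge(lambda: print("Posted:", message))
```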
Petronio presented her Communication Privacy Management Theory, based on work with Altman.
Joinson asked whether nudges are enough, and what we are in fact nudging. According to him, a recent UK government study concluded that "non regulatory measures, like nudges, are less likely to be effective". The problem is that nudges make only the users responsible. Social marketing research shows that transparent interventions work better for people who want to change, whereas invisible interventions work better for people who do not want to change. Attitude is not the same as behaviour, so we need not change attitudes but instead need to change how people interpret a (privacy invasive) situation. Experiments have shown that people using Facebook are less focused on themselves. At the same time, social theory says that people who are less focused on themselves are less likely to engage in self-disclosure. This suggests that using Facebook should actually lead to less self-disclosure!
I missed the discussion and the tail end of the last presentations because I had to catch the train back home...
In closing: a nice conference with good speakers from a balanced mix of backgrounds, and an engaging audience.
A note on the format though. I guess that, as a computer scientist, I do not much like the panel setup where questions are postponed until after the discussant has summarised the presentations and the panel has first discussed them among each other. Especially the first few presentations in a panel get less attention, and clarifying questions do not get asked this way. And because the presentations generally take longer than planned, there is not always enough time to engage the audience.