Technological developments impact our privacy. In the late nineteenth century, cheaper, easier-to-operate cameras combined with improvements in printing technology allowed newspapers containing pictures to be more widely circulated. Fearing that “what is whispered in the closet shall be proclaimed from the house-tops”, Warren and Brandeis formulated the right to be let alone. In 1999, Scott McNealy, then CEO of Sun Microsystems, famously proclaimed: “You have zero privacy anyway. Get over it.” Has a century of progress, especially in the area of information and communication technology, really killed privacy?
(This is the second myth discussed in my book Privacy Is Hard and Seven Other Myths: Achieving Privacy through Careful Design, which will appear October 5, 2021, from MIT Press. The image is courtesy of Gea Smidt.)
The invention of computers and digital networks, and their proliferation in all aspects of our lives, has undeniably changed the privacy landscape beyond recognition. Archiving information used to be hard. Digital documents, on the other hand, are easily copied, and it is hard to ensure that all copies are truly deleted. Computers make searching for information a breeze. Networks allow information from different sources to be combined and to be retrieved anywhere. Social networks and the “web 2.0” allow users to generate content as well.
But the change is not limited to such volunteered data that users explicitly share or provide on request. Much more pernicious is the use of observed data that can be collected surreptitiously, because a lot of the technology we use is ‘leaky’ by design. Computers, and in particular the smartphones with their sensors that we carry with us almost all of the time, automatically collect vast amounts of detailed information about how we use them: where we are, what we read, which websites we visit, and so on. Due to the internet’s open design, everything we do online can be observed by well-placed entities, such as internet service providers and web servers. Using cookies, our surfing history can be tracked across many different websites. From all these sources of observed data, detailed information about our health, our wealth, our beliefs, and our preferences can be derived.
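To make the cookie-based tracking mechanism concrete, here is a minimal sketch of how a third party embedded on many websites can link page views into one profile. All names (`Tracker`, `on_page_view`, the example site names) are hypothetical and chosen for illustration; real trackers are far more sophisticated.

```python
import uuid


class Tracker:
    """Sketch of cross-site tracking: a third party embedded on many
    sites hands each browser one cookie ID and links every page view
    carrying that ID into a single behavioural profile."""

    def __init__(self):
        # cookie ID -> list of (site, page) views observed for that browser
        self.profiles = {}

    def on_page_view(self, cookie, site, page):
        """Called whenever a page embedding the tracker is loaded.
        If the browser presents no cookie yet, assign a fresh ID."""
        if cookie is None:
            cookie = str(uuid.uuid4())
        self.profiles.setdefault(cookie, []).append((site, page))
        return cookie  # the browser stores and resends this ID


tracker = Tracker()
# First visit: no cookie yet, so the tracker assigns one.
cid = tracker.on_page_view(None, "news.example", "/politics")
# Later visit to an unrelated site: the browser sends the same cookie,
# so the tracker links both visits to one profile.
tracker.on_page_view(cid, "shop.example", "/running-shoes")
```

The point of the sketch is that the user volunteered nothing: the profile is built entirely from observed data, merely because the same tracker is present on both sites.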
Does this mean we necessarily have zero privacy, and that we should get over it? I don’t think so. The leaky design of our computers, networks, smartphones, and other devices is not inevitable. A guiding principle of my book is that technology does not develop in isolation and does not have an independent, inherent purpose or destiny of its own (as the Silicon Valley crowd wants us to believe). Instead, technology is made by people and is shaped according to their agendas and beliefs. These beliefs become embedded in how the technology functions: what it allows us to do, what it prevents us from doing, and what it does regardless of our own intents and wishes. At the moment, information technology is used to invade our privacy. As we will show throughout the book, this is not necessarily so: technology can also be used to protect privacy, through a process called privacy by design.
Privacy by design works alongside legal protections of privacy, like the General Data Protection Regulation (GDPR) in Europe. Privacy by design is important because purely legal means to protect privacy are not enough: privacy law does not always apply, and even when it does, enforcing it is not always easy. Moreover, legal protections assume a certain level of trust in the actor offering a service. In fact, I once heard a legal expert summarise the GDPR as ‘be reasonable when processing personal data’. The problem is that not all actors can be assumed to be reasonable.
Privacy by design is an engineering approach popularized by Ann Cavoukian in the 1990s. The essential idea is that privacy should be considered a design requirement from the very beginning, and then throughout the life cycle of a system. This is necessary for two reasons. First, early design decisions have a strong privacy impact and cannot easily be changed later in the design process. Second, by considering privacy together with all other requirements from the outset, designers are forced to think about how to meet those other requirements in a privacy-friendly way.
The power of privacy by design thinking is best illustrated by an example. Suppose you want to leave your coat at a cloakroom when visiting the theatre. You would probably be unpleasantly surprised if the assistant asked for your name and recorded it when taking your coat, and checked your name when you retrieved the coat after the show. If a cloakroom worked this way, it would be quite privacy invasive, compiling a record of who visited the theatre, and when. (It would also be much less efficient.) Luckily, cloakrooms have applied privacy by design (although probably for efficiency reasons, not because they care about privacy): when taking your coat, the assistant hands you a numbered token in return. With that token, you can reclaim your coat after the show. This system records no personal information whatsoever.
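The cloakroom's token scheme can be sketched in a few lines of code. This is only an illustration of the design idea, not anything from the book; the class and method names (`Cloakroom`, `check_in`, `reclaim`) are my own. The key property to notice is that the system stores coats keyed by an anonymous token and never touches a name.

```python
import secrets


class Cloakroom:
    """Data-minimal check-in: coats are keyed by an anonymous token,
    and no personal information is recorded anywhere."""

    def __init__(self):
        self._coats = {}  # token -> coat; no names, no timestamps

    def check_in(self, coat):
        """Store the coat and hand back a random token.
        The token has no link to the owner's identity."""
        token = secrets.token_hex(4)
        self._coats[token] = coat
        return token

    def reclaim(self, token):
        """Return the coat for a token. Possession of the token
        is the only credential; no identity check is needed."""
        return self._coats.pop(token)


room = Cloakroom()
token = room.check_in("red coat")
# ... after the show ...
coat = room.reclaim(token)
```

The design choice mirrors the physical numbered token: authorisation (holding the token) is decoupled from identification (knowing who you are), so the requirement "return each coat to its rightful owner" is met without collecting any personal data at all.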
This is of course a trivial example, but it does show the kind of shift in perspective needed to properly design systems in a privacy-friendly way. Many technologies and approaches exist to support such privacy-friendly designs. The rest of my book is devoted to explaining them in a way that allows laypersons to understand how technology can be used to protect privacy instead of invading it.
(For all other posts related to my book see here)