Using revocable privacy to mitigate the risk of false accusations

July 7, 2021

Every year I teach a privacy seminar, in which groups of students pick a topic to present in class and to write a paper about. Sometimes students pick revocable privacy, one of my research topics. This year a group of students did so again, and while studying it they articulated a very interesting reason why revocable privacy is a useful construct: the impact of a false accusation may deter people from voicing the accusation at all, and using revocable privacy approaches may mitigate this.

Revocable privacy aims to design systems in such a way that the architecture of the system guarantees that personal data is revealed if and only if a predefined rule has been violated. The idea is that instead of relying on procedural, legal or organisational means (which may be overridden or sidestepped) to protect the privacy of most well-meaning users of a system, the predetermined rules are enforced by the technical architecture and as such cannot be circumvented. This also prevents function creep. Revocable privacy is not a totally new concept: the core idea was already present in David Chaum’s proposal for digital coins, which were truly anonymous unless someone decided to double spend them; in that case the identity of the perpetrator would be revealed. Other use cases are, for example, speed checks (revealing only the license plates of vehicles exceeding the speed limit) or detecting anomalies in logs (e.g. repeated failed login attempts for a certain account, or strange domains used to control botnets).

One primitive that we developed is distributed encryption, where the same (potentially privacy-invasive) plaintext is encrypted by different entities (at different locations, or at different points in time), and where the resulting ciphertext shares can only be combined to recover this plaintext once at least a fixed threshold of them has been collected. This primitive can, for example, be used to reveal the identity of double spenders of digital coins: if the identity of the spender of each coin is encrypted using distributed encryption with a threshold of two, honest users (who never spend the same coin more than once) stay anonymous, while a double spender creates exactly the two shares needed to expose them.
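To make this concrete, here is a toy sketch (in Python) of a scheme with the same interface. Every reporter derives the same secret-sharing polynomial deterministically from the plaintext and contributes a single point on it; any k points from distinct reporters reconstruct the plaintext, while fewer yield nothing useful. To be clear: this is not the actual distributed encryption construction (the toy version, for instance, lets anyone test a share against a guessed plaintext), and all names and parameters below are made up for illustration.

```python
import hashlib

PRIME = 2**127 - 1  # a Mersenne prime; the field our toy polynomial lives in

def _coefficients(plaintext: bytes, k: int) -> list[int]:
    """Derive the k coefficients of a degree-(k-1) polynomial from the
    plaintext itself, so every reporter computes the exact same polynomial.
    The constant term encodes the plaintext; the rest are hash-derived."""
    assert len(plaintext) <= 15, "toy scheme only handles short plaintexts"
    secret = int.from_bytes(plaintext.ljust(15, b'\x00'), 'big')
    coeffs = [secret]
    for i in range(1, k):
        digest = hashlib.sha256(plaintext + i.to_bytes(4, 'big')).digest()
        coeffs.append(int.from_bytes(digest, 'big') % PRIME)
    return coeffs

def encrypt_share(plaintext: bytes, reporter_id: int, k: int) -> tuple[int, int]:
    """A reporter (identified by a nonzero field element) independently
    produces its 'ciphertext share': one point on the shared polynomial."""
    coeffs = _coefficients(plaintext, k)
    y = sum(c * pow(reporter_id, e, PRIME) for e, c in enumerate(coeffs)) % PRIME
    return (reporter_id, y)

def combine(shares: list[tuple[int, int]]) -> bytes:
    """Lagrange-interpolate distinct shares at x = 0; with k shares of the
    same plaintext this recovers it, with fewer it yields only garbage."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret.to_bytes(16, 'big').strip(b'\x00')
```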

Another, somewhat more sensitive, use case (studied by the students) is reporting child abuse. This can be implemented using distributed encryption by encrypting the (canonicalised) name of the victim (and using any case-relevant information as additional input, which can then only be recovered when the threshold condition is met). The original idea behind this use case was that an individual report by a professional (or even by someone from the immediate vicinity of the victim) might not be enough to immediately investigate a suspicion of child abuse. Given the gravity of the accusation, such investigations will have a tremendous impact on both the child and the suspects. The argument why revocable privacy is relevant thus focuses on the potential impact of invading the private life of someone who has in actual fact not done anything wrong. Indeed, this is the primary argument for studying revocable privacy in the first place.
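A practical detail this use case hinges on is that canonicalisation step: independent reporters must map the same child to a byte-identical plaintext, or their ciphertext shares will never combine. What such a normalisation could look like is sketched below; the precise rules are my assumption, not something the scheme prescribes.

```python
import unicodedata

def canonicalise(name: str) -> bytes:
    """Normalise a reported name so that, e.g., 'José  Silva' and
    'jose silva' map to the same bytes. Purely illustrative rules."""
    # Decompose accented characters and drop the combining marks.
    decomposed = unicodedata.normalize('NFKD', name)
    ascii_only = decomposed.encode('ascii', 'ignore').decode()
    # Lowercase and collapse any run of whitespace into a single space.
    return ' '.join(ascii_only.lower().split()).encode()
```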

What the students did was articulate much more strongly another reason why revocable privacy is relevant. (To be honest, this argument is present in the paper describing the use cases, but so inconspicuously that I myself had actually forgotten about it.) In cases where corrective actions or other enforcement steps depend on individual people reporting an incident (whether this involves child abuse, tax fraud, or deanonymising forum or Wikipedia comments), people may refrain from doing so if the consequences are severe and they are not entirely certain that an actual incident took place. In other words: when reporting an incident will surely lead to action, the reluctance to report it may increase with the severity of the potential consequences. If the severity is very high, people may be so reluctant that very few of them press ahead and actually report the incident. As a result, severe cases (like child abuse) may be reported less often than desirable.

With a revocable privacy approach based on a threshold scheme like distributed encryption, an individual report does not lead to action, and a false accusation has no immediate consequences. However, if several people report the same case, such that the threshold is met, action will ensue. This may put people more at ease when reporting incidents, knowing that something will only happen once several other people also believe something is wrong and thus confirm the suspicion. Whether this would work in practice, and indeed whether severe cases are actually underreported in current crime reporting schemes, is of course a crucial question to study first.
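For illustration, this is what the dynamic looks like with the toy sketches above and a (hypothetical) threshold of three: two independent reports reveal nothing, while a third one unlocks the name.

```python
# Hypothetical flow with threshold k = 3, using the toy sketches above.
name = canonicalise("Ada  Lovelace")  # byte-identical for every reporter

# Two independent reports: below the threshold, nothing can be recovered
# (interpolating two points of a degree-2 polynomial yields garbage).
reports = [encrypt_share(name, reporter_id, k=3) for reporter_id in (1, 2)]

# A third independent report about the same child crosses the threshold.
reports.append(encrypt_share(name, 3, k=3))
print(combine(reports))  # b'ada lovelace'
```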
