Tests that detect Covid-19 antibodies are becoming available. This allows authorities to test whether people have already contracted the virus, and are therefore now immune and no longer carriers of the virus. Once tested, such people could be exempted from certain restrictions (like staying inside or working from home), or could volunteer to help vulnerable people who stay in quarantine. The question is: how do you reliably prove that you have tested positive on such an antibody test, while protecting the privacy of the people being tested? After all, we do not yet know what the long-term health effects of having been infected by the virus are. It may be a perk and a badge of honour now, but it may be a stain in your health dossier in the years to come.

For this reason people have been proposing digital credentials, and especially privacy-friendly attribute-based credentials, to solve this problem. Such a credential can be selectively disclosed (meaning that the bearer can reveal being Covid-19 immune in cases where this matters, while hiding this fact in all other contexts). Moreover, such credentials are stored and managed by the user on his or her own device (e.g. their smartphone), meaning that there is no need to maintain a central database registering all people tested Covid-19 immune (although it is certainly possible and even desirable to track anonymous statistics concerning Covid-19 immunity). It goes without saying that such a credential cannot be forged.
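To make the selective-disclosure idea concrete, here is a purely illustrative sketch (in Python) of a credential as a signed bundle of attributes from which the holder reveals only a chosen subset. It deliberately ignores the cryptography (blind signatures, zero-knowledge proofs) that makes this secure and unlinkable in a real system such as IRMA; all names are mine and not part of any actual API.

```python
from dataclasses import dataclass

@dataclass
class Credential:
    # A credential is a bundle of attributes signed by an issuer. In a real
    # attribute-based credential scheme the holder proves possession of chosen
    # attributes in zero knowledge, rather than handing over the raw signature.
    attributes: dict
    issuer_signature: bytes

def disclose(credential: Credential, requested: list) -> dict:
    """Reveal only the requested attributes; all others stay hidden."""
    return {name: credential.attributes[name] for name in requested}

# Example: reveal immunity status while keeping, say, the date of birth hidden.
cred = Credential({"covid19_antibodies": "present", "date_of_birth": "1980-01-01"},
                  issuer_signature=b"...")
print(disclose(cred, ["covid19_antibodies"]))   # {'covid19_antibodies': 'present'}
```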

But there is a problem. A credential saying that the bearer is immune to Covid-19 is extremely valuable, given the freedoms and perks it offers. It should be very strongly tied to the actual person to whom it pertains, to prevent it from being forged or deliberately shared with someone in need. And essentially all forms of digital identity management, including those implementing attribute-based credentials, have a problem ensuring a strong binding between digital credentials and the person they belong to. Forging such credentials, or stealing them from someone, can quite easily be prevented. But it is much harder to prevent a person from sharing his or her credential with someone else, especially in fully virtual, online environments.

But context matters, and in this particular case we are not really considering virtual use cases, but rather real-life use cases where people can use such a credential to prove that they are free to leave home or enter a quarantined area, for example. In fact, in terms of functionality we would like to have a kind of ‘corona immunity stamp’ in our passports, that we can selectively disclose (or not), and that automatically vanishes after, say, six months (if that happens to be the period for which immunity can be guaranteed).
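The ‘vanishing’ behaviour needs no magic: the credential can simply carry an issuing-date (or expiry) attribute that every verifier checks. A minimal sketch, assuming a six-month validity period purely for illustration:

```python
from datetime import date, timedelta
from typing import Optional

VALIDITY = timedelta(days=182)  # roughly six months; the real immunity period is unknown

def is_still_valid(issued_on: date, today: Optional[date] = None) -> bool:
    """The 'stamp' vanishes by construction: verifiers reject expired credentials."""
    today = today or date.today()
    return today <= issued_on + VALIDITY
```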

Can we design a ‘corona credential’ that functions like such a ‘corona immunity stamp’, given that it is only relevant in real-life use cases, where we can rely on physical inspection to verify the binding between the person and the credential being presented?

Perhaps we can, if we rely on biometrics to strengthen the binding. It should be stressed that the use of biometrics comes with all kinds of caveats concerning false-reject and false-accept rates, which depend for example on age and genetic disposition. In other words: certain groups may be favoured over others when relying on this approach.

Consider for example the following setup, using facial scans as the biometric.

People can install an app on their smartphone that allows them to manage all kinds of attribute-based credentials. In Nijmegen, for example, we have been working on IRMA for quite some time now.

After someone tests positive for Covid-19 antibodies, the accredited testing station can issue a credential stating this. To tie this credential to the person just tested, the testing station needs to take a picture of that person, derive a so-called biometric template from this picture, and store this template together with the (positive) test result in the credential. The credential is issued to the smartphone of the user, and only stored there. The testing station destroys any information about the person and the picture it took, and only records the test result (without any identifying information) for statistical purposes.
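As a sketch of what this issuing step could look like, under the assumptions described above: derive_template() is a placeholder (here just a hash, which is emphatically not a usable biometric), and none of these names refer to the actual IRMA API.

```python
import hashlib
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ImmunityCredential:
    test_result: str      # e.g. "antibodies_present"
    face_template: bytes  # biometric template derived from the photo, not the photo itself
    issued_on: date       # lets verifiers enforce the (assumed) six-month validity window

def derive_template(photo: bytes) -> bytes:
    """Stand-in for a real face-template extractor (e.g. a neural embedding)."""
    return hashlib.sha256(photo).digest()   # placeholder only

def issue_at_testing_station(photo: bytes, antibodies_present: bool) -> Optional[ImmunityCredential]:
    """Hypothetical issuance step at an accredited testing station."""
    if not antibodies_present:
        return None
    credential = ImmunityCredential(
        test_result="antibodies_present",
        face_template=derive_template(photo),
        issued_on=date.today(),
    )
    # The credential is now issued to (and stored only on) the holder's
    # smartphone; the station destroys the photo and keeps only an
    # anonymous test result for statistics.
    return credential
```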

To prove immunity to Covid-19, people can choose to reveal this credential. They can only do so when the credential was issued to their own phone. To prevent someone from using someone else’s phone, people revealing the credential are asked to also reveal the facial template stored in it. So if someone wishes to enter a quarantined area using such a credential, someone present at the entrance should take a picture and match it against the biometric facial template contained in the credential. Again, the picture should be discarded immediately after the credential is verified.
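A corresponding sketch of the checkpoint procedure, reusing the helpers from the sketches above; face_distance() and MATCH_THRESHOLD are stand-ins for whatever biometric matcher is actually used, and the chosen threshold directly determines the false-accept/false-reject trade-off mentioned earlier.

```python
MATCH_THRESHOLD = 0.6   # assumed tolerance of the biometric matcher

def face_distance(template_a: bytes, template_b: bytes) -> float:
    """Stand-in for a real biometric matcher returning a dissimilarity score."""
    return 0.0 if template_a == template_b else 1.0   # placeholder: exact match only

def verify_at_checkpoint(disclosed: dict, live_photo: bytes) -> bool:
    """Check the disclosed attributes and match the enrolled template against
    a freshly taken picture; the picture is discarded right after this check."""
    if disclosed.get("test_result") != "antibodies_present":
        return False
    if not is_still_valid(disclosed["issued_on"]):
        return False
    live_template = derive_template(live_photo)
    return face_distance(disclosed["face_template"], live_template) <= MATCH_THRESHOLD
```

Note that with the hash placeholder above only a bit-identical photo would ever match; a real matcher compares feature embeddings and is inherently probabilistic, which is exactly where the group-dependent error rates come in.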

There are issues with such an approach, for example the fact that it normalises camera surveillance. More fundamentally, observe that its privacy relies on pictures being destroyed, both at the testing station and at any checkpoint. This reliance on operating procedures to protect privacy seems inherent to this particular case, given that we somehow need to physically verify the binding between a person and his or her credentials. The decision to use such an app should therefore not be taken lightly.

My main interest here was the more fundamental question of whether a digital, privacy-friendly credential could be used to solve this problem, given the fact that in general digital identities are only weakly bound to their bearers. My initial thought was that this was not possible. Interestingly enough, given the particular ‘physical’ aspects of this case, the answer turns out to be somewhat more nuanced, subject to the caveats mentioned above.