Last week I participated in a Lorentz workshop on Fair patterns for online interfaces, organised by Hanna Schraffenberger, Raphael Gellert, Colin Gray, Arianna Rossi and Cristiana Santos. The workshop was super interesting, and I would like to thank the organisers for the great work they did in preparing such a stellar event. (BTW: the Lorentz Center offers a great location and a great deal of support to organise your own workshop at no cost. They are always happy to receive workshop proposals!).
At the workshop dinner, Arianna asked me what I learned, and I provocatively quipped: “fair design patterns do not exist”. Of course the truth is much more nuanced, which I will try to unpack a bit in this blog post, perhaps to start a more in-depth discussion and study.
The study of fair design patterns follows the exploding body of research on so-called dark patterns: features of interface design crafted to trick users into doing things they may not want to do, but which benefit the business in question. Think of pre-checked opt-in checkboxes, highlighted defaults to accept all cookies, or hard-to-find interface elements that allow a user to change their privacy settings. I only really got immersed in that research field when organising the 2019 Interdisciplinary Summerschool on Privacy, which covered dark patterns as its theme.
Now you would think it would be straightforward to define fair design patterns, given that we have an easy-to-understand definition of dark patterns: for example, by defining fair patterns as features of interface design crafted to allow users to do things they actually want to do. Think of dialogs that do not favour a particular user response and clearly explain the available choices. Truth is, it turned out not to be that easy. During the workshop we tried to establish certain aspects that constitute fairness, for example autonomy, lack of bias, transparency, and accountability. But there was considerable disagreement among participants about which aspects exactly contribute to fairness.
As a result, defining what a fair design pattern is turned out to be even harder. I believe that part of the problem is that the objective characteristics of an interaction design do not necessarily say anything about whether it is fair. What matters is the context, and the intention with which it is applied. In other words, whether the outcome or effect of the interaction is fair. This, by the way, is also an issue when defining dark patterns. Highlighting certain default choices might actually yield a fair outcome, for example if the highlighted default button of a cookie banner would be to reject cookies. (Although others might object that such a paternalistic approach conflicts with user autonomy.)
In one of the presentations during the workshop, discussing the business perspective of applying fair design patterns, it was shown that applying fair patterns in the end leads to better conversion than dark patterns (which in the short term lead to higher conversion, but also show significant levels of churn later on). So the user is given autonomous choice, but in the end the business wins because of improved conversion. Is this fair? Depends on the extent to which you support the free market principle, I guess ;-)
For me it was instructive to learn about the ontology of dark patterns by Gray, Bielova, Santos and Mildner, presented during the workshop. The high-level patterns it discerns (like ‘Obstruction’, ‘Sneaking’ or ‘Forced Action’) are more easily matched with objective characteristics of interaction design (although the pattern names are perhaps sometimes chosen to frame them negatively), and are less dependent on the outcome or effect. Perhaps this ontology could inspire more neutral names for high-level patterns that could contribute to fair design.
Another problem we face when talking about fair design patterns is that the discussion ignores the system dimension: we can’t see the forest for the trees. When interaction design is the focus of our study, we tend to view fairness through the lens of user choice, and hence focus on user autonomy. This is particularly the case when interaction design is interpreted to mean interface design, ignoring the full customer journey that users pass through when interacting with a service.
The overall design of such a ‘funnel’ (that word does not resemble tunnel for nothing) has a significant impact on the outcome of the autonomous choices a customer makes. For example, when shopping for a rental car, it makes a difference whether you are presented with the bare-bones price (with all kinds of more or less necessary extras only offered to you, for a significant price, in later steps) or with the all-inclusive price (with the option to decline certain extras to arrive at a lower price in the end). Fairly designed steps throughout the customer journey may still result in the user getting the worst possible deal, even though they are making autonomous choices.
More importantly, the overall service architecture, the way it works, the functionality it offers (or does not offer), drives a certain intended use towards a particular expected outcome. It always favours certain uses, certain outcomes, certain stakes over others. Again we see that code (like law, like society, like markets) is a regulating, normative force. Whether a system is fair depends on these uses and outcomes. Focusing only on the narrow aspect of its interaction design is risky. We need to consider the system (and the context in which it is applied) in its entirety to judge its fairness.
Proper interaction design is a necessary but not sufficient condition for fairness. And I would hesitate to call such design patterns fair, for the reasons I explained here. Perhaps we should develop a separate ontology (‘non-coercive’, ‘transparent’, ‘…’) to better understand fairness in interaction design.