Review of 'Stand Out of Our Light' by James Williams

March 26, 2019

In 2016, the Nine Dots Prize was awarded for the first time, to James Williams. His prize? The opportunity to develop his 3,000-word idea into a full-length book: 'Stand Out of Our Light: Freedom and Resistance in the Attention Economy' (available from Cambridge University Press as open access). This is a review of that book.

Summary

Increasingly our experiences and interactions with others are mediated by information and communication technology, through our smartphones, the apps we install on them, and the websites and platforms we connect with. We trust this new, wondrous technology. We trust it to be on our side, to serve our interests, to act on our behalf.

But is that technology really on our side? The book strongly argues this is not the case.

A lot has been said about the problem of our increasing reliance on information technology from the privacy perspective, i.e. the risks associated with the collection of huge amounts of personal data without our consent or without any meaningful sense of control. We are not the users; we are the product being sold.

This book, however, approaches the problem from the autonomy perspective. It discusses the risks of the so-called 'digital attention economy', and the threat it poses to human freedom as 'systems of intelligent persuasion […] increasingly direct our thoughts and action'. Williams delivers his message in three stages.

  1. These systems are designed to distract us, to capture our attention in order to keep us engaged as long or as often as possible.
  2. This erosion of our attention erodes our free will; it changes what we aim to achieve. This has profound societal consequences, especially for politics and our democracy.
  3. To counter this, we should tell the designers of these systems (like Diogenes told Alexander the Great) to "stand out of our light".

1: Distraction by design

First of all, Williams observes, information used to be scarce. Now, with the advent of digital technology, it has become abundant. Once information becomes abundant, according to the economist Herbert Simon, attention becomes the scarce resource. The goal of these technologies is to maximise 'the amount of time you spend with their product, keeping you clicking or tapping or scrolling as much as possible, or showing you as many pages or ads as they can.' Instead of supporting our intentions, the technology is designed to capture our attention. The collection of large swaths of personal data is only possible if we are tricked, seduced, into using these technologies more and more. And the personal profiles derived from that personal data can only be monetised if we engage with these technologies often enough, so our attention can be sold to companies targeting their ads at us based on these personal profiles. The technology has created 'empires of the mind' that persuade and influence us in a way that (if anything) can best be compared to things like religion, myths, or totalitarianism.

2: Clicks against humanity

Second, as Williams argues, the effect of this attention grabbing transcends the mere domain of behavioural advertising and affects society at large. It changes our values; it influences our 'free will'; it undermines the very construct of citizenship and thus changes our capability to function in a democratic society. In other words, it is not just the user that is the product; the citizen is the product too.

The system is feeding on our attention; we pay with our attention, and as a result our ability to pay attention, to remain focused, deteriorates. The technology starts to get in our way, to 'stand in our light'. In fact, Williams distinguishes the following three "lights" affected by this attack on our attention (I am quoting their definitions from the book).

  • The spotlight (doing): This corresponds to our immediate capacities for navigating awareness and action toward tasks. It enables us to do what we want to do. The spotlight is the target of functional distraction, like notifications. Our spotlight is also affected by the specifically engineered addictiveness of the apps and services we use, as explained in part one of the book.
  • The starlight (being): This corresponds to our broader capacities for navigating life "by the stars" of our higher goals and values. It enables us to be who we want to be. Our starlight is affected by what Williams calls pettiness, the tendency to spend an inordinate amount of time on low-level goals like giving likes or making connections. This in turn means getting attention becomes important too. In fact, fame has become a heuristic for determining what and who matters. At a meta-level, social networks are now the main tool to establish and express our identity, and clearly shape how we can and cannot do this (thus affecting our starlight). Moreover, personalised web services (like social networks and search, but also content providers like Netflix and YouTube) create filter bubbles or echo chambers that are hard to escape, and feel like straitjackets for our identity.
  • The daylight (knowing): This corresponds to our fundamental capacities – such as reflection, metacognition, reason, and intelligence – that enable us to define our goals and values to begin with. It enables us to "want what we want to want." This is the target of epistemic distractions: distractions that make it harder to see the forest for the trees, to see long-term developments, to detect general principles. These distractions undermine our capability to see what is true. As a result it becomes harder to reflect on ourselves and the world around us. Williams discerns two such epistemic distractions. The first is the lack of leisure, of boredom, caused by our constant engagement with the technology around us. But by far the strongest epistemic distraction is the moral outrage that is fed by the constant cycle of bad news these technologies spread: 'mob rule is hard-coded into the design of the attention economy'.

3: Freedom of attention

Thirdly, and finally, Williams argues we need to act, and we can and should act now! He does not really make clear how, though. He observes that the core problem is that the goals of the technology are not our (societal) goals. According to him, a language for talking about the full depth of this problem is missing. I am not sure I agree here… Isn't this what political science is all about? In fact, immediately after this observation, Williams argues that tackling these problems is a political undertaking, a political struggle even!

Blaming users (by framing the problem as an addiction issue), or putting the onus on the user (by asking him or her to unplug or detox), will not help. Nor should we rely on self-regulation. And, according to Williams, we should not blame the designers of these technologies either, as it is not each of them personally but rather the emergent behaviour of the system as a whole that causes the adverse consequences.

Instead, we should 'treat[] the design of digital technologies as the ground of first struggle for our freedom and self-determination'. Williams is clear and upfront about his lack of sufficiently thought-through approaches to achieving this. He does offer a few suggestions, though:

  • rethink the nature and purpose of advertising,
  • deepen the language of 'attention' and 'distraction' to cover human will, and talk about 'users' as humans (instead of 'eyeballs'),
  • 'chang[e] the upstream determinants of design', i.e. broaden the incentives for companies beyond mere financial goals towards other social good goals, for example by introducing strong legislation and by developing strong standards, and
  • advance mechanisms for accountability, transparency, and measurement. For example, Williams suggests a 'Designers Oath' as a possible way to let designers commit to a more responsible way of designing systems.

It doesn't get much more concrete than that, unfortunately, which for me was a bit of a disillusionment, to be honest, for a book that started off quite promisingly.

Discussion

The first part of the book is the strongest and most compelling. The way current systems are designed to draw our attention and keep us engaged is explained very clearly. I fully subscribe to the observation that current digital technology is not on our side and does not act in our best interest. This is nothing new, however: many people have essentially argued the same for years.

The book makes very clear that framing the problem of digital technologies in informational terms makes us worry only about the management of information, and makes us discuss only privacy, security, and surveillance. This blinds us to other important consequences: how it changes our attention, how it creates opportunities for persuasion, and how, in the end, it erodes our autonomy. (And perhaps even to the bigger, capitalistic, picture; see also Morozov's epic critique of Shoshana Zuboff's book on "Surveillance Capitalism", which can be summarised as: never mind surveillance if you are not willing to address capitalism itself.) This is important because, quoting Oxford philosopher Neil Levy,

Autonomy is developmentally dependent upon the environment: we become autonomous individuals, able to control our behavior in the light of our values, only if the environment in which we grow up is suitably structured to reward self-control.

Persuasion (as opposed to coercion) plays an important role throughout the book. The difference between the two is not always very clear, however. I would argue that the placement of escalators in shopping malls or the layout of cities forces people to move in certain ways, and hence is coercive in nature. Similarly, the choice architectures embedded in digital systems are coercive in nature. Now, one could argue that the difference is purely semantic, but I believe it is significant with respect to the possible or necessary responses to this coercion: coercion requires a stronger, top-down response than persuasion. Williams concurs, observing that Aldous Huxley's persuasive "Brave New World" (Sweet Sister as a metaphor for the dangers of being pampered and the corresponding lack of autonomy) has been less of a worry to us than George Orwell's coercive "Nineteen Eighty-Four" (Big Brother as a metaphor for the dangers of surveillance and the corresponding lack of privacy).

(Side note: it is interesting that we tend to think persuasion is better than coercion for effecting change in people. We prefer nudging, offering rewards, and creating incentives over forcing change down people's throats.)

Now the real question is whether, with the advent of new technology, our predicament has fundamentally changed. As Williams concedes:

"Yet all design is “persuasive” in a broad sense; it all directs our thoughts or actions in one way or another" (p27)

But he continues to argue that in the twentieth century persuasion became industrialised. In particular, advertising became more personalised, more scalable, and more profitable, and thus became the primary business model for the internet. The resulting digital attention economy is compromising the human will.

But I fear that this focus on persuasion through advertising, this idea that the digital attention economy is the sole source of the current state of affairs, is too narrow, too limited. It is one thing to argue that this technology has a direct influence on what we do (our 'spotlight'); it is quite another to argue that it influences what we want to do (our 'starlight') and even more to argue that it influences our free will (our 'daylight'). The second part of the book does not make a compelling argument that this is indeed the case. If anything, it shows that the situation is much more nuanced, more complex than that.

There are so many more factors playing a role here. Focusing on the attention economy, on persuasion, totally ignores the important role played by mere coercion. In fact, I think a much more decisive factor is that digital technology is all around us, mediating all our interactions with our friends, our colleagues, our representatives, our government. It influences how we can be lovers, be friends, work together, and be citizens, simply through the choices embedded in its design. This is independent of the attention we do or do not give to those technologies. The simple fact that they are literally 'in the way' gives them tremendous power and influence.

Moreover, many technological developments, like the rise of platforms such as Uber, Deliveroo, or Airbnb, have a direct influence on our society and our politics, as they fundamentally change the capabilities of governments to regulate and control public transport, labour, or housing. This influence has nothing to do with the attention economy at all.

Nor are the problems we are facing just the result of such technological developments. The threats to our liberal democracies (alluded to on page 61 of the book), like the decreasing number of people who believe 'democracy' is essential or the increasing number of people who approve of 'military rule', cannot simply be explained by referring to 'fake news' or secret mood-manipulation experiments. In fact, such reasoning is a very dangerous fallacy, ignoring very real societal and economic developments that deserve far more attention than current politicians are willing to devote.

Now, very early in the book, Williams states:

We trust these technologies to be companion systems for our lives: we trust them to help us do the things we want to do, to become the people we want to be. (page 9)

But I beg to differ. I don't think anybody is so naive as to really trust technology to help us become the people we want to be. I trust my car to drive me where I want to go, but I do not trust the car to help me become the person I want to be. (Perhaps for some people the car passively helps them create a certain image; but do we really expect the car to actively support us in becoming something or someone?) Similarly, I trust my browser to browse the web, and my social network to connect me with my friends and relatives. But I do not expect those to help me become a better person. I would agree, though, that we simply do not expect technology to stand in the way of becoming the person we want to be. That is to say: we do not expect it to help us with that, but we also do not expect it to actively prevent us from achieving our goals.

And it is this latter aspect, the fact that these technologies sometimes actively try to prevent us from achieving what we want to achieve, that is the real problem we are facing today. Indeed, as Mitch Kapor already said in 1991:

Architecture is politics.

So it is high time we, citizens, get a seat at the design table.
