Why Peerby Works - and what we can learn from it to develop successful privacy-friendly apps and services.

January 1, 2015

Today I read this (several months old) blog post, which explains that many peer-to-peer marketplaces fail because they do not solve a real problem. But I think the issue is more complex than presented there, and I think it is also relevant to the question of how to get more people to use privacy-friendly apps and services.

The original argument in the blog is that even though many people may like and embrace the concept of some kind of marketplace (for example because it contributes to a common good, provides a sense of community, is sustainable, etc.), once the marketplace is introduced people only actually use it if it helps them get a lower price, better quality, or more convenience. In other words, a marketplace is only successful in practice if it offers lower friction than the status quo.

Generalising, this means that an app or service is not successful simply because it solves a real problem. It is only successful if it solves that problem in a way that improves on (in terms of friction) the existing solutions or ways of dealing with the situation. In other words, it is not only the problem that matters; the solution itself is also relevant.

The example given in the blog shows this in a quite interesting way. Consider the 'problem' of many people owning lots of tools (power drills, screwdrivers, lawn mowers, ...) which they only occasionally use. What if we could set up a marketplace that allows people to share these tools with their neighbours, thus reducing the total number of tools in circulation? This would decrease costs, reduce waste, and create a neighbourhood community in the process. It would solve a real, although admittedly small, problem.

Several startups have tried this, but failed, because they used an inventory-based model. In this model, people list the tools they are willing to lend to others. Peerby has reversed this model, and appears to be much more successful. Peerby uses borrowing requests as its underlying model: users register with Peerby, which allows them to broadcast requests for items they would like to borrow from fellow Peerby users in the neighbourhood.
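
To make the difference between the two models concrete, here is a minimal, purely hypothetical sketch in Python; none of the names or data structures below come from Peerby's actual system. In the inventory-based model a borrower can only approach the neighbours who listed an item in advance, while in the request-based model a single broadcast reaches every neighbour and each owner decides, per request, whether to respond at all.

    # Hypothetical sketch only -- not Peerby's actual code or API.
    from dataclasses import dataclass, field

    @dataclass
    class Neighbour:
        name: str
        listed: list[str] = field(default_factory=list)  # inventory model: items listed up front
        owns: set[str] = field(default_factory=set)       # request model: items never published anywhere

        def responds_to(self, item: str) -> bool:
            # In the request model the owner opts in per request;
            # staying silent is always an option.
            return item in self.owns

    def inventory_model(item: str, neighbours: list[Neighbour]) -> list[str]:
        """Borrower must find and ask the specific people who listed the item."""
        return [n.name for n in neighbours if item in n.listed]

    def request_model(item: str, neighbours: list[Neighbour]) -> list[str]:
        """Borrower broadcasts one request; owners choose whether to reply."""
        return [n.name for n in neighbours if n.responds_to(item)]

    neighbours = [
        Neighbour("alice", listed=["ladder"], owns={"ladder", "power drill"}),
        Neighbour("bob", owns={"power drill"}),
    ]

    print(inventory_model("power drill", neighbours))  # [] -- nobody listed a drill, so the request dead-ends
    print(request_model("power drill", neighbours))    # ['alice', 'bob'] -- both may still choose to say yes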

This significantly reduces the emotional friction in two important ways. First, people are much more reluctant to ask a particular person for help (as required in the inventory-based model) than to ask for help in general (broadcasting the request via Peerby). Second, the inventory model requires users to reveal all items they are willing to lend to all potential borrowers. This may lead to awkward situations where the lender would rather deny a request for a certain item from a particular borrower. People hate to say no, and therefore they may not put more sensitive items on display in their inventory. In Peerby, on the other hand, you decide whether to lend something in response to a particular request. You never have to say no; you simply choose, each and every time, whether to say yes (or to say nothing).

This latter aspect also very cleverly aligns direct personal incentives with more abstract personal beliefs (wanting a better, more sustainable world). People like offering help out of the goodness of their own heart (not because they are forced to). Peerby offers the opportunity to do so: it helps people feel good about themselves, to feel happy that they helped someone. And this direct personal incentive contributes to those more abstract personal beliefs (creating a community and reducing waste).

This example shows that the question is not whether an app or service solves a real (societal) problem. The question is whether the solution creates a personally perceived advantage. Moreover, one should not underestimate the emotional components of friction when estimating this advantage.

What can we learn from this observation for the development of successful privacy-friendly apps and services?

Many such apps and services exist, yet they are hardly used. There are several obvious reasons for this: they are often hard to use, have limited functionality, and so on. Rationally, one might be convinced that it would be better to use these privacy-friendly apps. But the personally perceived advantage is simply too small (and often even negative).

Of course improving usability and functionality would help, but even then uptake of such apps is slow. A privacy-friendly and secure messaging app like TextSecure has a limited user base. But when WhatsApp announced it was going to use the underlying messaging protocol from TextSecure, suddenly 300 million people were going to be using a privacy-friendly messaging app (except for the contact list: that one still goes to Facebook... ;-).

So what else can we do to increase the perceived advantage of a privacy-friendly app or service? I think reframing the solution can help. Here are some examples.

  • Frame privacy as a form of personal security that protects against, for example, identity fraud. This shifts the focus to the potential but concrete personal damage that can result from a privacy infringement, and so increases the perceived advantage for potential users of adopting the solution.
  • Frame privacy as a way to stop price discrimination and to get a better deal than you would ordinarily get when shopping online. This could apply to solutions that try to stop profiling, for example. (This frame also counters the common misconception that current profiling practices give people better deals: in fact, these mechanisms try to determine the highest price you are willing to pay for a product.)
  • Frame privacy as convenience: a solution that simply works and allows you to securely connect and share with your friends without the need to read a complex privacy policy or adjust your privacy settings, for example. Both social networks and cloud services could be framed like this. Of course this is not an easy battle, because many convenient social networks and cloud services already exist; still, these are either insecure or privacy invasive. Other examples are 'butler'-like apps that manage your privacy online for you.
  • Frame privacy as breaking walled gardens: one new way to increase convenience over existing social networks and cloud services would be a system that allows you to connect to people in different walled gardens (e.g. Facebook vs. Google+, WhatsApp vs. iMessage, or iCloud vs. OneDrive). XMPP is a good example of such an effort.
  • Frame privacy as an enabler that makes other things possible. This is similar to security, which is often not sold or even visible from the outside, but without which services like e-banking would be impossible.
  • Frame privacy as a way to reduce cost: you do not need to protect data that you do not collect. Moreover, data you do not collect cannot get lost or stolen, and does not have to be retained for law enforcement purposes (at significant additional cost).
  • Frame privacy as a way to help others. This is for example the way the Tor project tries to convince people to run a Tor relay.
  • Frame privacy as becoming part of a community with benefits. Any benefit will do, really. Maybe incentive schemes like those found in Bitcoin could be applied to privacy-friendly apps as well.

I'd love to hear about other ways to frame privacy. Please submit your ideas in the comments below.

Patrick, 2015-01-06 21:28:20

"Frame privacy as a way to stop price discrimination": price discrimination is a specific form of undesirable personalisation. Another specific form is the information bubble: the way Google personalises your search queries, which could also allow Google to shape your opinion.

"Frame privacy as an enabler" is too generic without an example, imho. A proposed example: single sign-on with protected data attributes; to a certain extent you can already see this working with a mobile app store.

Although I really like this approach, there is something else which needs to be tackled. Your examples will be too much information for everybody and would be absorbed better if distilled down to a few basic rules of thumb.

When in the past people said “don’t post everything you think or do online”, some didn’t care, while others asked questions. The questions allowed them to understand the underlying problem and thereby the related delayed advantages. Nowadays most people simply start off by saying “Everything you post on the web stays there”, instead of “Your future employer, girlfriend, or boyfriend will see it too”. But I think that awareness came about because the people who didn’t care did care about what trusted third parties say. Some of those trusted third parties do ask the questions, and I think they are the target audience.

Still, distilled information sticks better. I am suddenly realising you actually did that with the story about creating awareness by using existing goals. It might be just me, but maybe finishing with a summary of that, instead of with the examples, would help the article.