When and why do we cooperate?
Now that many of our systems and designs involve a social
dimension, it is becoming important to understand the hows and whys of
user-user interaction. I can think of
dozens of use cases for this. Think
about the social aspects of a product like FitBit. If users are going to
compete or compare their times against complete strangers, some level of
cooperation is needed. If we are
designing multiplayer online games, players often team up to complete quests or
other in-game goals (e.g., in World of Warcraft). Or they might trade virtual goods in some
form of online virtual economy (e.g., on Farmville).
We also see user-user cooperation emerge in more basic
services, like the commenting sections on a company's content (e.g., a web
site's customer support FAQs) or a news site's articles (perhaps CNN). Or on a company's social media page (how
about this blog?).
Other use cases might include full models of crowdsourcing
user-generated content (e.g., Wikipedia entries) or crowd-voting on company- or
user-submitted materials (e.g., Threadless, Kickstarter).
Clearly, the more we know about user-user cooperation, the
better. So of course I was grateful to
read this article on Brain Blogger that summarizes some recent research on
cooperation. In a Prisoner’s Dilemma
environment, it doesn’t make much sense to cooperate unless you think that you
might develop a long-term relationship with the other player and learn to
mutually cooperate. This is called
direct reciprocity. But in cases where
you are only interacting with each person once, the best choice is to defect.
And yet many people don’t.
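To see why defection is the "rational" one-shot move, here is a minimal sketch in Python using the standard textbook payoff numbers (my illustration, not figures from the article):

```python
# One-shot Prisoner's Dilemma with the standard textbook payoffs
# (illustrative numbers, not from the cited research).
# PAYOFFS[(my_move, their_move)] is the payoff to me.
PAYOFFS = {
    ("cooperate", "cooperate"): 3,  # mutual cooperation
    ("cooperate", "defect"):    0,  # I get suckered
    ("defect",    "cooperate"): 5,  # I exploit them
    ("defect",    "defect"):    1,  # mutual defection
}

for their_move in ("cooperate", "defect"):
    best = max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)])
    print(f"If they {their_move}, my best reply is to {best}")
# Defecting pays more no matter what they do, so in a one-shot
# game defection dominates.
```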
We are not utility optimizers.
Sometimes we act nice. We cooperate with a stranger, even though they
have every incentive to defect. But
why? That is what the research reviewed
on Brain Blogger was investigating. The behavior is
called indirect reciprocity, defined loosely as the notion that what goes around comes
around. They identify two kinds:
Reputation-based indirect reciprocity makes immediate
sense. If the other person has a
reputation for cooperating, we can assume that they will cooperate with us,
even on this first and only transaction.
So we cooperate too. Their
previous cooperation now comes around to benefit them, and our current
cooperation might build us a good reputation that will come around to benefit
us later. But this only works when there
are reputation signals built into the system.
This includes the review and recommendation systems built into
social media, where you can report whether your Uber driver was nice or whether
your eBay seller shipped the promised product on time.
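In design terms, reputation-based reciprocity boils down to a decision rule keyed off a reputation signal. A minimal sketch, where the 0-5 score scale and the 4.0 threshold are my assumptions for illustration:

```python
# Hypothetical reputation-based cooperation rule. The 0-5 score
# scale (like a star rating) and the 4.0 threshold are assumptions
# for illustration only.
def decide(reputation_score, threshold=4.0):
    """Cooperate with partners whose track record is good enough."""
    return "cooperate" if reputation_score >= threshold else "defect"

print(decide(4.8))  # a well-reviewed driver -> cooperate
print(decide(2.1))  # a poorly reviewed one  -> defect
```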
But they also found a “pay-it-forward” reciprocity. Even if there are no reputation
signals built into the system, I might choose to take a shot and
cooperate. But why? There is no possible benefit. The other person might defect and screw me
over. And I don’t get any reputation boost
from cooperating. Why would I take this
chance? Probabilistically, this move
might be good for society (if cooperation gets built into our DNA), but for me as an
individual it is more likely to be negative.
So my economic utility maximizer (my selfish gene) should be telling me
to defect. I will win this round, and no
one will ever know.
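You can put rough numbers on that intuition. Using the same illustrative payoffs as the sketch above, and assuming a stranger cooperates with some probability p:

```python
# Expected payoff of each move against a stranger who cooperates
# with probability p (same illustrative payoffs as above).
def expected_payoff(my_move, p):
    payoff = {"cooperate": (3, 0), "defect": (5, 1)}  # (they cooperate, they defect)
    if_coop, if_defect = payoff[my_move]
    return p * if_coop + (1 - p) * if_defect

for p in (0.2, 0.5, 0.9):
    print(f"p={p}: cooperate={expected_payoff('cooperate', p):.1f}, "
          f"defect={expected_payoff('defect', p):.1f}")
# Defecting wins at every p, since 5p + (1 - p) > 3p for all p in [0, 1].
# That is the sense in which cooperating is more likely to be negative
# for the individual.
```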
The cited research was a neuroscience study. They used brain scans to try to figure out
what parts of the brain led to this altruistic cooperative behavior. And it turned out to emanate from the
striatum. The striatum in general is the
part of the brain that processes value and executes voluntary behavior. So reputation-based reciprocity involves the
striatum asking the cognitive areas of the brain if we are maximizing economic
utility by cooperating with a reputable person.
And cognitively the answer is yes, so the striatum decides to cooperate.
But when no reputation information is available, the
striatum can ask the cognitive areas what to do, but gets no answer. So instead, it asks the emotional and
empathic areas. Since we can’t go by
economic utility, we have to go with what will make us feel good. And luckily, lots of us feel good when we
take a chance on somebody. We have faith
in their good behavior. We cooperate in
the hope that they will too.
The researchers don’t attribute it to feeling good and
having faith in our fellow human. They
say it is simply a shortcut the brain is taking. If cooperating works probabilistically
(because we usually have reputational information), then we can assume that the
other person will cooperate unless we have evidence otherwise. So it isn't faith; it is a simple
overgeneralization of a rational, utility-maximizing rule.
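Stated as a rule, the researchers' shortcut is just the reputation rule from earlier with a cooperative default when the signal is missing. Again, this framing is mine, not theirs:

```python
# The "overgeneralization" shortcut: the same reputation rule as
# before, but defaulting to cooperation when no signal exists.
def decide(reputation_score=None, threshold=4.0):
    if reputation_score is None:
        return "cooperate"  # no evidence -> assume what usually works
    return "cooperate" if reputation_score >= threshold else "defect"

print(decide())     # a stranger with no signal -> cooperate
print(decide(1.5))  # a known bad actor         -> defect
```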
But I am going to attribute it to faith, hope, and
charitable instincts. I will believe in
you, unless and until you give me a reason not to.