Everyone's familiar with the Prisoner's Dilemma. You and a friend are arrested. If both stay silent, you'll get short sentences. If one of you confesses, that person can reduce their sentence even further at their friend's expense. If both confess, you both get long sentences.
https://twitter.com/BulbousOtter/status/1275374390432030720
This paints a rather grim picture of human interaction. Whatever your partner's behavior, you will do better by betraying them.

How, then, do we explain our world in which so much is accomplished by good-faith human cooperation? Surely the Prisoner's Dilemma implies otherwise.
Well, most human interactions are not one time affairs like the Prisoner's Dilemma problem describes. As a wise man once said, "We live in a society."

What if we played the Prisoner's Dilemma repeatedly, adjusting our behavior to match information gleaned from experience?
In an Iterated Prisoner's Dilemma, you might get some points for defecting on me once, but you risk losing my cooperation in subsequent rounds.

In 1980, the political scientist Robert Axelrod held an Iterated Prisoner's Dilemma tournament hoping to find the optimal strategy.
Contestants submitted many different programs. Some strategies were incredibly complex, incorporating a high degree of randomness. Others simply defected every round, or always cooperated.

The winning program was very simple. It was called Tit for Tat.
Tit for Tat would always cooperate with its partner on the first round. After that, it would simply mimic whatever its partner had done the previous round.

If the partner program cooperated on the first round, Tit for Tat would cooperate on the second round. If not, it defected.
Though Tit for Tat would never outperform any individual partner program (it could do equally well at best), it was the most successful program *overall*.

After every program had played every other, Tit for Tat came out on top. Its strategy was the most versatile.
Axelrod published these results and held a second tournament. Contestants scrambled to incorporate these results into their strategies, to improve on Tit for Tat, or learn to exploit it.

Anatol Rapoport, Tit for Tat's creator, resubmitted Tit for Tat unaltered - and won again.
Rapoport described Tit for Tat's success with four maxims.

1. Be nice - try to cooperate at the outset
2. Be retaliatory - strike back when struck
3. Be forgiving - no grudges, be open to cooperation
4. Be predictable - simplicity pays off
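The maxims above can be sketched as a short Iterated Prisoner's Dilemma simulation. This is an illustration, not Rapoport's actual tournament code: the function names are my own, and the payoff values (5, 3, 1, 0) are the standard ones used in Axelrod's tournaments.

```python
# Minimal Iterated Prisoner's Dilemma sketch.
# Payoff matrix uses the standard tournament values:
# mutual cooperation = 3 each, mutual defection = 1 each,
# lone defector = 5, lone cooperator = 0.
C, D = "C", "D"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Be nice: cooperate on the first round.
    # Be retaliatory and forgiving: mirror the partner's last move.
    return C if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return D

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Running `play(tit_for_tat, always_defect)` shows the point made earlier: Tit for Tat loses head-to-head against a pure defector (9 points to 14 over ten rounds), but two Tit for Tat players earn 30 each, so across a whole population of partners the cooperative strategy accumulates more overall.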
Perhaps Tit for Tat's success tells us something about human interaction. Perhaps it just tells us what we already knew.

Cooperation is a virtue, but it is a virtue secured by the credible threat of retaliation against bad actors - and by openness to potential allies.
So the Rationalists have been discussing game theory in the wake of Scott Alexander's doxxing, have they?

Let them start with the lesson of Tit for Tat.

Be willing to punish an enemy if he strikes you, and be willing to find friends in unlikely places...
You can follow @kwamurai.