For Living in Communities, Computers Teach Us to Play Nice

[Image: Hawk-Dove payoff matrix]

In a previous post, I proposed a game called Commons Poker, where all players would win big if all cooperated, but cooperation was not guaranteed and selfish behavior would be rewarded. It was a useful little thought experiment, but would it stand up to rigorous computer simulation?

Luckily, the world doesn’t have to rely on my programming skills to learn the answer to that question.

My game is a more communal version of something called The Prisoner’s Dilemma. The dilemma goes like this: you and your accomplice have been caught in a crime. The evidence against you isn’t strong, but the prosecutor wants a conviction. Here are your choices:

  • If you agree to testify against your accomplice and they stay silent, you’ll get a six-month sentence and your accomplice five years.
  • The prosecutor has offered your accomplice the same deal, so if they testify and you stay silent, you’re the one who gets five years.
  • If you both testify against each other, you’ll both serve substantial sentences.
  • If neither of you testifies against the other, you’ll each likely get a one-year sentence.

What’s your move? Clearly, the best collective outcome is for neither of you to testify against the other. But you have no guarantee that your partner will keep quiet.
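The dilemma’s logic is easy to check with a few lines of code. Here’s a minimal Python sketch of the payoffs above, measured in years in jail. The post doesn’t state the sentence when both players testify, so the three-year figure below is my assumption for illustration:

```python
# Sentence (years in jail) for each (my_move, partner_move) pair.
# "testify" = defect, "silent" = cooperate.
SENTENCE = {
    ("testify", "silent"): 0.5,   # I talk, partner doesn't: six months for me
    ("silent", "testify"): 5.0,   # partner talks, I don't: five years for me
    ("silent", "silent"): 1.0,    # neither talks: one year each
    ("testify", "testify"): 3.0,  # both talk (assumed value; not given in the post)
}

# Whatever the partner does, check which of my moves minimizes my sentence.
for partner in ("silent", "testify"):
    best = min(("testify", "silent"), key=lambda me: SENTENCE[(me, partner)])
    print(f"If my partner stays {partner}, my best move is to {best}")
```

Run it and both lines come out “testify”: no matter what your partner does, testifying shortens your own sentence. That’s exactly what makes it a dilemma, since two players each following that logic end up worse off than if both had stayed silent.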

The branch of mathematics called Game Theory, used heavily in economics, gives us tools to work out the probabilities and outcomes in situations like this. And because it’s math and probabilities, Game Theory can be simulated with computers.

For his book The Evolution of Cooperation, Robert Axelrod hosted tournaments in which computer programs played the Prisoner’s Dilemma against each other using a variety of strategies. In repeated competitions, one strategy emerged as the consistent winner: Tit for Tat.

Tit for Tat is a simple strategy:

  • Start by cooperating, that is, making the move that is best for the group.
  • On subsequent moves, copy the last move of your opponent.

If we all played this strategy in life, we would all cooperate. That might be a good thing. The moment one person stopped cooperating, though, we would all revert to more selfish behavior, at least until someone cooperated again.
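Those two rules take only a few lines to express. Here’s a minimal Python sketch (the function names and the “C”/“D” move encoding are my own, not taken from Axelrod’s tournament code) that pits Tit for Tat against a player who always defects:

```python
def tit_for_tat(opponent_history):
    """Cooperate first; afterwards, copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A purely selfish player: defect every round."""
    return "D"

def play(strategy_a, strategy_b, rounds=6):
    """Play two strategies head to head; each sees the other's past moves."""
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)  # A reacts to B's history
        b = strategy_b(moves_a)  # B reacts to A's history
        moves_a.append(a)
        moves_b.append(b)
    return moves_a, moves_b

a, b = play(tit_for_tat, always_defect)
print("".join(a))  # CDDDDD
print("".join(b))  # DDDDDD
```

Tit for Tat cooperates exactly once, gets burned, and retaliates for the rest of the match, and if the defector ever switched back to cooperating, Tit for Tat would immediately forgive and follow suit.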

Axelrod found that winning strategies had four qualities:

  • Nice: they were never the first to not cooperate
  • Retaliatory: they met noncooperation with noncooperation
  • Forgiving: they rewarded a return to cooperative behavior
  • Not Greedy: they didn’t try to score more than their opponent

To me, this is strong reinforcement of our social nature: mathematical evidence that we all win when we all cooperate with one another.

As you play through your day, what strategy are you using?

(The Evolution of Cooperation and other books mentioned in this blog are available in the bookstore.)
