This tutorial shows how cooperation might evolve. It assumes that you have already completed the Conflict I and Stable Strategies tutorials and have a basic understanding of payoffs and symmetric games. If you have not yet done those tutorials, you should go through them before continuing here. If you work through all examples in detail, this tutorial should take about 20 minutes. This tutorial also refers to the Relatedness tutorial (~10 minutes).
There are many examples of cooperation between individuals of the same species and even occasionally between individuals of different species. Yet, explaining the evolution of cooperation has always been a challenge. Let’s consider an abstract case to see why this is so. Suppose that two individuals can cooperate. One who receives cooperation gets a benefit, b, while one who cooperates bears a cost, c. There are two strategies, to cooperate or not to cooperate (called defect). This is the payoff matrix (payoffs to the row player):

|           | Cooperate | Defect |
|-----------|-----------|--------|
| Cooperate | b−c       | −c     |
| Defect    | b         | 0      |
If both cooperate, then both get b−c, while if neither one cooperates, there is no cost or benefit. If one cooperates and the other defects, then the cooperator gets −c and the defector gets b.
Defect is the only ESS when c > 0. That is why this situation poses such a problem for the evolution of cooperation. The payoff for defection is always greater than the payoff for cooperation, regardless of what one’s partner does, because b > b−c and 0 > −c. (See the Stable Strategies tutorial if it isn’t clear why this makes defection the ESS.) You can verify this in the simulator for any values of b and c greater than zero. No matter how small the cost, one can always do better by cheating against a cooperative partner.
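The dominance argument is easy to check outside the simulator as well. Here is a minimal sketch in Python; the payoff function follows the matrix above, and the specific b and c values are illustrative assumptions:

```python
# Payoffs in the one-shot Cooperation game. The first strategy is the focal
# player's, the second is the partner's: "C" = cooperate, "D" = defect.
def payoff(me, partner, b, c):
    benefit = b if partner == "C" else 0   # receive b if the partner cooperates
    cost = c if me == "C" else 0           # pay c if I cooperate
    return benefit - cost

# Defection beats cooperation against either partner whenever c > 0.
b, c = 3.0, 1.0  # illustrative values; any b, c > 0 give the same result
for partner in ("C", "D"):
    assert payoff("D", partner, b, c) > payoff("C", partner, b, c)
```

The loop confirms that Defect strictly dominates: against a cooperator, b > b−c, and against a defector, 0 > −c.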
So how could cooperation evolve? First of all, maybe the Cooperation game as given above isn’t a very good model for cooperation. Maybe the cost of cooperation is shared if both players cooperate (so the Cooperate vs. Cooperate payoff might be b−c/2 instead of b−c). Or maybe the full benefits of cooperation accrue only if both players cooperate (so the Defect vs. Cooperate payoff might be b/2 instead of b). Try these variations in the above simulation and see what effects they have. (You may need to try several combinations of b and c to determine whether Defect remains a pure ESS under these conditions.)
Reducing the cost when there’s mutual cooperation changes nothing at all, because the payoff for Defect vs. Cooperate is still better than that for Cooperate vs. Cooperate as long as c > 0. Reducing the benefit to Defect vs. Cooperate has a more interesting effect: when c < b/2, both Cooperate and Defect are pure ESSs, so which one dominates depends on the initial proportions; when c > b/2, Defect always dominates.
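You can confirm both claims by applying the pure-ESS test (a strategy’s payoff against itself must beat the other strategy’s payoff against it) to each variant. A sketch, with illustrative b and c values:

```python
def pure_esses(E):
    """Return the pure ESSs of a two-strategy game.
    E[(x, y)] is the payoff to strategy x against strategy y."""
    ess = []
    for s, other in (("C", "D"), ("D", "C")):
        if E[(s, s)] > E[(other, s)]:  # strict version of the ESS condition
            ess.append(s)
    return ess

b, c = 4.0, 1.0  # illustrative values with c < b/2

# Variant 1: cost shared under mutual cooperation. Defect still dominates.
shared_cost = {("C","C"): b - c/2, ("C","D"): -c, ("D","C"): b, ("D","D"): 0.0}
print(pure_esses(shared_cost))   # ['D']

# Variant 2: full benefit only under mutual cooperation. Both strategies are
# pure ESSs when c < b/2, so the outcome depends on the starting proportions.
half_benefit = {("C","C"): b - c, ("C","D"): -c, ("D","C"): b/2, ("D","D"): 0.0}
print(pure_esses(half_benefit))  # ['C', 'D']
```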
Neither of these alterations does much to rescue cooperation, and we are left with the mystery of how it could evolve. Evolutionary biologists have come up with two main types of explanation: kin selection and reciprocity.
Going back to the original Cooperation game, what if the two players are genetically related? Let r be their coefficient of relatedness, the probability that they share an allele due to recent common ancestry. (Thus we ignore genes that two individuals share simply by virtue of being the same species or population of a species.) Relatedness can range from 0 (completely unrelated) to 1 (clones). In diploid organisms, parents and offspring are related by 0.5, as are siblings. (See the Relatedness tutorial for help calculating relatedness.)
When is Cooperate a pure ESS if the players are related by the amount r? We can go back to the reasoning behind the definition of a pure ESS — that when the whole population plays that strategy, it cannot be invaded by a rare mutant playing the alternate strategy.
A cooperator in a population of cooperators gets the payoff b−c, simply because it meets only cooperators and no defectors. What would a rare mutant defector get in this population of cooperators? With a probability of r, its partner is another defector (because a related partner shares the defection gene). With a probability of 1−r, its partner is a cooperator. The defector’s average payoff is r × the Defect vs. Defect payoff plus (1−r) × the Defect vs. Cooperate payoff. That is, r(0) + (1−r)(b), or (1−r)b. Cooperate is a pure ESS if the payoff for a cooperator is greater than the payoff for a defector, that is, if b−c > (1−r)b, which can be rearranged to r > c/b. That is, cooperation is favored when the coefficient of relatedness exceeds the cost-to-benefit ratio.
Thus Cooperate is a pure ESS when the genetic relatedness (r) is high enough, the benefit (b) is high enough, or the cost (c) is low enough. The inequality r > c/b is called Hamilton’s Rule after W. D. Hamilton, the British evolutionary biologist who formulated it¹.
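The derivation can also be checked numerically. Below is a minimal sketch; the payoff expressions are exactly the b−c and (1−r)b terms derived above, and the specific b, c, and r values are illustrative:

```python
b, c = 3.0, 1.0  # illustrative benefit and cost, so c/b = 1/3

def cooperator_payoff_among_cooperators(b, c):
    return b - c        # meets only cooperators, so it always gets b - c

def rare_defector_payoff(b, c, r):
    # With probability r the partner shares the defection allele (payoff 0);
    # with probability 1 - r the partner is a cooperator (payoff b).
    return r * 0 + (1 - r) * b

# Cooperate resists invasion exactly when r > c/b (Hamilton's Rule).
for r in (0.0, 0.25, 1/3, 0.5, 1.0):
    stable = cooperator_payoff_among_cooperators(b, c) > rare_defector_payoff(b, c, r)
    assert stable == (r > c / b)
```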
Try adding relatedness to the Cooperation game simulation below. (Check the box labeled related near the lower right of the simulator and set a value for r next to it.) Can you show that Hamilton’s Rule applies?
We already know that Cooperate is an ESS when r > c/b, which implies that Defect is a pure ESS when r < c/b, but let’s derive that, too, as an exercise. In a population of defectors, a defector gets the payoff 0 because it only meets defectors. What is the average payoff to a rare cooperator? With a probability of r, its partner is a cooperator (having also inherited the cooperation allele), giving it the payoff b−c. With a probability of 1−r, its partner is a defector, giving it the payoff −c. Thus its average payoff is r(b−c) + (1−r)(−c), or rb−c. For Defect to be a pure ESS, its average payoff must be greater than the payoff to Cooperate, that is, 0 > rb−c, or c > rb, or r < c/b as we expected.
One last point about Hamilton’s Rule. The players don’t have to know that they are related, nor do they need the ability to recognize kin. They just cooperate or not based on their genes. A cooperator cooperates with anyone and a defector defects with anyone.
Kin selection is a powerful explanation of cooperation, but many examples of cooperation are between unrelated or only distantly related individuals. How can that be explained?
The strategies that we’ve discussed so far, Cooperate and Defect, are simple ones in which the players need neither memory of previous interaction nor the ability to recognize each other as individuals. What if players are unrelated but repeatedly encounter each other? This gives them the opportunity to act based on the results of their last meeting. A strategy called Tit-for-Tat (TFT) takes advantage of this. In this strategy, one always cooperates on the first encounter with a partner. The next time one meets that partner, one does what that partner did the last time. Thus if your partner cooperated last time, you cooperate again this time, and if he defected last time, you defect this time.
How do we work out the cumulative payoffs for TFT and Defect, assuming that they interact with the same partner n times? The payoffs involving TFT are tricky, but give them a try and check your answer after each one. Keep in mind that we want the total payoff over n interactions.
The only tricky thing about these payoffs is taking into account the total payoff over n interactions. We could instead have calculated the average payoff per interaction over n interactions (dividing each of the above payoffs by n). It would make no difference to the analysis.
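One way to check your cumulative payoffs is to simulate the n rounds directly. A minimal sketch (the strategy encoding and the b, c, n values are illustrative):

```python
def play(strat_a, strat_b, b, c, n):
    """Cumulative payoffs when two strategies meet n times.
    A strategy is 'TFT' or 'D' (always defect)."""
    last_a, last_b = "C", "C"   # seeds TFT's opening move: cooperate
    total_a = total_b = 0.0
    for _ in range(n):
        move_a = last_b if strat_a == "TFT" else "D"  # copy partner's last move
        move_b = last_a if strat_b == "TFT" else "D"
        total_a += (b if move_b == "C" else 0) - (c if move_a == "C" else 0)
        total_b += (b if move_a == "C" else 0) - (c if move_b == "C" else 0)
        last_a, last_b = move_a, move_b
    return total_a, total_b

b, c, n = 3.0, 1.0, 10
print(play("TFT", "TFT", b, c, n))  # (20.0, 20.0): n(b-c) each
print(play("TFT", "D",   b, c, n))  # (-1.0, 3.0):  -c vs. b, then mutual defection
print(play("D",   "D",   b, c, n))  # (0.0, 0.0)
```

Note that TFT vs. Defect pays off only in the first round; from round two on, TFT retaliates and both earn zero, which is why the totals are −c and b no matter how large n is.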
Try the TFT game in the simulator. Can you figure out when TFT is a pure ESS? When Defect is a pure ESS?
TFT is a pure ESS when its payoff against itself is greater than the payoff of Defect against TFT. That is, when n(b−c) > b, which can be rearranged to b(n−1)/n > c. In other words, repeated cooperation is a pure ESS if n is high enough, b is high enough, or c is low enough. However, Defect is a pure ESS when its payoff against itself is greater than the payoff of TFT against Defect, that is, when 0 > −c, which holds whenever c > 0. Thus Defect is also a pure ESS.
How can TFT get started in a population of defectors? When c > 0 and n(b−c) > b, both TFT and Defect are pure ESSs. If a single TFT mutant is introduced into a population of defectors, it will not spread, because it will never receive cooperation. If two or more TFT players are present, they can get good cooperative payoffs from each other, mixed with the poor payoffs they get when meeting defectors. For TFT to take over, it must start at a critical mass.
You can test this in the simulator below. Make both Defect and TFT pure ESSs (set c > 0 and n(b−c) > b) and start with the initial proportion of TFT at zero. Gradually increase it across runs until TFT takes over. That minimum starting proportion is the critical mass. Try different values for b, c, and n and see how they affect the critical mass.
Anything that makes TFT more favorable (increasing its payoff, n(b−c)) reduces the critical mass required for TFT to take over. Thus reducing c or increasing n or b will reduce the critical mass.
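The critical mass can also be estimated analytically, not just by trial and error. In a population where a fraction p plays TFT, a TFT player expects p·n(b−c) + (1−p)(−c) and a defector expects p·b; setting these equal gives the unstable equilibrium. This closed form follows from the cumulative payoffs above but is not derived in the tutorial itself, so treat it as a sketch:

```python
def critical_mass(b, c, n):
    """Unstable equilibrium proportion of TFT, above which TFT takes over.
    Assumes both strategies are pure ESSs: c > 0 and n(b - c) > b."""
    assert c > 0 and n * (b - c) > b
    # Solve p*n(b-c) + (1-p)*(-c) = p*b for p.
    return c / (n * (b - c) + c - b)

base = critical_mass(b=3.0, c=1.0, n=10)
print(base)  # about 0.056 with these illustrative values

# Anything that raises TFT's payoff n(b-c) lowers the critical mass:
assert critical_mass(b=3.0, c=1.0, n=20) < base   # more interactions
assert critical_mass(b=4.0, c=1.0, n=10) < base   # higher benefit
assert critical_mass(b=3.0, c=0.5, n=10) < base   # lower cost
```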
So far, we have assumed that players meet randomly and do not know what strategy their partner will play. What if interactions are non-random, so that players are more likely to meet others like themselves? For example, animals may stay in one area and interact mainly with neighbors. Such interactions have two effects that tend to increase cooperation: (1) they increase the number of repeated interactions experienced by TFT players, reducing the critical mass required for cooperation to become dominant; and (2) if animals do not disperse before reproducing, then neighbors will also tend to be kin. Animals that cannot recognize each other or remember past interactions experience only the kinship effect; animals that can, experience both. The combined effect is powerful, greatly lowering the critical mass required for TFT to take over (try adding relatedness to the TFT simulation). Simulations of spatial interaction (beyond the scope of this tutorial) show just this result: starting from a random distribution of cooperators or TFT players, small clumps of cooperators form and then grow over time².
The Cooperation game we have been discussing is one of a class of similar games. Here are more examples:
|           | Hunt Stag       | Hunt Hare |
|-----------|-----------------|-----------|
| Hunt Stag | a share of stag | nothing   |
| Hunt Hare | a hare          | a hare    |

(payoffs to the row player)
The Cooperation and Prisoners’ Dilemma games have the same structure. The worst payoff goes to one who cooperates while his partner defects, the best payoff goes to one who defects while his partner cooperates, and mutual cooperation is better than mutual defection. Put more formally, where E(X,Y) is the payoff to X against Y, E(D,C) > E(C,C) > E(D,D) > E(C,D).
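For the original Cooperation game, this ordering holds exactly when b > c > 0 (the requirement that mutual cooperation beat mutual defection means b−c > 0). A quick check, with illustrative values:

```python
def is_prisoners_dilemma(E):
    """True if the payoffs satisfy E(D,C) > E(C,C) > E(D,D) > E(C,D)."""
    return E[("D","C")] > E[("C","C")] > E[("D","D")] > E[("C","D")]

def cooperation_game(b, c):
    # Payoffs to the first strategy against the second.
    return {("C","C"): b - c, ("C","D"): -c, ("D","C"): b, ("D","D"): 0.0}

assert is_prisoners_dilemma(cooperation_game(b=3.0, c=1.0))      # b > c > 0
assert not is_prisoners_dilemma(cooperation_game(b=1.0, c=3.0))  # cost exceeds benefit
```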
The Stag Hunt game is slightly different, in that Cooperate-Cooperate (hunting stag together) is the best payoff and there is no particular advantage to defection when one’s partner cooperates. Here, E(C,C) > E(D,C) ≥ E(D,D) > E(C,D). Now there are two pure ESSs, mutual cooperation and mutual defection, despite the fact that mutual cooperation offers the better outcome.
Now consider the Hawk-Dove game that we developed in the Conflict I tutorial (payoffs to the row player, where v is the value of the resource and c is the cost of losing an escalated fight):

|      | Dove (Cooperate) | Hawk (Defect) |
|------|------------------|---------------|
| Dove | v/2              | 0             |
| Hawk | v                | (v−c)/2       |
Arranged in this way, Hawk-Dove looks like a cooperation game, with Dove = Cooperate and Hawk = Defect. As in the Cooperation and Prisoners’ Dilemma games, E(D,C) > E(C,C) > E(D,D) > E(C,D) when v > c. Thus Defect (Hawk) is the only pure ESS despite the fact that the players would do best by agreeing to cooperatively share resources (Dove).
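Using the standard Hawk-Dove payoffs from the Conflict I tutorial (Hawk vs. Hawk gets (v−c)/2, Hawk vs. Dove gets v, Dove vs. Hawk gets 0, Dove vs. Dove gets v/2; note that c here is the cost of fighting, not the cost of cooperating), the ordering can be checked directly. A sketch with illustrative v and c:

```python
def hawk_dove(v, c):
    # Dove = Cooperate ("C"), Hawk = Defect ("D"); payoffs to the first strategy.
    return {("C","C"): v/2, ("C","D"): 0.0, ("D","C"): v, ("D","D"): (v - c)/2}

def has_pd_ordering(E):
    """True if E(D,C) > E(C,C) > E(D,D) > E(C,D), the cooperation-game ordering."""
    return E[("D","C")] > E[("C","C")] > E[("D","D")] > E[("C","D")]

assert has_pd_ordering(hawk_dove(v=4.0, c=2.0))      # v > c: same structure as PD
assert not has_pd_ordering(hawk_dove(v=2.0, c=4.0))  # v < c: E(D,D) < E(C,D), ordering fails
```

When v < c the ordering breaks because Hawk vs. Hawk becomes worse than Dove vs. Hawk, which is exactly the regime where a mixed population of Hawks and Doves is stable.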
All of these games have in common the unfortunate fact that players must forgo the overall optimum payoff in order to avoid the “sucker’s payoff” that results from cooperating with a defector. Try adding relatedness to these games in the simulations below. How does that affect cooperation?
Try adding relatedness to the Hawk-Dove-Assessor game from the Conflict I tutorial. Do you expect it to change the usual outcome, in which Assessor is the only ESS?
Wikipedia has good articles on the evolution of cooperation, Prisoners’ Dilemma, Stag Hunt, and TFT. The Stanford Encyclopedia of Philosophy has comprehensive (and often technical) articles on evolutionary game theory, Prisoners’ Dilemma, and related topics. This tutorial covered just the basics of the evolution of cooperation. You can follow up by exploring any of the following topics:
For Cornell BioNB 2210 students: download the Game Theory Quiz on Blackboard - see Semi-Monthly Quiz Folder. We urge you to complete all three tutorials (Conflict I, Conflict II, and Cooperation) before taking the quiz. This will be Part A of Semi-Monthly Quiz 5. There will be short answer questions that will NOT be autograded, thus your grade will not be immediately available.