This tutorial shows how to find stable states in symmetric games. It assumes a basic understanding of symmetric games, such as from the Conflict I tutorial. If you work through all the example problems in detail, this tutorial should take about 30 minutes.
An evolutionarily stable strategy (ESS) is a strategy that cannot be invaded by another strategy.
We can determine whether a strategy is evolutionarily stable by a simple thought experiment. Imagine that the strategy in question is used by the whole population. All members play that strategy, so all get the payoff of that strategy played against itself. Now ask what would happen if a small number started using an alternate strategy. Playing against the majority strategy, would this minority do better or worse than the majority? If they do better, they would increase in number over time, and the original strategy is not evolutionarily stable. If they do worse, then they cannot invade, so the original strategy is an ESS.
That is, if the entire population plays the ESS strategy, a mutation that made some members play another strategy would be eliminated. Here’s a trivial example:
If all population members are X, all get a payoff of 2. If Y mutants appear, they get a payoff of 1 against all of the Xs they meet. Thus Y does worse than X, so Y cannot invade a population of Xs. If we instead start with a population of all Ys, everyone gets a payoff of 1. If an X mutant appears, it rapidly takes over, since it gets payoffs of 2 against the majority Ys, while the Ys only get payoffs of 0 against each other.
You can see this in the simulation below. Starting with a population of all Y (proportion of X = 0, proportion of Y = 100), change the proportion of X to 1. In about 200 generations, the population switches to all X. Now reverse the situation, setting the proportion of X to 100 and Y to 1. Can Y invade X?
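If the simulator isn't available, the takeover can be sketched in a few lines of Python using discrete-time replicator dynamics. The update rule here (each strategy grows in proportion to its average payoff) is my assumption, not necessarily the simulator's exact model; the payoffs come from the text: X earns 2 against either strategy, Y earns 1 against X and 0 against another Y.

```python
# Game 1 payoffs, as described in the text.
E = {("X", "X"): 2, ("X", "Y"): 2,
     ("Y", "X"): 1, ("Y", "Y"): 0}

def game1_step(p):
    """One generation of replicator dynamics; p = proportion of X."""
    w_x = p * E[("X", "X")] + (1 - p) * E[("X", "Y")]   # average payoff to X
    w_y = p * E[("Y", "X")] + (1 - p) * E[("Y", "Y")]   # average payoff to Y
    w_bar = p * w_x + (1 - p) * w_y                     # population average
    return p * w_x / w_bar        # each strategy grows with its payoff

p = 1 / 101           # one X invader among 100 Ys
for _ in range(200):
    p = game1_step(p)
print(round(p, 3))    # -> 1.0: X has taken over
```

Running the reverse experiment (one Y among 100 Xs) leaves the Y proportion shrinking toward zero, matching the thought experiment above.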
A population of Xs cannot be invaded by Y, while a population of Ys can be invaded by X. Thus X is the only evolutionarily stable strategy in Game 1.
What about Game 2?
Test this in the simulator as well. Starting with a population of all Y (proportion of X = 0, proportion of Y = 100), change the proportion of X to 1. Does the proportion of X increase over time? Now reverse the situation, setting the proportion of X to 100 and Y to 1. Does the proportion of Y increase?
Both X and Y are ESSs in this game, because neither can be invaded by the other. The strategy that dominates over time is the one that starts in the majority. You can demonstrate this by trying different proportions of X and Y in the Game 2 simulator above. For example, what happens if you start with the proportions of X and Y at 99 and 100, respectively? At 100 and 99?
How can a game have two ESSs? You’ve seen it in the simulation, now reason it out as a thought experiment. If the entire population were Xs, each X would receive 2, while a rare mutant Y would get a payoff of 1 against all of the Xs, so X cannot be invaded, and thus X is an ESS. Conversely, if the entire population were Ys, each Y would receive 2, while a rare mutant X in a population of Ys would get a payoff of 1, so Y cannot be invaded. Y is also an ESS. Whichever strategy is in the majority meets itself (payoff of 2) more often than it meets the other strategy (payoff of 1), so the majority strategy does well, while the minority strategy usually meets the majority (payoff of 1) and rarely meets itself (payoff of 2) and does poorly.
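The majority-wins behavior of Game 2 can be checked with the same hedged replicator sketch (again, the update rule is a stand-in for the tutorial's simulator, not a copy of it). The payoffs come from the reasoning above: meeting your own strategy pays 2, meeting the other pays 1.

```python
def game2_step(p):
    """p = proportion of X; meeting your own strategy pays 2, the other pays 1."""
    w_x = p * 2 + (1 - p) * 1
    w_y = p * 1 + (1 - p) * 2
    return p * w_x / (p * w_x + (1 - p) * w_y)

def game2_run(p, generations=2000):
    for _ in range(generations):
        p = game2_step(p)
    return p

print(round(game2_run(0.51), 3))   # X in the slight majority -> all X (1.0)
print(round(game2_run(0.49), 3))   # Y in the slight majority -> all Y (0.0)
```

Whichever strategy starts even slightly ahead ends up fixed in the population, which is why the game has two pure ESSs.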
A pure ESS is a strategy that cannot be invaded by another strategy.
The ESSs in Games 1 and 2 are pure. Now consider Game 3:
What might be the ESS here? Test it with the simulation below. Starting with a population of all Y (proportion of X = 0, proportion of Y = 100), change the proportion of X to 1. What happens to the proportion of X? Now reverse the situation, setting the proportion of X to 100 and Y to 1. What happens to the proportion of Y?
Either X or Y can be invaded by the other, so there isn’t a pure ESS in this game. As the simulation shows, the proportions of X and Y become equal within 500 generations after one is introduced into a population of the other. In fact, no matter what proportions we start with, this game will equalize X and Y. Why does this happen? If we start with all Xs, everyone gets a payoff of 1. If a rare Y mutant appears, it meets the numerous Xs most of the time, getting a payoff of 2, which allows Y to increase. So X is not an ESS, because it can be invaded by Y. Similarly, if we start with all Ys, everyone gets a payoff of 1. If a rare X mutant appears, it will increase due to the payoff of 2 it gets when meeting the more numerous Ys. So Y isn’t an ESS either.
Does that mean there is no ESS in Game 3? No, it means that the ESS in Game 3 isn’t a pure ESS. Let’s look more closely at what happens when we start with a population of all Xs and add a few Ys to it. At first, the number of Ys will increase, since the rare Ys get payoffs of 2 when meeting the majority Xs. However, as the number of Ys increases, their average payoff declines, because the number of Xs they meet (payoff of 2) decreases and the number of Ys they meet (payoff of 1) increases. Also, as the number of Ys increases, the average payoff to Xs increases, since X vs. Y gets a payoff of 2. So there is a balance of Xs and Ys where each gets the maximum payoff from the other. That balance, the proportion of X and Y, is a mixed ESS.
In this case, it’s obvious that the best payoff for X and Y is when they are equal in the population. At that point, each X will meet Xs and Ys about equally, getting an average payoff of 1½, while each Y will meet Xs and Ys equally, also for an average payoff of 1½. Thus for Game 3 we would say that the ESS is 50% X and 50% Y.
If the proportion of Xs were to rise above 50%, their payoffs would decline because each X would now be more likely to meet another X (payoff of 1) and less likely to meet a Y (payoff of 2). However, if the proportion of Xs were to decrease below 50%, their payoffs would increase because each X would now be more likely to meet Ys (payoff of 2) than Xs (payoff of 1). Thus the ratio of X to Y tends to stabilize at 1:1. You can demonstrate this with the Game 3 simulation above. Set any ratio of X to Y that you like, and it will revert to 1:1. If you change payoffs from 1 and 2 to different values, you’ll see the stable proportion change.
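The same hedged replicator sketch shows that in Game 3 the 1:1 ratio is an attractor: wherever the population starts, the mix returns to 50% X. The payoffs are from the text (meeting your own strategy pays 1, meeting the other pays 2).

```python
def game3_step(p):
    """p = proportion of X; meeting your own strategy pays 1, the other pays 2."""
    w_x = p * 1 + (1 - p) * 2
    w_y = p * 2 + (1 - p) * 1
    return p * w_x / (p * w_x + (1 - p) * w_y)

for start in (0.01, 0.30, 0.99):
    p = start
    for _ in range(500):
        p = game3_step(p)
    print(start, "->", round(p, 3))   # every start ends at 0.5
```

Contrast this with Game 2, where the same kind of run drives the population to one extreme or the other instead of a stable interior mix.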
A mixed evolutionarily stable strategy is a mix of tactics in a stable ratio.
A mixed evolutionarily stable state is a genetic polymorphism of strategies in a stable ratio.
Thus if a strategy (or tactic) increases beyond its ESS ratio, it becomes less favorable (and decreases back to the ratio), and if a strategy (or tactic) decreases below that ratio, it becomes more favorable (and increases back to the stable ratio).
In Game 3, we could see that a ratio of 1:1 for X:Y is evolutionarily stable, but other games are less obvious. How can we determine the ESS for a more general case? For a ratio of two strategies to be stable, the average payoff for the two strategies must be equal when they are in that ratio. If we can determine the average payoff for each strategy and set them equal, then we can solve for the stable ratio using basic algebra.
What is the average payoff for a strategy? It’s the payoff you would get for playing that strategy against everyone else in the population. So, for two strategies X and Y, the average payoff to X is the payoff for playing X against all the Xs and Ys in the population. Of course, if there are more Xs than Ys, then the X vs. X payoff occurs more often than the X vs. Y payoff, so the average payoff depends on the proportions of Xs and Ys.
To simplify things, we'll use variables instead of descriptions:

- p = the proportion of X in the population (so the proportion of Y is 1 − p)
- EXX = the payoff to X when it meets another X
- EXY = the payoff to X when it meets a Y
- EYX = the payoff to Y when it meets an X
- EYY = the payoff to Y when it meets another Y

Using these values, the average payoff to X is pEXX + (1−p)EXY, and the average payoff to Y is pEYX + (1−p)EYY.

To find the ESS, we set these two average payoffs equal and solve for p:
pEXX + (1−p)EXY = pEYX + (1−p)EYY
We’ll do this with the payoffs for Game 3, which we already know has a mixed ESS with a ratio of 1:1 for X:Y.
|   | meets X | meets Y |
|---|---------|---------|
| X | EXX = 1 | EXY = 2 |
| Y | EYX = 2 | EYY = 1 |
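Solving the equality for p by the algebra above gives p(EXX − EYX) = (1 − p)(EYY − EXY), so p = (EYY − EXY) / ((EXX − EYX) + (EYY − EXY)). A small helper (the function name is mine, not from the tutorial) can evaluate this closed form:

```python
def mixed_ess(exx, exy, eyx, eyy):
    """Proportion p of X at which X and Y earn equal average payoffs."""
    denom = (exx - eyx) + (eyy - exy)
    if denom == 0:
        raise ValueError("no unique solution for p")
    return (eyy - exy) / denom

# Game 3: EXX = 1, EXY = 2, EYX = 2, EYY = 1
print(mixed_ess(1, 2, 2, 1))   # 0.5, i.e. 50% X and 50% Y
```

Note that the helper only solves the equality; as discussed below, the result is only a genuine mixed ESS when neither pure strategy is an ESS.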
Try it yourself with each of these examples and check your answer with the ✓ button after each one:
You may have noticed something odd about the last calculation. This is the same as Game 2, in which X and Y are both pure ESSs, and the one that dominates over time is the one that starts in the majority. So why does the calculation give us ½ as the proportion? If the population started with exactly equal numbers of X and Y, and each member interacted with equal numbers of both, the population could stay at a 1:1 ratio. However, if the balance were tipped, even a little, toward one strategy, that one would rapidly take over. You can see this in the simulation. Start with X and Y in equal proportions (say 100 and 100), then increase one of them slightly and watch it take over. While 1:1 is an equilibrium, it is not a stable equilibrium.
This illustrates an important point about the calculation of a mixed ESS: the result is meaningful only if there is no pure ESS. If you go back to a game with a single pure ESS and try to do the calculation, you will see that it doesn't even work out mathematically (you'll get 1/0 or 1 = 0, depending on how you do the algebra).

If you calculate a mixed ESS of ½ in a game like the one above, your answer is incorrect: there is no mixed ESS in that game; there are instead two pure ESSs. Before calculating a mixed ESS, be sure to check first that there are no pure ESSs.
When there’s one pure ESS, you can see it in the payoff matrix. It is the strategy that does better than all the others regardless of the strategy it is paired with. Look for the best payoff in each column of the matrix. If the best payoff is always in the same row, then that row’s strategy is the pure ESS. Here’s an example:
In each column, the payoff for row Y is better than the payoffs for rows X and Z, so Y is the pure ESS.
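The column test is mechanical enough to automate. Here is a sketch with a made-up 3×3 matrix in the same spirit as the example (the tutorial's actual matrix isn't reproduced in this text); matrix[r][c] is the payoff to row strategy r when it meets column strategy c, and the function name is my own.

```python
def pure_ess(names, matrix):
    """Return the strategy whose row is strictly best in every column, or None."""
    n = len(names)
    for r in range(n):
        if all(matrix[r][c] > max(matrix[k][c] for k in range(n) if k != r)
               for c in range(n)):
            return names[r]
    return None

# Made-up example where row Y holds the best payoff in every column:
names = ["X", "Y", "Z"]
matrix = [[1, 0, 2],    # payoffs to X vs X, Y, Z
          [3, 1, 4],    # payoffs to Y vs X, Y, Z
          [2, 0, 3]]    # payoffs to Z vs X, Y, Z
print(pure_ess(names, matrix))   # -> Y
```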
Sometimes when there are three strategies, one isn’t clearly better than the others. However, there may be one strategy that is never the best strategy against any opponent. If so, you can eliminate that strategy and then look for a pure or mixed ESS in the remaining 2×2 matrix. For example:
There isn’t a clear ESS at first because the best strategy in each column isn’t the same: Z is best against X, but X is best against Y. However, Y is not the best strategy in any column. If we eliminate Y (both the row and the column) and examine the result, Z is a pure ESS:
A strategy doesn't have to have the worst payoff in every column to be removable (Y is better than X against X in Game 5), but it cannot be even tied for best in any column.
Removing a losing strategy isn’t guaranteed to reveal a pure ESS, but it will at least reduce the matrix to a size where you can more easily calculate a mixed ESS. Furthermore, if you have a matrix that’s larger than 3×3, you can repeat the process of removing the weakest strategy more than once.
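The repeated-elimination process described above can be sketched in code. The matrix here is made up (not Game 5), and the function name is mine; a strategy counts as "best" in a column even when it is only tied, matching the rule above.

```python
def eliminate_never_best(names, matrix):
    """Repeatedly drop strategies that are never best in any column."""
    names = list(names)
    matrix = [row[:] for row in matrix]
    changed = True
    while changed and len(names) > 1:
        changed = False
        for r in range(len(names)):
            best_somewhere = any(
                matrix[r][c] == max(matrix[k][c] for k in range(len(names)))
                for c in range(len(names)))
            if not best_somewhere:
                # Drop both the row and the column for this strategy.
                del names[r]
                matrix.pop(r)
                for row in matrix:
                    del row[r]
                changed = True
                break
    return names, matrix

# Made-up example: Y is never best in any column and is removed first;
# in the remaining 2x2 game X is never best either, so only Z survives.
names, matrix = eliminate_never_best(["X", "Y", "Z"],
                                     [[1, 3, 0],
                                      [2, 1, 1],
                                      [3, 2, 2]])
print(names)
```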
In these examples, click the button next to the pure ESS and check your answer with the ✓ button after each one. You may find this easier with a large matrix if you write the matrix on paper and then cross out strategy rows and columns as you eliminate them.
Every two-strategy game has at least one pure ESS or a mixed ESS. What about games with more than two strategies? In the previous section, you saw three- and four-strategy games with pure and mixed ESSs, but some games with more than two strategies are inherently unstable and have no ESS at all. Consider the rock-scissors-paper game, in which rock beats scissors, scissors beats paper, and paper beats rock. We can represent the game with this matrix:
First of all, convince yourself that this matrix really does represent the game correctly. To keep things simple, winning has a payoff of 1, losing has a payoff of −1, and a draw just gets 0. If you try to eyeball it to determine the ESS, you see that each strategy is equally good. In fact, if this game were to start off with exactly equal proportions of Rock, Paper, and Scissors, it would stay there indefinitely. However, that is not a stable equilibrium, just as in Game 2 where the calculated ESS of ½ is unstable.
Unlike, which settles down to one of two extremes depending on the initial proportions of its strategies, when rock-scissors-paper is imbalanced, it oscillates, with first one strategy dominating, then the strategy that beats that one dominating, and then the strategy that beats that one. For example, it might go rock, paper, scissors, rock, paper, scissors, rock... forever. To see why this happens, imagine starting with Rock in the majority. This is not a good situation for Scissors, which will decline, but it’s great for Paper, which will increase on the strength of the +1 payoffs it gets meeting all those rocks. Now as Paper becomes more common, Scissors starts to get good payoffs, so it will increase. When Scissors dominates, Rock can increase, and the cycle begins again. You can explore this with the following simulation:
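If the simulator isn't available, the oscillation can be reproduced with the same hedged replicator sketch used earlier (the +2 shift that keeps fitnesses positive is my modeling choice, not part of the game). The payoffs are the matrix values: win = +1, loss = −1, draw = 0.

```python
labels = ("Rock", "Paper", "Scissors")

def rps_step(rock, paper, scissors):
    # Average payoff to each strategy against the current population mix.
    w = (scissors - paper,   # Rock beats Scissors, loses to Paper
         rock - scissors,    # Paper beats Rock, loses to Scissors
         paper - rock)       # Scissors beats Paper, loses to Rock
    # Shift payoffs by +2 so fitnesses stay positive, then renormalize.
    f = [share * (wi + 2) for share, wi in zip((rock, paper, scissors), w)]
    total = sum(f)
    return tuple(x / total for x in f)

state = (0.6, 0.2, 0.2)        # start with Rock in the majority
history = []
for _ in range(300):
    state = rps_step(*state)
    history.append(labels[max(range(3), key=lambda i: state[i])])

# Over the run, each strategy takes its turn in the majority.
print(sorted(set(history)))
```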
Rock-scissors-paper is an artificial game, but there are natural situations that mimic it. For example, there is a lizard in which three male mating strategies alternate in their dominance. Oscillation is even more common in asymmetric games (covered elsewhere) such as predator-prey or parasite-host interactions that may result in cycles of population growth and crashes.