MAT109 · Erich Prisner · Franklin College · 2007-2009

More on Simultaneous and Sequential Games

1. Randomness in Simultaneous Games

Very often, the payoffs in a simultaneous game depend not only on the choices of the players but also on a random component. Consider the following example:

variant of TWO BARS: Two bars charge $2, $4, or $5 for a beer. Marginal cost can be neglected. As before, every month 4000 beers are drunk by natives, who go to the bar with the lower price. Tourists just choose one of the two bars randomly. Depending on whether a planned hotel will be built or not (both outcomes are equally likely), 7000 or 4000 beers are drunk by tourists. What prices would the bars select?

Later we will see that sequential games involving randomness are modelled by defining a random player, called "Nature". For simultaneous games this is not necessary. All we have to do is to replace the payoffs by expected payoffs.

In our example, assume both set a price of $2. Then each will sell 2000 beers to natives. With 50% probability each will sell 3500 additional beers to tourists, therefore making 2·(2000+3500) = 11000 dollars, and with 50% probability each will sell only 2000 additional beers to tourists, making 2·(2000+2000) = 8000 dollars. The expected value for each is (1/2)·11000 + (1/2)·8000 = 9500 dollars. The same analysis can be done for the other eight entries. Of course, in this simple example we could also use the deterministic model (and the same Excel sheet) with an expected number of (1/2)·7000 + (1/2)·4000 = 5500 tourists.
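This expected-payoff computation can be sketched in a few lines of Python (the function and variable names are ours, not part of the original Excel sheet):

```python
# Expected payoffs in the TWO BARS variant, using the expected 5500 tourists.
PRICES = [2, 4, 5]
NATIVES = 4000
EXPECTED_TOURISTS = 0.5 * 7000 + 0.5 * 4000   # = 5500

def expected_payoffs(p1, p2):
    """Natives go to the cheaper bar (split on a tie); tourists split evenly."""
    if p1 < p2:
        n1, n2 = NATIVES, 0
    elif p1 > p2:
        n1, n2 = 0, NATIVES
    else:
        n1 = n2 = NATIVES / 2
    t = EXPECTED_TOURISTS / 2
    return p1 * (n1 + t), p2 * (n2 + t)

print(expected_payoffs(2, 2))  # (9500.0, 9500.0), as computed above
```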

The modified bimatrix is
         2              4              5
  2    9500, 9500    13500, 11000   13500, 13750
  4    11000, 13500  19000, 19000   27000, 13750
  5    13750, 13500  13750, 27000   23750, 23750

Two 3-spinners, (7,5,4) and (1,8,6), are given. You have to select one and play against the other. The player whose 3-spinner shows the larger number wins. Which 3-spinner would you select?
There are 9 possible outcomes, 7-1, 7-8, 7-6, 5-1, 5-8, 5-6, 4-1, 4-8, 4-6, and all of them are equally likely. In outcomes 1, 3, 4, and 7 the left player wins; in the other 5 outcomes the right player wins. Therefore the (1,8,6) spinner is better.
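The counting argument can be checked with a short sketch (the function name is ours):

```python
from itertools import product

def left_win_prob(left, right):
    """Probability that the left 3-spinner beats the right one
    (all 9 value pairs are equally likely; no ties occur here)."""
    wins = sum(1 for a, b in product(left, right) if a > b)
    return wins / 9

print(left_win_prob((7, 5, 4), (1, 8, 6)))  # 4/9, so the (1,8,6) spinner is better
```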



2. Breaking Ties in Sequential Games

What happens in the Backwards Induction Procedure above if in step (P3) there are two or more successor vertices, let's say "W" and "U", both giving the maximum value for player Xania to move? Xania would be totally indifferent between moving to W or to U. The value for Xania at V is obviously the value for Xania at W (which is the same as the value for Xania at U), but the decision where to move would still influence the payoffs of the other players. The most natural solution to this problem is to assume that Xania moves to W or U with equal probability. This is already a so-called "behavior strategy"---at certain vertices a player is not committed to a single move but rather alternates between two or more moves in a random way. We then call Xania a "neutral" player. Obviously the value for any other player at V is computed as the average (expected value with 50-50 probabilities) of that player's values at W and U.

In a two-person game, a slightly different approach can also be taken. If player X is slightly "hostile" towards her opponent, she chooses whichever of W and U has the lower value for her opponent. If X is slightly "friendly", she chooses whichever of W and U has the higher value for her opponent. Note that she still cares only about her own payoffs; only in a tie of her own payoffs does she consider the other's payoff. Hostile players can be modeled by subtracting a small fraction (let's say 0.1%) of the opponent's payoffs from her own payoffs, a fraction so small that the ordering of her payoffs is not changed, but situations that were tied become ordered. Similarly, a friendly player adds a small fraction of the opponent's payoffs to her own payoffs.
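The perturbation trick can be sketched as follows (a minimal illustration; the names and the example payoffs are ours):

```python
EPS = 0.001   # the small fraction (0.1%) mentioned in the text

def hostile_value(own, other):
    return own - EPS * other   # subtract a fraction of the opponent's payoff

def friendly_value(own, other):
    return own + EPS * other   # add a fraction of the opponent's payoff

# Two successors tied at own payoff 3; opponent payoffs 1 and 4:
successors = [(3, 1), (3, 4)]
print(max(successors, key=lambda s: hostile_value(*s)))   # (3, 1)
print(max(successors, key=lambda s: friendly_value(*s)))  # (3, 4)
```

The fraction is small enough that the ordering of the player's own payoffs is untouched; only ties are resolved.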

Note that friendly players are far from being altruistic. However, altruistic behavior can also be modeled using the same trick, but for real altruism one would add a considerable amount of the opponent's payoff to the player's own payoff.

There are even more ways of breaking ties, in particular in games with more than two players. A player may be friendly to some players and hostile to others. This pattern could even change at different vertices (being hostile to Ann in the first half of the game, but friendly later), possibly with elaborate randomizing rules. Each of these approaches yields a Zermelo-Kuhn equilibrium. Thus even though "normally" sequential games have only one such equilibrium, if tie situations occur there are always many of them---infinitely many if we allow behavior strategies, which will be described later.

3. Sequentialization I

In real life, where the rules are not always carved in stone, two players playing a simultaneous game may have the possibility of either moving a little earlier than the other, or delaying their move slightly until they see what the other player has played. Even if moving earlier is not possible, sometimes it is possible to publicly announce what you will play, to make a (hopefully convincing) commitment. That means that the simultaneous game is transformed into a sequential game, and we are now going to discuss to whose advantage that is.

Obviously there are two roles in the sequentialization of the game, moving first or moving second, and which one is preferred is probably often decided by the temperament of the players. Some like to move first and dictate the action, and others prefer to wait and see what the other has played, leaving the other in the dark about their intentions. But actually this question of whether to move first or last in a sequentialization should not be left to temperament but to an analysis of the game.

Backwards analysis of the sequential version carries over to bimatrix notation quite easily: If Ann moves first and Beth second, then Ann determines the row with her move. Each row corresponds to a situation Beth may face, and in each row, Beth obviously would choose the cell maximizing her payoff. We color all these cells. Note that the colored cells include all pure Nash equilibria, since the condition for coloring is just one of two required for pure Nash equilibria. Note also that every row contains at least one colored cell.

In general, Ann chooses the row maximizing her own payoff, given Beth's (possibly tie-broken) responses to each row, using the three variants for breaking ties discussed in the previous section.

The simplest case is where in each row only one cell is colored. We call this property unique response for Beth: every one of Ann's moves has exactly one best response. In this case, Ann chooses the row whose colored cell has the maximum first (Ann's) entry. Similarly, unique response for Ann would mean that in every column there is exactly one entry maximizing Ann's payoff.

Theorem: Under the assumption that
  1. Beth has a unique response ("unique response for Beth"),
the first-moving Ann can select any pure Nash equilibrium of the simultaneous game as the outcome of the sequential version. Under the additional condition that
  2. the simultaneous game has at least one pure Nash equilibrium,
the first-moving Ann is not worse off than in the simultaneous game. Under the additional assumption that
  3. the simultaneous game is symmetric,
the first-moving player achieves at least as much as the second-moving player. Then there is a "first-move advantage".

Look at the following symmetric game, where both player's moves are called "1", "2", and "3":
      1     2     3
 1   1, 1  2, 2  3, 1
 2   2, 2  3, 3  1, 3
 3   1, 3  3, 1  2, 2
Conditions 2 and 3 of the theorem above are fulfilled, but not condition 1. There is one pure Nash equilibrium, where both players play move "2" and get a payoff of 3. However, in the sequential version with neutral Beth, Ann only achieves an expected payoff of 2, since Beth reacts to Ann's "2" by playing either "2" or "3". Therefore in the sequentialization the player moving first is worse off than in the simultaneous version. Of course, this example doesn't fulfill the requirements of the theorem above.
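The claim that first-moving Ann only expects 2 can be checked with a short sketch (neutral Beth averages over her tied best responses; the encoding is ours):

```python
# (Ann's move, Beth's move) -> (Ann's payoff, Beth's payoff)
game = {(1, 1): (1, 1), (1, 2): (2, 2), (1, 3): (3, 1),
        (2, 1): (2, 2), (2, 2): (3, 3), (2, 3): (1, 3),
        (3, 1): (1, 3), (3, 2): (3, 1), (3, 3): (2, 2)}

def row_value(r):
    """Neutral Beth best-responds to Ann's row r; ties are averaged."""
    best = max(game[(r, c)][1] for c in (1, 2, 3))
    tied = [game[(r, c)] for c in (1, 2, 3) if game[(r, c)][1] == best]
    return tuple(sum(p[i] for p in tied) / len(tied) for i in (0, 1))

values = {r: row_value(r) for r in (1, 2, 3)}
print(values[2])                           # (2.0, 3.0): Beth ties between "2" and "3"
print(max(v[0] for v in values.values()))  # 2.0, less than the Nash payoff of 3
```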

What happens in the same game with hostile or friendly Beth?
Hostile Beth:
      1        2        3
 1   1, 0.99  2, 1.98  3, 0.97
 2   2, 1.98  3, 2.97  1, 2.99
 3   1, 2.99  3, 0.97  2, 1.98

Friendly Beth:
      1        2        3
 1   1, 1.01  2, 2.02  3, 1.03
 2   2, 2.02  3, 3.03  1, 3.01
 3   1, 3.01  3, 1.03  2, 2.02

For example, consider the game CHICKEN. If your opponent publicly removes his steering wheel before you both start, leaving him no possibility to avoid a collision himself, then you will certainly chicken and look bad to the ladies!


On the other hand, you wouldn't want to move slightly earlier at ROCK-SCISSORS-PAPER.

Also, SENATE RACE II has a second-mover advantage if x (the value of winning) > y (the value of compromising your political views).
Excel sheet for sequentialization

However, if Beth has no unique best response to some of Ann's moves, then Ann moving first can result in a lower outcome for Ann, as can be seen by the following example.
      1     2     3
 1   0, 3  1, 0  4, 3
 2   1, 4  4, 4  1, 0
 3   3, 4  6, 1  0, 4
There are two pure Nash equilibria: move "1" versus move "3", with payoffs of 4 and 3, and move "3" versus move "1", with payoffs of 3 and 4. But in the sequential version, Beth would respond to Ann's move "1" with "1" or "3", with probability 50% each. Therefore the expected outcome for Ann would be 50%·0 + 50%·4 = 2. Beth would respond to Ann's move "2" with either "1" or "2", giving an expected value of 50%·1 + 50%·4 = 2.5 for Ann. Finally, Beth's response to Ann's move "3" is either "1" or "3", giving an expected value of 50%·3 + 50%·0 = 1.5 for Ann. Therefore Ann expects the highest payoff, 2.5, when playing "2", and would therefore play it. The expected payoffs for Ann and Beth would be 2.5 and 4, a clear disadvantage for Ann.
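The same kind of check works for this bimatrix; here is a reusable sketch (names and structure are ours):

```python
# Rows are Ann's moves, columns Beth's; cells are (Ann's payoff, Beth's payoff).
M = [[(0, 3), (1, 0), (4, 3)],
     [(1, 4), (4, 4), (1, 0)],
     [(3, 4), (6, 1), (0, 4)]]

def sequential_value(M):
    """Ann moves first; neutral Beth averages over her tied best responses.
    Returns the expected payoffs of Ann's best row."""
    row_values = []
    for row in M:
        best = max(b for _, b in row)
        tied = [cell for cell in row if cell[1] == best]
        row_values.append(tuple(sum(c[i] for c in tied) / len(tied) for i in (0, 1)))
    return max(row_values, key=lambda v: v[0])

print(sequential_value(M))  # (2.5, 4.0): worse for Ann than either pure Nash equilibrium
```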


Sequential Legislator's Vote Analysis
Matching chairs with appropriate payoffs ... friendly and hostile version ...


  1. Analyse the following game:
    TWO-ROUND BARGAINING(5,3): There is a fixed number of dollar bills, say 5, to be shared by the two players. Ann makes an offer on how to share them, which Beth can either accept or reject. If she accepts, the bills are divided as agreed upon. If she rejects, two dollar bills are taken away and only three remain on the desk. Then Beth makes an offer on how to share these three dollars, which Ann can accept or reject. If Ann accepts, the bills are divided accordingly; if she rejects, nobody gets anything.
    Draw the extensive form of the game, and perform the backward induction analysis for neutral players, for friendly players, and for hostile players. Remember that you don't have to use a Game Tree---you can also use a digraph to simplify matters.

    Here is the digraph:

    And here is the result after doing backward induction with neutral Ann and Beth, who choose among options leading to the same payoff with equal probability. Interestingly enough, Beth has a slight advantage in this game.

    If both Ann and Beth are friendly to each other, or if both are hostile to each other, the payoffs are even 2 for Ann and 3 for Beth.
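One way to check these numbers is a small backward-induction sketch for the neutral case (the `neutral` helper and the state encoding are ours; payoff pairs are (Ann, Beth)):

```python
def neutral(options, i):
    """The mover maximizes coordinate i of (Ann, Beth); ties are averaged."""
    best = max(o[i] for o in options)
    tied = [o for o in options if o[i] == best]
    return tuple(sum(o[j] for o in tied) / len(tied) for j in (0, 1))

# Round 2: Beth offers to keep b of the 3 remaining dollars; Ann accepts or rejects.
round2 = neutral([neutral([(3 - b, b), (0, 0)], 0) for b in range(4)], 1)

# Round 1: Ann offers to keep a of the 5 dollars; Beth accepts or rejects (-> round 2).
round1 = neutral([neutral([(a, 5 - a), round2], 1) for a in range(6)], 0)

print(round1)  # (2.0, 2.5): with neutral players, Beth has a slight advantage
```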

  2. .....
  3. .....
  4. .....


  1. Project: WHO's NEXT(n): Three players play a sequential game with n moves. At each move, the player who has to move decides which one of the other two players has to move next. Ann starts in move 1. In move n, the player to move decides which one of the other two players wins a payoff of 1. The two players that don't win get a payoff of 0.
    Since this is a sequential game with a lot of payoff ties, the question is how small sympathies and antipathies between the players would affect the outcome. What happens, for instance, if A likes B and C, but B hates A and likes C, and C likes A and hates B? There are obviously many cases to consider.

  2. Project: FINAL EXAM: This game is played by three players: the teacher, student X, and student Y. Of course, there are also 20 other students, but they don't make decisions and therefore do not really participate in the game. Student X makes the first move by wearing his cap to class or not. Seeing this, student Y has the same option: cap or not. Seeing the two students with or without caps, the teacher decides to have a special eye (a) on X, or (b) on Y, or (c) the teacher could also relax. Seeing the teacher's reaction, each of students X and Y decides for himself whether to cheat or not.
    Cheating, when not noticed by the teacher, brings a better grade, a payoff of 1. If the teacher notices cheating, disciplinary measures are taken, a payoff of -6 for the student. Cheating is noticed by the teacher
    • with probability 1/5 if the student has no cap to hide and the teacher has no special eye on the student,
    • with probability 1/10 if the student wears a cap but the teacher has no special eye on the student,
    • with probability 1/3 if the student wears no cap and the teacher has a special eye on the student,
    • with probability 1/6 if the student wears a cap and the teacher has a special eye on the student.
    The teacher also has payoffs: the sum of a value of -1 if he or she has to keep an eye on a student (compared to 0 if he or she can relax), and a value of -2 if cheating has occurred but gone unnoticed.
    How will the players play? Modify the model if you deem it necessary.
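As a quick sanity check on the detection probabilities (the threshold computation is ours, not part of the exercise): with detection probability p, cheating has expected payoff 1·(1-p) - 6·p = 1 - 7p, so cheating pays only when p < 1/7.

```python
from fractions import Fraction as F

def cheat_value(p):
    """Expected payoff of cheating when the teacher notices with probability p."""
    return (1 - p) * 1 + p * (-6)   # = 1 - 7p

cases = {"no cap, no eye": F(1, 5), "cap, no eye": F(1, 10),
         "no cap, eye": F(1, 3), "cap, eye": F(1, 6)}
for case, p in cases.items():
    print(case, cheat_value(p), "cheat" if cheat_value(p) > 0 else "don't cheat")
# Only "cap, no eye" (p = 1/10 < 1/7) makes cheating worthwhile.
```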

  3. .....
  4. .....
  5. .....
  6. .....