## Do people play according to Nash Equilibrium? Shivangi Chandel has the answers.

“If people don’t play the way the theory says, their behavior has not proved the mathematics wrong, any more than finding that cashiers sometimes give the wrong change disproves arithmetic.”

*–* Colin F. Camerer in *Behavioral Game Theory: Experiments in Strategic Interaction*

Nash Equilibrium remains one of the most powerful concepts in Game theory, telling us how people *should* play in any particular game. But how do people actually play? You may recall from my article on the Prisoner’s Dilemma that Dilbert did not adhere to the best-response strategy dictated by Nash Equilibrium. He chose to remain silent when confessing was the best-case scenario. This article discusses experimental findings that explore both human behaviour and the “performance” of Nash Equilibrium. By performance of this equilibrium concept, I mean to ask the following questions:

- Do players play dominant strategies?
- Are they being rational at the time of choosing their strategies? (A rational player attempts to maximize his or her payoff.)
- If players keep playing the game repeatedly, do they eventually learn to play payoff-maximizing strategies?

Various laboratory experiments have been conducted to see how different the behavioural patterns of the “participants” are from those predicted by the theory. Many researchers observe that people rarely start out by playing Nash Equilibrium. However, in some cases, with repeated rounds of the game, they might eventually learn to play dominant strategies. This is less surprising to experimental game theorists than it is to us. They prefer participants to know as little as possible about the concepts of Game theory, and to just give them a sheet of paper or a computer monitor with numbers. The reason behind this is pretty obvious: would any experiment predict human behaviour correctly if the set of participants contained only game theorists? Of course not! They would all choose best-response strategies from Nash Equilibrium!

Consider the Prisoner’s Dilemma game, for example. As the theory tells us, confessing, i.e. not cooperating with your partner, is the dominant strategy for each player. However, experimental evidence points out that participants choose to cooperate with each other more often than predicted by theory (even when players are matched with complete strangers). This non-conformity to Nash Equilibrium could be due to several reasons. It could be that the players have some social preferences such as *Altruism* (where people receive utility from being nice to others), *Fairness* (where people receive utility from being just to others) or *Vindictiveness* (where people like to punish those who seem unfair or unkind), which motivate them to cooperate with their partner. It could also be that the players will choose to remain silent if they believe for sure that their partner will remain silent too (*Trust*).
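The dominant-strategy claim is easy to verify mechanically. Here is a minimal sketch; the payoff numbers are illustrative assumptions (prison years, written as negative payoffs), not figures from my earlier article:

```python
# Prisoner's Dilemma payoffs, (row player, column player).
# These specific numbers are assumed for illustration.
PAYOFFS = {
    ("silent", "silent"):   (-1, -1),
    ("silent", "confess"):  (-10, 0),
    ("confess", "silent"):  (0, -10),
    ("confess", "confess"): (-5, -5),
}

def dominant_strategy(payoffs, actions=("silent", "confess")):
    """Return the row player's strictly dominant action, or None."""
    for a in actions:
        others = [b for b in actions if b != a]
        # a dominates if it beats every alternative against every opponent action
        if all(payoffs[(a, opp)][0] > payoffs[(b, opp)][0]
               for b in others for opp in actions):
            return a
    return None

print(dominant_strategy(PAYOFFS))  # confess
```

Whatever the opponent does, confessing yields a strictly higher payoff than staying silent, which is exactly what makes it dominant.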

Unfortunately, the design of the Prisoner’s Dilemma game is such that it does not give us enough ammo to predict human behaviour. It is difficult to tell whether players are cooperating because they are altruistic by nature, or because after gaining experience of, say, 20 rounds, they simply expect cooperation in the next.

One game which is commonly used to predict and interpret human behaviour is the so-called **Ultimatum Game**. The simplest version of the game goes like this: suppose two players want to divide 100 rupees between them. Player 1 (or the “proposer”) goes first and offers a split such that he gets “X” and the other player gets “100 – X”. Player 2 (or the “responder”) then chooses whether to accept or reject. If player 2 rejects the offer, they both get nothing. If he accepts, they split the money as proposed and the bargaining ends.

Now, we can categorize the many actions that player 1 can take into two options: the first is to offer an equal and fair split of 50 rupees each, whereas the second is to offer an unfair split where player 1 keeps a higher share than player 2, e.g. offers like (60, 40), (80, 20) or (90, 10).

Below, the game is illustrated as a Game tree where we compare one such unfair offer, (90, 10), with the fair offer.

How do you read this Game tree? Each node (small circle) depicts the position of the decision-making player, and the branches depict that player’s alternative actions. The first node, therefore, depicts player 1’s turn to choose between making a fair offer and making an unfair offer. The second level of nodes shows that player 2 moves to accept or reject the offer *after* observing player 1’s action. The four possible outcomes of this game are *{(Fair, Accept), (Fair, Reject), (Unfair, Accept), (Unfair, Reject)}* with the payoffs *{(50, 50), (0, 0), (90, 10), (0, 0)}* respectively. We can also write the above game in the form of a payoff matrix, with payoffs listed as (player 1, player 2):

|            | Accept   | Reject |
|------------|----------|--------|
| **Fair**   | (50, 50) | (0, 0) |
| **Unfair** | (90, 10) | (0, 0) |

You can also come to the same conclusion by looking at the game tree drawn earlier. Consider the portion of the tree from the second level of nodes onwards. When player 1 makes a fair offer, player 2 will choose to get 50 rather than 0. Likewise, when player 1 makes an unfair offer, player 2 will choose to get 10 rather than 0. When it comes to the first node, player 1 now has to compare his payoffs from the outcomes belonging to the action branch “Accept”, i.e. compare 90 with 50. Thus, in Nash Equilibrium, player 1 will propose the unfair offer of (90, 10), which player 2 will accept.
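This reasoning, solving the last mover's choice first and then working backwards, is backward induction, and it can be sketched in a few lines. The names and layout below are my own; the payoffs are the ones from the tree above:

```python
# The two-offer ultimatum game from the text; payoffs are (player 1, player 2).
GAME = {
    "Fair":   {"Accept": (50, 50), "Reject": (0, 0)},
    "Unfair": {"Accept": (90, 10), "Reject": (0, 0)},
}

def solve(game):
    """Backward induction: player 2 best-responds at each second-level node,
    then player 1 picks the offer that maximizes his resulting payoff."""
    # Step 1: at each node, player 2 maximizes her own (second) payoff.
    responses = {
        offer: max(replies, key=lambda r: replies[r][1])
        for offer, replies in game.items()
    }
    # Step 2: player 1 compares his payoffs given those anticipated responses.
    best = max(game, key=lambda o: game[o][responses[o]][0])
    return best, responses[best], game[best][responses[best]]

print(solve(GAME))  # ('Unfair', 'Accept', (90, 10))
```

Player 2 accepts at both nodes (50 beats 0, and 10 beats 0), so player 1 compares 90 with 50 and proposes the unfair split, just as the text concludes.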

This split of (90, 10) is just one of many unfair offers. A “self-interested” player 2 is expected to accept any non-zero share that is offered to him or her, even one as low as 1 rupee or 50 paise! And that is exactly what a “self-interested” player 1 should do: keep almost all the money and give a miserly sum to his or her opponent.
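The same backward-induction logic extends from two offers to every possible split. A small sketch over 1-rupee increments, assuming a purely self-interested responder (who rejects only when offered nothing, since at zero she is indifferent):

```python
# Backward induction over all 1-rupee splits of 100 rupees.
# Assumption: the responder accepts any strictly positive share.

def spe_offer(total=100):
    best = None
    for x in range(total + 1):       # x = player 2's share
        accepts = x > 0              # self-interested responder
        p1 = (total - x) if accepts else 0
        if best is None or p1 > best[1]:
            best = (x, p1)
    return best  # (player 2's share, player 1's payoff)

print(spe_offer())  # (1, 99)
```

The prediction is stark: offer the smallest positive amount, here 1 rupee, and keep 99, precisely the (99, 1) split discussed next.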

Are you getting the feeling that this cannot be right? If yes, then you are probably thinking that player 2 should not accept an atrocious offer like (99, 1), where player 1 is totally *scamming* him. Well, the experimental results are not too far from what you are thinking. Various experimental studies conducted in several countries suggest that, on average, proposers offer a split of about (60, 40) to the responders. Moreover, offers are rejected as often as 15–20% of the time. You too, like many responders in lab experiments, would reject some offers as “too low”. You too, like many proposers in lab experiments, would offer a more generous split, fearing such rejections or in the spirit of being “fair”.

Are you and those other participants thinking *rationally?* Perhaps. We just have to broaden the definition of rationality here by not associating self-interest only with monetary gains or losses. Beyond the monetary additions to your wallet, you are driven in your decision-making by one or more behavioural attributes such as altruism, fairness, reciprocity or vindictiveness. And in order to capture the complete picture of the game, the payoff matrix or the game tree has to be written with “full payoffs”, not just the monetary rewards.
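As a sketch of what a “full payoff” might look like, here is one common way to fold fairness concerns into the monetary reward: an inequity-aversion utility in the spirit of the Fehr–Schmidt model. The weights `alpha` (dislike of being behind) and `beta` (guilt at being ahead) are assumed values for illustration, not estimates from the studies above:

```python
# Illustrative "full payoff": money minus the disutility of unequal outcomes.
# alpha and beta are assumed weights, not values from the article.

def full_payoff(own, other, alpha=0.8, beta=0.3):
    envy  = alpha * max(other - own, 0)   # disadvantageous inequality
    guilt = beta * max(own - other, 0)    # advantageous inequality
    return own - envy - guilt

# With these weights, the responder prefers rejecting (99, 1):
print(full_payoff(1, 99))    # 1 - 0.8*98 = -77.4, worse than the 0 from rejecting
print(full_payoff(40, 60))   # 40 - 0.8*20 = 24.0, so accepting (60, 40) still pays
```

Run backward induction on these full payoffs instead of the monetary ones, and rejected lowball offers stop looking irrational: they are best responses to a richer utility function.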