By Avinash Dixit and Barry Nalebuff

Game theory is the science of strategy. It attempts to determine mathematically and logically the actions that “players” should take to secure the best outcomes for themselves in a wide array of “games.” The games it studies range from chess to child rearing and from tennis to takeovers. But the games all share the common feature of interdependence. That is, the outcome for each participant depends on the choices (strategies) of all. In so-called zero-sum games the interests of the players conflict totally, so that one person’s gain always is another’s loss. More typical are games with the potential for either mutual gain (positive sum) or mutual harm (negative sum), as well as some conflict.

Game theory was pioneered by Princeton mathematician John von Neumann. In the early years the emphasis was on games of pure conflict (zero-sum games). Other games were considered in a cooperative form. That is, the participants were supposed to choose and implement their actions jointly. Recent research has focused on games that are neither zero sum nor purely cooperative. In these games the players choose their actions separately, but their links to others involve elements of both competition and cooperation.

Games are fundamentally different from decisions made in a neutral environment. To illustrate the point, think of the difference between the decisions of a lumberjack and those of a general. When the lumberjack decides how to chop wood, he does not expect the wood to fight back; his environment is neutral. But when the general tries to cut down the enemy’s army, he must anticipate and overcome resistance to his plans. Like the general, a game player must recognize his interaction with other intelligent and purposive people. His own choice must allow both for conflict and for possibilities for cooperation.

The essence of a game is the interdependence of player strategies. There are two distinct types of strategic interdependence: sequential and simultaneous. In the former the players move in sequence, each aware of the others’ previous actions. In the latter the players act at the same time, each ignorant of the others’ actions.

A general principle for a player in a sequential-move game is to look ahead and reason back. Each player should figure out how the other players will respond to his current move, how he will respond in turn, and so on. The player anticipates where his initial decisions will ultimately lead and uses this information to calculate his current best choice. When thinking about how others will respond, he must put himself in their shoes and think as they would; he should not impose his own reasoning on them.

In principle, any sequential game that ends after a finite sequence of moves can be “solved” completely. We determine each player’s best strategy by looking ahead to every possible outcome. Simple games, such as tic-tac-toe, can be solved in this way and are therefore not challenging. For many other games, such as chess, the calculations are too complex to perform in practice—even with computers. Therefore, the players look a few moves ahead and try to evaluate the resulting positions on the basis of experience.
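The principle of looking ahead and reasoning back can be sketched in a few lines of code. The tiny game tree and its payoffs below are invented purely for illustration; solving a real game such as chess would use the same recursion over an astronomically larger tree.

```python
# Backward induction on a toy two-player sequential game.
# An internal node is (mover, children); a leaf is a payoff pair
# (payoff to A, payoff to B).  All numbers are hypothetical.

def solve(node):
    """Return the payoff pair reached under optimal play from this node."""
    if isinstance(node[0], str):          # internal node: (mover, children)
        player, children = node
        idx = 0 if player == "A" else 1   # which payoff the mover maximizes
        # Look ahead: solve each child, then pick the best for the mover.
        return max((solve(c) for c in children), key=lambda p: p[idx])
    return node                           # leaf: the payoffs themselves

# A moves first (Left or Right); B observes A's move and responds.
game = ("A", [
    ("B", [(3, 1), (1, 2)]),   # outcomes after A plays Left
    ("B", [(2, 1), (0, 0)]),   # outcomes after A plays Right
])

print(solve(game))   # (2, 1)
```

Note that A does not get the payoff 3 even though it appears in the tree: looking ahead, A sees that B would respond to Left with the move giving B 2 (leaving A only 1), so A plays Right instead.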

In contrast to the linear chain of reasoning for sequential games, a game with simultaneous moves involves a logical circle. Although the players act at the same time, in ignorance of the others’ current actions, each must be aware that there are other players who are similarly aware, and so on. The thinking goes: “I think that he thinks that I think . . .” Therefore, each must figuratively put himself in the shoes of all and try to calculate the outcome. His own best action is an integral part of this overall calculation.

This logical circle is squared (the circular reasoning is brought to a conclusion) using a concept of equilibrium developed by the Princeton mathematician John Nash. We look for a set of choices, one for each player, such that each person’s strategy is best for him when all others are playing their stipulated best strategies. In other words, each picks his best response to what the others do.
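This mutual best-response condition is mechanical enough to check directly. The sketch below finds all pure-strategy equilibria of a two-player game given its payoff matrices; the coordination-game payoffs are made up for illustration.

```python
# Find pure-strategy Nash equilibria by brute force: a cell (i, j) is an
# equilibrium when row i is a best response to column j and vice versa.
# A[i][j] is the row player's payoff, B[i][j] the column player's.

def pure_nash(A, B):
    equilibria = []
    rows, cols = len(A), len(A[0])
    for i in range(rows):
        for j in range(cols):
            row_best = all(A[i][j] >= A[k][j] for k in range(rows))
            col_best = all(B[i][j] >= B[i][k] for k in range(cols))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# A coordination game (hypothetical payoffs): both players want to choose
# the same option, and option 0 is worth more to both.
A = [[2, 0], [0, 1]]
B = [[2, 0], [0, 1]]
print(pure_nash(A, B))   # [(0, 0), (1, 1)]
```

This example also shows that equilibrium need not be unique: both coordinated outcomes are equilibria, since neither player gains by deviating alone.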

Sometimes one person’s best choice is the same no matter what the others do. This is called a “dominant strategy” for that player. At other times, one player has a uniformly bad choice—a “dominated strategy”—in the sense that some other choice is better for him no matter what the others do. The search for an equilibrium should begin by looking for dominant strategies and eliminating dominated ones.
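The elimination of dominated strategies can likewise be automated. The routine below repeatedly drops any row strictly worse than another row against every column; the payoff numbers are invented, and the same routine applies to the column player on the transposed game.

```python
# Iterated elimination of strictly dominated strategies for the row player.
# A[i][j] is the row player's payoff; the matrix is hypothetical.

def eliminate_dominated_rows(A):
    """Return the indices of rows that survive iterated elimination."""
    rows = list(range(len(A)))
    changed = True
    while changed:
        changed = False
        for r in rows:
            for s in rows:
                # Row r is strictly dominated by row s if s is better
                # no matter which column the opponent chooses.
                if r != s and all(A[s][j] > A[r][j] for j in range(len(A[0]))):
                    rows.remove(r)
                    changed = True
                    break
            if changed:
                break
    return rows

A = [[4, 1],    # row 0
     [3, 0],    # row 1: dominated by row 0 (worse in every column)
     [2, 5]]    # row 2: not dominated
print(eliminate_dominated_rows(A))   # [0, 2]
```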

When we say that an outcome is an equilibrium, there is no presumption that each person’s privately best choice will lead to a collectively optimal result. Indeed, there are notorious examples, such as the prisoners’ dilemma (see below), where the players are drawn into a bad outcome by each following his best private interests.

Nash’s notion of equilibrium remains an incomplete solution to the problem of circular reasoning in simultaneous-move games. Some games have many such equilibria while others have none. And the dynamic process that can lead to an equilibrium is left unspecified. But in spite of these flaws, the concept has proved extremely useful in analyzing many strategic interactions.

It is often thought that the application of game theory requires all players to be hyperrational. The theory makes no such claims. Players may be spiteful or envious as well as charitable and empathetic. Recall George Bernard Shaw’s amendment to the Golden Rule: “Do not do unto others as you would have them do unto you. Their tastes may be different.” In addition to different motivations, other players may have different information. When calculating an equilibrium or anticipating the response to your move, you always have to take the other players as they are, not as you are.

The following examples of strategic interaction illustrate some of the fundamentals of game theory.

The prisoners’ dilemma. Two suspects are questioned separately, and each can confess or keep silent. If suspect A keeps silent, then suspect B can get a better deal by confessing. If A confesses, B had better confess to avoid especially harsh treatment. Confession is B’s dominant strategy. The same is true for A. Therefore, in equilibrium both confess. Both would fare better if they both stayed silent. Such cooperative behavior can be achieved in repeated plays of the game because the temporary gain from cheating (confession) can be outweighed by the long-run loss due to the breakdown of cooperation. Strategies such as tit-for-tat are suggested in this context.
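The repeated version of the dilemma can be simulated in a few lines. The payoff numbers below are standard illustrative values (higher is better for the player receiving them), not taken from any particular case.

```python
# Repeated prisoners' dilemma: tit-for-tat versus an unconditional
# confessor, and tit-for-tat against itself.  Payoffs are illustrative.
C, D = "silent", "confess"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first; thereafter copy the opponent's last move."""
    return C if not opponent_history else opponent_history[-1]

def always_confess(opponent_history):
    return D

def play(p1, p2, rounds=10):
    hist1, hist2 = [], []          # moves made so far by each player
    score1 = score2 = 0
    for _ in range(rounds):
        m1, m2 = p1(hist2), p2(hist1)   # each sees the other's past moves
        hist1.append(m1)
        hist2.append(m2)
        a, b = PAYOFF[(m1, m2)]
        score1, score2 = score1 + a, score2 + b
    return score1, score2

print(play(tit_for_tat, always_confess))   # (9, 14): exploited once, then punishes
print(play(tit_for_tat, tit_for_tat))      # (30, 30): cooperation every round
```

Tit-for-tat loses only the first round to the unconditional confessor, then withholds cooperation; against a like-minded player it sustains the mutually better silent outcome throughout.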

Mixing moves. In some situations of conflict, any systematic action will be discovered and exploited by the rival. Therefore, it is important to keep the rival guessing by mixing your moves. Typical examples arise in sports—whether to run or to pass in a particular situation in football, or whether to hit a passing shot crosscourt or down the line in tennis. Game theory quantifies this insight and details the right proportions of such mixtures.
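The "right proportions" come from making the rival indifferent between his responses, so that no response of his can exploit you. The sketch below solves the 2×2 zero-sum case; the tennis success rates are invented for illustration.

```python
# Optimal mixing in a 2x2 zero-sum game.  a[i][j] is the row player's
# payoff (here, probability the passing shot succeeds) when row plays i
# and column plays j.  All percentages are hypothetical.

def optimal_mix(a):
    """Probability of playing row 0 that leaves the column player
    indifferent between his two responses (valid when neither side
    has a dominant move)."""
    return (a[1][1] - a[1][0]) / (a[0][0] - a[0][1] - a[1][0] + a[1][1])

# Rows: hit crosscourt / down the line.
# Columns: rival covers crosscourt / covers the line.
success = [[0.5, 0.8],   # crosscourt: 50% if covered, 80% if not
           [0.9, 0.2]]   # down the line: 90% if uncovered, 20% if covered

p = optimal_mix(success)
print(round(p, 6))   # 0.7: hit crosscourt about 70% of the time
```

At this mixture the shot succeeds 62 percent of the time no matter which side the rival covers, which is exactly the point: any systematic deviation would give the rival something to exploit.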

Strategic moves. A player can use threats and promises to alter other players’ expectations of his future actions, and thereby induce them to take actions favorable to him or deter them from making moves that harm him. To succeed, the threats and promises must be credible. This is problematic because when the time comes, it is generally costly to carry out a threat or make good on a promise. Game theory studies several ways to enhance credibility. The general principle is that it can be in a player’s interest to reduce his own freedom of future action. By so doing, he removes his own temptation to renege on a promise or to forgive others’ transgressions.

For example, Cortés scuttled all but one of his own ships on his arrival in Mexico, purposefully eliminating retreat as an option. Without ships to sail home, Cortés would either succeed in his conquest or perish. Although his soldiers were vastly outnumbered, this threat to fight to the death demoralized the opposition, who chose to retreat rather than fight such a determined opponent. Polaroid Corporation used a similar strategy when it purposefully refused to diversify out of the instant photography market. It was committed to a life-or-death battle against any intruder in the market. When Kodak entered the instant photography market, Polaroid put all its resources into the fight; fourteen years later, Polaroid won a nearly billion-dollar lawsuit against Kodak and regained its monopoly market. (Polaroid’s focus on instant film products later proved costly when the company failed to diversify into digital photography.)

Another way to make threats credible is to employ the adventuresome strategy of brinkmanship—deliberately creating a risk that if other players fail to act as you would like them to, the outcome will be bad for everyone. Introduced by Thomas Schelling in The Strategy of Conflict, brinkmanship “is the tactic of deliberately letting the situation get somewhat out of hand, just because its being out of hand may be intolerable to the other party and force his accommodation.” When mass demonstrators confronted totalitarian governments in Eastern Europe and China, both sides were engaging in just such a strategy. Sometimes one side backs down and concedes defeat; sometimes tragedy results when they fall over the brink together.

Bargaining. Two players decide how to split a pie. Each wants a larger share, and both prefer to achieve agreement sooner rather than later. When the two take turns making offers, the principle of looking ahead and reasoning back determines the equilibrium shares. Agreement is reached at once, but the cost of delay governs the shares. The player more impatient to reach agreement gets a smaller share.
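For the standard alternating-offers model, looking ahead and reasoning back yields a closed-form split in terms of each player's discount factor (how much he values the pie one period later). The discount factors below are hypothetical.

```python
# Equilibrium shares in alternating-offers bargaining over a pie of size 1
# (the Rubinstein model).  d1 and d2 are per-period discount factors;
# a lower factor means more impatience.  Numbers are illustrative.

def shares(d1, d2):
    """Player 1 proposes first.  In equilibrium each proposer offers the
    other exactly the discounted value of waiting, so agreement is
    immediate and player 1's share is (1 - d2) / (1 - d1 * d2)."""
    x1 = (1 - d2) / (1 - d1 * d2)
    return x1, 1 - x1

patient, impatient = shares(0.9, 0.5)   # player 2 is far more impatient
print(round(patient, 4), round(impatient, 4))   # 0.9091 0.0909
```

With equal discount factors the split is nearly even (the first mover keeps a small edge), but as player 2 grows more impatient his share shrinks sharply, matching the principle stated above.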

Concealing and revealing information. When one player knows something that others do not, sometimes he is anxious to conceal this information (his hand in poker) and at other times he wants to reveal it credibly (a company’s commitment to quality). In both cases the general principle is that actions speak louder than words. To conceal information, mix your moves. Bluffing in poker, for example, must not be systematic. Recall Winston Churchill’s dictum of hiding the truth in a “bodyguard of lies.” To convey information, use an action that is a credible “signal,” something that would not be desirable if the circumstances were otherwise. For example, an extended warranty is a credible signal to the consumer that the firm believes it is producing a high-quality product.

Recent advances in game theory have succeeded in describing and prescribing appropriate strategies in several situations of conflict and cooperation. But the theory is far from complete, and in many ways the design of successful strategy remains an art.

About the Authors

Avinash Dixit is the John J. F. Sherrerd ’52 University Professor of Economics at Princeton University. Barry Nalebuff is the Milton Steinbach Professor of Management at Yale University’s School of Management. They are coauthors of Thinking Strategically.

Introductory

Ankeny, Nesmith. Poker Strategy: Winning with Game Theory. New York: Basic Books, 1981.
Brams, Steven. Game Theory and Politics. New York: Free Press, 1979.
Brandenburger, Adam, and Barry Nalebuff. Co-opetition. New York: Doubleday, 1996.
Davis, Morton. Game Theory: A Nontechnical Introduction. 2d ed. New York: Basic Books, 1983.
Dixit, Avinash, and Barry Nalebuff. Thinking Strategically: A Competitive Edge in Business, Politics, and Everyday Life. New York: W. W. Norton, 1991.
Dixit, Avinash, and Susan Skeath. Games of Strategy. 2d ed. New York: W. W. Norton, 2004.
“Game Theory.” Wikipedia. Online at: http://en.wikipedia.org/wiki/Game_Theory.
Luce, Duncan, and Howard Raiffa. Games and Decisions. New York: Wiley, 1957.
McDonald, John. Strategy in Poker, Business and War. New York: W. W. Norton, 1950.
Osborne, Martin. An Introduction to Game Theory. New York: Oxford University Press, 2003.
Porter, Michael. Competitive Strategy. New York: Free Press, 1982.
Raiffa, Howard. The Art and Science of Negotiation. Cambridge: Harvard University Press, 1982.
Riker, William. The Art of Political Manipulation. New Haven: Yale University Press, 1986.
Schelling, Thomas. The Strategy of Conflict. Cambridge: Harvard University Press, 1960.
Williams, J. D. The Compleat Strategyst. Rev. ed. New York: McGraw-Hill, 1966.

Advanced

Fudenberg, Drew, and Jean Tirole. Game Theory. Cambridge: MIT Press, 1991.
Gibbons, Robert. Game Theory for Applied Economists. Princeton: Princeton University Press, 1992.
Myerson, Roger. Game Theory: Analysis of Conflict. Cambridge: Harvard University Press, 1997.
Neumann, John von, and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton: Princeton University Press, 1947.
Ordeshook, Peter. Game Theory and Political Theory. Cambridge: Cambridge University Press, 1986.
Osborne, Martin, and Ariel Rubinstein. A Course in Game Theory. Cambridge: MIT Press, 1994.
Shubik, Martin. Game Theory in the Social Sciences. Cambridge: MIT Press, 1982.