Does game theory have predictive power?  The place to start if you want to examine this question is the theory of zero-sum games, where the predictions are robust: you play the minimax strategy, the one that maximizes your worst-case payoff.  (In a zero-sum game this is also exactly what Nash equilibrium prescribes.)

The theory has some striking and counterintuitive implications.  Here’s one.  Take the game rock-scissors-paper.  The loser pays $1 to the winner.  As you would expect, the theory says each should play each strategy with 1/3 probability.  This ensures that each player is indifferent among all three strategies.
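
If you want to see the 1/3’s drop out of the machinery, here is a minimal sketch (in Python, using scipy’s linear-programming routine) that computes the minimax mix directly from the $1-per-win payoff matrix:

```python
import numpy as np
from scipy.optimize import linprog

# My payoffs in rock-scissors-paper: rows are my move, columns are his.
A = np.array([[ 0,  1, -1],   # rock
              [-1,  0,  1],   # scissors
              [ 1, -1,  0]])  # paper

# Minimax: pick a mix x and a guaranteed value v to maximize v subject to
# x @ A[:, j] >= v for every column j he might play.  Variables are
# (x_rock, x_scissors, x_paper, v); linprog minimizes, so the objective is -v.
n = A.shape[0]
c = np.array([0, 0, 0, -1])
A_ub = np.hstack([-A.T, np.ones((n, 1))])   # each row encodes v - x @ A[:, j] <= 0
b_ub = np.zeros(n)
A_eq = np.array([[1, 1, 1, 0]])             # probabilities sum to one
b_eq = np.array([1])
bounds = [(0, 1)] * n + [(None, None)]      # v itself is unbounded

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("minimax mix:", res.x[:n].round(3), "value:", res.x[n].round(3))
# -> mix [0.333 0.333 0.333], value 0: mix equally and the game is fair.
```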

Now, for the counterintuitive part, suppose that an outsider will give you an extra 50 cents whenever you play rock, regardless of the other guy’s choice (not a zero-sum game anymore, but bear with me for a minute).  What happens now?  Against his old mix you are no longer indifferent among your three strategies, so your opponent’s behavior must change: he must now play paper with higher probability in order to reduce your incentive to play rock and restore your indifference.  Your behavior, on the other hand, is unchanged, because his payoffs are the same as before and it is your mix that has to keep him indifferent.
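
Here is a quick check of that claim under the payoffs above (win $1, lose $1, tie nothing, plus the outside 50-cent bonus for rock).  Your three indifference conditions pin down his new mix, and it loads fully half the weight on paper:

```python
import numpy as np

# My payoffs with the 50-cent bonus for rock (his payoffs are unchanged).
# His mix (r, s, p) over rock, scissors, paper must make me indifferent:
#   rock: 0.5 + s - p      scissors: -r + p      paper: r - s
M = np.array([[ 1, 1, -2],   # rock = scissors   ->   r + s - 2p = -0.5
              [-2, 1,  1],   # scissors = paper  ->  -2r + s + p = 0
              [ 1, 1,  1]])  # probabilities sum to one
b = np.array([-0.5, 0, 1])
r, s, p = np.linalg.solve(M, b)
print(f"his mix: rock {r:.3f}, scissors {s:.3f}, paper {p:.3f}")
# -> rock 0.333, scissors 0.167, paper 0.500

# My own mix is pinned down by HIS payoffs, which did not change,
# so it stays at 1/3 on each strategy.
```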

Things are even weirder if we change both players’ payoffs at the same time.  Take the game matching pennies.  You and your opponent each hold a penny and secretly place it with either heads or tails facing up.  If the pennies match you get $1 from your opponent.  If they don’t match you pay him $1.

Suppose we change the payoffs so that you now receive $2 from your opponent if you match heads-heads.  All other outcomes are the same as before.  (The game is still zero-sum.)  What happens?  As with rock-scissors-paper, your opponent must play heads less often in order to reduce your incentive to play heads.  But since your opponent’s payoffs have also changed, your behavior must change too.  In fact you must play heads with lower probability: at your old 50-50 mix the new payoffs would make him strictly prefer tails, so your mix has to shift toward tails to keep him indifferent.
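
Here is the matching-pennies calculation as a small sketch.  In a 2x2 zero-sum game each player’s mix is pinned down by the other player’s indifference condition, and bumping the heads-heads payoff to $2 moves both probabilities of heads from 1/2 down to 2/5:

```python
import numpy as np

def mixed_equilibrium(A):
    """Interior mixed equilibrium of a 2x2 zero-sum game.

    A[i, j] is the row player's payoff (the column player gets -A[i, j]).
    Each player's mix is chosen to make the OTHER player indifferent.
    Returns (row player's P(row 0), column player's P(column 0)).
    """
    mix = lambda M: (M[1, 1] - M[0, 1]) / (M[0, 0] - M[0, 1] - M[1, 0] + M[1, 1])
    return mix(-A.T), mix(A)   # my mix from his payoffs, his mix from mine

# Original matching pennies: each side plays heads half the time.
p, q = mixed_equilibrium(np.array([[1., -1.], [-1., 1.]]))
print(f"P(I play heads) = {p:.2f}, P(he plays heads) = {q:.2f}")   # both 0.50

# Heads-heads now pays me $2: we both drop to heads only 2/5 of the time.
p, q = mixed_equilibrium(np.array([[2., -1.], [-1., 1.]]))
print(f"P(I play heads) = {p:.2f}, P(he plays heads) = {q:.2f}")   # both 0.40
```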

How can we examine this prediction in the field?  There is a big problem: usually we observe only outcomes, not the players’ payoffs, which might vary day to day depending on conditions known only to them.  To see what I mean, consider the game-within-a-game between a pitcher and a runner on first base in baseball.  The runner wants to steal second; the pitcher wants to stop him.  The pitcher decides whether to try a pick-off or pitch to the batter.  The runner decides whether to try to steal.

When he runs and the pitcher throws to the plate, the chance that he beats the catcher’s throw to second depends on how fast he is, how good the catcher is, and other details.  Thus the payoff to the strategy pair (steal, pitch) is something we cannot observe.  We just see whether he steals successfully.

But there is still a testable prediction, even without knowing anything about these payoffs.  By direct analogy to matching pennies, a faster runner will try to steal less often than a slower runner, and the pitcher will more often try to pick off the faster runner at first base.  Therefore, in the data there will be correlation between the pitcher’s behavior and the runner’s: if the pitcher is throwing to first more frequently, that points to a faster runner, which in turn predicts that the runner tries to steal less often.
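
To illustrate, here is a sketch with made-up payoffs (the real ones are exactly what we cannot observe): the runner’s speed stands in for his chance of beating the throw, a successful steal is worth +1, getting caught costs 1, and a pickoff throw against a runner who stays put costs the pitcher a little.  The equilibrium reuses the indifference formula from the matching-pennies sketch above:

```python
import numpy as np

def mixed_equilibrium(A):
    """Interior mixed equilibrium of a 2x2 zero-sum game with row payoffs A.
    Returns (row player's P(row 0), column player's P(column 0))."""
    mix = lambda M: (M[1, 1] - M[0, 1]) / (M[0, 0] - M[0, 1] - M[1, 0] + M[1, 1])
    return mix(-A.T), mix(A)

def runner_game(speed, caught=-1.0, balk_risk=0.1):
    """Hypothetical runner payoffs (rows: steal, stay; columns: pickoff, pitch).

    speed = the runner's chance of beating the throw to second, so stealing
    against a pitch is worth 2*speed - 1 in expectation; getting picked off
    costs `caught`; a wasted pickoff throw hands the runner a small
    `balk_risk` benefit.  Assumes speed > 0.5 so stealing is worth trying.
    """
    return np.array([[caught,    2 * speed - 1],   # steal
                     [balk_risk, 0.0          ]])  # stay

for speed in (0.6, 0.7, 0.8, 0.9):
    p_steal, q_pickoff = mixed_equilibrium(runner_game(speed))
    print(f"speed {speed:.1f}:  P(steal) {p_steal:.3f}   P(pickoff) {q_pickoff:.3f}")

# Faster runners attempt fewer steals but draw more throws to first, so across
# runner-pitcher pairs the two frequencies should move in opposite directions.
```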

This correlation across players (in aggregate data) is a prediction that I believe has never been tested.  (It’s strange to think of non-cooperative play of a zero-sum game as generating correlation across players.)  And while many studies have failed to reject some basic implications of zero-sum game theory, I would be very surprised if this prediction were not soundly rejected in the data.

(The pitcher-runner game may not be the ideal place to run this test.  Can anyone think of another binary zero-sum game for which there is good data?)