You are currently browsing the tag archive for the ‘sport’ tag.
The World Cup starts tomorrow and I just filled out my bracket. In academia Americans are a minority and people are intensely nationalistic. So the optimal bracket strategy is to have the USA advance as far as I can before even I burst out laughing (it turns out that’s the semi-finals this year) and also to give preference to under-represented countries. Based on a cursory survey of our department’s demographics, the team that maximizes quality per department representative is Spain. So Spain is my team to win it all this year.
The World Cup is paradoxical because the group stage is exciting and the elimination stage is extremely boring. There are probably many reasons for this but people often focus on the penalty shootout. You hear arguments like this: playing it safe gives you essentially a coin flip, and if the other team is playing it safe, taking risks and playing offensively can actually be worse than waiting for the coin flip.
I have heard proposals to hold the penalty shootout before extra time. The winner of the shootout will be the winner of the match if it remains tied after extra time. The uncertainty is resolved first, then they play.
The rule would have ambiguous effects on the quality of play. For sure, the team that won the shootout would play defensively and the disadvantaged team would be forced to play an attacking game. There would be exactly one team attacking.
But that would be less exciting than a game in which both are attacking so the rule change would be a net improvement only if most extra-time games would otherwise have neither team attacking.
Here is a theoretical analysis of the question by Juan Carrillo. I am not sure I can summarize his conclusions so help would be appreciated. Here is an empirical analysis.
In a famous paper, Mark Walker and John Wooders tested a central hypothesis of game theory using data on serving strategy at Wimbledon. The probability of winning a point conditional on serving out wide should equal the probability conditional on serving down the middle. They find support for this in the data.
A second hypothesis doesn’t fare so well. Walker and Wooders suggest that the location of the serve should be statistically independent over time, and this is not borne out in the data. The reason for the theoretical prediction is straightforward and follows from the usual zero-sum logic. The server is trying to be unpredictable. Any serial correlation would allow the returner to improve his prediction of where the serve is coming and prepare.
But this assumes there are no payoff spillovers from point to point. However it’s probably true that having served to the left on the first serve (and say faulted) is effectively “practice” and this makes the server momentarily better than average at serving to the left again. If this is important in practice, what effect would it have on the time series of serves?
It has two effects. To understand the effects it is important to remember that optimal play in these zero-sum games is equivalent to choosing a random strategy that makes your opponent indifferent between his two strategies. For the returner this means randomly favoring the forehand or backhand side in order to equalize the server’s payoffs from the two serving directions. Since the server now has a boost from serving, say, out wide again, the returner must increase his probability of guessing that direction in order to balance that out. This is a change in the returner’s behavior, but not yet any change in the serving probabilities.
The boost for the server is a temporary disadvantage for the returner. For example, if he guesses down the line, he is more likely to lose the point now than before. He may also be more likely to lose the point even if he guesses out wide, but let’s say the first outweighs the second. Then the returner now prefers to guess out wide. The server has to adjust his randomization in order to restore indifference for the returner. He does this by increasing the probability of serving down the line.
Thus, a first serve fault out wide increases the probability that the next serve is down the line. In fact, this kind of “excessive negative correlation” is just what Walker and Wooders found. (Although I am not sure how things break down within-points versus across-points and things are more complicated when we consider ad-court serves to deuce-court serves.)
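The mechanics can be checked in a small 2x2 sketch. All the numbers below are illustrative assumptions, not estimates from data: the “practice boost” raises the server’s win probability on a repeat wide serve, with a bigger boost when the returner guesses down the line, as in the argument above.

```python
# Entries are the server's probability of winning the point
# (first key: serve direction, second key: returner's guess).
# Baseline was symmetric (0.50 when guessed right, 0.70 when guessed
# wrong), so both players mixed 50/50. After a fault out wide, the
# wide serve gets a boost, larger against a down-the-line guess.
win = {
    ("wide", "wide"): 0.52,   # boosted from 0.50
    ("wide", "line"): 0.78,   # boosted from 0.70 (the larger boost)
    ("line", "wide"): 0.70,
    ("line", "line"): 0.50,
}

# Returner guesses wide with probability g, chosen to equalize the
# server's payoffs from the two directions.
g = (win[("wide", "line")] - win[("line", "line")]) / (
    win[("wide", "line")] - win[("line", "line")]
    + win[("line", "wide")] - win[("wide", "wide")]
)

# Server serves wide with probability s, chosen to equalize the
# returner's payoffs from the two guesses.
s = (win[("line", "wide")] - win[("line", "line")]) / (
    win[("line", "wide")] - win[("line", "line")]
    + win[("wide", "line")] - win[("wide", "wide")]
)

assert s < 0.5 < g   # server shifts down the line, returner shifts wide
```

With these numbers the returner guesses wide about 61 percent of the time (up from 50), while the server goes wide only about 43 percent of the time: the fault out wide pushes the next serve down the line.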
(lunchtime conversation with NU faculty acknowledged, especially a comment by Alessandro Pavan.)
Younger siblings are said to be more prone to risky behaviors than their elders. This usually means stuff like drugs and sex, but now it means stealing bases:
For more than 90 percent of sibling pairs who had played in the major leagues throughout baseball’s long recorded history, including Joe and Dom DiMaggio and Cal and Billy Ripken, the younger brother (regardless of overall talent) tried to steal more often than his older brother.
Cap tap: Ron Siegel.
Yesterday’s ruling came out of an appeal in a narrow case brought by American Needle, Inc., which complained about exclusive licensing of apparel/logos to Reebok. The league formed a single entity which centralized licensing for all teams. The Supreme Court ruled that this constituted concerted action by separate profit-maximizing entities and therefore falls under the purview of Section 1 of the Sherman Act.
Here are the main points from the decision, which is an interesting read.
- The ruling suggests a broader scope than just licensing. The court notes that teams compete with one another not just for apparel sales but for gate receipts, fans, and managerial and player personnel.
- Rule of reason is suggested for determining which cooperative activities are permissible and which are not.
- For consideration of any activity, whether or not the teams should be considered competitors will be based not on legalistic distinctions like whatever contracts are in place. Instead the judgment is made on the basis of “a functional consideration of how they actually operate.”
It’s easy to interpret these as opening the door to challenges to the systems of collective bargaining in place in all professional sports leagues.

I blogged about this before and in honor of the start of the French Open I gave it some thought again and here are two ideas.
Deuce. Each game is a race to 4 points. (And if you are British 4 = 50.) But you have to win by 2. Conditional on reaching a 3-3 game, the deuce scoring system helps the stronger player by comparison to a flat race to 4. In fact, if being a stronger player means you have a higher probability of winning each point, then any scoring system in which you have to win by n is better for the stronger player than the system where you only have to win by n-1.
You can think about a random walk, starting at zero (deuce) with a larger probability of moving up than down, and consider the event that it reaches n or -n. The relative likelihood of hitting n before -n is increasing in n.
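That random-walk claim has a quick sanity check using the gambler’s-ruin closed form for a biased walk (p = 0.55 below is just an illustrative point-win probability):

```python
# Starting from deuce (position 0), with up-step probability p and
# down-step probability q = 1 - p, the chance of hitting +n before -n is
#   P = 1 / (1 + (q/p)**n),
# which is increasing in n whenever p > 1/2.

def win_by_n(p: float, n: int) -> float:
    q = 1 - p
    return 1 / (1 + (q / p) ** n)

p = 0.55
probs = [win_by_n(p, n) for n in range(1, 6)]

# Every extra "win by" margin helps the stronger player.
assert all(a < b for a, b in zip(probs, probs[1:]))
```

For p = 0.55 the stronger player’s chance rises from 0.55 (win by 1) to about 0.60 (win by 2) and keeps climbing with n.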
This is confounded by the fact that the server has an advantage even if he is the weaker player. But it will average out across service-games.
Grouping scoring into games and sets. Suppose that being a stronger player means that you are better at winning the crucial points. Then grouped scoring makes it clear which are the crucial points. To take an extreme example, suppose that the stronger player has one freebie: in any match he can pick one point and win that point for sure.
In a flat (ungrouped) scoring system, all points are equal and it doesn’t matter where you spend the freebie. And it doesn’t change your chance of winning by very much. But in grouped scoring you can use your freebie at game- or set-point. And this has a big impact on your winning probability.
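The flat-scoring half of this claim can be verified exactly with a small dynamic program. In a plain first-to-4 race (the point-win probability p = 0.55 is an arbitrary choice), spending the freebie on the very first point and saving it for match point down give identical win probabilities:

```python
from functools import lru_cache

# Exact check that in a flat race (first to 4 points, no deuce) it does
# not matter when the stronger player spends a one-point "freebie."
P, TARGET = 0.55, 4

def win_prob(spend_now):
    """spend_now(a, b) -> True if the freebie is spent at score (a, b)."""
    @lru_cache(maxsize=None)
    def w(a, b, free):
        if a == TARGET:
            return 1.0
        if b == TARGET:
            return 0.0
        if free and spend_now(a, b):
            return w(a + 1, b, False)   # freebie: win this point for sure
        return P * w(a + 1, b, free) + (1 - P) * w(a, b + 1, free)
    return w(0, 0, True)

early = win_prob(lambda a, b: True)            # spend on the first point
late = win_prob(lambda a, b: b == TARGET - 1)  # spend only at match point down
assert abs(early - late) < 1e-9                # timing is irrelevant
```

Both policies give a win probability of about 0.745; the timing only starts to matter once points are grouped into games and sets.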
Conjecture: freebies will be optimally used when you are game- or set-point down, not when it is set-point in your favor. My reasoning is that if you save your freebie when you have set-point, you will still win the set with high probability (especially because of deuce.) If you switch to using it when you are set-point down, it’s going to make a difference in the cases when there is a reversal. Since you are the stronger player and you win each point with higher probability, the reversals in your favor have higher probability.
Any thoughts on the conjecture? It should have implications for data. The stronger players do better when they are ad-down than when they have the ad. And across matches, their superiority over weaker players is exaggerated in the ad-down points.
My French Open forecast: This could be the year when we have a really interesting Federer-Nadal final.

Tyler Cowen tweeted:
Why do chess players hold their heads hard, with their hands, when they are thinking? If it works, why don’t more thinkers do it?
To prevent overheating of course. You’ll notice that they typically extend their fingers and cover their foreheads which is the hottest part. They are maximizing surface area in order to increase heat dissipation.
Here is a suggestion for how to super-cool your cranium and over-clock your brain. On a more serious note, here is a pipe that is surgically implanted in the skull of epileptics to reduce the intensity of seizures.
Jonathan Weinstein is blogging now at The Leisure of the Theory Class. His first post is a nice one on a common fallacy in basketball strategy.
if a player has a dangerous number of fouls, the coach will voluntarily bench him for part of the game, to lessen the chance of fouling out. Coaches seem to roughly use the rule of thumb that a player with n fouls should sit until n/6 of the game has passed. Allowing a player to play with 3 fouls in the first half is a particular taboo. On rare occasions when this taboo is broken, the announcers will invariably say something like, “They’re taking a big risk here; you really don’t want him to get his 4th.”
The fallacy is that, in trying to avoid the mere risk of losing minutes from fouling out, the common strategy loses minutes for sure by benching him.
Jonathan discusses a couple of caveats in his post and here is another one. The best players rise to the occasion and overcome deficits as necessary. But they need to know how much of a deficit to overcome.
Suppose you know that a player will foul out in 1 minute. There are 5 minutes to go in the game. If you keep him in the game now he will have to guess how many points the opponents will score in the last 4 and try to beat that. This entails risk because the opponents might do better than expected.
If you bench him until there is 1 minute left then all the uncertainty is resolved by the time he comes back. Now he knows what needs to be done and he does it.
If Jonathan’s argument were correct then there would be no such thing as a “closer” in baseball. At any moment in the game you would field your most effective pitcher and remove him when he is tired. Instead there are pitchers who specialize in pitching the final innings of the game.
The role of a closer is indeed misunderstood in conventional accounts. Just as in Jonathan’s argument there is no reason to prefer having your best pitcher on the mound in later innings, other things equal. All innings are the same. But this doesn’t mean you shouldn’t save your best pitcher for the end of the game.
Suppose he can pitch for only one inning. If you use him in the 8th inning the opposition might win with a big 9th inning and then you’ve wasted your best pitcher. It would have been better to let them score their runs in the 8th. That way you know the game is lost before you have committed your best pitcher. You can save him for the next game.
I coach my 7-year-old daughter’s soccer team. It’s been a tough Spring season so far: they lost the first three games by 1 goal margins. But this week they won something like 15-1.
I noticed something interesting. In all of the close games the girls were emotionally drained. By the end of the game they didn’t have much energy left. Many of them asked to be rotated out.
But this week nobody asked to be rotated out. In fact this week they had the minimum number of players so each of them played the whole game and still nobody complained of being tired. Obviously they were having fun running up the score but they didn’t get tired.
Incentives are about getting players to want conditions to improve. So incentives necessarily make them less happy about where they are now. Feeling good about winning means feeling bad about not winning. That’s the motivation.
But encouragement is about being happy about where you are now. And it has real effects: it energizes you. You don’t get tired so fast when you are having fun.
There is a clear conflict between incentives and encouragement. At the same time incentives motivate you to win, they discourage you because you are losing. A coach who fails to recognize this is making a big mistake.
And I am not giving a touchy-feely speech about “it’s not whether you win or lose…” I am saying that a cold-hearted coach who only cares about winning should, at the margin, put less weight on incentives to win.
If my daughter’s team loved losing, is it possible they would lose less often? Probably not. But that’s because the love of losing would give them an incentive to lose. They would be discouraged when they win but that would only help them to start losing. (Unless the opposing coach used equally insane incentives.)
Nevertheless, to love winning by 10 goals is a waste of incentive and is therefore a pure cost in terms of its effect on encouragement when the game is close. Think of it this way: you have a fixed budget of encouragement to spread across all states of the game. If you make your team happy about winning by 10 goals, that directly subtracts from their happiness about winning by only 1 goal.
My guess is that, against a typically incentivized opponent, the optimal incentive scheme is pretty flat over a broad range. That range might even include losing by one goal. Because when the team is losing by one goal, the positive attitude of being in the first-best equivalence class will keep them energized through the rest of the game and that’s a huge advantage.
We noticed that professional golfers today have ads on their hats, sleeves, collars, belt-buckles, shoes, etc. while in the past few had more than one or two ads. At an individual level this makes sense but collectively it shows that the PGA would do better to centralize their negotiations with advertisers.
When Phil Mickelson considers selling another ad he has to lower his price. He trades off the additional sale versus the reduction in the price to decide whether it is worth it. He doesn’t take into account how his increased supply lowers the price of ads for all PGA golfers. When this negative externality is not internalized, the PGA as a whole sells too many ads. PGA-wide ad revenue would increase if they could negotiate ads as a group rather than individually.
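A toy Cournot example, with made-up numbers, shows how far individual supply can push revenue below the joint optimum:

```python
# Assume linear inverse demand for golfer-ad slots, P = 100 - Q, with
# zero marginal cost. Each of n golfers chooses his own quantity
# (Cournot); the PGA acting as one seller would restrict supply like a
# monopolist. All parameters here are invented for illustration.

n = 50
# Symmetric Cournot equilibrium with P = 100 - Q: each sells 100/(n+1).
q_each = 100 / (n + 1)
Q_cournot = n * q_each
rev_cournot = (100 - Q_cournot) * Q_cournot

# Joint (monopoly) problem: maximize (100 - Q) * Q  =>  Q = 50.
Q_joint = 50
rev_joint = (100 - Q_joint) * Q_joint

assert rev_joint > rev_cournot   # centralizing raises total ad revenue
```

With 50 golfers each selling on his own, total revenue is about 192 against 2,500 for a centralized seller: nearly all the surplus is competed away by oversupply.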
Why don’t they? In the short-term it would be simple. Each golfer reports the ad revenue he is currently earning. Then an agent for the PGA negotiates with advertisers to sell a block of ads and distributes them optimally across golfers. This optimization would not only involve keeping quantity low but it would also take into account complementarity between golfer and ad, screen time, diversification, etc. Then, the total ad revenue would be shared among the players in some way that gives each player at least as much as he was earning individually. Since total revenue would be higher, there would be money left over to divide up in some way.
The problem is how to manage this over time. In order to keep a majority of players willing to go along with it, they will have to be promised at least as much as their autarky value. But the most recent public information about that value was recorded just before they entered the cooperative agreement. Over time that information depreciates as players rise and fall and new players arrive.
But privately, each individual player can estimate the ad revenue he would earn should he go it alone. When the players bargain over shares, each will exaggerate his earnings potential and insist on compensation for his outside option. When public information is weak enough, these demands can add up to more than the group can earn, at which point bargaining breaks down and autarky prevails.
My memory is not so good but it seems to me that professional golfers didn’t used to look so much like race cars.

Perhaps they have been consulting with auction theorists. Selling ad space on your shirt is like a multi-unit auction but with an interesting twist. Like any auction you want to insist on a reserve price to keep revenues high. The reserve acts as a threat not to sell unless bids are high enough and this induces more aggressive bidding. Normally this leads to under-supply, just as a textbook monopolist restricts output to keep prices high.
But here’s the twist. After you have sold the ad on your hat, your auction for an ad on your lapel is a threat against the advertiser on your hat. If you sell an ad on your lapel it’s going to take some focus off the hat.
That means it is in both your interest and the hat-advertiser’s to have him bid for the lapel ad. Yours because more competition is better, and his because he wants to keep competitors off your lapel. Now think about how your reserve price for the lapel auction works. Just as before, for the new bidders it is an inducement to bid higher. But for the hat guy it’s an inducement to lower his bid for your lapel. If you set a high reserve then he can safely lose the auction for your lapel and expect that nobody else will win, which for him is just as good as winning.
This leads you to set a lower reserve on your lapel than you otherwise would. In effect this is a threat to the hat-hawker that if he doesn’t bid high enough to keep your lapel clean, you are going to put someone else’s logo there. That is, you are over-supplying ads (relative to the situation in which the ads had no spillovers.)
When these principles are put to use, two kinds of outcomes can occur. If there is a high enough bidder you will sell exclusive advertising to that bidder. If not, you will sell lots of little ads to little bidders.

While we are on the subject, here are recent prices for apparel real-estate.
The primary rationale for tenure is academic freedom. A researcher may want to pursue an agenda which is revolutionary or offensive to Deans, students, colleagues, the public at large etc. However, the agenda may be valuable and in the end dramatically add to the stock of knowledge. The paradigmatic example is Galileo, who was persecuted for his theory that the Sun is at the center of our planetary system and not the Earth. Galileo spent the end of his life under house arrest. Einstein considered Galileo the father of modern science. Tenure would have granted Galileo the freedom to pursue his ideas without threat of persecution.
From the profound to the more prosaic: the economic approach to tenure. For economists, tenure is simply another contract or institution, and we may ask: when is tenure the optimal contract? My favorite answer to this question is given by Lorne Carmichael’s “Incentives in Academics: Why Is There Tenure?” Journal of Political Economy (1988).
Suppose a research university maximizes the total quality of its research. Let’s compare it to a basketball team that wants to maximize the number of wins. Universities want to hire top researchers and basketball teams want to hire great players. Universities use tenure as their optimal contract but basketball teams do not. Why the difference?
On the basketball side of things it’s pretty obvious. Statistics can help to reveal the quality of a player and you can use the data to distinguish a good player from a bad player. And this can inform your hiring and retention decisions.
On the research side, things are more complicated. Statistics are harder to come by and interpret. On Amazon, Britney Spears’ “The Singles Collection” is #923 in Music while Glenn Gould’s “A State of Wonder: the Complete Goldberg Variations” is #3417. Even if we go down to subcategories, Britney is #11 in Teen Pop and Glenn is #56 in Classical.
So, is Britney’s stuff better than Bach, as interpreted by Glenn Gould? I love “Oops!… I Did It Again”, but I am forced to admit that others may find Britney’s work to be facile while there is a timeless depth to Bach that Britney can’t match.
I’ve tried to offer an example which is fun, but it is also a bit misleading as the analogy with scientific research is flawed. First, music is for everyone, while scientific research is specialized. Second, there is an experimental method in science so it is not purely subjective. But the main point is there is a subjective component to evaluating research and hence job candidates in science. There is less of this in basketball. Shaq is less elegant than Jordan but he gets the job done nonetheless. The subjective component actually matters a lot in science because of the specialization. Scientists are better placed to determine if an experiment or theory in their field is incorrect, original or important. And they are better placed to make hiring decisions, when even noisy signals of publications and citations are not available.
Subjective evaluation is the starting point of Carmichael’s model of tenure. If you are stuck with subjective evaluation, the people who know a hiring candidate’s quality best are the people in the department that is hiring him. If the evaluators are not tenured, they will compete with the new employee in the future. If they hire someone of higher quality than themselves, they are more likely to be sacked than the person they hire. In fact, the evaluators then have an incentive to hire bad researchers so that their own jobs are secure. This reduces the quality of research coming out of the university. On the other hand, if the evaluators are tenured, their jobs are secure and this increases their incentive to be honest about candidate quality, which leads to better hiring. If there are objective signals, as in sport, there is less need for subjective evaluation and hence no need for tenure.
This is the crux of the idea. It is patronizing for anyone to impose their tastes in Britney vs Bach on others. Everyone’s opinion is equally valid. It is possible to say Scottie Pippen was a worse basketball player than Jordan – the data prove it. Science is somewhere in between. There is both an objective component and a subjective component. We then have to rely on experts. Then, the experts may have to be tenured.
In a nice paper, Chiappori, Groseclose and Levitt look at the zero-sum game of a penalty kick in professional soccer. They lay out a number of robust predictions that are testable in data, but they leave out the formal analysis of the theory (at least in the published version.) These make for great advanced game theory exercises. Here’s one:
The probability that the shooter aims for the middle of the goal (as opposed to aiming for the left side or the right side) is higher than the probability that the goalie stays in the middle (as opposed to jumping to the left or to the right.)
Hint: the answer is related to my post from yesterday, and you can get the answer without doing any calculation.
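For readers who want to check their answer numerically, here is a sketch with assumed scoring probabilities (illustrative numbers, not the paper’s estimates), using left/right symmetry to reduce the 3x3 game to two “middle” probabilities:

```python
# Assumed probabilities that the kick scores:
#   a: side kick vs keeper diving to that same side
#   b: side kick vs keeper staying in the middle
#   c: side kick vs keeper diving to the other side
#   d: middle kick vs keeper diving to a side
#   e: middle kick vs keeper staying in the middle
a, b, c, d, e = 0.60, 0.95, 0.95, 0.85, 0.30
side = (a + c) / 2   # value of a side kick against a diving keeper

# Keeper's middle probability m_g makes the shooter indifferent
# between a side kick and a middle kick.
m_g = (d - side) / (b + d - e - side)

# Shooter's middle probability m_s makes the keeper indifferent
# between staying and diving.
m_s = (b - side) / (b + d - e - side)

# Because b > d (a side kick punishes a stationary keeper more than a
# middle kick punishes a diving one), the shooter aims for the middle
# more often than the keeper stays there.
assert m_s > m_g
```

With these numbers the shooter goes middle about 24 percent of the time while the keeper stays only about 10 percent of the time; the ranking follows from b > d alone, with no further calculation needed.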
People seem to care not just about their own material success but how it measures up to their peers. There is probably a good evolutionary reason for this. Larry Samuelson has shown one way to formalize this idea in this paper.
But here’s a different story and one that is extremely simple. Imagine a speed skating competition with 10 competitors. Suppose that 8 of them skate their heats solo with no knowledge of the others’ times. The remaining 2 also have no knowledge of the others’ times except that they race simultaneously side by side.
Other things equal, each of the two parallel skaters has a greater than 1/10 chance of winning.
Does game theory have predictive power? The place to start if you want to examine this question is the theory of zero sum games where the predictions are robust: you play the minimax strategy: the one that maximizes your worst-case payoff. (This is also the unique Nash equilibrium prediction.)
The theory has some striking and counterintuitive implications. Here’s one. Take the game rock-scissors-paper. The loser pays $1 to the winner. As you would expect, the theory says each should play each strategy with 1/3 probability. This ensures that each player is indifferent among all three strategies.
Now, for the counterintuitive part, suppose that an outsider will give you an extra 50 cents if you play rock (not a zero-sum game anymore but bear with me for a minute), regardless of the other guy’s choice. What happens now? You are no longer indifferent among your three strategies, so your opponent’s behavior must change. He must now play paper with higher probability in order to reduce your incentive to play rock and restore your indifference. Your behavior is unchanged.
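Here is the exact solution of the modified game, worked in rational arithmetic. Your own mix stays uniform (your opponent’s payoffs are unchanged, so only a uniform mix keeps him indifferent), while his mix shifts toward paper:

```python
from fractions import Fraction as F

# Your payoff gets a 1/2 bonus whenever you play rock. In equilibrium
# the opponent's mix (r, p, s) must make you indifferent:
#   E[rock]     = 1/2 + s - p
#   E[paper]    = r - s
#   E[scissors] = p - r,   with r + p + s = 1.
half = F(1, 2)

# From E[paper] = E[scissors]: 2r = p + s = 1 - r  =>  r = 1/3.
r = F(1, 3)
# From E[rock] = E[paper] with p = 1 - r - s: 3s = 1/2  =>  s = 1/6.
s = F(1, 6)
p = 1 - r - s

assert (r, p, s) == (F(1, 3), F(1, 2), F(1, 6))
# Sanity: all three of your strategies now earn the same 1/6 premium.
assert half + s - p == r - s == p - r == F(1, 6)
```

The opponent ends up at paper 1/2, rock 1/3, scissors 1/6: he still plays rock a third of the time, but shifts weight from scissors to paper to neutralize your rock bonus.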
Things are even weirder if we change both players’ payoffs at the same time. Take the game matching pennies. You and your opponent hold a penny and secretly place it with either heads or tails facing up. If the pennies match you get $1 from your opponent. If they don’t match you pay $1.
Suppose we change the payoffs so that now you receive $2 from your opponent if you match heads-heads. All other outcomes are the same as before. (The game is still zero-sum) What happens? Like with RSP, your opponent must play heads less often in order to reduce your incentive to play heads. But since your opponent’s payoffs have also changed, your behavior must change too. In fact you must play heads with lower probability, because the payoffs have now made him prefer tails to heads.
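Working out the numbers by the same indifference logic:

```python
# Modified matching pennies. Payoffs to you:
#   HH: +2 (was +1),  TT: +1,  HT and TH: -1.
# Opponent's heads probability q must leave you indifferent:
#   E[H] = 2q - (1-q) = 3q - 1,   E[T] = (1-q) - q = 1 - 2q
q = 2 / 5   # from 3q - 1 = 1 - 2q

# Your heads probability p must leave the opponent indifferent
# (his payoffs are the negatives of yours):
#   E_opp[H] = -2p + (1-p) = 1 - 3p,   E_opp[T] = p - (1-p) = 2p - 1
p = 2 / 5   # from 1 - 3p = 2p - 1

# Raising the reward for matching heads-heads makes BOTH players
# play heads less often than the old 1/2.
assert p < 0.5 and q < 0.5
```

Both players drop from heads half the time to heads 40 percent of the time, even though the change made heads more lucrative for you.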
How can we examine this prediction in the field? There is a big problem because usually we only observe outcomes and not the players’ payoffs which might vary day-to-day depending on conditions only known to them. To see what I mean, consider the game-within-a-game between a pitcher and a runner on first base in baseball. The runner wants to steal second, the pitcher wants to stop him. The pitcher decides whether to try a pick-off or pitch to the batter. The runner decides whether to try and steal.
When he runs and the pitcher throws to the plate, the chance that he beats the throw from the catcher depends on how fast he is and how good the catcher is, and other details. Thus, the payoff to the strategy choices (steal, pitch) is something we cannot observe. We just see whether he steals successfully.
But there is still a testable prediction, even without knowing anything about these payoffs. By direct analogy to matching pennies, a faster runner will try to steal less often than a slower runner. And the pitcher will more often try to pick off the faster runner at first base. Therefore, in the data there will be correlation between the pitcher’s behavior and the runner’s. If the pitcher is throwing to first more frequently, that is correlated with a faster runner which in turn predicts that the runner tries to steal less often.
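Here is a minimal parametrization of the pitcher-runner game; all payoffs are assumptions, chosen only to make the comparative statics visible:

```python
# Runner's payoffs (zero-sum, so the pitcher's are the negatives):
#   (steal, pitch):    +v   (v grows with the runner's speed)
#   (steal, pickoff):  -c   (caught leaning off the bag)
#   (stay,  pickoff):  +w   (a pickoff throw risks an error / wastes a pitch)
#   (stay,  pitch):     0
c, w = 1.0, 0.2

def equilibrium(v):
    # Pitcher's pickoff probability k makes the runner indifferent:
    #   (1-k)v - kc = kw  =>  k = v / (v + c + w)
    k = v / (v + c + w)
    # Runner's steal probability r makes the pitcher indifferent:
    #   -rc + (1-r)w = rv  =>  w = r(v + c + w)  =>  r = w / (v + c + w)
    r = w / (v + c + w)
    return k, r

speeds = [0.5, 1.0, 1.5, 2.0]          # faster runner = higher v
pickoff = [equilibrium(v)[0] for v in speeds]
steal = [equilibrium(v)[1] for v in speeds]

# Faster runners are held on more (k rises) yet run less (r falls).
assert pickoff == sorted(pickoff)
assert steal == sorted(steal, reverse=True)
```

Across runners of different speeds, pickoff frequency rises while steal frequency falls, generating in aggregate data exactly the cross-player correlation described above.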
This correlation across players (in aggregate data) is a prediction that I believe has never been tested. (It’s strange to think of non-cooperative play of a zero-sum game as generating correlation across players.) And while many studies have failed to reject some basic implications of zero-sum game theory, I would be very surprised if this prediction were not soundly rejected in the data.
(The pitcher-runner game may not be the ideal place to do this test; can anyone think of another good, binary zero-sum game for which there is good data?)
Honestly I have no good theory. Here are some rejected ones:
- They are the only ones sufficiently lacking in self-respect.
- Since ads drive everything what matters is the audience still watching at halftime. By that time the geezers have already drunk enough to be glued to their sofas, but not enough yet to be asleep.
- Only Pete Townshend’s generation knows how to count to XLIV.
- It’s a kind of best-of the has-beens competition. A shadow rock-and-roll hall of fame.
- You can hide their spectacles and tell them it’s Carnegie Hall.
- Minimizes the, ahem, fallout from wardrobe malfunctions.
It’s obvious, right? OK, but before you read on, say the answer to yourself.

Is it because he is the most able to make up any time lost by the earlier teammates? Because in the anchor leg you know exactly what needs to be done? Now what about this argument: the total time is just the sum of the individual times, so it doesn’t matter what order they swim in.
That would be true if everyone was swimming (running, potato-sacking, etc.) as fast as they could. But it is universally accepted strategy to put the fastest last. If you advocate this strategy you are assuming that not everyone is swimming as fast as they can.
For example, take the argument that in the anchor leg you know exactly what needs to be done. Inherent in this argument is the view that swimmers swim just fast enough to get the job done.
(That tends to sound wrong because we don’t think of competitive athletes as shirkers. But don’t get drawn in by the framing. If you like, say it this way: when the competition demands it, they “rise to the occasion.” Whichever way you say it, they put in more or less effort depending on the competition. And one does not have to interpret this as a cold calculation trading off performance versus effort. Call it race psychology, competitive spirit, whatever. It amounts to the same thing: you swim faster when you need to and therefore slower when you don’t.)
But even so it’s not obvious why this by itself is an argument for putting the fastest last. So let’s think it through. Suppose the relay has two legs. The swimmer who goes first knows how much of an advantage the opposing team has in the anchor leg, so doesn’t he know the amount by which he has to beat the opponent in the opening leg?
No, for two reasons. First, at best he can know the average gap he needs to finish with. But the anchor leg opponent might have an unusually good swim (or the anchor teammate might have a bad one.) Without knowing how that will turn out, the opening leg swimmer trades off additional effort in return for winning against better and better (correspondingly less and less likely) possible performance by the anchor opponent. He correctly discounts the unlikely event that the anchor opponent has a very good race, but if he knew that was going to happen he would swim faster.
The anchor swimmer gets to see when that happens. So the anchor swimmer knows when to swim faster. (Again this would be irrelevant if they were always swimming at top speed.)
The other reason is similar. You can’t see behind you (or at least your rearward view is severely limited). The opening leg swimmer can only know that he is ahead of his opponent, but not by how much. If his goal is to beat the opening leg opponent by a certain distance, he can only hope to do this on average. He would like to swim faster when the opening leg opponent is behind but doing better than average. The anchor swimmer sees the gap when he takes over. Again he has more information.
There is still one step missing in the argument. Why is it the fastest swimmer who makes best use of the information? Because he can swim faster right? It’s not that simple and indeed we need an assumption about what is implied by being “the fastest.” Consider a couple more examples.
Suppose the team consists of one swimmer who has only one speed, which is very fast, and another swimmer who has two speeds, both slower than his teammate’s. In this case you want the slower swimmer to swim the leg with more information, because the faster swimmer can make no use of it.
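This example can be made concrete with made-up times, under the simplifying assumption that by the anchor leg the uncertainty about the opposing team’s total time has been resolved, while the lead-off swimmer must commit in advance:

```python
# Swimmer A has one speed: 50s. Swimmer B is slower at both of his
# speeds: 56s ("cruise") or 54s ("sprint", at an effort cost of 0.5 in
# win-units). The opposing team's total time T is one of three equally
# likely values. All numbers are invented for illustration.
T_VALUES = [103, 105, 107]
A_TIME, B_SLOW, B_FAST, COST = 50, 56, 54, 0.5

def payoff(total_us, T, sprinted):
    return (1.0 if total_us < T else 0.0) - (COST if sprinted else 0.0)

# B anchors: for each realized T, B sprints only when it pays.
anchor_value = 0.0
for T in T_VALUES:
    best = max(payoff(A_TIME + B_SLOW, T, False),
               payoff(A_TIME + B_FAST, T, True))
    anchor_value += best / len(T_VALUES)

# B leads off: he must pick one speed for every T.
lead_value = max(
    sum(payoff(A_TIME + b_time, T, sprinted) for T in T_VALUES) / len(T_VALUES)
    for b_time, sprinted in [(B_SLOW, False), (B_FAST, True)]
)

# The two-speed swimmer is more valuable in the informed (anchor) slot,
# even though he is the slower man.
assert anchor_value > lead_value
```

Anchoring, B sprints only when the race is close (T = 105), cruises to a win when it is safe (T = 107), and saves his effort when the race is lost (T = 103); committing in advance, he must waste effort or lose the close race.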
For another example, suppose that the two teammates have the same two speeds but the first teammate finds it takes less effort to jump into the higher gear. Then here again you want the second swimmer to anchor. But this time it is because he gets the greater incentive boost. You just tell the first swimmer to swim at top speed and you rely on the “spirit of competition” to kick the second swimmer into high gear when he’s behind.
More generally, in order for it to be optimal to put the fastest swimmer in the anchor leg it must be that faster also means a greater range of speeds and correspondingly more effort to reach the upper end of that range. The anchor swimmer should be the team’s top under-achiever.
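Here is a quick Monte Carlo sketch of the argument. All of the numbers below (leg times, the effort cost, the distribution of opponent swims) are invented for illustration; the only features that matter are that one teammate has two speeds while the other has one, and that pushing is costly, so the flexible swimmer wants to save the push for the races where it matters. He can only do that on the leg where the gap is visible.

```python
import math
import random

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Made-up numbers: teammate B always swims 49s.  Swimmer A swims 50s
# cruising or 48s pushing; pushing costs COST in utility terms.
CRUISE, PUSH, B_TIME, COST = 50.0, 48.0, 49.0, 0.3
MU, SD, N = 49.0, 1.0, 200_000   # each opponent leg ~ N(49, 1)

random.seed(0)
trials = [(random.gauss(MU, SD), random.gauss(MU, SD)) for _ in range(N)]

def util_a_leads(a_time):
    """A swims first and must pick one speed before seeing anything."""
    total = 0.0
    for opp1, opp2 in trials:
        win = a_time + B_TIME < opp1 + opp2
        total += win - (COST if a_time == PUSH else 0.0)
    return total / N

def util_a_anchors():
    """A anchors: he sees the gap after leg 1 and pushes only when it pays."""
    total = 0.0
    for opp1, opp2 in trials:
        g = B_TIME - opp1                     # deficit entering the anchor leg
        # A wins iff a_time + g < opp2, with opp2 ~ N(MU, 1).
        gain = (1 - phi(PUSH + g - MU)) - (1 - phi(CRUISE + g - MU))
        push = gain > COST                    # push only when the extra win prob is worth it
        a_time = PUSH if push else CRUISE
        total += (a_time + g < opp2) - (COST if push else 0.0)
    return total / N

best_lead = max(util_a_leads(PUSH), util_a_leads(CRUISE))
print(f"A leads (no info): {best_lead:.3f}")
print(f"A anchors (info):  {util_a_anchors():.3f}")
```

The same opponent draws are reused in both orderings, so the comparison isolates the value of information: the team's expected payoff is higher when the two-speed swimmer anchors.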
Exercises:
- What happens in a running-backwards relay race? Or a backstroke relay (which I don’t think exists.)
- In a swimming relay with 4 teammates why is it conventional strategy to put the slowest swimmer third?
The Sports Economist picks up on the economic impact of Tiger’s expected absence from professional golf tournaments this year.
But it may be a boon to academia. I previously blogged about Jen Brown’s research on “the Tiger Woods effect” as evidence of strategic effort in contests. In tournaments with Tiger Woods present, the rest of the field performs noticeably worse than in tournaments in which he was absent. While that study was careful to note and account for the possibility that Tiger’s absence (by choice) from a tournament might be correlated with some unobservable factor that could bias the conclusion, these concerns are always present.
Fortunately, over the next year we will have a nice natural experiment due to the fact that Tiger’s absence will represent truly independent variation. Looking forward to seeing an update on the Tiger Woods effect. (Trilby toss: Matt Notowidigdo.)
In the top tennis tournaments there is a limited instant-replay system. When a player disagrees with a call (or non-call) made by a linesman, he can request an instant-replay review. The system is limited because the players begin with a fixed number of challenges and every incorrect challenge deducts one from that number. As a result there is a lot of strategy involved in deciding when to make a challenge.
Alongside the challenge system is a vestige of the old review system where the chair umpire can unilaterally over-rule a call made by the linesman. These over-rules must come immediately and so they always precede the players’ decision whether to challenge, and this adds to the strategic element.
Suppose that A’s shot lands close to B’s baseline, the ball is called in by the linesman but this call is over-ruled by the chair umpire. In these scenarios, in practice, it is almost automatic that A will challenge the over-ruled call. That is, A asks for an instant-replay hoping it will show that the ball was indeed in.
This seems logical. It looked in to the linesman and that is good information that it was actually in. For example, compare this scenario to the one in which the ball was called out by the linesman and that call was not over-ruled. In that alternative scenario, one party saw the ball out and no party is claiming to see the ball in. In the scenario with the over-rule, there are two opposing views. This would seem to make it more likely that the ball was indeed in.
But this is a mistake. The chair umpire knows when he makes the over-rule that the linesman saw it in. He factors that information in when deciding whether to over-rule. His willingness to over-rule shows that his information is especially strong: strong enough to over-ride an opposing view. And this is further reinforced by the challenge system because the umpire looks very bad if he over-rules and a challenge shows he is wrong.
I am willing to bet that the data would show challenges of over-ruled calls are far less likely to be successful than the average challenge.
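A toy Bayesian calculation shows why the selection effect matters. All of the accuracy numbers below are made up; what carries the point is the comparison between the two umpire models. If the umpire over-rules whenever his view disagrees, the naive intuition is right and the ball is quite likely in. But if he over-rules only when his view is especially clear, a challenged over-rule is less likely to succeed than a challenge of an ordinary "out" call.

```python
# Toy Bayesian model; every probability here is invented for illustration.
PRIOR_IN = 0.5    # prior that the ball was in
LINE_ACC = 0.85   # linesman calls correctly with this probability

def p_in_given_overrule(ump_acc):
    """P(ball in | linesman called it in, umpire over-ruled to 'out'),
    when the umpire's view is correct with probability ump_acc."""
    num = PRIOR_IN * LINE_ACC * (1 - ump_acc)                # in: linesman right, umpire wrong
    den = num + (1 - PRIOR_IN) * (1 - LINE_ACC) * ump_acc    # out: linesman wrong, umpire right
    return num / den

# Baseline: challenging an ordinary "out" call with no over-rule.
p_in_plain_out = (PRIOR_IN * (1 - LINE_ACC)
                  / (PRIOR_IN * (1 - LINE_ACC) + (1 - PRIOR_IN) * LINE_ACC))

print(f"plain 'out' call:                   P(in) = {p_in_plain_out:.3f}")
print(f"over-rule, umpire always disagrees: P(in) = {p_in_given_overrule(0.90):.3f}")
print(f"over-rule only on a clear view:     P(in) = {p_in_given_overrule(0.98):.3f}")
```

The middle line is the mistaken intuition; the last line is the selection story: willingness to over-rule reveals an unusually clear view, which pushes the posterior below even a routine challenge.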
A separate observation. The challenge system is only in place on the show courts. Most matches are played on courts that are not equipped for it. I would bet that we could see statistically how the challenge system distorts calls by the linesmen and over-rules by the chair umpire by comparing calls on and off the show courts.
Readers of this blog know that I view that as a very good thing.
Justin Rao from UCSD analyzes shot-making decisions by the Los Angeles Lakers over the course of 60 games in the 2007-2008 NBA season. He collected data on the timing of the shot and identity of the shooter and then recorded additional data such as defensive pressure and shot location by watching the games on video. The data were used to check some basic hypotheses of the decision theory and game theory of shot selection.
The team cooperatively solves an optimal stopping problem in deciding when to take a shot over the course of a 24 second possession. At each moment a shot opportunity is realized and the decision is whether to take that shot or to wait for a possibly better opportunity to arise. Over time the option value of waiting declines because the 24 second clock winds down and the horizon over which further opportunities can appear dwindles. This means that the team becomes less selective over time. As a consequence, we should see in the data that the success rate of shots declines on average later in the possession. Justin verifies this in the data.
Of course, the shot opportunities do not arise exogenously but are the outcome of strategy by the offense and defense. The defense will apply more pressure to better shooters and the offense will have their better shooters take more shots. Both of these reduce the shooting percentage of the better shooters and raise the shooting percentage of the worse shooters. (For example when the better shooter takes more shots he does so by trying to convert less and less promising opportunities.)
With optimal play by both sides, this trend continues until all shooters are equally productive. That is, conditional on Kobe Bryant taking a shot at a certain moment, the expected number of points scored should be the same as the alternative in which he passes to Vladimir Radmanovic who then shoots. To achieve this, Kobe Bryant shoots more frequently but has a lower average productivity. Also the defense covers Radmanovic more loosely in order to make it relatively more attractive to pass it to him. This is all verified in the data.
Finally, these features imply that a rising tide lifts all boats. That is, when Kobe Bryant is on the court, in order for productivities to be equalized across all players it must be that all other players’ productivities are increased relative to when Kobe is on the bench. He makes his teammates better. This is also in the data.
The equal productivity rule applies only to players who actually shoot. In rare cases it may be impossible to raise the productivity of the supporting cast to match the star’s. In that case the optimal is a corner solution: the star should take all the shots and the defense should guard only him. On March 2, 1962 Wilt Chamberlain was so unstoppable that despite being defended by 3 and sometimes 4 defenders at once, he scored 100 points, the NBA record.
Via The Volokh Conspiracy, I enjoyed this discussion of the NFL instant replay system. A call made on the field can only be overturned if the replay reveals conclusive evidence that the call was in error. Legal scholarship has debated the merits of such a system of appeals relative to the alternative of de novo review: the appellate body considers the case anew and is not bound by the decision below.
If standards of review are essentially a way of allocating decisionmaking authority between trial and appellate courts based on their relative strengths, then it probably makes sense that the former get primary control over factfinding and trial management (i.e., their decisions on those matters are subject only to clear error or abuse of discretion review), while the latter get a fresh crack at purely “legal” issues (i.e., such issues are reviewed de novo). Heightened standards of review apply in areas where trial courts are in the best place to make correct decisions.
These arguments don’t seem to apply to instant replay review. The replay presumably is a better document of the facts than the realtime view of the referee. But not always. Perhaps the argument in favor of deference to the field judge is that it allows the final verdict to depend on the additional evidence from the replay only when the replay angle is better than that of the referee.
That argument works only if we hold constant the judgment of the referee on the field. The problem is that the deferential system alters his incentives due to the general principle that it is impossible to prove a negative. For example consider the (reviewable) call of whether a player’s knee was down due to contact from an opposing player. Instant replay can prove that the knee was down but it cannot prove the negative that the knee was not down. (There will be some moments when the view is obscured, we cannot be sure that the angle was right, etc.)
Suppose the referee on the field is not sure and thinks that with 50% probability the knee was down. Consider what happens if he calls the runner down by contact. Because it is impossible to prove the negative, the call will almost surely not be overturned and so with 100% probability the verdict will be that he was down (even though that is true with only 50% probability.)
Consider instead what happens if the referee does not blow the whistle and allows the play to proceed. If the call is challenged and the knee was in fact down, then the replay will very likely reveal that. If not, not. The final verdict will be highly correlated with the truth.
So the deferential system means that a field referee who wants the right decision made will strictly prefer a non-call when he is unsure. More generally this means that his threshold for making a definitive call is higher than what it would be in the absence of replay. This probably could be verified with data.
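The incentive can be worked out in a toy model. Suppose a replay conclusively shows a knee that really was down with probability R, and can never conclusively show that a knee was up (the unprovable negative). The numbers are illustrative.

```python
# Stylized model of deferential replay review; R is an invented number.
R = 0.9  # prob the replay conclusively shows a knee that WAS down

def accuracy_call_down(p):
    """Referee whistles 'down' when he believes it with probability p.
    The replay can never prove the negative, so the call always stands."""
    return p  # verdict is 'down' for sure; right only when the knee was down

def accuracy_no_call(p):
    """Referee lets play continue.  The verdict becomes 'down' only if the
    replay conclusively shows it, and stays 'not down' otherwise."""
    return p * R + (1 - p)  # caught when down; correct by default when up

p = 0.5  # the referee thinks it's a coin flip
print(f"call down: verdict right {accuracy_call_down(p):.2f} of the time")
print(f"no call:   verdict right {accuracy_no_call(p):.2f} of the time")

# A referee who wants the right verdict should whistle only when p clears
# a much higher bar than 1/2:
threshold = 1 / (2 - R)  # solves p = p*R + (1 - p)
print(f"whistle only when p > {threshold:.2f}")
```

With these numbers the no-call verdict is right 95% of the time versus 50% for the whistle, and the whistle only becomes the accuracy-maximizing choice above roughly p = 0.91. That is the raised threshold the argument predicts.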
On the other hand, de novo review means that, conditional on review, the call made on the field has no bearing. This means that the referee will always make his decision under the assumption that his decision will be the one enforced. That would ensure he has exactly the right incentives.
A simple implication of sexual selection is that there should be a correlation between features that attract us sexually and characteristics that make our offspring more fit. Here is an article that studies the link between physical attraction and success in sport.
The better an American football player, the more attractive he is, concludes a team led by Justin Park at the University of Bristol, UK. Park’s team had women rate the attractiveness of National Football League (NFL) quarterbacks: all were elite players, but the best were rated as more desirable.
Meanwhile, a survey of more than a thousand New Scientist Twitter followers reveals a similar trend for professional men’s tennis players.
Neither Park nor New Scientist argue that good looks promote good play. Rather, the same genetic variations could influence both traits.
“Athletic prowess may be a sexually selected trait that signals genetic quality,” Park says. So the same genetic factors that contribute to a handsome mug may also offer a slight competitive advantage to professional athletes.
Studies like this are prone to endogeneity problems because success also feeds back on physical attraction. At the extreme, we know who Roger Federer is and that gets in the way of judging his attractiveness directly. More subtly, if you show me pictures of two anonymous athletes, the one who is more successful has probably also trained better, eaten better, been raised differently and these are all endogenous characteristics that affect attractiveness directly. Knowing that they correlate with success doesn’t tell us whether “success genes” have physically attractive manifestations.
One way to improve the study would be to look at adopted children. Show subjects pictures of the athletes’ biological parents and ask the subjects to rate the attractiveness of the parents. Then correlate the responses with the performance of the children. If these children were raised by randomly selected parents (obviously that is not exactly the case) then we would be picking up the effect of exogenous sources of physical attractiveness passed on only through the genes of the parents.
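A small simulation illustrates both the confound and the adoption fix. Everything here is fabricated: looks and success are driven by "genes" and "training," and we can switch the genetic effect on looks off to see what each correlation picks up. When genes do not affect looks at all, athlete looks still correlate with success (through training), but biological-parent looks do not.

```python
import math
import random

def pearson(xs, ys):
    """Sample Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def simulate(gene_to_looks, n=50_000, seed=7):
    """gene_to_looks = 0: looks come only from training (pure confound).
    gene_to_looks = 1: genes also drive looks (the sexual-selection story)."""
    random.seed(seed)
    p_looks, c_looks, c_success = [], [], []
    for _ in range(n):
        pg = random.gauss(0, 1)                    # biological parent's genes
        cg = pg + random.gauss(0, 1)               # child inherits, with noise
        train = random.gauss(0, 1)                 # adoptive upbringing: independent of pg
        p_looks.append(gene_to_looks * pg + random.gauss(0, 1))
        c_looks.append(gene_to_looks * cg + train + random.gauss(0, 1))
        c_success.append(cg + train + random.gauss(0, 1))
    return pearson(c_looks, c_success), pearson(p_looks, c_success)

for g in (0, 1):
    athlete, parent = simulate(g)
    print(f"gene->looks={g}: corr(athlete looks, success)={athlete:.2f}, "
          f"corr(parent looks, success)={parent:.2f}")
```

The athlete-level correlation is positive in both worlds and so cannot distinguish them; the parent-photo correlation is positive only when the genetic channel is real, which is exactly what the adoption design exploits.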
And why stop with success in sport. Physical attractiveness should be correlated with intelligence, social mobility, etc.
You are playing in your local club golf tournament, getting ready to tee off, and there is a last-minute addition to the field… Tiger Woods. Will you play better or worse?
The theory of tournaments is an application of game theory used to study how workers respond when you make them compete with one another. Professional sports are ideal natural laboratories where tournament theory can be tested. An intuitive idea is that if two contestants are unequal in ability but the tournament treats them equally, then both contestants should perform poorly (relative to the case when each is competing with a similarly-abled opponent.) The stronger player is very likely to win so the weaker player conserves his effort which in turn enables the stronger player to conserve his effort and still win.
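The intuition has a clean closed form in a Tullock contest, one workhorse model of such tournaments (not necessarily the model in the empirical work). Player i exerts effort x_i, wins with probability x_i/(x1+x2), and values the prize at v_i; read a larger v as a stronger competitor. Standard first-order conditions give equilibrium efforts x_i = v_i·v1·v2/(v1+v2)².

```python
def tullock_efforts(v1, v2):
    """Equilibrium efforts in a two-player Tullock contest: player i wins
    with probability x_i/(x1+x2), values the prize at v_i, and pays x_i."""
    k = v1 * v2 / (v1 + v2) ** 2
    return v1 * k, v2 * k

strong, weak = 1.0, 0.25
x_mixed = tullock_efforts(strong, weak)          # a Tiger and a journeyman
x_strong_pair = tullock_efforts(strong, strong)  # two Tigers
x_weak_pair = tullock_efforts(weak, weak)        # two journeymen
print("mismatched field:  ", x_mixed)
print("two strong players:", x_strong_pair)
print("two weak players:  ", x_weak_pair)
```

In the mismatched pairing both players work less than they would against an equal: the strong player's effort falls relative to the all-strong matchup and the weak player's relative to the all-weak one, which is the slack-off intuition in one line of algebra.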
There is a paper by Kellogg professor Jennifer Brown that examines this effect in professional golf tournaments. She compares how the average competitor performs when Tiger Woods is in the tournament relative to when he is not. Controlling for a variety of factors, Tiger Woods’ presence increases (i.e. worsens, remember this is golf) the score of the average golfer, even in the first round of the tournament.
There are actually two reasons why this should be true. First is the direct incentive effect mentioned above. The other is that lesser golfers should take more risks when they are facing tougher competition. Surprisingly, this is not evident in the data. (I take this to be bad news for the theory, but the paper doesn’t draw this conclusion.)
Also, since golf is a competition among many players and there are prizes for second, third etc., the theory does not necessarily imply a Tiger Woods effect. For example, consider the second-best player. For her, what matters is the drop-off in rewards as a player falls from first to second relative to second to third. If the latter is the steeper fall, then Tiger Woods’ presence makes her work harder. Since the paper looks at the average player, then what should matter is something like concavity vs. convexity of the prize schedule.
Also, remember the hypothesis is that both players phone it in. Unfortunately we don’t have a good control for this because we can’t make Tiger Woods play against himself. Perhaps the implied empirical hypothesis says something about the relative variance in the level of play. When Tiger Woods is having a bad season, competition is tighter and that makes him work harder, blunting the effect of the downturn. When he is having a good season, he slacks off again blunting the effect of the boom. By contrast, for the weaker player the incentive effects make his effort pro-cyclical, amplifying temporal variations in ability.
Jonah Lehrer (to whom my fedora is flipped) prefers a psychological explanation.
Via The Sports Economist comes a report that Las Vegas bookmakers are seeing big losses on NFL games this year owing to the large number of very bad teams and the difficulty of getting the point spreads right.
The Golden Nugget sports book, for instance, opened with St. Louis getting 12.5 points (the half point rules out ties). That way, if you bet the Rams and the actual game ended 21-10 Indy, you’d win the bet with a score of 22.5-21 St. Louis.
A betting line is fluid though and will correct itself as money pours in for the favorite or underdog. Despite the Rams getting all those points, at home no less, the money kept going to Indy. The line reacted by moving all the way to 14 points at kickoff.
Still 90% of the money was on the Colts at game time and the Colts won 42-6. Perhaps the problem is that there is a large variance in the market’s estimate of the likely point spread. The bookmaker has to make a good guess the first time because too much adjustment of the line allows arbitrage. And a bad guess can be costly.
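To see just how costly, here is the bookmaker's ledger under standard -110 pricing (bettors risk $11 to win $10); the 90/10 handle split is the hypothetical from the report.

```python
VIG = 10 / 11  # standard -110 pricing: winners collect $10/11 per $1 staked

def book_profit(handle, f_colts, colts_cover):
    """Bookmaker profit when fraction f_colts of the handle is on the
    Colts side of the spread.  Losers forfeit their stake; winners are
    paid VIG per dollar staked."""
    if colts_cover:
        return (1 - f_colts) * handle - f_colts * handle * VIG
    return f_colts * handle - (1 - f_colts) * handle * VIG

for f in (0.5, 0.9):
    rams = book_profit(1.0, f, colts_cover=False)
    colts = book_profit(1.0, f, colts_cover=True)
    print(f"{f:.0%} on Colts -> profit {rams:+.3f} if Rams cover, "
          f"{colts:+.3f} if Colts cover")
```

A balanced book earns the vig no matter who covers; with 90% of the money on one side, the book is effectively making a huge bet of its own, which is why a bad opening number hurts.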
Summer is over. But that’s old news. My buddy Dave maintained a tradition of polling us for the album of the summer around the time that the season was drawing to a close. Of course in SoCal, summer never really ends, but at some point you have to start climbing the fence to get into the neighborhood pool and that’s as good a demarcation line as any.
The album of the summer is not necessarily one that came out that summer. It’s not even necessary that you listened to it that summer. But it should be the album that will always remind you of that summer whenever you hear it. This summer I had my midlife crisis and the background music was Seven Swans by Sufjan Stevens.
I spent the first 25 years of my life a few miles from the Pacific Ocean and never really learned to surf. I am a fine body surfer and boogie boarder but around the time that most of my buddies got into surfing I was spinning my wheels playing chess (I suck.) I turned 40 last fall and now I live on the shores of Lake Michigan. There’s no surf here.
Fortunately I spend a month in California in the summer and this summer it was time to learn. My buddy Dave gave me a surfboard. It’s about twice as tall as me and weighs more than my 8-year-old. It’s also about 5 inches thick which made it impossible for me to get my arm around it to carry it like a regular cool surfer dude. I looked like a dork carrying it on my head.
But I can’t imagine a better board to learn on. It’s more like a canoe than a surf board. It was hilarious to me looking at all of these really cool surfer guys sitting on their tiny little boards that sank from the weight until they were submerged nearly to their shoulders. Meanwhile I could dip my toes in the water as I lounged around on my Steve Behre (pronounced berry) cruise liner waiting for waves. Dave said “It’s massive, it’s dangerous, and it’s embarrassing but just in terms of having fun surfing… the next one’s going to be a lot better.” Thanks Dave.
I got myself a wet suit. The water stays around 70F in San Diego in August so I probably could have got by without one but (again relying on Dave’s advice) since I was going to be surfing in the morning and since, thanks to Steve Behre, my most temperature-sensitive parts would be afloat and exposed to the morning air, I broke down and got myself a spring suit. When I tried it on, the dude at the surf shop (Rusty’s in Del Mar) says “It’s a little loose in the arms, but you’ll grow into it.” He either thought I was 13 years old or he could just tell that I was going to grow tremendous muscles from paddling.
So I was set. Every morning at 5AM I would start my day with these objects:
You will notice the Advil which is pretty much indispensable when you are a 40-year-old man trying to paddle a barge through crashing waves by yourself in the dark. OK not exactly dark, but I was in the water every morning before sunrise. I would surf until about 7:30 and then head back to the apartment, usually before the kids were awake. Parenting advice: arriving at breakfast with your wetsuit on and harrowing surf tales makes you the coolest Dad in the world. Not to mention the tremendous muscles.
I stood up the very first day. Fleetingly. By the end of the first week I could consistently catch waves and stand. They were small waves thankfully. I was bragging to my buddy Storn and then I got this email back.
If you are just standing in front of the whitewater after the wave has broken then it doesn’t technically count. (Not that it isn’t fun.)
How did he know?? In my defense, the Steve Queen-Behre was almost impossible to turn. I guess that’s the tradeoff. Storn came down from the Bay Area and he brought his board, which while still technically a longboard was about half the width and weight of mine.
We swapped boards and I could actually get my arm around his (that’s me on the left.) Didn’t catch any waves though. Turns out that if you want a surfboard with some degree of maneuverability, you also have to paddle with some finesse. I put that on the to-do list for next summer and went back to my trusty Steve Buoy. (When you can’t catch a wave you can’t ride the last one all the way in. “The paddle of shame” is what Storn called it.)
That day was the only time I surfed in daylight so I had Jennie bring the camcorder. Here’s some shredding on video.
Not video of me, mind you, Jennie was too busy making drip castles with the kids. Anyway, I don’t need help from no jet-ski. By the end of the month I could turn and ride the shoulder.
Sufjan Stevens was in my CD rotation that whole month. It’s a powerful album and one that was made to be played before sunrise. In your rented Toyota Sienna with a boat strapped to the top:
What’s your album of the summer?
I heard an interview with Reggie Jackson and Bob Gibson (former baseball greats) on NPR’s Fresh Air this weekend. They spent a lot of time talking about pitching inside and “brushing back” hitters. Reggie Jackson, a hitter, conceded that these were “part of the game.”
There is a mundane sense in which this is true, namely that not even the best pitcher has flawless control and sometimes batters get hit. But Reggie was even talking about intentional beanballs. In what sense is this part of the game?
The penalty for throwing inside is that, if you hit the batter, he gets a free base. (And your teammate might get beaned at the next opportunity.) The problem is that this penalty trigger is partly controlled by the opposition. Other things equal it gives the batter an incentive to stand a bit closer to the plate. In order to discourage this, the pitcher must establish a reputation for throwing inside when a batter crowds the plate. In that sense, intentionally throwing at the hitter is unavoidable strategy, part of the game.
So, one way to short-circuit this effect is to change the condition for giving a free base to something that is exogenous, i.e. independent of any choice made by the batter. For example, the batter gets a free base any time the ball sails more than some fixed distance inside of the plate, whether or not it actually hits the batter. Modern technology could certainly detect this with minimal error.
We talked a lot before about designing a scoring system for sports like tennis. There is some non-fanciful economics based on such questions. Suppose you have two candidates for promotion and you want to promote the candidate who is most talented. You can observe their output but output is a noisy signal that depends not just on talent, but also effort both of which you cannot observe directly. (Think of them as associates in a law firm. You see how much they bill but you cannot disentangle hard work from talent. You must promote one to partner where hard work matters less and talent matters more.)
How do you decide whom to promote? The question is the same as how to design a scoring system in tennis to maximize the probability that the winner is the one who is most talented.
One aspect of the optimal contest seems clear. You should let them set the rules. If a candidate knows he has high ability he should be given the option to offer a handicap to his rival. Only a truly talented candidate would be willing to offer a handicap. So if you see that candidate A is willing to offer a higher handicap than candidate B, then you should reward A.
The rub is that you have to reward A, but give B a handicap. Is it possible to do both?
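To see the screening logic in the first place, model the contest as a race to some number of points and a handicap as a head start for the rival. The point-win probabilities and the race length below are arbitrary; the point is that only a candidate whose point-win probability is high can give away a head start and still be the favorite, which is what makes the offered handicap a credible signal.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def win_prob(p, a_needs, b_needs):
    """P(A reaches zero first) when A still needs a_needs points, B needs
    b_needs, and A wins each point independently with probability p."""
    if a_needs == 0:
        return 1.0
    if b_needs == 0:
        return 0.0
    return (p * win_prob(p, a_needs - 1, b_needs)
            + (1 - p) * win_prob(p, a_needs, b_needs - 1))

TARGET = 10  # race to 10 points
for p in (0.50, 0.65):
    even = win_prob(p, TARGET, TARGET)
    spotted = win_prob(p, TARGET, TARGET - 3)  # opponent starts 3 points up
    print(f"point-win prob {p}: even game {even:.2f}, giving 3 points {spotted:.2f}")
```

A mediocre candidate (p = 0.5) who spots 3 points becomes a heavy underdog, while the talented one (p = 0.65) remains a favorite even after conceding them, so offering the handicap separates the types. The rub in the post remains: honoring the offer means B actually gets the head start, while rewarding A means A must somehow be better off for having offered it.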
Start when he is 13 months old:
(sorry for the low quality. two years ago = ancient technology.) Yes at that age a child can be taught to float. In fact almost no teaching is required. You place the child on his back, he floats. He cries too, it turns out. A lot. That’s why it’s not me there teaching him to float. Instead it is a highly trained swimming teacher and one of the most inspirational people I have ever known. That year was our kids’ first year of swimming lessons with him and we have been spending the summer in La Jolla, CA every year since primarily because of him and these swimming lessons. 10-minute lessons, daily for four weeks.
Here is what he learned last year when he was 2. (rss readers probably need to click through to the blog to see the video.)
A child of 2 years and 2 months can learn to kick with his face in the water, roll over onto his back when he needs to breathe, and then continue on. And at this early age he learns something which is subtle but which is central to swimming at every level: looking at the floor to point the top of your head in the direction you are swimming and getting a breath by rotating on that axis. The hardest thing to teach the child is not to look where he is going. Looking where you are going means tilting your head up and that pushes your body down and makes you sink. For a two-year-old that is a deal-breaker, but even among adults head orientation is what distinguishes good swimmers from the best swimmers.
Here is how you teach a two-year-old to look at the floor.
Many repetitions of placing the child in the water, putting your hand deep under water, and telling him to swim and grab the hand. He has to look down to find your hand. The typical swimming teacher holds out his hands near the surface of the water, which instead trains the child to look up, a disaster. This tiny difference has an enormous impact on how smoothly the child can learn to swim.
It also teaches the child to go slow. Another subtlety with swimming is that moving your arms and legs faster usually makes you go slower. Slowing down all of the movements teaches him how to move more efficiently through the water.
This summer, at age 3 years 2 months he reached the stage where he could swim by himself without an adult in the pool with him, keeping himself going with the swim-float-swim sequence. Then he began to learn to swim with his arms.
Next summer: how to teach a four-year-old to snorkel.
Estimates are that 7-10% of the population are left-handed. But more than 20% of professional baseball players are left-handed (the figure is closer to 30% for non-pitchers.) On the other hand, among the 32 seeded players at the US Open tennis tournament, only two are lefties (about 6%.) Explain.




