
I wrote about it here.  I had a look at the video and it was the right call given the rule, but, as I argued in the original post, the rule is an unnecessary kludge.  At best, it does nothing (in equilibrium).

  1. Is it that women like to socialize more than men do or is it that everyone, men and women alike, prefers to socialize with women?
  2. A great way to test for strategic effort in sports would be to measure the decibel level of Maria Sharapova’s grunts at various points in a match.
  3. If you are browsing the New York Times and you are over your article limit for the month, hit the stop button just after the page renders but before the browser has a chance to load the “Please subscribe” overlay.  This is easiest on a slow browser, like the one on your phone.
  4. Given the Archimedes Principle why do we think that the sea level will rise when the Polar Caps melt?

The eternal Kevin Bryan writes to me:

Consider an NFL team down 15 who scores very late in the game, as happened twice this weekend. Everybody kicks the extra point in that situation instead of going for two, and is then down 8.  But there is no conceivable “value of information” model that can account for this – you are just delaying the resolution of uncertainty (since you will go for two after the next touchdown).  Strange indeed.

Let me restate his puzzle.  If you are in a contest and success requires costly effort, you want to know the return on effort in order to make the most informed decision.  In the situation he describes if you go for the 2-pointer after the first touchdown you will learn something about the return on future effort.  If you make the 2 points you will know that another touchdown could win the game.  If you fail you will know that you are better off saving your effort (avoiding the risk of injury, getting backups some playing time, etc.)

If instead you kick the extra point and wait until a second touchdown before going for two there is a chance that all that effort is wasted.  Avoiding that wasted effort is the value of information.

The upshot is that a decision-maker always wants information to be revealed as soon as possible.  But in football there is a separation between management and labor.  The coach calls the plays but the players spend the effort.  The coach internalizes some but not all of the players’ cost of effort. This can make the value of information negative.

Suppose that both the coach and the players want maximum effort whenever the probability of winning is above some threshold, and no effort when it's below.  Because the coach internalizes less of the cost of effort, his threshold is lower.  That is, if the probability of winning falls into the intermediate range below the players' threshold and above the coach's threshold, the coach still wants effort from them but the players give up.  Finally, suppose that after the first touchdown the probability of winning is above both thresholds.

Then the coach will optimally choose to delay the resolution of uncertainty, because going for two is going to move the probability of winning either up or down.  Moving it up has no effect since the players are already giving maximum effort.  Moving it down runs the risk of it landing in that intermediate range where the players and coach have conflicting incentives.  Instead, by taking the extra point the coach gets maximum effort for sure.
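The logic can be sketched with made-up numbers (none of these probabilities are from the post; they are assumptions for illustration):

```python
# Made-up numbers for the threshold story: win prob after the first TD is
# 0.20 if the coach kicks; a 2-pt try succeeds with prob 0.48, moving the
# win prob to 0.28, while a failure drops it to 0.10. Players give full
# effort only above their threshold of 0.15 (the coach's threshold is lower).

P_CONVERT = 0.48
P_WIN_XP, P_WIN_MADE, P_WIN_MISS = 0.20, 0.28, 0.10
PLAYER_THRESHOLD = 0.15

def effort(win_prob):
    """Players supply full effort only above their own threshold."""
    return 1.0 if win_prob > PLAYER_THRESHOLD else 0.0

# Going for two resolves uncertainty now: a miss lands the win prob in the
# intermediate range where the players give up.
effort_go_for_two = P_CONVERT * effort(P_WIN_MADE) + (1 - P_CONVERT) * effort(P_WIN_MISS)

# Kicking delays the resolution and keeps the win prob above both thresholds.
effort_kick = effort(P_WIN_XP)

print(effort_go_for_two, effort_kick)  # 0.48 1.0
```

With these numbers the coach trades a 52% chance of a player shutdown for a guaranteed full-effort drive, exactly the delayed-resolution motive described above.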

The difference between cycling and badminton:

“I just crashed, I did it on purpose to get a restart, just to have the fastest ride. I did it. So it was all planned, really,” Hindes reportedly said immediately after the race. He modified his comments at the official news conference to say he lost control of his bike.

The opposition took it in stride:

French officials did not formally complain about the British tactic.

“You have to make the most of the rules. You have to play with them in a competition and no one should complain about that,” the France team’s technical director, Isabelle Gautheron, told The Associated Press.

But,

“He (Hindes) should not have told the truth,” Daniel Morelon, a Frenchman who coaches the China team, told the AP. “It’s part of the game, but you should not tell others.”

Eight female badminton players were disqualified from the Olympics on Wednesday for trying to lose matches the day before, the Badminton World Federation announced after a disciplinary hearing.

The players from China, South Korea and Indonesia were accused of playing to lose in order to face easier opponents in future matches, drawing boos from spectators and warnings from match officials Tuesday night.

All four pairs of players were charged with not doing their best to win a match and abusing or demeaning the sport.

Apparently the Badminton competition has the typical structure of a preliminary round followed by an elimination tournament.  Performance in the preliminary round determines seeding in the elimination tournament.  The Chinese and South Korean teams had already qualified for the elimination tournament but wanted to lose their final qualifying match in order to get a worse seeding in the elimination tournament.  They must have expected to face easier competition with the worse seeding.

This widely-used system is not incentive-compatible.  This is a problem with every sport that uses a seeded elimination tournament.  Economists/market designers have fixed public school matching and kidney exchange; let's fix tournament seeding.  Here are two examples to illustrate the issue:

1. Suppose there are only three teams in the competition.  Then the elimination tournament will have two teams play in a first elimination round and the remaining team will have a “bye” and face the winner in the final.  This system is incentive compatible.  Having the bye is unambiguously desirable so all teams will play their best in the qualifying to try and win the bye.

2. Now suppose there are four teams.  The typical way to seed the elimination tournament is to put the top-performing team against the worst-performing team in one match and the middle two teams in the other match.  But what if the best team in the tournament has bad luck in the qualifying and will be seeded fourth?  Then no team wants to win the top seed and there will be sandbagging.
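To make the sandbagging concrete, here is a toy computation with invented win probabilities (the team names and every number in the matrix are hypothetical):

```python
# Hypothetical win probabilities (row beats column) in a four-team field
# where team D is strongest but, through bad luck, will be seeded fourth.
# All numbers are invented for illustration.
P = {
    ('A', 'B'): 0.55, ('A', 'C'): 0.60, ('A', 'D'): 0.30,
    ('B', 'C'): 0.55, ('B', 'D'): 0.25,
    ('C', 'D'): 0.20,
}

def p_beat(x, y):
    return P[(x, y)] if (x, y) in P else 1 - P[(y, x)]

def p_win_bracket(me, semi_opp, other_pair):
    """P(I win my semifinal and then the final), in a 1v4 / 2v3 bracket."""
    a, b = other_pair
    p_a = p_beat(a, b)  # prob a survives the other semifinal
    return p_beat(me, semi_opp) * (p_a * p_beat(me, a) + (1 - p_a) * p_beat(me, b))

# As top seed, A must face the strong team D in the semifinal; as second
# seed, A faces C while B absorbs D.
p_as_seed1 = p_win_bracket('A', 'D', ('B', 'C'))
p_as_seed2 = p_win_bracket('A', 'C', ('B', 'D'))
print(p_as_seed1 < p_as_seed2)  # True: A would rather lose the top seed
```

Here A's chance of winning the whole event is higher as the second seed than as the first, so A has a strict incentive to sandbag in qualifying.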

As I see it the basic problem is that the seeding is too rigid.  One way to try and improve the system is to give the teams some control over their seeding after the qualifying round is over.  For example, we order the teams by their performance then we allow the top team to choose its seed, then the second team chooses, etc. The challenge in designing such a system is to make this seed-selection stage incentive-compatible.  The risk is that the top team chooses a seed and then after all others have chosen theirs the top team regrets its choice and wants to switch.  If the top team foresees this possibility it may not have a clear choice and this instability is not only problematic in itself but could ruin qualifying-round incentives again.

So that is the question.  As far as I know there is no literature on this.  Let us, the Cheap Talk community, solve this problem.  Give your analysis in the comments and if we come up with a good answer we will all be co-authors.

UPDATE:  It seems we have a mechanism which solves some problems but not all, and a strong conjecture that no mechanism can do much better than ours.  GM was the first to suggest that teams select their opponents with higher qualifiers selecting earlier, and Will proposed the recursive version.  (alex, AG, and Hanzhe Zhang had similar proposals.)  The mechanism, let's call it GMW, works like this:

The qualifiers are ranked in descending order of qualifying results.  (In case the qualifying stage produces only a partial ranking, as is the case with the group stages in the FIFA World Cup, we complete the ranking by randomly ordering within classes.)  In the first round of the elimination stage the top qualifier chooses his opponent.  The second qualifier (if he was not chosen!) then chooses his opponent from the teams that remain.  This continues until the teams are paired up.  In the second round of elimination we pair teams via the same procedure, again ordering the surviving teams according to their performance in the qualifying stage.  This process repeats until the final.
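A minimal sketch of one round of the pairing stage, assuming (my assumption, not the post's) that each chooser simply picks the remaining opponent it is most likely to beat, with an invented win-probability function:

```python
# A sketch of one round of GMW pairing. Assumption (mine, not the post's):
# each chooser picks the remaining opponent it is most likely to beat.

def gmw_pairings(ranked_teams, p_beat):
    """Pair teams for one elimination round: the best-ranked team still
    unpaired chooses its opponent, then the best remaining unchosen team
    chooses, and so on."""
    remaining = list(ranked_teams)          # descending qualifying order
    pairings = []
    while remaining:
        chooser = remaining.pop(0)
        opponent = max(remaining, key=lambda t: p_beat(chooser, t))
        remaining.remove(opponent)
        pairings.append((chooser, opponent))
    return pairings

# Toy example: strength equals qualifying rank.
strength = {'Q1': 4, 'Q2': 3, 'Q3': 2, 'Q4': 1}
p_beat = lambda a, b: strength[a] / (strength[a] + strength[b])
print(gmw_pairings(['Q1', 'Q2', 'Q3', 'Q4'], p_beat))
# → [('Q1', 'Q4'), ('Q2', 'Q3')]: the top qualifier takes the weakest opponent
```

Running the same function on the survivors of each round, re-ranked by qualifying performance, reproduces the full recursive mechanism.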

It was pointed out by David Miller (also JWH with a concrete example, and afinetheorem) that GMW is not going to satisfy the strongest version of our incentive compatibility condition and indeed no mechanism can.

Let me try to formalize the positive and negative result.  Let’s consider two versions of No Envy.  They are strong and weak versions of a requirement that no team should want to have a lower ranking after qualifying.

Weak No Envy:  Let P_k(r,h) be the pairing that results in stage k of the elimination procedure when the ordering of teams after the qualifying stage was r and the history of eliminations prior to stage k is given by h.  Let r’ be the ordering obtained by altering r by moving team x to some lower position without altering the relative ordering of all other teams.  We insist that for every r, k, h, and x, the pairing P_k(r,h) is preferred by team x to the pairing P_k(r’,h).

Strong No Envy:  Let r’ be an ordering that obtains by moving team x to some lower position and possibly also altering the relative positions of other teams.  We insist that for every r,k,h, and x, the pairing P_k(r,h) is preferred by team x to P_k(r’,h).

GMW satisfies Weak No Envy but no mechanism satisfies Strong No Envy.  (The latter is not quite a formal statement because it could be that the teams' pairing choices, which come from the exogenous relative strengths of teams, make Strong No Envy hold “by accident.”  We really want No Envy to hold for every possible pattern of relative strengths.)

One could also weaken Strong No Envy and still get impossibility.  The interesting impossibility result would find exactly the kind of reorderings r->r’ that cause problems.

Finally, we considered a second desideratum: strategy-proofness.  We want the mechanism that determines the seedings to be solvable in dominant strategies.  Note that this is not really an issue when the teams are strictly ordered in objective strength and this ordering is common knowledge.  It becomes an issue when there is some incomplete information (raised by AG), and maybe also when there are heterogeneous strengths and weaknesses (also mentioned by AG).

Formalizing this may bring up some new issues, but it appears that GMW is strategy-proof even with incomplete information about teams' strengths and weaknesses.

Finally, there are some interesting miscellaneous ideas brought up by Scott (you can unambiguously improve any existing system by allowing a team who wins a qualifying match to choose to be recorded as the loser of the match) and DRDR (you minimize sandbagging, although you don’t eliminate it, by having a group format for qualifiers and randomly pairing groups ex post to determine the elimination matchups, this was also suggested by Erik, ASt and SX.)

From CNN:

Sprinters Allyson Felix and Jeneba Tarmoh threw their bodies across the finish line so evenly matched that cameras recording 3,000 frames a second couldn’t tell who beat whom.

Both runners recorded precisely the same finishing time, down to thousandths of a second: 11.068 seconds.

Two women beat Felix and Tarmoh: Carmelita Jeter and Tianna Madison. Their first and second place finishes on Saturday give them the chance to represent the United States at the Olympics in London this summer.

But the photo finish leaves USA Track & Field with a dilemma: Who gets the third slot?

There appears to be no precedent for a dead heat at U.S. Olympic Team track and field trials, prompting the U.S. Olympic Committee to announce new rules Sunday.

One of the runners can give up her claim to a spot on the Olympic team.

If neither one takes that unlikely option, they’ll be asked if they want to run a tie-breaking race or flip a coin.

If they choose the same option, the committee will respect their wishes.

If they disagree, they’ll have to race for it.

And if both athletes refuse to declare a preference, officials will flip a coin — a U.S. quarter to be exact.

They certainly have given it some thought but they may want to consult the previous literature as it seems they might be slightly off track:

Leaving nothing to chance, other than the flip itself, the rules also detail who gets to pick heads or tails and how the coin should be flipped.

“The USATF representative shall bend his or her index finger at a 90-degree angle to his or her thumb, allowing the coin to rest on his or her thumb,” the rules say.

Because of runners’ high:

When people exercise aerobically, their bodies can actually make drugs — cannabinoids, the same kind of chemicals in marijuana. Raichlen wondered if other distance-running animals also produced those drugs. If so, maybe runner’s high is not some peculiar thing with humans. Maybe it’s an evolutionary payoff for doing something hard and painful, that also helps them survive better, be healthier, hunt better or have more offspring.

So he put dogs — also distance runners — on a treadmill. Also ferrets, but ferrets are not long-distance runners. The dogs produced the drug, but the ferrets did not. Says Raichlen: “It suggests some level of aerobic exercise was encouraged by natural selection, and it may be fairly deep in our evolutionary roots.”

The story is from NPR, the pointer is from Balazs Szentes.

This is a screenshot from an espn.com webstreaming replay of the French Open match between Maria Sharapova and Klara Zakapalova. As you can see Sharapova won the first set and now they are locked in a tight second set. But hmmm… something tells me that Zakapalova will be able to push it to three sets…

Courtesy of Emir Kamenica.

College sports.  The NBA and the NFL, two of the most sought-after professional sports leagues in the United States, outsource the scouting and training of young talent to college athletics programs.  And because the vast majority of professionals are recruited out of college, the competition for professional placement continues four years longer than it would if there were no college sports.

The very best athletes play basketball and football in college, but only a tiny percentage of them will make it as professionals. If professionals were recruited out of high school then those that don’t make it would find out four years earlier than they do now. Many of them would look to other sports where they still have chances. Better athletes would go into soccer at earlier ages.

As long as college athletics programs serve as the unofficial farm teams for professional basketball and football, many top athletes won’t have enough incentive to try soccer as a career until it is already too late for them.

My 9-year-old daughter's soccer games are often high-scoring affairs.  Double-digit goal totals are not uncommon.  So when her team went ahead 2-0 on Saturday someone on the sideline remarked that 2-0 is not the comfortable lead that you usually think it is in soccer.

But that got me thinking.  It's more subtle than that.  Suppose that the game is 2 minutes old and the score is 2-0.  If these were professional teams you would say that 2-0 is a good lead but there are still 88 minutes to play and there is a decent chance that a 2-0 lead can be overcome.

But if these are 9 year old girls and you know only that the score is 2-0 after 2 minutes your most compelling inference is that there must be a huge difference in the quality of these two teams and the team that is leading 2-0 is very likely to be ahead 20-0 by the time the game is over.

The point is that competition at higher levels is different in two ways. First there is less scoring overall which tends to make a 2-0 lead more secure.  But second there is also lower variance in team quality.  So a 2-0 lead tells you less about the matchup than it does at lower levels.

OK, so a 2-0 lead is a more secure lead for 9 year olds when 95% of the game remains to be played (they play for 40 minutes).  But when 5% of the game remains to be played, a 2-0 lead is almost insurmountable at the professional level but can easily be overturned in a game among 9 year olds.

So where is the flipping point?  How much of the game must elapse so that, conditional on a 2-0 lead, the probability that the 9 year olds hold on to the lead and win is exactly the same as for the professionals?

Next question.  Let F be the fraction of the game remaining where the 2-0 lead flipping point occurs.  Now suppose we have a 3-0 lead with F remaining.  Who has the advantage now?

And of course we want to define F(k) to be the flipping point of a k-nil lead and we want to take the infinity-nil limit to find the flipping point F(infinity).  Does it converge to zero or one, or does it stay in the interior?
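One way to explore the question is brute-force simulation under an assumed scoring model. Here is a rough Monte Carlo sketch in which goals arrive as Poisson processes, a team's quality is its scoring rate, and the two levels differ in both scoring rate and quality variance. Every number is invented, and ties count against the leader for simplicity:

```python
import math
import random

def poisson(lam):
    """Poisson draw via Knuth's method (fine for small rates)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def p_leader_wins(frac_remaining, rate_mean, rate_sd, n=50_000):
    """P(the team up 2-0 with frac_remaining of the game left goes on to win),
    estimated by simulating whole games and conditioning on the 2-0 lead.
    Ties count against the leader for simplicity."""
    wins = held = 0
    for _ in range(n):
        qa = max(0.05, random.gauss(rate_mean, rate_sd))  # team qualities
        qb = max(0.05, random.gauss(rate_mean, rate_sd))
        t1 = 1 - frac_remaining                            # elapsed fraction
        a1, b1 = poisson(qa * t1), poisson(qb * t1)
        if (a1, b1) != (2, 0):
            continue                                       # condition on 2-0
        held += 1
        a = a1 + poisson(qa * frac_remaining)
        b = b1 + poisson(qb * frac_remaining)
        wins += a > b
    return wins / held if held else float('nan')

# Pros: ~2.5 goals per game each, tightly matched.
# 9-year-olds: ~8 goals per game, hugely variable quality.
for f in (0.9, 0.1):
    print(f, p_leader_wins(f, 2.5, 0.3), p_leader_wins(f, 8.0, 4.0))
```

Scanning the fraction remaining for the value where the two conditional probabilities cross gives a numerical estimate of the flipping point under these (arbitrary) parameters; repeating for a k-0 lead sketches F(k).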

Suppose you and I are playing a series of squash matches and we are playing best 2 out of 3.  If I win the first match I have an advantage for two reasons.  First is the obvious direct reason that I am only one match short of wrapping up the series while you need to win the next two.  Second is the more subtle strategic reason, the discouragement effect.  If I fight hard to win the next match my reward is that my job is done for the day, I can rest and of course bask in the glow of victory.  As for you, your effort to win the second match is rewarded by even more hard work to do in the third match.

Because you are behind, you have less incentive than me to win the second match and so you are not going to fight as hard to win it.  This is the discouragement effect.  Many people are skeptical that it has any measurable effect on real competition.  Well I found a new paper that demonstrates an interesting new empirical implication that could be used to test it.

Go back to our squash match, but now let's suppose instead that it's a team competition.  We have three players on each team and we will match them up according to strength and play a best two out of three team competition.  Same competition as before but now each subsequent game is played by a different pair of players.

A new paper by Fu, Lu, and Pan called “Team Contests With Multiple Pairwise Battles” analyzes this kind of competition and shows that they exhibit no discouragement effect.  The intuition is straightforward:  if I win the second match, the additional effort that would have to be spent to win the third match will be spent not by me, but by my teammate.  I internalize the benefits of winning because it increases the chance that my team wins the overall series but I do not internalize the costs of my teammate’s effort in the third match.  This negative externality is actually good for team incentives.

The implied empirical prediction is the following.  Comparing individual matches versus team matches, the probability of a comeback victory conditional on losing the first match will be larger in the team competition.  A second prediction is about the very first match.  Without the discouragement effect, the benefit from winning the first match is smaller.  So there will be less effort in the first match in the team versus individual competition.
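The comparative static can be illustrated with a stylized calculation. This is not the Fu-Lu-Pan model itself: I simply assume a Tullock success function and posit, rather than derive, that the trailing side supplies less effort in match 2 of the individual series:

```python
# Tullock contest in each match: win prob = own effort / total effort.
# The side that won match 1 always exerts effort 1, and match 3, if
# reached, is treated as a symmetric coin flip.

def p_comeback(trailing_effort_m2):
    """P(the side that lost match 1 wins matches 2 and 3)."""
    p_m2 = trailing_effort_m2 / (trailing_effort_m2 + 1.0)
    return p_m2 * 0.5

# Individual series: the trailer anticipates having to win two more matches
# himself, so (by assumption) he supplies only 0.6 units of effort in match 2.
p_individual = p_comeback(0.6)

# Team series: match 3 would be played by a teammate, so the match-2 player
# does not internalize that cost and supplies full effort.
p_team = p_comeback(1.0)

print(round(p_individual, 4), round(p_team, 4))  # 0.1875 0.25
```

The gap between the two numbers is the testable prediction: conditional on losing the first match, comebacks are more frequent in the team format.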

My son and I went to see the Cubs last week as we do every Spring.

The Cubs won 8-0 and Matt Garza was one out away from throwing a complete game shutout, a rarity for a Cub.  The crowd was on its feet with a full count on the would-be final batter, who rolled the ball back to the mound for Garza to scoop up and throw him out.  We were all ready to give a big congratulatory cheer and then this happened.  This is a guy who had been throwing flawless pitches to the plate for nine innings and here, with all the pressure gone and an easy lob to first, he made what could be the worst throw in the history of baseball and then headed for the showers.  Cubs win!

But this Spring we weren't as interested in the baseball out on the field as we were in the strategery down in the toilet.  Remember a while back when I wrote about the urinal game?  It seems like it was just last week (fuzzy vertical lines pixellating then unpixellating the screen to reveal the flashback:)

Consider a wall lined with 5 urinals. The subgame perfect equilibrium has the first gentleman take urinal 2 and the second caballero take urinal 5.  These strategies are pre-emptive moves that induce subsequent monsieurs to opt for a stall instead out of privacy concerns.  Thus urinals 1, 3, and 4 go unused.

So naturally we turn our attention to The Trough.

A continuous action space.  Will the trough induce a more efficient outcome in equilibrium than the fixed array of separate urinals?  This is what you come to Cheap Talk to find out.

Let’s maintain the same basic parameters. Assume that the distance between the center of two adjacent urinals is d and let’s consider a trough of length 5d, i.e. the same length as a 5 side-by-side urinals (now with invincible pink mystery ice located invitingly at positions d/2 + kd for k = 1, 2, 3, 4.) The assumption in the original problem was that a gentleman pees if and only if there is nobody in a urinal adjacent to him. We need to parametrize that assumption for the continuos trough. It means that there is a constant r such that he refuses to pee in a spot in which someone is currently peeing less than a distance r from him.  The assumption from before implies that d < r < 2d.  Moreover the greater the distance to the nearest reliever the better.

The first thing to notice is that the equilibrium spacing from the original urinal game is no longer a subgame-perfect equilibrium.  In our continuous trough model that spacing corresponds to gentlemen 1 and 2 locating themselves at positions d/2 and 7d/2 measured from the left boundary of the trough.  Suppose r <= 3d/2.  Then the third man can now utilize the convex action space and locate himself at position 2d, where he will be a comfortable distance 3d/2 >= r away from the other two.  If instead r > 3d/2, then the third man is strictly deterred from intervening, but this means that gentleman number 2 could increase his personal space by locating slightly farther to the right whilst still maintaining that deterrence.

So what does happen in equilibrium?  I've got good news and bad news.  The good news first.  Suppose that r < 5d/4.  Then in equilibrium 3 guys use the trough whereas only 2 of the arrayed urinals were used in the original equilibrium.  In equilibrium the first guy parks at d/2 (to be consistent with the original setup we assume that he cannot squeeze himself any closer than that to the left edge of the trough without risking a splash on the shoes), the second guy at 9d/2, and the third guy right in the middle at 5d/2.  They are a distance of 2d > r from one another, and there is no room for anybody else because anybody who came next would have to be standing at most a distance d < r from two of the incumbents.  This is a subgame-perfect equilibrium because the second guy knows that the third guy will pick the midpoint and so to keep a maximal distance he should move to the right edge.  And foreseeing all of this the first guy moves to the left edge.

Note well that this is not a Pareto improvement.  The increased usage is offset by reduced privacy.  They are only 2d away from each other whereas the two urinal users were 3d away from each other.

Now the bad news, when r > 5d/4.  In this case it is possible for the first two to keep the third out.  For example, suppose that 1 is at 5d/4 and 2 is at 15d/4.  Then there is no place the third guy can stand that is more than 5d/4, and hence at least r, away from the others.  In this case the equilibrium has the first two guys positioning themselves with a distance between them equal to exactly 2r, maximizing their privacy subject to the constraint that the third guy is deterred.  (One such equilibrium is for the first two to be an equal distance from their respective edges, but there are other equilibria.)

The really bad news is that when r is not too large, the two guys have even less privacy than with the urinals.  For example, if r is just above 5d/4 then they are only 10d/4 away from each other, which is less than the 3d distance from before.  What's happening is that the continuous trough gives the third guy more flexibility to squeeze in between, so the first two must stand closer to one another to keep him away.
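For readers who want to check the two cases numerically, here is a quick grid-search sketch with d = 1 (the helper function and step size are my own choices, not part of the model):

```python
# Numeric sanity check of the trough equilibria, with d = 1 and positions
# constrained to [d/2, 5d - d/2]. A newcomer wants the spot maximizing his
# distance to the nearest incumbent; he balks if that distance is below r.

def best_gap(incumbents, lo=0.5, hi=4.5, step=0.001):
    """Largest nearest-incumbent distance a newcomer can achieve."""
    best, x = 0.0, lo
    while x <= hi + 1e-9:
        best = max(best, min(abs(x - p) for p in incumbents))
        x += step
    return best

# Case r < 5d/4 (say r = 1.2): three users at d/2, 5d/2, 9d/2.
print(best_gap([0.5, 2.5, 4.5]))   # about d = 1.0 < r: a fourth man is deterred

# Case r > 5d/4 (say r = 1.3): two users at 5d/4 and 15d/4 deter a third.
print(best_gap([1.25, 3.75]))      # about 5d/4 = 1.25 < r: the third man balks
```

The same grid search over arbitrary trough lengths is one way to start on the generalization below.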

Instant honors thesis for any NU undergrad who can generalize the analysis to a trough of arbitrary length.

Bicycle “sprints.”  This is worth 6 minutes of your time.

Thanks to Josh Knox for the link.

If you give them the chance, Northwestern PhD students will take a perfectly good game and turn it into a mad science experiment.  First there was auction scrabble, now from the mind of Scott Ogawa we have the pari-mutuel NCAA bracket pool.

Here’s how it worked.  Every game in the bracket was worth 1000 points. Those 1000 points will be shared among all of the participants who picked the winner of that game.  These scores are added up for the entire bracket to determine the final standings.  The winner is the person with the most points and he takes all the money wagered.

Intrigued, I entered the pool and submitted a bracket which picked every single underdog in every single game.  Just to make a point.

Here’s the point.  No matter how you score your NCAA pool you are going to create a game with the following property:  assuming symmetric information and a large enough market, in equilibrium every possible bet will give exactly the same expected payoff.  In other words an absurd bet like all underdogs will win is going to do just as well as any other, less absurd bet.

This is easy to see in a simple example, like a horse race, where pari-mutuel betting is most commonly used.  Suppose A wins with twice the probability that B wins.  This will attract bets on A until the number of bettors sharing in the purse when A wins is so large that B begins to be an attractive bet.  In equilibrium there will be twice as much money in total bet on A as on B, equalizing the expected payoff from the two bets.  One thing to keep in mind here is that the market must be large enough for these odds to equilibrate.  (Without enough bettors the payoff on A may not be driven low enough to make B a viable bet.)
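The two-horse example in numbers (a sketch under the stated assumptions, with the betting pool normalized to 1):

```python
# Horse A wins with prob 2/3, horse B with prob 1/3; total betting pool
# normalized to 1. In pari-mutuel betting the whole pool is split among
# the winning bets in proportion to their size.

p_a, p_b = 2 / 3, 1 / 3
pool = 1.0

# Equilibrium allocation: money on each horse proportional to its win prob.
bets_on_a, bets_on_b = 2 / 3 * pool, 1 / 3 * pool

# Expected payoff per dollar bet on each horse:
payoff_a = p_a * pool / bets_on_a
payoff_b = p_b * pool / bets_on_b
print(payoff_a, payoff_b)  # 1.0 1.0 — every bet earns the same expected return
```

Any other allocation would leave one horse with the higher expected return, attracting money until the payoffs equalize again.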

It’s a little more complicated though with a full 64 team tournament bracket. Because while each individual matchup has a pari-mutuel aspect, there is one key difference.  If you want to have a horse in the second-round race, you need to pick a winner in the first round.  So your incentive to pick a team in the first round must also take this into account.  And indeed, the bet share in a first round game will not exactly offset the odds of winning as it would in a standalone horse race.

On top of that, you aren’t necessarily trying to maximize the expected number points.  You just want to have the most points, and that’s a completely different incentive.  Nevertheless the overall game has the equilibrium property mentioned above.

(Now keep in mind the assumptions of symmetric information and a large market.  These are both likely to be violated in your office pool.  But in Scott's particular version of the game this only works in favor of betting longshots.  First, the people who enter basketball pools generally believe they have better information than they actually have, so favorites are likely to be over-subscribed.  Second, the scoring system heavily rewards being the only one to pick the winner of a match, which is possible in a small market.)

In fact, my bracket, 100% underdogs, Lehigh going all the way, finished just below the median in the pool.  (Admittedly the market wasn’t nearly large enough for me to have been able to count on this.  I benefited from an upset-laden first round.)

Proving that the equilibrium of an NCAA bracket pool has this property is a great prelim question.

In basketball the team benches are near the baskets on opposite sides of the half court line. The coaches roam their respective halves of the court shouting directions to their team.

As in other sports the teams switch sides at halftime but the benches stay where they were. That means that for half of the game the coaches are directing their defenses and for the other half they are directing their offenses.

If coaching helps then we should see more scoring in the half where the offenses are receiving direction.

This could easily be tested.
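For instance, a paired comparison on hypothetical data (the scores below are invented; a real test would use actual box scores split by half):

```python
# Invented per-game data: (points scored in the half where the offense plays
# toward its own bench, points scored in the other half). A real test would
# use actual box scores.
from math import sqrt
from statistics import mean, stdev

games = [(52, 48), (55, 50), (47, 49), (60, 53), (51, 50), (58, 52)]

diffs = [a - b for a, b in games]
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))  # paired t-statistic
print(mean(diffs), round(t, 2))
# A t-statistic comfortably above ~2 on real data would suggest that
# coaching the offense raises scoring.
```

With a season's worth of games the same paired design would have plenty of power, and team fixed effects would absorb differences in overall offensive quality.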

How can a guy who never misses a field goal miss an easy one at a crucial moment?

Still, a semiconsensus is developing among the most advanced scientists. In the typical fight-or-flight scenario, scary high-pressure moment X assaults the senses and is routed to the amygdala, aka the unconscious fear center. For well-trained athletes, that’s not a problem: A field goal kick, golf swing or free throw is for them an ingrained action stored in the striatum, the brain’s autopilot. The prefrontal cortex, our analytical thinker, doesn’t even need to show up. But under the gun, that super-smart part of the brain thinks it’s so great and tries to butt in. University of Maryland scientist Bradley Hatfield got expert dart throwers and marksmen to practice while wearing a cumbersome cap full of electrodes. Without an audience, their brains show very little chatter among regions. But in another study, when dart throwers were faced with a roomful of people, the pros’ neural activity began to resemble that of a novice, with more communication from the prefrontal cortex.

When I was in the 6th grade I won our school's spelling bee going away.  The next level was the district-wide spelling bee, televised on community access cable.  My amygdala tried to insert an extra 'u' into the word tongue and I was out in the first round.

Let’s join Harvard Sports Analysis for the post-mortem:

But no one knew that his score would decide the game. Before he ran the ball in, the Giants had 0.94 win probability (per Advanced NFL Stats). After the play, the Giants’ win probability dropped to 0.85. Had he instead taken a Brian Westbrook or Maurice Jones-Drew-esque knee on the goal line, the Giants would have had a 0.96 win probability. Assuming the Patriots used their final time out, the Giants would have had 3rd and Goal from the 1-yard line with around 1:04 left to play. At this point, the Giants could either attempt to score a touchdown or take a knee. Assuming the touchdown try was unsuccessful or that Eli Manning kneeled, the Giants could have let the clock run all the way down to 0:25 before using the Giants’ final time out. With 4th and Goal from the 2 with 25 seconds left to play, the Giants would have a 0.92 win probability, 0.07 higher than after Bradshaw scored the touchdown of his life.

I am not sure about all this though.  Shouldn't Bradshaw have just stood there on the 1 (far enough away that he couldn't be pushed in) and then crossed over at the last second?

This is something I have wondered about for a long time.

When the muscle is stretched, so is the muscle spindle (see section Proprioceptors). The muscle spindle records the change in length (and how fast) and sends signals to the spine which convey this information. This triggers the stretch reflex (also called the myotatic reflex) which attempts to resist the change in muscle length by causing the stretched muscle to contract. The more sudden the change in muscle length, the stronger the muscle contractions will be (plyometric, or “jump”, training is based on this fact). This basic function of the muscle spindle helps to maintain muscle tone and to protect the body from injury.

One of the reasons for holding a stretch for a prolonged period of time is that as you hold the muscle in a stretched position, the muscle spindle habituates (becomes accustomed to the new length) and reduces its signaling. Gradually, you can train your stretch receptors to allow greater lengthening of the muscles.

Some sources suggest that with extensive training, the stretch reflex of certain muscles can be controlled so that there is little or no reflex contraction in response to a sudden stretch. While this type of control provides the opportunity for the greatest gains in flexibility, it also provides the greatest risk of injury if used improperly. Only consummate professional athletes and dancers at the top of their sport (or art) are believed to actually possess this level of muscular control.

This clarified a lot for me.

Jonah Lehrer didn’t:

In many situations, such reinforcement learning is an essential strategy, allowing people to optimize behavior to fit a constantly changing situation. However, the Israeli scientists discovered that it was a terrible approach in basketball, as learning and performance are “anticorrelated.” In other words, players who have just made a three-point shot are much more likely to take another one, but much less likely to make it:

What is the effect of the change in behaviour on players’ performance? Intuitively, increasing the frequency of attempting a 3pt after made 3pts and decreasing it after missed 3pts makes sense if a made/missed 3pts predicted a higher/lower 3pt percentage on the next 3pt attempt. Surprizingly [sic], our data show that the opposite is true. The 3pt percentage immediately after a made 3pt was 6% lower than after a missed 3pt. Moreover, the difference between 3pt percentages following a streak of made 3pts and a streak of missed 3pts increased with the length of the streak. These results indicate that the outcomes of consecutive 3pts are anticorrelated.

This anticorrelation works in both directions, as players who missed a previous three-pointer were more likely to score on their next attempt. A brick was a blessing in disguise.

The underlying study, showing a “failure of reinforcement learning,” is here.

Suppose you just hit a 3-pointer and now you are holding the ball on the next possession. You are an experienced player (they used NBA data), so you know if you are truly on a hot streak or if that last make was just a fluke. The defense doesn’t. What the defense does know is that you just made that last 3-pointer and therefore you are more likely to be on a hot streak and hence more likely than average to make the next 3-pointer if you take it. Likewise, if you had just missed the last one, you are less likely to be on a hot streak, but again only you would know for sure. Even when you are feeling it you might still miss a few.

That means that the defense guards against the three-pointer more when you just made one than when you didn’t. Now, back to you. You are only going to shoot the three-pointer again if you are really feeling it. That’s correlated with the success of your last shot, but not perfectly. Thus, the data will show the autocorrelation in your 3-point attempts.

Furthermore, when the defense is defending the three-pointer you are less likely to make it, other things equal. Since the defense is correlated with your last shot, your likelihood of making the 3-pointer is also correlated with your last shot. But inversely this time:  if you made the last shot the defense is more aggressive so conditional on truly being on a hot streak and therefore taking the next shot, you are less likely to make it.

(Let me make the comparison perfectly clear:  you take the next shot if you know you are hot, but the defense defends it only if you made the last shot.  So conditional on taking the next shot you are more likely to make it when the defense is not guarding against it, i.e. when you missed the last one.)

You shoot more often and miss more often conditional on a previous make. Your private information about your make probability coupled with the strategic behavior of the defense removes the paradox. It’s not possible to “arbitrage” away this wedge because whether or not you are “feeling it” is exogenous.
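The argument can be checked with a quick simulation. Every number here is invented for illustration: an exogenous, persistent “hot” state that only the shooter observes, a defense that keys on the last observed make, and a shooter who attempts the three only when hot.

```python
import random

random.seed(7)

# Hypothetical parameters (all invented for illustration)
P_GET_HOT, P_STAY_HOT = 0.25, 0.70   # exogenous hot-streak dynamics
MAKE_WHEN_HOT = 0.55                 # make probability when hot and undefended
DEF_PENALTY = 0.12                   # drop in make probability when defense keys on you

def simulate(n=200_000):
    hot = False
    last_made = None                             # outcome of the previous attempt
    tallies = {True: [0, 0], False: [0, 0]}      # last outcome -> [attempts, makes]
    for _ in range(n):
        hot = random.random() < (P_STAY_HOT if hot else P_GET_HOT)
        if not hot:
            continue                             # shooter passes when not feeling it
        defended = (last_made is True)           # defense only sees the last outcome
        p = MAKE_WHEN_HOT - (DEF_PENALTY if defended else 0.0)
        made = random.random() < p
        if last_made is not None:
            tallies[last_made][0] += 1
            tallies[last_made][1] += made
        last_made = made
    return tuple(makes / tries for tries, makes in tallies.values())

after_make, after_miss = simulate()
# Observed make rate is lower after a make (~0.43) than after a miss (~0.55),
# even though the shooter's true ability on every attempt taken is identical.
```

The anticorrelation in the data comes entirely from the defense reacting to the public signal, not from any change in the shooter.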

I write all the time about strategic behavior in athletic competitions.  A racer who is behind can be expected to ease off and conserve on effort since effort is less likely to pay off at the margin.  Hence so will the racer who is ahead, etc.  There is evidence that professional golfers exhibit such strategic behavior:  this is the Tiger Woods effect.

We may wonder whether other animals are as strategically sophisticated as we are.  There have been experiments in which monkeys play simple games of strategy against one another, but since we are not even sure humans can figure those out, that doesn’t seem to be the best place to start looking.

I would like to compare how humans and other animals behave in a pure physical contest like a race.  Suppose the animals are conditioned to believe that they will get a reward if and only if they win a race.  Will they run at maximum speed throughout regardless of their position along the way?  Of course “maximum speed” is hard to define, but a simple test is whether the animal’s speed at a given point in the race is independent of whether they are ahead or behind and by how much.

And if the animals learn that one of them is especially fast, do they ease off when racing against her?  Do the animals exhibit a Tiger Woods effect?

There are of course horse-racing data.  That’s not ideal because the jockey is human.  Still there’s something we can learn from horse racing.  The jockey does not internalize 100% of the cost of the horse’s effort.  Thus there should be less strategic behavior in horse racing than in races between humans or between jockey-less animals.  Dog racing?  Does that actually exist?

And what if a dog races against a human, what happens then?

I hadn’t watched American football in many years but around Christmas time I watched a little bit with my son who is getting old enough to pay attention to it.  What struck me was how many pointless rules there are in football.  I asked myself which of the many pointless rules is the most pointless.  Some candidates:

1. Holding

2. Illegal motion

These two are basically rules that establish a conventional way to play the game.  If you dropped these rules you would still have a game that makes sense but aesthetically you could argue the game is less attractive:  players grabbing each other’s uniforms, offensive players running around before the snap.  It’s a matter of taste but the deadweight loss is the subjective element of enforcement.  Bottom line:  artificial rules but not totally pointless.

3. Intentional grounding.  This rule has a point but it’s a stupid point.  The quarterback can’t throw the ball just anywhere; he has to throw it near somebody who could legally catch it. Or he can throw it out of bounds, it seems.  But if he can’t do any of those he has to get mowed down by a charging defender.

But here’s the most pointless rule I could come up with:

4. Ineligible Receiver Downfield.  There are only certain players on the offense who are designated as eligible to catch a pass.  If anybody else catches a pass then it doesn’t count.  Now that by itself is pretty artificial.  Those players, and their counterparts on the defense, are basically added to the game just to offset one another.  You could remove them from both sides and it would be a wash.  But even more pointless:  an ineligible player is not allowed to advance down the field when a pass is thrown, even if it is thrown to somebody else.  These rules essentially provide job security for giant, immobile humanoids whose only function is to stand in the way of somebody else.  They take away the possibility of having a team of 10 perfectly substitutable athletes plus a quarterback.  I can’t see how that would not be a more interesting game.

Is there any more pointless rule than that?

From the great blog Mind Hacks:

Because of this, the new study looked at volleyball where the players are separated by a net and play from different sides of the court. Additionally, players rotate position after every rally, meaning its more difficult to ‘clamp down’ on players from the opposing team if they seem to be doing well.

The research first established the belief in the ‘hot hand’ was common in volleyball players, coaches and fans, and then looked to see if scoring patterns support it – to see if scoring a point made a player more likely to score another.

It turns out that over half the players in Germany’s first-division volleyball league show the ‘hot hand’ effect – streaks of inspiration were common and points were not scored in an independent ‘coin toss’ manner.

Via Vinnie Bergl, here is a post which examines pitch sequences in Major League Baseball, looking for serial correlation in the pitch type, i.e. fastball, changeup, curve, etc.  The motivating puzzle is the typical baseball lore that, e.g., the changeup “sets up” the fastball.  If that were true then the batter knows he is going to face a fastball next and this reduces the pitcher’s advantage.  If the pitcher benefits from being unpredictable then there should be no serial correlation.  The linked post gives a cursory look at the data which shows in fact the opposite of the conventional lore:  changeups are followed by changeups.

There is a problem however with the simple analysis which groups together all pitch sequences from all pitchers.  Not every pitcher throws a changeup.  Conditional on the first pitch being a changeup, the probability increases that the next pitch will be a changeup simply because we learn from the first pitch that we are looking at a pitcher who has a changeup in his arsenal.  To correct for this the analysis would have to be carried out at the individual level.
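The pooling problem is easy to demonstrate with a simulation. Here every individual pitcher mixes i.i.d., so no pitcher has any true serial correlation; pitchers differ only in whether a changeup is in their arsenal. The parameters are invented for illustration.

```python
import random

random.seed(3)

# Two hypothetical pitcher types: no changeup at all, or a 30% i.i.d. changeup mix
pairs = []
for _ in range(1000):                      # 1000 pitcher-outings
    p_change = random.choice([0.0, 0.30])
    seq = [random.random() < p_change for _ in range(100)]  # True = changeup
    pairs.extend(zip(seq, seq[1:]))        # consecutive-pitch pairs

overall = sum(nxt for prev, nxt in pairs) / len(pairs)
after_change = [nxt for prev, nxt in pairs if prev]
conditional = sum(after_change) / len(after_change)
# Pooled data shows spurious "serial correlation": conditional ~ 0.30 vs overall ~ 0.15,
# because conditioning on a changeup selects the pitchers who throw them.
```

This is exactly the selection effect described above: the first changeup tells you which kind of pitcher you are watching, not what he will throw next.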

Should we expect serial independence?  If the game was perfectly stationary, yes.  But suppose that after throwing the first curveball the pitcher gets a better feel for the pitch and is temporarily better at throwing a curveball.  If pitches were serially independent, then the batter would not update his beliefs about the next pitch, the curveball would have just as much surprise but now slightly more raw effectiveness.  That would mean that the pitcher will certainly throw a curveball again.

That’s a contradiction so there cannot be serial independence.  To find the new equilibrium we need to remember that as long as the pitcher is randomizing his pitch sequence, he must be indifferent among all pitches he throws with positive probability.  So the temporary advantage of a curveball must be offset, and this is achieved by the batter looking for a curveball.  That can only happen in equilibrium if the pitcher is indeed more likely to throw a curveball.

Thus, positive serial correlation is to be expected.  Now this ignores the batter’s temporary advantage in spotting the curveball.  It may be that the surprise power of a breaking pitch is reduced when the batter gets an earlier read on the rotation.  After seeing the first curveball he may know what to look for next and this may in fact make a subsequent curveball less effective, ceteris paribus.  This model would then imply negative serial correlation:  other pitches are temporarily more effective than the curveball so the batter should be expecting something else.

That would bring us back to the conventional account.  But note that the route to “setting up the fastball” was not that it makes the fastball more effective in absolute terms, but that it makes it more effective in relative terms because the curveball has become temporarily less effective.

The latter hypothesis could be tested by the following comparison.  Look at curveballs that end the at bat but not the inning.  The next batter will not have had the advantage of seeing the curveball up close but the pitcher still has the advantage of having thrown one.  We should see positive serial correlation here, that is, the first pitch to the new batter should be more likely (than average) to be a curveball.  If in the data we see negative correlation overall but positive correlation in this scenario then it is evidence of the batter-experience effect.

(Update:  the Fangraphs blog has re-done the analysis at the individual level and it looks like the positive correlation survives.  One might still worry about batter-specific fixed effects.  Maybe certain batters are more vulnerable to the junk pitches and so the first junk pitch signals that we are looking at a confrontation with such a batter.)

This article in The New Yorker about Federer’s loss to Djokovic in the US Open Semi-final is absolutely worth a read. You don’t have to care about tennis as long as you have a personal stake in the deep question of what style of perfection really wins.

http://www.newyorker.com/online/blogs/sportingscene/2011/09/roger-federer-novak-djokovic.html

But I have a slightly different take.

All Fed-Heads knew right away, when he won the second set to go up 2-0, that he was nevertheless going to lose the match. The tragedy of that match, and of Roger Federer in general, is not that perfection failed. He was never perfect or anything close to it. The irony is that, by comparison to Nadal and Djokovic, especially Nadal, Roger Federer is so much more like the rest of us mortals.

Nadal has pure animal fighting spirit branded onto his DNA. Yes, his tennis is wrong, but that doesn’t matter because he is the one who has the aura of invincibility, not Federer. You can count on Roger to make impeccable shots. To play like an artist. But you can count on Nadal to win.

Federer is not like a superhero who just effortlessly deploys his superpower and watches the results roll in. When you watch him long enough you start to see how tightly wound he is at every moment, mustering every ounce of concentration to keep himself in that groove. If he is a master of anything he is a master of trying.

What you learn from watching his matches the last year is just how unstable that groove is. And what makes his decline so depressing is how it reminds us that if you have to try you are not a master. He carried the banner for all of us who have nothing going for us except the will to try, and even he The Master Tryer, the man who tried so hard that he was Perfect, can’t beat those guys whose strokes are hacker strokes next to his, but who were born winners.

And that is why this particular match was really his most tragic. Match point against Djokovic. After tanking sets 3 and 4 and then pulling himself together to go up a break and serve for the match in the fifth set, we still knew he was going to lose. It was just a matter of how.

Djokovic is not Nadal. He does not win by sheer will. A lot of trying went into his streak this year. And to Federer fans, Djokovic is something of an interloper. You look at his game and there is no real reason he should be pushing Roger out of the top 2. He is super solid. But we want our iconic battle between Mr. Made-Perfect and Mr. Passion. Djokovic doesn’t belong.

But when Federer had match point against Djokovic, Djokovic did something that made a total mockery of everything about Federer’s game. He took a blind swing on a service return and hit it for a stinging winner. He became Nadal for a single shot. You are not supposed to be able to become Nadal. That is not something you can try to do. And indeed there was no trying involved whatsoever. He just did it.

Federer could never, ever do that.

If you think about pain as an incentive mechanism to stop you from hurting yourself there are some properties that would follow from that.

When I was pierced by a stingray, the pain was outrageous. The puncture went deep into my foot and that of course hurts but the real pain came from the venom-laden sheath that is left behind when the barb is removed. Funny thing about the venom is that it is protein based and it can be neutralized by denaturing the protein, essentially changing its structure by “cooking” it as you would a raw egg.

How do you cook the venom when it is inside your foot? You don’t pee on it unless you are making a joke on a sitcom (and that’s a jellyfish anyway.) What you do is plunge your foot in scalding hot water, raising the internal temperature enough to denature the venom inside. Here’s what happens when you do that. Immediately you feel dramatic relief from the pain. But not long after that you begin to notice that your foot is submerged in scalding hot water and that is bloody painful.

So you take it out. Then you feel the nerve-numbing pain from the venom return to the fore. Back in. Relief, burning hot water, back out. Etc. Over and over again until you have cooked all the venom and you are done. In all about 4 hours of soaking.

A good incentive scheme is reference-dependent. There’s no absolute zero. Zero is whatever baseline you are currently at and rewards/penalties incentivize improvement relative to the baseline. When the venom was the most dangerous thing, the scalding hot water was painless. Once the danger from the venom was reduced, the hot water became the focus of pain. And back and forth.

Second Observation.  After three weeks of surfing (minus a couple of days robbed by my stingray friend) I came away with a sore shoulder.  Rotator cuff injuries are common among surfers, especially over-the-hill surfers who don’t exercise enough the other 11 months of the year.  The interesting thing about a rotator cuff injury is that the pain is felt in the upper shoulder, not at the site of the injury which is more in the area of the shoulder blade.  It’s referred pain.

In a moral hazard framework the principal decides which signals to use to trigger rewards and penalties.  Direct signals of success or failure are not necessarily the optimal ones to use because success and failure can happen by accident too.  The optimal signal is the one that is most informative that the agent took the appropriate effort.  Referred pain must be based on a similar principle.  Rotator cuff injuries occur because of poor alignment in the shoulder resulting in an inefficient mix of muscles doing the work.  Even though it’s the rotator cuff that is injured, the use of the upper shoulder is a strong signal that you are going to worsen the injury.  It may be optimal to penalize that directly rather than associate the pain with the underlying injury.

(Drawing:  Scale Up Machine Fail, from www.f1me.net.)

Usain Bolt was disqualified in the final of the 100 meters at the World Championships due to a false start.  Under current rules, in place since January 2010, a single false start results in disqualification.  By contrast, prior to 2003 each racer who jumped the gun would be given a warning and then disqualified after a second false start.  In 2003 the rules were changed so that the entire field would receive a warning after a false start by any racer and all subsequent false starts would lead to disqualification.

Let’s start with the premise that an indispensable requirement of sprint competition is that all racers must start simultaneously.  That is, a sprint is not a time trial but a head-to-head competition in which each competitor can assess his standing at any instant by comparing his and his competitors’ distance to a fixed finish line.

Then there must be a penalty for a false start.   The question is how to design that penalty.  Our presumed edict rules out marginally penalizing the pre-empter by adding to his time, so there’s not much else to consider other than disqualification. An implicit presumption in the pre-2010 rules was that accidental false starts are inevitable and there is a trade-off between the incentive effects of disqualification and the social loss of disqualifying a racer who made an error despite competing in good faith.

(Indeed this trade-off is especially acute in high-level competitions where the definition of a false start is any racer who leaves less than 0.10 seconds after the report of the gun.  It is assumed to be impossible to react that fast. But now we have a continuous variable to play with.  How much more impossible is it to react within .10 seconds than to react within .11 seconds? When you admit that there is a probability p>0, increasing in the threshold, that a racer is gifted enough to react within that threshold, the optimal incentive mechanism picks the threshold that balances type I and type II errors.  The maximum penalty is exacted when the threshold is violated.)
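The threshold-choice problem in that parenthetical can be sketched numerically. Everything below is invented for illustration: assume genuine reaction times and gun-anticipating starts are each roughly normal, and pick the disqualification cutoff minimizing a weighted sum of type I errors (disqualifying an honest racer) and type II errors (letting an anticipator through).

```python
import math

def norm_cdf(x, mu, sigma):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Hypothetical distributions of start times after the gun, in seconds
HONEST = (0.16, 0.03)   # genuine reactions
ANTIC = (0.05, 0.04)    # racers timing the gun

def error_cost(threshold, w1=1.0, w2=1.0):
    type1 = norm_cdf(threshold, *HONEST)        # honest racer starts under the cutoff
    type2 = 1.0 - norm_cdf(threshold, *ANTIC)   # anticipator clears the cutoff
    return w1 * type1 + w2 * type2

# Grid search over cutoffs from 0.000s to 0.200s
best = min((t / 1000.0 for t in range(0, 201)), key=error_cost)
# With these made-up distributions the optimum lands near the real rule's 0.10s cutoff
```

The weights w1 and w2 encode how the rule-maker trades off unfair disqualifications against undetected anticipation; shifting them moves the optimal cutoff up or down.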

Any system involving warnings invites racers to try and anticipate the gun, increasing the number of false starts. But the pre- and post-2003 rules play out differently when you think strategically.  Think of the costs and benefits of trying to get a slightly faster start.  The warning means that the costs of a potential false start are reduced. Instead of being disqualified you are given a second chance but are placed in the dangerous position of being disqualified if you false start again.  In that sense, your private incentives to time the gun are identical whether the warning applies only to you or to the entire field.  But the difference lies in your treatment relative to the rest of the field.  In the post-2003 system the warning applies to all racers, so your false start does not place you at a disadvantage.

Thus, both systems encourage quick starts but the post-2003 system encourages them even more. Indeed there is an equilibrium in which false starts occur with probability close to 1, and after that all racers are warned. (Everyone expects everyone else to be going early, so there’s little loss from going early yourself. You’ll be subject to the warning either way.) After that ceremonial false start the race becomes identical to the current, post-2010, rule in which a single false start leads to disqualification.  My reading is that this equilibrium did indeed obtain and this was the reason for the rule change.  You could argue that the pre-2003 system was even worse because it led to a random number of false starts and so racers had to train for two types of competition:  one in which quick starts were a relevant strategy and one in which they were not.

Is there any better system?  Here’s a suggestion.  Go back to the 2003-2009 system with a single warning for the entire field.  The problem with that system was that the penalty for being the first to false start was so low that when you expected everyone else to be timing the gun your best response was to time the gun as well.  So my proposal is to modify that system slightly to mitigate this problem. Now, if racer B is the first to false start then in the restart if there is a second false start by, say, racer C, then racer C and racer B are disqualified.  (In subsequent restarts you can either clear the warning and start from scratch or keep the warning in place for all racers.)

Here’s a second suggestion.  The racers start by pushing off the blocks.  Engineer the blocks so that they slide freely along their tracks and only become fixed in place at the precise moment that the gun is fired.

(For the vapor mill,  here are empirical predictions about the effect of previous rule-regimes on race outcomes:

  1. Comparing pre-2003, under the 2003-2009 rules you should see more races with at least one false start but far fewer total false starts per race.  The current rules should have the fewest false starts.
  2. Controlling for trend (people get faster over time) if you consider races where there was no false start, race times should be faster 2003-2009 than pre-2003.   That ranking reverses when you consider races in which there was at least one false start. Controlling for Usain Bolt, times should be unambiguously slower under current rules.)

The weather in Chicago sucks but at least there are real seasons (there’s only one in SoCal where I am from.)  Here’s a thought about seasons.

Everything gets old after a while. No matter how much you love it at first, after a while you are bored. So you stop doing it.  But then after time passes and you haven’t done it for a while it gets some novelty back and you are willing to do it again.  So you tend to go through on-off phases with your hobbies and activities.

But some activities can only be fun if enough other people are doing it too. Say going to the park for a pickup soccer game.  There’s not going to be a game if nobody is there.

We could start with everyone doing it and that’s fun, but like everything else it starts to get old for some people and they cut back, and before long it’s not much of a pickup game.

Now, unlike your solo hobbies, when the novelty comes back you go out to the field but nobody is there. This happens at random times for each person until we reach a state where everybody is keen for a regular pickup game again but there’s no game.  What’s needed is a coordination device to get everyone out on the field again.

Seasons are a coordination device.  At the beginning of summer everyone gets out and does that thing that they have been waiting since last year to do. Sure, by the end of the season it gets old but that’s OK; summer is over.  The beginning of next summer is the coordination device that gets us all out doing it again.

On Tuesday, in the sixth round of the MLB Draft, the San Diego Padres selected outfielder Kyle Gaedele (who the Tampa Bay Rays had previously drafted in the 32nd round of the 2008 draft). Gaedele plays center field and shows good signs of hitting for power, but what most writers, sports fans, and guys named Bradley talk about is Gaedele’s great uncle.

Casual fans probably do not know about Kyle’s great uncle, Eddie Gaedel (who removed the e from his last name for show-business purposes). We nerds can forgive the casual fan for forgetting a player who outdid, in his career, only the great Otto Neu. Gaedel took a single at-bat, walked to first, and then left for a pinch runner.

What makes Eddie Gaedel a unique and important part of baseball history, however, is not his statistics, per se, but his stature. Gaedel stood 3’7″ tall, almost half the height of his great nephew. Gaedel was the first and last little person to play in Major League Baseball, and the time has come for that to change.

In baseball, the strike zone (effectively the target that a pitcher must aim for) is defined relative to the size of the hitter.  A very small player has a very small strike zone, so small that many pitchers will have a hard time throwing strikes.  Insert such a batter at a key moment, he walks to first base and then you replace him with a fast runner.  Why doesn’t every team have such a player on their roster?

Cap Clutch:  Vinnie Bergl.

Via Marginal Revolution, an essay exploring the psychology of watching a sporting event after the fact on your DVR.  Is it less enjoyable than watching the same game live when it happens?  I love this question and I love the answers he gives.  Strangely though, he divides his reasons into the “rational” and the “irrational” and with only one exception I would give the opposite classification.  Here are his rational ones:

  1. Removing commercials reduces drama.  I suppose he calls this rational because he thinks that it’s true and perfectly sensible.  The unavoidable delay before action resumes builds suspense.  But even though I agree with that, I call this an irrational reason because of course I can always watch the commercials or just sit around for 2 minutes if I’d rather not see yet another Jacob’s Creek wine commercial.  If in fact I don’t do that, then that’s irrational.
  2. If you know it has already happened then it is less interesting.  Again, this may be true for many people, but to make it into the rational category it has to be squared with the fact that we watch movies, TV dramas, even reality TV shows whose outcomes we know are already determined.
  3. Recording gives me too much control.  Same as #1.
Now for the irrational ones:
  1. I don’t get to believe that my personal involvement will affect the game. This one I agree with.  Many people are under this illusion and it would be hard to call it rational for someone to think they are any less in control when the event is already over.
  2. If this were a really exciting game I would have found out about it independently by now no matter how hard I tried to avoid it.  I would call this the one truly rational reason and I think it’s a big problem for most major sports.  If something really exciting happened that information is going to find you one way or another.  So if you are sitting down to watch a taped event and the information didn’t find you, then you know it can only be so good.  Even worse, if the game reaches a state where it would take a dramatic comeback to change the outcome, you know that comeback isn’t going to happen.

I would add two of my own, one rational and one irrational.  First, you don’t watch a DVR’d sporting event with friends.  The whole point of recording it is to pick the optimal time to watch it and that’s not going to be your friend’s optimal time.  Plus he probably already saw it, plus who is going to control the fast-forward?  Watching with friends adds a dimension to just about anything, especially sports so DVR’d events are going to be less interesting just for the lack of social dimension having nothing to do with the tape delay.

Second, there is something very strange about hoping for something to happen when in fact it has either already happened or already not.  Now, this is irrelevant for people who easily suspend disbelief watching movies.  Those people can yell at the fictitious characters on the screen and feel elation and despair when their pre-destined fate is played out.  But people who can’t find the same suspense in fiction look to sports for the source of it.  For those people too many existential questions get in the way of enjoying a tape-delayed broadcast.

A reader, Kanishka Kacker, writes to me about Cricket:

Now, very often, there are certain decisions to be made regarding whether a given batter was out or not, where it is very hard for the umpire to decide. In situations like this, some players are known to walk off the field if they know they are “out” without waiting for the umpire’s decision. Other players don’t, waiting to see the umpire’s decision.

Here is a reason given by one former Australian batsman, Michael Slater, as to why “walking” is irrational:

(this is from Mukul Kesavan’s excellent book “Men in White”)

“The pragmatic argument against walking was concisely stated by former Australian batsman Michael Slater. If you walk every time you’re out and are also given out a few times when you’re not (as is likely to happen for any career of a respectable length), things don’t even out. So, in a competitive team game, walking is, at the very least, irrational behavior. Secondarily, there is a strong likelihood that your opponents don’t walk, so every time you do, you put yourself or your team at risk.”

What do you think?

Let me begin by saying that the only thing I know about Cricket is that “Ricky Ponting” was either the right or the wrong answer to the final question in Slumdog Millionaire.  Nevertheless, I will venture some answers because there are general principles at work here.

  1. First of all, it would be wrong to completely discount plain old honor. Kids have sportsmanship drilled into their heads from the first time they start playing, and anyone good enough to play professionally started at a time when he or she was young enough to believe that honor means something. That can be a hard doctrine to shake.  Plus, as players get older and compete at more selective levels, some of that selection is on the basis of sportsmanship.   So there is some marginal selection for honorable players to make it to the highest levels.
  2. There is a strategic aspect to honor.  It induces reciprocity in your opponent through the threat of shame.  If you are honorable and walk, then when it comes time for your opponent to do the same, he has added pressure to follow suit or else appear less honorable than you.  Even if he has no intrinsic honor, he may want to avoid that shame in the eyes of his fans.
  3. But to get to the raw strategic aspects, reputation can play a role.  If a player is known to walk whenever he is out then by not walking he signals that he is not out.  In those moments of indecision by the umpire, this can tip the balance and get him to make a favorable call.  You might think that umpires would not be swayed by such a tactic but note that if the player has a solid reputation for walking then it is in the umpire’s interest to use this information.
  4. And anyway remember that the umpire doesn’t have the luxury to deliberate.  When he’s on the fence, any little nudge can tilt him to a decision.
  5. Most importantly, a player’s reputation will have an effect on the crowd and their reactions influence umpires.  If the fans know that he walks when he’s out and this time he didn’t walk they will let the umpire have it if he calls him out.
  6. There is a related tactic in baseball which is where the manager kicks dirt onto the umpire’s shoes to show his displeasure with the call.  It is known that this will never influence the current decision but it is believed to have the effect of “getting into the umpire’s head” potentially influencing later decisions.
  7. Finally, it is important to keep in mind that a player walks not because he knows he is out but because he is reasonably certain that the umpire is going to decide that he is out whether or not he walks.  The player may be certain that he is not out but only because he is in a privileged position on the field where he can determine that.  If the umpire didn’t have the same view, it would be pointless to try and persuade.  Instead he should walk and invest in his reputation for the next time when the umpire is truly on the fence.
