You are currently browsing the tag archive for the ‘incentives’ tag.

Tyler Cowen explores economic ideas that should be popularized.  Let me take this opportunity to help popularize what I think is one of the pillars of economic theory and the fruit of the information economics/game theory era.

When we notice that markets or other institutions are inefficient, we need to ask compared to what?  What is the best we could possibly hope for even if we could design markets from scratch?  Myerson and Satterthwaite give the definitive answer:  even the best of all possible market designs must be inefficient:  it must leave some potential gains from trade unrealized.

If markets were perfectly efficient, whenever individual A values a good more than individual B does, it should be sold from B to A at a price that they find mutually agreeable.  There are many possible prices, but how do they decide on one?  The Myerson-Satterthwaite theorem says that, no matter how clever you are in designing the rules of negotiation, inevitably the negotiation will sometimes fail to converge on such a price.

The problem is one of information.  If B is going to be induced to sell to A, the price must be high enough to make B willing to part with the good.  And the more B values the good, the higher the price must be.  That principle, which is required for market efficiency, creates an incentive problem that makes efficiency impossible.  Because now B has an incentive to hold out for a higher price by acting as if he is unwilling to part with the good.  And sometimes that price is more than A is willing to pay.
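A small simulation makes the unrealized gains concrete.  The sketch below is my own illustration, not from the theorem itself: it assumes buyer and seller values drawn uniformly from [0, 1] and uses a posted price of 1/2, one simple trading rule under which honest behavior is a dominant strategy.  Myerson-Satterthwaite says even the best-designed mechanism leaves a gap of this kind.

```python
import random

random.seed(1)
N = 200_000
price = 0.5           # posted price: trade happens only if both sides accept it
potential = realized = 0.0
for _ in range(N):
    s, b = random.random(), random.random()   # seller's and buyer's values
    if b > s:
        potential += b - s    # gains an omniscient planner could realize
    if b >= price >= s:
        realized += b - s     # gains the posted-price rule actually realizes
print(potential / N)  # about 1/6
print(realized / N)   # about 1/8: some gains from trade go unrealized
```

The exact numbers depend on my assumed distribution and price, but the qualitative point does not: any incentive-compatible rule must trade off against leaving some mutually beneficial trades on the table.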

From Myerson-Satterthwaite we know what the right benchmark is for markets:  we should expect no more from them than what is consistent with these informational constraints.  It is a fundamental change in the way we think about markets and it is now part of the basic language of economics.  Indeed, in my undergraduate intermediate microeconomics course I give a simple proof of a dominant-strategy version of Myerson-Satterthwaite; you can find it here.

(Myerson won the Nobel prize jointly with Maskin and Hurwicz.  There should be a second Nobel for Myerson and Satterthwaite.)

The seminal (economist’s!) answer to this question has been offered by my old teacher in grad school and my colleague till a few years ago, Kathy Spier, in her paper “Incomplete Contracts and Signaling”.  As her title suggests, her core idea is based on signaling: an informed party making an offer in a game signals his private information via the offer.  An offer that carries a negative inference may not be made.  Kathy’s model is quite complex but its central logic is captured in a passage from her paper:

A fellow might hesitate to ask his fiancée to sign a prenuptial agreement… because to do so would lead her to believe that the quality of the marriage – or the probability of divorce – is higher than she had thought.

In the new century, roles are reversed – the wealthy partner might be female and the poor one male.  If there is no pre-nup, the man can extract a large fraction of his ex-wife’s wealth after a divorce.  In that situation, to signal his love, the man should offer to sign a pre-nup that gives him none of his ex-wife’s fortune.  If he is confident the marriage will survive, divorce is impossible anyway, so why worry about income in an impossible event?

Alas, as the poets have long told us, the path of true love does not run smooth – the most well-intentioned and loving couple can find their marriage has hit the rocks.  Then, there will be much regret and perhaps desperate, legal action to extract enough cash to live in the style to which one has become accustomed.

And so I turn finally to this sad case in the British courts:

When Katrin Radmacher and Nicolas Granatino married in 1998, she insisted it had been for love, not for money. That was why the wealthy German heiress had ensured that her banker husband signed a prenuptial agreement promising to make no claims on her fortune if the marriage failed. It was, she said, “a way of proving you are marrying only for love”.

Once the love had gone, however – the couple separated in 2006 – the fortune remained, and Granatino, by then a mature student at Oxford, decided to challenge the prenup, which they had signed in Germany before marrying and divorcing in Britain, arguing it had no status in English law.

But Granatino lost.

I’m sure a research paper can come out of this: two-sided incomplete information, two-sided signaling and optimal contracting… I’m too busy keeping my marriage alive to have the time to write it.

Jean Tirole has written the best theoretical analysis I have seen of the role of government intervention to revitalize frozen asset markets.  The key idea in this paper is that investors need to finance their next project and are unable to do this by selling their “legacy” assets because adverse selection has frozen the market.

A government buyback of these toxic assets attracts the bottom tail.  The government of course is losing money on all of the assets it buys.  But the payoff is that it rejuvenates the market:  private financiers will now step in and buy the assets of those who refused the government offer.  It’s a surprising result but ex post it’s pretty easy to understand.

If the government is offering a price p for the legacy assets, then the value of the marginal asset sold is equal to p + S, where S is the value of going forward with the newly funded project.  Investors with legacy assets worth just more than that refuse the government’s deal.  Now private financiers can get them to accept by offering a price a bit higher than p.  And this is profitable for the financiers because the assets are worth a bit more than p + S.  This proves that the market for private finance will become unstuck.

All that was required was that the government price p was high enough to allow those who accept to finance their project and earn S.  (Tirole points out that this is an argument that buybacks must be of sufficient scale to be effective.)  This value S becomes a wedge between the value of refinance to the investors and the value of the legacy asset to financiers.
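The arithmetic of the wedge fits in a few lines.  A minimal sketch, with all the specific numbers below (the price p, project value S, and asset values) chosen purely for illustration:

```python
p = 100.0   # government buyback price (illustrative number)
S = 20.0    # value of going forward with the newly funded project (illustrative)

# An investor holding a legacy asset worth v accepts a price offer if the
# offer plus the project value S covers what he gives up.
def accepts(v, offer):
    return offer + S >= v

marginal = p + S          # the marginal asset sold to the government is worth p + S
assert accepts(marginal, p)

# An investor who refused the government holds v just above p + S.
v = 121.0
private_offer = 102.0     # a private financier offers just a bit more than p
assert not accepts(v, p)                # he turned down the government's p...
assert accepts(v, private_offer)        # ...but takes the slightly better offer

# The purchase is profitable for the financier: he pays about p for an
# asset worth about p + S, pocketing roughly S.
profit = v - private_offer              # 19.0 > 0
```

The value S is exactly the wedge the post describes: it is why the investor will sell for less than the asset is worth, and why the private market becomes unstuck once the government has set the floor.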

The paper then goes on to study the optimal intervention when the market is not restricted to simple buybacks. The optimal scheme is a mix of buybacks and partial transfers of legacy assets that keep “skin in the game” to reduce the downstream adverse selection problem.  The government is trying to minimize the cost of the intervention by spurring as much activity as possible from the private finance market.

This paper is worth studying.

I am partial to RJ’s Black Soft Licorice. In moderation, like cocaine (I imagine!), it does no harm. But like cocaine (I imagine!), one is tempted to consume it in excess, a bag at a time.

How can I just eat a reasonable amount?  I could join Licorice Lovers Anonymous (LLA) and complete a ten-step program to kick my habit entirely.  I am sure there are many fellow sufferers out there, who love RJ’s not too little but too much.  Just a few tweets would allow us to coördinate and set up weekly meetings.  Seems like overkill.  And anyway, I don’t want to kick the habit entirely, just control it.

I see my five-year old wandering around, causing trouble, and a simple solution appears magically in my mind.  I ask him to hide the licorice.  He is very good at hiding things that do not belong to him – remote controls, his brother’s toys, my watch etc. etc.  He’ll love to hide the licorice.  There are a couple of problems.  It is my intention to ask him to bring the licorice back every day so I can have a few pieces.  But there is a significant chance that he’ll forget where he hid the bag.  So what?  Then we’ll lose the bag and the licorice.  But this is not any worse than the LLA solution of cutting out the addiction completely.

There is a second and quite famous problem from incentive theory: Who will monitor the monitor?  In other words, perhaps your police-kid will eat the licorice himself.  For this problem I have an answer.  My five-year old will consume strawberry Twizzlers by the cartful, but black, spicy licorice, I think not.  I am proved right.  One piece of licorice is chewed but the rest are intact.

Thinking about it, I realize that I have used a variation of an old idea of Oliver Williamson’s, “Credible Commitments: Using Hostages to Support Exchange” (jstor gated version).  In his analysis, a contracting party A voluntarily hands over an “ugly princess” to party B to give party A the incentive to perform some costly investment.  Party B does not value the princess and hands her over once the investment is sunk.  In my argument, the ugly princess is the licorice and instead of specific investment, I want to commit to avoid over-consumption of an addictive good.

This pretty much gives you the principles under which this mechanism works: consumption of a good that is addictive for party A but has no value for party B can be controlled by allowing party B to control the use of the good.  Party A might return the favor for party B (e.g. by rationing computer game time).  Only, my party B would never agree to this voluntarily and would see it as a violation of civil liberties rather than as a favor.  This level of addiction I have no solution for….

An Israeli leftist believes that right-wing Prime Minister Netanyahu can bring peace:

“The left wants to make peace but cannot, while the right doesn’t want to but, if forced to, can do it.”

Why can a right-winger make peace, while a left-winger cannot?  There might be many reasons but the one mentioned in this blog must of course draw from Crawford-Sobel’s Strategic Information Transmission which has become the canonical model of the game-theoretic notion Cheap Talk. The key intuition was identified by Cukierman and Tommasi in their AER paper “Why does it take a Nixon to go to China?” (working paper version).

Suppose an elected politician knows the true chances for peace but also has a bias against peace and for war.  Then the median voter hears his message and decides whether to re-elect him or appoint a challenger.  Given the politician’s bias, he may falsely claim there is a good case for war even when there is not.  So it is hard for a politician biased towards war to credibly make the case for war – he risks losing the election.  But if he makes the case for peace, it is credible: why would a hawk prosecute the case for peace unless it is overwhelming?  So the more a politician proposes a policy that is against his natural bias, the higher is the chance he gets re-elected.  If the case for peace is strong, a war-biased politician can either propose war, in which case he may not get re-elected and the challenger gets to choose policy, or he can propose peace, get re-elected and implement the “right” policy.  In equilibrium, the latter dominates: it takes a Nixon to go to China, a Mitterrand for privatization, etc.

Is this why Netanyahu believes he can make peace?  Maybe he cares about leaving a legacy as a statesman.  This would make him a less credible messenger – via the logic above, he is biased towards peace and any dovish message he sends is unreliable.  Let’s hope that the stories of his strongly Zionist father and hawkish wife are all true.  And then Hamas should fail to derail the negotiations… And Hezbollah should fail in its efforts… And the other million stars that must align must magically find their place…

For 15 years, the British bookmaker William Hill allowed bettors to wager on their own weight loss, often taking out full-page newspaper ads to publicize the bet.  This was a clear opportunity for those looking to lose weight to make a commitment, with real teeth.  Here is a paper by Nicholas Burger and John Lynham which analyzes the data.

Descriptive statistics are presented in Table 2, which shows that 80% of bettors lose their bets. Odds for the bets range from 5:1 to 50:1 and potential payoffs average $2,332. The average daily weight loss that a bettor must achieve to win their bet is 0.39 lbs. In terms of reducing caloric intake to lose weight, this is equivalent to reducing daily consumption by two Starbucks hot chocolates. The first insight we draw from this market is that although bettors are aware of their need for commitment mechanisms, those in our sample are not particularly skilled at selecting the right mechanisms. Bettors go to great lengths to construct elaborate constraints on their behaviour, which are usually unsuccessful.

Women do much worse than men.  Bets in which the winnings were committed to charity outperformed the average.  Bets with a longer duration (Lose 2x pounds in 2T days rather than x pounds in T days) have longer odds, suggesting that the market understands time inconsistency.

Beanie barrage:  barker.

Whether it is desirable to have your kid fall asleep in the car goes through cycles as they age. It’s lovely to have your infant fall asleep in the snap-out car carriers. Just move inside and the nap continues undisturbed. By the time they are toddlers and you are trying to keep a schedule, the car nap only messes things up. Eventually though, getting them to fall asleep in the car is a free lunch:  sleep they wouldn’t otherwise get, a moment of peace you wouldn’t otherwise get. Best of all at the end of a long day if you can carry them into bed you skip out on the usual nighttime madness.

Our kids are all at that age and so it’s a regular family joke on the car ride home that the first to fall asleep gets a prize. It sometimes even works. But I learned something on our vacation last month, when we went on a couple of longer-than-usual car trips. Someone will fall asleep first, and once that happens the contest is over. The other two have no incentives. Also, in the first-to-fall game, each child has an incentive to keep the others awake. Not good for the parents. (And this second problem persists even if you try to remedy the first by adding runner-up prizes.)

So the new game in town is last-to-sleep gets a prize. You would think that this keeps them up too long but it actually has some nice properties. Optimal play in this game has each child pretending to sleep, thereby tricking the others into thinking they can fall asleep and be the last. So there’s lots of quiet even before they fall asleep. And there’s no better way to get a tired kid to fall asleep than to have him sit still, as if sleeping, in a quiet car.

Parallel paths meet and end on one astounding episode of The Price is Right. Beautiful writing.

Ted says that when he went back for the afternoon taping, the producers moved him to a part of the studio where the contestants couldn’t see him. He says that he has heard “through unofficial channels” that he has been banned from the Bob Barker Studio, the way casinos have started asking Terry not to play blackjack inside their walls again. That Kathy Greco gave him a “Sicilian death stare” after the show, and that nobody ever needs a three-digit PIN. That according to his database, the Big Green Egg had appeared on the show only twice — before Terry and Linda began recording it — and that it was $900 before it was $1,175. That so many contestants — not just Terry — had won that day because they had listened to him. That his only mistake came when Terry played Switch?, because Ted didn’t realize there were two bikes, and he thought that a terabyte sounded like a lot of memory. That he was edited out of the show when it aired, that he can be seen only once, shaking his head when the prize is a Burberry coat, a prize that had never before appeared on the show. Otherwise, he would have known how much it was worth, the way he knew that a Berkline Contemporary Rock-a-Lounger was worth $599, and Brandon’s Ducane gas grill was worth $1,554, and Sharon’s car was worth $18,546. And for all that knowledge, for all his devotion, Bob Barker had called him a Loyal Friend and True, and Drew Carey called him that guy in the audience.

Deerstalker display:  The Browser.

Jeff discussed a seminal game theoretic analysis of Cheap Talk in an earlier post: “Strategic Information Transmission” by Crawford and Sobel studies a decision-maker, the Receiver, who listens to a message from an informed advisor, the Sender, before making a decision.  The optimal decision for each player depends on the information held by the Sender. If the Sender and Receiver have the same preferences over the best decision, there is an equilibrium where the Sender reports his information truthfully and the Receiver makes the best possible decision.

What if the Sender is biased and wants a different decision, say a bit to the left of the Receiver’s optimal decision? Then the Sender has an incentive to lie and move the Receiver to the left and always telling the truth is no longer an equilibrium.  Crawford and Sobel show that this implies that in equilibrium information can only be conveyed in lumpy chunks and the Receiver takes the best expected decision for each chunk.  The bigger the bias, the less information can be transmitted in equilibrium and the larger each lump must be.
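In the standard uniform-quadratic example (state uniform on [0, 1], quadratic losses, Sender bias b – my parametrization; the post doesn’t spell one out), the lumps can be computed explicitly.  A small sketch:

```python
import math

def max_intervals(b):
    # Largest number of "chunks" an equilibrium can support when the
    # Sender's bias is b (uniform-quadratic Crawford-Sobel example).
    # For b >= 1/4 this is 1: only uninformative babbling survives.
    return math.ceil(-0.5 + 0.5 * math.sqrt(1.0 + 2.0 / b))

def cutoffs(b, N):
    # Interval boundaries a_0 < a_1 < ... < a_N partitioning [0, 1].
    # They satisfy the indifference condition a_{i+1} = 2 a_i - a_{i-1} + 4 b:
    # each boundary type is indifferent between adjacent messages.
    return [i / N + 2.0 * b * i * (i - N) for i in range(N + 1)]

print(max_intervals(0.1))    # 2: big bias, coarse communication
print(max_intervals(0.01))   # 7: small bias, finer communication
print(cutoffs(0.1, 2))       # [0.0, 0.3, 1.0]
```

The monotonicity is the point of the result: shrink the bias and the maximum number of intervals grows, so more information gets through in equilibrium.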

The Crawford-Sobel model has differences of opinion generated by differences in preferences.  But individuals who have the same preferences but different beliefs also have differences of opinion.  The Sender and Receiver may agree that if global warming is occurring drastic action should be taken to slow it down.  But the Sender may believe it is occurring while the Receiver believes it is not.  Differences in beliefs seem to create a similar bias to differences in preferences and hence one might conjecture there is little difference between them.  A lovely paper by Che and Kartik shows this is not the case.  If a Sender with a belief-based bias acquires information, his belief changes.  If signals are informative, his beliefs must move closer to the truth and his bias must go down.  If a Sender with a preference-based bias acquires information, his bias by definition does not change.  So, when there are belief-based differences in opinion, information acquisition changes the bias, indeed it reduces it.  This allows the Sender to transmit more information in equilibrium and improve the Receiver’s decisions (this is the Crawford-Sobel intuition but in a different model).  The Sender values this influence and has good incentives to acquire information.  Hiring an advisor with a different belief is valuable for the decision-maker, better than having a Yes-Man.  Some pretty cool and fun insights.  And it is always nice when the intuition is well explained and related to classical work.

There is lots of other subtle stuff and I am not doing justice to the paper.  You can find the paper Opinions as Incentives on Navin’s webpage.

How to allocate an indivisible object to one of three children: it’s a parent’s daily mechanism design problem. Today I used the first-response mechanism.  “Who wants X?”  And whoever says “me!” first gets it.

This dimension of screening, response time,  is absent from most theoretical treatments.  While in principle it can be modeled, it won’t arise in conventional models because “rational” agents take no time to decide what they want.

But the idea behind using it in practice is that the quicker you can commit yourself the more likely it is you value it a lot.  Of course it doesn’t work with “who wants ice cream?”   But it does make sense when it’s “We’ve got 3 popsicles, who wants the blue one?”  We are aiming at efficiency here since fairness is either moot (because any allocation is going to leave two out in the cold) or a part of a long-run scheme whereby each child wins with equal frequency asymptotically.

It’s not without its problems.

  1. Free disposal is hard to prevent.  Eventually the precocious child figures out to shout first and think later, reneging if she realizes she doesn’t want it.
  2. There’s also ex-post negotiation.  You might think that this can only lead to Pareto improvements but not so fast.  Child #1 can “strongly encourage” child #2 to hand over the goodies.  A trade of goods for “security” is not necessarily Pareto improving when the incentives are fully accounted for.
  3. It prevents efficient combinatorial allocation when there are externalities and/or complementarities.  Such as, “who’s going in Mommy’s car?”  A too-quick “me!” invites a version of the exposure problem if child #3 follows suit.

Still, it has its place in a parent’s repertoire of mechanisms.

Suppose in a Department in a university there are two specializations, E and T.  The Department has openings coming up over time and must hire to fill the slot when it appears or let it lapse, perhaps with some chance of getting it the following year.

The Department can hire on the “best athlete” criterion: just choose the best candidates, regardless of specialization.  Or it could have a “Noah’s Ark” approach and let in one E specialist for each T specialist (perhaps this is done intertemporally if there are fewer than two slots per year).  Both approaches are used in hiring in practice.  How does the best approach depend on the environment?

To think this through, let’s suppose the Department uses the best athlete criterion.  There are two problems.  First, if specialty T has lower standards than specialty E, they will propose more candidates.  They may exaggerate their candidates’ quality if it is hard to assess.  Or specialty T may simply want to increase in size – there will be more people to interact with, collaborate with etc.  How should specialty E respond?  They know that if they stick to their high standards, the Department will be swamped by Ts.  So, they lower their bar for hiring, reasoning that their candidate has to be better than the marginal candidate brought in by the Ts, a weaker criterion.  In other words, the best athlete hiring system leads to a “race to the bottom”.

Hiring by the Noah’s Ark system prevents this from happening.  The two groups might have different standards or want to empire-build.  But each group is not threatened by the other as their slots are safe.  This comes at a cost – if the fraction of good candidates in each field differs from the slot allocation in the Department, it will miss out on the best possible combination of hires.  So, if the corporate culture is good enough and everyone internalizes the social welfare function, it is better to have the best athlete criterion.

After a disaster happens the post-mortem investigation invariably turns up evidence of early warning signs that weren’t acted upon.  There is a natural tendency for an observer to “second-guess,” to project his knowledge of what happened ex post into the information of the decision-maker ex ante.  The effects are studied in this paper by Kristof Madarasz.

To illustrate the consequences of such exaggeration, consider a medical example. A radiologist recommends a treatment based on a noisy radiograph. Suppose radiologists differ in ability; the best ones hardly ever miss a tumor when it’s visible on the X-ray, bad ones often do. After the treatment is adopted, an evaluator reviews the case to learn about the radiologist’s competence. By observing outcomes, evaluators naturally have access to information that was not available ex-ante; in that interim medical outcomes are realized and new X-rays might have been ordered. A biased evaluator thinks as if such ex-post information had also been available ex-ante. A small tumor is typically difficult to spot on an initial X-ray, but once the location of a major tumor is known, all radiologists have a much better chance of finding the small one on the original X-ray. In this manner, by projecting information, the evaluator becomes too surprised observing a failure and interprets success too much to be the norm. It follows that she underestimates the radiologist’s competence on average.

The paper studies how a decision-maker who anticipates this effect practices “defensive” information production ex ante, for example being too quick to carry out additional tests that substitute for the evaluator’s information (a biopsy in the medical example) and too reluctant to carry out tests that magnify it.

A tip of the boss of the plains to Nageeb Ali.

BP’s cap on the ruptured gulf coast oil well is a two-edged sword.  On the one hand, there is a good chance it will hold and the problem will be solved.   On the other hand,  the cap makes it harder to verify whether this solution has failed.

The cap means that pressure is diverted elsewhere underground.  Right now there is a camera in place pointing at the capped part of the well.  When the cap was not in place this camera made it common knowledge whether oil was flowing into the gulf and it made quite clear how much.  With the cap however, “seepage” in other locations can only be measured by noisy tests that can easily be disputed by both parties.

For example, BP will cite:

Some seepage from the ocean floor is normal in the Gulf of Mexico, according to University of Houston professor Don Van Nieuwenhuise.

“A lot of oil that’s formed naturally, by the Earth, ends up escaping or leaking to the surface in the form of natural seeps and yes, there are a lot of these all around the world,” he said.

and the government will argue:

“If the well remains fully shut in until the relief well is completed, we may never have a fully accurate determination of the flow rate from this well. If so, BP — which has consistently underestimated the flow rate — might evade billions of dollars of fines,” Markey, D-Massachusetts, said in a letter to Allen released Sunday.

The deadweight loss of negotiation and litigation means that even if the risk to the gulf is substantially reduced by having the cap in place, it may still be better to uncap the well and seek solutions (such as extraction of the flowing oil) that can be monitored directly by the camera that is already there.

An executive rises through the ranks at a large organization and becomes C.E.O.  He makes terrible decisions or is a passive leader, letting the firm slide into obscurity.  The firm is publicly traded and poor performance is observable.  But the C.E.O. manages to get another great job, leading a “turnaround” at another large organization.  He uses the same strategy that performed so badly in the first firm.  His second firm also goes down the tube.  This story is loosely based on an example I use in one of my M.B.A. classes.  And I have another new example.  How can it happen?

The first theory is pretty simple.  If a project fails, it is hard to know where to lay the blame – the economic climate, the C.E.O., bad luck etc. etc.  Then the C.E.O. can come up with a story that makes him look like a leader, not a loser.  Even worse, the people he works with want to get rid of him.  Perhaps they say nice things and sell a lemon to someone else.  The potential recipients of the lemon should know the perverse incentives in play and avoid the winner’s curse.  Perhaps they consult insiders they trust and with whom they will likely have a long future relationship.

But this theory does not accommodate cases where the C.E.O. publicly proposed and pushed a failed strategy at the first firm.  Or very obviously did not do his job.  Even these characters can pull off a successful exit.  The rationale for this phenomenon has two parts: (a) the pool of viable potential leaders is small and (b) very few people have the experience of running a large organization.  So, even if they performed poorly, perhaps they can learn from their mistakes and do better the second time around.

This presumes that a known bad performer carries less risk than an unknown performer because the former has experience.  I find that hard to believe.  It would be nice to have a rational-choice account of consciously choosing a known, experienced lemon who might change over an unknown, inexperienced mango.

Usually you order a bottle of wine in a restaurant and the waiter/wine guy opens it and pours a little for you to taste.  Conventionally, you are not supposed to be deciding whether you made a good choice, just whether or not the wine is corked, i.e. spoiled due to a bottling mishap or bad handling.  In practice this itself requires a well-trained nose.

But in some restaurants, the sommelier moves first:  he tastes the wine and then tells you whether or not it is good.

Suspicions are not the only reason some people object to this practice. Others feel they are the best judges of whether a wine is flawed or not, and do not appreciate sommeliers appropriating their role.

We should notice though that it goes two ways.  There are two instances where the change of timing will matter.  First there is the case where the diner thinks the wine is bad but the sommelier does not.  Here the change of timing will lead to more people drinking wine that they would have rejected.  But that doesn’t mean they are worse off.  In fact, diners who are sufficiently convinced will still reject the wine and a sommelier whose primary goal is to keep the clientele happy will oblige.  But more often in these cases just knowing that an expert judges the wine to be drinkable will make it drinkable. On top of this psychological effect, the diner is better off because when he is uncertain he is spared the burden of sticking his neck out and suggesting that the wine may be spoiled.

But the reverse instance is by all accounts the more typical:  diners drinking corked bottles because they don’t feel confident enough to call in the wine guy.  I have heard from a master sommelier that about 10% of all bottles are corked!  Here the sommelier-moves-first regime is unambiguously better for the customer because a faithful wine guy will reject the bottle for him.

Unless the incentive problem gets in the way.  Because if the sommelier is believed to be an expert acting in good faith, then he never lets you drink a corked bottle.  You rationally infer that any bottle he pours for you is not spoiled, and you accept it even if you don’t think it tastes so good.  But this leads to the Shady Sommelier Syndrome:  As long as he has the tiniest regard for the bottom line, he will shade his strategy at least a little bit, giving you bottles that he judges to be possibly, or maybe certainly just a little bit, corked.  You of course know this and now you are back to the old regime where, even after he moves first, you are still a little suspicious of the wine and now it’s your move.  And your bottle is already one sommelier-sip lighter.

You are a poor pleb working in a large organization.  Your career has reached a stage where you are asked to join one of two divisions, division A or division B.  You can’t avoid the choice even if you prefer the status quo – it would be bad for your career.  Each division is controlled by a boss.  Boss A is sneaky and self-serving.  Perhaps he is “rational” in the parlance of economics.  Even better, perhaps his strategy is quite transparent to you after a brief chat with him so you can predict his every move.  He is the Devil you know.  Boss B might be rational or might be somewhat altruistic and have your best interests at heart.  He is the Devil you don’t know.  Neither boss is going anywhere soon and you have no realistic chance of further advancement.  You will be interacting frequently with the boss of the division you choose.

Which division should you join?

You face a trade-off it seems.  If you join division A, it is easier for you to play a best-response to boss A’s strategy – you can pretty much work out what it is.  If you join division B, it is harder but the fact that you don’t know can help your strategic interaction.

For example, suppose you are playing a game where “cooperation” is not an equilibrium if it is common knowledge that both players are rational – the classical story is the Prisoner’s Dilemma.  Then, the incomplete information might help you to cooperate.  If you do not cooperate, you reveal you are rational and the game collapses into joint defection.  If you cooperate, you might be able to sustain cooperation well into the future (this is the famous work of Kreps, Milgrom, Roberts and Wilson).

On the other hand, if you are playing a pure coordination game, this logic is less useful.  All you care about is the action the other player is going to take and you want to play a best response to it.  So, the division you should join depends on the structure of the later boss-pleb game.

Perhaps it is possible to frame this question so that the existing reputation and game theory literature tells us when incomplete information should be welcomed by the pleb, so that you should play with the Devil you don’t know, and when it is bad, so that you should play with the Devil you know.

FIFA experimented with a “sudden-death” overtime format during the 1998 and 2002 World Cup tournaments, but the so-called golden goal was abandoned as of 2006.  The old format is again in use in the current World Cup, in which a tie after the first 90 minutes is followed by an entire 30 minutes of extra time.

One of the cited reasons for reverting to the old system was that the golden goal made teams conservative.  They were presumed to fear that attacking play would leave them exposed to a fatal counterattack.  But this analysis is questionable.  Without the golden goal, attacking play also leaves a team exposed to the possibility of a nearly-insurmountable one-goal deficit.  So the cost of attacking is nearly the same, and without the golden goal the benefit of attacking is obviously reduced.

Here is where some simple modeling can shed some light.  Suppose that we divide extra time into two periods.  Our team can either play cautiously or attack.  In the last period, if the game is tied, our team will win with probability p and lose with probability q, and with the remaining probability, the match will remain tied and go to penalties.  Let’s suppose that a penalty shootout is equivalent to a fair coin toss.

Then, assigning a value of 1 for a win and -1 for a loss, p-q is our team’s expected payoff if the game is tied going into the second period of extra time.

Now we are in the first period of extra time.  Here’s how we will model the tradeoff between attacking and playing cautiously.  If we attack, we increase by G the probability that we score a goal.  But we have to take risks to attack and so we also increase by L the probability that they score a goal.  (To keep things simple we will assume that at most one goal will be scored in the first period of extra time.)

If we don’t attack there is some probability of a goal scored, and some probability of a scoreless first period.  So what we are really doing by attacking is taking a G-sized chunk of the probability of a scoreless first period and turning it into a one-goal advantage, and also an L-sized chunk and turning that into a one-goal deficit.  We can analyze the relative benefits of doing so in the golden goal system versus the current system.

In the golden goal system, the event of a scoreless first period leads to value p-q as we analyzed at the beginning.  Since a goal in the first period ends the game immediately, the gain from attacking is

G - L + (1-G-L)(p-q).

(A chunk of size G+L of the probability of a scoreless first period is now decisive, contributing net value G-L, and the remaining chunk, of size 1-G-L, will still be scoreless and decided in the second period.)  So, we will attack if

p - q \leq G - L + (1 - G - L) (p-q)

This inequality is comparing the value of the event of a scoreless first period p-q versus the value of taking a chunk of that probability and re-allocating it by attacking.  (Playing cautiously doesn’t guarantee a scoreless first period, but we have already netted out the payoff from the decisive first-period outcomes because we are focusing on the net changes G and L to the scoring probability due to attacking.)

Rearranging, we attack if

p - q \leq \frac{G-L}{G+L}.
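Filling in the algebra behind that rearrangement: subtracting (1-G-L)(p-q) from both sides gives

(G+L)(p-q) \leq G - L,

and dividing through by G + L > 0 yields the threshold above.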

Now, if we switch to the current system, a goal in the first period is not decisive.  Let’s write y for the probability that a team with a one-goal advantage holds onto that lead in the second period and wins.  With the remaining probability, the other team scores the tying goal and sends the match to penalties.

Now the comparison is changed because a first-period goal is decisive only with probability y, so attacking effectively reallocates value-chunks of size yG and yL.  We attack if

p - q \leq Gy - Ly + (1 - G - L) (p-q),

which re-arranges to

p - q \leq y\frac{G-L}{G+L}

and since y < 1, the right-hand side is now smaller.  The upshot is that the set of parameter values (p,q,y,G,L) under which we prefer to attack under the current system is a strictly smaller subset of those that would lead us to attack under the golden goal system.
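The subset claim is easy to verify numerically.  Here is a Monte Carlo sketch, restricted to the case G > L and with otherwise arbitrary parameters:

```python
import random

# Check: whenever attacking is optimal under the current system, it is also
# optimal under the golden goal system (restricting to G > L).
random.seed(0)
for _ in range(100_000):
    L_ = random.uniform(0.01, 0.4)
    G_ = random.uniform(L_, 0.5)          # assume G > L
    y = random.uniform(0.01, 0.99)        # chance a one-goal lead survives
    p = random.uniform(0.0, 0.5)
    q = random.uniform(0.0, 0.5)
    threshold_golden = (G_ - L_) / (G_ + L_)
    threshold_current = y * threshold_golden
    if p - q <= threshold_current:        # attack under the current system...
        assert p - q <= threshold_golden  # ...implies attack under golden goal
print("attack region under the current system is a subset")
```

Since y < 1 and the golden-goal threshold is positive when G > L, the current-system threshold is strictly lower, which is all the assertion exploits.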

The golden goal encourages attacking play.  The intuition coming from the formulas is the following.  If p > q, then our team has the advantage in a second period of extra time.  In order for us to be willing to jeopardize some of that advantage by taking risks in the first period, we must win a sufficiently large mass of the newly-created first-period scoring outcomes.  The current system allows some of those outcomes (a fraction 1-y of them) to be undone by a second-period equalizer, and so the current system mutes the benefits of attacking.

And if p < q, then we are the weaker team in extra time and so we want to attack in either case.  (This is assuming G > L.  If G < L then the result is the same but the intuition is a little different.)

I haven’t checked it but I would guess that the conclusion is the same for any number of “periods” of extra time (so that we can think of a period as just representing a short interval of time.)

You (the sender) would like someone (the responder) to do you a favor, support some decision you propose or give you some resource you value.  You email the responder, asking him for help.  There is no reply.  Maybe he has an overactive Junk Mail filter or missed the email.  You email the responder again. No reply.  The first time round, you can tell yourself that maybe the responder just missed your request.  The second time, you realize the responder will not help you.  Saying Nothing is the same as saying “No”.

Why not just say No to begin with?  Initially, the responder hopes you do not send the second email.  Then, when the responder reverses roles and asks you for help, you will not hold an explicit No against him.  By the time the second email is sent and received, it is too late – at this point whether you respond or not, there is a “No” on the table and your relationship has taken a hit.  The sender will eventually learn that often no response means “No”.  Sending a second email, while clearing up the possibility the first non-response was an error, may lead to a worsening of the relationship between the two players.  So, the sender will weigh the consequences of the second email carefully and perhaps self-censor and never send it.

Then, Saying Nothing will certainly be better than Saying No for the responder and a communication norm is born.

My male colleagues at Kellogg are a clean-shaven, short-haired bunch. The first hypothesis is that the “business casual” atmosphere at a B-School makes a clean-cut J.Crew look focal and any deviation from it socially uncomfortable (though I have no qualms about ignoring it!). But colleagues in the Econ Dept, which is outside the B-School, also largely subscribe to this norm. Even shorts-sporting, flip-flop wearing, oldish-wannabe-surfer-economists from Southern California seem to shave daily. I can remember this pattern from grad school: the Europeans were pretty casual about shaving and the Americans were much more likely to have the clean-cut look.  There was no business casual social norm to conform to in grad school, so I don’t think that explanation carries all the water.

Another rationale for the buzz cut can be safely dismissed: if you think that sticking with short hair saves on visits to the barber, you’re wrong. For this rationale to work, you have to be willing to have long hair too; otherwise you’re going to the barber quite often to keep it short all the time.  So if you are unwilling to go long, going short keeps your barber nicely employed.

I am led then to the Jeff Van Gundy explanation:

My dad said, ‘You can’t have normal-length hair until high school.’ It was a form of discipline.

Not only is it a form of discipline, it is a signal of discipline.  You are disciplined enough to have regular haircuts and, by extension, to shave regularly.  On the other hand, Europeans are busy counter-signaling: you are undisciplined and yet do incredibly well on exams, so you must be really smart!   No wonder Europeans and Americans can have such a hard time communicating with each other.

Hmmn.  After all this analysis, I guess I still have to work out what look to adopt.  After all, some scruffy people are hirsute because they truly are undisciplined.  Gotta make sure I’m not in that group.

Ryan Avent’s self-styled populist post takes to task a rich man’s tax-conscious balance sheet dance:

As far as I can tell, this is entirely within the law. But I don’t think it’s improper to declare it obscene. Shameful, even. With a fortune of that size, additional wealth is about little more than score-keeping.

Everyone has this natural response to a rich person desiring to avoid taxes.  We all think like Ryan does:

But let’s be honest for a moment. According to this Bloomberg story, Mr Lampert is worth $3 billion. If he earns just 1% per year on that fortune—and he certainly earns much more—then he takes home $30 million in income. Per year. That’s 600 times the median household income in America. It’s more money than a person can reasonably spend. With that much money you can binge every day, and yet the money will just keep accumulating.

But you don’t have to think much longer than that to see a different side of things.  Since Mr. Rich is beyond the binge-every-day constraint, there are lots of other things he can do with his money besides bingeing.  For example, if you were Mr. Rich you could probably think of a lot of loved ones you would like to make happy by sharing your wealth with them. Or perhaps you understand that money is what determines what gets done in the world and maybe you have very strong feelings about what should get done.

Like maybe you want to be able to donate to artists or schools or libraries.  Maybe you want to help prevent HIV infection. Is it so obvious that a rich man, already beyond bingeing, who wants an extra dollar is being more greedy than a middle-class man who wants to get a dollar closer to the bingeing stage?

Let me be clear that I don’t believe that all of the Mr. Riches are trying to be Bill and Melinda Gates.  But I don’t see how you can conclude just from the fact that someone is rich that they don’t have reasons that we would be completely sympathetic to if we knew them.

And if I were a smart do-gooder who thought that everyone on Wall Street was evil the obvious thing to do would be to start a hedge fund, rip them off, and spend their money to meet my goals.

Ghutrah greeting:  gappy3000.

My first post on this topic was prompted by reading newspaper stories about Afghanistan and having lunch with Jim Robinson shortly afterwards.  (For example, Karzai is sacking trusty lieutenants and moving to form a coalition with the Taliban and perhaps Pakistan.)  But who has thought deeply about this issue and come up with some interesting insights?  The answer is of course: Roger Myerson.  He has an informal overview of his thoughts on state-building.  To understand his ideas fully, you have to read the overview.  Here are a few insights I pulled out that are most related to my earlier post.

One issue I raised was: How do you ensure political competition is constructive not destructive? Myerson says the key is that the losers in any political competition feel they have the opportunity to win a future competition.   Otherwise, what choice do they have but to compete from outside the political system and trigger conflict?

An alternative might be to install a puppet dictator who faces no competition.  But here I repeat my earlier point: this dictator will be rapacious and steal from his citizens.  To keep him in line, constructive political competition is necessary.

Myerson’s overview has his thoughts on how to build constructive national and local competition.  Again, I recommend you take a look.

Afghan security firms provide armed escorts for NATO convoys.  Some firms lost their employment because of violent incidents in which they killed civilians.  But NATO convoys then suffered greater attacks and the security firms were re-employed.  There is an obvious incentive problem:

“The officials suspect that the security companies may also engage in fake fighting to increase the sense of risk on the roads, and that they may sometimes stage attacks against competitors.

The suspicions raise fundamental questions about the conduct of operations here, since the convoys, and the supplies they deliver, are the lifeblood of the war effort.

“We’re funding both sides of the war,” a NATO official in Kabul said. The official, who spoke on the condition of anonymity because the investigation was incomplete, said he believed millions of dollars were making their way to the Taliban.”

This is a Mafia tactic: to get people to pay for protection, you have to create the demand for protection.  Supply creates its own demand.  There is also a reverse effect:  The security firms sometimes bribe the Taliban to keep away from the convoys.  With this source of steady income, the Taliban have no incentive to disband and may even have an incentive to expand.  Demand creates its own supply.

The second circle seems less pathological than the first.  If we cannot find the Taliban ourselves and kill them or bribe them to stay away from the convoys, we have to use a local security firm.  The security firm is an intermediary, adding value and generating surplus.  The first circle is destroying surplus, like the Mafia.  It is creating a public bad, a security problem, to generate a transfer.

Beyond punishing anyone who is caught planning a deliberate attack, it is hard to see any simple solution.  Fewer and fewer countries want to be involved in Afghanistan and so using our own troops is difficult.  The Taliban might prefer to be employed in the real economy.  But the main alternative to attacking NATO convoys is growing opium.  Is that any better than attack and theft?

The entire episode signals that Afghanistan is a Mafia state with leaders acting as profit maximizers, destroying surplus to capture a bigger slice of what’s left of the economic pie.   A depressing state of affairs after eight years of war.

The big news is that AT&T will be discontinuing its unlimited-use data plans effective next week, which happens to coincide with Steve Jobs’s worst-kept-secret announcement of the next-generation iPhone.  People are up in arms.

Unlimited, all-you-can-eat wireless data was a beautiful thing for Apple devices on AT&T, delivering streams of Pandora, YouTube videos, a million tweets, and hundreds of webpages without worry. And now it’s dead.

AT&T’s new, completely restructured mobile data plans for both iPhones and iPads have officially launched the era of pay-per-byte data, which we’ve known was coming. We just hoped it would take a little longer. It’s the anti-Christmas.

One thing to keep in mind is that unlimited use tariffs are not part of an efficient or profit-maximizing pricing policy whether you consider monopoly or perfect competition.  It is hard to imagine a model under which unlimited use makes sense unless there is zero marginal cost.  (If marginal cost is positive then under unlimited use your usage will typically extend past the point where your marginal value falls below marginal cost. Whatever the market structure, unlimited use would be replaced by marginal cost pricing, possibly with a reduced fixed fee.)
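To illustrate, here is a toy deadweight-loss calculation with entirely made-up numbers: a consumer whose marginal value for data declines linearly, facing a zero per-MB price, consumes past the point where value covers the (assumed positive) marginal cost.

```python
# Toy model: the consumer's marginal value of the x-th MB is v(x) = a - b*x,
# and the carrier's marginal cost is c per MB.  All numbers are invented.
a, b, c = 1.0, 0.002, 0.2

usage_unlimited = a / b        # at a zero per-MB price, use data until v(x) = 0
usage_efficient = (a - c) / b  # efficient usage stops where v(x) = c

# Surplus destroyed on the over-consumed MBs: the triangle between the
# marginal-cost line and the marginal-value curve over the overuse range.
overuse = usage_unlimited - usage_efficient
deadweight_loss = 0.5 * overuse * c

print(usage_unlimited, usage_efficient, deadweight_loss)
```

With these numbers the consumer uses 500MB under the unlimited plan where 400MB would be efficient; a per-MB price equal to marginal cost (perhaps paired with a lower fixed fee) would recover the lost surplus.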

Still the specific form of the tariff– zero per-MB cost up to some limit and then a steep price after that– annoys many people.  In fact, there are theories that show that this kind of pricing is the best way to exploit consumers who don’t accurately forecast their own usage.

But this brings me to the second thing to keep in mind.  Those exploits take advantage of people who underestimate their usage.  But here is the actual pricing menu.

I bet that you actually overestimate your usage.  I use my phone a lot for browsing the web, maps, etc. and I average under 200 MB per month.  Because some months I do go above 200MB, I will buy the 2GB plan for $25 (I don’t need tethering.)  My wife on the other hand never goes above 200MB.  So the new plan is a better deal for us.

Here’s how to check your usage.

Neil is a great businessman as well as a popular songwriter (though he’s unlucky in love and that cost him).  In an earlier post, I wondered why artists do not simply price discriminate and not let scalpers get the rents.  If they do not want to look exploitative, they can try to use some other instruments (e.g. a refund to loyal fans) to avoid just letting scalpers exploit the fans.

Another answer is that artists actually do perform price discrimination using the scalper as the intermediary:

Less than a minute after tickets for last August’s Neil Diamond concerts at New York’s Madison Square Garden went on sale, more than 100 seats were available for hundreds of dollars more than their normal face value on premium-ticket site TicketExchange.com. The seller? Neil Diamond.

Ticket reselling — also known as scalping — is an estimated $3 billion-a-year business in which professional brokers buy seats with the hope of flipping them to the public at a hefty markup.

In the case of the Neil Diamond concerts, however, the source of the higher-priced tickets was the singer, working with Ticketmaster Entertainment Inc., which owns TicketExchange, and concert promoter AEG Live. Ticketmaster’s former and current chief executives, one of whom is Mr. Diamond’s personal manager, have acknowledged the arrangement, as has a person familiar with AEG Live, which is owned by Denver-based Anschutz Corp.

Selling premium-priced tickets on TicketExchange, priced and presented as resales by fans, is a practice used by many other top performers, according to people in the industry. Joseph Freeman, Ticketmaster’s senior vice president for legal affairs, says that the company’s “Marketplace” pages only rarely list tickets offered by fans.

According to the lead singer of Nine Inch Nails:

the true market value of some tickets for some concerts is much higher than what the act wants to be perceived as charging. For example, there are some people who would be willing to pay $1,000 and up to be in the best seats for various shows, but MOST acts in the rock / pop world don’t want to come off as greedy pricks asking that much, even though the market says its value is that high. The acts know this, the venue knows this, the promoters know this, the ticketing company knows this and the scalpers really know this. So…

The venue, the promoter, the ticketing agency and often the artist camp (artist, management and agent) take tickets from the pool of available seats and feed them directly to the re-seller (which from this point on will be referred to by their true name: SCALPER). I am not saying every one of the above entities all do this, nor am I saying they do it for all shows but this is a very common practice that happens more often than not. There is money to be made and they feel they should participate in it. There are a number of scams they employ to pull this off which is beyond the scope of this note.

StubHub.com is an example of a re-seller / scalper. So is TicketsNow.com.

Of course, the danger is that the fans find out what the artist is doing – e.g. Neil Diamond’s strategy has been fully revealed thanks to the WSJ.  Either this leads to a counter-reaction or fans just get used to it and accept the new norms.  Hard to say what is happening but the Bon Jovi VIP pricing without using a scalper as a middleman suggests more fans are accepting direct price discrimination by the artist.

(Hat Tip: Troy Kravitz and Mallesh Pai)

Jonah Lehrer has a post

about why those poor BP engineers should take a break. They should step away from the dry-erase board and go for a walk. They should take a long shower. They should think about anything but the thousands of barrels of toxic black sludge oozing from the pipe.

He weaves together a few stories illustrating why creativity flows best when it is not rushed.  This is something I generally agree with and his post is a good read, but I think one of his examples needs a second look.

In the early 1960s, Glucksberg gave subjects a standard test of creativity known as the Duncker candle problem. The problem has a simple premise: a subject is given a cardboard box containing a few thumbtacks, a book of matches, and a waxy candle. They are told to determine how to attach the candle to a piece of corkboard so that it can burn properly and no wax drips onto the floor.

Oversimplifying a bit, to solve this problem there is one quick-and-dirty method that is likely to fail and then another less-obvious solution that works every time.  (The answer is in Jonah’s post so think first before clicking through.)

Now here is where Glucksberg’s study gets interesting. Some subjects were randomly assigned to a “high drive” group, which was told that those who solved the task in the shortest amount of time would receive $20.

These subjects, it turned out, solved the problem on average 3.5 minutes later than the control subjects who were given no incentives.  This is taken to be an example of the perverse effect of incentives on creative output.

The high drive subjects were playing a game.  This generates different incentives than if the subjects were simply paid for speed.  They are being paid to be faster than the others.  To see the difference, suppose that the obvious solution works with probability p and in that case it takes only 3.5 minutes.  The creative solution always works but it takes 5 minutes to come up with it.  If p is small then someone who is just paid for speed will skip the obvious solution because it is very likely to fail: trying it and failing means finishing in 8.5 minutes (3.5 wasted plus 5 for the creative solution), whereas going straight for the creative solution takes only 5.

But if he is competing to be the fastest then he is not trying to maximize his expected speed.  As a matter of fact, if he expects everyone else to try the obvious solution and there are N others competing, then the probability is 1 - (1-p)^N that the fastest time will be 3.5 minutes.  This approaches 1 very quickly as N increases.  He will almost certainly lose if he tries to come up with a creative solution.

So it is an equilibrium for everyone to try the quick-and-dirty solution, and when they do so, almost all of them (on average a fraction 1-p of them) will fail and take 3.5 minutes longer than those in the control group.
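A quick back-of-the-envelope check of this story, using the 3.5- and 5-minute figures from above and a hypothetical value of p:

```python
# Hypothetical numbers: the obvious solution works with probability p and
# takes 3.5 minutes; the creative solution always works but takes 5 minutes.
p = 0.1

# Paid purely for expected speed: trying the obvious route first means
# finishing at 8.5 minutes whenever it fails, so it is better skipped.
expected_time_try_obvious = p * 3.5 + (1 - p) * 8.5
expected_time_creative = 5.0
print(expected_time_try_obvious, expected_time_creative)

# Paid to be the fastest: if N rivals all try the obvious route, the chance
# that at least one of them finishes at 3.5 minutes rises quickly with N,
# making the creative route a near-certain loser in the contest.
for N in (1, 5, 10, 20):
    print(N, 1 - (1 - p) ** N)
```

Even with p = 0.1, by N = 10 the probability that someone wins at 3.5 minutes is already about 0.65, so racing rivals pushes everyone toward the quick-and-dirty attempt.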

Naming rights raise a lot of money.  Think of professional sports stadiums like Chicago’s own US Cellular Field  (does US Cellular still exist??)  The amazing thing to me is that when Comiskey Park changed names to “The Cell,” local media played right along and gave away free advertising by parroting the name in their daily sports roundups.  Somehow the stadium knew that this coordination/holdup problem would be solved in their favor.

We should seize on this.  But not by selling positive associations to corporations that want to promote their brand.  Instead lets brand badly-behaving corporations with negative associations.

The Exxon Valdez oil spill is a name that stuck.  Every single time public media refer to that event they remind us of the association between Exxon and the mess they made.  No doubt we will continue to refer to the current disaster as the BP Gulf spill or something like that.  That is good.

But why stop there?  (Positive) advertisers have learned that you can slip in the name of a brand before, after, and in-between just about any scripted words and call it an ad.  The Tostitos Fiesta Bowl.  The Bud Lite halftime show. The X brought to you by Y.  These are positive associations.

Think of all the negative events and experiences that are just waiting to be put to use as retribution by negative association.  “And today I am here to announce that the BP National Debt will soon reach 15 trillion dollars.”  Or “The BP recession is entering its fifth consecutive quarter with no end in sight.”

Why are we wasting hurricane names on poor innocents like Katrina and Andrew?  I say for the 2010 hurricane season we ditch the alphabetical order and line em up in order of egregiousness.   “Hurricane Blackwater devastates the Florida Coast.  Tropical Storm Halliburton kills hundreds in Central America.”

The nice thing about negative naming is that supply is virtually unlimited.  Cities don’t go selling the names of every street in town because selling the marginal street requires lowering the price.  But you can put the name of every former VP at Enron and Arthur Andersen on their own parking meter and the last one makes you want to spit just as much as the first.  Hey, what about parking tickets?  This parking ticket is brought to you by Washington Mutual.

Suddenly the inefficiency of city bureaucracy is a valuable social asset.  Welcome to the British Petroleum DMV, please take your place in line number 8.  And some otherwise low-status professions will now be able to leverage that position to provide an important public service.  “There’s some stubborn tartar on that molar, Ms. Clark, I’m going to have to use the Toyota Prius heavy-duty scaler.  You might feel some scraping. Rinse please.”

“Good Afternoon, Pleasant Meadow Morturary, will you be interested in Goldman Sachs cremation services today?” Or  “Mr. Smith we are calling to confirm your appointment for a British Petroleum colonoscopy on Monday.  Please be on time and don’t eat anything 24 hours prior.”

Just as positive name-association is a lucrative business,  these ne’er-do-wells would of course pay big money to have their names removed from the negative icons and that’s all for the better.  If the courts can place a cap on their legal liability this gives us a simple way to make up the difference.

And I am ready to do my part.  As much as I like one-word titles, Sandeep and I are going to add a subtitle to our new paper.  It’s going to be called “Torture:  Sponsored by BP.”

On the way from Brookline to Central Square in Cambridge to go to Toscanini’s, we turned onto Hampton St to avoid roadwork and found the Myerson Tooth Corporation:

Next door is the Good News Garage owned by Click and Clack of NPR fame.

Rand Paul, referring to criticism of BP’s handling of the oil spill says

“What I don’t like from the president’s administration is this sort of, ‘I’ll put my boot heel on the throat of BP,'” Paul said in an interview with ABC’s “Good Morning America.” “I think that sounds really un-American in his criticism of business.”

“And I think it’s part of this sort of blame-game society in the sense that it’s always got to be somebody’s fault instead of the fact that maybe sometimes accidents happen,” Paul said.

This is symptomatic of the perennial time-inconsistency problem that comes with incentives for good behavior.  The incentives are structured so that when bad outcomes occur, BP will be punished.  If the incentive scheme works then BP acts in good faith and then it is true that bad outcomes are just accidents. The problem is that when the accidents happen it is true that BP was acting in good faith and so they don’t deserve punishment.  And if doling out the punishment requires political will then the political will is not there.  After all, who is going to stand up and demand that BP be punished for an accident?

This is the unraveling of incentives: the incentive worked only because BP expected to get punished whether or not it was an accident.  To prevent this unraveling, it is the politician’s job to stir up outrage, justified or not, in order to reignite the political will to dole out the punishment.  The blame game is a valuable social convention whether or not you believe there is someone to blame.
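A stripped-down sketch of the commitment problem, with invented numbers: a fine deters costly care only if the firm expects to pay it even when the spill is an “accident.”

```python
# Invented numbers: care costs 5 and cuts the spill probability from 0.20
# to 0.05; the announced fine for a spill is 100.
care_cost = 5.0
p_spill = {True: 0.05, False: 0.20}   # spill probability given care choice
fine = 100.0

def expected_cost(care, fine_actually_paid):
    penalty = fine if fine_actually_paid else 0.0
    return (care_cost if care else 0.0) + p_spill[care] * penalty

# Commitment: the fine is paid even when the spill is an "accident".
print(expected_cost(True, True), expected_cost(False, True))    # 10.0 20.0
# No commitment: accidents are forgiven ex post, so care is a pure cost.
print(expected_cost(True, False), expected_cost(False, False))  # 5.0 0.0
```

Under commitment, care is the cheaper choice (10 vs 20); once accidents are forgiven ex post, care costs 5 and buys nothing, so the firm rationally skips it.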

Affirmative action in hiring is more controversial than it has to be because of the way it is typically framed.  People who agree with the general motivation object to specific implementations like racial preferences and quotas because of their blunt nature.

Any affirmative action hiring policy entails a compromise because it mandates a distortion away from the employer’s unconstrained optimal practice.  We should look for ways that achieve the goals of affirmative action but with minimal distortions.

One simple idea is turn away from policies that incentivize hiring and instead incentivize search.  Suppose that the employer believes that 10% of all candidates are qualified for the job but that only 5% of all minorities are qualified.  Imposing a quota on the number of minority hires is less flexible than a quota on the number of minorities interviewed.

Requiring the employer to interview twice as many minority candidates equalizes the probability that the most qualified candidate is a minority or non-minority. Across all employers using this policy, the fraction of minority employees will hit the target.  But each individual employer is free to hire the most qualified candidate among the candidates identified so the allocation of workers is more efficient than would be achieved with a straight hiring quota.
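Here is a small simulation of that argument, using the hypothetical rates above (10% and 5%), with binary “qualified or not” candidates and the hire drawn uniformly from the qualified interviewees:

```python
import random

# Hypothetical rates from the post: 10% of non-minority candidates are
# qualified, 5% of minority candidates, so interviewing twice as many
# minorities equalizes the expected number of qualified interviewees.
q_nonmin, q_min = 0.10, 0.05
n_nonmin, n_min = 10, 20               # interview slots (2x for minorities)

print(n_nonmin * q_nonmin, n_min * q_min)   # 1.0 1.0 -- equal on average

# Simulate hiring: model qualification as binary and hire uniformly among
# the qualified interviewees (ties broken at random).
random.seed(1)
hires_minority = hires_total = 0
for _ in range(100_000):
    qual_n = sum(random.random() < q_nonmin for _ in range(n_nonmin))
    qual_m = sum(random.random() < q_min for _ in range(n_min))
    if qual_n + qual_m > 0:
        hires_total += 1
        if random.random() < qual_m / (qual_n + qual_m):
            hires_minority += 1
print(hires_minority / hires_total)    # close to 0.5
```

In the simulation the hire is a minority roughly half the time, while each employer remains free to hire the best candidate it actually interviewed.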

I coach my 7-year-old daughter’s soccer team.  It’s been a tough Spring season so far: they lost the first three games by 1 goal margins.  But this week they won something like 15-1.

I noticed something interesting.  In all of the close games the girls were emotionally drained. By the end of the game they didn’t have much energy left.   Many of them asked to be rotated out.

But this week nobody asked to be rotated out.  In fact this week they had the minimum number of players so each of them played the whole game and still nobody complained of being tired.  Obviously they were having fun running up the score but they didn’t get tired.

Incentives are about getting players to want conditions to  improve.  So incentives necessarily make them less happy about where they are now.  Feeling good about winning means feeling bad about not winning.  That’s the motivation.

But encouragement is about being happy about where you are now.  And it has real effects:  it energizes you.  You don’t get tired so fast when you are having fun.

There is a clear conflict between incentives and encouragement.  At the same time incentives motivate you to win, they discourage you because you are losing.  A coach who fails to recognize this is making a big mistake.

And I am not giving a touchy-feely speech about “it’s not whether you win or lose…”  I am saying that a cold-hearted coach who only cares about winning should, at the margin, put less weight on incentives to win.

If my daughter’s team loved losing, is it possible they would lose less often?  Probably not.  But that’s because the love of losing would give them an incentive to lose.  They would be discouraged when they win but that would only help them to start losing.  (Unless the opposing coach used equally insane incentives.)

Nevertheless, to love winning by 10 goals is a waste of incentive and is therefore a pure cost in terms of its effect on encouragement when the game is close.  Think of it this way:   you have a fixed budget of encouragement to spread across all states of the game.  If you make your team happy about winning by 10 goals,  that directly subtracts from their happiness about winning by only 1 goal.

My guess is that, against a typically incentivized opponent, the optimal incentive scheme is pretty flat over a broad range. That range might even include losing by one goal.  Because when the team is losing by one goal, the positive attitude of being in the first-best equivalence class will keep them energized through the rest of the game and that’s a huge advantage.