
The MILLTs at Spousonomics are calling on spouses to look for Pareto improvements in our marital transactions.  Paula offers this list for her husband on Valentine’s day.

1. Help with garbage night.
2. Join you in the 30-day meditation challenge.
3. Not remind you when you have to make up a work shift at the food coop.
4. Use my Petzl head lamp when I’m reading in bed and you’re already asleep.
5. Work on my tone of voice when I’m frustrated.
6. Pick my battles.
7. Entertain notion that my way isn’t the only way.
8. Try again to make braised pork shoulder.
9. Give Sonny & the Sunsets another chance.
10. Let things go.

I’m as keen on free lunches as the next guy (I’m looking at you, Asher), but at the risk of throwing cold water on Paula’s Valentine’s Day overtures, let me bring a little dose of tradeoffs to this home economics lesson.  First of all, Paula is shortchanging her generosity on many of these, because very few of them are literal Pareto improvements.  Garbage night?  Who isn’t better off keeping their hands clean, not to mention taking a pass on the sub-freezing walk to the curb?  And I am not sure what exactly a Petzl head lamp is, but I’d be worried about waking up to the fragrance of molten hair after dozing off with one of those on.

No, those are genuine sacrifices.  Indeed, Pareto improvements are pretty hard to come by even if you are otherwise a selfish pig.  Especially if you are a selfish pig.  Because as long as you are already doing everything that would make you better off, the only room left for Pareto improvements is spanned by the knife’s edge of indifference.

There is a second category represented on the list:  proposals that take a long-run view. These are a bit more subtle.  Give Sonny and the Sunsets another chance.  This qualifies as a Pareto improvement even though the implicit suggestion is that Sonny and the Sunsets didn’t cast a warm glow the first time. If the clouds part for Paula the second time around then she and her husband are both better off.  But again Paula’s pure self interest already takes care of this one so long as she’s thinking ahead.  Anyway, if even Sonny and the Sunsets can grow on us after a few listenings then anything can.  Why not just spend 30 days meditating?  Oh wait…

And let’s not forget that a Pareto improvement has to make the other party better off, at least weakly.  Given what we can all infer from the pledge itself, “Try again to make braised pork shoulder” seems to fail on that count.

Then there’s the issue of narrow framing.  Pareto efficiency for the household may entail violence against the rest of the world.  Not reminding him about the food co-op shift is nice, but what about the poor slobs waiting for their food at the co-op?  Heck, why not replace this one with “Encourage you to pilfer more food from the co-op?”

The last set of proposals all relate to improving conflict resolution.  Most appear superficially to be obvious Pareto improvements.  Work on my tone of voice when I am frustrated.  Paula is probably truly indifferent to how her own voice sounds when she’s frustrated, but I would bet that her husband has a clear preference.  So this does seem to require a little more than pure self-interest to implement.  “Let things go” is another.

But it’s for exactly this kind of household constitutional amendment that the logic of Pareto efficiency can be turned on its head.  The concept of renegotiation in repeated games holds a key lesson.  Marriage is a partnership that requires individual sacrifice in order to reach the efficient frontier.  The temptation to cheat on the relationship must be deterred with the threat of moving below the frontier as a reprisal.  Once there it is tempting to renegotiate back to the frontier.  But as soon as we get used to doing that, the incentive keeping us at the frontier in the first place goes away.
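To put rough numbers on that logic (the payoffs and discount factor here are hypothetical, not anything from the post): in a repeated prisoner’s dilemma, the threat of a punishment phase deters cheating, but if the players can always renegotiate their way back to the frontier, the deterrent evaporates.

```python
# Hypothetical stage payoffs for a repeated prisoner's dilemma:
# mutual cooperation = 3 each, a one-shot unilateral defection = 5,
# the punishment phase after a defection = 1 each.
delta = 0.8  # discount factor

cooperate_forever = 3 / (1 - delta)

# Defect once, then live in the punishment phase forever:
deviate_punished = 5 + delta * 1 / (1 - delta)

# Defect once, then renegotiate straight back to the cooperative frontier:
deviate_renegotiated = 5 + delta * 3 / (1 - delta)

print(round(cooperate_forever, 2))     # 15.0
print(round(deviate_punished, 2))      # 9.0
print(round(deviate_renegotiated, 2))  # 17.0
# Punishment deters cheating (15 > 9), but if punishment is always
# renegotiated away, cheating pays (17 > 15).
```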

Best not to “Let Go” so quickly, Paula.  Sorry Mr. Paula.  Try to have a Happy Valentine’s day anyway.

Actions speak louder than words.  Anarchists seeking to spread revolution resort to extreme acts hoping to stir the sympathy of the general population.  Would-be change-agents differ in their favored instrument of provocation – assassination, bombings or general strike.  They are united by their intrinsic lack of real power.  The only way they can hope to achieve their ends is by persuading other players to react and indirectly give them what they want.  As such, the “propaganda of the deed” is practiced typically by people on the fringe of society, not in the corridors of power.  (See my paper The Strategy of Manipulating Conflict with Tomas Sjöström for illustrations of this strategy.)

But Mubarak has reached this lowly state even as President of Egypt.  He has conspicuously lost popular support and tensions long suppressed have burst asunder for all to see.  He has lost the support of “the people” and, perhaps even more importantly, the army.  What can he do to get it back?  The anti-Mubarak protestors have till recently refrained from looting and mob mentality has been notable for its absence.  As long as that remains the case, the army and the people are siding with the anti-Mubarak protestors or largely staying out of the fray.  Mubarak’s only hope is to get the people and the army to pick his side.  He needs to energize the mob and trigger looting.  That is his strategy.  Police disappeared from the streets of Cairo a few days ago, inviting looters to run amok.  That did not work.  So, now he has employed pro-Mubarak “supporters” to fight anti-Mubarak protestors.  Open fighting on the streets of Cairo, prodding the army to step in.  The people scared of the outbreak of lawlessness turning to the strongman Mubarak to return some semblance of stability to the city and the country.  This is where we are in the last couple of days.  Another obvious strategy for Mubarak: Get his supporters to loot and pin it on the anti-Mubarak protestors.  Not sure if that is happening yet.

What can be done to subvert the Mubarak strategy?  For the protestors, the advice is obvious – no looting, no breakdown of law and order.  The primary audience is the army and people – keep them on your side.  For the Obama administration there is little leverage over Mubarak.  I assume he has hidden away millions if not billions – cutting off future aid has little chance of persuading Mubarak to do anything.  Again, the army is the primary audience for the Obama administration.  Whichever side they pick will win.  The army cares more about the cutoff of future aid than Mubarak.  They have trained in US military schools and have connections here.  The only leverage the Obama administration has is over the army and it is hard to tell how strong that leverage is.

Malcolm Gladwell is cynical about the ability of social media to facilitate activism:

The platforms of social media are built around weak ties. Twitter is a way of following (or being followed by) people you may never have met. Facebook is a tool for efficiently managing your acquaintances, for keeping up with the people you would not otherwise be able to stay in touch with. That’s why you can have a thousand “friends” on Facebook, as you never could in real life

If Twitter is only identifying people with weak preferences for activism, the “revolution will not be tweeted”.  But there is a second, countervailing effect created by network externalities, studied in Gladwell’s book The Tipping Point.  An individual’s cost of participating in a revolution is a function of how many other people are involved.  For example, the probability that an individual gets arrested is smaller the larger the number of people surrounding him in a demonstration.  Even if Twitter in the first instance does not increase the number of people participating in a demonstration, it does create common knowledge about where they are meeting and when.  The marginal participant in the absence of common knowledge strictly prefers to participate with Twitter-common-knowledge.  Now more individuals will join as the demonstration has gotten a bit bigger, and so on.  The twitting point is reached and we have a bigger chance of revolution.  Now, let me go to Jeff’s twitter feed and see what he is plotting in his takeover of the NU Econ Dept.
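The cascade logic can be sketched with a Granovetter-style threshold model (the thresholds below are illustrative, not from the post): each person joins once enough others have, and common knowledge of even a small committed group can tip everyone in.

```python
# Granovetter-style threshold cascade (thresholds are illustrative).
# Person i joins the demonstration once at least thresholds[i] others are out.
def cascade(thresholds, committed):
    out = committed
    while True:
        new = sum(1 for t in thresholds if t <= out)
        if new == out:
            return out
        out = new

thresholds = list(range(10))      # one activist, then a chain of followers
print(cascade(thresholds, 0))     # 10: the chain tips everyone in

cautious = [2] * 10               # everyone waits for two others
print(cascade(cautious, 0))       # 0: nobody moves first
print(cascade(cautious, 2))       # 10: common knowledge of 2 committed tips all
```

The last line is the Twitter effect: the same population that stays home in ignorance turns out in full once the small committed core is common knowledge.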

The government in Egypt is cutting off communications networks, including mobile phones and the Internet.

The decision to get out and protest is a strategic one.  It’s privately costly and it pays off only if there is a critical mass of others who make the same commitment.  It can be very costly if that critical mass doesn’t materialize.

Communications networks affect coordination.  Before committing yourself you can talk to others, check Facebook and Twitter, and try to gauge the momentum of the protest.  These media aggregate private information about the rewards to a protest, but it’s important to remember that this cuts two ways.

If it looks underwhelming you stay home, go to work, etc.  And therefore so does everybody who gets similar information as you.  All of you benefit from avoiding protesting when the protest is likely to be unsuccessful.  What’s more, in these cases even the regime benefits from enabling private communication, because the protest loses steam.

Now consider the strategic situation when your lines of communication are cut and you are acting in ignorance of the will of others.  The first observation is that in those cases when the protest would have fizzled, without advance knowledge of this, many people will go out and protest.  Many are worse off, including the regime.

The second observation is that even in those cases when protest coordination would have been amplified by private communication, shutting down communication may nevertheless have the same effect, perhaps even a stronger one.  There are two reasons for this.  First, the regime’s decision to shut down communications networks is an informed one.  They wouldn’t bother taking such a costly and face-losing move if they didn’t think that a protest was a real threat.  The inference, therefore, when you are in your home and you can’t call your friends and the internet is shut down, is that the protest has a real chance of being effective.  The signal you get from this act by the regime substitutes for the positive signal you would have gotten had they not acted.

The other reason is that this signal is public.  Everyone knows that everyone knows … that the internet has been shut down.  Instead of relying on the noisy private signal that you get from talking to your friends, now you know that everybody is seeing exactly the same thing and is emboldened in exactly the same way.

It’s as if the regime has done the information aggregation for you and packaged it into a nice fat public signal.  This removes a lot of the coordination uncertainty and strengthens your resolve to protest.
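Here is a toy simulation of the two information regimes (all the distributional assumptions are mine): with private noisy signals, two would-be protesters regularly miscoordinate; a single public signal makes miscoordination impossible.

```python
import random

random.seed(0)

def miscoordination_rate(public, trials=50_000, noise=1.0):
    """Two would-be protesters each act iff their signal about the strength
    of the protest (the state theta) is positive. Miscoordination means
    exactly one of them turns out."""
    mis = 0
    for _ in range(trials):
        theta = random.gauss(0, 1)
        if public:
            s1 = s2 = theta + random.gauss(0, noise)  # one shared signal
        else:
            s1 = theta + random.gauss(0, noise)       # independent noise
            s2 = theta + random.gauss(0, noise)
        mis += (s1 > 0) != (s2 > 0)
    return mis / trials

print(miscoordination_rate(public=False))  # ≈ 1/3: private signals disagree
print(miscoordination_rate(public=True))   # 0.0: same signal, same action
```

The public signal doesn’t have to be accurate to do its coordinating work; it only has to be the same for everyone.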

Addendum: Tyler has some related observations.

There is pressure for filibuster reform in the Senate.  Passing the threshold of sixty votes to even hold a vote was hard in the last couple of years when the Democrats had a large majority.  It’s going to be near impossible now that their ranks are smaller.  Changing the rules has a short-run benefit – easier to get stuff passed – but a long-run cost – the Republicans will use the same rules to pass their legislation when Sarah Palin is President.  Taking the long view, the Democrats decided not to go this route.

By the same token, the kind of delaying tactics that did not work in the lame duck session are an efficiency loss – they had little real effect on legislation but delayed the Senators taking the kind of long holidays they are used to.  Some movement on delaying tactics is mutually beneficial.  And so, according to the NYT:

“Mr. Reid pledged that he would exercise restraint in using his power to block Republicans from trying to offer amendments on the floor, in exchange for a Republican promise to not try to erect procedural hurdles to bringing bills to the floor.

And in exchange for the Democratic leaders agreeing not to curtail filibusters by means of a simple majority vote, as some Democratic Senators had wanted to do, Senator Mitch McConnell of Kentucky, the Republican leader, said he would refrain from trying that same tactic in two years, should the Republicans gain control of the Senate in the next election.”

I am teaching a new PhD course this year called “Conflict and Cooperation”. The title is broad enough to include almost anything I want to teach.  This is an advantage – total freedom! – but also a problem – what should I teach?  The course is meant to be about environments with weak property rights, where one player can achieve surplus by stealing it rather than creating it.  To give the lectures some structure, I have adopted Hobbes’s theory of conflict.  Hobbes says the three sources of conflict are greed, fear and honour.  The solution is to have a government or Leviathan which enforces property rights.

Perhaps reputation models à la Kreps-Milgrom-Roberts-Wilson come closest to offering a game theoretic analysis of honour (e.g. altruism in the finitely repeated prisoner’s dilemma). But I will only do these if I get the time as this material is taught in many courses.  So, I decided to begin with greed.

I started with the classic guns vs butter dilemma: why produce butter when you can produce guns and steal someone else’s butter?  This incentive leads to two kinds of inefficiency: (1) guns are not directly productive and (2) surplus is destroyed in war waged with guns.  The second inefficiency might be eliminated via transfers (the Coase Theorem in this setting). This still leaves the first inefficiency which is similar to the underinvestment result in hold-up models in the style of Grossman-Hart-Moore.  With incomplete information, there can be inefficient war as well.  A weak country has the incentive to pretend to be tough to extract surplus from another.  If its bluff is called, there is a costly war. (Next time, I will move this material to a later lecture on asymmetric information and conflict as it does not really fit here.)
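A minimal numerical sketch of the guns-vs-butter tradeoff, using a Tullock contest success function as an illustrative assumption (the functional forms are mine, not from the lectures):

```python
# A bare-bones guns-vs-butter game. Each of two players splits an endowment
# of 1 between guns g and butter 1-g; the total butter is then divided in
# proportion to guns (a Tullock contest), or split equally if nobody arms.
def payoff(g_i, g_j):
    butter = (1 - g_i) + (1 - g_j)
    share = 0.5 if g_i + g_j == 0 else g_i / (g_i + g_j)
    return share * butter

grid = [k / 100 for k in range(101)]

def best_response(g_j):
    return max(grid, key=lambda g: payoff(g, g_j))

g = 0.9                      # start from a heavily armed guess
for _ in range(50):          # iterate best responses to a fixed point
    g = best_response(g)

print(g)                     # 0.5: half the endowment is wasted on guns
print(payoff(g, g))          # 0.5 each, versus 1.0 each with no guns at all
```

The equilibrium exhibits exactly the first inefficiency: guns are privately optimal but socially pure waste, halving everyone’s butter relative to the first best.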

These models have bilateral conflict. If there are many players, there is room for coalitions to form, pool guns, and beat up weaker players and steal their wealth. What are stable distributions of wealth? Do they involve a dictator and/or a few superpowers? Are more equitable distributions feasible in this environment? It turns out the answer is “yes” if players are “far-sighted”. If I help a coalition beat up some other players, maybe  my former coalition-mates will turn on me next. Knowing this, I should just refuse to join them in their initial foray. This can make equitable distributions of wealth stable.

I am writing up notes and slides as I am writing a book on this topic with Tomas Sjöström.  Here are some slides.

It appears to be intended for use with illustrative experiments in an undergraduate-level course in game theory.

This site is based on the perception of game theory as the study of a set of considerations used by individuals in strategic situations. Models are not seen as depictions of how individuals actually play game-like situations and are not meant to be used as the basis for a recommendation on how to play real “games”.  My goal as a teacher is to deliver a loud and clear message that game theoretic models are not meant to supply predictions of strategic behavior in real life.


Illinois governor Pat Quinn is considering whether to sign into law a tax bill that includes a new tax on online retailers, the so-called Amazon Tax.  Until now, online transactions have not been taxed in states where the retailer has no physical presence (with a few exceptions).  The new measure would end this in Illinois, treating Amazon as an Illinois retailer so long as one of its online affiliates is based in the state.  (Every state has thousands of online affiliates.)

Amazon is responding by playing chicken.  From Presh Talwalker:

So Amazon is fighting back at Illinois with a threat. Amazon has emailed its commissioned affiliates the following message:

We regret to inform you that the Illinois state legislature has passed an unconstitutional tax collection scheme that, if signed by Governor Quinn, would leave Amazon.com little choice but to end its relationships with Illinois-based Associates. [emphasis mine]

The following logic seems to explain the motive. If Amazon ends its affiliate relationships in Illinois, then it would have no physical presence in the state, and hence it would get around the bill.

The email levies harsh criticism at Illinois and is meant to garner sympathy. In reality, the move is calculated and strategic.

Amazon is threatening all affiliates on purpose – even though it doesn’t have to. Here is an interesting tidbit the Chicago Tribune reported:

The bill applies only to affiliates that have at least $10,000 a year in revenue. But if large retailers, such as Amazon, cut off all affiliates in Illinois, it would end commission streams to small Web sites, such as bloggers, who might sell Amazon goods at their sites. Amazon could not be reached for comment.

Amazon is playing a classic retaliatory strategy. If Illinois wants to pass this law, then Amazon will do everything to hurt the state, even hurting otherwise innocent and small-time bloggers, who might decide it’s time to complain to Gov. Pat Quinn.

There’s more in Presh’s article here. (Amazon seems to understand reputation building because it carried through with its threat in Colorado when that state passed a similar measure.)

My view is that the threat is credible even ignoring reputation-building.  The lost revenue from sales tax would dwarf the losses from cutting off affiliates.

Navy Captain Owen Honors was relieved of his command of the USS Enterprise. This is the guy behind the viral videos that made the news this week.

I want to blog about the news coverage of the firing. For example, this Yahoo! News article has the headline “Navy Firing Over Videos Raises Questions Of Timing.”  Here is the opening paragraph:

The Navy brusquely fired the captain of the USS Enterprise on Tuesday, more than three years after he made lewd videos to boost morale for his crew, timing that put the military under pressure to explain why it acted only after the videos became public.

Two observations:

  1. Sadly, it does make perfect sense to respond to his firing now by complaining that he wasn’t fired earlier.  (And it would make sense to complain less had he not been fired at all.)  The firing now reveals that his behavior crossed some line about which the Navy has private information.  Now that we know he crossed that line, we have good reason to ask why he wasn’t punished earlier.
  2. Obviously that fact implies that it is especially difficult for the Navy to fire him now, even if they think he deserves to be fired.

The more general lesson is that there is tragically too little reward for changing your mind, due to social forces that are perfectly rational and robust.  The argument that a mind-changer is someone who recognizes his own mistakes and is mature enough to reverse course cannot win out over the label of “waffler” or some other pejorative.  And the force is especially strong when it comes to picking a leader.

 

Of this (via MR):

It’s an auction conducted at the airport terminal.  In this auction you are a seller and you are bidding to sell your ticket back to the airline.

Optimists look at this and contemplate the efficiency gains:  this is a mechanism for appropriately allocating scarce space on the plane. Pessimists detect a nasty incentive:  now that the lowest bidder can be bought off the plane the airline has a stronger incentive to overbook.

The pessimists are right precisely because the optimists are right too.

Consider standard airline pricing with no overbooking.  You buy a ticket in advance for a flight next month.  Lots of uncertain details are resolved between now and then which determine your actual willingness to pay to fly on the departure date.  One month in advance you can only form an expectation of this and that expected value is your willingness to pay for a seat in advance.

This is inefficient.  Because, after the realization of uncertainty it could be that your value for flying is lower than somebody else who didn’t buy a ticket. Efficiency dictates that you should sell your ticket to him on the day of the flight.

One way to implement this is to hold an auction on the day of departure.  Put aside the issue that flyers want advance booking for planning reasons.  Even without that incentive, just-in-time auctions solve the inefficiency problem with conventional pricing but airlines would never use them.

The reason is that an auction leaves bidders with consumer surplus (or, in the parlance of information economics, information rents).  As a simple example, suppose there is a single seat available on the flight and two bidders are bidding for it.  An optimal auction is (revenue-equivalent to) a second-price auction, so that the winning bidder’s price is equal to the willingness to pay of the second-highest bidder.  That is lower than the winner’s willingness to pay and the difference is his consumer’s surplus.
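A quick Monte Carlo of that example, under the assumption (mine, for concreteness) that the two values are uniform on [0, 1]:

```python
import random

random.seed(1)

# Two bidders with values drawn uniformly on [0, 1]. In a second-price
# auction the winner pays the loser's value and keeps the difference
# as his information rent.
n = 200_000
revenue = rent = 0.0
for _ in range(n):
    v1, v2 = random.random(), random.random()
    revenue += min(v1, v2)   # price paid = second-highest value
    rent += abs(v1 - v2)     # winner's value minus the price

print(round(revenue / n, 2))  # ≈ 0.33: seller's expected revenue
print(round(rent / n, 2))     # ≈ 0.33: expected consumer surplus left behind
```

A full third of the surplus stays with the winning bidder, which is exactly what the airline would like to claw back.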

The airline would like to achieve the efficient allocation without leaving you this consumer’s surplus.  That is impossible in a spot-auction because the airline can never know exactly how much you are willing to pay and charge you that.

But a hybrid pricing mechanism can implement the efficient allocation and capture all the surplus it generates.  And this hybrid pricing mechanism entails overbooking followed by a departure-day auction to sell back excess tickets.

The basic idea is standard information economics.  The reason you get your information rents in the spot auction is that you have an informational advantage:  only you know your realized willingness to pay.  To remove that informational advantage the airline can charge you an entrance fee to participate in the auction before your willingness to pay is realized, i.e. a month in advance as in conventional pricing.

Here is how the scheme works in the simple example.  There is one seat available.  Instead of selling that single seat to a single passenger, the airline sells two tickets.  Then, on the day of departure an auction is held to sell back one ticket to the airline.  The person who “wins” this auction and makes the sale will be the person with the lowest realized value for flying.  The other person keeps their ticket and flies.  On auction day, the winner gets some surplus:  the price he will receive is the willingness to pay of the other guy which is by definition higher than his own.  (Delta is apparently using a first-price auction, but by revenue equivalence the surplus is the same.)

But in order to get the opportunity to compete in this auction you have to buy a ticket a month in advance.  And at that time you don’t know whether you are going to win the auction or fly.  The best you can do is calculate your expected surplus from participating in that auction and you are willing to pay the airline that much to buy a ticket. Your ticket is really your entrance pass to the auction. And the price of that ticket will be set to extract all of your expected surplus.
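Putting rough numbers on the scheme, again assuming (my simplification) two symmetric travelers whose departure-day values are uniform on [0, 1] and one seat:

```python
import random

random.seed(2)

# The airline sells two tickets in advance, then buys one back in a
# second-price sellback auction: the low-value holder sells the seat back
# and is paid the other holder's value.
n = 200_000
e_max = sum(max(random.random(), random.random()) for _ in range(n)) / n

# Either way a ticket holder ends up with the higher value: the high type
# flies (worth the max), the low type is paid the max. So a ticket is worth
# E[max] ex ante, and its advance price is bid up to exactly that.
ticket_price = e_max
profit = 2 * ticket_price - e_max   # two tickets sold, one sellback payment

print(round(ticket_price, 2))  # ≈ 0.67 = E[max of two U[0,1]] = 2/3
print(round(profit, 2))        # ≈ 0.67, beating the 0.5 from selling the
                               # one seat in advance at E[value] = 1/2
```

The seat always goes to the high-value traveler (efficiency), and the airline’s profit rises from 1/2 to 2/3 while the travelers get zero expected surplus.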

Note that the only way that the airline can achieve these efficiency gains and the accompanying increase in profits is by overbooking at the stage of ticketing.  So the pessimists are right.

(You can write down a literal model of all of the above.  The conclusion that all of your surplus is extracted would follow if travelers were ex ante symmetric:  they all have the same expected willingness to pay at the time of ticketing.  But the general conclusion doesn’t require this:  all of the efficiency gains from adding a departure-day sellback auction will be expropriated by the airline.  That follows from a beautiful paper by Eso and Szentes.  To the extent that fliers retain some consumer surplus it is due to ex ante differences in expected willingness to pay.  The two fliers with the highest expected surplus will buy tickets at a price equal to the third-highest expected surplus.  This consumer surplus is already present in conventional pricing.)

Bad review < Good review < No review at all:

S. Irene Virbila, the L.A. Times’ restaurant critic for the last 16 years, was visiting Red Medicine restaurant in Beverly Hills on Tuesday night when she was approached by managing partner Noah Ellis, who took Virbila’s picture without her permission and then ordered Virbila and her three companions to leave, refusing them service.

Ellis posted her picture on the restaurant’s Tumblr site, explaining that she was not welcome there.

The LA Times food blog has the story.  Other blogs have the picture.  The Times is undeterred.

The Times will continue with its plans to review Red Medicine. The restaurant was chosen for review, Parsons said, because of its pedigree – Ellis has worked in the past with noted chef and restaurateur Michael Mina. And, Parsons added, “We had hopes that they would be doing interesting things with Southeast Asian food. We will still review them.”

It sounds so simple:  you’re nice you make the list, you’re naughty you get a stocking full of coal.  But just how much of the year do you have to be nice?

It would indeed be simple if Santa could observe perfectly your naughty/nice intentions.  Then he could use the grim ledger:  you make the list if and only if you are nice all 365 days of the year.  But it’s an imperfect world.  Even the best intentions go awry.  Try as you may to be nice there’s always the chance that you come off looking naughty due to misunderstandings or circumstances beyond your control.  Just ask Rod Blagojevich.

And with 365 chances for misunderstanding, the grim ledger makes for a mighty slim list come Christmas Eve.  No, in a world of imperfect monitoring, Santa needs a more forgiving test than that. But while it should be forgiving enough to grant entry to the nice, it can’t be so forgiving that it also allows the naughty to pass. And then there’s that dreaded third category of youngster:  the game theorist who will try to find just the right mix of naughty and nice to wreak havoc but still make the list.  Fortunately for St. Nick, the theory of dynamic moral hazard has it all worked out.

There exists a number T between 0 and 365 (the latter being a “sufficiently large number of periods”) with three key properties:

  1. The probability that a truly nice boy or girl comes out looking nice on at least T days is close to 100%,
  2. The probability that the unwaveringly naughty gets lucky and comes out looking nice for T days is close to 0%,
  3. If you are being strategic and you are going to be naughty at least once,  then you should go all the way and be unwaveringly naughty.

The formal statement of #3 (which is clearly the crucial property) is the following.  You may consider being naughty for Z days and nice for the remaining 365-Z days, and if you do, your payoff has two parts. First, you get to enjoy being naughty for Z days.  Second, you have a certain probability of making the list.  Property #3 says that the total expected payoff is convex in Z.  And with a convex payoff you want to go to extremes, either nice all year long or naughty all year long.

And given #1 and #2, you are better off being nice than naughty.  One very important caveat though.  It is essential that Santa never let you know how you are doing as the year progresses.  Because once you know you’ve achieved your T you are in the clear and you can safely be naughty for the remainder.  No wonder he’s so secretive with that list.
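A small simulation makes the convexity concrete (all the probabilities and payoffs below are made-up numbers, not anything from the theory):

```python
import random

random.seed(3)

# Hypothetical numbers: a nice day *looks* nice with prob 0.9, a naughty day
# with prob 0.2; Santa's test is to look nice on at least T = 300 of 365 days.
P_NICE, P_NAUGHTY, DAYS, T = 0.9, 0.2, 365, 300
LIST_REWARD, FUN_PER_NAUGHTY_DAY = 100.0, 0.1

def expected_payoff(z, trials=5_000):
    """Expected payoff from being naughty on z days and nice on the rest."""
    made_list = 0
    for _ in range(trials):
        looks_nice = sum(random.random() < P_NICE for _ in range(DAYS - z))
        looks_nice += sum(random.random() < P_NAUGHTY for _ in range(z))
        made_list += looks_nice >= T
    return z * FUN_PER_NAUGHTY_DAY + LIST_REWARD * made_list / trials

pay = {z: expected_payoff(z) for z in (0, 180, 365)}
print(pay[0], pay[180], pay[365])
# ≈ 100.0, 18.0, 36.5 -- the half-and-half strategist does worst of all:
# the payoff is convex in z, so only the extremes are candidates, and given
# properties #1 and #2 the nice extreme wins.
```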

(The classic reference is Radner. More recently these ideas are being used in repeated games.)

Facebook, Buzz, Reader, and other social networking sites all have one thing in common:  if you like something then you get to like it.  But you never get to dislike what you dislike.  (Sure you can unlike what you previously liked, but just as with that other interest rate you are constrained by the zero lower bound.  You can’t go negative.)

This kind of system seems to pander to people such as me who obsessively count likes (and twitter followers, and google reader subscribers and…) because for people like us even a single dislike would be devastating. With only positive feedback possible we are spared the bad news.

But after a while we start to get the nagging suspicion that the lack of a like is tantamount to being disliked. We put ourselves in the mind of each individual reader.  If she liked it then she will like it.  If she didn’t like it, she would like to dislike it but she can’t.  So she’s silent.  But then if she was neutral she now knows that by being silent she is going to be pooled with the dislike haters.  She doesn’t want to hurt my feelings so she likes. Kindhearted but cruel:  now I know that everyone who didn’t like indeed didn’t like.  It’s exactly as if there were a dislike button.  Despair.

But wait.  One wrinkle saves our fragile ego.  Some people are just too busy to like.  Or they don’t know about the like button.  And who knows exactly how many people read the article anyway.  So a non-like could be any one of these. Which means that kindhearted neutrals can safely stay on the sidelines and pool with these non-participants. A pool big enough to drown out the haters. Joyful noise! And as a bonus I get to know for sure that the likers are likers and not just patronizers.
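A back-of-the-envelope Bayesian version of this, with a made-up mix of reader types:

```python
# Hypothetical reader mix: 30% liked, 30% neutral, 20% disliked, and 20%
# never see the like button at all. Likers click; everyone else is silent.
p_like, p_neutral, p_dislike, p_unaware = 0.3, 0.3, 0.2, 0.2

# If everyone saw the button, silence would pool neutrals with the haters:
posterior_without_pool = p_dislike / (p_neutral + p_dislike)

# With the unaware in the pool, silence is much weaker evidence of dislike:
posterior_with_pool = p_dislike / (p_neutral + p_dislike + p_unaware)

print(round(posterior_without_pool, 2))  # 0.4
print(round(posterior_with_pool, 2))     # 0.29
```

The bigger the crowd of oblivious non-participants, the closer the posterior gets to the comforting prior.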

Finally, there’s the personal aspect: it’s flattering to see who likes.  The serial likers keep me going.  Especially this one regular reader who by amazing coincidence has the same name as me and who likes everything I write.

(drawing:  emotional baggage from www.f1me.net)

Arbitraging profiling.

A white bank robber in Ohio recently used a “hyper-realistic” mask manufactured by a small Van Nuys company to disguise himself as a black man, prompting police there to mistakenly arrest an African American man for the crimes.

One way players might play a game is by learning over time until they reach a best response to strategies they have observed in the past.  If learning converges, then a natural hypothesis, due to Fudenberg and Levine, is that it settles on a self-confirming equilibrium:

Self-Confirming Equilibrium (SCE) is a relaxation of Nash equilibrium: Each player chooses a best response to his beliefs and his beliefs are “correct” on the path of play.  But different players may have different beliefs over strategies off the path of play and may believe that players’ actions are correlated.  Nash equilibrium (NE) requires that players’ beliefs are also correct off the path of play, that all players have the same beliefs over off the path play and that players’ strategies are independent.  As the definition of Nash equilibrium puts extra constraints on beliefs, the set of Nash equilibria of a game cannot be larger than the set of self-confirming equilibria.

There is no reason why learning based on past play should tell us anything about off path play.  So SCE is a more natural prediction for the outcome of learning than NE.  Finally, we come to college football!

The University of Oregon football team has been pursuing an innovative “off the path” strategy:

“Oregon plays so fast that it is not uncommon for it to snap the ball 7 seconds into the 40-second play clock, long before defenses are accustomed to being set. That is so quick that opponents have no ability to substitute between plays, and fans at home do not have time to run to the fridge.”

Opposing teams on defense are just not used to playing against this strategy and have not developed a best response.  So far they have come up with an import from soccer, the old fake-an-injury strategy.  This has yielded great moments like the YouTube video above.

I am trying to relate this football scenario to SCE.  SCE does not incorporate experimentation, which is what the Oregon Ducks are doing, so their play is immediately inconsistent with SCE.  But set that aside – even without experimentation, is the status quo of slower snaps and best responses to them an SCE?  I think it is, and that it is even consistent with NE.

Even in a SCE of the two-player sequential-move game of football, the offense has to hypothesize what the defense would do if the offense plays fast.  Given their conjecture about the defense's play if the offense plays fast, it is better for the offense to play slow than to play fast.  Their conjecture about the defense's response to fast snaps does not have to be a best response for the defense, as this node is unreached.  And the defense plays a best response to what they observe – slow play by the offense.  So both players are at a best response, and the offense's conjecture about the defense's play off the path of play can be taken to be "correct," as neither SCE nor NE puts restrictions on the defense being at a best response off the path of play.

In other words, in two-player games, a SCE is automatically a NE.  From diagonalizing Fudenberg and Levine, it seems that this is true if you rule out correlated strategies (but I am administering an exam as I write this so I cannot concentrate!).  If I am right, the football example is consistent with SCE and hence NE.  (In three-or-more-player games, there can be a substantive difference between SCE and NE: different players can have different conjectures about off-path play in SCE but not NE, and this can turn out to be important.)

But the football experience is not necessarily a Subgame Perfect Equilibrium.  This adds the requirement of sequential rationality to Nash equilibrium: each player's strategy at every decision node, even those off the path of play, has to be a best response to his beliefs, and beliefs have to be correct, etc.  So, it may be that football teams on offense have been assuming there is some devastating loss to playing fast.  For one thing, it is simply hard to play fast, and perhaps they thought fast snaps were easy to defend.  But since this was never really tested, no one really knew it for a fact.

Now the Oregon Ducks are experimenting and their opponents are trying to find a best response.  So far they have come up with faking injuries.  Eventually they will find a best response.  Then and only then will the teams learn whether it is better for the offense to have fast snaps or slow snaps.  And then they will play the subgame perfect equilibrium: the offense may switch back to slow snaps if the best response to fast snaps is sufficiently devastating.
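A toy version of this game can be solved by backward induction in a few lines of Python.  The payoff numbers, and the assumption that the defense's eventual best response to fast snaps is devastating, are illustrative inventions, not anything estimated from football:

```python
# Backward induction on a toy snap-speed game.  The offense picks fast
# or slow, then the defense (having learned) best-responds.  All
# payoffs are made-up assumptions for illustration.
payoffs = {  # (offense_move, defense_move) -> (offense_payoff, defense_payoff)
    ("slow", "standard"): (0, 0),
    ("slow", "hurry"):    (1, -2),   # hurrying against slow play backfires
    ("fast", "standard"): (3, -3),   # an unprepared defense gets shredded
    ("fast", "hurry"):    (-1, 1),   # a practiced fast-snap defense dominates
}

def best_defense(offense_move):
    """Defense best-responds to the observed offense move."""
    return max(("standard", "hurry"),
               key=lambda d: payoffs[(offense_move, d)][1])

def subgame_perfect_offense():
    """Offense anticipates the defense's best response at every node."""
    return max(("slow", "fast"),
               key=lambda o: payoffs[(o, best_defense(o))][0])

print(subgame_perfect_offense())  # "slow"
print(best_defense("fast"))       # "hurry"
```

With these invented payoffs, once the defense masters its "hurry" response, the subgame perfect play for the offense is to go back to slow snaps – exactly the possibility raised above.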

Today Qatar was the surprise winner in the bid to host the FIFA World Cup in 2022, beating Japan, The United States, Australia, and Korea.  It’s an interesting procedure by which the host is decided consisting of multiple rounds of elimination voting.  22 judges cast ballots in a first round.  If no bidder wins a majority of votes then the country with the fewest votes is eliminated and a second round of voting commences.  Voting continues in this way for as many rounds as it takes to produce a majority winner.  (It’s not clear to me what happens if there is a tie in the final round.)
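For concreteness, here is a sketch of the elimination procedure in Python, with seven hypothetical judges who vote sincerely for their top-ranked surviving bid (the ballots and outcome are invented):

```python
from collections import Counter

def elimination_vote(ballots):
    """FIFA-style rounds: each judge votes for her top-ranked surviving
    bid; if no bid has a majority, the bid with the fewest votes is
    eliminated.  Ballots are (assumed sincere) preference orderings."""
    remaining = set(ballots[0])
    while True:
        tally = Counter(next(b for b in ballot if b in remaining)
                        for ballot in ballots)
        leader, votes = tally.most_common(1)[0]
        if votes > len(ballots) / 2:
            return leader
        remaining.discard(min(remaining, key=lambda b: tally[b]))

# Seven hypothetical judges ranking three bids.
ballots = (
    [["Qatar", "USA", "Japan"]] * 3
    + [["USA", "Japan", "Qatar"]] * 3
    + [["Japan", "USA", "Qatar"]]
)
print(elimination_vote(ballots))
```

Even with fully sincere voting, a bid tied for the round-one lead (Qatar here) can still lose once a weak rival is eliminated and its supporter's second choice kicks in – which is exactly what makes early-round strategic voting so tempting.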

Every voting system has its weaknesses, but this one is especially problematic, giving strong incentives for strategic voting.  Think about how you would vote in an early round when it is unlikely that a majority will be secured.  Then, if it matters at all, your vote determines who will be eliminated, not who will win.  If you are confident that your preferred site will survive the first round, then you should not vote truthfully.  Instead you should vote to keep alive bids that will be easier to beat in later rounds.

Can we look at the voting data and identify strategic voting?  As a simple test we could look at revealed preference violations.  For example, if Japan survives round one and a voter switches his vote from Japan to another bidder in round two, then we know that he is voting against his preference in either round one or two.

But that bundles together two distinct types of strategic voting, one more benign than the other.  For if Japan garners only a few votes in the first round but survives, then a true Japan supporter might strategically abandon Japan as a viable candidate and start voting, honestly, for her second choice.  Indeed, that is what seems to have happened after round one.  Here are the data.

We have only vote totals so we can spot strategic voting only if the switches result in a net loss of votes for a surviving candidate.  This happened to Japan but probably for the reasons given above.

The more suspicious switch is the loss of one vote for the round-one leader Qatar.  One possibility is that a Qatar supporter, seeing Qatar's survival to round three secured, cast a strategic vote in round two to choose among the other survivors.  But the more likely scenario in my opinion is a strategic vote for Qatar in round one by a voter who, upon learning from the round-one votes that Qatar was in fact a contender, switched back to voting honestly.

Every year my employer schedules two weeks for Open Enrollment in the benefits plan. This is the time period where you can freely change health plans, life insurance, etc. In the weeks before Open Enrollment we receive numerous emails reminding us that it is coming up. During the Open Enrollment period we receive numerous emails reminding us of the deadline. The day of the deadline we receive a final email saying that today is the deadline.

Then, every year after the deadline passes the deadline is extended an additional week, and the many people who procrastinated and missed the first deadline are given a reprieve. This happens so consistently that many people know it is coming and understand that the “real” deadline is the second one. So many that it is reasonable to assume that a sizeable number of these people procrastinate up to the second deadline and miss that one too. But there is never a third deadline.

You notice this kind of thing happening a lot. Artificial deadlines that you can be forgiven for missing but only once. It’s a puzzle because whatever causes the first deadline to be extended should eventually cause the second deadline to be extended once everyone figures out that the second deadline is the real one. For example if the reason for the first deadline extension is that year after year many people miss the first deadline and flood the Benefits office with requests, then you would expect that this will eventually happen with the second deadline.

The deadline-setters must feel on firmer ground denying a second request by saying “We warned you about the first deadline, then we were so nice and gave you an extension and you still missed that one.” But if a huge number of people missed the second deadline, no doubt they could still mount enough pressure for a second extension. Indeed the original speech of “We sent you emails every day warning you about the deadline and you still missed it” didn’t stem that tide.

Is it just that every nth deadline is a new coordination game among all the employees and the equilibrium number of deadline extensions is simply the first one in which the “no extension request” equilibrium is selected? I think there is more structure than that because you rarely see more than just one, and you almost never see zero.

I think there is scope for some original theory here and it could be very interesting. What’s your theory?

Professor Richard Quinn informs his Business Strategy class at the University of Central Florida that “forensic analysis” of the data gives him a good sense of who cheated on the midterm exam.  (The link has video of the scolding.)  Such a good sense that he can provide the administration with a list that he is “95% certain includes everyone who cheated on the exam.”  (Quick:  can you come up with such a list, even without seeing the data?) Unfortunately he can’t be as sure of the converse: that everyone on that list was a cheater.

So he is offering a deal to his students.  They can individually confess to cheating, attend a 4 hour ethics course and receive amnesty, or they can take the risk that they will not be caught.  What would you do?

  1. Professor Quinn's speech reveals that the only evidence for cheating is an anonymous tip plus a suspicious grade distribution.  Based on this alone, the only signal that you cheated is a high score.  But it's not credible to punish people just for having a high score.
  2. If Professor Quinn expects his gambit to work and for cheaters to turn themselves in, then he should believe that everyone who doesn’t turn himself in is innocent.  So you should not turn yourself in.
  3. The biggest fear is that someone who you collaborated with turns himself in and he is induced to rat you out.  Then as long as you are not sure who knows you were in on the scam you should turn yourself in.
  4. It’s surprising that this possibility was never mentioned in Professor Quinn’s rant because without it, his threat loses much of its force.
  5. The fact that he didn't raise this possibility reveals that he is not so interested in rounding up every last cheater as in getting a large enough number to confess.  That way he can say that a lesson was learned.  This suggests that you should confess only if you think that your confession will just push the total number of confessions over that threshold.  Unlikely (unless everyone is thinking like you.)

Throw a party.  And use a system like evite.com to handle the invitations. There is a typical pattern to the responses over time.  You will have an initial flurry of yeses and regrets followed by a long period of silence punctuated by sporadic responses which continues to the days before the party.  Then there is a final flurry and that is when you learn if your friends are real friends.

Because people come to your party for one of two reasons.  Either they like you or they just feel obligated for reasons like you are an important co-worker or they don’t want to hurt your feelings, etc.  Think of how these two types of people will handle your invitation.

An invitation is an option that can be exercised at any time before the date of the party.  The people who did not respond immediately are waiting to decide whether to exercise the option.  If she’s a true friend then this is because she has a potential conflict that would prevent her attending.  She is waiting and hoping to avoid that conflict.  When she is sure there is no conflict she will say yes.

The other people are hoping for an excuse not to come.  Once they get a better offer, manage to schedule a conflicting business trip, or otherwise commit themselves, they will send their regrets.

In both cases, when the party is imminent, the option value of waiting is gone. Those who want to come but haven’t gotten out of their conflict give up and send their regrets. Those who hoped to get out of it but failed to come up with a believable excuse give up and accept.

So, a simple measure of how much your friends like you is the proportion of acceptances that arrive in the final days.  Lots of acceptances means you better set aside a few extra drinks for yourself.

As junior recruiting approaches, we cannot help but speculate on the optimal way to compare apples to oranges – candidates across different fields (e.g. micro vs macro) and across universities.  I speculated a while ago that a “best athlete” recruiting system across fields is prone to gaming.  Each field might simply claim its candidate is great.  To stop that happening, you might have to live with having slots allocated to fields and/or rotating slots over time.

It turns out that Yeon-Koo Che, Wouter Dessein and Navin Kartik have thought about something much more subtle along these lines in their paper "Pandering to Persuade."  They consider both comparisons across fields and across candidates from different universities.  I'm going to give a rough synopsis of the paper.

Suppose the recruiting committee in an economics department is deciding whether to hire a theorist or a labor economist.  There is only one labor economist candidate and her quality is known.  There are two theorists, one from University A and one from University B.  The recruiting committee would like to hire a theorist if and only if his quality is higher than the labor economist's.  Also, the recruiting committee and everyone else believes that, on average, candidates from University A are better than those from University B.  But of course this is only true on average.  Luckily some theorists can read the candidates' papers and help fine-tune the committee's assessment of the theory candidates.  They share the committee's interest in hiring the best theorist but they are quite shallow and hence uninterested in research outside their own field.  In particular, theorists do not care for labor economics and always prefer a theorist at the end of the day.

So, the recruiting committee must listen to the theorists' recommendation with care.  First, the theorists have huge incentives to exaggerate the quality of their favored candidate if this carries influence with the committee.  Hence, quality evaluations cannot be trusted.  All the theorists can credibly do is say which candidate is better, but not by how much.  But there is a further problem: if the theorists say candidate B is better, given the committee's prior, the committee might think better of candidate B and yet prefer to hire the labor economist!  Being theorists, the senders can do backward induction, and they know the difficulty with their strategy if it is too honest.  The solution is obvious to the theorists: extol the virtues of candidate A even when candidate B is a little better.  Hence, in equilibrium, the candidate from the ex ante better university gets favored.  But candidate B still has a shot: if he is sufficiently good, the theorists still recommend him.  The committee may with some probability still go with the labor economist, so this recommendation is risky.  But if candidate B is sufficiently good, the theorists may prefer to run this risk rather than push the favored candidate A.  I refer you to the paper for the full equilibrium (or equilibria) but, as you can see, the paper is fun and interesting.

There are some extensions considered.   In one, the authors study delegation to the theorists.  Sometimes the department will lose out on a good labor economist but at least there is no incentive for the theorists to select the worst candidate.  This is the giving slots to fields solution I wondered about and it is derived in this elegant model.

Because communication requires both a talker and a listener and it takes time and energy for the listener to process information.  So it may be cheap to talk but it is costly to listen.

But then the cost of listening implies that there is an opportunity cost to everything you say.  Because you can only say so much and still be listened to. They won’t drink from a firehose.

When you want to be listened to you have an incentive to ration what you say, and therefore the mere fact that you chose to say something conveys information about how valuable it was to you to have it heard.  There is no babbling because babbling isn’t worth it.

I also believe that this is a key friction determining the architecture of social networks.  Who talks and who listens to whom?  The efficient structure economizes on the cost of listening.  It is efficient to have a small number of people who specialize in listening to many sources then selectively “curating” and rebroadcasting specialized content. End-listeners are spared the cost of filtering.  The economic question is whether the private and social incentives are aligned for someone who must ration his output in order to attract listeners.

Last Tuesday, everyone's favorite mad-scientist laboratory of democracy, the San Francisco City Council, enacted a law banning the Happy Meal.  Officially what is banned is the bundling of toys with fast food.  The theory seems to be that toys are a cheap substitute for quality food and that a prohibition on bundling will force McDonald's to compete instead on the quality of its food.

But it’s not easy to lay out a coherent theory of the Happy Meal. You could try a bargaining story. Kids like toys, parents want healthy food but are willing to compromise if the kids put up enough of a fuss. McDonald’s offers that compromise in the form of cheap toys and crappy food, raking in their deadweight loss (!).

You could try a story based on 2nd degree price discrimination. There are parents who care more about healthy food (Chicken Nuggets??) and parents who care less. A standard form of price discrimination has a higher end item for the first group and a low-end item for the second. The low-end item fetches a low price because it is purposefully inferior. But if toys are a perfect substitute for healthy food in the eyes of the health-indifferent parents, then a Happy Meal raises their willingness to pay without attracting the health-conscious (and toy-indifferent) parents.
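A back-of-the-envelope version of that second story can be put in a few lines of Python.  Every valuation, price, and the toy cost below is invented for illustration:

```python
# Two parent types, one of each; all numbers are made-up assumptions.
wtp = {
    "health_conscious":   {"premium": 6.0, "basic": 3.0, "happy_meal": 3.0},
    "health_indifferent": {"premium": 3.5, "basic": 3.0, "happy_meal": 4.5},
}
toy_cost = 0.3  # assumed cost of the bundled toy

def menu_profit(prices, extra_cost):
    """Each type buys the item giving her the largest nonnegative surplus."""
    profit = 0.0
    for t in wtp:
        surplus, item = max((wtp[t][i] - p, i) for i, p in prices.items())
        if surplus >= 0:
            profit += prices[item] - extra_cost.get(item, 0.0)
    return profit

no_toy = menu_profit({"premium": 5.9, "basic": 3.0}, {})
with_toy = menu_profit({"premium": 5.9, "happy_meal": 4.5},
                       {"happy_meal": toy_cost})
print(no_toy, with_toy)  # the toy raises the low-end price without
                         # tempting the health-conscious type to switch
```

Because the health-conscious parent values the toy at nothing, the Happy Meal lets the seller charge the indifferent type more at the low end without disturbing the high-end incentive constraint.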

You could even spin a story that suggests that the SF City Council’s plan may backfire. That’s what Josh Gans came up with in his post at the Harvard Business Review Blog.

For a parent, this market state of affairs spells opportunity. With McDonald’s offering a toy instead of additional bad stuff, the parent can ‘sell’ this option to their children and get them to eat less bad stuff than they would at another chain. The toy is a boon if the parents are more concerned about the bad stuff than having another junky toy in the house … They allow a parent to increase the value of healthier products in the eyes of children and negotiate a better price (perhaps in the form of better food at home) for allowing their children to have them. Happy Meals do have carrots after all.

But all of these stories have the same flaw: McDonald's can still achieve exactly the same outcome by unbundling the Happy Meal, selling toys à la carte alongside the Now-Only-Somewhat-Bemused Meals they used to share a cardboard box with.  Just as before, families will settle their bargains by re-assembling the bundle: health sub-conscious families will buy the low-end burger and pair it with toys, and parents who have to bribe their kids will buy McDonald's exclusive movie-tie-in toys to get them to eat their carrots.

(Yes I am aware of the Adams-Yellen result that bundling can raise profits, but this has nothing to do with toys and healthy food specifically.  Indeed McAfee, McMillan and Whinston show that generically the Adams-Yellen logic implies that some form of bundling is optimal.  So this cannot be the relevant story for McDonald's, which is otherwise à la carte.)

So I don’t think that economic theory by itself has a lot to say about the consequences of the Exiled Meal.  The one thing we can say is that McDonald’s doesn’t want to be forced to unbundle.  Putting constraints like that on a monopolist can sometimes improve consumer welfare and sometimes reduce it.  It all depends on whether you think McDonald’s increases its share of the surplus by lowering the total or raising it.  The SF City Council, like most of us one way or the other, probably had formed an opinion on that question already.

Last Friday I earned $17.20 for a day’s work as a standby juror.  Standby jurors wait in a big room until they are put in a panel of 14 and sent into a courtroom for selection.  In civil trials, 6 jurors will be selected and seated from the 14.  I was rejected from two panels and then sent home.

The jury selection process is a little obscure, but I found this.  Here's the model I glean from it.  The panel is numbered 1-14.  Lawyers for Plaintiff and Defendant each have 3 "peremptory" challenges which enable them to strike a juror from the panel.  The Plaintiff moves first and makes any peremptory challenges.  This creates a provisional jury consisting of the 6 highest jurors on the list who have not been eliminated yet.  The Defendant can either accept this jury or use a challenge to strike one or more from it, sending a new proposed jury, again consisting of the 6 highest jurors not yet struck, back to the Plaintiff.  This continues until someone accepts or all challenges are exhausted.

The game can be solved by backward induction.  I think something like the following is an optimal strategy.  First, when given a proposal of 6 jurors, rank them from least favorable to most.  To decide whether to strike the least favorable, ask whether her replacement will be stricken by the opposition (and any further replacements) and if so whether the final replacement will be better or worse than the least favorable now.  Of course you could always just strike the 3 least favorable in one go, but you are better off moving sequentially in hopes that the other guy makes a mistake and does it for you.
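Here is a rough sketch of that sequential logic in Python.  It is greedy rather than fully backward-inductive, and the juror "values" are invented, so treat it as an illustration of the mechanics rather than the optimal strategy:

```python
def provisional_jury(panel, struck):
    """The six highest-listed jurors who have not been struck."""
    return [j for j in panel if j not in struck][:6]

def greedy_strikes(panel, values):
    """A simplified, greedy sketch of alternating peremptory strikes
    (real play would be solved by full backward induction).  values[j]
    is the plaintiff's payoff from seating juror j; the defendant's is
    the negative.  Each side strikes its least favorite seated juror
    whenever the next replacement looks better, and passes otherwise."""
    struck, left = set(), {"P": 3, "D": 3}
    sign = {"P": 1, "D": -1}
    side, passes = "P", 0
    while passes < 2:
        jury = provisional_jury(panel, struck)
        bench = [j for j in panel if j not in struck and j not in jury]
        worst = min(jury, key=lambda j: sign[side] * values[j])
        if (left[side] > 0 and bench
                and sign[side] * values[bench[0]] > sign[side] * values[worst]):
            struck.add(worst)
            left[side] -= 1
            passes = 0
        else:
            passes += 1
        side = "D" if side == "P" else "P"
    return provisional_jury(panel, struck)

panel = list(range(1, 15))   # jurors numbered 1 through 14
values = {1: 2, 2: -3, 3: 1, 4: 0, 5: 4, 6: -1, 7: 3,
          8: -2, 9: 1, 10: 0, 11: 2, 12: -4, 13: 1, 14: 0}
print(greedy_strikes(panel, values))
```

Even this greedy version shows the flavor of the problem: each strike pulls the next juror on the list into the provisional jury, so whether a strike is worthwhile depends on who the replacement will be and on what the other side will do next.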

It gets more complicated when you take into account the challenges for “cause.”  These are challenges that require justification on the grounds that the juror is biased.  The possibility of challenges for cause explains why the panel has 14 rather than just 12.  And challenges for cause are evidently used a lot because on my second panel all but 3 jurors were excused.

In practice the jury selection is at least as much signaling as screening.  As if we were playing jury-Jeopardy, the lawyers sent messages phrased in the form of a question.  The defendant asked us “Does everybody understand that anyone can file a lawsuit if they pay a fee?”  The Plaintiff said “Does everyone understand that an insurance company has all the same rights as an individual?”  This is a kind of pre-opening statement.

I wonder whether it was the Plaintiff or Defendant that challenged me.  What they knew about me is that I am a Professor of Economics, the father of three kids, and that I drove a car over a mailbox when I was 16.  (Both of the cases were insurance claims arising out of an auto accident so they wanted to know.)

When I entered the first courtroom the Judge informed us that this case involved an insurance company.  I sized up the lawyers for the two sides.  I immediately pegged the Plaintiff as a sleazy ambulance chaser and the Defendant as a slick insurance company henchman whose life's calling is to deprive the injured of their just deserts (and instead send them to the Mojave for ice cream sundaes).  The next thing we were told was that this was in fact a case in which the insurance company was suing a client.  So much for reading people by their faces.

The judges struck me as smarter than the attorneys.  (This based on talking with them, not just the looks on their faces 🙂 )

30% of my fellow standby jurors were unemployed.  60% were divorced or separated.

I saw David Myatt present this paper in Oxford this Fall.  It made quite a splash with some surprising results about voter turnout rates consistent with “rational choice” theory.  Voting theories based on the assumption that voters calculate the costs and benefits have always been thought to imply very low turnout in order to generate sufficiently high probabilities of close elections.

Consider a region with a population of 100,000 where 75% of the inhabitants are eligible to vote. Suppose that a 95% confidence interval for the popularity of the leading candidate stretches from 56% to 61%. If each voter is willing to participate in exchange for a 1-in-2,500 chance of influencing the outcome of the election, then turnout will exceed 50%. Greater turnout for the underdog offsets her disadvantage.

 

Your vote makes a difference only when it is pivotal.  Now don’t worry, I am not bringing this up to sort through the tired old arguments about whether you should go to the polls today. You should!  That settled, let’s talk about what it implies for how you should vote once you get there.

Because if your vote only makes a difference when it breaks a tie (or makes a tie), then when it comes time to decide how to vote, you might as well assume your vote will be pivotal.  And ask yourself how would you vote if your vote was going to make or break a tie.

Be careful.  This is not the same as the question "How would you vote if you were the dictator?"  Indeed quite often your vote should not be the vote you would cast if yours was the only vote.  That's because when your vote is pivotal you learn something that a dictator doesn't.  You learn that all of the other voters were (almost) perfectly split, and that implies something very specific about the other voters and what they must know about the candidates (or propositions) on the ballot.

Quite often that information is crucial for determining how you want to vote. Let me give you a simple example.  Judges are almost always re-elected. Pretty much the only time a judge is voted off the bench is if that judge is completely incompetent.  Now you haven’t bothered to read anything about the judges on your ballot.  You know nothing about them individually but you know that most judges are doing just fine and should be re-elected.

If you were the dictator (an uninformed dictator!) you would vote yes for every judge.  But things turn completely upside-down in an election when you factor in the information you learn from your vote being pivotal.  Since all competent judges are easily re-elected, the only way it could have happened that all the other voters are split is that this judge is not competent!  Knowing that, and knowing that your vote will decide whether an incompetent judge is re-elected, you should vote no.  Against every judge.
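The size of this effect is just Bayes' rule.  Here is a small sketch in which every number (the prior, the number of other voters, and their voting probabilities) is an invented illustration:

```python
from math import comb

def p_split(n, p):
    """Probability that n independent yes/no votes split exactly evenly
    when each is a 'retain' vote with probability p (n even)."""
    return comb(n, n // 2) * p ** (n // 2) * (1 - p) ** (n // 2)

def posterior_incompetent(n=10, prior_incompetent=0.1,
                          p_yes_competent=0.8, p_yes_incompetent=0.4):
    """Bayes' rule: P(judge incompetent | the other n votes are tied).
    Other voters are assumed to retain a competent judge with high
    probability and an incompetent one with low probability."""
    tie_comp = p_split(n, p_yes_competent)
    tie_incomp = p_split(n, p_yes_incompetent)
    num = prior_incompetent * tie_incomp
    return num / (num + (1 - prior_incompetent) * tie_comp)

print(posterior_incompetent())  # roughly 0.46, up from a prior of 0.1
```

Conditioning on being pivotal moves you from "this judge is almost surely fine" to "this judge is quite likely incompetent," even though you read nothing about him.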

Now, the smart readers of this blog have already thought one step ahead and noticed that this logic is self-defeating.  Because if everyone figured this out, then everyone is voting against every judge and then every judge is voted down, not just the incompetent ones.  Here’s where the theory takes one of two paths, use your judgement.

First you might not believe that the electorate in general is as sophisticated as you are.  The vast majority of voters don’t understand the logic of pivotalness and they are naively voting the way they would if they were dictators.  In that case, the argument I have laid out works as written and you should vote against every judge.

On the other hand you might believe that a significant fraction of voters do understand the strategic subtleties of voting.  Then we have an equilibrium to find.  For starters we take as given that the judge himself and all of his friends will vote for him.  So he has a head start.  Now there's a small group of do-gooders who have read up on this judge and know whether he is competent.  They vote as if they are dictators, with good reason now because they are informed.  They vote yes if he is competent and no if he is not.

The rest of us know nothing.  Until, that is, we take into account what we can infer from being pivotal.  And if it were just the informed and the judge’s friends who were voting then what we can infer is that enough of the informed are voting no to counteract the judge’s head start.  That is, the judge is incompetent.

In equilibrium none of us uninformed voters vote yes.  Because if any of us are voting yes, then effectively the judge has an even bigger head start and that makes it even worse news that the no votes caught up with the head start.  But not all of us vote no.  Some of us do, but most of us abstain.  Enough of us that it remains a valid inference that a pivotal vote means that enough of the informed voted no to make it optimal for us to vote no.

This is the logic of The Swing Voter’s Curse.

In the wake of the Nobel for the search theory of unemployment, let’s talk about the search models that really matter:  hooking up.

Everybody who reads this blog understands the Prisoner’s Dilemma.  Play it just once and neither side will cooperate.  So a simple theory of relationships is based on a repeated Prisoner’s Dilemma.  When the relationship can potentially continue, there is now an incentive to cooperate today in order to maintain cooperation in the future.  Put differently, the threat of a future breakdown of cooperation enforces cooperation today.
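The standard grim-trigger condition can be checked in one line.  The payoffs below are the usual textbook prisoner's dilemma numbers, chosen purely for illustration:

```python
def grim_trigger_sustains(T, R, P, delta):
    """Cooperation survives under a grim-trigger strategy iff the value
    of cooperating forever at reward R beats a one-shot temptation T
    followed by permanent punishment P, discounted at rate delta:
        R/(1-delta) >= T + delta*P/(1-delta)."""
    return R / (1 - delta) >= T + delta * P / (1 - delta)

# Textbook payoffs: temptation 5, mutual cooperation 3, mutual defection 1.
print(grim_trigger_sustains(T=5, R=3, P=1, delta=0.3))  # too impatient
print(grim_trigger_sustains(T=5, R=3, P=1, delta=0.9))  # patient enough
```

Patience is what carries the threat: the same payoffs that doom the one-shot game sustain cooperation once the future matters enough.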

But things get interesting when we embed this into a search and matching model.  Out of the large pool of the unmatched, two singles get "matched" and they start a relationship, i.e. a repeated prisoner's dilemma.  As long as the relationship continues each decides whether to cooperate or defect, and at any stage either party can break up the relationship and go find another match.

This possibility of breaking up the match adds a new friction to relationships. The threat of a breakdown in the current relationship is not enough anymore to incentivize cooperation because that threat can be avoided by leaving.  And indeed, it’s not an equilibrium anymore for relationships to work efficiently because then any partner can cheat in his current relationship and then immediately go find another partner (who, expecting cooperation, is the next sucker, etc.)

Something has to give to maintain incentives.  What's the best way to make relationships just inefficient enough to keep as much cooperation as possible?  A simple solution is to "start small:" at the beginning of any relationship there is a trial phase where the level of cooperation is purposefully low, and only after both partners remain in the relationship through the trial phase do they start to get-it-on, er, cooperate.

This courtship ritual is privately wasteful but socially valuable.  Once I am in a relationship I am willing to wait through the trial phase because the reward of cooperation is waiting for me at the end.  And once the trial phase is over I have no incentive to cheat because then I would just have to go through the trial phase again with my new partner.  Equilibrium is restored.
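The incentive arithmetic behind this can be sketched with a few invented numbers: cheating grabs a one-shot gain, but then the cheater must sit through the trial phase of a new match before cooperation resumes.

```python
def coop_value(c, delta):
    """Present value of cooperating forever at per-period payoff c."""
    return c / (1 - delta)

def new_match_value(c, s, T, delta):
    """Present value at the start of a new match: T trial periods at
    payoff s, then cooperation forever."""
    trial = s * (1 - delta ** T) / (1 - delta)
    return trial + delta ** T * coop_value(c, delta)

def cooperation_sustainable(c, g, s, T, delta):
    """Cheating pays g once, after which the cheater starts over in a
    new match.  Cooperation survives iff staying beats cheat-and-rematch."""
    return coop_value(c, delta) >= g + delta * new_match_value(c, s, T, delta)

# Invented numbers: cooperation pays 2 per period, cheating grabs 5
# once, the trial phase pays 0.
print(cooperation_sustainable(c=2, g=5, s=0, T=0, delta=0.9))  # no trial phase
print(cooperation_sustainable(c=2, g=5, s=0, T=3, delta=0.9))  # 3-period trial
```

With no trial phase, cheat-and-rematch beats staying and cooperation unravels; a few lean trial periods make restarting costly enough to restore equilibrium.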

There are a number of different spins on this idea in the literature.  There was an early series of papers by Joel Watson based on a model with incomplete information.  I remember really liking this paper by Lindsey, Polak, and Zeckhauser on “Free Love, Fragile Fidelity, and Forgiveness.”  And this quarter, we heard David McAdams with a new perspective on things, including some conditions under which courtship can be dispensed with altogether and partners can get right down to business.

Tyler Cowen explores economic ideas that should be popularized.  Let me take this opportunity to help popularize what I think is one of the pillars of economic theory and the fruit of the information economics/game theory era.

When we notice that markets or other institutions are inefficient, we need to ask compared to what?  What is the best we could possibly hope for even if we could design markets from scratch?  Myerson and Satterthwaite give the definitive answer:  even the best of all possible market designs must be inefficient:  it must leave some potential gains from trade unrealized.

If markets were perfectly efficient, whenever individual A values a good more than individual B it should be sold from B to A at a price that they find mutually agreeable.  There are many possible prices, but how do they decide on one?  The Myerson-Satterthwaite theorem says that, no matter how clever you are in designing the rules of negotiation, inevitably it will sometimes fail to converge on such a price.

The problem is one of information.  If B is going to be induced to sell to A, the price must be high enough to make B willing to part with the good.  And the more B values the good, the higher the price must be.  That principle, which is required for market efficiency, creates an incentive problem which makes efficiency impossible.  Because now B has an incentive to hold out for a higher price by acting as if he is unwilling to part with the good.  And sometimes that price is more than A is willing to pay.
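A simulation makes the loss concrete.  The mechanism below is the classic split-the-difference double auction in its linear (Chatterjee-Samuelson) equilibrium, with the standard uniform-values assumption:

```python
import random

random.seed(1)

def realized_share(trials=100_000):
    """Monte Carlo on the split-the-difference double auction with
    buyer value v and seller cost c uniform on [0, 1].  In the linear
    (Chatterjee-Samuelson) equilibrium, trade occurs only when
    v - c >= 1/4, so some mutually beneficial trades are missed --
    the kind of unavoidable loss Myerson-Satterthwaite formalizes."""
    possible = realized = 0.0
    for _ in range(trials):
        v, c = random.random(), random.random()
        if v > c:
            possible += v - c          # first-best gains from trade
            if v - c >= 0.25:
                realized += v - c      # gains the equilibrium captures
    return realized / possible

share = realized_share()
print(share)  # close to 27/32 = 0.84375 in the limit
```

Roughly 16% of the potential surplus evaporates: the trades where the gains are real but too small to survive both sides' strategic shading.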

From Myerson-Satterthwaite we know what the right benchmark is for markets:  we should expect no more from them than what is consistent with these informational constraints.  It is a fundamental change in the way we think about markets and it is now part of the basic language of economics.  Indeed, in my undergraduate intermediate microeconomics course I give a simple proof of a dominant-strategy version of Myerson-Satterthwaite; you can find it here.
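To see the theorem at work, here is a minimal Monte Carlo sketch of the classic example: the split-the-difference double auction with uniformly distributed values, where the linear (Chatterjee-Samuelson) equilibrium has trade only when the buyer's value exceeds the seller's cost by at least 1/4.  The parameter choices are just for illustration.

```python
import random

def realized_trade_share(n=200_000, seed=1):
    """Fraction of mutually beneficial trades that actually happen in the
    linear equilibrium of the split-the-difference double auction."""
    random.seed(seed)
    efficient = realized = 0
    for _ in range(n):
        v = random.random()  # buyer's value, uniform on [0,1]
        c = random.random()  # seller's cost, uniform on [0,1]
        if v > c:            # gains from trade exist
            efficient += 1
            # In equilibrium the buyer shades his bid down and the seller
            # shades her ask up, so trade occurs only when v >= c + 1/4.
            if v >= c + 0.25:
                realized += 1
    return realized / efficient
```

Roughly nine sixteenths of the mutually beneficial trades go through; the rest are lost to the informational constraint, exactly as the theorem says some must be.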

(Myerson won the Nobel prize jointly with Maskin and Hurwicz.  There should be a second Nobel for Myerson and Satterthwaite.)

The threat of the death penalty makes defendants more willing to accept a given plea bargain offer.  But a tough-on-crime DA takes up the slack by making tougher offers.  What is the net effect?  A simple model delivers a clear prediction:  the threat of the death penalty results in fewer plea bargains and more cases going to trial.

The DA is like a textbook monopolist but instead of setting a price, he offers a reduced sentence.  The defendant can accept the offer and plead guilty or reject and go to trial taking his chances with the jury. Just like the monopolist, the DA’s optimal plea offer trades off marginal benefit and marginal cost.  When he offers a stiffer sentence, the marginal benefit is that defendants who accept it serve more time.  The marginal cost is that the defendant is more likely to reject the tougher offer, and more cases go to trial.  The marginal defendant is the one whose trial prospects make him just indifferent between accepting and rejecting the plea bargain.

Introducing the death penalty changes the payoff to a defendant who rejects a plea deal (his reservation value).  The key observation is that this change affects defendants differently according to their likelihood of conviction at trial. Defendants facing a difficult case are more likely to be convicted and suffer the increased penalty.  (Formally, the reservation value is now steeper as a function of the probability of conviction.)

One thing the DA could do is increase the sentence in his plea bargain offer just enough that the pre-death-penalty marginal defendant is once again indifferent between accepting and rejecting.  The rate of plea bargains would then be the same as before the death penalty.

But he can do better by offering an even tougher sentence. The reason: the marginal benefit of such a move is the same as it was pre-death-penalty (the same infra-marginal defendants serve more time), but the marginal cost is now lower for two reasons.  First, compared to the no-death-penalty scenario, fewer defendants reject the tougher offer, because we are moving along a steeper reservation-value curve.  Second, those who do reject now face a stiffer penalty (death) conditional on conviction.
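The comparative static can be checked in a toy version of this monopolist model (my own parameterization, not the paper's): conviction probabilities are uniform on [0,1], a defendant accepts an offer s whenever s is below his reservation value pT, and the DA maximizes expected time served net of a per-trial cost.

```python
def optimal_plea(T, trial_cost, grid=20_000):
    """DA's best plea offer when the defendant's conviction probability p
    is uniform on [0,1].  A defendant accepts offer s iff s <= p*T."""
    best = (0.0, float("-inf"))
    for i in range(grid + 1):
        s = T * i / grid
        p_star = s / T                  # the marginal defendant
        plea_time = s * (1 - p_star)    # accepters (p >= p_star) serve s
        # rejecters go to trial: expected sentence p*T, minus trial cost
        trial_time = T * p_star ** 2 / 2 - trial_cost * p_star
        if plea_time + trial_time > best[1]:
            best = (s, plea_time + trial_time)
    s = best[0]
    return s, 1 - s / T                 # (offer, acceptance rate)

# Raising the trial sentence T (the death-penalty effect) makes the DA's
# offer stiffer and the acceptance rate lower: more cases go to trial.
no_dp   = optimal_plea(T=20, trial_cost=5)   # offer 15, acceptance 0.25
with_dp = optimal_plea(T=40, trial_cost=5)   # offer 35, acceptance 0.125
```

In this parameterization the optimal offer works out to T minus the trial cost, so doubling the trial sentence raises the offer from 15 to 35 and cuts the acceptance rate in half.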

The DA’s tougher stance in plea bargaining means that fewer defendants accept and more cases go to trial.  Evidence?  Here is one paper showing that the reinstatement of the death penalty in New York led to no increase in the rate of plea bargains accepted (and a clear decrease in the size of plea bargain offers).

Which type of artist debuts with obscure experimental work, the genius or the fraud? Kim-Sau Chung and Peter Eso have a new paper which answers the question:  it’s both of these types.

Suppose that a new composer is choosing a debut project and he can try a composition in a conventional style or he can write 4’33”, the infamous John Cage composition consisting of three movements of total silence. Critics understand the conventional style well enough to assess the talent of a composer who goes that route. Nobody understands 4’33” and so the experimental composer generates no public information about his talent.

There are three types of composer.  Those that know they are talented enough to have a long career, those that know they are not talented enough and will soon drop out, and then the middle type:  those that don’t know yet whether they are talented enough and will learn more from the success of their debut.  In the Chung-Eso model, the first two types go the experimental route and only the middle type debuts with a conventional work.

The reason is intuitive.  First, the average talent of experimental artists must be higher than that of conventional artists. If it were the other way around, i.e. if a conventional debut signaled talent, then all types would choose a conventional debut, making it no signal at all.  The middle types would because they want both the positive signal and the more informative project.  The high and low types would because the positive signal is all they care about.

Then, once we see that the experimental project signals higher than average talent, we can infer that it’s the high types and the low types that go experimental.  Both of these types are willing to take the positive signal from the style of work in exchange for generating less information by the actual composition.  The middle types on the other hand are willing to forego the buzz they would generate by going experimental in return for the chance to learn about their talent.  So they debut conventionally.

Now, as the economics PhD job market approaches, which fields in economics are the experimental ones (they generate buzz but nobody understands them, and they are populated by the geniuses as well as the frauds) and which are conventional (easy to assess, but generally dull and a signal of a middling type)?

Apart from a certain solitary activity, all other sensations caused by our own action are filtered out or muted by the brain so that we can focus on external stimuli.  There is a famous experiment which demonstrates an unintended consequence of this otherwise useful system.

You and I stand facing each other with hands extended.  We are going to take turns pressing a finger onto the other’s palm.  Each of us has been secretly instructed to try, each time, to match the force the other applied in the previous turn.

But what actually happens is that we press down on each other harder and harder at every turn. And at the end of the experiment each of us reports that we were following instructions and it was the other who was escalating the pressure.  Indeed, when the subjects in these experiments were asked to guess the instructions given to their counterpart, they guessed that the others had been instructed to double the pressure.

What’s happening is that the brain magnifies the sensation caused by the other’s pressing and mutes the sensation caused by our own.  Thus, each of us underestimates the pressure when it is caused by our own action.  (In a control experiment the force was transmitted by a mechanical device rather than the finger directly, and there was no escalation.)  So each subject believes he is following the instructions but in fact each is contributing equally to the escalating pressure.
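Here is a minimal sketch of the attenuation story (the attenuation factor is invented for illustration, not taken from the experiment): each player presses until his own, muted perception of his force matches the force he just felt, so the actual force ratchets up every turn.

```python
def tit_for_tat_forces(turns=8, attenuation=0.7, start=1.0):
    """Sequence of actual forces when each player tries to 'match' the
    force just felt, but perceives his own output attenuated."""
    forces = [start]
    for _ in range(turns - 1):
        felt = forces[-1]            # the other's force is felt in full
        # press until own (attenuated) perception equals what was felt:
        forces.append(felt / attenuation)
    return forces

# Both players believe they are matching, yet the force grows
# geometrically by a factor of 1/attenuation each turn.
```

With these made-up numbers the eighth press is more than ten times the first, even though both players would swear they never escalated.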

You are invited to extrapolate this idea to all kinds of social interaction where you are being perfectly polite, reasonable, and accommodating, but he is being insensitive, abrasive, and stubborn.