
Here’s a pretty simple point but one that seems to be getting lost in the “discussion.”

Insurance is plagued by an incentive problem. In an ideal insurance contract the insuree receives, in the event of a loss or unanticipated expense, a payment that equals the full value of that loss. This smooths out risk and improves welfare. The problem is that by eliminating risk the contract also removes the incentive to take actions that would reduce that risk. This lowers welfare.

In order to combat this problem the contracts that are actually offered are second-best: they eliminate some risk but not all. The insured is left exposed to just enough risk so that he has a private incentive to take actions that reduce it. The incentive problem is solved but at the cost of less-than-full insurance.

But building on this idea, there are often other instruments available that can do even better. For example suppose that you can take prophylactic measures (swish!) that are verifiable to the insurance provider. Then at the margin welfare is improved by a contract which increases insurance coverage and subsidizes the prophylaxis.

That is, you give them condoms. For free. As much as they want.
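
To see the welfare comparison in numbers, here is a minimal sketch with invented parameters (the wealth, loss size, loss probabilities, prophylactic price, and square-root utility below are all my assumptions, not the post's):

```python
import math

# Illustrative numbers only -- all assumptions, not from the post.
wealth, loss = 100.0, 64.0
p_loss = {"no prevention": 0.5, "prevention": 0.3}
prophylactic_price = 5.0

def u(c):
    return math.sqrt(c)  # any concave (risk-averse) utility makes the point

def eu(coverage, premium, prevent, pays_out_of_pocket=True):
    """Expected utility under a contract paying `coverage` after a loss."""
    spend = prophylactic_price if (prevent and pays_out_of_pocket) else 0.0
    p = p_loss["prevention" if prevent else "no prevention"]
    c_loss = wealth - premium - spend - loss + coverage
    c_fine = wealth - premium - spend
    return p * u(c_loss) + (1 - p) * u(c_fine)

# 1. Full insurance: prevention is unobservable, the fully insured person has
#    no reason to pay for it, so the fair premium must reflect p = 0.5.
full = eu(coverage=64, premium=0.5 * 64, prevent=False)

# 2. Second best: cap coverage so that preventing stays privately optimal
#    (with these numbers, push coverage much above ~38 and the insured stops
#    bothering); the fair premium can then reflect p = 0.3.
partial = eu(coverage=38, premium=0.3 * 38, prevent=True)

# 3. Prevention is verifiable: the insurer hands it out for free, folds its
#    cost into the premium, and full coverage is incentive-compatible again.
verified = eu(coverage=64, premium=0.3 * 64 + prophylactic_price,
              prevent=True, pays_out_of_pocket=False)

print(f"full coverage, no prevention    : {full:.3f}")
print(f"partial coverage + prevention   : {partial:.3f}")
print(f"full coverage + free prophylaxis: {verified:.3f}")
```

With these made-up numbers the subsidized-and-verified contract beats the second-best partial contract, which in turn beats full insurance with no prevention.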

Wealthy kids are usually wealthy because their wealthy parents left them a lot of money.  You might think that’s because parents are altruistic towards their kids.  Indeed every dollar bequeathed is a dollar less of consumption for the parent.  But think about this:  if parents are so generous towards their kids why do they wait until they die to give them all that money?  For a truly altruistic parent, the sooner the gift, the better.  By definition, a parent never lives to see the warm glow of an inheritance.

A better theory of bequests is that they incentivize the children to call, visit, and take care of the parents in their old age.  An inheritance is a carrot that awaits a child who is good to the parent until the very end.  That's the theory of strategic bequests in Bernheim, Shleifer and Summers.

But even with that motivation you have to ask why bequests are the best way to motivate kids.  Why not just pay them a piece rate?  Every time they come to visit they get a check.  If the parent is even slightly altruistic this is a better system since the rewards come sooner.

To round out the theory of strategic bequests we need to bring in the compound value of lump-sum incentives.  Suppose you are nearing the bitter end and it's likely you are not going to live more than another year.  You want your kids to visit you once a month in your last year and that's going to cost you 12*c, where c is your kid's opportunity cost per visit.  You could either implement this by piece-rate, paying them c every time they come, or in a lump sum by leaving them 12c in your will if they keep it up the whole time.

But now what happens if, as luck would have it, you actually survive for another year?  With the piece rate you are out 12c and still have to cough up another 12c if you want to see your kids again before you die.  But a bequest can be re-used.  You just restart the incentives, and you get another year’s worth of visits at zero additional cost.
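
The reuse point is just arithmetic. A tiny sketch with assumed numbers (ignoring discounting and the warm glow of giving sooner, which cut the other way):

```python
c = 1.0                    # kid's opportunity cost per visit (assumed unit)
visits_per_year = 12

def incentive_bill(years_survived, scheme):
    """Total paid to get monthly visits for however long you actually last."""
    if scheme == "piece rate":
        return visits_per_year * c * years_survived   # the bill recurs every year you live
    if scheme == "bequest":
        return visits_per_year * c                    # the same carrot is re-used each extra year

for years in (1, 2, 3):
    print(years, incentive_bill(years, "piece rate"), incentive_bill(years, "bequest"))
# 1 12.0 12.0
# 2 24.0 12.0
# 3 36.0 12.0
```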

Is it credible?  All you need is to commit to a policy that depends only on their devotion in the last year of your life.  Since you are old your kids know you can’t remember what happened earlier than that anyway so yes, it’s perfectly credible.

(Idea suggested by Mike Whinston.)

Here is the abstract of a paper by Christian Roessler and Sandro Shelegia:

In Rome, if you start digging, chances are you’ll find things. We consider a famous complaint that justifies the underdeveloped Roman metro system: “if we tried to build a new metro line, it would probably be stopped by archeological finds that are too valuable to destroy, so we would have wasted the money.” Although this statement appears to be self-contradictory, we show that it can be rationalized in a voting model with diverse constituents. Even when there is a majority preference for a metro line, and discovery of an antiquity has the character of a positive option, a majority may oppose construction. We give sufficient conditions for this inefficiency to occur. One might think it arises from the inability to commit to finishing the metro (no matter what is discovered in the process). We show, however, that the inefficient choice is made in voting over immediate actions precisely when there is no Condorcet winner in voting over contingent plans with commitment. Hence, surprisingly, commitment cannot really solve the problem.

The problem is how to build a majority coalition in favor of digging.  There’s no problem when the probability of an antiquity is low because then everyone who favors the Metro but not the antiquity will be on board.  When the probability of an antiquity is high there is again no problem but now because you have the support of those who are hoping to find one.  Rome’s problem is that the probability of an antiquity is neither low enough nor high enough.
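
Here is a toy version of that coalition arithmetic, with five voters and payoffs invented purely for illustration (this is the blog-post logic in numbers, not the Roessler-Shelegia model):

```python
# Five voters share the digging cost equally. If an antiquity is found
# (probability p) the metro is stopped and the find is preserved; otherwise
# the metro gets built. All payoffs below are invented for illustration.
voters = [
    {"metro": 1.5, "antiquity": 0.0},   # metro fans: on board only if p is low
    {"metro": 1.5, "antiquity": 0.0},
    {"metro": 1.5, "antiquity": 1.5},   # likes both outcomes: always on board
    {"metro": 0.0, "antiquity": 1.5},   # antiquity hopefuls: on board only if p is high
    {"metro": 0.0, "antiquity": 1.5},
]
cost_share = 1.0

def supports_digging(v, p):
    expected_gain = p * v["antiquity"] + (1 - p) * v["metro"]
    return expected_gain >= cost_share

for p in (0.1, 0.5, 0.9):
    yes = sum(supports_digging(v, p) for v in voters)
    print(f"p = {p}: {yes}/5 in favor -> {'dig' if yes >= 3 else 'no dig'}")
# p = 0.1: metro fans plus voter 3 form a majority        -> dig
# p = 0.5: only voter 3 is in favor                       -> no dig
# p = 0.9: antiquity hopefuls plus voter 3 form a majority -> dig
```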

I think this says something about flyouts in Junior Recruiting, and in turn it says something about how candidates should market themselves.

(It used to be 4 and 5.)

A student in her 5th year who doesn’t have a stellar job market paper is always tempted to stay another year and try to produce something better. This is the ex post incentive of an individual student.

But ex ante the department as a whole would like to enforce a commitment for all students to go on the market in 5, even those whose job market paper at that stage leaves something to be desired. The basic reason is risk aversion. Every year they spend in grad school they produce another signal for the market. Good signals improve their prospects but bad signals make them worse. They would avoid the additional risk by committing to stay only 5 years rather than 6.

Now consider a student whose job market paper in year 5 leaves something to be desired. If she stays another year and produces a good paper, then although she is better off, she raises the bar for her colleagues and thereby strengthens their incentives to stay another year. A department policy that strongly incentivizes students to finish in 5 is needed to prevent the implied unraveling.

But that’s MIT.  Then there’s everybody else. Students in other departments have to compete with MIT students for top jobs. At a department like Yale, only the best students will be able to compete for top jobs and this makes them risk loving not risk averse. Instead of wanting to minimize signals, the best Yale students want to produce enough signals in hopes that at least one of them is good enough to give them a shot at a top department job.

So one should expect funding, TAships, and face time with advisors to drop after 5 years at MIT but continue into the 6th year at Yale.  (Note: I have no data on this.)

He seems to have mixed feelings:

There are three lanes, with the left two lanes narrowing into one.  A slight bit further ahead, the traffic from Gallows Road merges into the right lane, map here.

Many people from the far left lane merge “unethically,” driving ahead as far as they can, and then asking to be let in at the near-front of the queue.  The traffic from Gallows Road, coming on the right, merges ethically, as it is a simple feed of two lanes into one.  They have no choice as to when the merge is, although de facto the construction of the intersection puts many of them ahead of the Rt.50 drivers.

The left lane merge is slightly quicker than the right lane merge, in part because not everyone is an unethical merger.  Yet it is more irksome to drive in the left lane, because you feel, correctly, that people are taking advantage of you (unless you are an unethical merger yourself, which I am not).

In recent times, I have switched my choice to the right lane.

Naked CDS.  Let me define it first.  A Credit Default Swap (CDS) is a bet that some debtor, say the government of Argentina, is going to default on their debt. When you buy a CDS you are buying a claim to a payment made in the event that there is a default.  When you sell a CDS you are betting that there will be no default and you won’t have to make that payment.

The conventional role of a CDS is as an insurance instrument.  If you hold Argentinian bonds and are worried about default, you can buy CDS to insure against that risk.  (In the event of default you are out one bond but you get compensation in the form of your CDS payout.) Naked CDS refers to an unconventional role:  selling a CDS contract to someone who doesn't actually hold the bond.

There is a very interesting argument that naked CDS can poison the financial well, and I believe it is in this paper by Yeon-Koo Che and Rajiv Sethi.  (I haven’t actually read the paper but I discussed it with Ahmad Peivandi and I think I get the gist of it.  Fair warning:  don’t assume that what follows is an accurate account of the paper, but whether or not it is accurate, it’s an interesting argument.)

Suppose that Argentina needs to issue new bonds in order to roll over its debt. The market for these bonds consists of people who are sufficiently optimistic that Argentina is not going to default.  Such people have a demand for bets on the solvency of the Argentinian government.  Argentina wants to capitalize on this demand.

In a world without naked CDS the government of Argentina has market power selling bets.  You  make your bet by purchasing the bonds.   Market power enables the  Argentinian government to mark up the price of its bonds, selling to the most optimistic buyers, and thus raise more capital for a given issue.  This reduces the chance of a default.

A market for naked CDS creates an infinitely elastic, perfectly competitive supply of bets.  Someone who is optimistic that there will be no default can now bet their beliefs by selling a naked CDS rather than purchasing bonds.  If the Argentinian government tries to exercise its market power, the optimists will prefer to trade competitively priced CDS bets instead.  Thus, the market for naked CDS destroys the government's market power completely.  There are welfare effects of this.

  1. The downside is that the government is less likely to raise enough to roll over its debts and therefore more likely to default.
  2. But the upside is that people get to make more bets.  Without competition from CDS, there are people who are willing to bet at market prices but are excluded due to the exercise of market power.  This deadweight loss is eliminated.

But how you evaluate these welfare effects turns on your philosophical stance on the meaning of beliefs.  One view is that differences in beliefs reflect differences in information and market prices reflect the aggregated information behind all of the traders’ beliefs. If competition drives the price of bonds down it is because it allows the information behind the pessimists’ beliefs to be incorporated.  If you hold this view you are less concerned about 1 because investors who would have bought bonds at marked-up prices must have been at least partially fooled.

Another view is that beliefs are just differences of opinion, more like tastes than information.  If the price of beer is low that doesn’t make me like beer any less. If this is your view then you really worry about 1 because those pessimists who drive the price down aren’t any better informed than the optimists. The concern about 1 is a rationale for banning naked CDS. But by the same argument you also care a lot about 2. Every bet between people with different beliefs, people who agree to disagree, is a Pareto improvement.

The bottom line is that arguments against naked CDS based on 1 probably also need to account for 2.

57 pages and not a single mention of me.   Via Ryan Lizza, with a story here.

We believe that $600 billion in stimulus over two years would create 2.5 million jobs relative to what would happen in the absence of stimulus. However, this falls well short of filling the job shortfall and would leave the unemployment rate at 8 percent two years from now. This has convinced the economic team that a considerably larger package is justified.

And for some reason they always seem to go up at exactly those times when you want to buy.  Like Taxis on New Year’s Eve.

Mr. Whaley was using Uber, a service that allows people to order livery cabs through a smartphone application. On New Year’s Eve, Uber, a start-up in the city, adopted a feature it called “surge pricing,” which increases the price of rides as more people request them.

Although New Year’s Eve was very profitable for Uber, customers were not happy. Many felt the pricing was exorbitant and they took to Twitter and the Web to complain. Some people said that at certain times in the evening, rides had spiked to as high as seven times the usual price, and they called it highway robbery.

Informing passengers only ex post may not be the ideal way to implement it though.

Uber’s goal is to make the experience as simple as possible, so customers are not shown their fare until the end of the ride, when it is automatically charged to their credit card. While the app does not show the total fare in dollars when customers book a ride, Uber did show a “surge pricing” multiple to customers booking rides for New Year’s Eve.

The article is an interesting read if for no other reason than the quotes from Dirk Bergemann and Liran Einav.  Thanks to Toomas Hinnosaar for the pointer.

It pays $72,000 per year and comes with only two requirements, one flexible and one not:

At first glance, Robert Kirshner took the e-mail message for a scam. An astronomer at King Abdulaziz University (KAU) in Jeddah, Saudi Arabia, was offering him a contract for an adjunct professorship that would pay $72,000 a year. Kirshner, an astrophysicist at Harvard University, would be expected to supervise a research group at KAU and spend a week or two a year on KAU’s campus, but that requirement was flexible, the person making the offer wrote in the e-mail. What Kirshner would be required to do, however, was add King Abdulaziz University as a second affiliation to his name on the Institute for Scientific Information’s (ISI’s) list of highly cited researchers.

As you read on, ask yourself whether a chaired professorship endowed by King Abdulaziz would survive the various criticisms.  I thank Ryan McDevitt for the pointer.

(Regular readers of this blog will know I consider that a good thing.)

Why did Apple enter an exclusive partnership with AT&T?  Michael Sinkinson has a nice theoretical model that shows how vertical exclusivity can soften competition.  A smartphone requires an accompanying wireless service in order to be useful.  While smartphones are differentiated goods, wireless service is pretty homogeneous.  The market for wireless service is therefore perfectly competitive while the market for smartphones is oligopolistic.

Suppose smartphone manufacturers allow their phones to operate on any wireless network.  Then service plans for the iPhone and for the Blackberry would be priced perfectly competitively, at the marginal cost of providing service.  That means that the total price of an iPhone bundled with wireless service will equal the wholesale price that Apple charges the wireless providers plus that marginal cost.  In other words, Apple's price increases are passed on dollar-for-dollar to the consumer.

In equilibrium Apple raises its price for the iPhone up to the point where the revenue from additional price increases is offset by reduced sales (due to higher prices). Blackberry does the same.

Now, suppose that Apple goes exclusive with AT&T.  That makes AT&T the monopoly retail supplier of the iPhone.  They will act like a monopoly and raise prices.  AT&T views Apple’s wholesale price as an input cost and we know from basic price theory that increases in input costs are passed on less than dollar-for-dollar to consumers.  The strategic effect for Apple is that now when Apple increases the wholesale price of the iPhone, sales fall off by less than they did in the non-exclusive arrangement.  It’s as if the demand curve has gotten steeper.  Relative to the non-exclusive arrangement Apple raises prices, and in fact as a response Blackberry also raises prices which has a secondary benefit for Apple.
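
The less-than-dollar-for-dollar pass-through is the textbook result for a monopolist facing linear demand: it passes on only half of an input-cost increase, while a competitive retail sector passes it on one-for-one. A quick sketch with invented demand parameters:

```python
# Linear demand q = a - b*p for iPhone-plus-service bundles; numbers invented.
a, b = 100.0, 1.0

def competitive_retail_price(w):
    # perfectly competitive carriers: retail price = wholesale price
    # (the service itself is priced at marginal cost, normalized to zero here)
    return w

def exclusive_retail_price(w):
    # the exclusive carrier maximizes (p - w) * (a - b*p)  =>  p = (a/b + w) / 2
    return (a / b + w) / 2

for w in (40.0, 41.0):
    print(f"wholesale {w}: competitive retail {competitive_retail_price(w):.1f}, "
          f"exclusive retail {exclusive_retail_price(w):.1f}")
# A $1 wholesale increase moves the competitive retail price by $1 but the
# exclusive carrier's price by only $0.50 -- the demand facing Apple is
# effectively less responsive, so Apple raises its wholesale price.
```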

Of course some of these new profits go to the retailer, AT&T.  No problem.  Foreseeing all of this, Apple and AT&T agreed to a large up-front transfer from AT&T to Apple equal to that amount.

Fracking.  Water is pumped into wells at high pressure to fracture the rock and release natural gas.  There is some controversy associated with how the water is disposed of when the fracking is done.  In Ohio the water is deposited in deep waste-water wells which happen to be near tectonic fault lines.  Probably not coincidentally there have been many earthquakes nearby over the past year.  These earthquakes have been small, the largest being about a 4.0 on New Year's Eve.

The controversy is whether the waste water disposal is causing the earthquakes and whether this externality is properly accounted for in the fracking calculus.  Bear in mind that it's not the fracking itself that causes the earthquakes.

An earthquake is a release of pressure.  The theory here is that the water in the deep wells lubricates the fault line and allows the release of the pressure built up along fault lines.  Fracking, and the associated disposal, adds only negligibly to the total pressure built up over time.  That pressure is caused by the geological processes in the Earth.  That is, the total quantity of earthquakes over the lifetime of the Earth is a constant, independent of fracking.

What fracking does is re-allocate that supply of earthquakes toward the present, and possibly toward specific locations. If the disutility of earthquakes were linear in the timing and quantity of earthquakes, there would be no aggregate welfare effects.

But probably that disutility is convex.  Many small earthquakes are preferred to one large one.  The disposal of water in deep wells releases pressure sooner and avoids the buildup that would cause a large earthquake.  Under this theory the externality from fracking is positive.

They should start fracking in California.

I have known Larry since the time I was on the junior job market and he was winding down a spectacular term as chairman of the BU economics department, having built a top-ten department out of nothing.  Eight years later I spent a year as a faculty member at BU and again Larry was chairman. Everybody who has spent any time with Larry in a professional capacity agrees that he is a natural-born leader.  He just has this quality that draws people from all sides to his side.  And he knows how to make an organization work.  On top of all that he is a great economist with world-leading expertise on the topics that would be most important for a President right now to know. I honestly can't think of anybody I know personally who would make a better President than Larry.  I might even vote for him.

Here’s his campaign web page. Via Tyler Cowen on Twitter.

An n-candidate election in which the electorate views n-1 of the candidates as equivalent alternatives to the nth.  But they need to solve the coordination problem of which one to vote for.  A prediction market allows speculators to drive up the price of any one of the n-1, convincing the voters that he is the focal point so that he wins, or conversely to jam the signal by equalizing their share prices while at the same time going long on the nth.  Of course the voters know this, but they may not know that all of the voters know (etc.) this.

How informative can prediction markets be in such a scenario?  I think Justin Wolfers might be interested in the answer.

See Rajiv Sethi for related thoughts.

From Inside Higher Ed:

In 2011, the association listed 2,836 positions, down only 6 jobs from 2010. The 2011 figures represent a 21 percent increase from the 2009 total, when the impact of the U.S. financial downturn was most evident on hiring in economics. A report by the association suggested that the total this year probably represents an increase in openings even over the 2,881 jobs reported in 2008 because many of those searches were called off after the economy nosedived in the fall of that year.

Via Lones Smith.

 

(Regular readers of this blog will know that I consider that a good thing.)

John Lazarev at Stanford GSB has a nice little theory paper (not his job market paper, which is not little and not theory, but also nice).  It's a model of market competition which consists of two stages.  In stage one the firms simultaneously and non-cooperatively choose subsets of prices.  The interpretation is that each firm is restricting itself to later choose only prices from its restricted set.  After seeing the restriction sets each firm has chosen, the firms then simultaneously choose prices from their respective sets.

This is a stylized model of the way “competition” works between airlines:

Almost every major US airline has independent pricing and yield (revenue) management departments. That operates as follows. The pricing department sets prices for each seating class (e.g. up to 6 non-refundable economy class fares) starting many days from the actual flight. These prices are subsequently updated very rarely. The revenue management department treats the prices as given but decides three times a day which of the fare classes to make available for purchase and which to keep closed. According to industry insiders, these departments do not actively interact with each other. Thus, there exist two stages of decision making. Effectively, the pricing department commits to a subset of prices, while the revenue management department chooses a price from this subset.

It's also a great question for a future prelim.  Construct an equilibrium (subgame-perfect please) in which the firms effectively collude and earn monopoly profits.

Simple.  (I will assume symmetric linear-cost, homogeneous-product price competition because it makes the argument simple and also quite stark:  standard Bertrand pricing leads to cutthroat competition and zero profits.) In the first stage each firm restricts itself to only two prices:  the monopoly price and marginal cost. If nobody deviates from this then all firms set the monopoly price.  If anybody deviates from this by either excluding the monopoly price or including an intermediate price then all firms set the lowest price in their chosen set.  All other deviations are ignored.

It's easy to check that this is a subgame perfect equilibrium and all firms earn monopoly profits.  Lazarev does the same for a more general model of differentiated-products price competition.
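
Here is a rough numerical check of the no-deviation conditions for a duopoly with linear demand and zero marginal cost (all numbers assumed; this is a sketch of the construction above, not Lazarev's model):

```python
from itertools import product

# Homogeneous-good Bertrand duopoly with demand q = 100 - p and zero marginal
# cost, so the monopoly price is 50. Numbers here are assumed for illustration.
C, PM = 0, 50
PRICE_GRID = [0, 10, 25, 40, 50]              # coarse grid of feasible prices
EQ_MENU = frozenset({C, PM})

def profits(p1, p2):
    """Bertrand profits: the cheaper firm serves the whole market, ties split."""
    def pi(own, other):
        q = max(100 - own, 0)
        if own < other:
            return own * q
        if own == other:
            return own * q / 2
        return 0.0
    return pi(p1, p2), pi(p2, p1)

def prescribed_prices(menu1, menu2):
    """Second-stage behavior described above: price at the monopoly level unless
    someone dropped it or slipped an intermediate price into their menu."""
    def conforms(m):
        return PM in m and all(p in (C, PM) for p in m)
    if conforms(menu1) and conforms(menu2):
        return PM, PM
    return min(menu1), min(menu2)

eq_profit = profits(*prescribed_prices(EQ_MENU, EQ_MENU))[0]

# Menu-stage deviations by firm 1, holding firm 2 at the equilibrium menu.
best_deviation = 0.0
for r in range(1, len(PRICE_GRID) + 1):
    for menu in map(frozenset, product(PRICE_GRID, repeat=r)):
        if menu != EQ_MENU:
            dev = profits(*prescribed_prices(menu, EQ_MENU))[0]
            best_deviation = max(best_deviation, dev)

# Pricing-stage deviation at the equilibrium menus: undercut to marginal cost.
undercut = profits(C, PM)[0]

print("equilibrium profit per firm:", eq_profit)        # 1250.0
print("best menu-stage deviation  :", best_deviation)   # never exceeds 1250.0
print("undercutting to cost earns :", undercut)         # 0.0
```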

(Regular readers of this blog will know I consider that a good thing.)

Market mechanisms of all sorts are plagued in practice by the problem of unraveling.  For example, well before completing law school, law students sign contracts to assume positions at established law firms.  Unraveling occurs when this early contracting motive causes market participants to compete by exiting the market earlier and earlier to the detriment of market efficiency.  An excellent summary of the problem and a slew of examples can be found in a paper by Roth and Xing.

One of the problems is that the formal market institutions were not designed to combat unraveling.  The adoption of stable matching mechanisms is often proposed as a solution.  A famous example is the National Resident Matching Program, which matches residents to hospitals, a stable matching mechanism that is widely believed to have significantly curtailed unraveling in that market.

Nevertheless unraveling is a robust phenomenon, and Songzi Du from Stanford GSB, in a joint paper with Yair Livne, has a very simple theoretical explanation.  Indeed, he shows that unraveling incentives are strong even in markets with a stable matching mechanism.  Moreover, large market size seems only to make the problem worse.

Consider an employer and employee who are both highly ranked and suppose that they meet each other well before the matching process begins so that neither has learned anything about the quality of the rest of the market.  Let’s analyze their incentives to sign a contract now and exit the market before the formal matching process takes place.

The employee reasons that the mechanism is either going to give him a better match or it's going to give the employer a better match. If he, the employee, gets a better match it is not likely to be that much better since the current employer is already highly ranked.  On the other hand, if the employer finds a better match then the employee is going to have to take his chances with the rest of the market.  Since the current employer is highly ranked, it is likely that whatever new employer he is matched with will be significantly worse.

On average going to the matching mechanism is a bad gamble.  And since the employer is in the exact same situation, they both prefer to exit than to take that gamble.
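
A quick Monte Carlo of that regression-to-the-mean intuition (the market size, quality distribution, and the pair's percentile are assumptions of mine, not Du and Livne's calibration):

```python
import random

random.seed(0)
N = 20              # assumed market size: 20 firms and 20 workers
q = 0.9             # the early pair: worker and firm both known to be at the 0.9 mark

def mechanism_partner(trials=50_000):
    """Average partner quality the 0.9 worker gets from the assortative stable
    match once everyone else's (iid uniform) quality is revealed."""
    total = 0.0
    for _ in range(trials):
        workers = sorted([q] + [random.random() for _ in range(N - 1)])
        firms = sorted([q] + [random.random() for _ in range(N - 1)])
        total += firms[workers.index(q)]   # partner with the same rank on the other side
    return total / trials

print("sign early with the 0.9 firm  :", q)
print("wait for the stable mechanism :", round(mechanism_partner(), 3))
# With these numbers the mechanism delivers a partner around 0.86 on average:
# the upside (a slightly better firm) is small, the downside (falling back to
# the middle of the pack) is large, and the firm faces the same gamble.
```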

Du and Livne use this idea to quantify how large a problem unraveling is likely to be.  They take the realistic position that participants are going to learn about the quality of close competitors prior to contracting. This gives them a rough sense of the possible matches they will get from the mechanism.  The previous intuition translates naturally to this setting.  If the two potential early contractors are near the high end of this group, they will want to match early.  Du and Livne show that for any given similarly ranked pair, this will happen about 1/4 of the time, and this is true even when the market is very large.

Finally, once it is established that unraveling is the norm and not the exception, he uses a dynamic model to give a sense of what kind of equilibrium an unraveled market settles into.  And the news here is not good either.  No matter how you try to make the match work, by assigning some to match early and some to wait, there will always be some pairs that want to deviate from that plan.  That is, there is no equilibrium.

Here is one choice passage.

Economists essentially have a sophisticated lack of understanding of economics, especially macroeconomics. I know it sounds ridiculous. But the reason why I tell people they should study economics is not so they’ll know something at the end—because I don’t think we know much—but because we’re good at thinking. Economics teaches you to think things through. What you see a lot of times in economics is disdain for other’s lack of thinking. You have to think about the ramifications of policies in the short run, the medium run, and the long run. Economists think they’re good at doing that, but they’re good at doing that in the sense that they can write down a model that will help them think about it—not in terms of empirically knowing what the answers are. And we have gotten so enamored of thinking things through that the fact that we don’t know anything needs to bother us more. So, yes, it’s true that the average guy on the street doesn’t understand economics, and it’s also true that we don’t understand economics. We just have a more sophisticated lack of understanding than the guy on the street.

Read the whole thing here.

I was talking to someone about matching mechanisms and the fact that strategy-proof incentives are often incompatible with efficiency.  The question came up as to why we insist upon strategy-proofness, i.e. dominant strategy incentives as a constraint.  If there is a trade-off between incentives and efficiency shouldn’t that tradeoff be in the objective function?  We could then talk about how much we are willing to compromise on incentives in order to get some marginal improvement in efficiency.

For example, we might think that agents are willing to tell the truth about their preferences as long as manipulating the mechanism doesn’t improve their utility by a large amount.  Then we should formalize a tradeoff between the epsilon slack in incentives and the welfare of the mechanism.  The usual method of maximizing welfare subject to an incentive constraint is flawed because it prevents us from thinking about the problem in this way.

That sounded sensible until I thought about it just a little bit longer.  If you are a social planner you have some welfare function, let’s say V.  You want to choose a mechanism so that the resulting outcome maximizes V.  And you have a theory about how agents will play any mechanism you choose.  Let’s say that for any mechanism M, O(M) describes the outcome or possible outcomes according to your theory.  This can be very general:  O(M) could be the set of outcomes that will occur when agents are epsilon-truth-tellers, it could be some probability distribution over outcomes reflecting that you acknowledge that your theory is not very precise.  And if you have the idea that incentives are flexible, O can capture that:  for mechanisms M that have very strong incentive properties, O(M) will be a small set, or a degenerate probability distribution, whereas for mechanisms M that compromise a bit on incentives O(M) will be a larger set or a more diffuse probability distribution.  And if you believe in a tradeoff between welfare and incentives, your V applied to O(M) can encode that by quantifying the loss associated with larger sets O(M) compared to smaller sets O(M).

But whatever your theory is you can represent it by some O(.) function.  Then the simplest formulation of your problem is:  choose M to maximize V(O(M)). And then we can equivalently express that problem in our standard way: choose an outcome (or set of outcomes, or probability distribution over outcomes ) O to maximize V(O) subject to the constraint that there exists some mechanism M for which O = O(M).  That constraint is called the incentive constraint.
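
In symbols, with ℳ standing for the set of available mechanisms, the two formulations are the same problem:

```latex
\max_{M \in \mathcal{M}} V\bigl(O(M)\bigr)
\;\;\Longleftrightarrow\;\;
\max_{O} \; V(O) \quad \text{subject to} \quad O \in \{\, O(M) : M \in \mathcal{M} \,\}.
```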

Incentives appear as a constraint, not in the objective.  Once you have decided on your theory O, it makes no sense to talk about compromising on incentives and there is no meaningful tradeoff between incentives and welfare.  While we might, as a purely theoretical exercise, comment on the necessity of such a tradeoff, no social planner would ever care to plot a “frontier” of mechanisms whose slope quantifies a rate of substitution between incentives and welfare.

At elviscostello.com

The live recording finds the Imposters in rare form, while the accompanying motion picture blueprints the wilder possibilities of the show, as it made its acclaimed progress across the United States throughout the year.

Unfortunately, we at http://www.elviscostello.com find ourselves unable to recommend this lovely item to you as the price appears to be either a misprint or a satire.

All our attempts to have this number revised have been fruitless but rather than detain you with tedious arguments about morality, panache and book-keeping – when there are really bigger fish to filet these days – we are taking the following unusual step.

If you should really want to buy something special for your loved one at this time of seasonal giving, we can whole-heartedly recommend, “Ambassador Of Jazz” – a cute little imitation suitcase, covered in travel stickers and embossed with the name “Satchmo” but more importantly containing TEN re-mastered albums by one of the most beautiful and loving revolutionaries who ever lived – Louis Armstrong.

The box should be available for under one hundred and fifty American dollars and includes a number of other tricks and treats. Frankly, the music is vastly superior.

If on the other hand you should still want to hear and view the component parts of the above mentioned elaborate hoax, then those items will be available separately at a more affordable price in the New Year, assuming that you have not already obtained them by more unconventional means.

By now those means are in fact the conventional ones, but we get the point. Slouch slouch nimpupani.

From Ariel Rubinstein of course, here’s his answer to question 5:

Q5. I have already written 30 pages. I have repeated myself several times and my proofs are much longer than necessary. I have added uncertainty wherever I could and I have moved from a discrete case to Banach spaces. My adviser still says I hardly even have enough for a note. How long should my paper be?

If you don’t have a good idea, then keep going. Don’t stop at less than 60 single-spaced pages. Nobody will read your paper in any case so at least you have a chance to publish the paper in QJE or Econometrica.

If you have a really good idea, my advice is to limit yourself to 15 double-spaced pages. I have not seen any paper in Economics which deserved more than that and yours is no exception. It is true that papers in Economics are long, but then almost all of them are deathly boring. Who can read a 50-page Econometrica paper and remain sane? So make your contribution to the world by writing short papers — focus on new ideas, shorten proofs to the bare minimum (yes, that is possible!), avoid stupid extensions and write elegantly!

The rest is here, via Jakub Steiner on Facebook.

This is from an article in the New York Times.

When the taxi baron Robert Scull sold part of his art collection in a 1973 auction that helped inaugurate today’s money-soused contemporary-art market, several artists watched the proceedings from a standing-room-only section in the back. There, Robert Rauschenberg saw his 1958 painting “Thaw,” originally sold to Scull for $900, bring down the gavel at $85,000. At the end of the Sotheby Parke Bernet sale in New York, Rauschenberg shoved Scull and yelled that he didn’t work so hard “just for you to make that profit.”

The uproar that followed in part inspired the California Resale Royalties Act, requiring anyone reselling a piece of fine art who lives in the state, or who sells the art there for $1,000 or more, to pay the artist 5 percent of the resale price.


If you have a meeting scheduled at 2 and you are worried it's going to drag on too long, what do you do?  Here's a confession:  Sometimes I lie and say I have an appointment and I have to leave at 3. But it's a double-edged sword.

Because warning my friend that I will have to leave at 3 implies that I anticipate that the hour will be a binding constraint.   That would only be true if I expect the meeting to go that long.   My friend will therefore infer that the topic of our meeting is important enough to me to potentially warrant an hour of face time.

As far as I know, had I never said anything he might have kept the meeting to 30 minutes, but now that I capped it at 3:00, it's a sure thing we are going to meet for the full hour.

The problem is that there is no way I can know how long he was planning to meet. If I knew he was planning to leave at 2:30 I wouldn't say anything. But if he is actually planning to stay until 4:30 and I don't invent a 3:00 appointment I am hosed.

Of course some meetings really need to take more than 30 minutes and often you only discover that in the course of the meeting.  The downside of the cap is that it commits you.  Unless you want to lose all credibility you are going to have to keep to your fictional meeting and cut those meetings shorter than they should be.

So what is the optimal cap? The tradeoffs are reminiscent of textbook monopoly pricing. You have your marginal and infra-marginal meetings. If I raise the cap by a minute then the marginal meeting gets the extra minute that it really needs but the infra-marginal meeting gets needlessly extended.

It's a complicated calculation that comes down to hazard rates, incentive constraints, etc. but I will save you the effort; I have done the integration by parts.  The optimal cap is exactly 37 minutes.  You can't say that of course because your friend will know that nobody schedules appointments at 2:37, so you will have to round up or down to the half hour.

Or schedule all your meetings to start at 23 minutes past the hour.

I stopped following Justin Wolfers on Twitter.  Not because I don't want his tweets (they are great) but because everyone I follow also follows Justin. They all retweet his best tweets and I see those, so I am not losing anything.

Which made me wonder how increasing density of the social network affects how informed people are. Suppose you are on a desert island but a special desert island which receives postal deliveries.  You can get informed by subscribing to newspapers but you can’t talk to anybody.  As long as the value v of being informed exceeds the cost c you will subscribe.

Compare that to an individual in a dense social network who can either pay for a subscription or wait around for his friends to get informed and find out from them.  It won’t be an equilibrium for everybody to subscribe.  You would do better by saving the cost and learning from your friends.  Likewise it can’t be that nobody subscribes.

Instead in equilibrium everybody will subscribe with some probability between 0 and 1.  And there is a simple way to compute that probability.  In such an equilibrium you must be indifferent between subscribing and not subscribing.  So the total probability that at least one of your friends subscribes must be the q that satisfies vq = v – c.  The probability of any one individual subscribing must of course be lower than q since q is the total probability that at least one subscribes.  So if you have n friends, then they each subscribe with the probability p(n) satisfying 1 – [1 – p(n)]^n = q.

(Let’s pause while the network theorists all rush out of the room to their whiteboards to solve the combinatorial problem of making these balance out when you have an arbitrary network with different nodes having a different number of neighbors.)

This has some interesting implications.  Suppose that the network is very dense so that everybody has many friends.  Then everyone is less likely to subscribe. We only need a few people to be Justin Wolfers’ followers and retweet all of his best tweets.  Formally, p(n) is decreasing in n.

That by itself is not such a bad thing. Even though each of your friends subscribes with a lower probability, on the positive side you have more friends from whom you can indirectly get informed.  The net effect could be that you are more likely to be informed.

But in fact the net effect is that a denser network means that people are on average less informed, not more. Because if the network density is such that everyone has (on average) n friends, then everybody subscribes with probability p(n) and then the probability that you learn the information is q + (1-q)p(n). (With probability q one of your friends subscribes and you learn from them, and if you don’t learn from a friend then you become informed only if you have subscribed yourself which you do with probability p(n).) Since p(n) gets smaller with n, so does the total probability that you are informed.
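
Plugging in numbers makes the comparative static concrete (the value v and cost c below are assumptions for illustration):

```python
# Indifference pins down q via v*q = v - c, i.e. q = 1 - c/v. Each of your
# n friends then subscribes with the p(n) solving 1 - (1 - p)^n = q.
v, c = 1.0, 0.2        # assumed value of being informed and subscription cost

q = 1 - c / v          # chance at least one friend subscribes, fixed by indifference

for n in (1, 2, 5, 10, 20):
    p = 1 - (1 - q) ** (1 / n)       # each friend's subscription probability
    informed = q + (1 - q) * p       # learn from a friend, or subscribe yourself
    print(f"n = {n:2d}: p(n) = {p:.3f}, P(informed) = {informed:.3f}")
# Both p(n) and the chance of ending up informed fall as n grows:
# a denser network means, on average, less informed people.
```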

Another way of saying this is that, contrary to intuition, if you compare two otherwise similar people, those who are well connected within the network have a tendency to be less informed than those who are in a relatively isolated part of the network.

All of this is based on a symmetric equilibrium.  So one way to think about this is as a theory for why we see hierarchies in information transmission, as represented by an asymmetric equilibrium in which some people subscribe for sure and others are certain not to.  At the top of the hierarchy there is Justin Wolfers.  Just below him we have a few people who follow him.  They have a strict incentive to follow him because so few others follow him that the only way to be sure to get his tweets is to follow him directly.  Below them is a mass of people who follow these “retailers.”

Jonah Lehrer describes an fMRI experiment published in Nature by Tricomi, Rangel, Camerer, and O’Doherty.   Subjects were first randomly assigned to be rich or poor and given an endowment accordingly.  Then they were put in the scanner.

…the scientists found something strange. When people in the “rich” group were told that a poor stranger was given $20, their brains showed more reward activity than when they themselves were given an equivalent amount. In other words, they got extra pleasure from the gains of someone with less. “We economists have a widespread view that most people are basically self-interested and won’t try to help other people,” Colin Camerer, a neuroeconomist at Caltech and co-author of the study, told me. “But if that were true, you wouldn’t see these sorts of reactions to other people getting money.”

I find it helpful to step back and think through how we can come to conclusions like this.  Some time ago, neuroscientists correlated certain brain activity measurements with the state of happiness.  They did this either by having the subject report when he was happy and then measuring his brain,  or by observing him making choices that, presumably, made him happy and then measuring his brain.

Once we have the brain data we no longer need to ask him whether he is happy or make inferences based on his choices, we can just scan his brain to find out.  And that allows us to conclude that the rich are less happy receiving $20 than when the poor get $20.

But still, if we wanted to we could just ask them.  We might learn something. What would we do if the subjects responded that in fact they would be happier having the $20 for themselves?  Would we conclude that they are lying?

Also we might learn something from just letting them decide for themselves whether to give money to the poor.  What would we conclude if we see, as we do indeed see in the world, that they do not? That they don’t understand as well as we do what makes their brain happy?

Either way we have a real problem.  Because our original reason for associating the specific brain activity with happiness was based on either believing they are honest about what makes them happy or believing that the choices they make reveal what makes them happy.  But now in order to apply what we learned we are forced to reject those same premises.

My street is a Halloween Mecca.  People flock from neighboring blocks to a section of my street and to the street just North of us.  (Ours is an East-West street as are most of the residential streets in the area.) And I have noticed that in other neighborhoods in the area and in other places I have lived there is usually a local, focal Halloween hub where most of the action is.

And on those blocks where most of the action is the residents expect that they will get most of the action.  They stock more candy, they lavishly decorate their yards, and they host haunted houses.  They even serve beer.  (To the parents)

I think I have figured out why we coordinated on my street.

In a perfectly symmetric neighborhood lattice, trick-or-treating is more or less a random walk. With a town full of randomly walking trick-or-treaters every location sees on average the same amount of traffic.  Inevitably, one location will randomly receive an unusually large amount of traffic; those residents will come to expect it next year, decorate their street, and reinforce the trend.  Then it becomes the focal point.

In this perfectly uniform grid, any location is equally likely to become that focal point.  That is the benchmark model.

But neighborhoods aren't symmetric.  One particular asymmetry in my neighborhood explains why it was more likely that my street became the focal point.  Two streets to the South is a major traffic lane that breaks up the residential lattice.  In terms of our Halloween random walk, that street is a reflecting barrier.  People on the street just to the South of us will all be reflected to our street.  In addition we will receive the usual fraction of the traffic from streets to the North.  So, even before any coordination takes hold our street will see more than the average density of trick-or-treaters.  For that reason we have a greater chance of becoming the focal point.  And we did.
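
A crude simulation of that reflected random walk (the street layout, number of walkers, and length of the evening are all invented for illustration) shows the head start appearing before any coordination:

```python
import random

random.seed(42)
N_STREETS = 30       # east-west streets indexed 0..29; the major road lies just south of street 0
STEPS = 8            # length of the trick-or-treating evening, in moves
WALKERS = 50_000     # trick-or-treaters, initially spread uniformly over the streets

visits = [0] * N_STREETS
for _ in range(WALKERS):
    s = random.randrange(N_STREETS)
    for _ in range(STEPS):
        if s == 0:
            s = 1                        # pinned against the major road: reflected north
        elif s == N_STREETS - 1:
            s = N_STREETS - 2            # reflected at the far northern edge
        else:
            s += random.choice((-1, 1))  # interior streets: a fair coin each way
        visits[s] += 1

average = sum(visits) / N_STREETS
for s in range(6):                       # the streets near the major road
    print(f"street {s}: {visits[s] / average:.2f}x average traffic")
# Street 1 -- the one that receives all of its southern neighbor's walkers
# (they cannot go further south) plus the usual half from the north -- beats
# the interior streets before any candy arms race has begun.
```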

As reported on the Planet Money blog:

It sounds ridiculous today. But not so long ago, the prospect of a debt-free U.S. was seen as a real possibility with the potential to upset the global financial system.  We recently obtained the report through a Freedom of Information Act Request. You can read the whole thing here. (It’s a PDF.)

The problem?  No debt means no T-bonds.  Without T-bonds what happens to all the financial instruments linked to T-bond yields?  The white paper was co-authored by my old Berkeley mate Jason Seligman.

An auctioneer is never tempted to employ a shill bidder.

To be sure, he might want to make the winning bidder pay a higher price and using a shill bidder is one way to make that happen.  For example, in an English auction the seller could shill bid until the price reaches the point where all but one of the bidders have dropped out. That price is the highest revenue he would have earned without shill bidding, and by shilling a little bit longer before finally dropping out, the seller could try to extract something more.

Of course, this comes at some risk for the seller because there is a chance that the high bidder will drop out before the shill bidder does and then the seller misses out on a sale. Still, shill bidding pays off on average if the seller thinks that this small-probability loss is outweighed by the large-probability gain.

Nevertheless, the seller would never be tempted to do this.

The reason is that he could achieve exactly the same thing using a reserve price. Before the auction even begins he can ask himself what he would want to do if the price rose to that level. If he decided that he would want to use a shill bidder to raise the price even further then he could bring about exactly the same effect by setting his reserve price at the desired level.

That is, a shill bidder is just a reserve price in disguise.

(ps, you don’t have to get very fancy to see why this is wrong.)

Economics is a unique discipline in that the technical ideas have to be explainable to regular people.  Of course, the ideas in economics are not as technical as physics, but there is essentially nothing at stake in being able to explain Maxwell's Demon to a lay person, although that is certainly a talent.

And most disciplines that must be explained to the great unwashed are not technical enough for that to be any challenge.

Hand-in-hand with that distinctive feature is the fact that economic ideas are about things that regular people already have opinions on (usually strong ones) [how's that for un-dangling a preposition!]. To be able to have a useful dialog with them requires that you understand their opinions and, most importantly, why they have them.

Because strong opinions don’t come from nowhere. There is always some logic behind them.

So to be a good economist you must be familiar with, and see the logic behind, wrong ideas. This requires you to be sufficiently dumb. Because really really smart people would never have entertained those ideas and they will find them completely foreign and not worthy of consideration alongside the correct logic.

On the other hand you do have to be sufficiently smart to know why the logic is incomplete. But that by itself is usually not so demanding.

What is demanding is to be sufficiently dumb and sufficiently smart to be able to do both, AND also to be able to explain to BOTH the regular people and the smart people the other side.

Finally, I would add that being sufficiently dumb is crucial for finding good research projects. It has to do with the elusive “non-obvious” ideas. In practice “non-obvious” means “not obvious to the regular guy.” To spot those projects you need to have a replica of the same mental infrastructure that the regular guy is equipped with.

Advertisers want information about your tastes and habits so they can decide how much they are willing to pay to advertise at you.  That information is stored by your web browser on your hard drive.  Did you know that every time you access a web page you freely hand over that information to a broker who immediately sells it to advertisers who then immediately use it to bid for access to your eyeballs?

Here’s how it works.  Internet vendors, say Amazon, pass information to you about your transactions and your browser stores them in the form of cookies. Later on, advertisers are alerted when you are accessing a web page and they compete in an auction for the ad space on that page.  At that moment, unless you have disabled the passing of cookies, your browser is sending to potential advertisers all of the cookies stored on your hard drive that might contain relevant information about you.

However, many of the really valuable cookies are encrypted by the web site that put them there.  For example, if Amazon encrypts its cookies then even though your browser gives them away for free, they are of no use to advertisers.

That is, unless the advertisers purchase the key with which to decrypt your cookies. And indeed Amazon will make money from your data by selling its keys to advertisers.  It could sell them directly but it will probably prefer to sell them through an exchange where advertisers come to buy cookies by the jar.

The interesting thing about the market for cookies is that you are the owner of the asset and yet all of the returns are going to somebody else.  And it's not because your asset and mine are perfect substitutes.  You are the monopolistic provider of information about you and when you arrive at a website it is you the advertisers are bidding for.

How long will it be before you get a seat at the exchange?  Nothing stops you from putting a second layer of encryption over Amazon’s cookies and demanding that advertisers pay you for the key.  Nothing stops me from paying you for exclusive ownership of your keys, cornering the market-in-you, and recouping the monopoly profit.
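
The second layer you would add is just nested encryption: reading the cookie then requires your key as well as Amazon's, so you can charge for yours the same way Amazon charges for its. A toy sketch using the Python `cryptography` package's Fernet (the cookie contents and the key-selling arrangement are of course hypothetical):

```python
from cryptography.fernet import Fernet

# Amazon encrypts the cookie it drops on your machine and sells its key.
amazon_key = Fernet.generate_key()
cookie = Fernet(amazon_key).encrypt(b"bought: espresso machine, running shoes")

# You wrap Amazon's ciphertext in a second layer with a key only you hold.
my_key = Fernet.generate_key()
double_locked = Fernet(my_key).encrypt(cookie)

# An advertiser who buys only Amazon's key sees gibberish; with both keys
# (yours first, then Amazon's) the data is readable again.
inner = Fernet(my_key).decrypt(double_locked)
print(Fernet(amazon_key).decrypt(inner))   # b'bought: espresso machine, running shoes'
```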

File under “Feel Free To Profit From This Idea.”

(I learned about the market for cookies from Susan Athey’s new paper and a post-seminar dinner conversation with her and Peter Klibanoff, Ricky Vohra, and Kane Sweeney.)

(Picture:  Scale Up Machine Fail from http://www.f1me.net)

Via BoingBoing, why is the Indiana Election Commission putting cubes of styrofoam in their mailers?

The Styrofoam cube enclosed in this envelope is being included by the sender to meet a United States Postal Service regulation. This regulation requires a first class letter or flat using the Delivery or Signature Confirmation service to become a parcel and that it “is in a box or, if not in a box, is more than 3/4 of an inch thick at its thickest point.” The cube has no other purpose and may be disposed of upon opening this correspondence.