
Via kottke, Clusterflock gives five simple rules for effective bidding on eBay:

Step One: Find the product you want.

Step Two: Save the product to your watch list.

Step Three: Wait.

Step Four: Just before the auction ends, enter the maximum amount you are willing to pay for the item.

Step Five: Click submit.

This is called sniping.  That’s a pejorative label for what is actually a sensible and perfectly straightforward way to bid.  eBay is essentially an open second-price auction and sniping is a way to submit a “sealed” bid.  It’s a popular strategy and advocated by many eBay “experts.”  But does it really pay off?

Tanjim Hossain and I did an experiment (ungated version). We compared sniping to another straightforward strategy we call squatting. As the name suggests, squatting means bidding your value on an object at the very first opportunity, essentially staking a claim on that object. We bid on DVDs and randomly divided the auctions into two groups, sniping in the first group and squatting in the other.

The two strategies were almost indistinguishable in terms of their payoff. But for an interesting reason. A lot of eBay bidders use a strategy of incremental bidding. That’s where you act as if you are involved in an ascending auction (like an English auction) and you bid the minimum amount needed to become the high bidder. Once you are the high bidder you stop there and wait to see if you are outbid, then you raise your bid again. You do this until either you win or the price goes above the maximum amount you are willing to pay.

Against incremental bidders, sniping has a benefit and a cost (relative to squatting). You benefit when incremental bidders stop at a price below their value. You swoop in at the end, the incremental bidders have no time to respond, and you win at the low price.

The cost has to do with competition across auctions for similar objects.  If I squat on auction A and you are sniping in auction B, our opponents think there is one fewer competitor in auction B and more opponents enter auction B than A.  This tends to raise the price in your auction relative to mine.  In other words, squatting scares opponents away, sniping does not.

We found that these two effects almost exactly canceled each other out for auctions of DVDs.  We expect that this would be true for similar objects that are homogeneous and sold in many simultaneous auctions.  So the next time you are bidding in such an auction, don’t think too hard and just bid your value.
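
For the code-inclined, here is a toy simulation of the head-to-head comparison. One incremental bidder with a random value faces either a squatter or a sniper with a fixed value; the value distribution, starting price, and bid increment are all invented, and the cross-auction entry effect discussed above is deliberately left out, which is why sniping looks better here than it did in our data.

```python
import random

START, INCREMENT = 0.99, 0.50   # assumed starting price and bid increment

def squat(v_me, v_inc):
    # My proxy bid of v_me is posted immediately; the incremental bidder
    # keeps raising until the price passes his value, so I pay about v_inc.
    return v_inc if v_me > v_inc else None

def snipe(v_me, v_inc):
    # The incremental bidder sits as high bidder at the start price; my
    # last-second bid leaves him no time to respond.
    return START + INCREMENT if v_me > START + INCREMENT else None

random.seed(0)
for strategy in (squat, snipe):
    total = 0.0
    for _ in range(100_000):
        v_me, v_inc = 20.0, random.uniform(0, 40)   # invented value draws
        price = strategy(v_me, v_inc)
        total += (v_me - price) if price is not None else 0.0
    print(f"{strategy.__name__}: average payoff {total / 100_000:.2f}")
```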

Now, I am still trying to figure out what I am going to do with all these copies of 50 First Dates we won in the experiment.

R. Duncan Luce was elected a fellow of the Econometric Society in 2009. He is 84. How could it have taken so long?

Here’s a model. There is a large set of economists and each year you have to decide which to admit to a select group of “fellows.” Assume away the problems of committee decision-making and say that an economist will be admitted if his achievements are above some standard. The problem is that there are many economists and it’s costly to investigate each one to see if they pass the bar.

So you pick a shortlist of candidates who are contenders and you investigate those. Some pass, some don’t. Now, the next problem is that there are many fellows and many non-fellows and it’s hard to keep track of exactly who is in and who is out. And again it’s costly to go and check every vita to find out who has not been admitted yet.

So when you pick your shortlist, you are including only economists who you think are not already fellows. Someone like Duncan Luce, who certainly should have been elected 30 years ago, most likely was elected 30 years ago, so you would never consider putting him on your shortlist.

Indeed, the simple rule of thumb you would use is to focus on young people for your shortlist.  Younger economists are more likely to be both good enough and not already fellows.
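
A quick simulation makes the point. Suppose a deserving economist can be shortlisted only while young, with some yearly chance of catching the committee’s eye; the numbers below are pure guesses.

```python
import random
random.seed(1)

P_SHORTLIST = 0.2   # assumed yearly chance a deserving young economist is shortlisted
YOUNG_YEARS = 15    # say ages 30 through 44, when the committee bothers to look

trials, never = 10_000, 0
for _ in range(trials):
    elected = any(random.random() < P_SHORTLIST for _ in range(YOUNG_YEARS))
    # past the young years the committee presumes anyone this good is
    # already a fellow, so a deserving economist who slipped through is lost
    if not elected:
        never += 1
print(f"{never} of {trials} deserving economists are never elected")
```

Even a small yearly miss probability compounds into a permanent omission for a few unlucky Luces.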

Will…You…Play…Black Knight Again??

There is a reason I live in Winnetka and not in Evanston. And it’s not because, as Sandeep would put it, I like to get up 30 minutes earlier than otherwise so that my daughters can put their hair up and dress like beautiful little dolls to match all the other dolls in their classes. No, it’s because after all the dolls are asleep we get to go to their parents’ mansions for parties and there’s always at least one parent who makes a living doing something incredibly interesting.

Tonight I met the guy who once made a living designing the classic pinball machines. And he designed the two pinball machines, Black Knight in 1980 and High Speed in 1986, that are bookends for a period when the most important stuff I was learning about life was learned within a few feet of at least one of these machines.

It turns out these were also major turning points in the history of pinball itself. In 1980, pinball went digital, multi-ball, and multi-media starting with the game Black Knight. Black Knight brought pinball to a new level, literally speaking, because it was among the first games with ramps and elevated flippers, but even more importantly because it brought a new challenge that drew in and solidified a pinball crowd. In doing so it also set the pinball market on a path that would eventually lead to its demise.

In 1986, Williams High Speed changed the economics of pinball forever. Pinball developers began to see how they could take advantage of programmable software to monitor, incentivize, and ultimately exploit the players. They had two instruments at their disposal: the score required for a free game, and the match probability. All pinball machines offer a replay to a player who beats some specified score. Pre-1986, the replay score was hard-wired into the game unless the operator manually re-programmed the software. High Speed changed all that. It was pre-loaded with an algorithm that adjusted the replay score according to the distribution of scores on that machine over a specified time interval.

The early versions of this algorithm were crude, essentially targeting a weighted moving average. But later implementations were more sophisticated. The goal was to ensure that a fixed percentage of all scores, say the top 5%, would win a free game. The score level that implements this varies with the machine, location, and time. The algorithm would compute a histogram of scores and set the replay threshold at the empirical 95th percentile. Later designs would allow the threshold to rise quickly to combat the wizard-goes-to-the-cinema problem. The WGTTC problem is where a machine has adjusted down to a low replay score because it is mostly played by novices. Then any time an above-average player gets on the machine, he’s getting free games all day long.
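
Here is roughly what such an algorithm might look like. The window size, target fraction, and adjustment speeds below are my guesses, not the actual Williams code, but the asymmetry is the anti-WGTTC feature: the threshold jumps up quickly when a wizard shows up and drifts back down slowly.

```python
from collections import deque

class ReplayThreshold:
    def __init__(self, start=1_000_000, window=200, top_fraction=0.05):
        self.threshold = start
        self.scores = deque(maxlen=window)   # rolling history of recent scores
        self.top_fraction = top_fraction

    def record_game(self, score):
        """Update the threshold and report whether this game earned a replay."""
        self.scores.append(score)
        ordered = sorted(self.scores)
        cutoff = ordered[int(len(ordered) * (1 - self.top_fraction))]
        if cutoff > self.threshold:
            self.threshold = cutoff                            # rise fast
        else:
            self.threshold += 0.1 * (cutoff - self.threshold)  # fall slowly
        return score >= self.threshold
```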

The other tool is the match probability: you win a free game if the last two digits of your score match an apparently random draw. While adjustments to the high-score threshold are textbook price theory, adjustments to the match probability are pure behavioral economics. Let’s clear this up right away. No, the match probability is not uniform and yes, it is strategically manipulated depending on who is playing and when. For example, if the machine has been idle for more than three minutes, the match probability is boosted upward. You will never match if you won a free game by high score. And it gets more complicated than that. Any time there are two or more players and they finish a game with no credits left, one player (but only one) is very likely to match. Empirically, the other players will more often than not put in another quarter to play again.
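
In pseudocode, the match rules might look something like this; the base rate and the boosts are invented, only the qualitative manipulations are the ones described above.

```python
import random

def match_winners(players, idle_minutes):
    # players: list of dicts with "name", "credits", "won_replay_by_score"
    eligible = [p for p in players if not p["won_replay_by_score"]]
    # two or more players, all out of credits: exactly one match (here: always)
    # to keep the group feeding in quarters
    if len(players) >= 2 and all(p["credits"] == 0 for p in players):
        return {random.choice(eligible)["name"]} if eligible else set()
    p_match = 0.25 if idle_minutes > 3 else 0.07   # idle machines tempt passers-by
    return {p["name"] for p in eligible if random.random() < p_match}
```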

(The tilt tolerance, by contrast, has always been controlled by a physical device which is adjusted manually and rarely in response to user habits.)

Pinball attracted a different crowd than video games like Defender (my new pal designed Defender and Stargate too), and this is the fundamental theorem of pinball economics. Pinball skill is transferable. If you can pass, stall, nudge, and aim on one machine you can do it on any machine. This is both a blessing and a curse for pinball developers. The blessing is that pinball players were a captive market. The curse was that to keep the pinball players interested the games had to get more and more intricate and challenging.

Pinball developers struggled with this problem as pinball was slowly losing to video games.  Video games competed by adding levels of play with increasing difficulty.  Any new player could quickly get chops on a new game because the low levels were easy.  This ensured that new players were drawn in easily, but still they were continually challenged because the higher levels got harder and harder.  By contrast, the physical nature of pinball, its main attraction to hardcore players, meant that there was no way to have it both ways.

Eventually, to keep the pinballers playing, the games became so advanced that entry-level players faced an impossible barrier. High-schoolers in 1986 were either dropouts or professionals by 1992, and without an inflow of new players that year essentially marked the end of pinball. In 1992 The Addams Family was the last machine to sell big. By this time, pinball machines used a free-game system called replay boost: after any replay, the score required was increased by some increment. Apparently, only hardcore pinballers were left and this was the only way to prevent them from playing indefinitely for free.

Today Williams owns Bally but they make slot machines and video poker. There currently exists one boutique manufacturer of pinball machines but it’s fair to say that innovation stopped in 1992.

My new best friend has a basement full of Black Knight, High Speed, Defender, Pac Man, Asteroids, and everything else you inserted quarters into when you were 16.  Now I just have to find a supplier of C45, Djarums, and gooney-birds and I’ll be ditching class to hear sirens and “Pull Over Buddy.”

News Corp., parent company of Fox News, is reported to have made an offer for NBC Universal in competition with Comcast. Who should be willing to pay more for an upstream supplier (NBC), the downstream monopolist (Comcast), or an upstream competitor (News Corp.)?

We have spent most of the course using the tools of dominant-strategy mechanism design to understand efficient institutions and second-best tradeoffs.  These topics have a normative flavor:  they describe the limits of what could be achieved if institutions were designed with efficiency as the goal.

But most economic activity is regulated not by efficiency-motivated planners but by self-interested agents. This adds an additional friction which potentially moves us even further from the first-best. Self-interested mechanism designers will probably introduce new distortions into their mechanisms as they try to tilt the distribution of surplus their way.

In this lecture we use the model of an auction to see the simplest version of this.  We consider the problem of designing an auction for two bidders with the goal of maximizing revenue rather than efficiency.  We do not have the tools necessary to do the full-blown optimal auction problem but we can get intuition by studying a narrower problem:  find an optimal reserve price in an English auction.

With a diagram we can see the tradeoffs arising from adjusting the reserve price above the efficient level.  The seller loses because sometimes the good will go unsold but in return he gains from receiving a higher price when the good is sold.  The size and shape of the regions where these gains and losses occur suggest that it should be profitable to raise the reserve price above cost.

Without solving explicitly for the optimal reserve price we can give a pretty compelling, albeit not 100% formal, argument that this is indeed the case. At the efficient reserve price (equal to the cost of selling) total surplus is maximized. A graph of total expected surplus as a function of the reserve price should be locally flat at the efficient point. (We are implicitly assuming differentiability of total expected surplus which holds if the distribution of bidder values is nice.) Buyers’ utility is unambiguously declining when the reserve price increases. Since total surplus is by definition the sum of buyers’ utility and seller profit, it follows that seller profit is locally increasing as the reserve price is raised above the efficient level.
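
A Monte Carlo check of the argument in the textbook case, assuming two bidders with independent uniform values and a seller cost of zero: the efficient reserve is 0, yet revenue peaks near 0.5.

```python
import random
random.seed(0)

def revenue(reserve, n=200_000):
    """Average revenue of a second-price auction with the given reserve,
    two bidders with values uniform on [0, 1], seller cost zero."""
    total = 0.0
    for _ in range(n):
        hi, lo = sorted((random.random(), random.random()), reverse=True)
        if hi >= reserve:                 # otherwise the good goes unsold
            total += max(lo, reserve)     # winner pays max(second bid, reserve)
    return total / n

for r in (0.0, 0.25, 0.5, 0.75):
    print(f"reserve {r:.2f}: expected revenue {revenue(r):.3f}")
```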

Thus, while we know that in principle this allocation problem can be solved efficiently, when the allocation is controlled by a profit maximizer, there is a new source of inefficiency.  The natural next question is whether competition among profit-maximizing sellers will mitigate this.

Here are the slides.

I have a student who is in charge of Northwestern’s Undergraduate Economics Society and he is planning an event in the Spring.  They have some money and they want to organize an activity for their membership that will be fun and economics-oriented.  Think of this as an opportunity to design an experiment involving any number of students (up to hundreds of students), but it should be fun as well as educational.  I know that our readers will have some good ideas for them.  Please share them in the comments.

Shows that are widely time-shifted are not losing money due to skipped commercials.

Against almost every expectation, nearly half of all people watching delayed shows are still slouching on their couches watching messages about movies, cars and beer. According to Nielsen, 46 percent of viewers 18 to 49 years old for all four networks taken together are watching the commercials during playback, up slightly from last year.

On net, the gain in viewership from time-shifted shows often more than compensates for the few who skip ads.

When NBC added the “The Jay Leno Show” at 10 each weeknight, it boasted that the show would be “DVR proof,” meaning that because the humor was topical, viewers were more likely to watch it live, avoiding much of the commercial-skipping that was expected to plague recorded shows.

Now being “DVR proof” looks like a disadvantage. Mr. Leno’s shows were among the few with three-day commercial ratings lower than their live ratings. Not enough people have been recording the show and playing it back to overcome the commercial-skipping being done by a percentage of its live viewers.

From the NY Times.

James Surowiecki of the New Yorker describes and analyzes a price war for Stephen King:

Wal-Mart began by marking down the prices of ten best-sellers—including the new Stephen King and the upcoming Sarah Palin—to ten bucks. When Amazon, predictably, matched that price, Wal-Mart went to nine dollars, and, when Amazon matched again, Wal-Mart went to $8.99, at which point Amazon rested. (Target, too, jumped in, leading Wal-Mart to drop to $8.98.) Since wholesale book prices are traditionally around fifty per cent off the cover price, and these books are now marked down sixty per cent or more, Amazon and Wal-Mart are surely losing money every time they sell one of the discounted titles. The more they sell, the less they make. That doesn’t sound like good business.

We have a few answers that avoid this conclusion. But if I tell you, I have to redo large chunks of my class…

So long, anonymity — it’s been swell. For nearly ten years now, I have done my job incognito. Now I am joining the ranks of no-longer-anonymous restaurant critics. Last Friday, I gave a lecture to the students and faculty of the Texas A&M Meat Science Center without the usual hat and sunglasses. I didn’t wear a disguise on Sunday when I appeared at the Texas Book Festival either. Soon you will be able to Google grainy photos of me to your heart’s content. I also have given my publishers an author’s photo to use for publicity.

So writes Robb Walsh, the no-longer-anonymous food critic for the Houston Press. He is the latest critic to shed his anonymity since the google-able Sam Sifton took over the job at the New York Times. Before that, professional food critics were expected to visit restaurants anonymously and indeed the presumption was that anonymity was required for a critic to provide a useful review. But there are arguments either way.

You might think that the job of a critic is to distinguish the great chefs from the merely good ones. A conspicuous critic would get special treatment and this biases the test. But as long as the critic (or the reader) accounts for this and can “invert the mapping,” essentially factoring out the extra effort, this is not really a problem.

We may only want a relative ranking of chefs and adding a constant to each chef’s baseline quality won’t change that.

Noise in the signal can complicate the inversion but this could go either way. One theory is that the effect of extra effort is to reduce variance in the quality of the dish. If so, a conspicuous chef gets a better signal. Alternatively it could be there is a uniform upper bound and any competent chef can hit that upper bound with enough effort. In this case, anonymity is required.

An anonymous critic generates other welfare gains. Every diner has a positive probability of being Ruth Reichl and so every diner gets a slightly better meal than otherwise. Once critics out themselves, we are all 100% nobodies again.

We may not care who is the most talented chef but instead we want to know where we (nobodies) are going to get the best meal. As long as these are sufficiently correlated, again not much is lost from going conspicuous. But in any event it is not clear that a single critic provides much more information about this than could be had from data on popularity alone. If we want critics to break herds, then they should be anonymous.

Maybe we want critics to start herds. Critics are most influential for tourists and locals prefer to avoid tourists. Conspicuous critics enable efficient market segmentation where restaurants wishing to cater to tourists give special treatment and get good reviews. A good review can destroy a restaurant that caters to locals, so all parties benefit if the critic is conspicuous, ensuring he is given a bad meal.

(Arising from conversations with Ron Siegel, Mike Whinston, Jeroen Swinkels, Eddie Dekel and Phil Reny.)

Comcast is in talks with GE to buy NBC Universal which would give Comcast all of NBC’s television and movie assets. According to the Wall Street Journal we should know in a matter of weeks if agreement is reached but any deal would certainly be given a lengthy review by anti-trust authorities. A concern often cited is the motive of vertical foreclosure: a merged Comcast-NBC would use their alliance to gain advantage over competitors for content provision. This issue also foreshadows those that would arise with internet content provision should net-neutrality be abandoned.

Comcast is a monopoly provider of access to content. Think of Comcast as the guy at the door charging you a fee to get into the party. You want to get in because inside there are people providing various services, perhaps for an additional fee. The best structure of all for Comcast would be to take ownership of all the service-providers inside and act as a joint monopoly collecting entrance fees and selling the services inside.

What would such a monopoly do to maximize profits? It would maximize the value of the services offered inside and then extract that value in the form of an entrance fee.

But this same outcome is achieved with the structure in which the services inside are provided competitively. Competition among service providers maximizes the value of the service thereby enabling the monopoly gatekeeper to earn the same profits as if it owned the entire enterprise.
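
A toy calculation makes the equivalence concrete. Assume a single inside service with linear demand q(p) = 1 − p and zero cost (numbers invented purely for illustration): the gatekeeper’s total take is maximized when the service is priced at cost, which is exactly what competition inside delivers.

```python
def gatekeeper_take(service_price):
    """Entrance fee (the consumer surplus triangle) plus inside service profit."""
    surplus = (1 - service_price) ** 2 / 2        # extractable via the entrance fee
    service_profit = service_price * (1 - service_price)
    return surplus + service_profit

for p in (0.0, 0.25, 0.5):
    print(f"inside price {p:.2f}: total take {gatekeeper_take(p):.3f}")
# 0.00 -> 0.500, 0.25 -> 0.469, 0.50 -> 0.375: pricing at cost wins
```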

So if you think that content is provided competitively (in my opinion it’s pretty close) then you shouldn’t worry too much about vertical foreclosure. On the other hand we should still wonder why Comcast is interested in NBC. Are there any plausible efficiency gains from a merger?

Merger review is based on looking for likely anti-competitive results or motives and if there is no clear anti-competitive motive then the merger is approved. But it’s worth considering a different standard here (and in the net-neutrality debate as well). If there are no clear efficiency gains and a merger enables anti-competitive behavior, even though that behavior may not have any clear rationale, then the merger should be rejected.

Allowing the merger would be like leaving scissors within reach of my (then) three-year-old. No good will come of it, and if I trust that she acts in her self-interest no harm would come either. But she is hard to predict.

Here is an experiment that as far as I know has not been done. (Please correct me if I am wrong.) Offer contestants the choice of two raffles. Raffle A pays the winner $1000; Raffle B pays the winner $1000+x, where x is a positive number. Contestants must pick one of the raffles and can buy at most one raffle ticket. They choose simultaneously. There will be one winner from each raffle and the winners will be determined by random draw.

In equilibrium the expected payoff in the two raffles should be equalized. This means that more people should enter raffle B to compete away the extra $x prize money. My hypothesis is that in fact too many people will enter raffle B so that raffle A will have a higher expected payoff. I am thinking that the contestants will insufficiently account for the strategic effect of free entry and will naively assume that B is the better choice. And I believe this effect will be large even when x is very small.
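
The equilibrium benchmark is easy to compute. With N contestants, equalizing expected payoffs pins down the split between the raffles; the snippet below uses N = 100 and x = 10 (numbers chosen only for illustration) and shows how little over-entry it takes to flip the ranking.

```python
def equilibrium_split(N, prize_a=1000.0, x=10.0):
    # equalize expected payoffs: prize_a / n_a == (prize_a + x) / n_b
    n_b = N * (prize_a + x) / (2 * prize_a + x)
    return N - n_b, n_b

n_a, n_b = equilibrium_split(100)
print(f"equilibrium entry: {n_a:.1f} in A, {n_b:.1f} in B")
print(f"payoffs: A = {1000 / n_a:.2f}, B = {1010 / n_b:.2f}")      # equalized

# the hypothesized bias: one extra entrant into B makes A the better bet
n_a, n_b = n_a - 1, n_b + 1
print(f"with over-entry: A = {1000 / n_a:.2f}, B = {1010 / n_b:.2f}")
```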

If this is true then it has important consequences for markets. Suppose two job market candidates are almost equally qualified but candidate A is a little better than candidate B. Candidate A will get too many interviews and candidate B too few. Candidate B’s slight disadvantage will be amplified by the market and he will too often go unemployed.

In the economics job market for new PhD’s, economics departments are often asked by potential employers for rankings of their candidates.  Departments are often unwilling to give more than coarse rankings and I believe that the effect I describe is the reason.

Apple is opening a new retail store on a neglected chunk of land in the middle of a gentrifying part of the North Side of Chicago. Before moving in they will tidy up a bit by landscaping an adjacent lot and refurbishing a run-down CTA subway station entrance and underground train platform (the North and Clybourn red line station) at a total cost of up to $4 million.

Over the years, the CTA’s building has fallen behind on maintenance. The paint is peeling, the windows are filthy, an electrical sign has dangling wires, and metal framing is rusting. Inside the building and underground, the station features white tile walls and fluorescent lighting, with hallways leading to two narrow platforms underground.

In the agreement approved at an August 19th Chicago Transit Board meeting, in exchange for the improvements the CTA will lease the bus turnaround to Apple at no cost for 10 years, with options on four, five-year extensions. The CTA will also give Apple “first rights of refusal”  for naming the station and placing advertising within the station, if the CTA later decides to offer those rights.

Memo to Steve Jobs: you will probably also want to take care of the crater-sized potholes on North Avenue just west of your new home. Thanks.

Via Mac Rumors.

It’s one of the many novel ideas from David K. Levine: the non-journal. You write your papers and you put them on your web site. Congratulations, you just published! Ah, but you want peer review. The editors of NAJ just might read your self-published paper and review it. We supply the peer review, you supply the publication. Peer review + publication = peer-reviewed publication. That was easy.

(NAJ is an acronym that stands for NAJ Ain’t a Journal.)

It’s been around for a few years with pretty much the same set of editors. It’s gone through some very active phases and some slow periods. David is trying to breathe some new life into NAJ by rotating in some new editors. So far so good. Arthur Robson is a new editor and he just reviewed a very cool paper by Emir Kamenica and Matthew Gentzkow called “Bayesian Persuasion.”

The paper tells you how a prosecutor manages to convict the innocent.  Suppose that a judge will convict a defendant if he is more than 50% likely to be guilty and suppose that only 30% of all defendants brought to trial are actually guilty.  A prosecutor can selectively search for evidence but cannot manufacture evidence and must disclose all the evidence he collects.  The judge interprets the evidence as a fully rational Bayesian.  What is the maximum conviction rate he can achieve?

The answer is 60%. This is accomplished with an investigation strategy that has two possible outcomes. One outcome is a conclusive signal that the defendant is innocent: by design it occurs with probability zero when the defendant is actually guilty, so the Bayesian judge acquits whenever he sees it. The other outcome is a partially informative signal. If the prosecutor designs his investigation so that this signal occurs with probability 3/7 when the defendant is innocent (and with probability 1 when guilty) then

  1. conditional on this signal, the defendant is 50% likely to be guilty (we can make it strictly higher than 50% if you like by changing the numbers slightly)
  2. 3/7 of the innocent and all of the guilty will get this signal: (3/7 times 70%) + 30% = 60%. (See the check below.)
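
A two-line Bayes check of those numbers:

```python
prior = 0.3                                  # share of defendants who are guilty
p_sig_guilty, p_sig_innocent = 1.0, 3 / 7    # the prosecutor's chosen investigation

p_signal = prior * p_sig_guilty + (1 - prior) * p_sig_innocent   # 0.60
posterior = prior * p_sig_guilty / p_signal                      # 0.50
print(f"conviction rate {p_signal:.2f}, posterior guilt {posterior:.2f}")
```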

The paper studies the optimal investigation scheme in a general model and uses it in a few applications.

One of the major bones of contention derives from just how good this year has been. The Rioja is the only wine-producing region in Spain that bears the label DOC (denominación de origen de calidad), and therefore is subject to especially stringent controls regarding quantity and quality of production. Each farmer may sell a certain amount of grapes, no more, depending on the amount of land under cultivation.

This year the grapes have been wildly abundant, and it is estimated that some 80 million kilograms have been left on the vine because they exceeded the quotas permitted for sale. Indeed, due to the weather conditions during harvest week, the grapes left on the vine are probably the very finest quality ones. For economically-pinched farmers, it is a blow to the heart to see those grapes left for the birds, and many demand to be allowed to sell the surplus as well, despite DOC regulations. Others claim this would flood the market and imperil the status of the DOC product.

The story is here.

We all know about generic drugs and their brand-name counterparts.  The identical chemical with two different prices.  Health insurance companies try to keep costs down by incentivizing patients to choose generics.  You have a larger co-pay when you buy the name brand.  Except when you don’t:

Serra, a paralegal, went to his doctor a few months ago for help with acne. She prescribed Solodyn. Serra told her he’d previously taken a generic drug called minocycline that worked well. The doctor told him that the two compounds are basically the same, but that you have to take the generic version in the morning and the evening. With Solodyn, you take one dose a day.

Serra told her that if the name-brand medicine was going to cost a lot more, he’d prefer the generic. “And then she presented this card,” he says. She explained that it was a coupon, and that he should give it to the pharmacist for a break on his insurance copay.

Without the card, Serra’s copay would have been $154.28. But when he got to the pharmacy, he presented his card. “They went to ring it up at the register,” he remembers. “And when it came up, the price was $10.”

NPR has the story. Chupulla chuck:  Mike Whinston.

Net neutrality refers to a range of principles ensuring non-discriminatory access to the internet.  A particularly contentious principle urges prohibition of “managed” or “tiered” internet service wherein your internet service provider is permitted to restrict or degrade service.  ISPs argue that without such permission they are unable to earn sufficient return on investment in network capacity and would be deterred from making such improvements.

One argument is based on congestion.  Managed service controls congestion, raising the value to users and allowing providers to capture some of this value with access fees.  This is a logical argument and one I will take up in a later post, but here I want to discuss another aspect of managed service:  price discrimination.

Enabling providers to limit access, say by bandwidth caps, opens the door to “tiered” service where users can buy additional bandwidth at higher prices.  This generally raises profits and so we should expect tiered service if net neutrality is abandoned.  What effect does the ability to price discriminate have on an ISP’s incentive to invest in capacity?

It can easily reduce that incentive and this undermines the industry argument against net neutrality.  Here is a simple example to illustrate why.  Suppose there is a small subset of users who have a high willingness to pay for additional bandwidth.  Under net neutrality, all users are charged the same price for access, and none have bandwidth restrictions.  An ISP then has only two choices.  Set a high price and sell only to the high-end users, or set a low price and sell to all users.  When the high-end users are relatively few, profits are maximized with low prices and wide access.  It is reasonable to think of this as describing the present situation.

Suppose tiered access is now allowed.  This gives the ISP a new range of pricing schemes.  The ISP can offer a low-price service plan with a bandwidth cap alongside a high-priced unrestricted plan.  As we vary the cap associated with the low-end plan, we can move along a continuum from no cap at all to a 100% cap.  These two extremes are equivalent to the two price systems available under net neutrality.

Often one of these in-between solutions will be more profitable than either of the two extremes.  The reason is simple.  The bandwidth cap makes the low-end plan less attractive to high-end users and as a result the ISP can raise the price of un-capped access to high-end users.  It’s true that low-end users will pay less for capped service but often the trade-off is favorable to the ISP and total profits increase.
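
Here is the trade-off in numbers. Everything below is invented for illustration: ten high-end users value unrestricted service at 100 and a capped tier at only 30, while ninety low-end users value either at 20.

```python
N_HIGH, N_LOW = 10, 90
V_HIGH_FULL, V_HIGH_CAPPED, V_LOW = 100, 30, 20

# Net neutrality: one unrestricted product, so only two candidate prices.
serve_all = (N_HIGH + N_LOW) * V_LOW     # price 20 -> profit 2000
serve_high = N_HIGH * V_HIGH_FULL        # price 100 -> profit 1000

# Tiered service: capped plan at 20 for the low end. The uncapped price can
# rise to 90 before high-end users prefer the cap (100 - 90 >= 30 - 20).
p_uncapped = V_HIGH_FULL - (V_HIGH_CAPPED - V_LOW)
tiered = N_LOW * V_LOW + N_HIGH * p_uncapped    # 1800 + 900 = 2700

print(serve_all, serve_high, tiered)
```

Tiering wins, yet ninety of the hundred users now face a cap: profit is up and delivered bandwidth is down.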

The upshot of this is that total bandwidth is lower, not higher, when an ISP unconstrained by net-neutrality uses the profit-maximizing tiered-service plan.  Couched in the industry’s usual terms, the ISP’s incentive to increase network capacity is in fact reduced by moving away from net neutrality.

(Of course it can just as easily go the other way. For example, it may be that presently only the high-end users are being served because to lower the price enough to attract the low-end users, the ISP would lose too much profit from the high end. In that case, allowing tiered service would induce the ISP to raise capacity and offer a capped service to previously excluded low-end users without significantly reducing profits from the high end. Note, however, that this is not typically how industry lobbyists frame their argument.)

Sarah Silverman wants to end world hunger (and get those disturbing images off her 48-inch plasma TV). Her solution: sell the Vatican and use the proceeds to feed the world.

This is good, but only second best.  The buyer who values the Vatican the most is in fact its current occupant.  So selling the Vatican would lower its value.  (It is true that the new owner knows he can sell it back to the Pope and takes this into account when deciding how much to offer.  However, this adds needless transaction costs, plus the Pope will have bargaining power in the resale which would not be internalized by the buyer.)

A better idea is to give the Vatican directly to the poor and allow them to charge the Pope rent. Subject to this small amendment, I wholly endorse the following (although the bit about the Holocaust may require the consent of more than just Ms Silverman and me).

Nobel Laureate Eric Maskin gives an extended interview at The Browser arguing that economic theory was indeed equipped to see and understand the roots of the financial crisis. It’s a unique interview because Eric picks 5 or so academic articles, discusses them in detail, and weaves together a story of the crisis based on these. The story has some standard ingredients: bank runs, moral hazard, liquidity crises, and contagion. He illustrates each of these with a specific paper. The story also has some non-standard ingredients, such as leverage cycles, described in a paper by Fostel and Geanakoplos.

The interview concludes thusly,

Q: So policymakers, especially people in Congress, need to read these papers.

A: Yes, or at least understand what’s in them. I think most of the pieces for understanding the current financial mess were in place well before the crisis occurred. If only they hadn’t been ignored. We’re not going to eliminate financial crises altogether, but we can certainly do a better job of preventing and containing them.

Highly recommended.

I read the Nobel “Scientific Background” to find out what her research is about.  Turns out a much better summary is the video here at EatMeDaily.

This prize is long overdue.  The theory of the firm is one of the big ideas in economics and as far as I can tell the Nobel committee was right to trace it back to Williamson.

A firm is a container for a bunch of highly idiosyncratic, repeated, informal transactions.  A great thought experiment is to wonder why these transactions are not conducted through a market, making the firm dissolve away.  Instead of giant firms building and selling cars, why aren’t there a bunch of tiny firms each doing a tiny part with all of their interaction governed by the market or by contract?  There are three main reasons why.

First, it’s costly to use the market. If the chassis firm is going to be buying axles from the axle firm all the time, it would save transaction costs by just integrating. Then it can “procure” axles with a memo.

Second, the contracts would be impossibly complicated and unwieldy.  Imagine writing a complete blueprint for the car, breaking it down into individual instructions for every actor who is supposed to contribute, laying out the timing when each party is supposed to arrive and do his part, describing payments as a function of the performance of each interdependent action, etc.  That is probably already impossible, but imagine you could do that.  Now suppose that a supply shock requires you to use different materials for the chassis.  This would require a coordinated change in many parts of the car, to keep structural integrity, balance, etc.  The entire volume of contracts would have to be re-written.

The third reason is the one that adds richness to the theory of the firm. Most of the transactions that occur within firms require parties to make investments that only make sense within the context of that specific firm. The party making the investment has little or no option to recover the value of the investment outside the firm. When the chassis firm contracts with the firm building auto bodies, it writes down minute specifications that must be met. The bodies produced could not be sold to any other chassis maker. The firm building auto bodies must invest in the machinery that can make these specific bodies. Once sunk, that investment has no value outside of the specific relationship with the chassis firm.

The reason this adds richness to the theory is that it explains which transactions must be encompassed within firms. Transactions that require relation-specific investments would be crippled if conducted across firm boundaries. Once the die is cast for building these specific auto bodies, the chassis firm has no reason to pay a price that compensates for the sunk cost because there is no outside option. This hold-up problem implies that the auto body firm has poor incentives to make the investment in the first place, unless it integrates with the chassis firm and becomes a claimant to the profits of the integrated firm.

I never heard of her.

Shiller tops the poll. Igal Hendel garnered 5 votes.

Since I am willing to pay $X that means my opportunity cost of not buying is -$X, thus my willingness to pay is indeed $X. That appears to be what Google CEO Eric Schmidt is saying in the following deposition transcript talking about Google paying X = $1.65 billion for YouTube, a $1 billion premium over what he estimated YouTube to be worth. From an article at CNET.

Baskin: So you orally communicated to your board during the course of the board meeting that you thought a more correct valuation for YouTube was $600 million to $700 million; is that what you said, sir?

Mancini objects to characterization of the testimony.

Schmidt: Again, to help you along, I believe that they were worth $600 million to $700 million.

Baskin: And am I correct that you were asking your board to approve an acquisition price of $1.65 billion; correct?

Schmidt: I did.

Mancini objects.

Baskin: I’m not very good at math, but I think that would be $1 billion or so more than you thought the company was, in fact, worth.

Mancini objects.

Schmidt: That is correct.

Later…

Baskin: Can you tell us what reasoning you explained?

Schmidt: Sure, this is a company with very little revenue, growing quickly with user adoption, growing much faster than Google Video, which was the product that Google had. And they had indicated to us that they would be sold, and we believed that there would be a competing offer–because of who Google was–paying much more than they were worth. In the deal dynamics, the price, remember, is not set by my judgment or by financial model or discounted cash flow. It’s set by what people are willing to pay. And we ultimately concluded that $1.65 billion included a premium for moving quickly and making sure that we could participate in the user success in YouTube.

Steve Levitt links to his paper with Sudhir Venkatesh documenting some stylized facts about street prostitution in Chicago.  It’s definitely worth a read, and one part is fodder for theory:

Prostitutes in their sample report using condoms 90  percent of the time, compared to only 25 percent in our sample for vaginal sex, and 21 percent for anal sex.  Among their Mexican prostitutes, condom use is the default from which customers must bargain away, potentially inducing large increases in prices.  In contrast, in our sample no condom appears to be the default choice, perhaps making it harder for the prostitute to credibly argue for a higher price if no condom is used.  Moreover, in an equilibrium in which condom use is infrequent, infection rates among prostitutes are likely to be extremely high, so that the primary value of condoms to women may be protecting the women from becoming pregnant and hygiene, rather than the spread of disease.  Indeed, one would expect that the johns would likely gain more in disease reduction from condoms than the prostitutes.

SOME DISCUSSION OF HOW CONDOM USE VARIES ACROSS PROSTITUTES IN OUR SAMPLE.  SOME QUOTES ABOUT WHY THEY DON’T USE THEM. SOME FACTS ABOUT AIDS RATES AMONG JOHNS AND PROSTITUTES FROM MEDICAL LITERATURE.

(hmmm, it appears they are not quite done with the paper 🙂 ) They focus on the cost to the prostitute due to increased infection and the like, but there are already some unusual aspects to the demand side.

A John values unprotected sex over protected sex, but even more so if he is the only John, or among very few, who gets that privilege. Holding fixed her frequency of unprotected sex, there is a downward-sloping demand for unprotected sex as a function of the price premium over condom-clad. But that frequency is not verifiable, except insofar as it can be inferred from the price. Thus, as an equilibrium response the demand curve itself shifts with adjustments to the price.

This means that the prostitute cannot just choose any price.  The price must be such that x% of Johns are willing to pay that price when they assume that x% of other Johns are having unprotected sex.  Typically there will be just a few values of x that satisfy this fixed-point relationship.
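
One way to make the fixed point concrete, under an assumed specification: a John with taste v (uniform on [0,1]) values the privilege at v(1 − x) when he believes a fraction x of other Johns share it, so at premium p the willing fraction is 1 − p/(1 − x). With this simple form the consistent x happens to be unique for each price; steeper or S-shaped specifications can deliver the handful of self-consistent prices described next.

```python
import numpy as np

def willing(x, p):
    # fraction of Johns willing to pay premium p when they believe a
    # fraction x of other Johns are getting unprotected sex
    return np.clip(1 - p / (1 - x), 0.0, 1.0)

xs = np.linspace(0.0, 0.99, 991)
for p in (0.1, 0.2, 0.4):
    x_star = xs[np.abs(willing(xs, p) - xs).argmin()]   # solve x = willing(x, p)
    print(f"premium {p:.2f}: consistent share {x_star:.2f}, revenue {p * x_star:.3f}")
```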

So a cross-section of pricing patterns will exhibit a bang-bang (quiet down Beavis) or bi-modal (Beavis!) histogram with high prices and low prices and none in-between. The high prices correspond to the equilibria in which few Johns have unprotected sex so Johns are willing to pay a lot, and the low prices correspond to the equilibria in which many Johns have unprotected sex and Johns place lower value on it.

It could even happen that the price premium is for protected sex.  In fact it could even be profit maximizing to distort downward the price of unprotected sex in order to signal how risky that would be, enabling the prostitute to raise the price of protected sex.

Read about it in the Wall Street Journal.

Many of his papers have been highly theoretical works focusing on imperfections in financial markets. “He’s probably the most abstract thinker ever to head a Federal Reserve bank,” said Robert Lucas, a Nobel Prize-winning economist who is serving as a consultant to the Minneapolis Fed.

Mr. Kocherlakota’s colleagues say he is a pragmatic person who is hard to identify fully with any one camp.

“He believes in the freshwater world, but he’s not that radical,” says Luigi Pistaferri, a frequent co-author with whom Mr. Kocherlakota worked for three years at Stanford University. “He agrees that there are market failures, and his attitude is, ‘How do we make the best of a world in which there are such failures?’ “

I once took Narayana to see The Bad Plus in Minneapolis on a visit there. Narayana is Canadian, I believe, and that night they busted out Tom Sawyer. I don’t think he was all that into it.

I assume this means we will need a new macro co-editor at Theoretical Economics.  Volunteers?

Iceland is seeing a small baby boom.

The Icelandic press buzzed with the good news. One article quoted a midwife in the town of Húsavik who noted a bump in births in June and July — an auspicious nine months after the worst of Iceland’s meltdown. Wrote blogger Alda Sigmundsdóttir: “I think many, many of us must have sought solace in love and sex and all that good stuff.”

Italians too, and condom sales were brisk at the low point of the recession in the US. But the historical pattern has been procyclical procreation.*

“total fertility” — roughly, the average number of children per woman during her childbearing years — was 2.53 in 1929 and had slid to 2.15 by 1936. Then came the baby boom of postwar prosperity: The birth rate crossed 3 in 1947 and remained above that threshold until the mid-1960s. The next trough, 1.74, came in 1976 — a year earlier, unemployment had hit a postwar peak of 8.5%.

The article is in the Wall Street Journal.

__________

*The pun involving “hump” is an exercise left to the reader.

The most important development in the way we interact on the web will come when a system of micropayments is in place.  The big difficulties are coordination problems and security.  The strongest incentive to build and control a massive social network is that it will enable Facebook to host a micropayments economy within its closed environment, solving both the coordination problem and a big part of the security problem.

Here’s the future of Facebook.  You will subscribe to your friends.  A subscription costs you a flow of micropayments.  Your friends will include the likes of Tyler Cowen, The Wall Street Journal, gmail, Jay-Z, Harry Potter and the Deathly Hallows, etc.

Remember that the next time you hear somebody say that there is no way to monetize Facebook or Twitter.

If you are the owner of a large enterprise and are ready to retire, what do you do? Sell to the highest bidder. Before selling, do you want to split your firm into competing divisions and sell them off separately? No, because that would introduce competition, reduce market power, and lower the bids, so the sum total is lower than what you would get for the monopoly. Searle, the drug company, sold itself off to Monsanto as one unit.

Miguel Angel Felix Gallardo, the Godfather of the Mexican illegal drug industry, lived a peaceful life as a rich monopolist. Then he was caught in 1989 and decided to sell off his business. In principle, Gallardo should sell off a monopoly just like Searle. But he did not (see the end of the article). The difference is that property rights are well defined in a legal business, so Searle belongs to Monsanto. But Gallardo can’t commit not to sell the same thing twice as property rights are not well-defined. There is also considerable secrecy so it’s hard to know if the territory you are buying was already sold to someone else before. And after you’ve sold one bit for a surplus, you have the incentive to sell off another chunk, since you ignore the negative impact of this on the first buyer.

The result is that selling illegal drug turf produces a more competitive market than the ex ante ideal. As the business is illegal anyhow, all the gangs can shoot it out to capture someone else’s territory. Exactly what’s happening now.

The centerpiece of Greg Mankiw’s column in the New York Times is this paragraph about the little white pill he takes every day:

Not long ago, I read that a physician estimated that statins cost $150,000 for each year of life saved. That approximate figure reflects not only the dollars patients and insurance companies spend on the treatment but also — and just as important — an estimate of how effective it is in prolonging life. (That number is for men. Women have a lower risk of heart disease.)

Mankiw used the word cost but I would bet that what he is referring to is price. With monopolized drugs and dysfunctional health insurance there is a huge difference between price and cost. And with this in mind, Mankiw’s column completely misses the real economic problem exemplified by his pills.

We all work for Google now. Previous posts on reCAPTCHA here and here. beanie bow: Lance Fortnow.