Compared to non-fiction, co-authored fiction is rare.  Co-authorships leverage specialization.  Certainly there are heterogeneous strengths in fiction writing, and this should create gains from collaboration.  But we don’t see it.  I can’t think of any great work of fiction that was co-authored.  There must be a good reason.

  1. Writing style is crucial in fiction.  Multiple voices would make the work feel disjointed.  They could try to collaborate on the writing process and together create one voice but maybe this puts too much of a drag on the creative process.
  2. Still, there are some who are good at imagining plots and characters and others who excel at the stage of actually writing once the idea has been conceived.  Why don’t we see this kind of partnership?
  3. I bet there are great partnerships like this but we never know it because the partners agree to a single nom de plume.

My bottom line is that, ironically, the attraction of great fiction is a connection with the author.  When we read beautiful prose or get turned on by an ingenious plot twist, we think of the author and we enjoy being close to the mind that created it.  Multiple authors would confuse and dilute this feeling.

Jonah Lehrer illustrates a common misunderstanding of (im)probability.  He writes:

It’s been a hotly debated scientific question for decades: was Joe DiMaggio’s 56-game hitting streak a genuine statistical outlier, or is it an expected statistical aberration, given the long history of major league baseball?

He is referring to the observation that 56-game hitting streaks, while intuitively improbable, will nevertheless happen when the game has been around for long enough.  Does this make it less of a feat?

  1. Say I have a monkey banging on a keyboard.  Take any sequence of letters.  The chance that the monkey will bang out that particular sequence is impossibly small.  But one sequence will be produced.  When we see that sequence produced, do we change our minds and say that’s not so surprising after all because there was certain to be one unlikely sequence produced?  No.  Similarly, the chance that somebody will hit safely in 56 straight games could be high, but the chance that it will be player X is small.  Indeed, that probability is equal to the probability that player X is the greatest streak hitter ever to play the game.  So if X turns out to be Joe DiMaggio then we conclude that Joe DiMaggio indeed accomplished quite a feat.
  2. We might be asking a different question.  We grant that DiMaggio achieved the highly improbable and hit for the longest streak of any player in history, but we ask whether 56 is really all that long?  After all, he didn’t hit for 57, which is even less likely.  To address this question we might ask, on average, how many players “should” hit safely in 56 straight games in the time that the game has been around?  But this question is very easy to answer.  Our best estimate of the expected number of players to hit 56-game streaks is 1, the actual number.  (Because the number is close to zero, this estimate is noisy, but this is still the best estimate without making any assumptions about the underlying distribution.)
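The two points above can be made concrete with a quick simulation: any *particular* player's chance of a 56-game streak is tiny, even when a long streak by *somebody* is not so shocking.  The batting numbers and career length below are rough assumptions for illustration, not DiMaggio's actual statistics.

```python
import random

def longest_streak(p_hit_per_game, n_games, rng):
    """Longest run of consecutive games with at least one hit."""
    best = run = 0
    for _ in range(n_games):
        if rng.random() < p_hit_per_game:
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

# Rough illustration: a .350 hitter with ~4 at-bats per game gets at
# least one hit in a given game with probability 1 - (1 - .350)**4.
p_game = 1 - (1 - 0.350) ** 4

rng = random.Random(0)
trials = 2000
career_games = 2000  # order-of-magnitude assumption for a long career

hits_56 = sum(
    longest_streak(p_game, career_games, rng) >= 56 for _ in range(trials)
)
print(f"fraction of simulated careers with a 56-game streak: "
      f"{hits_56 / trials:.4f}")
```

Even for an excellent hitter the per-career probability comes out well under one percent, which is why the streak looks like a feat for the individual even if, across thousands of player-careers, one such streak was bound to happen.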

Should we be more scared of North Korea after their recent nuclear tests? Kim Byung-Yeon and Gerard Roland say “No”!  They study the impact of major events related to North Korea (e.g.  the conduct of the nuclear test in 2006) on South Korean financial markets and conclude:

“We found basically no effects on financial markets of events perceived as increasing the tension on the Korean peninsula. In a nutshell, the financial markets in South Korea are not afraid of Kim Jong-Il.”

They use “event study methodology,” which is typically used in finance to study mergers.  Prices before and after the merger are used to estimate its impact on value.  The key is getting the date of the event right.  If you get it too late, for example, the price already incorporates the event.  For example, if the South Korean markets had already incorporated the “news” of the North Korean test, they would not react significantly to the actual test.  Not sure how the authors deal with this issue.  In any case, their approach is a highly original attempt to apply economic methods to foreign policy.
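A stripped-down version of the methodology looks like this: estimate a “normal” return from a pre-event period, then measure how far returns around the event deviate from it.  The data, event date, and window below are entirely synthetic, chosen only to illustrate the mechanics.

```python
import random
import statistics

def abnormal_returns(returns, event_day, window=3):
    """Event-study sketch: abnormal return = actual return minus the
    average return over a pre-event estimation period."""
    estimation = returns[: event_day - window]   # pre-event benchmark
    normal = statistics.mean(estimation)
    event_window = returns[event_day - window : event_day + window + 1]
    return [r - normal for r in event_window]

# Synthetic daily returns with a one-day drop on the "event" date.
rng = random.Random(1)
rets = [rng.gauss(0.0005, 0.01) for _ in range(120)]
event_day = 100
rets[event_day] -= 0.05  # hypothetical market reaction to the event

ars = abnormal_returns(rets, event_day)
car = sum(ars)  # cumulative abnormal return around the event
print(f"cumulative abnormal return over the event window: {car:.4f}")
```

The timing problem in the post shows up here directly: if the drop had leaked into prices before `event_day`, it would contaminate the estimation period and the measured abnormal return would shrink toward zero.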

In case you have not been following the catfight, let me get you up to speed.  Chris Anderson wrote a book called Free.  I haven’t read it, but it apparently says “all your ideas are belong to us” because the price of ideas is crashing to zero.  Malcolm Gladwell says “please don’t let my employer read that”…I mean, “No it’s not.”

Let’s have a model.  There are tiny ideas and big ideas.  The tiny ideas are more like facts, or observations or experiences.  They are costless to produce but costly to communicate.  They are highly decentralized in that everybody produces their own heterogeneous tiny ideas.  The big ideas are assembled from a large quantity of tiny ideas.  Different people have different production technologies for producing big ideas from small ones.  These could differ just in cost, or also in the quality of the big ideas produced; this changes the story a little but doesn’t change the economics.

Start with a world where the marginal cost of communicating a tiny idea to another individual is large.  Then the equilibrium market structure has big-idea producers who incur the high cost of acquiring tiny ideas, assemble them into big ideas and communicate the big ideas to the masses for a price.  This market structure sustains high prices for big ideas and sustains entry by big-idea specialists.

Now suppose the marginal cost of communicating the tiny ideas shrinks to zero.  Then an alternative for end users is to assemble their own big ideas for their own consumption out of the tiny ideas they acquire themselves for close to nothing.  The cost disadvantage that the typical end user has is compensated by his ability to customize his palette of tiny ideas and resulting big ideas to complement his idiosyncratic endowment of other ideas, tastes, etc.   The price of big ideas crashes.  Former producers of big ideas exit the market.  This is all efficient.

An important implication of this model is that the products that Anderson expects to be free are not the products Gladwell produces.  So when Gladwell says that this is absurd because the economics do not support big ideas being sold at a price of zero, he is right.  But that is because the big ideas are not being sold at all, and this is all efficient.

Our colleagues, Eran Shmaya and Rakesh Vohra have started a blog, The Leisure of the Theory Class.  Only three posts so far, but it promises to be a feast of Gale-Stewart games, and gossip.  I look forward to them making fun of Sandeep too.

When you check in with United Airlines at a self-service kiosk, they make a series of offers where you can upgrade to Economy Plus, get an aisle seat, etc.

Why do they do this at the last minute?   They are already experts at price discrimination, but this usually occurs when you buy a ticket, using advance purchase requirements, cancellation fees, etc. to separate out buyers with different willingness to pay.   You could do the same with aisle seats and Economy Plus.

Implementing price discrimination at the last minute helps you respond to information, e.g. how full the plane is, which you only have available at that time.  Is that the only advantage?

There is a possible disadvantage for the airlines: price discrimination at the time of purchase is transparent to other airlines.  They can coordinate on the pricing.  But last-minute pricing is opaque.  Maybe United even cuts the advance-purchase fare and assigns only interior seats, getting surplus back via an “add-on” price for an aisle seat.  This might trigger a price war.

On the other hand, “add-ons” are very effective for extracting surplus at hotels (mini-bars etc.) and for credit card companies (late fees etc.).  Is this what airlines are hoping?

Not Exactly Rocket Science describes an experiment in which vervet monkeys are observed to trade grooming favors for fruit.  At first one of the monkeys had an exclusive endowment of fruit and earned a monopoly price.  Next, competition was introduced.  The endowment was now equally divided between two duopolist monkeys and as a result the price in terms of willingness-to-groom dropped.

Now, were the monkeys playing Cournot (marginal cost equals residual marginal revenue) or Bertrand (price equals marginal cost)?  (The marginal cost of trading an apple for a grooming session is the opportunity cost of not eating it.)  We need another treatment with three sellers to know.  If the price falls even further then it’s Cournot.  In Bertrand the price hits the competitive point already with just two.
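To see why the three-seller treatment distinguishes the two models, take linear demand P = a - Q and a constant marginal cost c (the numbers below are arbitrary) and compare the predicted prices as the number of sellers grows.

```python
def cournot_price(n, a=10.0, c=2.0):
    """Cournot equilibrium price with linear demand P = a - Q and n
    symmetric sellers, each with constant marginal cost c.  The
    symmetric first-order conditions give q_i = (a - c)/(n + 1), so
    total output is Q = n(a - c)/(n + 1) and price is:"""
    return (a + n * c) / (n + 1)

def bertrand_price(n, a=10.0, c=2.0):
    """Bertrand price: monopoly pricing with one seller, but
    undercutting drives price to marginal cost with two or more."""
    return (a + c) / 2 if n == 1 else c

for n in (1, 2, 3):
    print(n, round(cournot_price(n), 3), bertrand_price(n))
```

Under Cournot the price keeps falling as sellers are added (6, then 4.67, then 4 in this example), while under Bertrand it drops to marginal cost at two sellers and stays there.  So a further price drop from two to three monkeys points to Cournot.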

Intermediate micro question:  Can Monkey #1 increase his profits by buying the apples from Monkey #2 at the equilibrium price and then acting as a monopolist?

At the blog Everything Finance, Jonathan Parker breaks down the implications of the State of California issuing IOUs to roll over its debts, essentially creating a new currency whose value is pegged to the US Dollar.  He makes a number of interesting points including the observation that since California cannot print Dollars, and cannot issue (conventional) debt, the IOUs place the State in a predicament reminiscent of financially-distressed countries having to defend a pegged exchange rate.

And unfortunately, the history of fixed exchange rates in practice includes lots and lots of these effective defaults.  Governments that can issue these i.o.u.’s and have trouble balancing budgets tend to issue a greater value of their currencies than they have the will or ability to maintain.  And default follows.

Prior to “maturity” will these IOUs trade at some market price reflecting the probability of default?  One question is whether banks will be interested in buying IOUs, offering liquidity in return for the asset and a premium.  The strategic issue is whether politically the State will find it more or less attractive to default if the IOUs are still largely held by private citizens, or instead mostly by banks.

My guess is that, in a crisis, a small number of banks would more effectively pressure the State to meet its obligations than if IOU holdings were less concentrated.  If so, then I would expect banks to be buying IOUs at a steep discount.  But does this create a Grossman-Hart style free-rider problem analogous to tendering shares in takeover bids?

As Jeff pointed out in an earlier post, David Levine thinks rational choice theory is remarkably successful and that behavioral economics may be doomed.  This message has made it into experiments and the New York Times:

[S]uppose, instead of scanning people’s brains as they’re sipping wine in a laboratory, you tested them in a more realistic situation: a restaurant where they’re spending their own money. That challenge was undertaken at an upscale restaurant in Tel Aviv by two behavioral economists, Ori Heffetz of Cornell and Moses Shayo of the Hebrew University of Jerusalem, who expected to be able to manipulate diners’ choices by changing the prices on the menu.

Unbeknownst to the diners or to their waiters, the economists monitored the choices of people who ordered from the prix fixe menu. The three-course meal included a choice of five entrees: shrimp gnocchi, pork shank, red mullet fillet, sausage or stuffed artichoke.

Next to each of these entrees on the menu, in parentheses, was what it would cost to order that entree from the à la carte menu. These prices didn’t affect the cost of the prix fixe meal, which was the equivalent of $30 no matter what the entree, but the researchers expected just the sight of the prices to make a difference. If the mullet were listed at $20 and the other entrees were $17, more people would presumably be enticed into ordering the seemingly more valuable fish.

But after three months of testing various combinations of prices, the researchers found they couldn’t sway the customers. Putting a higher price on the shrimp or any other entree didn’t make people more likely to order it.

This same stubbornly independent streak was manifest in another food experiment by the same researchers. This time they let people sample two kinds of candies — peanut butter bars and caramels — and varied the sticker prices for each one.

Superficially, the manipulation seemed to work, because people said they would be willing to pay more for a candy if it had a higher sticker price, but that was just in answer to a hypothetical question. When people were given a chance to pick a bag of candy to take home, they pretty much ignored the sticker prices and chose what they liked.

Why weren’t people duped into favoring the high-priced candies and entrees? Why did they follow their own tastes?

“Maybe, sometimes, old-fashioned economics is just about right,” Dr. Shayo says. “Maybe when it comes to food, people do have reasonably stable preferences. Some people like shrimp and some don’t, even if it’s worth a lot of money.”

Interestingly, the results also back up another hobbyhorse of economists: experiments with real payoffs give very different results from those relying on answers to hypothetical questions.  As economic decisions involve real payoffs, it’s the results with real consequences that better predict what decision-makers will do when faced with real decisions.  Economists insist that research papers with experiments use monetary rewards.  I always wondered if this really mattered – perhaps it does.

Apparently we have arrived at the long run and we are not dead.

Do you remember the Microsoft anti-trust case?  The anti-trust division of the US Department of Justice sought the breakup of Microsoft for anti-competitive practices mostly centering around integrating Internet Explorer into the Windows operating system.  In fact, an initial ruling found Microsoft in violation of an agreement not to tie new software products into Windows and mandated a breakup, separating the operating systems business from the software applications business.  This ruling was overturned on appeal and eventually the case was settled with an agreement that imposed no further restrictions on Microsoft’s ability to bundle software but did require Microsoft to share APIs with third-party developers for a 5 year period.

Today, all of the players in that case are mostly irrelevant.  AOL, Netscape, Red Hat.  Java.  Indeed, Microsoft itself is close to irrelevance in the sense that any attempt today at exploiting its operating system market power to extend its monopoly would cause at most a short-run adjustment period before it would be ignored.

Microsoft was arguing at the time that it was constantly innovating to maintain its market position and it was impossible to predict from where the next threat to its dominance would appear.  Whether or not the first part of their claim was true, the second part certainly turned out to be so.  It is hard to see a credible case that the Microsoft anti-trust investigation, trial, and settlement played anything more than a negligible role in bringing us to this point.  Indeed the considerations there, focusing on the internals of the operating system and contracts with hardware manufacturers, are orthogonal to developments in the market since then.  The operating system is a client and today clients are perfect substitutes.  The rents go to servers and servers live on the internet unconstrained by any “platform” or “network effects”, indeed creating their own.

The lesson of this experience is that in a rapidly changing landscape, intervention can wait.  Even intervention that looks urgent at the time.  Almost certainly the unexpected will happen that will change everything.

I read news mostly through an rss reader.  The Wall Street Journal syndicates only short excerpts of their articles and if I click through I get a truncated version of the article followed by a friendly invitation to subscribe to the journal in order to view the rest of the article.  It looks like this.

But it’s not hard to get the full text of the article.  I just use google and type in the title of the article.  The first link I get is a link to the full text, no subscription required.  I always explained this to myself using a simple market-segmentation idea.  WSJ will not give their content away to someone who is browsing their site directly because that person has revealed a high value for WSJ content.  Someone who is googling has revealed that they are looking for relevant content, without regard to source.  There is more competition for such a user so the price is lower.

But today I noticed that bing, Microsoft’s new search engine, does not get the same special treatment.  If I bing “At Chicken Plant, A Recession Battle,” the link provided leads to the same truncated article as my rss reader.  Since users have free entry across search platforms I can’t see any reason why bing-searchers (bingers?) would be systematically different than googlers in terms of the economics above.  Therefore I am giving up on my theory.  What are the alternatives?

  1. Google has a contract with WSJ?
  2. WSJ would like to shut out googlers too but finds it hard to shut off a service that users have come to expect. Knowing this, they are keeping bingers out from the outset.
  3. The game between content providers has multiple equilibria.  On the google platform they are playing the users’ preferred equilibrium.  On the bing platform they have coordinated on their preferred equilibrium.
  4. Google has figured out a secret back-door that bing hasn’t found and WSJ just hasn’t gotten around to closing.

Ok, the ideas are getting more and more lame.  I am stumped.

Incidentally, there was an article in the New York Times about DOJ investigations of Google, and a Google PR offensive:

“Competition is a click away,” Mr. Wagner says. It’s part of a stump speech he has given in Silicon Valley, New York and Washington for the last few months to reporters, legal scholars, Congressional staff members, industry groups and anybody else who might influence public opinion about Google.

“We are in an industry that is subject to disruption and we can’t take anything for granted,” he adds.

Rings a bell.

I collect kludges.  It’s an especially welcome addition to the collection when it involves a tasty snack:

DunceCap Doff:  There I Fixed It.

I’ve been on Capri for a week for work.   Here are some impressions largely of Anacapri.

Hotel

We stayed at the Casamariantonia.  There were four of us so we got a suite.  It’s pretty pricey but actually cheaper than the hotels.  The hotel is family-owned and they are pretty helpful – the father walked with us part of the way to show directions to a rustic path from Anacapri to the Blue Grotto.  The grandmother makes fresh tarts for the breakfast buffet. Our room was nice and had a small kitchen.  There’s a grocery store opposite so you can cook if you want to.   There’s a balcony where you can hang out and good air-conditioning.  Two downsides: no swimming pool (they are waiting for a permit) and no WiFi in rooms (you have to go downstairs to the lobby).  This is the main reason for my lack of posts!

Restaurants

Our favorite by far was Da Gelsomina (photo was taken there).  They have a swimming pool too.  There is a hefty charge (incl. sunbeds, towels, umbrella as well as entrance) but you get a discount if you eat there and/or stay at Casamariantonia.  There is the usual Capri fare, ravioli caprese, insalata caprese, pennette aumm aumm etc, and it’s all done very well.  There are also dishes you do not find elsewhere (e.g. gnocchi with gorgonzola and arugula), great fried stuff as an appetizer.  They make their own organic wines which are delicious.  Down the road from the restaurant there are two spots with amazing views of the lighthouse and the Faraglioni, three dramatic rocks in the ocean.  They also have rooms.  It’s a bit out of the loop at the top of a hill but they have a free bus service to drop you off and pick you up in downtown Anacapri.   Might try it next time.

At 1.4 Euros to the dollar, costs mount up.  Pizza is a good standby to tighten the belt.  Ristorante Arcate does good pizza.  Trattoria Il Solitario does pizza and also some original pastas (e.g. paccheri with lardo and fava beans).

What to do

1. Capri Walk up to Villa Jovis, Emperor Tiberius’s old home.  Now in a state of decay.  Great views.  Walk back into town and eat at Bar Jovis or at Da Gemma, Graham Greene’s favorite restaurant with great views over the mountain and sea.  There’s a bunch of chi-chi shops if you are into that kind of thing.

2. Boat trip Splash out for the personal boat ride (around €50 more than the sardine-can version).  How else would you ride through the hole in the central Faraglione?

3. Walk to Grotta Azzurra Take nice old pedestrian walk from Anacapri, not the main road.  If you get lost you can find the main road.

4. Hike: There is a great walk along the sea from one pirate watchtower to another (not suitable for young kids).

Things to watch out for: Chair lift up to top of Monte Solaro has individual seats – not good for kids.  Grotta Azzurra closes when sea is choppy and there can be a long wait.  Go either before tourist hordes arrive from Naples or after they leave.  Incidentally, Naples is a bit overwhelming.  It feels like Bombay.  So be prepared!

Top chess players, until recently, held their own against even the most powerful chess playing computers.  These machines could calculate far deeper than their human opponents and yet the humans claimed an advantage:  intuition.  A computer searches a huge number of positions and then finds the best.  For an experienced human chess player, the good moves “suggest themselves.”  How that is possible is presumably a very important mystery, but I wonder how one could demonstrate that qualitatively the thought process is different.

Having been somewhat obsessed recently with Scrabble, I thought of the following experiment.  Suppose we write a computer program that tries to create words from Scrabble tiles using a simple brute-force method.  The computer has a database of words.  It randomly combines letters, checks whether the result is in its database, and outputs the most valuable word it can identify in a fixed length of time.  Now consider a contest between two computers programmed in the same way which differ only in the size of their database, the first knowing a subset of the words known by the second.  The task is to come up with the best word from a fixed number of tiles.  Clearly the second would do better, but I am interested in how the advantage varies with the number of tiles. Presumably, the more tiles the greater the advantage.

I want to compare this with an analogous contest between a human and a computer to measure how much faster a superior human’s advantage increases in the number of tiles.  Take a human Scrabble player with a large vocabulary and have him play the same game against a fast computer with a small vocabulary.  My guess is that the human’s advantage (which could be negative for a small number of tiles) will increase in the number of tiles, and faster than the stronger computer’s advantage increases in the computer-vs-computer scenario.

Now there may be many reasons for this, but what I am trying to get at is this.  With many tiles, brute-force search quickly plateaus in terms of effectiveness because the additional tiles act as noise making it harder for the computer to find a word in its database.  But when humans construct words, the words “suggest themselves” and increasing the number of tiles facilitates this (or at least hinders it more slowly than it hinders brute-force.)
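The computer-vs-computer half of the experiment is easy to prototype.  Below is a toy brute-force player: the dictionary, the tiles, and the letter values are all made up for illustration, and real Scrabble programs are far more sophisticated, but it shows the mechanics of exhaustively trying orderings of tile subsets against two databases of different sizes.

```python
from itertools import permutations

# Toy letter values (a made-up subset, roughly Scrabble-like).
VALUES = {"a": 1, "e": 1, "r": 1, "t": 1, "s": 1, "d": 2, "c": 3}

def word_score(word):
    return sum(VALUES[ch] for ch in word)

def best_word(tiles, dictionary):
    """Brute force: try every ordering of every subset of the tiles
    and keep the highest-scoring word found in the dictionary."""
    best, best_score = None, 0
    for k in range(2, len(tiles) + 1):
        for perm in permutations(tiles, k):
            w = "".join(perm)
            if w in dictionary and word_score(w) > best_score:
                best, best_score = w, word_score(w)
    return best, best_score

small = {"rat", "tar", "art", "ear", "eat"}          # limited database
large = small | {"rate", "tear", "cart", "trace", "crated"}

tiles = list("cartde")
print("small dictionary:", best_word(tiles, small))
print("large dictionary:", best_word(tiles, large))
```

Note how the cost explodes with the rack: with n tiles the loop examines on the order of n! orderings, which is exactly the plateau-with-noise effect conjectured below for brute-force search as tiles are added.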

We will take a first glimpse at applying game theory to confront the incentive problem and understand the design of efficient mechanisms.  The simplest starting point is the efficient allocation of a single object.  In this lecture we look at efficient auctions.  I start with a straw-man:  the first-price sealed bid auction.  This is intended to provoke discussion and get the class to think about the strategic issues bidders face in an auction.  The discussion reaches the conclusion that there is no dominant strategy in a first-price auction and it is hard to predict bidders’ behavior.  For this reason it is easy to imagine a bidder with a high value being outbid by a bidder with a low value and this is inefficient.

The key problem with the first-price auction is that bidders have an incentive to bid less than their value to minimize their payment, but this creates a tricky trade-off as lower bids also mean an increased chance of losing altogether.  With this observation we turn to the second-price auction, which clearly removes this trade-off.  On the other hand it seems crazy on its face:  if bidders don’t have to put their money where their mouths are, won’t they now want to go in the other direction and raise their bids above their values?

We prove that it is a dominant strategy to bid your value in a second-price auction and that the auction is therefore an efficient mechanism in this setting.
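The dominance result is also easy to check numerically.  In the sketch below my value and the opponents' bid distribution are arbitrary assumptions; the point is that, draw by draw, no deviation from truthful bidding ever earns more.

```python
import random

def second_price_payoff(my_bid, my_value, others):
    """Sealed-bid second-price auction: win with the highest bid,
    pay the highest competing bid."""
    top_other = max(others)
    if my_bid > top_other:
        return my_value - top_other
    return 0.0

rng = random.Random(0)
my_value = 0.6
# 10,000 auctions against three opponents with arbitrary uniform bids.
draws = [[rng.random() for _ in range(3)] for _ in range(10000)]

def avg_payoff(bid):
    return sum(second_price_payoff(bid, my_value, d) for d in draws) / len(draws)

truthful = avg_payoff(my_value)
for deviation in (0.3, 0.5, 0.7, 0.9):
    # Overbidding sometimes wins at a price above value; underbidding
    # sometimes forgoes a profitable win.  Neither can beat truth-telling.
    assert avg_payoff(deviation) <= truthful + 1e-12
print("truthful bidding is weakly best; avg payoff:", round(truthful, 4))
```

The comparison is pointwise, not just on average: in any single auction, overbidding only adds wins at prices above value, and underbidding only removes profitable wins, which is the substance of the dominance proof.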

Next we explore some of the limitations of this result.  We look at externalities:  it matters not just whether I get the good, but also who else gets it in the event that I don’t.  We see that a second-price auction is not efficient anymore.  And we look at a setting with common values:  information about the object’s value is dispersed among the bidders.

For the common-value setting I do a classroom experiment where I auction an unknown amount of cash.  The amount up for sale is equal to the average of the numbers on 10 cards that I have handed out to 10 volunteers.  Each volunteer sees only his own card and then bids.  If the experiment works (it doesn’t always work) then we should see the winner’s curse in action:  the winner will typically be the person holding the highest number, and bidding something close to that number will lose money as the average is certainly lower.
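A simulation of the classroom experiment makes the curse vivid.  The card distribution and the naive bidding rule below are assumptions chosen for illustration; the point is that the winner, who holds the highest card, systematically overpays.

```python
import random

rng = random.Random(0)

def run_auction(n_bidders=10, rounds=5000):
    """Each bidder sees one card (a signal); the prize is the average
    of all the cards.  Suppose each bidder naively bids a fixed
    fraction of his own card and the winner pays his bid.  Track the
    winner's average profit."""
    profit = 0.0
    for _ in range(rounds):
        cards = [rng.uniform(0, 100) for _ in range(n_bidders)]
        value = sum(cards) / n_bidders
        bids = [0.9 * c for c in cards]     # naive shading assumption
        winner = max(range(n_bidders), key=lambda i: bids[i])
        profit += value - bids[winner]      # winner pays his own bid
    return profit / rounds

print("winner's average profit:", round(run_auction(), 2))
```

The average profit comes out sharply negative: the highest of ten cards far exceeds the average of all ten, so winning is bad news about the prize unless bidders shade their bids much more aggressively.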

Here are the slides.

(I got the idea from the winner’s curse experiment from Ben Polak, who auctions a jar of coins in his game theory class at Yale.  Here is a video. Here is the full set of Ben Polak’s game theory lectures on video.  They are really outstanding.  Northwestern should have a program like this.  All Universities should.)

Wine and movies have a lot in common.  They are both worldwide markets for highly differentiated products with critics who are visible and economically important.  But while there are as many film critics as there are films and opinions about films, there are just a handful of highly influential wine critics, Robert Parker’s Wine Advocate, The Wine Spectator, and a few others.  This is somewhat counterintuitive because there are many, many more wines than films.  Here are a few thoughts.

  1. People know their taste in movies better than they know their taste in wine.  This makes it easier to find idiosyncratic movie critics that have similar tastes.  Critics with similar tastes face an entry barrier in the wine world.
  2. All wines taste the same and the role of a critic is just to tell you which wines you are supposed to like and which wines you can brag about drinking.  This creates a natural oligopoly among the wine critics who the market coordinates on.
  3. Wines are given as gifts and movies are not. This means that wine critics are rewarded for reflecting general rather than specialized tastes.
  4. A very small fraction of wines are good and wine criticism just means tasting thousands of wines until you find the good ones.  This creates increasing returns to scale in wine criticism, another source of natural monopoly power.
  5. The movie business is less competitive so a blockbuster film earns more rents and as a result there is more rent seeking, especially in marketing.  Thus the emergence of David Manning.  There is no analogous force behind “The feel good wine of the year!”
  6. Wine critics provide a service for wine-makers, film critics are serving film-goers.  What makes a good wine critic is the ability to articulate what wine buyers will buy.  Whoever is best at this will dominate.

Cynics believe some version of 6 and 2 (Parkerization.)  I don’t understand why 5 wouldn’t be the same for wine and film; maybe this is just a matter of time.  4 may be true in the mid-range but whether this matters depends on whether you think wine critics are really influential here or rather at the high end where there are relatively few consistent performers.  I lean toward 1, Gary Vaynerchuk notwithstanding, which is a less cynical version of 6.

I believe that the study referred to in this CNN piece is pure noise.  (Don’t bother watching it.  Bottom line:  1 in 5 teens admits to “sexting.”)  But that doesn’t mean that it carries no information.  The mere fact that this claim would be repeated, at the expense of the marginal piece of news, turns pure noise into information.

Evolutionary Psychology and, increasingly, behavioral economics spin a lot of intriguing stories explaining foibles and otherwise mysterious behaviors as the byproduct of various tricks nature utilizes to get us to do her bidding.  I am on record in this blog as being a fan of this methodology.  But I also maintain a healthy skepticism and not just at the tendency to concoct “just-so” stories that often ask us to reformulate our theories of huge chunks of evolutionary history just to explain some nano-economic peculiarity.

Instead, when evaluating some theory of how emotions have evolved to induce us to behave in certain ways, skepticism should be aimed squarely at the basic premise.  The theory must come with a convincing explanation why nature would rely on a blunt instrument like emotions as opposed to all of the other tools at her disposal.  These questions seemed especially pressing when I read the following article about depression as a tool to blunt ambitions:

Dr Nesse’s hypothesis is that, as pain stops you doing damaging physical things, so low mood stops you doing damaging mental ones—in particular, pursuing unreachable goals. Pursuing such goals is a waste of energy and resources. Therefore, he argues, there is likely to be an evolved mechanism that identifies certain goals as unattainable and inhibits their pursuit—and he believes that low mood is at least part of that mechanism.

Why not a simpler mechanism:  just have us figure out that the goal is unattainable and (happily) go do something else? Don’t answer by saying that this emotional incentive mechanism evolved before our brains were advanced enough to do the calculation because the existence of an emotional response indicating the right course of action presupposes that this calculation is being made somewhere in the system.

Even granting that nature finds it convenient to do the calculation sub-(or un-)consciously and then communicate only the results to us, why use emotions?  Plants respond to incentives in the environment and they don’t need emotions to do it, presumably they are just programmed to change their “behavior” when conditions dictate.  Why would nature bother with such a messy, noisy, and indirect system of incentives rather than just give us neutral impulses?

Finally, you could try answering with the argument that evolution does not find optimal solutions, just solutions that work.  But that argument by itself can be made into a defense of everything and we are back to just-so stories.

How often does your mind wander?

Some of the most striking evidence comes from Jonathan Schooler, a psychologist at the University of California at Santa Barbara who is one of the leading researchers on mind wandering. In 2005 he and his colleagues told a group of undergraduates to read the opening chapters of War and Peace on a computer monitor and then to tap a key whenever they realized they were not thinking about what they were reading. On average, the students reported that their minds wandered 5.4 times in a 45-minute session. Other researchers have gotten similar results with simpler tasks, such as pronouncing words or pressing a button in response to seeing particular letters and numbers. Depending on the experiment, people spend up to half their time not thinking about the task at hand—even when they’ve been told explicitly to pay attention.

When I was a kid I thought there was something wrong with me because I would “read” pages at a time without paying attention to what I was reading.  My eyes would crawl over the words and move from line to line and in a certain real sense I was reading but my conscious mind was completely uninvolved.  After a few pages I would notice that I had absorbed nothing.

I still have a wandering mind but over time I have come to view it as a net asset.  The key is learning to teach your wandering mind to leave breadcrumbs.  Because it knows how to get to places that your conscious mind doesn’t.

Because a fair amount of mind wandering happens without our ever noticing, the solutions it lets us reach may come as a surprise. There are many stories in the history of science of great discoveries occurring to people out of the blue. The French mathematician Henri Poincaré once wrote about how he struggled for two weeks with a difficult mathematical proof. He set it aside to take a bus to a geology conference, and the moment he stepped on the bus, the solution came to him. It is possible that mind wandering led him to the solution. John Kounios of Drexel University and his colleagues have done brain scans that capture the moment when people have a sudden insight that lets them solve a word puzzle. Many of the regions that become active during those creative flashes belong to the default network and the executive control system as well.

The article is worth a read. (akubura ack:  Mindhacks)

I came across this philosophy paper (miter missive:  The Browser) which ponders whether the hypothesis of an omnipotent and omniscient God is any more likely to imply that God is good rather than God is evil.

Suppose, for example, that the universe shows clear evidence of having been designed. To conclude, solely on that basis, that the designer is supremely benevolent would be about as unjustified as it would be to conclude that it is, say, supremely malevolent, which clearly would not be justified at all.

The problem always appears at a much more basic level for me.  Suppose you are an omnipotent God.  What do you do?  Obviously to answer that question you should start by identifying all of the feasible alternatives (ok that one is easy, everything is feasible because you are omnipotent), rank them according to your preferences, and do the one that ranks at the top.  Wait a minute.  What are your preferences?

You are omnipotent, remember.  It’s not just that you get to choose your preferences.  Your preferences do not exist until you create them.  Ok.  So first you choose your preferences, then solve the problem of what to do given those preferences.  How do you choose your preferences?  It is no help trying to choose the preferences that are easiest to satisfy blissfully, because you are omnipotent:  all preferences are trivial to satisfy blissfully.  But why do you want to want that anyway?  How do you even know what you want to want?  You don’t have any preferences yet, right?

So I think that an omnipotent God would be too neurotic to even get out of bed and decide whether to be good or evil.

Should texting, emailing and browsing be banned in meetings? This article discusses the current climate.

Despite resistance, the etiquette debate seems to be tilting in the favor of smartphone use, many executives said. Managing directors do it. Summer associates do it. It spans gender and generation, private and public sectors.

A few years ago, only “the investment banker types” would use BlackBerrys in meetings, said Frank Kneller, the chief executive of a company in Elk Grove Village, Ill., that makes water-treatment systems. “Now it’s everybody.” He said that if he spotted 6 of 10 colleagues tapping away, he knew he had to speed up his presentation.

While I would always prefer to have my iPhone handy, I would volunteer to keep the meeting smartphone free.  And that is not because I want the undivided attention of my colleagues.  If we all deprive ourselves we create high-powered incentives to keep the meeting as short as possible.  That sentiment is echoed here:

Mr. Brotherton, the consultant, wrote in an e-mail message that it was customary now for professionals to lay BlackBerrys or iPhones on a conference table before a meeting — like gunfighters placing their Colt revolvers on the card tables in a saloon. “It’s a not-so-subtle way of signaling ‘I’m connected. I’m busy. I’m important. And if this meeting doesn’t hold my interest, I’ve got 10 other things I can do instead.’ ”

Wimbledon, which has just gotten underway today, is a seeded tournament, like all major tennis events and other elimination tournaments.  Competitors are ranked according to strength and placed into the elimination bracket in a way that matches the strongest against the weakest.  For example, seeding is designed so that when the quarter-finals are reached, the top seed (the strongest player)  will face the 8th seed, the 2nd seed will face the 7th seed, etc.   From the blog Straight Sets:

When Rafael Nadal withdrew from Wimbledon on Friday, there was a reshuffling of the seeds that may have raised a few eyebrows. Here is how it was explained on Wimbledon.org:

The hole at the top of the men’s draw left by Nadal will be filled by the fifth seed, Juan Martin del Potro. Del Potro’s place will be taken by the 17th seed James Blake of the USA. The next to be seeded, Nicolas Kiefer moves to line 56 to take Blake’s position as the 33rd seed. Thiago Alves takes Kiefer’s position on line 61 and is a lucky loser.

Was this simply Wimbledon tweaking the draw at their whim or was there some method to the madness?

Presumably tournaments are seeded in order to make them as exciting as possible for the spectators.  One plausible goal is to maximize the chances that the top two players meet in the final, since viewership peaks considerably for the final.  But the standard seeding is not obviously the optimal one for this objective:  it makes it easy for the top seed to make the final but hard for the second seed.  Switching the positions of the top ranked and second ranked players might increase the chances of having a 1-2 final.

You would also expect that early-round matches would be more competitive.  Competitiveness in contests, like tennis matches, is determined by the relative strength of the opponents.  Switching the positions of 1 and 2 would even out the matches played by the top player at the expense of unbalancing the matches played by the second player; the average balance across matches would be unchanged.  If effort is concave in the relative strength of the opponents then the total effect would be to increase competitiveness.
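The 1-2-final comparison is easy to check numerically.  Below is a minimal sketch for an 8-player bracket.  Everything in it is an illustrative assumption, not tennis data:  I assume a Bradley-Terry win model (player i beats player j with probability s_i/(s_i+s_j)) and give seed k a strength of 1/k.

```python
def win_prob(si, sj):
    # Assumed Bradley-Terry model: P(i beats j) = s_i / (s_i + s_j)
    return si / (si + sj)

def advance(bracket, strength):
    # bracket: players in bracket order (length a power of 2).
    # Returns dict: player -> probability of winning this sub-bracket.
    if len(bracket) == 1:
        return {bracket[0]: 1.0}
    half = len(bracket) // 2
    left = advance(bracket[:half], strength)
    right = advance(bracket[half:], strength)
    probs = {}
    for i, pi in left.items():
        for j, pj in right.items():
            w = win_prob(strength[i], strength[j])
            probs[i] = probs.get(i, 0.0) + pi * pj * w
            probs[j] = probs.get(j, 0.0) + pi * pj * (1 - w)
    return probs

# Hypothetical strengths: seed k has strength 1/k.
strength = {k: 1.0 / k for k in range(1, 9)}

def prob_12_final(bracket):
    # P(final is seed 1 vs seed 2) = P(1 wins its half) * P(2 wins the other).
    half = len(bracket) // 2
    top = advance(bracket[:half], strength)
    bot = advance(bracket[half:], strength)
    return top.get(1, 0.0) * bot.get(2, 0.0) + top.get(2, 0.0) * bot.get(1, 0.0)

standard = [1, 8, 4, 5, 2, 7, 3, 6]   # usual 8-player bracket order
swapped  = [2, 8, 4, 5, 1, 7, 3, 6]   # positions of seeds 1 and 2 exchanged

print("standard:", prob_12_final(standard))
print("swapped: ", prob_12_final(swapped))
```

Swapping in different strength vectors (or a different win model) shows how sensitive the comparison is to the assumed shape of the strength distribution.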

When you start thinking about the game theory of tournaments, your first thought is:  what has Benny Moldovanu said on the subject?  And sure enough, Google turns up this paper by Groh, Moldovanu, Sela, and Sunde which seems to have all the answers.  Incidentally, Benny will be visiting Northwestern next fall and I expect that he will be bringing his tennis racket…

One of the highly touted features of the iPhone is the abundance of applications available for near-instantaneous download and seamless installation.  In traditional Apple fashion, in order to keep full control of the software environment and maintain this seamless experience, Apple exercises strict control over which apps are made available through the app store.  Short of jailbreaking your phone, there is no other way to install third-party software.

The process by which apps are submitted and reviewed strikes many as highly inefficient.  (It also strikes many as anti-competitive, but that is not the subject of this post.  There are legitimate economic arguments supporting Apple’s control of the platform and for my purposes here I will take those as given, although for now I remain agnostic on the question.) Developers sink significant investment producing launch-ready versions of their software and only then learn definitively whether the app can be sold.  There is no recourse if the submission is denied.

(Just recently, we witnessed an extreme example of the kind of deadweight loss that can result.  A fully licensed, full-featured Commodore64 Operating System emulator, 1 year in the making, was just rejected from the app store.)

Unfortunately, this is an inevitable inefficiency due to the ubiquitous problem of incomplete contracting.  In a first-best world, Apple would publicize an all-encompassing set of rules outlining exactly what software would be accepted and what would be rejected.  In this imaginary world of complete contracts, any developer would know in advance whether his software would be accepted and no effort would be wasted.

In reality it is impossible to conceive of all of the possibilities, let alone describe them in a contract.  Therefore, in this second-best world, at best Apple can publish a broad set of guidelines and then decide on a case-by-case basis when the final product is submitted.  This introduces inefficiencies at two levels.  First, the direct effect is that developers face uncertainty whether their software will pass muster and this is a disincentive to undertake the costly investment at the beginning.

But the more subtle inefficiency arises due to the incentive for gamesmanship that the imperfect contract creates.  First, Apple’s incentive in constructing guidelines ex ante is to err on the side of appearing more permissive than they intend to be.  Apple knows even less than the developers what the final product will look like and Apple values the option to bend the (unwritten) rules a bit when a good product materializes.  While it is true that Apple’s desire to keep a reputation for transparent guidelines mitigates this problem to some extent, the fact remains that Apple does not internalize all the costs of software development.

Second, because Apple cannot predict what software will appear it cannot make binding commitments to reject software that is good but erodes slightly their standards.  This gives developers an incentive to engage in a form of brinkmanship:  sink the cost to create a product highly valued by end users but which is questionable from Apple’s perspective.  By submitting this software the developer puts Apple in the difficult position of publicly rejecting software that end users want, and the fear of bad publicity may lead Apple to accept software that it would have liked to commit in advance to reject.

The iPhone app store is only a year old and many observers think of it as a short-run system that is quickly becoming overwhelmed by the surprising explosion of iPhone software.  When the app store is reinvented, it will be interesting to see how they approach this unique two-sided incentive problem.

Update: Mark Thoma develops the point further here.  He didn’t ask in advance for permission to do that, but if he had I would have given encouraging signals and then rejected it ex post.

That’s the title of a new essay by the omnipresent David Levine.  An excerpt:

The key difference between psychologists and economists is that psychologists are interested in individual behavior while economists are interested in explaining the results of groups of people interacting. Psychologists also are focused on human dysfunction – much of the goal of psychology (the bulk of psychologists are in clinical practices) is to help people become more functional. In fact, most people are quite functional most of the time. Hence the focus of economists on people who are “rational.” Certain kinds of events – panics, for example – that are of interest to economist no doubt will benefit from understanding human dysfunctionality. But the balancing of portfolios by mutual fund managers, for example, is not such an obvious candidate. Indeed one of the themes of this essay is that in the experimental lab the simplest model of human behavior – selfish rationality with imperfect learning – does an outstanding job of explaining the bulk of behavior.

Jonah Lehrer suggests leveraging “mental accounting” to create a free lunch by imposing a tax on homeowners to pay for energy retro-fitting that they won’t notice because of its small size relative to the price of the house:

But I can already hear the naysayers: Won’t homeowners object? Won’t that just add thousands of dollars to the cost of buying a home? Enter mental accounting, an irrational bias that can be tweaked to produce positive outcomes. Because a home is already such a gigantic purchase, and because the home buying process is already so saturated in peculiar fees (inspection charges, loan points, escrow fees, mortgage broker expenses, etc.) I’d argue that consumers will be much less sensitive to the cost of a home renovation. They’ll barely even notice the $5000 “energy efficiency charge” when it appears on their massive bill from the real estate agent. (Besides, they’ll get a big chunk of the money back as a tax credit.) In other words, they’ll act just like me the last time I stayed at a fancy hotel, when I ordered the internet and ate the peanuts.

I have always preferred Guinness at the warmer temperatures I have had it served to me in Britain.  And I always assumed that 45-50F was the recommended serving temperature.  That is why I was surprised to see this:

[photo: a Guinness bottle labeled “Serve Extra Cold”]

I assume that this is US-specific marketing.  In the US, beer is always served ice cold and the marketing around this fact can be hysterical.  I remember an advertisement for Miller Genuine Draft which claimed that it was the “coldest.”

Anyway, does anybody know what temperature Guinness is served, say in Ireland?  And on these new bottles with the “widget” and the nitrogen, does it also read “Serve Extra Cold” where you live?

(In the background is guacamole made in a molcajete.  Grind 1/2 white onion (chopped), one jalapeno (diced), and a small handful of cilantro in the bottom of the molcajete with some kosher salt.  Add 2 ripe avocados and the juice of one lime.  Mash the avocado with the onion/cilantro/chile mixture using a plastic fork.  Top with some more diced white onion and chopped cilantro.  No tomatoes!  Pair with… well, duh.)

It is well-known that when you ask a person to construct a random sequence, say of zeroes and ones, the sequence they create differs in systematic ways from a “truly random” sequence.  For example, they exhibit regression to the mean:  the person constructing the sequence is too careful to make sure that the short-run averages are 50-50, resulting in too-frequent alternation between zero and one.

Knowing this, here is a simple bet you can use as a money pump at parties.  Tell someone to write down a random sequence of heads and tails, and bet them that you can guess the entries in their sequence.  A simple strategy that correctly predicts more than 50% of the time is to guess the first entry at random and then guess that each subsequent entry is the opposite of the previous one.  But if you study this article (and its links), you can refine your strategy and do even better.

And soon, as icing on the cake, you can offer your victim favorable odds, say you pay $1.10 every time you are wrong and she pays you $1.00 every time you are right. You will still make money.
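The arithmetic behind the favorable-odds offer can be checked with a toy simulation.  The "human" sequence model below is an assumption for illustration:  the writer alternates with probability 0.6 rather than 0.5, standing in for the too-frequent alternation described above.

```python
import random

random.seed(0)

def humanlike_sequence(n, p_alternate=0.6):
    # Assumed model of a human "random" sequence: after each symbol the
    # writer switches with probability p_alternate > 0.5.
    seq = [random.randint(0, 1)]
    for _ in range(n - 1):
        if random.random() < p_alternate:
            seq.append(1 - seq[-1])
        else:
            seq.append(seq[-1])
    return seq

def predict_opposite(seq):
    # Guess the first symbol at random, then always guess the opposite
    # of the previous symbol.  Returns the fraction of correct guesses.
    guesses = [random.randint(0, 1)] + [1 - s for s in seq[:-1]]
    return sum(g == s for g, s in zip(guesses, seq)) / len(seq)

hit_rate = predict_opposite(humanlike_sequence(100_000))
print(f"hit rate: {hit_rate:.3f}")  # roughly p_alternate under this model

# Expected profit per guess when you pay $1.10 on a miss and win $1.00 on a hit:
profit = hit_rate * 1.00 - (1 - hit_rate) * 1.10
print(f"expected profit per guess: ${profit:.3f}")
```

At a 60% hit rate the bet clears even the 1.10-to-1.00 handicap; the edge disappears only as p_alternate approaches a fair 0.5.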

Then after you have relieved your fellow revelers of their pocket cash, and they want to turn the tables on you, remember to use one of the coins you have just won to construct your sequence in a truly random fashion.

I’ve had a couple of excellent bottles of Coltassala in the past.  Definitely had to try it again.  Came away a bit disappointed.  It was very heavy.  Quite bitter.  May have drunk it about 10 years too early in my attempt to recreate old memories.  It’s almost 100% Sangiovese.  I’ve never liked Chianti, which is also Sangiovese.  I think it needs to be blended with another grape.

Which brings me to… Balifico, which was delicious.  It’s a Cabernet/Sangiovese blend.  Very smooth.  Lots of fruit, deep red color.

The Chianti from Castello di Volpaia is quite easy to get hold of in the States.  I have found Coltassala in the past.  Hope I can get hold of Balifico somehow.

As you can guess from my earlier post, I totally love Volpaia.  Castello di Volpaia has a tasting room and what looks like a restaurant under construction.  There are at least two other places to eat, including the wonderful Bottega.  Not sure what would keep the Euros rolling in but I would love to move here.  It’s a cliche.

Q: How do you prove the existence of Spring in Chicago?

A: By continuity.

In February it was zero Fahrenheit. Today it is muggy and approaching 90.  By continuity, Spring happened somewhere in between.  But note that this existence proof is not constructive.  It is of no help in telling us exactly when it was that Spring fluttered by.  I must have been sleeping at the time.
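The theorem doing the work here is the intermediate value theorem.  A minimal sketch, taking as an arbitrary assumption that 65F is the threshold that counts as “Spring”:

```latex
% Intermediate value theorem, applied to Chicago temperature.
% Assumption: 65F is an arbitrary threshold for "Spring".
T : [t_{\text{Feb}}, t_{\text{today}}] \to \mathbb{R} \text{ continuous}, \quad
T(t_{\text{Feb}}) = 0, \quad T(t_{\text{today}}) = 90
\;\Longrightarrow\; \exists\, t^{*} : \; T(t^{*}) = 65.
```

The conclusion asserts only that $t^{*}$ exists; it gives no procedure for finding it, which is exactly the non-constructive character of the joke.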

A Slate article reports that in surveys the proportion of people who say they voted for Obama over McCain does not match the results of the election.   Of course this panders to my inner economist.  I’m interested in how much of the difference can be attributed to outright lying versus self-deception.  An outright liar knows he is lying while credible self-deception involves some chance you actually believe the story you tell yourself.

It would be cool to have an experiment that distinguished between the two.  Maybe it’s already out there?