This lecture brings together everything built up to this point. We are going to develop an intuition for why competitive markets are efficient, using a model of profit-maximizing sellers who compete in an auction market by setting reserve prices. In the previous lecture we saw how the profit-maximization motive leads a seller with market power to choose an inefficient selling mechanism, in the form of a reserve price above cost. Here we begin with some intuition for why competition should reduce the incentive to distort price in this way.
(This is probably the weak link in the whole class. I do not have a good idea of how to teach this and in fact I am not sure I understand it so well myself. This is the first place to work on improving the class next time. Any suggestions would be appreciated.)
Finally, we jump to a model with a large number of buyers and sellers all competing in a simultaneous ascending double auction. With so much competition, if sellers set reserve prices above their costs there will be
- no sellers who are doing better than if they just set the reserve price equal to cost
- a positive mass of sellers who would do strictly better by reducing their reserve price to equal their cost
In that sense it is a dominant strategy for all sellers to set reserve price equal to their cost. This equates the “supply” curve with the cost curve and produces the utilitarian allocation. Here are the notes.
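The dominance claim in those bullets reduces to simple arithmetic once sellers are price takers. Here is a minimal sketch with made-up numbers (a cost of 1 and a market-clearing price of 2), not anything from the lecture notes themselves:

```python
def seller_profit(cost, reserve, market_price):
    """Profit of a price-taking seller in a large double auction:
    she trades at the market price whenever her reserve allows it.
    (Toy numbers; the lecture's model has a continuum of sellers.)"""
    if reserve <= market_price:
        return market_price - cost  # trades at the market price
    return 0.0                      # priced herself out of the market
```

Any reserve at or below the market price earns the same profit, and any reserve above it earns nothing, so setting the reserve equal to cost is weakly dominant and the supply curve coincides with the cost curve.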
We saw The Fantastic Mr Fox a few weeks ago. It was a thoroughly entertaining movie and I highly recommend it. But this is not a movie review. Instead I am thinking about movie previews and why we all subject ourselves to sitting through 10-plus minutes of previews.
The movie is scheduled to start at the top of the hour, but we all know that what really starts at the top of the hour are the previews and they will last around 10 minutes at least. Why don’t we all save ourselves 10 minutes of time and show up 10 minutes late?
Maybe you like to watch previews but I don’t and in any case I can always watch them online if I really want to. I will assume that most people would prefer to see fewer previews than they do.
One answer is that the theater will optimally randomize the length of previews so that we cannot predict precisely the true starting time of the movie. To guarantee that we don’t miss any of the film we have to take the chance of seeing some previews. But my guess is that this doesn’t go very far as an explanation, and anyway the variation in preview lengths is probably small.
In fact, even if the theater publicized the true start time we would still come early. The reason is that we are playing an all-pay auction bidding with our time for the best seats in the theater. Each of us decides at home how early to arrive trading off the cost of our time versus the probability of getting stuck in the front row. The “winner” of the auction is the person who arrives earliest, the prize is the best seat in the theater, and your bid is how early to arrive. It is “all pay” because even the loser pays his bid (if you come early but not early enough you get a bad seat and waste your time.)
In an all-pay auction bidders have to randomize their bids. If you knew how everyone else was bidding you would arrive just before them and win. But then they would want to come earlier too, and so on. The randomizations are calibrated so that you cannot know for sure when to arrive if you want to get a good seat, and the tradeoffs between coming earlier and later are exactly balanced.
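To see what "exactly balanced" means, here is a toy Monte Carlo check of the two-bidder case, with the best seat worth 1 unit of waiting time (my stylized numbers, not anything from the post). In the symmetric equilibrium each bidder arrives a uniformly random amount of time early, and every arrival time in the support then earns the same expected payoff:

```python
import random

def expected_payoff(arrive_early, seat_value=1.0, trials=200_000, seed=0):
    """Expected payoff from arriving `arrive_early` units of time early,
    against an opponent who mixes uniformly on [0, seat_value] -- the
    symmetric equilibrium of the two-bidder all-pay auction (a stylized
    sketch, not the post's exact model)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        rival = rng.uniform(0.0, seat_value)
        prize = seat_value if arrive_early > rival else 0.0  # earliest arrival wins the seat
        total += prize - arrive_early  # your time is spent either way: all pay
    return total / trials
```

Each candidate arrival time nets roughly zero: the expected value of the seat you win is exactly dissipated by the time spent waiting, which is the signature of an all-pay auction.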
As a result most people arrive early, sit, and wait. This is where the previews come in. Since we are all going to be there anyway, the theater might as well show us previews. Indeed, even people like me would rather watch previews than sit in an empty theater, so the theater is doing us a favor.
And this even explains why theater tickets are always general admission. Let’s compare the alternative. The theater knows we are “buying” our seats with our time. The theater could try to monetize that by charging higher prices for better seats. But it’s a basic principle of advertising that the amount we are willing to pay to avoid being advertised at is smaller than the amount advertisers are willing to pay to advertise to us. (That is why pay TV is practically non-existent.) So there is less money to be made selling us preferred seats than having us pay with our time and eyeballs.
Musicians and academics are promiscuous collaborators. They flit from partnership to partnership sometimes for one-off gigs, sometimes for ongoing stints. In academia, regardless of the longevity of the group, the individual author is always the atomic unit. Co-authorships are identified simply with the names of the authors. Whereas musicians eventually form bands.
Bands have identities separate from the individuals in the bands. The name of the band stores that identity. It also solves a problem we face in academia of how to order the names of the contributors. You don’t. (There is evidence that the lexical ordering of names is good for Andersons and bad for Zames.) We should form bands too.
The idea of a band is important enough that sometimes even solo musicians incorporate themselves as bands. Roger Myerson is the Nine Inch Nails of game theory.
Bands work in the studio (writing papers) and then tour (giving seminars.) Musicians have two typical ways of organizing these. Jazz and pop bands create and perform as a group. Classical music is usually performed by specialists rather than the composer herself.
Our bands do something in between which is hard to understand when you think of it this way. We compose as a band but then perform as individuals. That’s weird because you would think that either you want to hear the composer do the performing or a performance specialist. If it is always the composer then it must be because the composer has a special insight into the performance. But then why not all of them? We should tour as bands some times. And we should also reward performance specialists who perform others’ work.
I want to name my bands. I want my next co-authored paper to be “by (insert name of band here) ” Sandeep, what do you say? Our torture paper will be “by Cheap Talk.” I look forward to making petulant demands and trashing hotel rooms.
In that other post, I was being serious. But here, just for fun, let’s name some of the great economics bands. I will start.
- Fudenberg and Levine: The Gossamer Anvil, an early 70s jam band.
- Gul and Pesendorfer: Mixtürhëad.
- Morris and Shin: Eskalator, prog rock.
When I read this (via Ryan Sager) about the classic good cop/bad cop negotiating ploy:
BUT there was also a twist we did not address in our research, and in fact, would have been tough to do as we were studying people in “the wilds” of organizational life. Their research shows that starting with a good cop and then using a bad cop was not effective, that the method only was effective for negotiating teams when the bad cop went first and the good cop followed. So, this may mean it really should be called “The Bad Cop, Good Cop Technique.”
it brought to mind some famous studies of Daniel Kahneman on perception and the timing of pleasure and pain. There is one you will never forget once you hear about it. Proctologists randomly varied the way in which they administered a standard colonoscopy procedure. Some patients received the usual treatment in which a camera was plunged into their rectum and then fished around for a few minutes. The fishing around is extremely uncomfortable.
An experimental group received a treatment which was identical except that at the very end <you can do better than that Beavis> the camera was left in situ <ok that’s pretty good> for an extra 20 seconds or so. The subjects were interviewed during the procedure and asked to report their level of pain, and after the procedure to report on the procedure overall. As intended, those in the experimental group reported that the final 20 seconds were less painful than the main part of the procedure. But the headline finding of the experiment was that those subjects receiving the longer treatment found the procedure overall to be more tolerable than the control group.
Regular readers of this blog will know that I consider that a good thing.
The financial crisis is motivating a search for new models of asset markets and their interaction with the real economy. It seems obvious that, for example, the housing bubble can only be explained by a model in which asset prices are bid up by the activity of highly optimistic investors or speculators. Models which build in these divergent beliefs (and not just differences in information) are, perhaps very surprisingly to outsiders, only recently coming to mainstream economic theory.
Alp Simsek asks whether the presence of optimistic traders can inflate the price of assets, say housing prices. It seems obvious, but remember that investment in housing is leveraged using collateralized loans where the house itself is the collateral. If the optimists are borrowing from the “realists” to buy houses at overinflated prices, and they are offering up the house as collateral, then surely the realists aren’t willing to lend?
Alp shows that this logic sometimes holds, but not always. And he formalizes a precise way of measuring optimism which determines whether the presence of optimists will inflate asset prices, or alternatively their optimism will be filtered due to the realists’ withholding of credit.
Suppose that you are a realist and you are making a loan to me to purchase a house. A year later we will see whether housing prices have gone up or down. If they go up, I will pay off the loan and realize a profit. If they go down I will default on the loan. A key idea is to understand that the loan effectively makes us partners in the purchase of the house. I own it on the upside (and I pay you back your loan) and you own it on the downside. We pay for the house together too: you contribute the loan amount and I contribute the down payment.
The equilibrium price of the house will be determined by how much we, as partners, are willing to pay. I am an optimist and I would like to pay a lot for it, but I am financially constrained so my contribution to the total price is some fixed amount, my down payment. Thus, our total willingness to pay is determined by how much you are willing to pay to enter this partnership.
Now we can see how my optimism plays a role. Suppose I am more optimistic than you in the sense that I think there is a lower probability of default than you do. It turns out this doesn’t make our willingness to pay any higher than it would be if I were a realist just like you. That’s because you own the house in the event of default, so it’s the probability that you assign to default that enters into our total value, not the probability that I assign. It’s true that I assign a higher probability to the good event that the price goes up, but I am already putting all of my cash into the partnership. I can’t do anything more to leverage this form of optimism.
But suppose instead that the way in which I am more optimistic than you is slightly different. We both assign the same probability to default, i.e. the event that the price falls. Where we differ is in terms of our beliefs conditional on the price going up. In particular I think that conditional on the upside, the expected price increase is higher than you think it is. Now we have a new way to leverage our partnership. Since I expect to have a higher upside, I am prepared to offer you a higher payment in the event of that upside. (That is, I am willing to pay back a larger loan amount.) And the promise of that higher payment on the upside coupled with the same old house on the downside makes this a strictly more attractive partnership for you and you are willing to pay more to enter it. (That is, you are willing to loan more to me.)
Indeed these collateralized loans seem to be the ideal contracts for us to make the most of our differences in beliefs. And once we see how that works, it is easy to go from there to a theory of a dynamic housing bubble. Tomorrow there might be optimistic investors who will partner with creditors to bid up housing prices. Today, you and I might have differences in beliefs about the probability that those optimistic investors might materialize. If I am more optimistic than you about it, you and I can enter into a partnership which leverages our different beliefs about tomorrow’s differences in beliefs, etc.
There is an important thing to keep in mind when considering models with heterogeneous beliefs. We don’t have a good handle on welfare concepts in these models. For example, in Simsek’s model the efficient allocation is to give the asset to the optimists. Indeed, the financial friction is only an impediment to achieving an efficient allocation. A planner, faced with the same constraint, would not do anything different than the market. If we apply standard welfare notions like this, then these models are not a good framework for discussing financial reform.
- The applied physics of pizza tossing.
- Video of Glenn Gould playing the Goldberg Variations. Starts at around 6:30 and goes on for six clips. Really good.
- Charles Mingus’ cat toilet training program.
- The sexual battles of ducks.
- On behalf of the two spaces between sentences I would like to say I think they are beautiful. And that it must be lonely to be one space. And I know this is wrong.
Out now is a collection of academic essays on The Big Lebowski.
Where cult films go, academics will follow. New in bookstores, and already in its second printing, is “The Year’s Work in Lebowski Studies,” an essay collection edited by Edward P. Comentale and Aaron Jaffe (Indiana University Press, $24.95). The book is, like the Dude himself, a little rough around the edges. But it’s worth an end-of-the-year holiday pop-in. Ideally you’d read it with a White Russian — the Dude’s cocktail of choice — in hand.
And here is a Big Lebowski random quote generator.
Here’s a purely self-interested rationale for affirmative action in hiring. An organization repeatedly considers candidates for employment. A candidate is either good or just average and there are minority and non-minority candidates. The quality of the candidate and his race are observable. The current members decide collectively whether to make a job offer to the candidate.
What’s not observable is whether the applicant is biased against the other race. A biased member prefers not to belong to an organization with members of the other race. In particular, if hired, he will tend to vote against hiring them.
Unbiased non-minority members of such an organization will optimally hold minority applicants to a lower quality standard, at least initially. The reason is simple. An organization with no minority members will more often have its job offers accepted by biased non-minority candidates, who will then make it harder to hire high-quality minority candidates in the future. Since bias is not observable, affirmative action is an alternative instrument to ensure that the organization is not hospitable to those who are biased.
The effect is weaker in the opposite direction. Even if there are minority applicants who are biased in favor of minorities, their effect on the organization’s decision-making will be smaller because they are in the minority. So at the margin there is a gain to practicing at least some affirmative action.
(This also explains why every economics department should have at least one structural and one reduced-form empirical economist.)
To those following me on Twitter, I am not losing my mind. (Or at least not any faster than always.)
- Don’t go near that tree, there’s a guy who looks just like Danny Bonaduce perched up there hurling pears at unsuspecting passersby. (Partridge in a pear tree.)
- 11th-hour negotiations avert war between the two great superpowers of the turtle world. (Two turtle doves, get it? 🙂 )
- Frottez les trois poules avec du romarin, puis faites-les revenir dans une poêle profonde avec de l’ail. (Three french hens)
- This is getting out of hand. Four times already this morning! How do I register for the avian do-not-call list? (Four calling birds.)
- five golden rings
- I was frozen with terror. But then I had a vision. Half a dozen geese. All my fears were put to rest. (Six geese allaying.)
- Someone threw my favorite Sufjan Stevens album into Lake Michigan. (See here.)
- Sir great news from the servants in the dairy. I know you’ve been worried about the cows, but today 8 made some milk, King Hexanoel. (Say out loud “8 made some milk King.”)
- Madame, we have shoes for your Christmas ball somewhere in these boxes. hmm…Men’s running? No. Ah here it is, Size 9 Ladies’ Dancing.
- Let’s get this party started, where are those lords I keep hearing about? What, sleeping?? Off with their heads! Wait, what? Oh never mind. (10 lords a leaping, not sleeping.)
- I feel like a sewer rat being pulled in 11 different directions. (11 pipers piping.)
- Hey you two elves, grab your sticks and give me a drum roll. This is the grand finale…#twelvetweetsofchristmas. (Two elf drummers drumming.)
Readers of this blog know that I view that as a very good thing.
Justin Rao from UCSD analyzes shot-making decisions by the Los Angeles Lakers over the course of 60 games in the 2007-2008 NBA season. He collected data on the timing of the shot and identity of the shooter and then recorded additional data such as defensive pressure and shot location by watching the games on video. The data were used to check some basic hypotheses of the decision theory and game theory of shot selection.
The team cooperatively solves an optimal stopping problem in deciding when to take a shot over the course of a 24 second possession. At each moment a shot opportunity is realized and the decision is whether to take that shot or to wait for a possibly better opportunity to arise. Over time the option value of waiting declines because the 24 second clock winds down and the horizon over which further opportunities can appear dwindles. This means that the team becomes less selective over time. As a consequence, we should see in the data that the success rate of shots declines on average later in the possession. Justin verifies this in the data.
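The declining selectivity can be seen in a toy backward induction, with shot opportunities drawn uniform on [0, 1] expected points each second (my simplification, not Rao's estimated model):

```python
def shot_cutoffs(seconds=24):
    """Backward induction for a toy shot-clock stopping problem: each
    second an opportunity worth U[0,1] expected points arrives, and the
    team shoots iff it beats the value of waiting. cutoffs[t] is the
    threshold used with t+1 seconds left (cutoffs[0] = last second)."""
    wait_value = 0.0                # a possession that expires scores 0
    cutoffs = []
    for _ in range(seconds):
        cutoffs.append(wait_value)  # shoot iff opportunity >= wait_value
        # value of one more second on the clock: E[max(x, wait_value)], x ~ U[0,1]
        wait_value = (1.0 + wait_value * wait_value) / 2.0
    return cutoffs
```

The cutoff rises with time remaining (0 in the final second, 0.5 with two seconds left, and so on), so shots taken late in the clock clear a lower bar, which is exactly why average success rates decline later in the possession.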
Of course, the shot opportunities do not arise exogenously but are the outcome of strategy by the offense and defense. The defense will apply more pressure to better shooters and the offense will have their better shooters take more shots. Both of these reduce the shooting percentage of the better shooters and raise the shooting percentage of the worse shooters. (For example when the better shooter takes more shots he does so by trying to convert less and less promising opportunities.)
With optimal play by both sides, this trend continues until all shooters are equally productive. That is, conditional on Kobe Bryant taking a shot at a certain moment, the expected number of points scored should be the same as the alternative in which he passes to Vladimir Radmanovic who then shoots. To achieve this, Kobe Bryant shoots more frequently but has a lower average productivity. Also the defense covers Radmanovic more loosely in order to make it relatively more attractive to pass it to him. This is all verified in the data.
Finally, these features imply that a rising tide lifts all boats. That is, when Kobe Bryant is on the court, in order for productivities to be equalized across all players it must be that all other players’ productivities are increased relative to when Kobe is on the bench. He makes his teammates better. This is also in the data.
The equal productivity rule applies only to players who actually shoot. In rare cases it may be impossible to raise the productivity of the supporting cast to match the star’s. In that case the optimum is a corner solution: the star should take all the shots and the defense should guard only him. On March 2, 1962 Wilt Chamberlain was so unstoppable that despite being defended by three and sometimes four defenders at once, he scored 100 points, the NBA record.
- The top 20 internet lists of 2009. It starts off with a bang at #20: 5 cats that look like Wilford Brimley.
- A million lists of top-three books of 2009. I was not asked, but if I were asked I would have proven how literate and practical I am by listing Bolaño’s 2666 even though I didn’t read it (and it was published in 2004.)
- Year-end list of lists about jazz in 2009. Coming in at #1 on the list of worst mustaches in the Bill McHenry Quintet… Bill McHenry!
- The Noughtie List. From Kottke.org. It’s got almost everything covered. One omission: I did not find the list of things not listed on that list.
My sketch of the snowball fight reminded Eddie Dekel of a popular children’s game. After he described it to me, I recognized it as a game I have seen my own kids play. It works like this. Two kids face off. At each turn they simultaneously choose one of three actions: load, shoot, defend. (They do this by signaling with their arms: cock your wrist to load, make a gun with your fist to shoot, cross your arms across your chest to defend. They first clap twice to synchronize their choices, just like in rock-paper-scissors.)
If you shoot when the other is loading you win. You cannot shoot unless you have previously loaded. If you shoot unsuccessfully (because the opponent either defended or also shot) your gun is empty and you must reload again. (Your gun holds only one bullet. But Eddie mentioned a variant in which guns have some larger, but still finite, capacity.)
The game goes on until someone wins. In practice it usually ends pretty quickly. But what about in theory?
First a little background theory. This is a symmetric, zero-sum, complete information multi-stage game. If we assign a value of 1 to winning and 0 to losing, the symmetric zero-sum nature means that each player can guarantee an expected payoff of 1/2. In that respect the game is similar to rock-scissors-paper. Indeed the game appears to be a sort-of dynamic extension of RSP.
But, despite appearances, it is actually much less interesting than RSP. In RSP, the ex ante symmetry (each player expects a payoff of 1/2) is typically broken ex post (often one player wins and the other loses, although sometimes it is a draw.) By contrast, with best play LSD (load, shoot, defend; silly, but I don’t actually know if it has an official name) is never decisive and in fact it never ends.
Here’s why. The game has four “states” corresponding to how many bullets (zero or one) the two players currently have in their guns. Obviously the game cannot end when the state is (0,0) and since playing load is either forbidden (depending on the local rules) or dominated when the state is (1,1), the game cannot end there either.
So it remains to figure out what best play prescribes when the game is imbalanced, either state (1,0) or (0,1). The key observation is that just as at the beginning of the game, where symmetry implied that each player had an expected payoff of 1/2, it is still true at this state of the game that even the weaker player can guarantee an expected payoff of 1/2. Simply defend. Forever if need be. There is no reason to think that this is an optimal strategy, but still it’s one strategy at your disposal, so you certainly can’t do worse than that.
The surprising thing is that best play requires it. To see why, suppose that the weaker player chooses load with positive probability. Then the opponent can play shoot with probability 1 and the outcome is either (shoot, load) [settle down Beavis] in which case the opponent wins, or (shoot, defend) in which case the game transits to state (0,0). Since the value of the first possibility is 1 and the value of the second is 1/2 (just as at the start of the game), this gives an expected payoff to the opponent larger than 1/2. But since payoffs add up to 1, that gives the weaker player an expected payoff less than 1/2, which he would never allow.
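The arithmetic in that argument fits on one line. A sketch (the state and payoff labels are mine):

```python
def armed_value(p_load):
    """Armed player's expected value in state (1,0) when she plays shoot
    for sure and the unarmed player loads with probability p_load.
    (shoot, load)   -> she wins outright, value 1.
    (shoot, defend) -> her gun empties, back to state (0,0), value 1/2."""
    return p_load * 1.0 + (1.0 - p_load) * 0.5
```

The value exceeds 1/2 whenever `p_load > 0`, and since the two players' values sum to 1, the weaker player concedes value by ever loading. Hence he defends with probability 1.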
So the weaker player must defend with probability 1 and this means that the game will never end. Pretty boring for the players, but rather amusing for the spectators.
We can try to liven it up a bit for all involved. The problem is that it’s not really like RSP or its 2-action cousin Matching Pennies, which work the way they do because of the cyclical relation of their strategies. (Rock beats Scissors, which beats Paper, which beats Rock…) We can add in an element of that by removing the catch-all defend action and replacing it with two actions, defend-left and defend-right (say the child leans to either side.) And then instead of plain-old shoot, we have shoot-left and shoot-right. Your shot misses only when you shoot to the side opposite the one he leans to. There are a number of ways to rule on what happens when I shoot-left and he loads, but I would guess that anything sensible would produce a game that is more interesting than LSD.
Telephone is a kids’ game where the children sit in a circle and pass around a message by whispering in the neighbor’s ear. The game is to see how garbled the message gets by the time it makes it all the way around the circle. My 21st century version of telephone is played solitaire, using Google Translate.
- No, the only option at this point is to tackle her, landing you both deep in the snow
- (English -> French) Non, la seule option possible à ce point est à sa face, vous êtes tous deux d’atterrissage dans la neige
- (French -> Italian) No, l’unica opzione a questo punto è sul suo volto, si sono entrambi di atterraggio in mezzo alla neve
- (Italian -> Spanish) No, la única opción en este momento está en su cara, han de aterrizaje en la nieve
- (Spanish -> Chinese) 沒有,唯一的選擇在這個時候是你的臉,已經降落在雪地裡
- (Chinese -> Estonian) Ei, ainus võimalus sel ajal on oma nägu, on lossitud lumi
- (Estonian -> Afrikaans) Nee, die enigste opsie is op die oomblik is op jou gesig, dit geland in die sneeu
- (Afrikaans -> Turkish) Hayır, şu anda tek seçenek in your face, bu kar indi
- (Turkish -> English) No, currently the only option in your face, the snow fell
Hilarity ensues! (As usual, one of the kids seems to be sabotaging the game. Here we can see that it is the child translating from Afrikaans to Turkish.)
If this is all obvious, forgive me I came late to this (I grew up in Orange County, CA where it last snowed in December of Yeah Right.)
The first thing to do, obviously is to make a snowball. Your enemy combatant will do the same. You each now have one snowball in your stockpile. What next?
If you throw your snowball you will be unarmed and certain to pay the consequences. So you don’t. Neither does she. You are at a standoff, but very soon you figure out what to do while you wait for the standoff to resolve. Make another snowball. Of course she does the same.
Now you each have an arsenal of two snowballs. Two is very different from one, however, because if you throw a snowball you still have one to defend yourself with. But then you will have one fewer than she does. This still puts her at an advantage because once you use your last snowball you are again unarmed. So you will only throw your first snowball if you have a reasonable chance of landing it.
The alternative is to make another snowball. Which of these is the better option depends on what she is expecting. If she knows you will throw, she is prepared to dodge it and then press her advantage. If she knows you will make another one she will wait for you to reach down into the snow when you are most vulnerable and she will draw first blood.
So you have to randomize. So does she. There are two possible outcomes of these independent randomizations. First, one or two snowballs may fly resulting in a sequence of volleys which eventually deplete your stocks down to one or two snowballs left. The second possibility is that both of you increase your stockpile by one snowball.
Thus, equilibrium of a well-played snowball fight gives rise to the following stochastic process. At each stage, with a certain positive probability, the stockpiles both increase by one snowball. This continues without bound until, with the complementary probability in each stage, a fight breaks out depleting both stockpiles and beginning the process again from zero.
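That process is easy to simulate. In this sketch I collapse the "deplete down to one or two" detail into a full reset and pick an arbitrary per-stage fight probability, so the numbers are illustrative only:

```python
import random

def average_arsenal_at_fight(p_fight=0.3, stages=20_000, seed=1):
    """Simulate the equilibrium stochastic process sketched above: each
    stage a fight breaks out with probability p_fight (depleting the
    stockpiles), otherwise both sides add one snowball. Returns the
    average stockpile size at the moment fights break out; build-up
    lengths are geometric with mean (1 - p_fight) / p_fight.
    (p_fight is a made-up number, not derived from the equilibrium.)"""
    rng = random.Random(seed)
    stock, at_fight = 0, []
    for _ in range(stages):
        if rng.random() < p_fight:
            at_fight.append(stock)  # volleys fly; stocks are depleted
            stock = 0
        else:
            stock += 1              # standoff continues; both stockpile
    return sum(at_fight) / len(at_fight)
```

With `p_fight = 0.3` the average arsenal when a fight finally erupts is about (1 - 0.3)/0.3, a bit over two snowballs, and occasionally the standoff runs long enough to build an impressively large stockpile first.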
Special mention should be made of a third strategy which is to be considered only in special circumstances. Rather than standing and throwing, you can charge at her and take a shot from close range. This has the obvious advantages but clearly leaves you defenseless ex post. Running away should be ruled out because you will be giving up your entire store of snowballs and eventually you will have to come back. No, the only option at this point is to tackle her, landing you both deep in the snow. With the right adversary, this mutually assured destruction could be the best possible outcome.
It’s trendy to get your economist on around the holidays and complain about the inefficiency of gift exchange. Giving money is a more efficient way to make the recipient better off. But that’s a fallacy that only trips up poser-economists. To a real economist, that’s like observing that eating an omelette is an inefficient way to get all of the nutrients we need in our breakfast. Yeah, so? That’s not why I ate it.
A real economist recognizes unregulated, voluntary exchange when he sees it. He doesn’t bother inventing some hypothetical motivation for the exchange because he understands revealed preference. If they are doing it voluntarily then it is efficient, regardless of what they think they are getting out of it. Indeed, the pure consumption value of buying a plaid sweater for somebody is a perfectly good motivation. And since the recipient voluntarily accepts the gift, even better. If there was a Pareto superior alternative they would have done that instead.
So this holiday, swat that poser economist in red off your left shoulder, hold hands with the real economist in white on your right shoulder and give to your hearts’ content. (Oh and I am very easy to shop for. Just don’t forget to include a gift receipt!)
Or dead salmon?
By the end of the experiment, neuroscientist Craig Bennett and his colleagues at Dartmouth College could clearly discern in the scan of the salmon’s brain a beautiful, red-hot area of activity that lit up during emotional scenes.
An Atlantic salmon that responded to human emotions would have been an astounding discovery, guaranteeing publication in a top-tier journal and a life of scientific glory for the researchers. Except for one thing. The fish was dead.
Read here for a lengthy survey of the pitfalls of fMRI analysis. Via Mindhacks.
Tyler Cowen, quoting Ezra Klein on “penalties” for failing to purchase private insurance:
If you don’t have employer-based coverage, Medicare, Medicaid, or anything else, and premiums won’t cost more than 8 percent of your monthly income, and you refuse to purchase insurance, at that point, you will be assessed a penalty of up to 2 percent of your annual income. In return for that, you get guaranteed treatment at hospitals and an insurance system that allows you to purchase full coverage the moment you decide you actually need it. In the current system, if you don’t buy insurance, and then find you need it, you’ll likely never be able to buy insurance again. There’s a very good case to be made, in fact, that paying the 2 percent penalty is the best deal in the bill.
Luca Malbec 2007. It’s Argentina in a bottle. The wine is huge, almost black, and over the top in terms of fruit extraction, oak, and alcohol (14.5%). The bottle itself weighs twice as much as wimpy French wine bottles.
It’s a perfectly agreeable wine but it has no complexity. It smells like you just walked into a tool shed and found a blueberry pie cooling on the shelf. And it is clearly built to stand up to those fat steaks Argies are so fond of. So as a vegetarian I have almost no use for this wine except for one thing. I am usually the only wine drinker in the house, so I drink a bottle over the course of a few days. This wine is so huge that it tastes exactly the same three days later as it did when I opened the bottle.
Now I have discovered a second thing. It makes a perfect pairing with dark Belgian chocolate. The chocolate masks some of the oak and dark fruit flavors and allows the slight acidity and strawberry flavors to come out and those perfectly complement the chocolate. These aspects are typical Argentinian Malbec so I would bet this pairing would work with any you can get your hands on.
I found this out because my daughters came home from a birthday party bringing a box of Belgian dark chocolate. The birthday girl’s father is a friend of ours who is Belgian, so that explains the chocolate. Now, his wife is from Argentina, and her father is a winemaker and yes, his best wine is a Malbec. So the pairing works on many levels.
That’s the title of David Mitchell’s forthcoming novel. It’s been a few years since Black Swan Green, his last. Here’s the blurb on Amazon:
The author of Cloud Atlas’s most ambitious novel yet, for the readers of Ishiguro, Murakami, and, of course, David Mitchell.
The year is 1799, the place Dejima, the “high-walled, fan-shaped artificial island” that is the Japanese Empire’s single port and sole window to the world. It is also the farthest-flung outpost of the powerful Dutch East Indies Company. To this place of superstition and swamp fever, crocodiles and courtesans, earthquakes and typhoons, comes Jacob de Zoet. The young, devout and ambitious clerk must spend five years in the East to earn enough money to deserve the hand of his wealthy fiancée. But Jacob’s intentions are shifted, his character shaken and his soul stirred when he meets Orito Aibagawa, the beautiful and scarred daughter of a Samurai, midwife to the island’s powerful magistrate. In this world where East and West are linked by one bridge, Jacob sees the gaps shrink between pleasure and piety, propriety and profit. Magnificently written, a superb mix of historical research and heedless imagination, The Thousand Autumns of Jacob de Zoet is a big and unforgettable book that will be read for years to come.
Here’s a review from someone who got a pre-release copy. It’ll be released June 29, 2010 and I’ve pre-ordered already. (If you are looking for something to read and haven’t read these already, I recommend Number9Dream and Cloud Atlas.)
From the fantastic blog, Letters of Note.
Circa 1986, Jeremy Stone (then-President of the Federation of American Scientists) asked Owen Chamberlain to forward to him any ideas he may have which would ‘make useful arms control initiatives’. Chamberlain – a highly intelligent, hugely influential Nobel laureate in physics who discovered the antiproton – responded with the fantastic letter seen below, the contents of which I won’t mention for fear of spoiling your experience. Unfortunately, although I can’t imagine the letter to be anything but satirical, I’m uninformed when it comes to Chamberlain’s sense of humour and have no way of verifying my belief. Even the Bancroft Library labels it as ‘possibly tongue-in-cheek’.
In this video, Steve Levitt and Stephen Dubner talk about their finding that you are 8 times more likely to die walking drunk than driving drunk.
Levitt says this:
“anybody could have done it, it took us about 5 minutes on the internet trying to figure out what some of the statistics were… and yet no one has ever talked or thought about it and I think that’s the power of ideas… ways of thinking about the world differently that we are trying to cultivate with our approach to economics.”
Dubner cites the various ways a person could die walking drunk:
- step off the curb into traffic.
- mad dash across the highway.
- lie down and take a nap in the road.
Which leads him to see how obvious it is ex post that drunk walking is so much more dangerous than drunk driving.
I thought a little about this and it struck me that riding a bike while drunk should be even more dangerous than walking drunk. I could
- roll or ride off a curb into traffic.
- try to make a mad dash across an intersection.
- get off my bike so that I can lie down in the road to take a nap.
plus so many other dangerous things that I can do on my bike but could not do on foot. And what the hell, I have 5 minutes of time and the internet, so I thought I would do a little homegrown freakonomics to test this out. Here is an excerpt from their book explaining how they calculated the risk of death by drunk walking.
Let’s look at some numbers. Each year, more than 1,000 drunk pedestrians die in traffic accidents. They step off sidewalks into city streets; they lie down to rest on country roads; they make mad dashes across busy highways. Compared with the total number of people killed in alcohol-related traffic accidents each year–about 13,000–the number of drunk pedestrians is relatively small. But when you’re choosing whether to walk or drive, the overall number isn’t what counts. Here’s the relevant question: on a per-mile basis, is it more dangerous to drive drunk or walk drunk?
The average American walks about a half-mile per day outside the home or workplace. There are some 237 million Americans sixteen and older; all told, that’s 43 billion miles walked each year by people of driving age. If we assume that 1 of every 140 of those miles are walked drunk–the same proportion of miles that are driven drunk–then 307 million miles are walked drunk each year.
Doing the math, you find that on a per-mile basis, a drunk walker is eight times more likely to get killed than a drunk driver.
I found the relevant statistics for cycling here, on the internet. I calculate as follows. Estimates range between 6 and 21 billion miles traveled by bike in a year. Let’s call it 13 billion. If we assume that 1 out of every 140 of these miles is cycled drunk, then that gives about 92 million drunk-cycling miles. There are about 688 cycling-related deaths per year (average for the years 2000-2004). Nearly 1/5 of these involve a drunk cyclist (this is for the year 1996, the only year the data mentions). So that’s about 137 dead drunk cyclists per year.
When you do the math you find that there are about 1.5 deaths per every million miles cycled drunk. By contrast, Levitt and Dubner calculate about 3.3 deaths per every million miles walked drunk.
Is walking drunk more dangerous than biking drunk?
Here is another piece of data. Overall (drunk or not) the fatality rate (on a per-mile basis) is estimated to be between 3.4 and 11 times higher for cyclists than motorists. From Levitt and Dubner’s conclusion that drunk walking is 8 times more dangerous than drunk driving we can infer that there are about .4 deaths per million miles driven drunk. That means that the fatality rate for drunk cyclists is only about 3.8 times higher than for drunk motorists.
That is, the relative riskiness of biking versus driving is unaffected (or possibly attenuated) by being drunk. But while walking is much safer than driving overall, according to Levitt and Dubner’s method, being drunk reverses that and makes walking much more dangerous than both biking and driving.
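The whole back-of-the-envelope chain can be reproduced in a few lines. Every input is one of the rounded figures quoted above, so the outputs are rough approximations; note that with these inputs the final ratio lands closer to 3.6 than 3.8, well within the slop of the estimates:

```python
# Back-of-the-envelope rates from the text; all inputs are rounded
# figures quoted above, not authoritative statistics.

DRUNK_SHARE = 1 / 140                    # assumed share of miles traveled drunk

# Walking (Levitt & Dubner's numbers)
drunk_walk_miles = 307e6                 # 43 billion miles walked * 1/140
walk_rate = 1000 / (drunk_walk_miles / 1e6)      # deaths per million drunk miles

# Cycling (numbers from the text)
drunk_cycle_miles = 13e9 * DRUNK_SHARE   # midpoint of the 6-21 billion range
drunk_cycle_deaths = 688 / 5             # ~1/5 of ~688 annual cycling deaths
cycle_rate = drunk_cycle_deaths / (drunk_cycle_miles / 1e6)

# Drunk-driving rate implied by the "8 times more dangerous" claim
drive_rate = walk_rate / 8

print(round(walk_rate, 1), round(cycle_rate, 1),
      round(cycle_rate / drive_rate, 1))   # prints 3.3 1.5 3.6
```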
There are a few other ways to interpret these data which do not require you to believe the implication in the previous paragraph.
- There was no good reason to extrapolate the drunk rate of 1 out of every 140 miles traveled from driving (where it’s documented) to walking and biking (where we are just making things up).
- Someone who is drunk and chooses to walk is systematically different than someone who is drunk and chooses to drive. They are probably not going to and from the same places. They probably have different incomes and different occupations. Their level of intoxication is probably not the same. This means in particular that the fatality rate of drunk walkers is not the rate that would be faced by you and me if we were drunk and decided to walk instead of drive. To put it yet another way, it is not drunk walking that is dangerous. What is dangerous is having the characteristics that lead you to choose to walk drunk.
These ideas, especially the second one, were the hallmark of Levitt’s academic work and even the work documented in Freakonomics. His reputation was built on carefully applying ideas like these to uncover exciting and surprising truths in data. But he didn’t apply these ideas to his study of drunk walking. Of course, my analysis is no better. I just copied some numbers off a page I found on the internet and applied the Levitt-Dubner calculation. It only took me 5 minutes. (And I would appreciate it if someone could check my math.) But then again, I am not trying to support a highly dubious and dangerous claim:
So as you leave your friend’s party, the decision should be clear: driving is safer than walking. (It would be even safer, obviously, to drink less, or to call a cab.) The next time you put away four glasses of wine at a party, maybe you’ll think through your decision a bit differently. Or, if you’re too far gone, maybe your friend will help sort things out. Because friends don’t let friends walk drunk.
Via The Volokh Conspiracy, I enjoyed this discussion of the NFL instant replay system. A call made on the field can only be overturned if the replay reveals conclusive evidence that the call was in error. Legal scholarship has debated the merits of such a system of appeals relative to the alternative of de novo review: the appellate body considers the case anew and is not bound by the decision below.
If standards of review are essentially a way of allocating decisionmaking authority between trial and appellate courts based on their relative strengths, then it probably makes sense that the former get primary control over factfinding and trial management (i.e., their decisions on those matters are subject only to clear error or abuse of discretion review), while the latter get a fresh crack at purely “legal” issues (i.e., such issues are reviewed de novo). Heightened standards of review apply in areas where trial courts are in the best place to make correct decisions.
These arguments don’t seem to apply to instant replay review. The replay presumably is a better document of the facts than the realtime view of the referee. But not always. Perhaps the argument in favor of deference to the call on the field is that it allows the final verdict to depend on the additional evidence from the replay only when the replay angle is better than that of the referee.
That argument works only if we hold constant the judgment of the referee on the field. The problem is that the deferential system alters his incentives due to the general principle that it is impossible to prove a negative. For example consider the (reviewable) call of whether a player’s knee was down due to contact from an opposing player. Instant replay can prove that the knee was down but it cannot prove the negative that the knee was not down. (There will be some moments when the view is obscured, we cannot be sure that the angle was right, etc.)
Suppose the referee on the field is not sure and thinks that with 50% probability the knee was down. Consider what happens if he calls the runner down by contact. Because it is impossible to prove the negative, the call will almost surely not be overturned and so with 100% probability the verdict will be that he was down (even though that is true with only 50% probability.)
Consider instead what happens if the referee does not blow the whistle and allows the play to proceed. If the call is challenged and the knee was in fact down, then the replay will very likely reveal that. If not, not. The final verdict will be highly correlated with the truth.
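A small Monte Carlo makes the asymmetry concrete. The probabilities here are illustrative assumptions, not data: the knee is down 50% of the time, and a replay conclusively shows a down knee 90% of the time but can never prove the knee was not down.

```python
import random

def verdict_accuracy(call_down_when_unsure, trials=100_000, seed=0,
                     p_down=0.5, p_reveal=0.9):
    """Illustrative sketch (assumed, not measured, probabilities): how often
    does the final verdict match the truth under deferential review?"""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        down = rng.random() < p_down
        # Replay can conclusively show a down knee, never a not-down knee.
        replay_shows_down = down and rng.random() < p_reveal
        if call_down_when_unsure:
            # "Down" stands unless conclusively overturned, and the
            # negative cannot be proven, so the verdict is always "down".
            verdict = True
        else:
            # "Not down" stands unless the replay conclusively shows down.
            verdict = replay_shows_down
        correct += (verdict == down)
    return correct / trials

print(verdict_accuracy(True), verdict_accuracy(False))   # ~0.5 vs ~0.95
```

Under these assumptions the non-call tracks the truth about 95% of the time, while calling the runner down when unsure is right only half the time.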
So the deferential system means that a field referee who wants the right decision made will strictly prefer a non-call when he is unsure. More generally this means that his threshold for making a definitive call is higher than what it would be in the absence of replay. This probably could be verified with data.
On the other hand, de novo review means that, conditional on review, the call made on the field has no bearing. This means that the referee will always make his decision under the assumption that his decision will be the one enforced. That would ensure he has exactly the right incentives.
Each Christmas my wife attends a party where a bunch of suburban erstwhile party-girls get together and A) drink and B) exchange ornaments. Looking for any excuse to get invited to hang out with a bunch of drunk soccer-moms, every year I express sincere scientific interest in their peculiar mechanism of matching porcelain trinket to plastered Patricia. Alas I am denied access to their data.
So theory will have to do. Here is the game they play. Each dame brings with her an ornament wrapped in a box. The ornaments are placed on a table and the ladies are randomly ordered. The first mover steps to the table, selects an ornament and unboxes it. The next in line has a choice. She can steal the ornament held by her predecessor or she can select a new box and open it. If she steals, then #1 opens another box from the table. This concludes round 2.
Lady #N has a similar choice. She can steal any of the ornaments currently held by Ladies 1 through N-1 or open a new box. Anyone whose ornament is stolen can steal another ornament (she cannot take back the one just taken from her) or return to the table. Round N ends when someone chooses to take a new box rather than steal.
The game continues until all of the boxes have been taken from the table. There is one special rule: if someone steals the same ornament on 3 different occasions (because it has been stolen from her in the interim) then she keeps that ornament and leaves the market (to devote her full attention to the eggnog).
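For anyone who wants to poke at the game numerically, here is a hypothetical simulation. The i.i.d. uniform private values and the steal-if-it-beats-a-fresh-box heuristic are my own assumptions, not a claim about how the ladies actually play (and certainly not the backward-induction optimum):

```python
import random

def simulate(n, seed=0, threshold=0.5):
    """Simulate the ornament game with a naive greedy heuristic: steal the
    best visible ornament if it beats the expected value of a fresh box
    (0.5 under uniform values), otherwise open a new box."""
    rng = random.Random(seed)
    value = [[rng.random() for _ in range(n)] for _ in range(n)]  # value[p][o]
    boxes = list(range(n))                 # unopened ornaments
    held = {}                              # player -> ornament currently held
    steals = [[0] * n for _ in range(n)]   # steals[p][o] = times p stole o
    frozen = set()                         # players locked in by the 3-steal rule

    def act(player, banned):
        # Ornaments this player may steal: held by an unfrozen player,
        # excluding the ornament just taken from this player.
        options = [(value[player][held[q]], q) for q in held
                   if q not in frozen and held[q] != banned]
        if options and max(options)[0] > threshold:
            _, victim = max(options)
            o = held.pop(victim)
            held[player] = o
            steals[player][o] += 1
            if steals[player][o] >= 3:
                frozen.add(player)         # keeps it and leaves the market
            return victim, o               # victim moves next; can't steal o back
        held[player] = boxes.pop()         # otherwise open a fresh box
        return None, None

    for p in range(n):                     # round p: player p moves first
        mover, banned = p, None
        while mover is not None:
            mover, banned = act(mover, banned)
    return sum(value[p][held[p]] for p in held) / n  # average realized value
```

Averaging over many plays, the greedy stealers end up with ornaments worth more than the 0.5 they would expect from keeping whatever box they opened, which at least hints at where the efficiency question gets interesting.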
Theoretical questions:
- Does this mechanism produce a Pareto efficient allocation?
- Since this is a perfect-information game (with chance moves) it can be solved by backward induction. What is the optimal strategy?
- How can this possibly be more fun than quarters?
First in a continuing series.
When you learn to snowboard you make a commitment before you begin whether you will ride regular or “goofy.” Goofy means you place your right foot in front. As the name suggests, goofy footers are the minority. Since snowboard bindings must be fixed in place for regular or goofy foot, somebody who doesn’t know whether they are goofy will more likely start out regular (because it’s the best guess and because most rental boards will be set up for regular foot). Even if they are naturally goofy, once they’ve invested a day learning, they are unlikely to switch and try it out and so may never know.
A surfboard is foot-neutral. Anybody can ride any given surfboard whether they are regular or goofy. And jumping up on a surfboard for the first time happens so fast that you have no time to even think which foot is going forward. So a surfer is very likely to find his natural footing early on, unless he learned to snowboard already in which case he will naturally jump to the footing he is used to.
We should be able to see the effect of this by comparing a regular-foot snowboarder who first learned to surf with a regular-foot surfer who first learned to snowboard. The first guy is going to be better at snowboarding than the second guy is at surfing.
What will be the comparison for goofies?
Paul Bley is the most influential jazz pianist you have never heard of. And it’s not because he is an abstract, inaccessible, musician’s musician. His playing is as lyrical and straightforward as Keith Jarrett’s. Go to Pandora and create a Paul Bley station. Here, I did it for you.
Ethan Iverson wrote an essay on Paul Bley, focusing especially on an album entitled Footloose! (which I have never heard. I have stuck mostly to his solo stuff.)
Not just Jarrett and Corea but a whole generation of mostly caucasian post-1970 NYC jazz pianists checked out Footloose!: Richie Beirach, Joanne Brackeen, Jim McNeely, Marc Copland, Kenny Werner, Fred Hersch, etc., all seem to have made room for Paul Bley to hang at the same table that Bill Evans presides over. Bley’s peers Steve Kuhn and Denny Zeitlin seem to have paid attention, too. I suspect that not all these comparatively straight-ahead musicians paid the same kind of attention to more hardcore classic Bley albums like the ferocious Barrage or the minimalist Ballads. But since Footloose! is so swinging, it has always been interesting to just about everybody. Indeed, I believe that Bley’s influence crossed the color line with Geri Allen in the 80s and that now he is considered a resource for any curious musician regardless of background.
The article is typically brilliant Iverson writing, but this bit was just precious:
When I finally met Paul Bley a couple of years ago, I was about to go onstage with his old associate Charlie Haden. Bley was rather chilly at first handshake. These days he’s a famous contrarian, and I sensed I needed to not grovel but respond in kind. I leaned into him and told him, viciously, “I had all your records at one point. But you know what? I can’t play like you, and why would I want to? I gave all your records away when I was 24. I turned my back.”
Bley looked astonished, but then he grinned. “I’m glad you got rid of all my records, that’s what I tell all pianists to do.”
I responded, “Yeah…good. Well, recently I got some of your records, and I decided to love you again.”
Bley said, “That was a mistake. Get rid of my records.”
Auction sites are popping up all over the place with new ideas about how to attract bidders with the appearance of huge bargains. The latest format I have discovered is the “lowest unique bid” auction. It works like this. A car is on the auction block. Bidders can submit as many bids as they wish, in penny increments, from one penny up to possibly some upper bound. The bids are sealed until the auction is over. The winning bid is the lowest among all unique bids. That is, if you bid 2 cents and nobody else bids 2 cents, but more than one person bid 1 cent, then you win the car for 2 cents.
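The winner-selection rule is simple enough to state in a few lines of code. This is just a sketch of the rule as described above; the function name is mine:

```python
from collections import Counter

def lowest_unique_bid(bids):
    """Resolve a lowest-unique-bid auction: the winner is the smallest bid
    (in cents) submitted by exactly one bidder, or None if no bid is unique."""
    counts = Counter(bids)
    unique = [bid for bid, n in counts.items() if n == 1]
    return min(unique) if unique else None

# The example from the text: two bidders at 1 cent, one bidder at 2 cents.
print(lowest_unique_bid([1, 1, 2]))   # prints 2
```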
In some cases you pay for each bid but in some cases bids are free and you pay only if you win. Here is a site with free bidding. An iPod shuffle sold for $0.04. Here is a site where you pay to bid. The top item up for sale is a new house. In that auction you pay ~$7 per bid and you are not allowed to bid more than $2,000. A house for no more than $2,000, what a deal!
I suppose the principle here is that people are attracted by these extreme bargains and ignore the rest of the distribution. So you want to find a format which has a high frequency of low winning bids. On this dimension the lowest unique bid seems even better than penny auctions.
Caubeen curl: Antonio Merlo.