
Every few years, a fad comes along that takes the business world by storm. Jack Welch loved Six Sigma; others look for “synergies,” “core competencies,” “blue-sky strategies,” and so on. These fads usually involve over-generalization from a key example or set of examples.

Occasionally, a nay-sayer identifies the over-generalization. Jill Lepore has an article in the New Yorker that goes further by debunking even some of the key examples that underlie the theory of “disruptive innovation” of Clayton Christensen. What is disruptive innovation? Lepore describes it thus:

Manufacturers of mainframe computers made good decisions about making and selling mainframe computers and devising important refinements to them in their R. & D. departments—“sustaining innovations,” Christensen called them—but, busy pleasing their mainframe customers, one tinker at a time, they missed what an entirely untapped customer wanted, personal computers, the market for which was created by what Christensen called “disruptive innovation”: the selling of a cheaper, poorer-quality product that initially reaches less profitable customers but eventually takes over and devours an entire industry.

Another key example for Christensen is the disk-drive industry. Lepore follows the key companies and concludes:

As striking as the disruption in the disk-drive industry seemed in the nineteen-eighties, more striking, from the vantage of history, are the continuities. Christensen argues that incumbents in the disk-drive industry were regularly destroyed by newcomers. But today, after much consolidation, the divisions that dominate the industry are divisions that led the market in the nineteen-eighties. (In some instances, what shifted was their ownership: I.B.M. sold its hard-disk division to Hitachi, which later sold its division to Western Digital.) In the longer term, victory in the disk-drive industry appears to have gone to the manufacturers that were good at incremental improvements, whether or not they were the first to market the disruptive new format. Companies that were quick to release a new product but not skilled at tinkering have tended to flame out.

Josh Gans finds the Lepore takedown to be easy pickings and also does a great job explaining why Christensen’s attempt to make his theory predictive contradicted the essence of his own argument. While the takedown does not surprise Gans, it irritates the tech community:

@pmarca: What does Jill Lepore PhD in American Studies from Yale think about Bayesian algorithmic filtering?

To which I replied: “What does Clayton Christensen DBA at Harvard know about ….?” In other words, both are equally qualified/unqualified to discuss innovation. Also, why not attack Lepore’s argument rather than her?

But I have my own bone to pick with disruptive innovation. Let’s say an incumbent firm has a great product and buys into the disruptive innovation idea. What should it do? Since its core product is under threat of disruption, it seems the company should disrupt the product itself and invest in all sorts of technologies that look weak right now but might improve dramatically. But this does not make sense, because it implies huge costs with little expected gain: most crappy-looking initial ideas do in fact end up on the shelf. On the other hand, not investing opens the company up to disruption. To make the theory operational, we need to understand the tradeoffs. For that, you need a toy model of some sort.

The obvious candidate for such a model is Ken Arrow’s (1962) idea of the “replacement effect” (the term was coined by Tirole). (We teach related material in our MECN 441 Competitive Strategy elective.) The profits from a new invention that supersedes the incumbent’s old product will replace the profits from the old product. Hence, the bigger the profits from the old product, the smaller the incentives to innovate. You would destroy your own profits, so there is no need to make a better Rice Krispies when the existing one is doing great. Past success rationally constrains incentives for future innovation. This theory predicts that incumbents innovate less than entrants, who have no existing profit flow to replace. A bit like Christensen’s theory, no?  Arrow pre-disrupted Christensen’s main thesis, but based on rational choice analysis and with a coherent argument for assessing new investments (roughly, compare the expected NPV of the current product with the expected NPV of the new one minus the cost of investment).
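Arrow’s comparison can be put into a toy calculation (all numbers below are invented for illustration): the incumbent nets out the profit stream its own innovation would destroy, while an entrant does not.

```python
# Toy sketch of Arrow's replacement effect (all numbers invented).
def npv(profit_per_year, years=10, r=0.1):
    """Discounted value of a constant profit stream."""
    return sum(profit_per_year / (1 + r) ** t for t in range(1, years + 1))

cost = 50          # cost of developing the new product
old_profit = 20    # incumbent's annual profit from the old product
new_profit = 30    # annual profit from the superseding product

# The entrant weighs the new product's NPV against the cost alone.
entrant_gain = npv(new_profit) - cost

# The incumbent's new product cannibalizes ("replaces") the old one,
# so its incremental gain nets out the profits it already earns.
incumbent_gain = (npv(new_profit) - npv(old_profit)) - cost

print(entrant_gain > incumbent_gain)  # -> True: the entrant has more to gain
```

The gap between the two incentives is exactly the NPV of the old profit stream, which is why bigger current profits mean weaker incentives to innovate.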

As MOOCs come along, Christensen’s employer HBS has to decide how to proceed. The tradeoff is clear and quite similar to Arrow’s point:

Universities across the country are wrestling with the same question — call it the educator’s quandary — of whether to plunge into the rapidly growing realm of online teaching, at the risk of devaluing the on-campus education for which students pay tens of thousands of dollars, or to stand pat at the risk of being left behind.

Ironically, HBS has decided not to side with Christensen but with Porter who sees no major disruption:

“Do it cheap and simple,” Professor Christensen says. “Get it out there.”

But Harvard Business School’s online education program is not cheap, simple, or open. It could be said that the school opted for the Porter theory. Called HBX, the program will make its debut on June 11 and has its own admissions office. Instead of attacking the school’s traditional M.B.A. and executive education programs — which produced revenue of $108 million and $146 million in 2013 — it aims to create an entirely new segment of business education: the pre-M.B.A.

 

Ed O’Bannon’s anti-trust suit against the NCAA moves forward today. Roger Noll of Stanford is likely to testify on his behalf. Here is a sample of  his views from a related case:

[R]esearch in the economics of sports concluded long ago that the only way to achieve competitive parity among schools was to randomly allocate athletes and coaches among teams and prohibit athletes and coaches from switching after they have been allocated. With an unfettered competitive market for coaches and freedom of choice among student-athletes, the expected result is that the colleges with the most revenue will hire the best coaches and build the best facilities, and that as a result they will attract the best student-athletes. Interestingly, a market for student-athletes actually could improve competitive balance. If teams can pay different amounts to different students, a lesser school may find that it is willing to pay more for its first five-star athlete than Alabama or USC is willing to pay for its tenth five-star athlete. If so, the lesser schools could be somewhat more successful than they are now in recruiting top players. But even in the best of circumstances, as long as coaches and athletes have a choice, the colleges with the most to spend will have the best teams. The main effect of the scholarship limits in comparison to a market allocation is to transfer wealth from student-athletes to expenditures on coaches and facilities.

Full testimony can be found here.

These are my thoughts and not those of Northwestern University, Northwestern Athletics, the Northwestern football team, nor of the Northwestern football players.

  1. As usual, the emergence of a unionization movement is the symptom of a problem rather than the cause.  Also as usual, a union is likely to only make the problem worse.
  2. From a strategic point of view the NCAA has made a huge blunder in not making a few pre-emptive moves that would have removed all of the political momentum this movement might eventually have.  Few in the general public are ever going to get behind the idea of paying college athletes.  Many however will support the idea of giving college athletes long-term health insurance and guaranteeing scholarships to players who can no longer play due to injury.  Eventually the NCAA will concede on at least those two dimensions.  Waiting to be forced into it by a union or the threat of a union will only lead to a situation which is far worse for  the NCAA in the long run.
  3. The personalities of Kain Colter and Northwestern football add to the interest in the case because as Rodger Sherman points out Northwestern treats its athletes better than just about any other university and Kain Colter is on record saying he loves Northwestern and his coaches.  But these developments are bigger than the individuals involved. They stem from economic forces that were going to come to a head sooner or later anyway.
  4. Before taking sides, take the following line of thought for a spin.  If today the NCAA lifted restrictions on player compensation, tomorrow all major athletic programs and their players would mutually, voluntarily enter into agreements where players were paid in some form or another in return for their commitment to the team.  We know this because those programs are trying hard to do exactly that every single year.  We call those efforts recruiting violations.
  5. Once that is understood, it is clear that to support the NCAA’s position is to support restricting trade that its member schools and student athletes reveal, year after year, that they want very much.  When you hear that universities oppose removing those restrictions, you understand that what they really oppose is removing those restrictions for their opponents.  In other words, the NCAA is imposing a collusive arrangement because the NCAA has a claim to a significant portion of the rents from collusion.
  6. Therefore, in order to take a principled position against these developments you must point to some externality that makes this the exceptional case where collusion is justified.
  7. For sure, “Everyone will lose interest in college athletics once the players become true professionals” is a valid argument along these lines.  Indeed it is easy to write down a model where paying players destroys the sport and yet the only equilibrium is all teams pay their players and the sport is destroyed.
  8. However, the statement in quotes above is almost surely false. Professional sports are pretty popular. And anyway this kind of argument is usually just a way to avoid thinking seriously about tradeoffs and incremental changes. For example, how many would lose interest in college athletics if tomorrow football players were given a 1% stake in total revenue from the sale of tickets to see them play?
  9. My summary of all this would be that there are clearly desirable compromises that could be found but the more entrenched the parties get the smaller will be the benefits of those compromises when they eventually, inevitably, happen.

Quite disturbing even though you know no volts are coursing through the subject’s body.

I just saw Malcolm Gladwell on The Daily Show.  Apparently his book David and Goliath is about how it can actually be an advantage to have some kind of disadvantage.  He mentioned, for example, that a lot of really successful people are dyslexic.

But it’s either an absurdity or just a redefinition of terms to say that disadvantages can be advantageous.  The evidence appears to be a case of sample selection bias. Here’s a simple model. Everyone chooses between two activities/technologies. There is a safe technology, think of it as wage labor, that pays a certain return to everybody except the disadvantaged. The disadvantaged would earn a significantly lower return from the safe technology because of their disadvantage.

Then there is another technology which is highly risky. Think of it as entrepreneurship. There is free entry, but only a randomly selected tiny fraction of entrants succeed and earn returns exceeding the safe technology’s. Everyone else fails and earns nothing. Free entry means that the expected return (or utility thereof) must be lower than the safe technology’s, else all the advantaged would abandon the latter.

The disadvantaged take risks because of their disadvantage and a small fraction of them succeed.  All of the highly successful people have “advantageous” disadvantages.
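The selection story can be simulated in a few lines (every parameter below is invented): nobody’s disadvantage helps them directly, yet conditioning on high success makes it look as if it did.

```python
# Sample-selection sketch: all parameters invented for illustration.
import random

random.seed(0)

SAFE_WAGE = 100      # safe return for the advantaged
PENALIZED_WAGE = 30  # lower safe return for the disadvantaged
P_SUCCESS = 0.02     # tiny fraction of entrepreneurs who succeed
JACKPOT = 2000       # payoff to a successful entrepreneur
# Expected risky return = 0.02 * 2000 = 40: below 100 (so the advantaged
# stay in wage labor) but above 30 (so the disadvantaged take the gamble).

population = [{"disadvantaged": random.random() < 0.2}
              for _ in range(100_000)]

for person in population:
    if person["disadvantaged"]:
        person["income"] = JACKPOT if random.random() < P_SUCCESS else 0
    else:
        person["income"] = SAFE_WAGE

top_earners = [p for p in population if p["income"] > SAFE_WAGE]
# Every highly successful person is disadvantaged -- pure selection,
# not because the disadvantage itself conferred any benefit.
print(all(p["disadvantaged"] for p in top_earners))  # -> True
```

The disadvantage only determines who enters the lottery; the lottery, not the disadvantage, produces the stars.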

Some people were asked to name their favorite number, others were asked to give a random number:

More here.  Via Justin Wolfers.

I liked this account very much:

there are two ways of changing the rate of mismatches. The best way is to alter your sensitivity to the thing you are trying to detect. This would mean setting your phone to a stronger vibration, or maybe placing your phone next to a more sensitive part of your body. (Don’t do both or people will look at you funny.) The second option is to shift your bias so that you are more or less likely to conclude “it’s ringing”, regardless of whether it really is.

Of course, there’s a trade-off to be made. If you don’t mind making more false alarms, you can avoid making so many misses. In other words, you can make sure that you always notice when your phone is ringing, but only at the cost of experiencing more phantom vibrations.

These two features of a perceiving system – sensitivity and bias – are always present and independent of each other. The more sensitive a system is the better, because it is more able to discriminate between true states of the world. But bias doesn’t have an obvious optimum. The appropriate level of bias depends on the relative costs and benefits of different matches and mismatches.

What does that mean in terms of your phone? We can assume that people like to notice when their phone is ringing, and that most people hate missing a call. This means their perceptual systems have adjusted their bias to a level that makes misses unlikely. The unavoidable cost is a raised likelihood of false alarms – of phantom phone vibrations. Sure enough, the same study that reported phantom phone vibrations among nearly 80% of the population also found that these types of mismatches were particularly common among people who scored highest on a novelty-seeking personality test. These people place the highest cost on missing an exciting call.

From Mind Hacks.
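The sensitivity/bias distinction in the excerpt is standard signal detection theory; here is a small numerical sketch (the distributions and thresholds are invented for illustration).

```python
# Toy signal-detection sketch (distributions and thresholds invented).
from statistics import NormalDist

noise = NormalDist(mu=0.0, sigma=1.0)   # felt sensation when the phone is silent
signal = NormalDist(mu=1.0, sigma=1.0)  # sensation when it really vibrates
# "Sensitivity" is the gap between the two means; "bias" is where you
# place the decision threshold for concluding "it's ringing."

def rates(threshold):
    false_alarms = 1 - noise.cdf(threshold)  # phantom vibrations
    misses = signal.cdf(threshold)           # calls you fail to notice
    return false_alarms, misses

lenient = rates(-0.5)  # biased toward "it's ringing"
strict = rates(1.5)    # biased toward "it's not"
print(lenient, strict)
```

Shifting the threshold only trades misses for false alarms; to improve both at once you need more sensitivity, i.e. a bigger gap between the two distributions.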

It doesn’t make sense that exercise is good for you.  It’s just unnecessary wear and tear on your body. Take the analogy of a car.  Would it make sense to take it out for a drive up and down the block just to “exercise” it?  Your car will survive for only so many miles and you are wasting them with exercise.

But exercise is supposed to pay off in the long run. Sure, you are wasting resources and subjecting your body to potential injury by exercising, but if you survive the exercise you will be stronger as a result. Still this is hard to understand, because it’s your own body that is making itself stronger. Your body is re-allocating resources away from some other use in order to build muscle. If that’s such a good thing to do, why doesn’t your body just do it anyway? Why do you first have to weaken yourself and risk injury before your body begrudgingly does this thing that it should have done in the first place?

It must be an agency issue. Your body can either invest resources in making you stronger or use them for something else. The problem for your body is knowing which to do, i.e. when the environment is such that the investment will pay off. The physiological processes evolved over too long and old a time frame for them to be well-tuned to the minute changes in the environment that determine when the investment is a good one.  Your body needs a credible signal.

Physical exercise is that signal.  Before people started doing it for fun, more physical activity meant that your body was in a demanding environment and therefore one in which the rewards from a stronger body are greater. So the body optimally responds to increased exercise by making itself stronger.

Under this theory, people who jog or cycle or play sports just to “stay fit” are actually making themselves less healthy overall. True they get stronger bodies but this comes at the expense of something else and also entails risk. The diversion of resources and increased risk are worth it only when the exercise signals real value from physical fitness.

My friend and Berkeley grad school classmate Gary Charness posted this on Facebook:

It has finally happened. This could be a world record. I now have 63 published and accepted papers at the age of 63. I doubt that there is anyone who *first* matched their (positive) age at a higher age. Not bad given that my first accepted paper was in 1999. I am very pleased !!

Note that Gary is setting a very strict test here.  Draw a graph with age on the horizontal axis and publications on the vertical.  Take any economist and plot publications by age.  It’s already a major accomplishment for this plot to cross the 45 degree line at some point.  It’s yet another for it to still be above the 45 degree line at age 63.  But it’s absolutely astounding that Gary’s plot first crossed the 45 degree line at age 63.

(Yes Gary was my classmate at Berkeley when I was 20-something and he was 40-something.)

The less you like talking on the phone the more phone calls you should make.  Assuming you are polite.

Unless the time of the call was pre-arranged, the person placing the call is always going to have more time to talk than the person receiving it, simply because the caller is the one choosing when to call.  So if you receive a call but are too polite to make an excuse to hang up, you are going to be stuck talking for a while.

So in order to avoid talking on the phone you should always be the one making the call.  Try to time it carefully.  It shouldn’t be at a time when your friend is completely unavailable to take your call because then you will have to leave a voicemail and he will eventually call you back when he has plenty of time to have a nice long conversation.

Ideally you want to catch your friend when they are just flexible enough to answer the phone but too busy to talk for very long.  That way you meet your weekly quota of phone calls at minimum cost in terms of time actually spent on the phone.  What could be more polite?

Matthew Rabin was here last week presenting his work with Erik Eyster about social learning. The most memorable theme of their papers is what they call “anti-imitation.” It’s the subtle incentive to do the opposite of someone in your social network even if you have the same preferences and there are no direct strategic effects.

You are probably familiar with the usual herding logic. People in your social network have private information about the relative payoff of various actions. You see their actions but not their information. If their action reveals they have strong information in favor of it you should copy them even if you have private information that suggests doing the opposite.

Most people who know this logic probably equate social learning with imitation and eventual herding. But Eyster and Rabin show that the same social learning logic very often prescribes doing the opposite of people in your social network. Here is a simple intuition. Start with a different, but simpler problem.  Suppose that your friend makes an investment and his level of investment reveals how optimistic he is. His level of optimism is determined by two things, his prior belief and any private information he received.

You don’t care about his prior; it doesn’t convey any information that’s useful to you, but you do want to know what information he got. The problem is that the prior and the information are entangled, and just by observing his investment you can’t tease out whether he is optimistic because he was optimistic a priori or because he got some bullish information.

Notice that if somebody comes and tells you that his prior was very bullish, this will lead you to downgrade your own level of optimism. Because holding his final beliefs fixed, the more optimistic his prior was, the less optimistic his new information must have been, and it’s that new information that matters for your beliefs. You want to do the opposite of his prior.

This is the basic force behind anti-imitation. (By the way I found it interesting that the English language doesn’t seem to have a handy non-prefixed word that means “doing the opposite of.”) Suppose now your friend got his prior beliefs from observing his friend. And now you see not only your friend’s investment level but his friend’s too. You have an incentive to do the opposite of his friend for exactly the same reason as above.

This assumes his friend’s action conveys no information of direct relevance for your own decision. And that leads to the prelim question. Consider a standard herding model where agents move in sequence first observing a private signal and then acting.  But add the following twist. Each agent’s signal is relevant only for his action and the action of the very next agent in line.  Agent 3 is like you in the example above.  He wants to anti-imitate agent 1. But what about agents 4,5,6, etc?
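The prior-inversion logic above can be sketched with a simple linear rule (the weight is invented): holding your friend’s action fixed, a more bullish prior implies a less bullish signal, so your inference moves opposite to the prior.

```python
# Linear sketch of anti-imitation (weight w invented for illustration).
# Your friend's investment m mixes his prior p (inherited from HIS
# friend's action) with his private signal s:  m = (1 - w) * p + w * s.
w = 0.6  # weight the friend puts on his own signal

def inferred_signal(m, p):
    # Inverting the friend's rule: holding m fixed, a more bullish
    # prior p implies a LESS bullish signal s. The coefficient on p
    # is negative -- you "anti-imitate" the friend's friend.
    return (m - (1 - w) * p) / w

m = 10.0  # friend's observed investment level
print(inferred_signal(m, p=5.0))
print(inferred_signal(m, p=8.0))  # higher prior -> lower inferred signal
```

The negative coefficient on p is the whole effect: the friend’s friend’s action enters your belief with a minus sign even though everyone is rational and has the same preferences.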

If you are like me and you believe that thinking is a better path to success than not thinking, it’s hard not to take it personally when an athlete or other performer who is choking is said to be “overthinking it.” He needs to get “untracked.” And if he does and reaches peak performance he is said to be “unconscious.”

There are experiments that seem to confirm the idea that too much thinking harms performance.  But here’s a model in which thinking always improves performance and which is still consistent with the empirical observation that thinking is negatively correlated with performance.

In any activity we rely on two systems:  one which is conscious, deliberative and requires “thinking.”  The other is instinctive.  Using the deliberative system always gives better results but the deliberation requires the scarce resource of our moment-to-moment attention.  So for any sufficiently complex activity we have to ration the limited capacity of the deliberative system and offload many aspects of performance to pre-programmed instincts.

But for most activities we are not born with an instinctive knowledge of how to do it.  What we call “training” is endless rehearsal of an activity which establishes that instinct.  With enough training, when circumstances demand we can offload the activity to the instinctive system in order to conserve precious deliberation for whatever novelties we are facing which truly require original thinking.

An athlete or performer who has been unsettled, unnerved, or otherwise knocked out of his rhythm finds that his instinctive system is failing him.  The wind is playing tricks with his toss and so his serve is falling apart.  Fortunately for him he can start focusing his attention on his toss and his serve and this will help.  He will serve better as a result of overthinking his serve.

But there is no free lunch.  The shock to his performance has required him to allocate more than usual of his deliberative resources to his serve and therefore he has less available for other things.  He is overthinking his serve and as a result his overall performance must suffer.

(Conversation with Scott Ogawa.)

Boredom is wasted on the bored

 

I coach my daughter’s U12 travel soccer team. An important skill that a player of this age should be picking up is the instinct to keep her head up when receiving a pass, survey the landscape and plan what to do with the ball before it gets to her feet.  The game has just gotten fast enough that if she tries to do all that after the ball has already arrived she will be smothered before there is a chance.

Many drills are designed to train this instinct and today I invented a little drill that we worked on in the warmups before our game against our rivals from Deerfield, Illinois. The drill makes novel use of a trick from game theory called a jointly controlled lottery.

Imagine I am standing at midfield with a bunch of soccer balls and the players are in a single-file line facing me just outside of the penalty area.  I want to feed them the ball and have them decide as the ball approaches whether they are going to clear it to my left or to my right. In a game situation, that decision is going to be dictated by the position of their teammates and opponents on the field. But since this is just a pre-game warmup we don’t have that.  I could try to emulate it if I had some kind of signaling device on either flank and a system for randomly illuminating one of the signals just after I delivered the feed.  The player would clear to the side with the signal on.

But I don’t have that either and anyway that’s too easy and quick to read to be a good simulation of the kind of decision a player makes in a game.  So here’s where the jointly controlled lottery comes in.  I have two players volunteer to stand on either side of me to receive the clearing pass.  Just as I deliver the ball to the player in line the two girls simultaneously and randomly raise either one hand or two.  The player receiving the feed must add up the total number of hands raised and if that number is odd clear the ball to the player on my left and if it is even clear to the player on my right.

The two girls are jointly controlling a randomization device.  The parity of the number of hands is not under the control of either player.  And if each player knows that the other is choosing one or two hands with 50-50 probability, then each player knows that the parity of the total will be uniformly distributed no matter how that individual player decides to randomize her own hands.

And the nice thing about the jointly controlled lottery in this application is that the player receiving the feed must look left, look right, and think before the ball reaches her in order to be able to make the right decision as soon as it reaches her feet.
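The fairness property of the jointly controlled lottery is easy to verify by enumeration: as long as at least one girl randomizes 50-50 between one hand and two, the parity of the total is a fair coin no matter what the other girl does.

```python
# Verify the jointly controlled lottery: with one girl at 50-50,
# the parity of total hands is uniform regardless of the other's strategy.
from itertools import product

def parity_distribution(p_left, p_right):
    """p_left, p_right: probability each girl raises one hand (vs two)."""
    probs = {"odd": 0.0, "even": 0.0}
    for left, right in product([1, 2], repeat=2):
        p = ((p_left if left == 1 else 1 - p_left)
             * (p_right if right == 1 else 1 - p_right))
        probs["odd" if (left + right) % 2 else "even"] += p
    return probs

# Fix one girl at 50-50 and let the other be arbitrarily biased:
for bias in [0.0, 0.3, 1.0]:
    print(parity_distribution(0.5, bias))  # always {'odd': 0.5, 'even': 0.5}
```

So neither player can steer the outcome alone, which is exactly what makes the device a trustworthy randomizer for the drill.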

We beat Deerfield 3-0.

I loved that show. She died last month. Here are 30 selected episodes. Definitely check out the Keith Jarrett one.

  1. Facebook’s business problem is that it is the social network of people you see in real life.  All the really interesting stuff you want to do and say on the internet is stuff you’d rather not share with those people or even let them know you are doing/saying.
  2. What is the rationale for offsides in soccer that doesn’t also apply to basketball?
  3. If the editors of all the journals were somehow agreeing to publish each others’ papers what patterns would we look for in the data to detect that?
  4. I need to know in advance the topic of the next 3 Gerzensee conferences so that I can start now writing papers on those topics in hopes of getting invited.

Suppose you are writing a referee report and you are recommending that the paper be rejected. You have a long list of reasons. How many should you put in your report? If you put only your few strongest arguments you run the risk that the author (or editor) finds a response to those and accepts the paper.

You will have lost the chance to use your next few strongest arguments to their full effect, even if there is a second round. The reason has to do with a basic friction of rhetoric.  Nobody really knows what’s true or false, but the more you’ve thought about it the better informed you are. So there is always a signaling aspect to rhetoric. Even if the opponent can’t find a counterargument, when it is known that you rank your argument low in terms of persuasiveness, your argument will as a result be in fact less persuasive.  Your ranking reveals that you believe that the probability is high that a counterargument could be found, even if by chance this time it wasn’t.

On the other hand you also don’t want to put all of your arguments down. The risk here is that the author refutes all but your strongest one or two arguments. Then the editor may conclude that your decision to reject was made on the basis of that long list of considerations and now that a large percentage of them have been refuted this seals the case in favor. Had you left out all the weak arguments your case would look stronger.

It may even be optimal to pick a non-interval subset of arguments. That is you might give your strongest argument, leave out the second strongest but include the third strongest. The reason is that you care not just about the probability that any single one of your arguments is refuted but the probability that a large subset of your arguments survive. And here correlation matters. It may be that a refutation of the strongest argument is likely also to partially weaken the second-strongest. You pick the third because it is orthogonal to the first.

Grocery chain Trader Joe’s has opened up a legal can of whup ass on its self-professed “best customer,” Pirate Joe’s.

Vancouver, British Columbia, shopkeeper Michael Hallatt claims to have spent more than $350,000 at Trader Joe’s in the past two years. Trader Joe’s would like him to stop shopping there. What gives?

Hallatt makes frequent drives across the border to shop the U.S. stores, then resells popular Trader Joe’s branded products in his own store, cannily called Pirate Joe’s.

Various commentators are at a loss to explain why Trader Joe’s would cut off its best customer.  But isn’t it obvious?  Trader Joe’s always had the option of opening a store in Vancouver. Because it never did, it must be that it would not be profitable. Now the joint profits between Trader Joe’s and Pirate Joe’s cannot be higher than the profit that Trader Joe’s would have earned if it opened its own store.  At worst Trader Joe’s could just replicate what Pirate Joe’s is doing, but probably it could do it more efficiently.  So Trader Joe’s, which earns only a share of the joint TJ/PJ profit, must be less profitable than it would be if it opened its own store in Vancouver, which it has already calculated to be unprofitable.

Here is a nice essay on the idea that “over thinking” causes choking.  It begins with this study:

A classic study by Timothy Wilson and Jonathan Schooler is frequently cited in support of the notion that experts, when performing at their best, act intuitively and automatically and don’t think about what they are doing as they are doing it, but just do it. The study divided subjects, who were college students, into two groups. In both groups, participants were asked to rank five brands of jam from best to worst. In one group they were asked to also explain their reasons for their rankings. The group whose sole task was to rank the jams ended up with fairly consistent judgments both among themselves and in comparison with the judgments of expert food tasters, as recorded in Consumer Reports. The rankings of the other group, however, went haywire, with subjects’ preferences neither in line with one another’s nor in line with the preferences of the experts. Why should this be? The researchers posit that when subjects explained their choices, they thought more about them.

The upcoming Northwestern home game versus Ohio State has now sold out. Danny Ecker at Crain’s Chicago Business has the post-mortem:

Sales so far show the school was effective in its experimental “Purple Pricing” offer for about 5,000 single-game seats for the game.

The modified Dutch auction system, which guarantees that buyers don’t pay any more for tickets than anyone else in their section, ended up selling out at $195, $151 and $126 for seats on the sideline, corner and end zones, respectively.

On the secondary market, sideline seats have sold for an average of $190, corner seats for $135 and end-zone seats for $127. That suggests that fans haven’t been able to flip them for a profit — at least, not yet.

Suppose you and a friend of the opposite sex are recruited for an experiment. You are brought into separate rooms and told that you will be asked some questions and, unless you give consent, all of your answers will be kept secret.

First you are asked whether you would like to hook up with your friend. Then you are asked whether you believe your friend would like to hook up with you. These are just setup questions. Now come the important ones. Assuming your friend would like to hook up with you, would you like to know that? Assuming your friend is not interested, would you like to know that? And would you like your friend to know that you know?

Assuming your friend is interested, would you like your friend to know whether you are interested? Assuming your friend is not interested, same question. And the higher-order question as well.

These questions elicit your preferences over you and your friend's beliefs (and beliefs about beliefs…) about you and your friend's preferences. This is one context where the value of information is not just instrumental (i.e., it helps you make better decisions) but truly intrinsic. For example, I would guess that most people who are interested, and who know that the other is not, would strictly prefer that the other not know of their interest. Because that would be embarrassing.

And I bet that if you are not interested and you know that the other is interested you would not like the other to know that you know that she is interested. Because that would be awkward.

Notice in fact that there is often a strict preference for less information. And that’s what makes the design of a matching mechanism complicated.  Because in order to find matches (i.e. discover and reveal mutual interest) you must commit to reveal the good news. In other words, if you and your friend both inform the experimenters that you are interested and that you want the other to know that, then in order to capitalize on the opportunity the information must be revealed.

But any mechanism which reveals the good news unavoidably reveals some bad news precisely when the good news is not forthcoming. If you are interested and you want to know when she is interested and you expect that whenever she is indeed interested you will get your wish, then when you don’t get your wish you find out that she is not interested.

Fortunately though there is a way to minimize the embarrassment. The following simple mechanism does pretty well. Both friends tell the mediator whether they are interested.  If, and only if, both are interested the mediator informs both that there is a mutual interest. Now when you get the bad news you know that she has learned nothing about your interest. So you are not embarrassed.
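The mediator's rule is just a conjunction; here is a minimal sketch (the names and framing are mine):

```python
# Minimal sketch of the mutual-interest mediator described above.
def mediate(a_interested, b_interested):
    """Announce a match iff interest is mutual; otherwise both parties
    hear the same uninformative 'no match'."""
    if a_interested and b_interested:
        return "match"
    return "no match"
```

On hearing "no match", an interested party learns the other is not interested, but the uninterested party has revealed nothing and learned nothing about the other's answer, which is exactly the embarrassment-minimizing property claimed above.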

However it doesn’t completely get rid of the awkwardness. When she is not interested she knows that *if* you are interested you have learned that she is not interested. Now she doesn’t know that this state of affairs has occurred for sure. She thinks it has occurred if and only if you are interested so she thinks it has occurred with some moderate probability. So it is moderately awkward. And indeed you know that she is not interested and therefore feels moderately awkward.

The theoretical questions are these:  under what specification of preferences over higher-order beliefs over preferences is the above mechanism optimal? Is there some natural specification of those preferences in which some other mechanism does better?

Update: Ran Spiegler points me to this related paper.

A firm has a basic goal:  maximize profits.  And then it has day-to-day decisions. It is far too complicated to try, every day, to trace through the consequences of those day-to-day decisions for the fundamental objective of maximizing profits. A manager who tried to do that would spend so much time thinking that by the time he figured it out the day would be over and he'd have to start thinking about tomorrow's decision.

So firms don’t hire managers like that. Managers cling to intermediate goals, like say maximize market share. The best intermediate goals are the ones that are easy to monitor and which do a pretty good job of proxying for the underlying goal. These intermediate goals eventually become part of the culture of the firm and knowledge of their connection to the underlying goal can get lost. The manager can’t distinguish between intermediate goals and fundamental goals.

Now a consultant comes in to advise the manager. A consultant’s job is to show the manager how best to pursue his goals. So the very first thing a consultant should do is find out what the manager’s goals are. And here’s where the dilemma arises. The consultant might actually be smart enough to figure out that the manager’s goals are just intermediate goals. Does he say “De Gustibus” and advise the manager on how to pursue his goals even if he can see that in this particular instance it works against what the manager should really be maximizing?

Or does he have enough ambition in his job as advisor to try to convince the manager that his goals are all wrong, that he should really be maximizing something else? I honestly wonder what the smart consultant does in these situations.

More generally, in everyday life we have arguments about what’s the right thing to do. A lot of the time these arguments are confounded by the inability to distinguish whether we are arguing about the right course of action given our common goals (an argument that can be settled) or whether we have really just chosen different intermediate goals (loggerheads.)

Presh Talwalker tells us about this study of parking strategies:

They observed two distinct strategies: “cycling” and “pick a row, closest space.” They compared the results. “What was interesting,” [Professor Andrew Velkey found], “was although the individual cycling were spending more time driving looking for a parking space, on average they were no closer to the door, time-wise or distance-wise, than people using ‘pick a row, closest space.’”

And commenters are inferring that hunting for the best spot is a sub-optimal strategy.  But those who search for the best parking spot are not interested in reducing their expected parking time; rather, they care about the second moment.  When we have an appointment there is a deadline effect:  our payoff drops precipitously if we arrive past the deadline.  Faced with such a payoff function, we are typically willing to increase our expected parking time if in return we at least increase the probability of getting lucky with a really good spot.  “Pick a row, closest space” guarantees we will be a bit late.  “Cycling” may increase the average searching time but at least gives us a chance of being on time.
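A stylized simulation makes the point. The time distributions below are invented for illustration and are not taken from the study; what matters is only their shape, one reliable and one lottery-like.

```python
import random

# Stylized simulation of the deadline argument above.
# All distributions are made up; only their shapes matter.
random.seed(0)

def pick_a_row():
    # reliable: modest search time, modest walk to the door
    return 2.0 + random.uniform(2.0, 3.0)    # minutes to the door

def cycle():
    # longer on average, but sometimes a spot right by the door
    if random.random() < 0.3:
        return random.uniform(1.0, 3.0)      # got lucky
    return 4.0 + random.uniform(2.0, 4.0)    # kept circling

deadline = 4.0
n = 100_000
for strat in (pick_a_row, cycle):
    times = [strat() for _ in range(n)]
    mean = sum(times) / n
    on_time = sum(t <= deadline for t in times) / n
    print(strat.__name__, round(mean, 2), round(on_time, 2))
```

In this toy model "cycling" has the higher mean (about 5.5 minutes versus 4.5) yet makes the deadline roughly 30% of the time, while "pick a row, closest space" never does, which is the second-moment trade-off described above.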

Dr. Doom did not get approval for the tub from the Department of Buildings, which resulted in the violation. He’s been ordered to remove not only the tub, but also the deck and party room (replete with a bar and bathroom) which he had constructed on the roof of his $5.5 million East First Street pad. The ruling apparently came about after a complaint levied in February.

If you’re still grappling to get a sense of what will be lost now that these parties—at least, in the form they previously inhabited—will cease to exist, here’s a wonderful quote Roubini gave to New York which paints quite the picture of both his allure and the nature of the shindigs.

“[The models who attend my parties] love my beautiful mind. I am ugly, but they’re attracted to the brains. I’m a rock star among geeks, wonks and nerds,” he said. “[What makes the parties so great are] fun people and beautiful girls. I look for 10 girls to one guy.”

The full story is here.

  1. Exercise: find a name such that when you sing The Name Game (“banana fana fo …”) all three words you get are insults.
  2. The Northwestern Women’s Lacrosse team has won the NCAA championship like every year but two in the past decade.  The two losses make the overall dynasty more impressive.  Discuss.
  3. Why do fat people slide farther when they reach the bottom of a water slide?
  4. When hiking in a group, if an accurate measure of (changes in) elevation is unavailable but you have a watch and a GPS it’s better to share the work of carrying the backpack by dividing the time rather than the distance.

You are walking back to your office in the rain and your path is lined by a row of trees. You could walk under the trees or you could walk in the open. Which will keep you drier?

If it just started raining you can stay dry by walking under the trees. On the other hand, when the rain stops you will be drier walking in the open. Because water will be falling off the leaves of the tree even though it has stopped raining. Indeed when the rain is tapering off you are better off out in the open. And when the rain is increasing you are better off under the tree.

What about in steady state? Suppose it has been raining steadily for some time, neither increasing nor tapering off. The rain that falls onto the top of the tree gets trapped by leaves. But the leaves can hold only so much water. When they reach capacity water begins to fall off the leaves onto you below. In equilibrium the rate at which water falls onto the top of the tree, which is the same rate it would fall on you if you were out in the open, equals the rate at which water falls off the leaves onto you.

Still you are not indifferent: you will stay drier out in the open. Under the tree the water that falls onto you, while equal in total volume to the water that would hit you out in the open, is concentrated in larger drops. (The water pools as it sits on the leaves waiting to be pushed off onto you.) Your clothes will be dotted with fewer but larger water deposits, and an equal volume of water spread over a smaller surface area will dry more slowly.
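The surface-area claim is easy to quantify if, for simplicity, we treat the deposits as spherical drops (my modeling assumption, not the post's):

```python
from math import pi

# Same total volume of water split into n spherical drops: each drop has
# radius r = (3V / (4*pi*n))**(1/3), so total exposed area n * 4*pi*r**2
# grows like n**(1/3). Fewer, larger drops expose less area and dry slower.
def total_surface_area(volume, n):
    r = (3 * volume / (4 * pi * n)) ** (1 / 3)   # radius of each drop
    return n * 4 * pi * r ** 2

V = 1.0
print(total_surface_area(V, 1000) / total_surface_area(V, 10))
# splitting the same volume into 100x as many drops gives 100**(1/3) ≈ 4.64x the area
```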

It is important in all this that you are walking along a line of trees and not just standing in one place. Because although the rain lands uniformly across the top of the tree, it is probably channeled outward away from the trunk as it falls from leaf to leaf and eventually below. (I have heard that this is true of Louisiana Oaks.) So the rainfall is uniform out in the open but not uniform under the tree. This means that no matter where you stand out in the open you will be equally wet, but there will be spots under the tree in which the rainfall will be greater than and less than that average. You can stand at the local minimum and be drier than you would out in the open.

Why are conditional probabilities so rarely used in court, and sometimes even prohibited?  Here’s one more good reason:  prosecution bias.

Suppose that a piece of evidence X is correlated with guilt.  The prosecutor might say, “Conditional on evidence X, the likelihood ratio for guilt versus innocence is Y; update your priors accordingly.”  Even if the prosecutor’s statistics are correct, his claim is dubious.

Because the prosecutor sees the evidence for all suspects before deciding which ones to bring to trial.  And the jurors know this.  So the fact that evidence like X exists against this defendant is already partially reflected in the fact that it was this guy they brought charges against and not someone else.

If jurors were truly Bayesian (a necessary presumption if we are to consider using probabilities in court at all) then they would already have accounted for this and updated their priors accordingly before even learning that evidence X exists.  When they are actually told, the news would necessarily move their priors less than what the statistics imply, perhaps hardly at all, maybe even in the opposite direction.
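To put made-up numbers on the argument: suppose evidence X has a likelihood ratio of 10 for guilt, and suppose (an extreme assumption, for illustration) that prosecutors bring charges precisely when evidence like X exists, so the charge itself carries all of X's information.

```python
# Made-up numbers illustrating the double-counting argument above.
def update(odds, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * LR
    return odds * likelihood_ratio

lr_X = 10.0       # X is 10x as likely under guilt as under innocence
base_odds = 1.0   # odds of guilt for a generic, uncharged suspect

# The charging decision already reflects evidence like X, so a Bayesian
# juror's prior odds at trial have already absorbed it:
odds_at_trial = update(base_odds, lr_X)

# Naive juror: treats X as news at trial and updates on it again.
naive = update(odds_at_trial, lr_X)   # overstates guilt by a factor of 10
# Bayesian juror: X is old news and moves the odds not at all.
correct = odds_at_trial

print(naive, correct)  # 100.0 vs 10.0
```

In reality the charge reveals X only imperfectly, so the truth lies between these extremes: X should move a rational juror somewhat, but by less than the raw likelihood ratio implies.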

A balanced take in the New Yorker.  Here is an excerpt.

A core objection is that neuroscientific “explanations” of behavior often simply re-state what’s already obvious. Neuro-enthusiasts are always declaring that an MRI of the brain in action demonstrates that some mental state is not just happening but is really, truly, is-so happening. We’ll be informed, say, that when a teen-age boy leafs through the Sports Illustrated swimsuit issue areas in his brain associated with sexual desire light up. Yet asserting that an emotion is really real because you can somehow see it happening in the brain adds nothing to our understanding. Any thought, from Kiss the baby! to Kill the Jews!, must have some route around the brain. If you couldn’t locate the emotion, or watch it light up in your brain, you’d still be feeling it. Just because you can’t see it doesn’t mean you don’t have it. Satel and Lilienfeld like the term “neuroredundancy” to “denote things we already knew without brain scanning,” mockingly citing a researcher who insists that “brain imaging tells us that post-traumatic stress disorder (PTSD) is a ‘real disorder.’ ”

And

It’s perfectly possible, in other words, to have an explanation that is at once trivial and profound, depending on what kind of question you’re asking. The strength of neuroscience, Churchland suggests, lies not so much in what it explains as in the older explanations it dissolves. She gives a lovely example of the panic that we feel in dreams when our legs refuse to move as we flee the monster. This turns out to be a straightforward neurological phenomenon: when we’re asleep, we turn off our motor controls, but when we dream we still send out signals to them. We really are trying to run, and can’t.

He died earlier this week.  If you grew up in Southern California, and you watched TV, you may have forgotten Cal Worthington but his dog Spot, the acres and acres of cars, the “Go See Cal”, the giant selection of cars and trucks on sale, the “open every day til midnight” and the music in the way “nineteen” springboarded the cars’ vintage out of your set and into your ears are all still stored away in some synapses somewhere in there and they’re all gonna come flowing out when you watch this video and probably bring with them a whole bunch of other stuff lost in there that you are gonna be pretty tickled to find again.  RIP Cal Worthington.

Should restaurants put salt shakers on the table?  A variety of food writers weigh in on the question here.

The naive argument is that salt shakers give diners more control. They know their own tastes and can fine tune the salt to their liking. The problem with this argument is that salt shaken over prepared food is not the same as salt added to food as it is cooked.  A chef adds salt numerous times through the cooking process to different items on the plate because some need more salt than others.

So the benefit of control comes at the cost of excess uniformity in the flavor. But beyond that, there is an interesting strategic issue. When there is no salt shaker on the table the chef chooses the level of saltiness to meet some median or average diner’s taste for salt. All diners get equally salty food independent of their taste. Diners to the left of the median find their dish too salty and diners to the right wish they had a salt shaker.

A reduction in the level of saltiness benefits those just to the left of the median at the expense of those far to the right and at an optimum those costs outweigh the benefits.

But when there is a salt shaker, the chef can reduce the level of saltiness at a lower cost because those to the right can compensate (albeit imperfectly) by adding back the salt. So in fact the optimal level of salt added by a chef whose restaurant puts salt shakers on the table is lower.

So the interesting observation is that salt shakers on the table benefit diners who like less salt (and also those that like a lot of salt) at the expense of the average diner (who would otherwise be getting his salt bliss point but is now getting too little).
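The argument can be checked with a toy model: diner tastes uniform on [0, 1], linear losses, and a guess that table salting recovers all but 20% of the shortfall for diners who want more salt. All numbers here are mine, for illustration only.

```python
# Toy model of the chef's salting problem described above.
tastes = [i / 100 for i in range(101)]   # diner salt tastes, uniform on [0, 1]

def avg_loss(s, shaker):
    """Average diner loss when the chef salts every dish to level s."""
    total = 0.0
    for t in tastes:
        if t <= s:
            total += s - t          # too salty: nothing the diner can do
        else:
            # too bland: with a shaker the diner adds salt back at the
            # table, recovering all but 20% of the shortfall (my guess)
            total += (t - s) * (0.2 if shaker else 1.0)
    return total / len(tastes)

def best_salt(shaker):
    return min(tastes, key=lambda s: avg_loss(s, shaker))

print(best_salt(shaker=False), best_salt(shaker=True))
# without a shaker the chef salts to the median taste (0.5);
# with one, the optimum shifts well below the median
```

This reproduces the claim: the shaker lets the chef cater to low-salt diners because high-salt diners can (imperfectly) correct the dish themselves, so the average diner now gets less than his bliss point.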

Imagine that the President convenes his top economic advisors to get a recommendation on a pressing policy issue. They say unequivocally “do X.” The President asks why and they say “it’s complicated. Do X.” The President, not happy with that, decides he is going to read the economic literature on the pros and cons of doing X. After a thorough study he comes back to his advisors and says “You economists don’t understand your own science. I read the literature and I should do Y.”

I think we would agree that’s a bad outcome. For probably exactly the same reason that Doctors don’t seem to be happy with economist Emily Oster’s apparent advice to pregnant women to drink alcohol “like a European adult.”

But let’s assume that Emily truly can interpret the published statistical literature better than her Obstetrician. There is another reason to question her recommendations.

An advisor’s job is to advise on the risks of an activity. Because the advisor is the expert on that. The decision-maker is the expert on her own preferences. The correct decision is based on weighing both of these.

A recommendation to have up to a glass of wine per day while pregnant confounds the two sides. What it really means is “I like wine a lot.  I also read about the risks and decided that my taste for wine was strong enough that I am willing to live with the risks.” Thus her recommendation amounts to “If you like wine as much as I do you should drink up to a glass per day when you are pregnant.”

When I asked my doctor about drinking wine, she said that one or two glasses a week was “probably fine.” But “probably fine” isn’t a number.

The problem is that there is no way to quantify how much she likes wine, and so no way for her readers to know whether they like wine as much as she does. Likewise, it is too much for Emily to demand that her doctors say much more than “probably fine.”

The doctors’ advice is based on some assumption about the patient’s taste for wine weighed against the risks. Emily’s advice is based on a different assumption. As for the risks, when Emily reads the literature and concludes that the evidence of the danger of drinking alcohol is weak, she jumps to the conclusion that it is weaker than what the doctors thought. She makes the identifying assumption that their recommendation was conservative because they overestimated the risks and not because they underestimated her taste for wine. But there does not seem to be any basis for that assumption, because her doctors never told her what they believed the risks to be and they never asked her how much she likes wine.
