
Via Marginal Revolution, an essay exploring the psychology of watching a sporting event after the fact on your DVR.  Is it less enjoyable than watching the same game live when it happens?  I love this question and I love the answers he gives.  Strangely though, he divides his reasons into the “rational” and the “irrational” and with only one exception I would give the opposite classification.  Here are his rational ones:

  1. Removing commercials reduces drama.  I suppose he calls this rational because he thinks that it’s true and perfectly sensible.  The unavoidable delay before action resumes builds suspense.  But even though I agree with that, I call this an irrational reason because of course I can always watch the commercials or just sit around for 2 minutes if I’d rather not see yet another Jacob’s Creek wine commercial.  If in fact I don’t do that, then that’s irrational.
  2. If you know it has already happened then it is less interesting.  Again, this may be true for many people, but to make it into the rational category it has to be squared with the fact that we watch movies, TV dramas, even reality TV shows whose outcomes we know are already determined.
  3. Recording gives me too much control.  Same as #1.
Now for the irrational ones:
  1. I don’t get to believe that my personal involvement will affect the game. This one I agree with.  Many people are under this illusion and it would be hard to call it rational for someone to think they are any less in control when the event is already over.
  2. If this were a really exciting game I would have found out about it independently by now no matter how hard I tried to avoid it.  I would call this the one truly rational reason and I think it’s a big problem for most major sports.  If something really exciting happened that information is going to find you one way or another.  So if you are sitting down to watch a taped event and the information didn’t find you, then you know it can only be so good.  Even worse, if the game reaches a state where it would take a dramatic comeback to change the outcome, you know that comeback isn’t going to happen.

I would add two of my own, one rational and one irrational.  First, you don’t watch a DVR’d sporting event with friends.  The whole point of recording it is to pick the optimal time to watch it and that’s not going to be your friend’s optimal time.  Plus he probably already saw it, plus who is going to control the fast-forward?  Watching with friends adds a dimension to just about anything, especially sports, so DVR’d events are going to be less interesting just for the lack of a social dimension, having nothing to do with the tape delay.

Second, there is something very strange about hoping for something to happen when in fact it has either already happened or already not.  Now, this is irrelevant for people who easily suspend disbelief watching movies.  Those people can yell at the fictitious characters on the screen and feel elation and despair when their pre-destined fate is played out.  But people who can’t find the same suspense in fiction look to sports for the source of it.  For those people too many existential questions get in the way of enjoying a tape-delayed broadcast.

A mother I know was looking into a week of golf camp for her son.  She was quoted a price and it sounded reasonable to her but the fact is she doesn’t really know what a reasonable price is for golf camp.  Think about your own experience in a situation like this.  Somehow, whether this is rational or not, the price itself tells you what a reasonable price is.  Once you hear the price you are anchored to it. For sure anything more than that would be unreasonable.

Now back to the mother who is the subject of this story.  Having been quoted a reasonable price she is inclined to go for it.  But first, she has some further inquiries. What happens if it rains?  Will there be a refund for that day?

There is some checking with higher ups and a return phone call with the answer in the negative.  Camp is rain or shine.  In the event of rain the children will play board games in the clubhouse.

Now the pro-rated daily fee is, by basic arithmetic, a reasonable price for a day of golf camp but not a reasonable price to pay for day care.  Thus, given the non-negligible probability of rain, the value of golf camp has just dropped by a non-negligible amount.  And indeed this price, which was a reasonable price for 5 days of golf camp, is not reasonable for an expected 4.25 days of golf camp.

No golf camp for junior.

Here is the lesson for optimal pricing policies. A fully informed, risk-neutral expected utility maximizer sees two equivalent ways of pricing golf camp.  Way #1:  Price is fixed and set at the value of the expected number of non-rain days.  Way #2:  A higher price but with refunds on rain days.

But given the inherent reference-dependence that comes from the natural tendency to interpret any price as just on the threshold of reasonable, Way #2 is clearly superior.  This has many implications.  Think shipping costs, all-inclusive holidays, etc.
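
To see the equivalence and why it breaks, here is a minimal sketch with assumed numbers: the expected 4.25 days above implies a 15% chance of rain on any given day, and I peg the hypothetical “reasonable” daily price at $100.

```python
# Toy comparison of the two golf-camp pricing schemes.  Assumed
# numbers: a 5-day camp, a hypothetical $100 "reasonable" price per
# golf day, a 15% daily chance of rain (hence 4.25 expected golf days).
DAYS = 5
DAILY_PRICE = 100.0
P_RAIN = 0.15

expected_golf_days = DAYS * (1 - P_RAIN)               # 4.25

# Way #1: one fixed price equal to the value of the expected golf days.
way1_price = DAILY_PRICE * expected_golf_days          # $425 up front

# Way #2: a higher sticker price, with each rain day refunded.
way2_sticker = DAILY_PRICE * DAYS                      # $500
way2_expected_payment = way2_sticker - DAILY_PRICE * DAYS * P_RAIN

# A fully informed, risk-neutral parent is indifferent between the two:
assert abs(way1_price - way2_expected_payment) < 1e-9  # both $425

# Anchoring breaks the tie.  Under Way #1, rain converts part of the
# anchored-as-reasonable $425 into day care.  Under Way #2, every
# dollar the camp keeps buys a golf day at the anchored daily rate.
print(f"Way #1 fixed price:      ${way1_price:.2f}")
print(f"Way #2 expected payment: ${way2_expected_payment:.2f}")
```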


You receive a notice in the mail reminding you that your subscription to Food and Wine Magazine is about to expire.  Don’t miss out on everything you have come to love about Food and Wine Magazine, renew today!

I received one of these last week.  Thing is, I don’t subscribe to Food and Wine Magazine and I never have.  Still, for the briefest of moments I think I did start to worry that I would miss out on everything I love about it.

In the end, I didn’t “renew” but I would bet that lots of people do.

It’s hard to model serendipity in a rational choice framework.  For example, people say that the web’s ability to focus your attention on subjects you like prevents you from being exposed to new stuff and that makes you worse off.  That may be true, but it could never be true in a rational framework because if you wanted exposure to new stuff you would choose that.  (I am leaving out market structure explanations, i.e. the market for serendipitous content may shrink.  That’s beside the point I am making and anyway I would guess exactly the opposite.  I can always take advantage of the increased diversity in content by supplying my own randomness.)

But here’s a version of serendipity that may be rationalized.  I have started reading blogs in Google Reader using the “All Items” tab where all the articles in all the blogs I subscribe to are listed in a flat format in chronological order, rather than blog-by-blog.  I have found a non-obvious effect of serendipity:  not knowing which blog I am reading and just reading the article prevents me from approaching it with expectations about the author’s prior biases, etc.  I recommend it.

For some kinds of information it may be beneficial to hide the source.  For example, pure rhetoric.  My ability to judge whether it is convincing or not is based purely on the logical connections between premise and conclusion and my prior beliefs about the plausibility of the premises.  Knowing the author of the rhetoric provides no additional information.  And if, for psychological reasons, knowing the source biases my interpretation then I am strictly better off having it hidden from me.  (At least temporarily)

You will complain that by appealing to psychological biases I have departed from the rational choice framework.  But I think there is a useful distinction between rational choice, and rational information processing (if the latter even has any meaning.)  If I can be expected to choose my sources rationally then there is no role for serendipitous exposure to new sources, even if I make errors in processing information.  But rational choice together with (self-aware) processing errors can justify keeping the source hidden.

(Drawing by Stephanie Yee.)

A reader, Kanishka Kacker, writes to me about Cricket:

Now, very often, there are certain decisions to be made regarding whether a given batter was out or not, where it is very hard for the umpire to decide. In situations like this, some players are known to walk off the field if they know they are “out” without waiting for the umpire’s decision. Other players don’t, waiting to see the umpire’s decision.

Here is a reason given by one former Australian batsman, Michael Slater, as to why “walking” is irrational:

(this is from Mukul Kesavan’s excellent book “Men in White”)

“The pragmatic argument against walking was concisely stated by former Australian batsman Michael Slater. If you walk every time you’re out and are also given out a few times when you’re not (as is likely to happen for any career of a respectable length), things don’t even out. So, in a competitive team game, walking is, at the very least, irrational behavior. Secondarily, there is a strong likelihood that your opponents don’t walk, so every time you do, you put yourself or your team at risk.”

What do you think?

Let me begin by saying that the only thing I know about Cricket is that “Ricky Ponting” was either the right or the wrong answer to the final question in Slumdog Millionaire.  Nevertheless, I will venture some answers because there are general principles at work here.

  1. First of all, it would be wrong to completely discount plain old honor. Kids have sportsmanship drilled into their heads from the first time they start playing, and anyone good enough to play professionally started at a time when he or she was young enough to believe that honor means something. That can be a hard doctrine to shake.  Plus, as players get older and compete at more selective levels, some of that selection is on the basis of sportsmanship.   So there is some marginal selection for honorable players to make it to the highest levels.
  2. There is a strategic aspect to honor.  It induces reciprocity in your opponent through the threat of shame.  If you are honorable and walk, then when it comes time for your opponent to do the same, he has added pressure to follow suit or else appear less honorable than you.  Even if he has no intrinsic honor, he may want to avoid that shame in the eyes of his fans.
  3. But to get to the raw strategic aspects, reputation can play a role.  If a player is known to walk whenever he is out then by not walking he signals that he is not out.  In those moments of indecision by the umpire, this can tip the balance and get him to make a favorable call.  You might think that umpires would not be swayed by such a tactic but note that if the player has a solid reputation for walking then it is in the umpire’s interest to use this information.  (A toy calculation after this list makes the updating concrete.)
  4. And anyway remember that the umpire doesn’t have the luxury to deliberate.  When he’s on the fence, any little nudge can tilt him to a decision.
  5. Most importantly, a player’s reputation will have an effect on the crowd and their reactions influence umpires.  If the fans know that he walks when he’s out and this time he didn’t walk they will let the umpire have it if he calls him out.
  6. There is a related tactic in baseball which is where the manager kicks dirt onto the umpire’s shoes to show his displeasure with the call.  It is known that this will never influence the current decision but it is believed to have the effect of “getting into the umpire’s head” potentially influencing later decisions.
  7. Finally, it is important to keep in mind that a player walks not because he knows he is out but because he is reasonably certain that the umpire is going to decide that he is out whether or not he walks.  The player may be certain that he is not out but only because he is in a privileged position on the field where he can determine that.  If the umpire didn’t have the same view, it would be pointless to try and persuade.  Instead he should walk and invest in his reputation for the next time when the umpire is truly on the fence.
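
To make the reputation logic in point 3 concrete, here is a toy Bayes calculation.  The numbers are mine, purely for illustration, not anything measured about actual cricketers or umpires.

```python
# Toy Bayesian umpire (illustrative numbers, not cricket data).
# Prior: the evidence leaves the umpire on the fence.
p_out = 0.5

# A batsman with a solid reputation walks 90% of the time when he is
# actually out and (assume) never walks when he is not out.
p_walk_given_out, p_walk_given_not_out = 0.9, 0.0

# The batsman stays at the crease.  Bayes' rule on "did not walk":
p_stay_given_out = 1 - p_walk_given_out
p_stay_given_not_out = 1 - p_walk_given_not_out
p_stay = p_out * p_stay_given_out + (1 - p_out) * p_stay_given_not_out

p_out_given_stay = p_out * p_stay_given_out / p_stay
print(f"P(out | didn't walk) = {p_out_given_stay:.2f}")   # ~0.09
# For a known walker, staying put is strong evidence of "not out" --
# which is exactly why it pays the umpire to use the reputation.
```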

Dan Ariely, Chris Anderson, Hal Varian and others are heading Startup-Onomics: a Behavioral Economics “Summit” for entrepreneurs.

What is behavioral economics?

As business owners, we want to design products that are useful, we want customers (lots of them), and we want to create a motivating work environment. But it’s not that easy. In fact, most of the time that stuff takes a lot of hard work and a lot of trial and error.

Good news. There is a science called Behavioral Economics.  This attempts to understand people’s day to day decisions (where do I get my morning coffee?) and people’s big decisions (How much should I save for retirement?).

Understanding HOW your users make decisions and WHY they make them is powerful. With this knowledge, companies can build more effective products, governments can create impactful policies and new ideas can gain faster traction.

Sessions include “How to get people to pay what they want,” and “The creation and role of habits in purchasing decisions,” etc.

Via The Browser:  This article was thoroughly engaging from start to finish.  It’s about a convict who faked insanity so that they would put him in the psycho ward instead of prison.  And then he realized what a huge mistake that was.

Tony said the day he arrived at the dangerous and severe personality disorder (DSPD) unit, he took one look at the place and realised he’d made a spectacularly bad decision. He asked to speak urgently to psychiatrists. “I’m not mentally ill,” he told them. It is an awful lot harder, Tony told me, to convince people you’re sane than it is to convince them you’re crazy.

So he tried to be co-operative in the hospital to prove that he wasn’t really criminally insane.  According to his doctors that was a sure sign he belonged in the hospital.

I glanced suspiciously at Tony. I instinctively didn’t believe him about this. It seemed too catch-22, too darkly-absurd-by-numbers. But later Tony sent me his files and, sure enough, it was right there. “Tony is cheerful and friendly,” one report stated. “His detention in hospital is preventing deterioration of his condition.”

Then he tried the opposite.

After Tony read that, he said, he started a kind of war of non co-operation. This involved staying in his room a lot. On the outside, Tony said, not wanting to spend time with your criminally insane neighbours would be a perfectly understandable position. But on the inside it demonstrates you’re withdrawn and have a grandiose sense of your own importance. In Broadmoor, not wanting to hang out with insane killers is a sign of madness.

Eventually his doctors figured out that he had been faking it.  But of course the willingness to fake criminal insanity in order to go to the psycho ward is all the more reason to stay there.

But then I read Maden’s next line: “Most psychiatrists who have assessed him, and there have been a lot, have considered he is not mentally ill, but suffers from psychopathy.”…

Faking mental illness to get out of a prison sentence, Maden explained, is exactly the kind of deceitful and manipulative act you’d expect of a psychopath.

A psychopath is basically someone who appears perfectly normal on the surface but lacks normal moral restraints that make people socially fit. And just as it is hard to prove that you are not a psychopath it becomes really easy to conclude that everybody else is one.

My mind drifted to what I could do with my new powers. If I’m being honest, it didn’t cross my mind to become some kind of great crime fighter, philanthropically dedicated to making society a safer place. Instead, I made a mental list of all the people who over the years had crossed me and wondered which of them I might be able to expose as having psychopathic character traits.


Bandwagon effects are hard to prove.  If an artist is popular, does that popularity by itself draw others in?  Are you more likely to enjoy a movie, restaurant, blog just because you know that lots of other people like it too?  It’s usually impossible to distinguish that theory from the simpler hypothesis:  the reason it was popular in the first place was that it was good and that’s why you are going to like it too.

Here’s an experiment that would isolate bandwagon effects.  Look at the Facebook like button below this post.  I could secretly randomly manipulate the number that appears on your screen and then correlate your propensity to “like” with the number that you have seen.  The bandwagon hypothesis would be that the larger number of likes you see increases your likeitude.
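
Here is a sketch of how the analysis might run, on simulated readers.  The behavioral model inside `reader_likes` is pure assumption; the point is that randomizing the displayed count makes the comparison clean.

```python
# Sketch of the proposed experiment on hypothetical data: the displayed
# like-count is randomized, so any correlation between the count shown
# and liking is pure bandwagon, not quality.
import random

random.seed(0)

def reader_likes(shown_count, base=0.05, bandwagon=0.005):
    # Assumed behavior: a 5% baseline like rate plus a small push per
    # displayed like.  Whether real readers work this way is the question.
    return random.random() < base + bandwagon * shown_count

data = []
for _ in range(10_000):
    shown = random.randint(0, 50)      # the secret manipulation
    data.append((shown, reader_likes(shown)))

low  = [liked for shown, liked in data if shown <= 25]
high = [liked for shown, liked in data if shown > 25]
print(f"like rate | low displayed count:  {sum(low) / len(low):.3f}")
print(f"like rate | high displayed count: {sum(high) / len(high):.3f}")
# A gap between these rates is a bandwagon effect, cleanly identified
# because the very same post sits behind every displayed number.
```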

Here’s the abstract of a recent paper in the Journal of Clinical Child and Adolescent Psychology.

We examined middle-class Israeli preschoolers’ cognitive self-transformation in the delay of gratification paradigm. In Study 1, 66 un-caped or Superman-caped preschoolers delayed gratification, half with instructions regarding Superman’s delay-relevant qualities. Caped children delayed longer, especially when instructed regarding Superman’s qualities. In Study 2 with 43 preschoolers, with the respective relevant superhero qualities emphasized (i.e., patient vs. impulsive), Superman-caped children tended to delay longer than Dash-caped children. In Study 3, 48 preschoolers delayed gratification after being instructed to pretend to be Superman or a child with the same patient qualities, or after watching a video of Superman, with or without pretend instructions. Invoking Superman led to longer delays and instructions regarding Superman’s qualities tended to lead to longer delays than watching the Superman video. In accounting for the data, we differentiated cognitive transformations of the reward’s consummatory value and cognitive transformations as basic intellectual processes.

Gat glide:  Not Exactly Rocket Science.

An article in the journal Neuron documents an experiment in which monkeys were trained to play Rock Scissors Paper.  When a monkey played a winning strategy he was likely to repeat it on the next round.  When he played a losing strategy he was likely to choose on the next round the strategy that would have won on this round.  The researchers were interested in how monkeys learn and they interpret the results as consistent with “regret”-based learning.

Unfortunately in the experiment the monkeys were playing against a (minimax-randomizing) computer and not other monkeys.  With this as a control it would now be interesting to see if they follow the same pattern when they play against another monkey.  (With this pattern, the identity of the winning monkey would be negatively serially correlated.)
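
As a preview of what that follow-up might show, here is a quick simulation under my own reading of the learning rule: both monkeys win-stay/lose-shift, with a little noise so the play doesn’t lock into a deterministic cycle.

```python
# Two "regret learners" play each other at Rock-Scissors-Paper.
# Win-stay: repeat after a win.  Lose-shift: switch to the strategy
# that would have beaten the opponent's last move.
import random

BEATS = {"R": "P", "P": "S", "S": "R"}   # BEATS[x] is what beats x

def next_move(my_last, opp_last, p_follow=0.9):
    # Follow the rule with probability p_follow, else randomize.  (The
    # paper reports tendencies, not a deterministic rule; 0.9 is my guess.)
    if random.random() > p_follow:
        return random.choice("RPS")
    if BEATS[opp_last] == my_last:       # won last round: stay
        return my_last
    return BEATS[opp_last]               # lost or tied: shift

random.seed(1)
a, b = "R", "S"
winners = []
for _ in range(100_000):
    a, b = next_move(a, b), next_move(b, a)
    if a != b:
        winners.append("A" if BEATS[b] == a else "B")

repeats = sum(w1 == w2 for w1, w2 in zip(winners, winners[1:]))
print(f"P(same monkey wins twice in a row) = {repeats / (len(winners) - 1):.2f}")
# Well below 1/2: win-stay walks straight into lose-shift, so the
# identity of the winner alternates -- negative serial correlation.
```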

Vueltiao volley: Victor Shao.

Via Mind Hacks, a story in the New York Times about twins conjoined at the head in such a way that they share brain matter and possibly consciousness.

Twins joined at the head — the medical term is craniopagus — are one in 2.5 million, of which only a fraction survive. The way the girls’ brains formed beneath the surface of their fused skulls, however, makes them beyond rare: their neural anatomy is unique, at least in the annals of recorded scientific literature. Their brain images reveal what looks like an attenuated line stretching between the two organs, a piece of anatomy their neurosurgeon, Douglas Cochrane of British Columbia Children’s Hospital, has called a thalamic bridge, because he believes it links the thalamus of one girl to the thalamus of her sister. The thalamus is a kind of switchboard, a two-lobed organ that filters most sensory input and has long been thought to be essential in the neural loops that create consciousness. Because the thalamus functions as a relay station, the girls’ doctors believe it is entirely possible that the sensory input that one girl receives could somehow cross that bridge into the brain of the other. One girl drinks, another girl feels it.

  1. The story is very interesting and moving.  Worth a read.
  2. I would like to see them play Rock-Scissors-Paper.
  3. More generally, experimental game theory suffers from a multiple-hypothesis problem.  We assume rationality, and knowledge of the other players’ strategies. Departures from theoretical predictions could come from violations of either of these two.  The twins present a unique control.

Perfectionism seems like an irrational obsession.  If you are already very close to perfect the marginal benefit from getting a little bit closer is smaller and smaller and, because we are talking about perfection, the marginal cost is getting higher and higher.  An interior solution seems to be indicated.

But this is wrong.  There is a discontinuity at perfection, a discrete jump up in status that can only be realized with literal perfection.  To see why, consider a competitive diver who performs his dive perfectly.  Suppose that the skill of a diver ranges from 0 to 100, say uniformly distributed, and that only divers of skill greater than 80 can execute a perfect dive.  Then conditional on a perfect dive, observers will estimate his skill to be 90.  But if his dive falls short of perfection, no matter how close to perfection he gets, observers’ estimate of his skill will be bounded above by 80.  That last step to perfection gives him a discontinuous jump upward in esteem.
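
The post’s numbers are easy to check by brute force:

```python
# Monte Carlo check of the diving example: skill ~ Uniform[0, 100],
# and only divers with skill above 80 can execute a perfect dive.
import random

random.seed(0)
skills = [random.uniform(0, 100) for _ in range(1_000_000)]

perfect   = [s for s in skills if s > 80]    # executed a perfect dive
imperfect = [s for s in skills if s <= 80]   # fell short, however close

print(f"E[skill | perfect dive]   = {sum(perfect) / len(perfect):.1f}")     # ~90.0
print(f"E[skill | imperfect dive] = {sum(imperfect) / len(imperfect):.1f}") # ~40.0
# A near-perfect dive can push the estimate toward 80 but never past it;
# literal perfection jumps the estimate discontinuously to about 90.
```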

Now of course in contests like diving, the competitors choose the difficulty of their dives.  This unravels the drive to perfectionism because in fact the observers will figure out that he has chosen a dive that is easy enough for him to perfect.  In equilibrium a diver of skill level q chooses a dive which can be perfected only by divers whose skill is q or larger.  And then the choice of dive will perfectly reveal his skill.  The discontinuity goes away.

But many activities have their level of difficulty given to us, and even among those where we can choose, most activities are not as transparent as diving. At a piano recital you hear only the pieces that the performers have prepared. You know nothing about the pieces they practiced but then shelved.   The fame from pitching a perfect game is not closely approximated by the notoriety of the near-perfect game.

You get to see only the blog post I wrote, not the ones that are still on the back burner.

(Drawing: Wait! from www.f1me.net)

Let’s start with the premise that self-confidence leads to greater success.  (Now, you may object because most of the highly self-confident people you know are not as good as they think they are.  But the premise is simply that they are more successful than they would otherwise be, not that their self-confidence is fully validated.)

Is it because confidence by itself makes you more successful?  You can do some interesting behavioral economics with that assumption but here’s another channel that requires less of a leap.  When you are confident in yourself you try harder, you take more chances, you let your intuitions run.  But that by itself doesn’t make you any more successful than the next guy.  Indeed it probably will make you less successful because your intuitions are probably wrong.

But it means that you will find that out sooner.  And if you are confident enough, when your first foray fails you will believe in yourself enough to regroup and try again. Even if your confidence doesn’t make it any more likely that these successive attempts pay off, you will still be more successful in the long run because you will learn faster what doesn’t work, and those lessons won’t demoralize you.

And once we have this, then it follows that confidence per se can make you more successful.  Because confidence signals this ability to roll with the punches and that will be rewarded by others.
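
A toy model of the retry channel, with my own parameterization: every attempt succeeds with the same probability for everyone, and confidence only buys the willingness to try again after failing.

```python
# Confidence buys retries, not per-attempt skill.  Each attempt
# succeeds with probability s, identical across agents; a confident
# agent tolerates more failures before giving up for good.
def p_eventual_success(s, max_attempts):
    return 1 - (1 - s) ** max_attempts

s = 0.2   # assumed per-attempt success probability, same for everyone
for attempts in (1, 3, 10):
    print(f"{attempts:>2} attempts tolerated -> "
          f"P(eventual success) = {p_eventual_success(s, attempts):.2f}")
# 1 -> 0.20, 3 -> 0.49, 10 -> 0.89: the confident agent ends up far
# more successful without ever being better on any single try.
```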

From Jonah Lehrer:

One week later, all the subjects were quizzed about their memory of the product. Here’s where things get disturbing: While students who saw the low-imagery ad were extremely unlikely to report having tried the popcorn, those who watched the slick commercial were just as likely to have said they tried the popcorn as those who actually did. Furthermore, their ratings of the product were as favorable as those who sampled the salty, buttery treat. Most troubling, perhaps, is that these subjects were extremely confident in these made-up memories. The delusion felt true. They didn’t like the popcorn because they’d seen a good ad. They liked the popcorn because it was delicious.

The article is interesting; you should check it out. These stories always sound impossible to believe.  It just doesn’t seem so easy to manipulate memories.  But it’s not all that surprising when you think about it.

  1. You have dreams where impossible things happen, where people you know have changed dramatically out of the blue, or where your life is completely changed. And when you have a dream like that you say “this is strange” and then you accept it as true and go on.  It’s incredibly easy for you to believe in impossible things.  And often you do it by convincing yourself that in fact it’s been like this all along.
  2. False memories sound impossible, but on the other hand we forget things all the time.  Forgetting something is not all that different from a false memory. “Where did you put your keys?”  “I didn’t put them anywhere.  Somebody else must have put them somewhere.”  And then you find the keys and remember that in fact you did put them there.  You have not just forgotten something but you have believed in the false memory that the thing never happened.

Andrew Caplin told us about a new experiment that adds to the debate about “nudges.”

We have initiated experiments to study this tradeoff experimentally in a setting where imperfect perception seems highly likely and choice quality is easy to measure. In each round, subjects are presented with three options, each of which is composed of 20 numbers. The value of each option is the sum of the 20 numbers, and subjects are incentivized to select the object with the highest value. In the baseline treatment (“33%, 33%, 33%”), subjects were informed that all three options were equally likely to be the highest valued option, but in two other treatments, they were nudged towards the first option. In one of the nudge treatments (“40%, 30%, 30%”), subjects were informed that the first option was 40% likely to be the highest valued option (the other two were both 30% likely). In the other nudge treatment (“45%, 27.5%, 27.5%”), subjects were told that the first option was 45% likely to be the highest valued option (the other two were both 27.5% likely). Subjects completed 12 rounds of each treatment, which were presented in a random order.

The subjects got the best option only 54% of the time, revealing that effort was required to add up all 20 numbers three times to find the largest sum.  The nudges gave them hints but notice that the hints also lower the return to search effort.  So in theory there will be both income and substitution effects.  And in the experiment you see evidence of both.  Their choices reveal that they utilized the hints: they more often chose the highlighted alternative. But, the interesting finding is that their chances of getting the best alternative did not increase.  In essence, the hint perfectly crowded out their own search effort.
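
Here is a stylized version of that crowd-out, with assumed numbers rather than the paper’s estimates: effortful adding finds the best option at the observed baseline rate, and the probability of bothering to add falls as the hint strengthens.

```python
# Stylized crowd-out.  Adding up the columns finds the best option with
# probability ADD_ACCURACY; a subject who skips the effort just takes
# the hinted first option, which is best with probability `hint`.
ADD_ACCURACY = 0.54            # roughly the observed baseline success rate

def p_best(hint, p_effort):
    return p_effort * ADD_ACCURACY + (1 - p_effort) * hint

# Assume effort falls as the hint strengthens (the substitution effect):
for hint, p_effort in [(1 / 3, 1.00), (0.40, 0.75), (0.45, 0.60)]:
    print(f"hint {hint:.2f}, P(effort) {p_effort:.2f} -> "
          f"P(best) {p_best(hint, p_effort):.2f}")
# P(best) stays essentially flat even as subjects lean harder on the
# hint: under these numbers the hint crowds out their own search,
# which is the pattern the experiment found.
```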

You could take a pessimistic view based on this:  nudges don’t improve outcomes, they just make people lazier.  But in fact the experiment suggests a nuanced interpretation of nudges.  Even if we don’t see any evidence that, say, published calorie counts improve the quality of decisions, that doesn’t imply that they have no welfare effects.  Information is a fungible resource.  If you give people information, they can save the effort of gathering it themselves.  Given that information is a public good, these are potentially large welfare gains that would be hard to measure directly.

That was the title of a very interesting talk at the Biology and Economics conference I attended over the weekend at USC.  The authors are Juan Carrillo, Isabelle Brocas and Ricardo Alonso.  It’s basically a model of how multitasking is accomplished when different modules in the brain are responsible for specialized tasks and those modules require scarce resources like oxygen in order to do their job.  (I cannot find a copy of the paper online.)

The brain is modeled as a kludgy organization.  Imagine that the listening-to-your-wife division and the watching-the-French-Open division of YourBrainINC operate independently of one another and care about nothing but completing their individual tasks.  What happens when both tasks are presented at the same time? In the model there is a central administrator in charge of deciding how to ration energy between the two divisions.  What makes this non-trivial is that only the individual divisions know how much juice they are going to need based on the level of difficulty of this particular instance of the task.

Here’s the key perspective of the model.  It is assumed that the divisions are greedy:  they want all the resources they need to accomplish their task and only the central administrator internalizes the tradeoffs across the two tasks.  This friction imposes limits on efficient resource allocation.  And these limits can be understood via a mechanism design problem which is novel in that there are no monetary transfers available.  (If only the brain had currency.)

The optimal scheme has a quota structure which has some rigidity.  There is a cap on the amount of resources a given division can utilize and that cap is determined solely by the needs of the other division.  (This is a familiar theme from economic incentive mechanisms.)  An implication is that there is too little flexibility in re-allocating resources to difficult tasks.  Holding fixed the difficulty of task A, as the difficulty of task B increases, eventually the cap binds.  The easy task is still accomplished perfectly but errors start to creep in on the difficult task.
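
A toy version of the quota scheme, to fix ideas; this is my simplification for illustration, not the paper’s actual construction, and it takes the divisions’ reports at face value rather than deriving the incentives.

```python
# Toy quota allocation: total energy is fixed, each division reports
# its need, and -- the key feature -- the cap on each division depends
# only on the OTHER division's report.
TOTAL = 10.0

def cap(other_need):
    # Assumed functional form: make room for the other division's
    # reported need, but never concede more than half the budget.
    return max(TOTAL - other_need, TOTAL / 2)

def allocate(need_a, need_b):
    grant_a = min(need_a, cap(need_b))
    grant_b = min(need_b, cap(need_a))
    return grant_a, grant_b

print(allocate(3.0, 3.0))   # (3.0, 3.0): easy tasks, both served perfectly
print(allocate(9.0, 2.0))   # (8.0, 2.0): the hard task hits its cap
print(allocate(9.0, 9.0))   # (5.0, 5.0): two hard tasks, both rationed
# Holding task A's difficulty fixed, raising task B's difficulty
# tightens A's cap until it binds -- the rigidity described above.
```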

(Drawing:  Our team is non-hierarchical from www.f1me.net)

I learned something from reading this article about a classic experimental finding in developmental psychology:

So you keep hiding the toy in “A” and the baby keeps searching for the toy in “A.” Simple enough. But what happens if you suddenly hide the toy in “B”? Remember, you’re hiding the toy in full view of the infant. An older child or an adult would simply reach for “B” to retrieve the toy. But not the infant. Despite having just seen the object hidden in the new “B” location, infants between 8 and 12 months of age (the age at which infants begin to have enough motor control to successfully reach for an object) frequently look for it under box “A,” where it had previously been hidden. This effect, first demonstrated by Jean Piaget, is called the perseverative search error or sometimes the A-not-B error.

The result is robust to many variations on the experiment and the full article goes through some hypotheses about the error and a new experiment that turns them on their head.

Andrew Caplin is visiting Northwestern this week to give a series of lectures on psychology and economics.  Today he talked about some of his early work and briefly mentioned an intriguing paper that he wrote with Kfir Eliaz.

Too few people get themselves tested for HIV infection.  Probably this is because the anxiety that would accompany the bad news overwhelms the incentive to get tested in the hopes of getting the good news (and also the benefit of acting on whatever news comes out.)  For many people, if they have HIV they would much rather not know it.

How do you encourage testing when fear is the barrier?  Caplin and Eliaz offer one surprisingly simple, yet surely controversial possibility:  make the tests less informative.  But not just any old way.  Because we want to maintain the carrot of good news but minimize the deterrent of bad news.  Now we could try outright deception by certifying everyone who tests negative but give no information to those who test positive.  But that won’t fool people for long.  Anyone who is not certified will know he is positive and we are back to the anxiety deterrent.

But even when we are bound by the constraint that subjects will not be fooled there is a lot of freedom to manipulate the informativeness of the test.  Here’s how to ramp down the deterrent effect of a bad result without losing much of the incentive effects of a good result.  A patient who is tested will receive one of two outcomes:  a certification that he is negative or an inconclusive result.  The key idea is that when the patient is negative the test will be designed to produce an inconclusive result with positive probability p.  (This could be achieved by actually degrading the quality of the test or just withholding the result with positive probability.)

Now a patient who receives an inconclusive result won’t be fooled.  He will become more pessimistic, that is inevitable.  But only slightly more pessimistic.  The larger we choose p (the key policy instrument) the less scary is an inconclusive result.  And no matter what p is, a certification that the patient is HIV-negative is a 100% certification.  There is a tradeoff that arises, of course, and that is that high p means that we get the good news less often.  But it should be clear that some p, often strictly between 0 and 1, would be optimal in the sense of maximizing testing and minimizing infection.
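
The comparative static is a one-line Bayes calculation.  A sketch, with an assumed 1% prior probability of infection:

```python
# How scary is an "inconclusive" result?  Design: HIV-negative patients
# draw "inconclusive" with probability p; positives always do.  The 1%
# prior is an illustrative number, not an epidemiological estimate.
prior = 0.01

def p_pos_given_inconclusive(p):
    return prior / (prior + (1 - prior) * p)

for p in (0.1, 0.5, 0.9):
    print(f"p = {p:.1f}: P(HIV | inconclusive) = "
          f"{p_pos_given_inconclusive(p):.3f}")
# p=0.1 -> 0.092, p=0.5 -> 0.020, p=0.9 -> 0.011: the larger p is, the
# closer an inconclusive result stays to the 1% prior -- while a
# certification remains a 100% guarantee of being negative.
```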

It’s a recent development that economists are turning to neuroscience to inform and enrich economic theory.  One controversial aspect is the potential use of neuroscience data to draw conclusions about welfare that go beyond traditional revealed preference.  It is nicely summarized by this quote from Camerer, Loewenstein, and Prelec.

The foundations of economic theory were constructed assuming that details about the functioning of the brain’s black box would not be known. This pessimism was expressed by William Jevons in 1871:

I hesitate to say that men will ever have the means of measuring directly the feelings of the human heart. It is from the quantitative effects of the feelings that we must estimate their comparative amounts.

Since feelings were meant to predict behavior but could only be assessed from behavior, economists realized that, without direct measurement, feelings were useless intervening constructs. In the 1940s, the concepts of ordinal utility and revealed preference eliminated the superfluous intermediate step of positing immeasurable feelings. Revealed preference theory simply equates unobserved preferences with observed choices…

But now neuroscience has proved Jevons’s pessimistic prediction wrong; the study of the brain and nervous system is beginning to allow direct measurement of thoughts and feelings.

There are skeptics; I don’t count myself as one of them.  I expect that we will learn from neuroscience and economics will benefit.  But, I think it is helpful to explore the boundaries and I have a little thought experiment that I think sheds some light.

Imagine a neuroscientist emerges from his lab with a theory of what makes people happy.  This theory is based on measuring activity in the brain and correlating it with measures of happiness and then repeated experiments studying how different activities affect happiness.  For the purposes of this thought experiment be as generous as you wish to the neuroscientist, assume he has gone as far as you think is possible in measuring thoughts and feelings and their causes.

Now the neuroscientist approaches his first new patient and explains to him how to change his behavior in order to achieve the optimum level of well-being according to his theory, and asks the patient to give it a try.  After a month of trying it out, imagine that the patient comes back and says “Doctor, I did everything you prescribed to the letter for one whole month.  But, with all due respect, I would prefer to just go back to doing what I was doing before.”

Ask yourself if there is any circumstance, including any imaginable level of neuroscientific sophistication, under which, after the patient tries and rejects the neuroscientist’s theory, you would accept a policy which overrode the patient’s wishes and imposed upon him the lifestyle that the neuroscientist says is good for him.

If there is no circumstance then I claim you are fundamentally a revealed preference adherent.  Because the example (again, I am asking you to be as charitable as you can be to the neuroscientist) presents the strongest possible case for including non-choice data into welfare considerations.  We are allowing the patient to experience what the neuroscientist’s theory asserts to be his greatest possible state of well-being and even after experiencing that he is choosing not to experience it any more.  If you insist that he has that freedom then you are deferring to his revealed preference over his “true” welfare.

That’s not to say that you must reject neuroscience as being valuable for welfare. Indeed it may be that when the patient goes his own way he does voluntarily incorporate some of what he learned.  And so, even by a revealed preference standard we could say that neuroscience has made him better off.  But we can clearly bound its contribution.  Neuroscience can make you better off only insofar as it can provide you with new information that you are free to use or reject as you prefer.

Drawing:  Anxiety or Imagination from www.f1me.net

We often remember things by relying on the overall gist of an event—for example, instead of storing every detail about our last birthday, we tend to remember abstract things like “I had a fun party” or “I was in a grumpy mood because I felt old.”  This strategy allows us to remember more things about an event, but there’s one major drawback: by storing memories based on gist, we actually change how we remember the event.  This happens because we are biased to remember things that are consistent with our overall summary of the event.  So if we remember the birthday party was “super fun” overall, we’ll exaggerate how we remember the details—the average chocolate cake is now “insanely good”, and the 10 friends who were there becomes a “huge crowd.”  One of the factors that could contribute to this distortion is time; as you forget the details of an event, there’s more room for gist to change how you remember things.  But you would remember the details of an event immediately afterward, right?

The article describes an experiment that suggests that this kind of classification-induced distortion occurs even for short-term memory.

We are reading Ran Spiegler’s Bounded Rationality and Industrial Organization in my Behavioral Economics class and so far we have finished the first 5 chapters which make up Part I of the book, “Anticipating Future Preferences.” In his typical style, perfectly crafted simple models are used to illustrate deep ideas that lie at the heart of existing frontier research and, no doubt, future research this book is bound to inspire.

A nod also has to go to Kfir Eliaz who is Rani’s longtime collaborator on many of the papers that preceded this book.  Indeed, in a better world they would form a band.  It would be an early ’90s geek-rock band like They Might Be Giants or whichever band it was that did The Sweater Song.  I hereby name their band Hasty Belgium. (Names of other bands here.)

Many of the examples in the book are referred to as “close variations of” or “free variations of” papers in the literature.  And Rani has even written a paper that he calls “a cover version of” a paper by Heidhues and Koszegi.  So to continue the metaphor, I offer here some liner notes for the book.

In chapter 5 there is a fantastic distillation of a model due to Michael Grubb that explains Netflix pricing.  Conventional models of price discrimination cannot explain three-part tariffs:  a membership fee, a low initial per-unit price, and then a high per-unit price that kicks in above some threshold quantity.  (Netflix is the extreme case where the initial price per movie is zero, and above some number the price is infinite.) Rani constructs the simplest and clearest possible model to show how such a pricing system is the optimal way to take advantage of consumers who are over-confident in their beliefs about their future demand.

A conventional approach to pricing would be to set price equal to marginal cost, thereby incentivizing the consumer to demand the efficient quantity, and then adding on a membership fee that extracts all of his surplus.  You can think of this as the Blockbuster model.  The Netflix model by contrast reduces the per-unit price to zero (up to some monthly allotment) but raises the membership fee.

Here’s how that increases profits.  Many of us mistakenly think we will watch lots of movies.  Netflix re-arranges the pricing structure so that the total amount we expect to pay when we watch all of those movies is the same as in the Blockbuster model.  Just now we are paying it all in the form of a membership fee.  If it turns out that we watch as many movies as we anticipated, we are no better or worse off and neither is Netflix.

But in fact most of us discover that we are always too busy to watch movies. In the Blockbuster system when that happens we don’t watch movies and so we don’t pay per-unit prices and Blockbuster doesn’t make much money. In the Netflix system it doesn’t matter how many movies we watch, because we already paid.
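
A back-of-the-envelope version with made-up numbers shows where the extra profit comes from:

```python
# The consumer values a movie at v, is sure he will watch q_hat movies,
# but actually watches only q; the firm's cost per movie watched is c.
# All numbers are made up for illustration.
v, c = 5.0, 2.0
q_hat, q = 10, 4        # overconfident forecast vs. actual viewing

# Blockbuster: per-movie price c plus a membership fee extracting the
# consumer's believed surplus (v - c) * q_hat.  Per-movie revenue c * q
# exactly covers the cost of the q movies watched, so profit is the fee:
blockbuster_profit = (v - c) * q_hat                    # = 30

# Netflix: per-movie price 0 and a fee equal to the consumer's believed
# total outlay v * q_hat; the firm still bears cost c per movie watched.
netflix_profit = v * q_hat - c * q                      # = 42

print(blockbuster_profit, netflix_profit)
# Ex ante the consumer expects to hand over v * q_hat = 50 either way,
# so he sees the two schemes as equivalent.  Ex post, Netflix has
# already collected the c * (q_hat - q) = 12 of per-unit payments he
# expected to make but, too busy to watch, never would have.
```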

My only complaint about the book is the title.  (Not for those reasons, no.)  The term “Bounded Rationality” has fallen out of favor and for good reason.  It’s pejorative and it doesn’t really mean anything.  A more contemporary title would have been Behavioral Industrial Organization.  Now I agree that “Behavioral” is at least as meaningless as “Bounded Rationality.”  Indeed it has even less meaning. But that’s a virtue because we don’t have any good word for whatever “Bounded Rationality” and “Behavioral” are supposed to mean. So I prefer a word that has no meaning at all to “Bounded Rationality” which suggests a meaning that is misplaced.

This time the subjects were in fMRI scanners while they delivered electric shocks for money.

But in FeldmanHall’s study, things actually happened. “There are real shocks and real money on the table,” she said. Subjects lying in an MRI scanner were given a choice: Either administer a painful electric shock to a person in another room and make one British pound (a little over a dollar and a half), or spare the other person the shock and forgo the money. Shocks were priced in a graded manner, so that the subject would earn less money for a light shock, and earn the whole pound for a severe shock. This same choice was given 20 times, and the person in the brain scanner could see a video of either the shockee’s hand jerk or both the hand jerk and the face grimace. (Although these shocks were real, they were pre-recorded.)

The brain scanners are supposed to shed light on the neuroscience of moral behavior.

Even though the findings are “a little bit chilling,” Wager says, “it’s important to know.” These kinds of studies can help scientists figure out how the brain dictates moral behavior. “There’s a real neuroscientific interest now in understanding the basis of compassion,” Wager says. “That’s something we are just starting to address scientifically, but it’s a critical frontier because it has such an impact on human life.”

Barretina bow:  Not Exactly Rocket Science.

With the new social-network aware webapp, Getupp:

Getupp is a neat webapp that helps you set your goals and share them publicly so you’re held accountable to the world. To make sure you keep your commitments, Getupp can notify your Facebook friends if you break a commitment so you can be sure everyone will find out that you’re a slacker.

I have a personal commitment to write a blog post every day.  Each day that I fail I am going to write a blog post to let you know that I failed.

There was an interview with Tom Waits on the radio last week and I heard him say something that got me thinking.

I like hearing things incorrectly. I think that’s how I get a lot of ideas is by mishearing something.

It happens to me all the time.  It could be when I am half-listening to a lecture or catching a little snippet of a conversation by passersby.  It can even happen when I am listening to an interview on the radio.

There are good reasons why mishearing is a great source of new ideas.

  1. If you hear something already put together, you are prone to give it credence.  Ever notice how someone tells you about something surprising and right away you understand why it’s true?  Sometimes even before they are done talking?  The kickstarting effect of credence is a valuable scarce resource that is often wasted on the actually true.  Mishearing tricks you into believing something that is probably not true and sets your brain in motion to find something true in it.
  2. Mishearing isn’t random: your brain does its best to make sense of whatever comes in.  Think of the mishearing as some noise coming in and the brain assembling it into something useful.
  3. It’s not just noise that comes in.  You are mishearing something that originally made sense.  So most of the parts fit together in some way already.  The mishearing will just turn it around, extend it, or apply it to something new.

So how do you make it happen?  Tom Waits:

I like turning on two radios at the same time and listening to them.

I am standing in front of an intimidating audience and a question stops me. I should know how to answer. I do know the answer. But it’s not coming to me right away. So the question has me stopped.

There is a silence. And at the center of that silence stands me waiting for the answer to come. At first. But the silence is piling up, and as it does I start to make alternative plans. Up to this point I was just hanging passively as some automatic mechanism searches through the files for the right thread, but now I may have to start actively conjuring something up.

The last thing you want to do is waste precious moments deciding when to cut off that search, especially because as soon as those thoughts start to creep in they threaten to be a self-fulfilling prophecy.

But I can’t not think about it. Because no matter how confident I am that I do have the answer stored in there somewhere there is always a chance that memory fails and as long as I stand here, the answer is still not going to come to me. And the longer I wait the less time I am going to have to stammer out something forced. All the while the audience is growing uncomfortable.

This is more than just an optimal stopping problem because of the Zen state variable. It’s the Zen fixed point. The more confident you are that the answer will come, the less you will be infiltrated by thoughts of the eventual collapse, the more likely your confidence will be validated. And then there’s what happens when that doesn’t happen.

And then there’s what happens when you know all of the above and it either fuels your confidence (because you are the confident type) or sends you even sooner spiraling into a panic searching for plan B, crowding out plan Absent, all the while escalating the panic (because you are prone to panic) ensuring that whatever finally does come out is going to be a big mess.

Chickle: Type-A Meditation from www.f1me.net

I wrote last week about More Guns, Less Crime.  That was the theory; let’s talk about the rhetoric.

Public debates have the tendency to focus on a single dimension of an issue with both sides putting all their weight behind arguments on that single front.  In the utilitarian debate about the right to carry concealed weapons, the focus is on More Guns, Less Crime. As I tried to argue before, I expect that this will be a lost cause for gun control advocates.  There just isn’t much theoretical reason why liberalized gun carry laws should increase crime.  And when this debate is settled, it will be a victory for gun advocates and it will lead to a discrete drop in momentum for gun control (that may have already happened.)

And that will be true despite the fact that the real underlying issue is not whether you can reduce crime (after all there are plenty of ways to do that), but at what cost.  And once the main front is lost, it will be too late for fresh arguments about externalities to have much force in public opinion.  Indeed, for gun advocates the debate could not be more fortuitously framed if the agenda were set by a skilled debater.  A skilled debater knows the rhetorical value of getting your opponent to mount a defense and thereby implicitly cede the importance of a point, and then overwhelming his argument on that point.

Why do debates on inherently multi-dimensional issues tend to align themselves so neatly on one axis?  And given that they do, why does the side that’s going to lose on those grounds play along?  I have a theory.

Debate is not about convincing your opponent but about mobilizing the spectators.  And convincing the spectators is neither necessary nor sufficient for gaining momentum in public opinion.  To convince is to bring others to your side.  To mobilize is to give your supporters reason to keep putting energy into the debate.

The incentive to be active in the debate is multiplied when the action of your supporters is coordinated and when the coordination among opposition is disrupted.  Coordinated action is fueled not by knowledge that you are winning the debate but by common knowledge that you are winning the debate.  If gun control advocates watch the news after the latest mass killing and see that nobody is seriously representing their views, they will infer they are in the minority and give up the fight even if in fact they are in the majority.

Common knowledge is produced when a publicly observable bright line is passed.  Once that single dimension takes hold in the public debate it becomes the bright line:  When the dust settles it will be common knowledge who won. A second round is highly unlikely because the winning side will be galvanized and the losing side demoralized.  Sure there will be many people, maybe even most, who know that this particular issue is of secondary importance but that will not be common knowledge.  So the only thing to do is to mount your best offense on that single dimension and hope for a miracle or at least to confuse the issue.

(Real research idea for the vapor mill.  Conjecture:  When x and y are random variables it is “easier” to generate common knowledge that x>0 than to generate common knowledge that x>y.)

Chickle:  Which One Are You Talking About? from www.f1me.net.

David Mitchell is a stammerer who wrote beautifully about it in his semi-autobiographical novel Black Swan Green. Here is Mitchell on The King’s Speech.  In the article he talks about his own strategies for coping with stammering:

If these technical fixes tackle the problem once it’s begun, “attitudinal stances” seek to dampen the emotions that trigger my stammer in the first place. Most helpful has been a sort of militant indifference to how my audience might perceive me. Nothing fans a stammer’s flames like the fear that your listener is thinking “Jeez, what is wrong with this spasm-faced, eyeball-popping strangulated guy?” But if I persuade myself that this taxing sentence will take as long as it bloody well takes and if you, dear listener, are embarrassed then that’s your problem, I tend not to stammer. This explains how we can speak without trouble to animals and to ourselves: our fluency isn’t being assessed. This is also why it’s helpful for non-stammerers to maintain steady eye contact, and to send vibes that convey, “No hurry, we’ve got all the time in the world.”

(Gat Gape:  The Browser) Incidentally, I watched The King’s Speech and also True Grit on a flight to San Francisco Sunday night while the Oscars were being handed out down below. I enjoyed the portrayal of stammering in TKS but unlike Mitchell I didn’t think that subject matter alone carried an entire film.  And there wasn’t much else to it.  (And by the way here is Christopher Hitchens complaining about the softie treatment of Churchill and King Edward VIII.)

True Grit was also a big disappointment.  I haven’t seen Black Swan but I hear it has some great kung fu scenes.

You go around saying X.  There are some people who agree with X and others who disagree.  Those who agree with X don’t blink an eye when you say X. Those who disagree with X tell you X is wrong.

At some point you have to rethink whether you agree with X. You have a bunch of definitive signals against X but the signals in favor of X are hard to count. You have to count the number of times people didn’t say anything about X. You are naturally biased against X.

Eventually you change your mind and go around saying not-X.  Repeat.
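
The asymmetry in that signal-counting can be made concrete with a small simulation (my own formalization of the story above):

```python
# You state X to 1,000 listeners.  Suppose 70% actually agree, but
# disagreement speaks up far more often than agreement does.  All
# numbers are assumptions chosen to illustrate the asymmetry.
import random

random.seed(0)
P_AGREE = 0.7
P_SPEAK_IF_AGREE, P_SPEAK_IF_DISAGREE = 0.1, 0.9

praise = objections = silent = 0
for _ in range(1000):
    agrees = random.random() < P_AGREE
    speaks = random.random() < (P_SPEAK_IF_AGREE if agrees
                                else P_SPEAK_IF_DISAGREE)
    if not speaks:
        silent += 1
    elif agrees:
        praise += 1
    else:
        objections += 1

print(f"praise {praise}, objections {objections}, silent {silent}")
# Roughly 70 praises to 270 objections: count only the spoken signals
# and X looks 4-to-1 refuted even though 70% of the room agrees.  The
# missing evidence for X is the silence.
```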

Self-deception is a fascinating phenomenon.  If you repeat a lie to yourself again and again, you start to believe it.  You would think that the ability to deceive yourself would be constrained by data.  If there is obviously available evidence that your story is false, you might stop believing it.  If so, self-deception can only flourish when there is an identification problem.  Once data falsifies competing theories, the individual is forced to face facts.

Reality is much more complex.  Take the perhaps extreme case of John Edwards.  The National Enquirer published a story reporting that Rielle Hunter was pregnant with John Edwards’s child.   Edwards simply denied the facts.  The Enquirer employed a psychologist to profile Edwards.  S/he concluded:

“Edwards looks at himself as above the law. He has a compromised conscience — meaning he will cover up his immoral behavior at whatever cost to keep his reputation intact. He believes he is who his reputation says he is, rather than the immoral side, the truth. He separates himself from the immoral side because that person wouldn’t be the next president of the United States. He overcompensated for his insecurities with sex to feed his ego which feeds his narcissism.”

The most important part was the absolute certainty of the mental health professional that Edwards would continue to deny the scandal — almost at all costs.

“He will keep denying the scandal to America because he is denying the reality of it to himself. He sees himself only as the image he has created.”

How do you deal with a pathological deceiver/self-deceiver?  The Enquirer’s editor collected photos and evidence of Hunter-Edwards liaisons.  He describes his strategy:

We told the press that there were photographs and video from that night. Other journalists asked us to release the images but I refused. Edwards needed to imagine the worst-case scenario becoming public. The Enquirer would give him no clues about what it did and did not have…

Behind the scenes we exerted pressure on Edwards, sending word through mutual contacts that we had photographed him throughout the night. We provided a few details about his movements to prove this was no bluff.

For 18 days we played this game, and as the standoff continued the Enquirer published a photograph of Edwards with the baby inside a room at the Beverly Hilton hotel.

Journalists asked if we had a hidden camera in the room. We never said yes or no. (We still haven’t). We sent word to Edwards privately that there were more photos.

He cracked. Not knowing what else the Enquirer possessed and faced with his world crumbling, Edwards, as the profiler predicted, came forward to partially confess. He knew no one could prove paternity so he admitted the affair but denied being the father of Hunter’s baby, once again taking control of the situation.

This strategy is inconsistent with the logic of extreme self-deception.  Such an individual must be overconfident, thinking he can get away with bald-faced lies.  Facing ambiguous evidence, he might conclude that the Enquirer had nothing beyond the odd photo it released.  The Enquirer strategy instead relies on the individual believing the worst not the best.  The two pathologies, self-deception and extreme pessimism, should cancel out… there is some interesting inconsistency here.

One thing is clear:  One way to eliminate self-deception is for a third-party to step in and make the decision.  This is what Omar Suleiman, Barack Obama and the Egyptian army are doing to help Hosni Mubarak deal with his self-deception.

When your doctor points to the chart and asks you to rate your pain from 0 to 5, does your answer mean anything?  In a way, yes: the more pain you are in, the higher the number you will report.  So if last week you were 2 and this week you are 3 then she knows you are in more pain this week than last.

But she also wants to know your absolute level of pain and for that purpose the usefulness of the numerical scale is far less clear.  It’s unlikely that your 3 is equal in terms of painfulness to the next guy’s 3.  And words wouldn’t seem to do much better.  Language is just too high-level and abstract to communicate the intensity of experience.

But communication is possible.  If you have driven a nail through your finger and you want to convey to someone how much pain you are in, that is quite simple. All you need is a hammer and a second nail.  The “speaker” can recreate the precise sensation within the listener.

Actual mutilation can be avoided if the listener has a memory of such an experience and somehow the speaker can tap into that memory.  But not like this: “You remember how painful that was?”  “Oh yes, that was a 4.” Instead, like this: “You remember what that felt like?” “OUCH!”

Memories of pain are more than descriptions of events.  Recalling them relives the experience.  And when someone who cares about you needs to know how much help you need, actually feeling how you feel is more informative than hearing a description of how you feel.

So words are at best unnecessary for that kind of communication, at worst they get in the way.  All we need is some signal and some understanding of how that signal should map to a physical reaction in the “listener.” If sending that signal is a hard-wired response it’s less manipulable than speech.

Which is not to say that manipulation of empathy is altogether undesirable. Most of what entertains us exists precisely because our empathy-receptors are so easily manipulated.