
Apple’s latest response to the iPhone 4 antenna issue:

Upon investigation, we were stunned to find that the formula we use to calculate how many bars of signal strength to display is totally wrong. Our formula, in many instances, mistakenly displays 2 more bars than it should for a given signal strength. For example, we sometimes display 4 bars when we should be displaying as few as 2 bars. Users observing a drop of several bars when they grip their iPhone in a certain way are most likely in an area with very weak signal strength, but they don’t know it because we are erroneously displaying 4 or 5 bars. Their big drop in bars is because their high bars were never real in the first place.

Apple will soon be releasing a software update that will fix the problem by lowering the number of bars displayed on your phone.  In related news, in response to my students’ grade groveling I have re-examined the midterm and noticed that everyone’s score was 5 points higher than it should have been.  The curve has been re-calculated.

Here is the advice from Annie Duke, professional poker player and the 2006 Champion of the World Series of Rock, Scissors, Paper:

The other little small piece of advice that I would give you is that people tend to throw rock on their first throw. Throwing paper is usually not a good strategy because they might throw scissors. You should throw rock as well.

The key is, and this is the best piece of advice that I can give you, if you do think that you recognize the pattern from your opponent, it’s good to try to throw a tie as opposed to a win. A tie will very often get you a tie or a win, whereas a win will get you a win or a loss. For example, if you think that someone might throw a rock, it’s good to throw rock back at them. You should be going for ties.

If at first it sounds dumb, think again.  The idea is some combination of pattern learning and level-k thinking:  If she thinks that I think that I have figured out her pattern and it dictates that she will play Rock next, then she expects me to play Paper and so in fact she will play Scissors. That means I should play Rock because either I have correctly guessed her pattern and she will indeed play Rock and I will tie, or she has guessed that I have guessed her pattern and she will play Scissors and I will win.

She is essentially saying that players are good at recognizing patterns and that most players are at most level 2.
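A minimal sketch of that tie-versus-win logic in Python (my own illustration; the two opponent cases, either she follows the pattern or she has gone one level deeper and throws what beats my best response, come from the argument above):

# Rock-Scissors-Paper: compare "play for the win" against "play for the tie"
# when a pattern predicts the opponent's next throw.

BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
WHAT_BEATS = {loser: winner for winner, loser in BEATS.items()}  # WHAT_BEATS[x] beats x

def outcome(mine, hers):
    if mine == hers:
        return "tie"
    return "win" if BEATS[mine] == hers else "loss"

prediction = "rock"  # the pattern says she throws rock next

# Two level-k cases: she naively follows the pattern, or she has guessed
# that I guessed her pattern and throws what beats my best response.
her_throws = [prediction, WHAT_BEATS[WHAT_BEATS[prediction]]]  # rock or scissors

for label, mine in [("play for win", WHAT_BEATS[prediction]),  # paper
                    ("play for tie", prediction)]:             # rock
    results = {outcome(mine, hers) for hers in her_throws}
    print(f"{label} ({mine}): {results}")

Within these two cases, playing for the tie never loses while playing for the win can, which is Duke's point.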

Research note:  why are we wasting time analyzing penalty kicks?  Can we get data on competitive RoShamBo? While we wait for that, here is an exercise for the reader:  find the minimax strategy in this game:

Compare two studies of a medicine’s effectiveness.  In the first study there was a placebo control group.  Subjects who actually got the medicine believed with 50% probability that they were taking a sugar pill.  In the second study there was no placebo control.  Those who got the medicine knew it.

Those who actually got the medicine had better outcomes when they knew it than when they were unsure.

Our group at Columbia has completed preliminary work involving meta-analyses of randomized controlled trials comparing antidepressant medications to a placebo or active comparator in geriatric outpatients with Major Depressive Disorder (Sneed et al. 2006). In placebo controlled trials, the medication response rate was 48% and the remission rate 33%, compared to a response rate of 62% and remission rate of 43% in the comparator trials (p < .05). The effect size for the comparison of response rate to medications in the comparator and placebo controlled trials was large (Cohen’s d = 1.2).

“If you don’t have something nice to say, don’t say anything at all.”  That is usually bad advice, because then when you say nothing at all it is understood that you have only unkind things to say.

If you are trying to maximize pleasantry then your policy should depend on your listener’s preferences.  Based on what you say she is going to revise her beliefs over what you think about her.  What matters is her preferences over these beliefs.

A key fact is that you have only limited control over those beliefs.    Some of the time you will say something kind and some of the time you will say something unkind.  These will move her beliefs up and down but by the law of total probability the average value of her beliefs is equal to her prior.  You control only the variance.
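In symbols, this is just the law of total expectation: if \mu_0 is her prior and \mu_s her posterior after hearing statement s, which your policy sends with probability P(s), then

\sum_s P(s) \cdot \mu_s = \mu_0

so every policy leaves the average posterior pinned at the prior, and all you choose is the spread.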

If good feelings help at the margin more than bad feelings hurt then she is effectively risk-loving.  You should go to extremes and maximize variance.  Here the old adage applies:  you should say something nice when you have something nice to say and you should not say anything nice when you don’t.  In terms of her beliefs, it makes no difference whether you say the unkind thing or just keep quiet and allow her to infer it.  But perhaps politeness gets a lexicographic kick here and you should not say anything at all.

(One thing the standard policy ignores is the ambiguity.  Since there are potentially many unkind things you might be withholding, if she is pessimistic you might worry that she will assume the worst.  Then you should consider saying slightly-unkind things in order to prevent the pessimistic inference.  Still there is the danger of unraveling, because then when you say nothing at all she will know that what is on your mind is even worse than that.)

If she is risk-averse in beliefs then you want to go to the opposite extreme and never say anything.  She never updates her beliefs.

But prospect theory suggests that her preferences are S-shaped around the prior:  risk-averse on the upside but risk-loving on the downside.  Then it is often optimal to generate some variance but not to go to extremes.  You do this by dithering.  You never give outright compliments or insults.  Your statements are always noisy and subject to interpretation.  But the signal to noise ratio is not zero.

A full analysis of this problem would combine the tools of psychological game theory with persuasion mechanisms à la Gentzkow and Kamenica.

Jonah Lehrer has a post about why those poor BP engineers should take a break. They should step away from the dry-erase board and go for a walk. They should take a long shower. They should think about anything but the thousands of barrels of toxic black sludge oozing from the pipe.

He weaves together a few stories illustrating why creativity flows best when it is not rushed.  This is something I generally agree with and his post is a good read, but I think one of his examples needs a second look.

In the early 1960s, Glucksberg gave subjects a standard test of creativity known as the Duncker candle problem. The problem has a simple premise: a subject is given a cardboard box containing a few thumbtacks, a book of matches, and a waxy candle. They are told to determine how to attach the candle to a piece of corkboard so that it can burn properly and no wax drips onto the floor.

Oversimplifying a bit, to solve this problem there is one quick-and-dirty method that is likely to fail and then another less-obvious solution that works every time.  (The answer is in Jonah’s post so think first before clicking through.)

Now here is where Glucksberg’s study gets interesting. Some subjects were randomly assigned to a “high drive” group, which was told that those who solved the task in the shortest amount of time would receive $20.

These subjects, it turned out, solved the problem on average 3.5 minutes later than the control subjects who were given no incentives.  This is taken to be an example of the perverse effect of incentives on creative output.

The high drive subjects were playing a game.  This generates different incentives than if the subjects were simply paid for speed.  They are being paid to be faster than the others.  To see the difference, suppose that the obvious solution works with probability p and in that case it takes only 3.5 minutes.  The creative solution always works but it takes 5 minutes to come up with it.  If p is small then someone who is just paid for speed will not try the obvious solution, because it is very likely to fail and then he would have to come up with the creative solution anyway, bringing his total time to 8.5 minutes.

But if he is competing to be the fastest then he is not trying to maximize his expected speed.  As a matter of fact, if he expects everyone else to try the obvious solution and there are N others competing, then the probability is 1 - (1-p)^N that the fastest time will be 3.5 minutes.  This approaches 1 very quickly as N increases.  He will almost certainly lose if he tries to come up with a creative solution.

So it is an equilibrium for everyone to try the quick-and-dirty solution, and when they do so, almost all of them (on average a fraction 1-p of them) will fail and take 3.5 minutes longer than those in the control group.
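A quick Python check of those numbers (the 3.5, 5 and 8.5 minute figures come from the setup above; the value p = 0.1 and the field sizes N are illustrative choices of mine):

# Stylized Duncker-candle timing from the post:
# the obvious fix works with probability p and takes 3.5 minutes; if it
# fails you then need the creative fix, for 3.5 + 5 = 8.5 minutes total.
# The creative fix alone always works and takes 5 minutes.

def solo_expected_time(p):
    """Expected time for someone paid for speed who tries the obvious fix."""
    return p * 3.5 + (1 - p) * 8.5

def prob_winner_is_fast(p, n):
    """Probability at least one of n obvious-fix triers finishes in 3.5 min."""
    return 1 - (1 - p) ** n

p = 0.1
print(solo_expected_time(p))             # 8.0 > 5, so go straight to creative
for n in (5, 10, 20):
    print(n, prob_winner_is_fast(p, n))  # 0.41, 0.65, 0.88: racing favors the gamble

For p = 0.1 the gamble is a bad bet for a lone solver, yet even a field of 10 gamblers produces a 3.5-minute winner about two-thirds of the time.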

I spent one year as an Associate Professor at Boston University.  The doors in the economics building are strange because the key turns in the opposite way you would expect.  Instead of turning the key to the right in order to pull the bolt left-to-right, you turn the key to the left.  For the first month I got it wrong every morning.

Eventually I realized that I needed to do the opposite of my instinct.  And so as I was just about to turn the key to the right I would stop myself and do the opposite.  This worked for about a week.  The problem was that as soon as I started to consistently get it right, it became second nature and then I could no longer tell what my primitive instinct was and what my second-order counter-instinct was.  I would begin to turn the key to the left and then stop myself and turn the key to the right.

I have since concluded that it is basically impossible to “do the opposite” and that we are all lesser beings because of it.  We could learn from experience much faster if we had the ability to remember a) what our natural instinct is, b) whether it works, and c) to do the opposite when it doesn’t.

We could be George Costanza:

Younger siblings are said to be more prone to risky behaviors than their elders.  This usually means stuff like drugs and sex, but now it means stealing bases:

For more than 90 percent of sibling pairs who had played in the major leagues throughout baseball’s long recorded history, including Joe and Dom DiMaggio and Cal and Billy Ripken, the younger brother (regardless of overall talent) tried to steal more often than his older brother.

Cap tap: Ron Siegel.

If doctors were to fine tune their prescriptions to take maximal advantage of the placebo effect, what would they do?  It’s hard to answer this question even with existing data on the strength of the placebo effect because beliefs, presumably the key to the placebo effect, would adjust if placebo prescription were widespread.

Indeed, over the weekend I saw a paper presented by Emir Kamenica which strongly suggests that equilibrium beliefs matter for placebos.  In an experiment on the effectiveness of anti-histamines, some subjects were shown drug ads at the same time they took the drug.  The ads had an impact on the effectiveness of the drug but only for subjects with less prior experience with the same drug.  The suggestion is that those with prior experience have already reached their equilibrium placebo effect.  (It appears that the paper is not yet available for download.)

So we need a model of the placebo effect in equilibrium.  Suppose that patients get a placebo a fraction p of the time and a full dose the remaining 1-p fraction of the time.  And let q(p) be the patient’s belief in the probability the prescription will work.  Then the placebo effect means that the true probability that the prescription will work is determined by a function h which takes two arguments:  the true dosage (=1 for full dose, 0 for placebo) and the belief q.  And in equilibrium beliefs are correct:

q = p \cdot h(q, 0) + (1-p) \cdot h(q,1) \equiv \hat h(q,p)

This equilibrium condition implicitly defines a function q(p) which gives the equilibrium efficacy as a function of the placebo rate p.

The benefit of the model is that it allows us to notice something that may not have been obvious before.  Instead of using placebos by varying p, an alternative is to just lower the dose deterministically.  Then if we let d be the dosage (somewhere between 0 and 1), we get

q = h(q,d)

as the equilibrium condition which defines effectiveness q(d) now as a function of the fixed dose d.

The thing to notice is that, if the function h is continuous and monotone, then the range of q is the same whether we use placebos p or deterministic doses d.  That is, any outcome that can be implemented with placebos can be implemented by just using lower doses and no placebos.  This follows mathematically because the placebo model collapses to the deterministic model at the boundary: \hat h(q,p=0) = h(q, d=1) and \hat h(q,p=1) = h(q,d=0).
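Here is a numerical sketch of the fixed point. The linear form of h and its coefficients are invented purely for illustration; all that matters is that h is monotone in both arguments and stays in (0,1):

# Equilibrium placebo model: q solves q = p*h(q,0) + (1-p)*h(q,1).
# Illustrative h: effectiveness rises with the true dose d and with
# the patient's belief q; the coefficients are made up for the sketch.

def h(q, d):
    return 0.2 + 0.4 * d + 0.3 * q   # stays in (0,1) for q, d in [0,1]

def equilibrium_q(p, iters=200):
    """Solve q = p*h(q,0) + (1-p)*h(q,1) by fixed-point iteration."""
    q = 0.5
    for _ in range(iters):
        q = p * h(q, 0) + (1 - p) * h(q, 1)
    return q

def equilibrium_q_dose(d, iters=200):
    """Deterministic-dose counterpart: solve q = h(q,d)."""
    q = 0.5
    for _ in range(iters):
        q = h(q, d)
    return q

# The two schemes trace out the same range of outcomes:
print(equilibrium_q(p=0.0), equilibrium_q_dose(d=1.0))  # both ~0.857
print(equilibrium_q(p=1.0), equilibrium_q_dose(d=0.0))  # both ~0.286

The endpoints match, illustrating the boundary collapse: a pure placebo regime (p = 1) and a zero dose (d = 0) implement the same q, as do p = 0 and d = 1.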

Now this is just a statement about the feasible set.  The benefit of placebos may come from the ability to implement the same outcome but at lower cost.  In terms of the model this would occur if the d that satisfies q(d) = q(p) is larger than 1 - p.  That boils down to a cost-benefit calculation.  But I doubt that this kind of calculation is going to be pivotal in a debate about using placebos as medicine.

Does it ever happen to you that someone tells you something, then weeks or months pass, and the same person tells you the same thing again forgetting that they already told you before?

Why is it easier for the listener to remember than the speaker?  Is there some fundamental difference in the way memory operates?  Or is it that the memory is more evocative for the listener just because the fact being told is uniquely associated with the teller?  For the person doing the telling you are just a generic listener.  Or is it something else?  Answer below.


Tyler Cowen tweeted:

Why do chess players hold their heads hard, with their hands, when they are thinking? If it works, why don’t more thinkers do it?

To prevent overheating, of course.  You’ll notice that they typically extend their fingers and cover their foreheads, which is the hottest part.  They are maximizing surface area in order to increase heat dissipation.

Here is a suggestion for how to super-cool your cranium and over-clock your brain.  On a more serious note, here is a pipe that is surgically implanted in the skull of epileptics to reduce the intensity of seizures.

I coach my 7-year-old daughter’s soccer team.  It’s been a tough Spring season so far: they lost the first three games by 1 goal margins.  But this week they won something like 15-1.

I noticed something interesting.  In all of the close games the girls were emotionally drained. By the end of the game they didn’t have much energy left.   Many of them asked to be rotated out.

But this week nobody asked to be rotated out.  In fact this week they had the minimum number of players so each of them played the whole game and still nobody complained of being tired.  Obviously they were having fun running up the score but they didn’t get tired.

Incentives are about getting players to want conditions to  improve.  So incentives necessarily make them less happy about where they are now.  Feeling good about winning means feeling bad about not winning.  That’s the motivation.

But encouragement is about being happy about where you are now.  And it has real effects:  it energizes you.  You don’t get tired so fast when you are having fun.

There is a clear conflict between incentives and encouragement.  At the same time incentives motivate you to win, they discourage you because you are losing.  A coach who fails to recognize this is making a big mistake.

And I am not giving a touchy-feely speech about “it’s not whether you win or lose…”  I am saying that a cold-hearted coach who only cares about winning should, at the margin, put less weight on incentives to win.

If my daughter’s team loved losing, is it possible they would lose less often?  Probably not.  But that’s because the love of losing would give them an incentive to lose.  They would be discouraged when they win but that would only help them to start losing.  (Unless the opposing coach used equally insane incentives.)

Nevertheless, to love winning by 10 goals is a waste of incentive and is therefore a pure cost in terms of its effect on encouragement when the game is close.  Think of it this way:   you have a fixed budget of encouragement to spread across all states of the game.  If you make your team happy about winning by 10 goals,  that directly subtracts from their happiness about winning by only 1 goal.
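To make the fixed-budget idea concrete (a formalization I am adding, not one from the post): index game states by the score s, let u(s) be how good the team feels in state s, and hold the total fixed:

\sum_s u(s) = \bar U, \qquad \text{incentive}(s) = u(s+1) - u(s), \qquad \text{encouragement}(s) = u(s)

Raising u at a blowout score must lower it somewhere else, so extra joy about winning by 10 comes straight out of the joy, and hence the energy, available when the game is close.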

My guess is that, against a typically incentivized opponent, the optimal incentive scheme is pretty flat over a broad range. That range might even include losing by one goal.  Because when the team is losing by one goal, the positive attitude of being in the first-best equivalence class will keep them energized through the rest of the game and that’s a huge advantage.

From The McKinsey Quarterly:

Long before behavioral economics had a name, marketers were using it. “Three for the price of two” offers and extended-payment layaway plans became widespread because they worked—not because marketers had run scientific studies showing that people prefer a supposedly free incentive to an equivalent price discount or that people often behave irrationally when thinking about future consequences. Yet despite marketing’s inadvertent leadership in using principles of behavioral economics, few companies use them in a systematic way. In this article, we highlight four practical techniques that should be part of every marketer’s tool kit.

Among the key points, the one that stands out is “Make a product’s price less painful.”  This includes profiting from hyperbolic discounting and  exploiting mental accounting.  Manipulating default options and harnessing choice-set-dependent preferences also figure prominently.

Evidently marketing will soon supplant finance as the relevant outside option for new Economics PhDs bargaining over academic salaries.

Like me, ants like  dark houses/nests with small entrances.  Facing a choice between a dark nest with a large entrance (option A) and a light nest with a small entrance (option B), an ant colony faces a trade-off.  Some go this way to A and some go that way to B.  Suppose we add a third decoy nest option D. Option D is as dark as A but has an even larger entrance.  It is thus dominated by A but not by B.   How will the ant colony’s behavior change when they face the three options together versus just A and B?

Rational choice theory says that the fractions choosing A and B should not change.  Option D is dominated and should never be chosen and hence is an irrelevant alternative.  Its presence or absence should not affect the choice between A and B.

One psychological theory suggests that the proportion choosing A should go up.  Option D helps to crystallize the advantages of option A (the smaller entrance).  This may increase the perception of the advantages of A over B as well, leading to a change in the proportion of ants choosing A over B.

So what actually happens?

A controlled experiment by Edwards and Pratt answers this question.  Edwards and Pratt built nests with the properties above and had ant colonies make repeated binary and ternary choices.  They randomized the order of choices, where the nests were located, etc.  And because they were experimenting with ants, they could cruelly force the choice of nest upon the ants by destroying the old nest the ants lived in (removing its roof).

They find no significant change in the proportions choosing A vs B when the decoy D is present.  Ant colonies are rational and do not violate the axiom of independence of irrelevant alternatives (IIA).

In other work, Pratt shows that ant colonies obey transitivity (i.e. if a colony prefers A to B and B to C, it prefers A to C).

Why are ant colonies more rational than individual humans?  The authors offer a cool hypothesis: the choice between nests is typically made by sending independent scouts to the different options.  No scout visits more than one location.  The scouts’ reports are simply compared and the best option is chosen.  A human being contemplates all the choices by herself and has a harder time comparing the attributes independently, leading to a violation of IIA.

An ant colony is like a well performing and coordinated decentralized firm, with employees passing information up the hierarchy and efficient decisions coming down from the center.  Can we import lessons into designing firms?  Alas, I believe not.  A human scout evaluating a decision/option will not be as impartial as an ant scout.  He will exaggerate its qualities, hoping his option “wins”.  He hopes to get the credit for finding the implemented option, get promoted, receive stock options and retire young to the Bay Area.  In other words, career concerns ruin a simple transfer of ant colony principles to firms.  If we eliminate career concerns within the firm, we will induce moral hazard, as there is no incentive to exert costly effort to find the best decisions for the firm.  Ants in the same colony do not face the same issue as they are genetically related and have “common values”.

Still,  a thought-provoking paper and it has many references to other papers that it builds on. I am going to read more of them.

(Hat tip to Christophe Chamley for the reference)

Via Barker, a pointer to a theory from evolutionary psychology that tears are a true signal that the person crying is vulnerable and in need.

Emotional tears are more likely, however, to function as handicaps. By blurring vision, they handicap aggressive or defensive actions, and may function as reliable signals of appeasement, need or attachment.

Usually you should be skeptical that signaling is evolutionarily stable.  For example if tears convince another that you are defenseless then there is an evolutionary incentive to manipulate the signal.  Convince someone you are defenseless and then take advantage of them.

A typical exception is when the signal is primarily directed toward a family member.  Family members have common interests because they share genes.  Less incentive to manipulate the signal means that the signal has a better chance of being stable.  And babies of course have few other ways of communicating needs.

Of course children eventually do start manipulating the signal.  They learn before their parents do that they are becoming self-sufficient but they still have an incentive to free-ride on the parents’ care.  Fake tears appear.  But this is a temporary phase until the parents figure it out.  Not surprisingly, once the child reaches adulthood, crying mostly stops:  Nature takes away a still-costly but  now-useless signal.

Here’s an interesting experiment I would like to see.  Look at adults who learned a second language as a child from one of their parents.  For example, the father speaks only English but the mother speaks English and Hungarian.  English is the standard language outside of the home.

Profile the personalities of the parents.  Now have a Hungarian speaker interview the subject and profile his personality and separately have an English speaker profile the subject’s personality.  Is the subject’s personality different in the two languages and is he more like his mother when speaking Hungarian?

From Barking Up The Wrong Tree:

What determines reciprocity in employment relations? We conducted a controlled field experiment and tested the extent to which cash and non-monetary gifts affect workers’ productivity. Our main finding is that the nature of the gift, not its monetary value, determines the prevalence of reciprocal reactions. A gift in-kind results in a significant and substantial increase in workers’ productivity. An equivalent cash gift, on the other hand, is largely ineffective, even though an additional experiment showed that workers would strongly favor the gift’s cash equivalent.

It probably has nothing to do with reciprocity.  If I pay you money you have to share it with your family and then buy a car out of your share.  If I give you a car it is all yours.

This logic also often provides a psychology-free explanation of the endowment effect.  You are willing to pay at most $10,000 for a car.  But if I give you that car for free and offer to buy it back from you, you require $20,000, because you will get to keep only half of that money.

(inspired by discussions with my Behavioral Economics class.)

Update: See Ben’s comment below for another variation on the theme which also came up in class.  If you have present-biased preferences you have an endowment effect because cash will be shared with future selves, whereas instantaneous consumption is all for your present self.

Is it a superstition that babies born in a Year of the Dragon will have good luck?  The Taiwanese government wanted to dispel the superstition.

The demographic spike in 1976 was sufficiently large that governments decided to issue warnings in 1987 against having babies in Dragon years because of the problems they caused for the educational system, particularly with respect to finding teachers and classroom space. Editorials were issued that claimed no special luck or intelligence for Dragon babies and a government program in Taiwan was designed to alert parents to the special problems faced by children born in an unusually large cohort (Goodkind, 1991, p. 677 cites multiple newspaper accounts of this).

But the effort failed and another spike was seen in 1988.  Why?  Because the dragon superstition is true. In this paper by Johnson and Nye, among Asian immigrants to the US, those born in Dragon years are compared to those born in non-Dragon years.  Dragon babies are more successful as measured in terms of educational attainment.  And the difference is larger than the corresponding difference for other US residents.

And of course it turns out that this is due to the self-fulfilling nature of the superstition.  Asian Dragon babies have parents who are more successful and they are more likely to have altered their fertility timing in order to have a baby in a Dragon year.  Is this because the smarter parents were more likely to be dumb enough to believe the superstition?

Or is it because of statistical discrimination?  Since the Dragon superstition is true, being a Dragon is a signal of talent and luck.  Unless these traits are observable without error, even unlucky and untalented Dragons will be treated preferentially relative to unlucky and untalented non-Dragons.  Smart parents know this and wait until Dragon years.

Thanks to Toomas Hinnosaar for the pointer.

It’s as if someone at the New York Times scanned this blog, profiled me, and assembled an article that hits every one of my little fleemies:

(Follow closely now; this is about the science of English.) Phoebe and Rachel plot to play a joke on Monica and Chandler after they learn the two are secretly dating. The couple discover the prank and try to turn the tables, but Phoebe realizes this turnabout and once again tries to outwit them.

As Phoebe tells Rachel, “They don’t know that we know they know we know.”

Literature leverages our theory of mind.

Humans can comfortably keep track of three different mental states at a time, Ms. Zunshine said. For example, the proposition “Peter said that Paul believed that Mary liked chocolate” is not too hard to follow. Add a fourth level, though, and it’s suddenly more difficult. And experiments have shown that at the fifth level understanding drops off by 60 percent, Ms. Zunshine said. Modernist authors like Virginia Woolf are especially challenging because she asks readers to keep up with six different mental states, or what the scholars call levels of intentionality.

And they even drag evolution into it.

To Mr. Flesch fictional accounts help explain how altruism evolved despite our selfish genes. Fictional heroes are what he calls “altruistic punishers,” people who right wrongs even if they personally have nothing to gain. “To give us an incentive to monitor and ensure cooperation, nature endows us with a pleasing sense of outrage” at cheaters, and delight when they are punished, Mr. Flesch argues. We enjoy fiction because it is teeming with altruistic punishers: Odysseus, Don Quixote, Hamlet, Hercule Poirot.

Cordobés address:  Marcin Peski.

Female digger wasps prey on katydids.  But they don’t kill them.  They paralyze them and then store them in little holes they dig in the ground.  They are preparing nests where they will lay eggs and when the eggs hatch, the larvae will feast on the katydids.

Richard Dawkins and Jane Brockmann observed that it sometimes happens that two digger wasps are unknowingly tending the same nest.  Naturally, once they figure this out, there’s going to be a fight.  Dawkins and Brockmann noticed two things about these fights.  First, the wasp that wins is usually the one that has contributed more katydids to the common nest.  Second, the duration of the fight is predicted by the number of katydids contributed by the eventual loser.

For Dawkins and Brockmann the wasps are revealing a sunk-cost fallacy.  Evidently, their willingness to fight is not determined by the total reward, but instead by the individual wasp’s past investment.  The more they invested, the more they are willing to fight.

A more nuanced interpretation is that the wasps’ behavior is not a fallacy at all, but a clever hack.  The wasps really do care about the total value of the nest, but their best estimate of that value is (proportional to) their own contribution to it.  For example, a wasp may be able to “remember” the number of katydids she paralyzed (and she must, if she is able to condition her fighting intensity on that number) but not be able to count the number of katydids in the nest.  The former is going to be correlated with the latter.

Sunk cost bias:  a handy trick.

If you are one of the millions of Facebook users who play games like Playfish or Pet Society, you are a datum in Kristian Segerstrale’s behavioral economics experiments.

Instead of dealing only with historical data, in virtual worlds “you have the power to experiment in real time,” Segerstrale says. What happens to demand if you add a 5 percent tax to a product? What if you apply a 5 percent tax to one half of a group and a 7 percent tax to the other half? “You can conduct any experiment you want,” he says. “You might discover that women over 35 have a higher tolerance to a tax than males aged 15 to 20—stuff that’s just not possible to discover in the real world.”

Note that these are virtual goods that are sold through the game for (literal) money.  And here is the website of the Virtual Economy Research Network which promotes academic research on virtual economies.

Relationships that are sustained by reciprocity work like this.  When she cooperates I stay cooperative in the future.  When she egregiously cheats, the relationship breaks down.  In between these polar cases it depends on how much she let me down and whether she had good reason.  Forgiveness is rationed.  Too much forgiveness and the temptation to cheat is too great.

This is also how it should work in your relationship with yourself.  It takes discipline to keep working toward a long-term goal.  Procrastination is the temptation to shirk today with the expectation that you’ll make it up in the future.  Thus the only way to reduce the temptation is to change those expectations.  Self-discipline is the (promise, threat, expectation) that if I procrastinate now, things will only get worse in the future.  Too much self-forgiveness is self-defeating.

I came across a study by psychologists that, at first glance, casts doubt on this theory.  Students who procrastinated on their midterm exams were asked whether they forgave (forgifted ?!) themselves.  The level of forgiveness was then compared to the degree of procrastination on the final exam.  The more self-forgiving students were found to procrastinate less on the final.  The psychologists interpreted the finding in this way.

we have to forgive ourselves for this transgression thereby reducing the negative emotions we have in relation to the task so that we’ll try again. If we don’t forgive, we maintain an avoidance motivation, and we’re more likely to procrastinate.

But if we think a bit more, we can square the experiment quite nicely with the theory.  The key is to focus on the intermediate zone where forgiveness is metered out depending on the extent of the violation.  Forgiveness means that the relationship continues as usual with no punishment.  The lack of forgiveness, i.e. punishment, means that the relationship breaks down.  In the game with yourself that means that your resolve is broken and you lose the incentive to resist procrastination in the future.  Forgiveness is negatively correlated with future procrastination.

(The apparent inversion comes from the fact that the experiment relates forgiveness to future procrastination.  A naive reading of the theory is that because forgiveness reduces the incentive to work, forgiveness should predict more procrastination.  As we see this is not true going forward.  It would be true, however, looking backward.  Those who are more likely to forgive themselves are more likely to procrastinate.)

In a classic experiment, psychologists Arkes and Blumer randomized theater ticket prices to test for the existence of a sunk-cost fallacy.  Patrons who bought season tickets at the theater box office were randomly given discounts, some large, some small.  At the end of the season the researchers counted how often the different groups actually used their tickets.  Consistent with a sunk-cost fallacy, those who paid the higher price were more likely to use the tickets.

A problem with that experiment is that it was potentially confounded with selection effects.  Patrons with higher values would be more likely to purchase when the discount was small and they would also be more likely to attend the plays.  Now a new paper by Ashraf, Berry, and Shapiro uses an additional control to separate out these two effects.

Households in Zambia were offered a water disinfectant at a randomly determined price.  If the price was accepted, then the experimenters randomly offered an additional discount.  With these two treatment dimensions it is possible to determine which of the two prices affects subsequent use of the product.  They find that all of the variation in usage is explained by the initial offer price.  That is, the subjects’ revealed willingness to pay was the only determinant of usage, not the actual payment.

This is the cleanest field experiment to date on the effect of past sunk costs on later valuations and it overturns a widely cited finding.  On the other hand, Sandeep and I have a lab experiment which tests for sunk cost effects on the willingness to incur subsequent, unexpected cost increases.  We show evidence of mental accounting:  subjects act as if all costs, even those that are sunk, are relevant at each decision-making stage.  This is the opposite of the effect found by Arkes and Blumer.

(Dunce cap doff:  Scott Ogawa)

In a paper in Nature, the authors Tricomi, Rangel, Camerer, and O’Doherty used fMRI experiments to reveal that the brain is wired for egalitarianism.

Activity rose in rich people when their poor colleagues got money. In fact, it was greater in that case than when they got money themselves, which means the “rich” people’s neural activity was more egalitarian than their subjective ratings were. Whereas in “poor” people, the vmPFC and the ventral striatum only responded to getting money, not to seeing the rich getting even richer.

Neuroskeptic provides some perspective:

Notice that this is essentially a claim about psychology, not neuroscience, even though the authors used neuroimaging in this study. They started out by assuming some neuroscience – in this case, that activity in the vmPFC and the ventral striatum indicates reward i.e. pleasure or liking – and then used this to investigate psychology, in this case, the idea that people value equality per se, as opposed to the alternative idea, that “dislike for unequal outcomes could also be explained by concerns for social image or reciprocity, which do not require a direct aversion towards inequality.”

(Guang Puang ping:  Marciano Siniscalchi.)

The sound of Fiona Ritchie’s voice has the flavor of grapefruit and/or cranberry.  (Updated:  I couldn’t get the direct link to work.  So you will have to manually start one of the audio links on the page.  Any one will do.)

Via Barker:

Several studies have demonstrated some accuracy in personality attribution using only visual appearance. Using composite images of those scoring high and low on a particular trait, the current study shows that judges perform better than chance in guessing others’ personality, particularly for the traits conscientiousness and extraversion. This study also shows that attractiveness, masculinity and age may all provide cues to assess personality accurately and that accuracy is affected by the sex of both of those judging and being judged. Individuals do perform better than chance at guessing another’s personality from only facial information, providing some support for the popular belief that it is possible to assess accurately personality from faces.

Source: “Using composite images to assess accuracy in personality attribution to faces” from the British Journal of Psychology

For most aspects of personality I would bet that the reason is simply that your face causes your personality.  Attractive people are confident and extroverted, unattractive people less so.  For other aspects it’s the other way around.  People who are conscientious comb their hair, etc.

But I am often firmly convinced that I can tell how smart a person is just by looking at them.  And although I can see a few potential causalities, they seem flimsy to me:

  1. You invest more in appearance if it is a substitute for intelligence.  (But can’t they be complementary?  I cling to the belief that my wife sees them so.)
  2. Investments in appearance are short-term.  Smarter people are more patient than that.  (Tell that to Fabio.)
  3. Smart people exude confidence.  (Do they? Dumb people may be even more confident.)

Anyway, I don’t usually think that attractiveness is the key signal (not that I can say what the key signal is).  But I can imagine that I am consistently wrong in my judgments of how smart a person is.  If you assume a person is smart then everything they say tends to sound smart.  (Why else would you still be reading this?)

Roland Eisenhuth told me that when he was very young, the first time he had an examination in school his mother told him that she knew a secret for good luck.  She leaned in and spit over his shoulder.  This would give him an advantage on the exam, she told him.

Indeed it gave him a lot of confidence and confidence helped him do well on the exam.  For every school examination after that until he left for University, his mother would spit over his shoulder and he would do well.

Here are the ingredients for a performance-related superstition.  Something unusual is done before a performance, say a baseball player has chicken for dinner, and by chance he has a good game.  Probably just a fluke.  Just in case, he tries it again.  Maybe it doesn’t repeat the second time, but maybe he does have another dose of luck and it “pays off” again.  And there’s always a chance it repeats enough times in a row that it’s too unlikely to be a statistical fluke.

Now once you believe that chicken makes you a good hitter, you approach each game with confidence.  And confidence makes you a good hitter.  From now on, luck is no longer required:  your confidence means that chicken dinner correlates with a good game. And you won’t have reason to experiment any further so there will be no learning about the no-chicken counterfactual.
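To put a rough number on “too unlikely to be a statistical fluke” (the independence assumption and the example figures are mine): if a good game happens with probability q regardless of dinner, then

\Pr(k \text{ good games in a row}) = q^k, \qquad \text{e.g. } (0.5)^5 \approx 0.03

Rare for any one player, but with many players each trying some ritual, somebody will see a streak long enough to convince him.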

If you are a coach (or a parent) you want to instill superstitions in your student.  My wife has been stressing about our third-grade daughter’s first big standardized test coming up in a couple weeks.  Not me.  I am just going to spit over her shoulder.

You are a student living in a small room with no closet.  All of your clothes sit on the floor. You can never remember if they are clean or dirty, and each day you have to decide whether to do the laundry or just throw something on.

If your strategy in these circumstances is to always do the laundry, you will be doing laundry every day, often washing clothes that are already clean.  On the other hand, if your strategy is to dress and go, your clothes will never get clean.

Instead you have to randomize.  If you wash with probability p then p is the probability you will be wearing clean clothes on any given day.  Of course you would like p=1, but then you are doing laundry with probability 1 every day.  Your optimal p is strictly between 0 and 1 and trades off the probability of clean clothes p versus the probability of washing clothes that are already clean.  (The latter is equal to the probability these clothes are clean, p, multiplied by the probability you wash them, again p so it’s p².)
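To see why the optimal p is interior, attach a benefit b to wearing clean clothes and a cost c to a wasted wash; b and c are placeholder parameters I am adding, since the post only names the two probabilities. You choose p to maximize

V(p) = b \cdot p - c \cdot p^2, \qquad V'(p) = b - 2cp = 0 \;\Rightarrow\; p^* = b / 2c

which is strictly between 0 and 1 whenever b < 2c.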

The same logic applies to:

  1. I’ve been standing here in the shower for what must be a good 30 minutes; singing, sleeping, or absorbed in a proof that doesn’t work and I have forgotten whether I washed my hair. (shower cap nod:  David K. Levine)
  2. It’s dark and I lost count of how many intersections I’ve crossed and I know I have to turn left somewhere to get home. (The classic example, due to Piccione and Rubinstein.)
  3. When was the last time I called my mother?
  4. etc…

Emotions are Nature’s incentive schemes getting us to do what’s in our evolutionary interest.  But unlike the textbook principal-agent relationship, there is no intrinsic conflict of interest.  The principal gets to design the agent, essentially dictating the terms of the contract. However, in practice the contract is incomplete.  Instead of being programmed with an exhaustive set of instructions for every contingency we are designed to respond to emotions.

This second-best solution has its costs.  For example, fear can kill you.  In fact, your enemies can scare you to death.

I once tried setting my watch ahead a few minutes to help me make it to appointments on time.  At first it worked, but not because I was fooled.  I would glance at the watch, get worried that I was late, then remember that the watch is fast.  But that brief flash acted as a sort of preview of how it feels to be late.  And the feeling is a better motivator than the thought in the abstract.

But that didn’t last very long.  The surprise wore off.  I wonder if there are ways to maintain the surprise.  For example, instead of setting the watch a fixed time ahead, I could set it to run too fast so that it gained an extra minute every week or month.  Then if I have adaptive expectations I could consistently fool myself.

I think I might adjust to that eventually though.  How about a randomizing watch? I don’t think you want a watch that just shows you a completely random time, but maybe one that randomly perturbs the time a little bit.  Would a mean-preserving spread make sense?  That way you have the right time on average but if you are risk-averse you will move a little faster.

You could try to exploit “rational inattention.”  You could set the watch to show the true time 95% of the time and the remaining 5% of the time add 5 minutes.  Your mind thinks that it’s so likely that the watch is correct that it doesn’t waste resources on trying to research the small probability event that it’s not.  Then you get the full effect 5% of the time.
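A toy version of that 95/5 watch in Python, with the 5 percent probability and 5-minute jump taken from the paragraph above and everything else assumed for the sketch:

import random

def displayed_time(true_minutes):
    """Show the true time 95% of the time; add 5 minutes the other 5%."""
    return true_minutes + (5 if random.random() < 0.05 else 0)

# On average the watch runs 0.05 * 5 = 0.25 minutes fast, but if your
# mind ignores the rare perturbation, each jump lands as a full surprise.
samples = [displayed_time(0) for _ in range(100_000)]
print(sum(samples) / len(samples))  # ~0.25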

Maybe it’s simpler to just set all of your friends’ watches to run too slow.

It’s been blog fodder the past week.

In other words, to pull off a successful boast, you need it to be appropriate to the conversation. If your friend, colleague, or date raises the topic, you can go ahead and pull a relevant boast in safety. Alternatively, if you’re forced to turn the conversation onto the required topic then you must succeed in provoking a question from your conversation partner. If there’s no question and you raised the topic then any boast you make will leave you looking like a big-head.

It makes perfect sense.  First of all, purely in terms of how much I impress you, an unprovoked boast is almost completely ineffective.  Because everybody in the world has something to boast about.  If I get to pick the topic then I will pick that one.  If you pick the topic or ask the question then the odds you serve me a boasting opportunity are long unless I am truly impressive on many dimensions.

And it follows from that why you think I am a jerk for blowing my own horn.  I reveal either that I don’t understand the logic and am just trying to impress you, or that I think you don’t understand it and can fool you into being impressed by me.