You are currently browsing the tag archive for the ‘psychology’ tag.
How can a guy who never misses a field goal miss an easy one at a crucial moment?
Still, a semiconsensus is developing among the most advanced scientists. In the typical fight-or-flight scenario, scary high-pressure moment X assaults the senses and is routed to the amygdala, aka the unconscious fear center. For well-trained athletes, that’s not a problem: A field goal kick, golf swing or free throw is for them an ingrained action stored in the striatum, the brain’s autopilot. The prefrontal cortex, our analytical thinker, doesn’t even need to show up. But under the gun, that super-smart part of the brain thinks it’s so great and tries to butt in. University of Maryland scientist Bradley Hatfield got expert dart throwers and marksmen to practice while wearing a cumbersome cap full of electrodes. Without an audience, their brains showed very little chatter among regions. But in another study, when dart throwers were faced with a roomful of people, the pros’ neural activity began to resemble that of a novice, with more communication from the prefrontal cortex.
When I was in the 6th grade I won our school’s spelling bee going away. The next level was the district-wide spelling bee, televised on community access cable. My amygdala tried to insert an extra ‘u’ into the word tongue and I was out in the first round.
Teller as in Penn &. He’s out to teach neuroscientists a thing or two about deception.
I’m all for helping science. But after I share what I know, my neuroscientist friends thank me by showing me eye-tracking and MRI equipment, and promising that someday such machinery will help make me a better magician.
I have my doubts. Neuroscientists are novices at deception. Magicians have done controlled testing in human perception for thousands of years.
I remember an experiment I did at the age of 11. My test subjects were Cub Scouts. My hypothesis (that nobody would see me sneak a fishbowl under a shawl) proved false and the Scouts pelted me with hard candy. If I could have avoided those welts by visiting an MRI lab, I surely would have.
In the article he ticks off a list of mental shortcuts that the magician exploits for his tricks. You should read it. Visor visit: Jacob Grier.
I was having coffee outside and I saw ants crawling on my feet so I moved to another table.
Then I rewound my stream of consciousness about 30 seconds and I was able to recall that in fact there was a little more going on than that. I was daydreaming while sipping my coffee and I felt ticklishness on my toes and ankles. That made me look down and that’s when I saw the ants.
Now the fact that I had to rewind to remember all of this says something interesting. Had I looked down and not seen ants, i.e. if it turned out it was just the precious Singapore wind blowing on my cozy bare feet, then this episode would never have penetrated my conscious mind. I would have gone on daydreaming without distraction.
The subconscious mind pays attention to a million things outside of our main line of being and only when it detects something worth paying attention to does it intervene in some way. There are two very common interventions. One is to react at a subconscious level, e.g. shooing a fly while I go on daydreaming. Another is to commandeer consciousness and force a reaction, e.g. paying attention to an attractive potential mate passing by.
Both of these involve the subconscious mind making a decisive call as to what is going on, what is its level of significance, and how to dispense with it. It’s all or nothing: let the conscious mind go on without interruption or completely usurp conscious attention.
But the ant episode exemplifies a third type. My subconscious mind effectively said something like this: “I am not sure what is going on here, but I have a feeling that it’s something that we need to pay attention to. But to figure that out I need the expertise and private information available only to conscious visual attention and deliberation. I am not telling you what to do because I don’t know, I am just saying you should check this out.”
And so a tiny slice of consciousness gets peeled off to attend to that and only on the basis of what it sees is it decided whether the rest has to be distracted too.
Email is the superior form of communication as I have argued a few times before, but it can sure aggravate your self-control problems. I am here to help you with that.
As you sit in your office working, reading, etc., the random email arrival process is ticking along inside your computer. As time passes it becomes more and more likely that there is email waiting for you and if you can’t resist the temptation you are going to waste a lot of time checking to see what’s in your inbox. And it’s not just the time spent checking because once you set down your book and start checking you won’t be able to stop yourself from browsing the web a little, checking twitter, auto-googling, maybe even sending out an email which will eventually be replied to thereby sealing your fate for the next round of checking.
One thing you can do is activate your audible email notification so that whenever an email arrives you will be immediately alerted. Now I hear you saying “the problem is my constantly checking email, how in the world am I going to solve that by setting up a system that tells me when email arrives? Without the notification system at least I have some chance of resisting the temptation because I never know for sure that an email is waiting.”
Yes, but it cuts two ways. When the notification system is activated you are immediately informed when an email arrives and you are correct that such information is going to overwhelm your resistance and you will wind up checking. But, what you get in return is knowing for certain when there is no email waiting for you.
It’s a very interesting tradeoff and one we can precisely characterize with a little mathematics. But before we go into it, I want you to ask yourself a question and note the answer before reading on. On a typical day if you are deciding whether to check your inbox, suppose that the probability is p that you have new mail. What p is going to get you to get up and check? We know that you’re going to check if p=1 (indeed that’s what your mailbeep does, it puts you at p=1.) And we know that you are not going to check when p=0. What I want to know is the threshold above which it’s sufficiently likely that you will check and below which it’s sufficiently unlikely that you’ll keep on reading. Important: I am not asking you what policy you would ideally stick to if you could control your temptation, I am asking you to be honest about your willpower.
Ok, now that you’ve got your answer let’s figure out whether you should use your mailbeep or not. The first thing to note is that the mail arrival process is a Poisson process: the probability that an email arrives in a given time interval is a function only of the length of time, and it is determined by the arrival rate parameter r. If you receive a lot of email you have a large r, if the average time spent between arrivals is longer you have a small r. In a Poisson process, the elapsed time before the next email arrives is a random variable and it is governed by the exponential distribution.
Let’s think about what will happen if you turn on your mail notifier. Then whenever there is silence you know for sure there is no email, p=0 and you can comfortably go on working temptation free. This state of affairs is going to continue until the first beep at which point you know for sure you have mail (p=1) and you will check it. This is a random amount of time, but one way to measure how much time you waste with the notifier on is to ask how much time on average will you be able to remain working before the next time you check. And the answer to that is the expected duration of the exponential waiting time of the Poisson process. It has a simple expression:
Expected time between checks with notifier on = 1/r
Now let’s analyze your behavior when the notifier is turned off. Things are very different now. You are never going to know for sure whether you have mail but as more and more time passes you are going to become increasingly confident that some mail is waiting, and therefore increasingly tempted to check. So, instead of p lingering at 0 for a spell before jumping up to 1 now it’s going to begin at 0 starting from the very last moment you previously checked but then steadily and continuously rise over time converging to, but never actually equaling 1. The exponential distribution gives the following formula for the probability at time T that a new email has arrived.
Probability that email arrives at or before a given time T = 1 − exp(−rT)
Now I asked you what is the p* above which you cannot resist the temptation to check email. When you have your notifier turned off and you are sitting there reading, p will be gradually rising up to the point where it exceeds p* and right at that instant you will check. Unlike with the notification system this is a deterministic length of time, and we can use the above formula to solve for the deterministic time T at which you succumb to temptation. It’s given by
Time between checks when the notifier is off = −ln(1 − p*)/r
And when we compare the two waiting times we see that, perhaps surprisingly, the comparison does not depend on your arrival rate r (it appears in the denominator of both expressions so it will cancel out when we compare them.) That’s why I didn’t ask you that, it won’t affect my prescription (although if you receive as much email as I do, you have to factor in that the mail beep turns into a Geiger counter and that may or may not be desirable for other reasons.) All that matters is your p* and by equating the two waiting times we can solve for the crucial cutoff value that determines whether you should use the beeper or not.
The beep increases your productivity iff your p* is smaller than 1 − 1/e
This is about .63, so if your p* is less than .63, meaning that your temptation is so strong that you cannot resist checking any time you think there is at least a 63% chance of new mail waiting for you, then you should turn on your new mail alert. If you are less prone to temptation then you should silence it. This is life-changing advice and you are welcome.
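The tradeoff is easy to check numerically. Here is a minimal sketch; the function name and the sample values of r and p* are my own illustrative choices, not from the post:

```python
import math

def working_time(r, p_star):
    """Average stretch of uninterrupted work per cycle under each regime.

    r: Poisson arrival rate of email (arrivals per hour, say).
    p_star: the belief threshold at which temptation wins and you check.
    """
    with_beep = 1.0 / r                          # mean of the exponential waiting time
    without_beep = -math.log(1.0 - p_star) / r   # solve 1 - exp(-r*T) = p_star for T
    return with_beep, without_beep

# The beep helps iff p_star is below this cutoff, regardless of r.
CUTOFF = 1.0 - math.exp(-1.0)  # about 0.63
```

At p* exactly equal to the cutoff the two waiting times coincide for any r, which is why the arrival rate drops out of the prescription.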
Now, for the vapor mill (and feel free to profit): we do not content ourselves with these two extreme mechanisms. We can theorize about what the optimal notification system would be. It’s very counterintuitive to think that you could somehow “trick” yourself into waiting longer for email but in fact, even though you are the perfectly-rational-despite-being-highly-prone-to-temptation person that you are, you can. I give one simple mechanism, and some open questions, below the fold.
As the director of recruiting for your department you sometimes have to consider Affirmative Action motives. Indeed you are sympathetic to Affirmative Action yourself and even on your own your recruiting policy would internalize those motives. But in fact your institution has a policy. You perceive clear external incentives coming from that policy.
Now this creates a dilemma. For any activity like this there is some socially optimal level and it combines your own private motivations with any additional external interests. But the dilemma for you is how these should be combined. One possibility is that the public motive and your own private interest stem from completely independent reasons. Then you should just “add together” the weight of the external incentives you feel plus those of your own. But it could be that what motivates your Dean to institutionalize affirmative action is exactly what motivates you. In this case he has just codified the incentives you would be responding to anyway, and rather than adding to them, his external incentives should perfectly crowd out your own.
There is no way of knowing which of these cases, or where in between, the true moral calculation is. That is a real dilemma, but I want to think of it as a metaphor for the dilemma you face in trying to sort out the competing voices in your own private moral decisions.
Say you have a close friend and you have an opportunity to do something nice for them, say buy them a birthday gift. You think about how nice your friend has been to you and decide that you should be especially nice back. But compared to what? Absent that deliberative calculation you would have chosen the default level of generosity. So what your deliberation has led you to decide is that you should be more generous than the default.
But how do you know? What exactly determined the default? One possibility is that the default represents your cumulative wisdom about how nice you should be to other people in general. Then your reflection on this particular friend’s particular generosity should increment the default by a lot. But surely that’s not the relevant default. He’s your friend, he’s not just an arbitrary person (you wouldn’t even be considering giving a gift to an arbitrary person.) No doubt your instinctive inclination to be generous to your friend already encodes a lot of the collected memory and past reflection that also went into your most recent conscious deliberation. And as long as there is any duplication, there should be crowding out. So you optimally moderate the enthusiasm that arises from your conscious calculation.
But how much? That is a dilemma.
A question raised over dinner last week. A group of N diners are dining out and the bill is $100. In scenario A, they are splitting the check N ways, with each paying by credit card and separately entering a gratuity for their share of the check. In scenario B, one of them is paying the whole check.
In which case do you think the total gratuity will be larger? Some thoughts:
- Because of selection bias, it’s not enough to cite folk wisdom that tables who split the check tip less (as a percentage): At tables where one person pays the whole check that person is probably the one with the deepest pockets. So field data would be comparing the max versus the average. The right thought experiment is to randomly assign the check.
- Scenario B can actually be divided into two subcases. In Scenario B1, you have a single diner who pays the check (and decides the tip) but collects cash from everyone else. In Scenario B2 the server divides the bill into N separate checks and hands them to each diner separately. We can dispense with B1 because the guy paying the bill internalizes only 1/Nth of the cost of the tip so he will clearly tip more than he would in Scenario A. So we are really interested in B2.
- One force favoring larger tips in B2 is the shame of being the lowest tipper at the table. In both A and B2 a tipper is worried about shame in the eyes of the server but in B2 there are two additional sources. First, beyond being a low tipper relative to the overall population, having the server know that you are the lowest tipper among your peers is even more shameful. But even more important is shame in the eyes of your friends. You are going to have to face them tomorrow and the next day.
- On the other hand, B2 introduces a free-rider effect which has an ambiguous impact on the total tip. The misers are likely to be even more miserly (and feel even less guilty about it) when they know that others are tipping generously. But as long as it is known that there are misers at the table, the generous tippers will react to this by being even more generous to compensate. The total effect is an increase in the empirical variance of tips, with ambiguous implications for the total.
- However I think the most important effect is a scale effect. People measure how generous they are by the percentage tip they typically leave. But the cost of being a generous tipper is the absolute level of the tip not the percentage. When the bill is large its more costly to leave a generous tip in terms of percentage. So the optimal way to maintain your self-image is to tip a large percentage when the bill is small and a smaller percentage when the bill is large. This means that tips will be larger in scenario B2.
- One thing I haven’t sorted out is what to infer from the common restaurant policy of adding a gratuity for large parties. On the one hand you could say that it is evidence of the scale effect described in the previous point. The restaurant knows that a large party means a large check and hence a lower tip percentage. However it could also be that the restaurant knows that large parties are more likely to be splitting the check, and then the policy would reveal that the restaurant believes that B2 has lower tips. Does anybody know if restaurants continue to add a default gratuity when the large party asks to have the check split?
- The right dataset to test this is the following. You want to track customers who sometimes eat alone and sometimes eat with larger groups, and compare the tip they leave when they eat alone to the tip they leave when part of a group. The hypothesis implied by the shame and scale effects above is that their tips will be increasing across these three cases: when they pay for the whole group, when they eat alone, when they split the check.
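The scale effect can be given a back-of-the-envelope model. Suppose (my assumption, purely for illustration) self-image rewards the tip percentage logarithmically while the cost is the absolute dollar amount, so the tipper maximizes a·ln(pct) − pct·bill. The first-order condition gives an optimal percentage of a/bill, which falls as the check grows:

```python
def optimal_tip_pct(bill, a=3.0):
    """Maximizer of a*ln(pct) - pct*bill: the first-order condition
    a/pct - bill = 0 gives pct = a/bill, declining in the bill size.
    (The functional form and the value of a are illustrative assumptions.)"""
    return a / bill
```

So a solo diner with a $30 check tips a higher percentage than the same diner facing a $100 group check, which is the pattern the scale effect predicts.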
(Thanks to those who commented on G+)
Here’s a card game: You lay out the A,2,3 of Spades, Diamonds, Clubs in random order on the table face up. So that’s 9 cards in total. There are two players and they take turns picking up cards from the table, one at a time. The winner is the first to collect a triplet where a triplet is any one of the following sets of three:
- Three cards of the same suit
- Three cards of the same value
- Ace of Spades, 2 of Diamonds, 3 of Clubs
- Ace of Clubs, 2 of Diamonds, 3 of Spades
Got it? Ok, this game can be solved and the solution is that with best play the result is a draw, neither player can collect a triplet. See if you can figure out why. (Drew Fudenberg got it almost immediately [spoiler.]) Answer and more discussion are after the jump.
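The claim that best play ends in a draw can be verified by brute force. Below is a sketch (the encoding of cards and the function names are my own): a memoized minimax over all ways the two players can claim the nine cards, with the eight winning triplets listed explicitly.

```python
from functools import lru_cache
from itertools import product

# Cards encoded as (value, suit), with A = 1 and suits S, D, C.
CARDS = tuple(product((1, 2, 3), "SDC"))

# The eight winning triplets: three suits, three values, two "diagonals".
TRIPLETS = (
    [{(v, s) for v in (1, 2, 3)} for s in "SDC"]
    + [{(v, s) for s in "SDC"} for v in (1, 2, 3)]
    + [{(1, "S"), (2, "D"), (3, "C")},
       {(1, "C"), (2, "D"), (3, "S")}]
)

def has_triplet(hand):
    return any(t <= set(hand) for t in TRIPLETS)

@lru_cache(maxsize=None)
def solve(mover, other):
    """Game value for the player about to pick: +1 win, -1 loss, 0 draw."""
    remaining = [c for c in CARDS if c not in mover and c not in other]
    if not remaining:
        return 0  # all nine cards taken, nobody completed a triplet
    best = -1
    for c in remaining:
        new_hand = tuple(sorted(mover + (c,)))
        if has_triplet(new_hand):
            return 1  # picking c completes a triplet immediately
        best = max(best, -solve(other, new_hand))
    return best
```

Running `solve((), ())` returns 0: neither player can force a triplet. (The structure of the eight triplets is exactly that of the eight lines of tic-tac-toe, which is the heart of the answer.)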
It’s the canonical example of reference-dependent happiness. Someone from the Midwest imagines how much happier he would be in California but when he finally has the chance to move there he finds that he is just as miserable as he was before.
But can it be explained by a simple selection effect? Suppose that everyone who lives in the Midwest gets a noisy but unbiased signal of how happy they would be in California. Some overestimate how happy they would be and some underestimate it. Then they get random opportunities to move. Who is going to take that opportunity? Those who overestimate how happy they will be. And so when they arrive they are disappointed.
It also explains why people who are forced to leave California, say for job-related reasons, are pleasantly surprised at how happy they can be in the Midwest. Since they hadn’t already moved voluntarily, it’s likely that they underestimated how happy they would be.
These must be special cases of this paper by Eric van den Steen, and it’s similar to the logic behind Lazear’s theory of the Peter Principle. (For the latter link I thank Adriana Lleras-Muney.)
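The selection story is easy to see in a simulation. A minimal sketch (the normal distributions, the move threshold, and all parameter values are my own illustrative assumptions):

```python
import random

def average_disappointment(n=100_000, noise=1.0, seed=0):
    """Each person sees an unbiased but noisy signal of their true
    happiness gain from moving, and moves only if the signal is positive.
    Among movers, the signal exceeds the truth on average: systematic
    disappointment with no bias anywhere in the forecasts."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(n):
        true_gain = rng.gauss(0.0, 1.0)
        signal = true_gain + rng.gauss(0.0, noise)  # unbiased forecast
        if signal > 0:                              # only apparent winners move
            gaps.append(signal - true_gain)
    return sum(gaps) / len(gaps)
```

The average gap comes out strictly positive: conditioning on a favorable signal selects for favorable noise, so movers are disappointed on average even though nobody's forecast was biased.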
“Improvisational theater” always means comedy. There doesn’t seem to exist any improvisational tragedy/drama. Why? I don’t think it’s because improvised drama would not be as interesting or entertaining as improvised comedy.
- It’s just selection. People become comedians because they are funny in real life. To be funny in real life you have to know how to create humor out of the random events that happen around you. People become dramatic actors if they are good at understanding and reflecting dramatic themes in text.
- It’s just training. Improvisation is what you practice if you want to do comedy. It’s not a useful skill for dramatic actors (absent an already existing market for improvising tragedians.)
- Improvisation is by its nature funny. Seeing something you don’t expect is usually going to be funny even if it is nominally tragic. Like slipping on a banana peel. So improvised tragedy is just a contradiction in terms.
- To make drama work the players must have a high degree of coordination in terms of the development of the story and that is too hard to achieve through improvisation. By contrast, absurd plotlines add to the comedic effect of improvisation.
- Improvised drama would indeed be no worse than improvised comedy but that’s not the relevant comparison. It would be much worse than scripted drama. In other words drama has a larger range of quality than comedy and to hit the highs you need a script.
- Improvisation inevitably breaks the fourth wall. The audience is wondering “can they do it?” and the actors are self-consciously playing on that tension. Breaking the fourth wall tends to heighten comedy but cheapen drama.
(Plundered from a conversation I had with Chris Romeo.)
If you take your placebos on time and never miss a “dose” you are less likely to die.
Here’s the big finding: in the placebo group of 1174 patients, the patients who took all of their placebo pills on time (the good adherers) were significantly less likely to die than the patients who missed lots of doses. People who took over 75% as directed were 40% less likely to die than those with less than 75% adherence.
Neuroskeptic has the story, and it appears not to be simply because healthy people are also more responsible: the researchers controlled for measures of health.
Faced with a morally ambiguous choice, you are sometimes torn between conflicting motivations. And it can get to the point where you can’t really figure out which one is really driving you. Are you calling your old girlfriend because only she can give you the right advice about your sick cat, or because you just want to hear her voice? Are you recommending your colleague for the committee because he’s the right guy for the job or because you don’t want to do it yourself? Do you write a daily blog because it’s a great way to hash out new ideas or because you just love the attention?
From a conventional point of view it’s hard to understand how we could doubt our own motivations. At the moment of decision we can articulate at a conscious level what the right objective is. (If not, then on what basis would we have to be suspicious of ourselves?) And we should evaluate all the possible consequences of the action that tempts us in light of that objective and make the best choice.
So self-doubt is a smoking gun showing that this conventional framework omits an important friction. Here’s my theory of what that friction is.
Information comes in millions of tiny pieces over time. It is beyond our memory and our conscious capacity to recall and assemble all of those data when called upon to make a decision that relies on it. Instead we discard the details and just store summary statistics. When it comes time to make a decision, the memory division of our decision-making apparatus steps up and presents the relevant summary statistics.
The instinctive feeling that “I should do X” is what it feels like when the reported summary statistics point in favor of X. It has an instinctive quality because it is entirely pre-conscious. Conscious deliberation begins only after that initial inclination is formed.
At that stage your task is to verify whether the proposed course of action is consistent with your current motivation and the specific details of the situation you find yourself in. But that decision is necessarily made with limited information because you only have the summary statistics to go on.
Any divergence between your present frame of mind and the frame of mind that you were in when you recorded and stored those summary statistics can give you cause for doubting your instincts.
That suggests an interesting behavioral framework. The decision maker is composed of two agents, an Advisor and a Decider. The Advisor has all of the information about the payoffs to different actions and he makes recommendations to the Decider who then takes an action. The friction is that the Advisor and Decider’s preferences are different and the difference fluctuates over time. Thus, at any point in time the Decider must resolve a conflict between his own objective and the unknown objective of the Advisor.
Doctors sometimes resist prescribing costly diagnostic procedures, saying that the result of the test would be unlikely to affect the course of treatment. But what we know about placebo effects for medicine should have implications also for the value of information, even when it leads to no objective health benefits.
I have a theory of how placebos work. The idea is that our bodies, either through conscious choices that we make or simply through physiological changes, must make an investment in order to get healthy. Being sick is like being, perhaps temporarily, below the threshold where the body senses that the necessary investment is worth it. A placebo tricks the body into thinking that we are going to get at least marginally more healthy, and that pushes us above the threshold, triggering the investment which makes us healthy.
The same idea can justify providing information that has no instrumental value. Suppose you have an injury and are considering having an MRI to determine how serious it is. Your doctor says that surgery is rarely worthwhile and so even if the MRI shows a serious injury it won’t affect how you are treated.
But you want to know. For one thing the information can affect how you personally manage the injury. That’s instrumental value that your doctor doesn’t take into account.
But even if there were nothing you could consciously do based on the test result, there may be a valuable placebo reason to have the MRI. If you find out that the injury is mild, the psychological effect of knowing that you are healthy (or at least healthier than you previously thought) can be self-reinforcing.
The downside of course is that when you find out that the injury is serious you get an anti-placebo effect. So the question is whether you are better off on average when you become better informed about your true health status.
If the placebo effect works because the belief triggers a biological response then this is formally equivalent to a standard model of decision-making under uncertainty. Whenever a decision-maker will optimally condition his decision on the realization of information, then the expected value of learning that information is positive.
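That last claim can be checked with a toy calculation. All the numbers below are invented for illustration: two states (the injury is mild or serious), two "actions" for the body (invest in recovery or rest), and a payoff for each pair.

```python
# Prior belief that the injury is mild (illustrative).
P_MILD = 0.7

# Payoffs for each (action, state) pair (illustrative numbers).
PAYOFF = {("invest", "mild"): 10, ("invest", "serious"): 2,
          ("rest", "mild"): 6, ("rest", "serious"): 4}

def best(p):
    """Expected payoff of the better action given belief p = P(mild)."""
    return max(p * PAYOFF[(a, "mild")] + (1 - p) * PAYOFF[(a, "serious")]
               for a in ("invest", "rest"))

no_info = best(P_MILD)                                  # act on the prior alone
with_info = P_MILD * best(1.0) + (1 - P_MILD) * best(0.0)  # get the MRI first
```

Here `no_info` is 7.6 (invest blindly) while `with_info` is 8.2 (invest if mild, rest if serious), so the MRI has positive expected value even though it changes no external treatment. This is the general pattern: whenever the decision optimally conditions on the information, the expected value of information is non-negative, because taking the max state-by-state can only improve on taking it once under the prior.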
Comedians are loath to follow a better act. But musicians not so much. Definitely not academics. Why?
- Comedy is more vertically differentiated. It’s really funny, just a little funny, or not funny. The subject matter adds another dimension but that’s not so important for the ultimate impact. Music is more horizontally differentiated. So the opening act can be really good at what they do, but you can still please the audience if you’re not quite as good but do something different. On this score academics are more like musicians.
- Laughs are physical. You only have so many of those to give in a night. Whereas good music has the effect of putting you in a mental state that makes you more receptive to even more music. Here academic talks are more like comedy. The audience gets taxed.
- Headlining musicians always degrade the quality of the opening act by giving them less stage space and limited lighting and other effects. In large conferences academics do the same thing by distinguishing the “plenary” talks from the rest. (Get this: in Istanbul this summer I am giving a semi-plenary talk.) There is no obvious way to do this for comedy.
- Music is played by groups; comedians are always solo. Somehow the head-to-head comparison is less exact for groups. Solo singers are probably more reluctant to follow better singers than groups are when following other groups. Academics get to blame their backstage co-authors.
Some topics evolve by occasional big news events interspersed by long periods of little or no news. The public reacts dramatically to the big news events and seems to ignore the slow news.
For example, a terrorist attack is followed by general paranoia and a tightening of security. But no matter how much time passes without another attack, there never seems to be a restoration of the old equilibrium. News is like a ratchet with each big reaction building directly upon the last, and the periods in-between only setting the stage for the next.
The usual way to interpret this is an over-reaction to the salient information brought by big news events, and a failure to respond to the subtle information conveyed by a lack of big news. We notice when the dog barks but we don’t notice when it doesn’t.
But even a perfectly rational and sophisticated public exhibits a news ratchet. That’s because there is a difference between big news and small news in the way it galvanizes the public. Large changes in policy require a coordinated movement by a correspondingly large enough segment of the population motivated to make the change. Individuals are so motivated only if they know that they are part of a large enough group. Big events create that knowledge.
During the slow news periods all of these individuals are learning that those measures are less and less necessary. But that learning takes place privately and in silence. Never will enough time pass that everyone can confidently conclude that everyone else has confidently concluded that …. that everyone has figured this out. So there will never be the same momentum to undo the initial reaction as there was to inflame it.
From a science fiction writer, who should know.
So, yeah: In a film with impossibly large spiders, talking trees, rings freighted with corrupting evil, Uruks birthed from mud (not to mention legions of ghost warriors and battle elephants larger than tanks), are we really going to complain about insufficiently dense lava? Because if you’re going to demand that it be accurate in a physical sense, I want to know why you’re giving the rest of that stuff a pass. If you’re going to complain that the snowman flies, you should also be able to explain why it’s okay to have it eat hot soup.
Read on for the Flying Snowman theory.
Suppose our minds have a hot state and a cool state. In the cool state we are rational and make calculated tradeoffs between immediate rewards and payoffs that require investment of time and effort. But when the hot state takes over we abandon deliberation and just react on instinct.
The hot state is there because there are circumstances where the stakes are too high and our calculations too slow or imperfect. You are being attacked, the food in front of you smells funky, that bridge looks unstable. No matter how confident your cool head might be, the hot state grabs the wheel and forces you to do the safe thing.
Suppose all of that is true. What does that mean when a situation looks borderline and you see that instincts haven’t taken over? Your cool, calculating head rationally infers that this must be a safer situation than it would otherwise appear. And you are therefore inclined to take more risks.
But then the hot state had better step in on those borderline situations to stop you from taking those excessive risks. Except that now the borderline has moved a little bit toward the safe end. Now when the hot state doesn’t take over it means it’s even safer, and so on.
And of course there is the mirror image of this problem where the hot state takes over to make sure you take an urgent risk. A potential mate is in front of you but the encounter has questionable implications for the future. Physical attraction receives a multiplier. If it is not overwhelming then all of the warning signs are magnified.
When you shop for a gift, your recipient observes only what you bought, and not what alternatives you considered.
Why would price matter more to givers than receivers? Dr. Flynn and his Stanford colleague, Gabrielle Adams, attribute it to the “egocentric bias” of givers who focus on their own experience in shopping. When they economize by giving a book, they compare it with the bracelet that they passed up.
But the recipients have a different frame of reference. They don’t know anything about the bracelet, so they’re not using it for comparison. The salient alternative in their minds may be the possibility of no gift at all, in which case the book looks wonderfully thoughtful.
Click through for an excellent article on giving, touching on the potlatch, the gift registry, and re-gifting.
You and your partner have to decide on a new venture. Maybe you and your sweetie are deciding on a movie, you and your co-author are deciding on which new idea to develop, or you and your colleague are deciding which new Assistant Professor to hire.
Deliberation consists of proposals and reactions. When you pitch your idea you naturally become attached to it. It’s your idea, your creation. Your feelings are going to be hurt if your partner doesn’t like it.
Maybe you really are a dispassionate common interest maximizer, but there’s no way for your partner to know that for sure. You try to say “give me your honest opinion, I promise I have thick skin, you won’t hurt my feelings.” But you would say that even if it’s a little white lie.
The important thing is that no matter how sensitive you actually are, your partner believes that there is a chance your feelings will be hurt if she shoots down your idea. And she might even worry that you would respond by feeling resentful towards her. All of this makes her reluctant to give her honest opinion about your idea. The net result is that some inferior projects might get adopted because concern for hurt feelings gets in the way of honest information exchange.
Unless you design the mechanism to work around that friction. The basic problem is that when you pitch your idea it becomes common knowledge that you are attached to it. From that moment forward it is common knowledge that any opinion expressed about the idea has the chance of causing hurt feelings.
So a better mechanism would change the timing to remove that feature. You and your partner first announce to one another which options are unacceptable to you. Now all of the rejections have been made before knowing which ones you are attached to. Only then do you choose your proposal from the acceptable set.
If your favorite idea has been rejected then for sure you are disappointed. But your feelings are not hurt because it is common knowledge that her rejection is completely independent of your attachment. And for exactly that reason she is perfectly comfortable being honest about which options are unacceptable.
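The two-stage timing above is simple enough to sketch in code. The movie names and scores below are invented for illustration; the post doesn’t commit to any implementation:

```python
def veto_first(options, vetoes_you, vetoes_partner, your_score):
    # Stage 1: both partners announce their unacceptable options before
    # anyone has pitched (and become attached to) a favorite.
    acceptable = [o for o in options if o not in vetoes_you | vetoes_partner]
    # Stage 2: the proposer picks her favorite from the mutually acceptable
    # set; the earlier vetoes can't be read as comments on her proposal.
    return max(acceptable, key=your_score, default=None)

movies = ["horror", "romcom", "documentary", "musical"]
scores = {"horror": 3, "documentary": 2, "romcom": 1, "musical": 0}
pick = veto_first(movies, {"musical"}, {"horror"}, scores.get)
print(pick)  # "documentary": the proposer's best option surviving stage 1
```

The key design choice is that the vetoes are collected before any proposal exists, so a rejection carries no information about whose idea it kills.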
This is going to work better for movies and new Assistant Professors than it is for research ideas, because we know in advance the universe of all movies and job market candidates.
Research ideas and other creative ventures are different because there is no way to enumerate all of the possibilities beforehand and reject the unacceptable ones. Indeed the real value of a collaborative relationship is that the partners are bringing to the table brand new previously unconceived-of ideas. This makes for a far more delicate relationship.
We can thus classify relationships according to whether they are movie-like or idea-like, and we would expect the first kind to be easier to sustain with second-best mechanisms, whereas the second requires real trust and honesty.
(inspired by a conversation with +Emil Temnyalov and Jorge Lemus)
The Missouri Gaming Commission is deciding whether to scrap a voluntary lifetime blacklist for problem gamblers and replace it with a five-year suspension. That would allow nearly 11,000 self-banned gamblers back into the state’s 12 riverboat casinos. The self-exclusion list, implemented in 1996, has been a centerpiece of Missouri’s efforts to manage gambling addiction, and has been emulated in at least eight other states—usually without the lifetime ban.
A laudable use of brain scanner methodology.
Regular joke: Why did Cleopatra bathe in milk? Because she couldn’t find a cow tall enough for a shower.
Funny pun: Why were the teacher’s eyes crossed? Because she couldn’t control her pupils.
Unfunny pun: What was the problem with the other coat? It was difficult to put on with the paint-roller.
The regular joke and the funny pun are both amusing, but for different reasons: in the decidedly unfunny parlance of humor theorists, the pun has “semantic ambiguity” and the joke does not. Part of the fun in the funny pun, in other words, is thinking through the two meanings of pupil.
But now compare the funny pun and the unfunny pun. Both have semantic ambiguity. So why is the funny one funny? The researchers say it’s because both meanings of the ambiguous word (pupil) are true at the same time, whereas in the unfunny pun, only one of the meanings of the ambiguous word (coat) is true.
Read the article to find out why.
Jonah Lehrer describes an fMRI experiment published in Nature by Tricomi, Rangel, Camerer, and O’Doherty. Subjects were first randomly assigned to be rich or poor and given an endowment accordingly. Then they were put in the scanner.
…the scientists found something strange. When people in the “rich” group were told that a poor stranger was given $20, their brains showed more reward activity than when they themselves were given an equivalent amount. In other words, they got extra pleasure from the gains of someone with less. “We economists have a widespread view that most people are basically self-interested and won’t try to help other people,” Colin Camerer, a neuroeconomist at Caltech and co-author of the study, told me. “But if that were true, you wouldn’t see these sorts of reactions to other people getting money.”
I find it helpful to step back and think through how we can come to conclusions like this. Some time ago, neuroscientists correlated certain brain activity measurements with the state of happiness. They did this either by having the subject report when he was happy and then measuring his brain, or by observing him making choices that, presumably, made him happy and then measuring his brain.
Once we have the brain data we no longer need to ask him whether he is happy or make inferences based on his choices, we can just scan his brain to find out. And that allows us to conclude that the rich are less happy receiving $20 than when the poor get $20.
But still, if we wanted to we could just ask them. We might learn something. What would we do if the subjects responded that in fact they would be happier having the $20 for themselves? Would we conclude that they are lying?
Also we might learn something from just letting them decide for themselves whether to give money to the poor. What would we conclude if we see, as we do indeed see in the world, that they do not? That they don’t understand as well as we do what makes their brain happy?
Either way we have a real problem. Because our original reason for associating the specific brain activity with happiness was based on either believing they are honest about what makes them happy or believing that the choices they make reveal what makes them happy. But now in order to apply what we learned we are forced to reject those same premises.
Creative output seems to come in bursts. You have periods of high productivity spaced by periods where you get relatively few good ideas. During the flurries everything seems to come easy and you have more ideas than you can work on at once. During the lulls you wonder if you are still the same person.
What if the pattern can be explained without assuming that your creative energy fluctuates at all? Suppose that ideas of various qualities arrive according to some distribution that is constant over time, but what changes about you is simply the standard you hold them to. Sometimes you are very self-critical and the marginal ideas that come to you don’t seem worth pursuing, so you don’t pursue them. You go through a lull.
Other times you are confident that you can develop your ideas and you do.
If you are trying to come up with a slogan for an ad campaign you have to decide how picky you are going to be with the grammar. For example suppose that there is a grammatical and a more colloquial way to write your slogan. Which do you go with?
Your audience has grammar snobs and regular people. Whichever way you write your slogan, it’s going to look natural to one group and unnatural to the other. And the group that stumbles over the syntax is going to be at least somewhat distracted from the message. You have this problem whether you decide to bend toward the grammar snobs or the regular people.
But one thing tips the balance in favor of the ungrammatical slogan. In advertising, you are looking for anything that gets your audience to stop and spin some brain cycles in the presence of your ad. You will smuggle in your brand alongside. You get this benefit only with the ungrammatical version. The grammar snobs, annoyed with your slogan, are programmed to turn it over, diagram it and correct it. In effect you will cause them to construct variations of your ad campaign inside their own heads.
This is a good thing. Never mind that they will curse you for your trespasses. There’s no such thing as bad publicity. Indeed you hope for their curses. Nothing could be better than having them shout from the rooftops all the ways that your slogan, the one that urges everyone to buy your product, should be rewritten in order to make it more palatable.
Here’s a previous post on krafty konstructions.
It’s pretty old, but worth reading given his new book.
Trivers has been teaching himself things and then growing bored with them his whole life. In 1956, when he was 13 and living in Berlin (his father was posted there by the State Department), he taught himself all of calculus in about three months. Around the same time, and with more modest success, Trivers, a skinny child picked on by bullies, tried to learn how to box, doing push-ups and covertly reading Joe Louis’s “How to Box” in the school library.
Akubra Cadabra: Tobias Schmidt.
You and your spouse plan your lifetime household consumption collectively. This is complicated because you have different discount factors. Your wife is patient, her discount factor is .8; you are not so patient, your discount factor is .5. But you are a utilitarian household so you resolve your conflicts by maximizing the total household utility.
Leeat Yariv and Matt Jackson show in this cool paper that your household necessarily violates a basic postulate of rationality: your household preferences are not time consistent. For example, consider how you rank the following two streams of household consumption:
- (0,10,0,0, …)
- (0,0,15,0,0, …)
Each of you evaluates the first plan by computing the present value of 10 units of consumption one period from now. Total household utility for the first plan is the sum of your two utilities, i.e. (.8 × 10) + (.5 × 10) = 13. For the second plan you each discount the total consumption of 15 two periods from now. Total utility for the second plan is (.8² × 15) + (.5² × 15) = 9.6 + 3.75 = 13.35.
Your utilitarian household prefers the second plan.
But now consider what happens when you actually reach date 1 and you re-consider your plan. Now the total utilities are 10 + 10 = 20 for the first plan (since it is date 1 and you will each consume the 10 immediately if you choose the first plan) and (.8 × 15) + (.5 × 15) = 19.5 for the second plan. Your household preference has reversed.
Indeed your household exhibits a present bias: present consumption looms large in your household preferences, so much so that you cannot forego consumption that, earlier on, you were planning to delay in exchange for a greater later reward.
Jackson and Yariv show that this example is perfectly general. If a group of individuals is trying to aggregate their conflicting time preferences, and if that group insists on a rule that respects unanimous preferences and is not dictatorial, then it must be time inconsistent.
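The example is easy to check in a few lines of Python. This is a minimal sketch using only the discount factors .8 and .5 and the two consumption streams from the example:

```python
def household_utility(stream, discounts=(0.8, 0.5), start=0):
    # Utilitarian household: add up each member's discounted utility of
    # the consumption stream, evaluated standing at period `start`.
    return sum(
        d ** t * c
        for d in discounts
        for t, c in enumerate(stream[start:])
    )

plan_a = [0, 10, 0]   # 10 units of consumption one period from now
plan_b = [0, 0, 15]   # 15 units two periods from now

# At date 0, plan A is worth .8*10 + .5*10 = 13 and plan B is worth
# .8^2*15 + .5^2*15 = 13.35, so the household adopts plan B.
print(household_utility(plan_a), household_utility(plan_b))

# At date 1 the ranking reverses: A is now worth 10 + 10 = 20,
# B only .8*15 + .5*15 = 19.5.
print(household_utility(plan_a, start=1), household_utility(plan_b, start=1))
```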
I was born on Jan. 31, but I’ve always wanted a summer birthday. I set my Facebook birthday for Monday, July 11. Then, after July 11, I reset it for Monday, July 25. Then I reset it again for Thursday, July 28. Facebook doesn’t verify your birthday, and doesn’t block you from commemorating it over and over again. If you were a true egomaniac, you could celebrate your Facebook birthday every day.
He noted that for July 11th, he received 119 birthday wishes via Facebook. Four close friends were confused, but “most of them attributed the confusion to their own faulty memories.” When July 25th came around, he received another 105 birthday wishes. The number of people suspecting something was up was nine. The really stunning thing:
Of the 105 birthday wishes, 45 of them—nearly half—came from people who had wished me a Facebook happy birthday two weeks earlier.
On July 28th, just three days later, when it was his birthday again, he still ended up with 71 birthday wishes.
Casquette cast: Mallesh Pai.
Close your eyes. Apparently your opponent will have an increased tendency to imitate your move, increasing the chance of a draw. At least that is what is reported in this study. A blindfolded player played RSP against a sighted player and their outcomes were compared to a control treatment in which two blindfolded players played.
A draw was achieved almost exactly 1/3 of the time when the two blindfolded players met, but that rate increased to 36.3% in the blind-sighted treatment, a statistically significant difference. The authors attribute this to a sub-conscious tendency to imitate the actions of others. In particular, when the blind player completed his move more than 200 milliseconds prior to the sighted player, the sighted player had an increased tendency to play the same move.
200 milliseconds is too fast for conscious reaction but still within the time necessary for the visual signal to be sent to the brain and an impulsive response signal to be sent to the hand.
If this is true then you should be able to increase your chance of winning at RSP by holding rock until the very last opportunity and then throwing paper. You will sometimes trigger an automatic imitation of your rock and win with your paper.
Are there even more draws when both players have their eyes open?
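A back-of-the-envelope check on those numbers: if the sighted player imitates with some small probability and otherwise randomizes uniformly, the overall draw rate is easy to compute. The imitation model here is my own simplification, not the study’s:

```python
def draw_rate(p_imitate):
    # The blind player randomizes uniformly. With probability p_imitate
    # the sighted player unconsciously copies the blind player's move
    # (a sure draw); otherwise she too randomizes uniformly, producing
    # a draw 1/3 of the time.
    return p_imitate + (1 - p_imitate) / 3

# No imitation reproduces the blind-blind benchmark of 1/3.
# Backing out the imitation rate implied by the reported 36.3% draw rate:
p = (0.363 - 1 / 3) / (1 - 1 / 3)
print(draw_rate(0.0), p)  # p comes out to about 0.044
```

So under this simple model, imitating on only about one throw in twenty would be enough to generate the reported effect.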
(Fez float: Not Exactly Rocket Science.)
The ultimatum game is a workhorse for economics experiments. Subject A has 100 dollars to split with Subject B. A proposes a division and if B accepts then the division is carried out. If B rejects then both parties get nothing. In these experiments, A is surprisingly generous and B is surprisingly spiteful. A Fine Theorem makes a good point:
…I’m sure someone has done this but I don’t have a cite, the “standard” instructions in ultimatum games seem to prime the results to a ridiculous degree. Imagine the following exercise. Give 100 dollars to a research subject (Mr. A). Afterwards, tell some other subject (Ms. B) that 100 dollars was given to Mr. A. Tell Mr. A that the other subject knows he was given the money, but don’t prime him to “share” or “offer a split” or anything similar. Later, tell Ms. B that she can, if she wishes, reverse the result and take the 100 dollars away from A – if she does so, had Mr. A happened to have given her some of the money, that would also be taken. I hope we can agree that if you did such an experiment, A would share no money and B would show no spite, as neither has been primed to see the 100 dollars as something that should have been shared in the first place. One doesn’t normally expect anonymous strangers to share their good fortune with you, surely. That is, feelings of spite, jealousy and fairness can be, and are, primed by researchers. I think this is worth keeping in mind when trying to apply the experimental results on ultimatum games to the real economy.
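For reference, the benchmark against which A looks “surprisingly generous” and B “surprisingly spiteful” is the subgame-perfect prediction, sketched here under the textbook assumption of purely self-interested, money-maximizing players:

```python
def spe_split(pie=100, smallest_unit=1):
    # Backward induction: B prefers any positive amount to nothing, so B
    # accepts every positive offer. Anticipating this, A offers the
    # smallest positive amount and keeps the rest. This is the benchmark
    # the experimental results contradict.
    offers_b_accepts = [o for o in range(0, pie + 1, smallest_unit) if o > 0]
    offer = min(offers_b_accepts)
    return pie - offer, offer

print(spe_split())  # (99, 1): far from the generous splits subjects propose
```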
It’s called Fehr Advice!
Traditional economic research assumes that managers, employees, customers, and suppliers usually make rational decisions and thus do not make systematic decision errors. Behavioral economics, however, has found numerous proofs that people make systematic decision errors that limit their own welfare and also diminish efficiency, perceived fairness, and the profitability of firms.
It was for this reason that FehrAdvice & Partners AG developed the consulting approach BEA™ that is based on empirical insights about the human tendency to make erroneous decisions. This approach systematically includes this knowledge in consulting activities. Our consultants have a “trained eye” in the area of behavioral economics, and the innovative methods of empirical research we use allow us to identify potential areas of improvement in enterprises, markets, and organizations that were previously ignored.
They have a blog (in German), they are on Twitter, and they are hiring.
Vaughan Bell comes through with a steady-handed (Beavis!!) take-down of Naomi Wolf’s neuro-hyped story about porn and the brain. Wolf wrote:
Since then, a great deal of data on the brain’s reward system has accumulated to explain this rewiring more concretely. We now know that porn delivers rewards to the male brain in the form of a short-term dopamine boost, which, for an hour or two afterwards, lifts men’s mood and makes them feel good in general. The neural circuitry is identical to that for other addictive triggers, such as gambling or cocaine.
And here’s Vaughan:
But the reward is not the dopamine. Dopamine is a neurochemical used for various types of signalling, none of which match the over-simplified version described in the article, that allow us to predict and detect rewards better in the future.
One of its most important functions is reward prediction where midbrain dopamine neurons fire when a big reward is expected even when it doesn’t occur – such as in a near-miss money-loss when gambling – a very unpleasant experience.
But what counts as a reward in Wolf’s dopamine system stereotype? Whatever makes the dopamine system fire. This is a hugely circular explanation and it doesn’t account for the huge variation in what we find rewarding and what turns us on.
This is especially important in sex because people are turned on by different things. Blondes, brunettes, men, women, transsexuals, feet, being spanked by women dressed as nuns (that list is just off the top of my head you understand).