“We have shown that by applying tools from neuroscience to the public-goods problem, we can get solutions that are significantly better than those that can be obtained without brain data,” says Antonio Rangel, associate professor of economics at Caltech and the paper’s principal investigator.
Here is the paper. You should read it. It is forthcoming in Science. Zuchetto Zip goes to Economists’ View.
The public goods aspect of the problem is not important for understanding the main result here, so here is a simplified way to think about it. You are secretly told a number (in the public goods game this number is your willingness to pay) and you are asked to report your number. You have a monetary incentive to lie and report a number that is lower than the one you were told. But now you are placed in a brain scanner and told that the brain scanner will collect information that will be fed into an algorithm that will try to guess your number. And if your report is different from the guess, you will be penalized.
The result is that subjects told the truth about their number. This is a big deal, but it is important to know exactly what the contribution is here.
- The researchers have not found a way to read your mind and discover your number. Indeed, even under highly controlled experimental conditions in which the algorithm knows that your number is one of only two possible values, and after running 50 treatments per subject and fitting regressions to improve the algorithm, its predictions are scarcely better than a random guess. (See Table S3.)
- In that sense, “brain data” plays no real role in getting subjects to tell the truth. Instead, it is the subjects’ belief that the scanner and algorithm will accurately predict their value that induces them to tell the truth. Indeed, after conducting the experiment the researchers could have thrown away all of their brain data and handed out payments at random, and this would not have changed the result, as long as the subjects expected the brain data to be used.
- The subjects were clearly mistaken about how good the algorithm would be at predicting their values.
- Therefore, brain scans as incentive mechanisms will have to wait until neuroscientists really come up with a way of reading numbers from your brain.
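The incentive logic of the simplified game above is easy to check numerically. Here is a minimal sketch of it; the 60% accuracy echoes the paper's decoding rate, but the gain and penalty figures are made-up illustrations, and the function is mine, not the researchers'. Even a modest predictor makes truth-telling optimal, provided the penalty for a mismatch is large relative to the gain from lying.

```python
import random

def expected_payoff(report_truthfully, accuracy, gain_from_lying, penalty,
                    trials=100_000):
    """Monte Carlo estimate of payoff in the simplified two-number game.

    The subject's true number is one of two values; the scanner guesses
    the true number with probability `accuracy`. A mismatch between the
    subject's report and the scanner's guess costs `penalty`; lying
    yields `gain_from_lying` up front. All payoff scales are illustrative.
    """
    total = 0.0
    for _ in range(trials):
        guess_is_true_value = random.random() < accuracy
        if report_truthfully:
            # Penalized only when the scanner guesses wrong.
            total += 0.0 if guess_is_true_value else -penalty
        else:
            # Lying pays up front, but a correct guess exposes the mismatch.
            total += gain_from_lying - (penalty if guess_is_true_value else 0.0)
    return total / trials

# With 60% accuracy, truth beats lying whenever the gain from lying is
# less than (2 * 0.6 - 1) * penalty, i.e. one fifth of the penalty.
truth = expected_payoff(True,  accuracy=0.6, gain_from_lying=1.0, penalty=10.0)
lie   = expected_payoff(False, accuracy=0.6, gain_from_lying=1.0, penalty=10.0)
```

With these numbers, truthful reporting loses about 4 in expectation while lying loses about 5, so truth wins even though the scanner is barely better than a coin flip.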

8 comments
September 11, 2009 at 8:34 am
Michael Turner
OK, forget brain scans then. Maybe there’s a way to do it without fMRI or any “intrusive” electronic lie-detector instrumentation. Maybe there’s even a way that’s more “democratic” in some real sense. Here’s my two cents on it:
Tell people being asked for their public-good valuation that —
(1) they’ll be watched on video (which is cheaply and easily implemented on the Internet these days);
(2) many other human beings will be listening and watching (ditto);
(3) human beings are middling-to-good lie detectors (maybe more accurate than this fMRI gadget, which apparently isn’t very good);
and
(4) these other viewers will vote to penalize (reward) them for apparent dishonesty (honesty). Not very much either way. A token amount, like 20% of their jury-duty-scale pay for participating at all. But enough to sting a little if they’re marked all the way down as total stinkers, or to buy a night on the town with their friends if they pass the honest-face test with flying colors.
To be sure, some people just aren’t very convincing even when they are speaking in dead earnest, and some truly prodigious liars are undetectable as such. But they are outliers (er, as it were). On average, this should work out to much the same as the Caltech result, I think, without necessarily exposing people’s identities any more than jury duty would, and without hooking them up to some gadget that’s really only a kind of truth-serum placebo anyway. What the other objections might be, I can only guess.
September 11, 2009 at 10:28 am
Noah Yetter
It’s worse even than that, because we do not ourselves KNOW our willingness to pay. When we evaluate a price, we compare it ordinally with a vague concept of our expected utility, and arrive at a boolean result. Such a result gives us a lower or upper bound on our reserve price but it does not reveal the reserve price itself. The only market setting that lets us discover (or at least approximate) our reserve prices is an auction.
September 11, 2009 at 4:03 pm
Andrew
Why did they bother even creating an algorithm? They knew your number, so presumably they could punish you regardless. Couldn’t they have found a cheaper way of tricking their subjects into thinking that they would get caught cheating? I bet if you brought in a psychic you could get similar results.
September 12, 2009 at 2:21 pm
Paul
I don’t see why they bothered to use (presumably expensive) brain-scanning technology when they could just as easily have said, “We’re going to put the real number in a hat with 4 other numbers – if the number you say matches the one we pull out of the hat, you get the money.” As long as the subjects don’t know what the other numbers are, it should have the same effect, because they would know that they are more likely to be rewarded for telling the truth in that situation than not (any lie is much less likely to match one of the 4 random numbers in the hat).
If a brain scanner were doing the same thing (especially if it were only choosing between two numbers), I would tell the truth even if I believed that the scanner had no ability to distinguish between truth and fiction from me.
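Paul’s hat mechanism can be checked by simulation. A minimal sketch, with an illustrative value range of 1–10 (the specific numbers are mine, not Paul’s): the true number is guaranteed to be in the hat, while a lie matches a slip only by chance, so truthful reporting wins far more often.

```python
import random

def win_probability(truthful, true_value=7, value_range=10, trials=200_000):
    """Estimate the chance of a payout under Paul's hat mechanism.

    The hat holds the true value plus 4 numbers drawn uniformly from
    1..value_range; one slip is drawn, and the subject is paid if it
    matches the report. The range and values here are illustrative.
    """
    report = true_value if truthful else true_value + 1  # any fixed lie
    wins = 0
    for _ in range(trials):
        hat = [true_value] + [random.randint(1, value_range) for _ in range(4)]
        if random.choice(hat) == report:
            wins += 1
    return wins / trials

p_truth = win_probability(True)   # roughly (1 + 4/10) / 5 = 0.28
p_lie   = win_probability(False)  # roughly (4/10) / 5 = 0.08
```

So with these assumed numbers, truth pays off about three and a half times as often as a lie, which is the same qualitative incentive the scanner provides.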
September 15, 2009 at 7:18 pm
dubious
Maybe I can shed some light on “why” they used expensive fMRI ($500/hr) for this study instead of a cheaper “lie detector” like even a polygraph machine. My guess would be that the researchers initially believed that they would be able to generate a value metric directly from the BOLD response (aka “brain activity”). In other words, they set out to avoid the subject’s reported preference altogether (e.g., via button press or verbal report) and come up with an independent measure of goods preference. However, they most likely failed. This happens A LOT in fMRI experiments, and the researcher then comes up with clever tricks to manipulate the subject’s experience and/or behavior to obtain a correlation with brain activity.
The real question is: if they had used randomly generated fMRI data, would the goods-distribution performance have been the same? If it was the same, then the brain-activity data did nothing and it was purely the belief that it would affect payment. Or, had they used a polygraph machine, would the goods distribution have been better, etc.?
fwiw- I have published several fMRI papers in Nature, Nature Neuroscience, etc… and know how these adventures develop.
September 21, 2009 at 3:36 am
Colin Camerer
dubious– Your guess about how this research proceeded is very intriguing, especially in its detail, but it is incorrect. We were clearly motivated *not* by trying to use BOLD to figure out values per se, but by wanting to see whether combining those measures with subjects’ reports, in a suitably designed “mechanism”, would work. (Note that this type of mechanism, combining self-report and independent report, is essentially unknown in standard neuroscience, so it’s not surprising that you might not recognize and appreciate it.)
The whole point of the mechanism is to use the fMRI measures– or polygraph, SCR, facial EMG etc. (it definitely need not be fMRI)– to incentivize the people to report truthfully (not simply to explore whether neural decoding works). Then we can use the high truthful report rate to guide the social decisions and create added value. It is *easier* to do it this way than to very accurately neurally decode value because the math suggests (in theory) accuracy need not be very high for the mechanism to work.
In fact, it is surprising that you either read the paper and made this detailed guess or, more likely, you simply didn’t read the paper and were extrapolating from your own experience with Nature and Nature Neuroscience to methods used by economists.
Jeff et al– It’s true that the key point is that subjects believe there is some independent measure of their value and it seems to influence their reporting. If they believed it worked, but it actually didn’t, that should– in theory– still induce truthful reporting. Furthermore, we have a strong taboo about deception in our lab and at most experimental economics labs. We do not want to tell them we can decode value if we cannot; hence the large expense and effort of actually decoding. And to induce voluntary participation it is necessary that you have some degree of above-chance accuracy in decoding. Otherwise (roughly speaking) if you did this over and over the subjects would find that even when they were truthful they were being penalized a lot for the machine’s wrong guesses and would prefer not to participate.
Furthermore, if we had told the subjects we would generate random guesses, it is unlikely on its face that this would work. (That is, nothing in theory suggests it would work, and I am quite confident it would not both work and induce voluntary participation.)
And as the paper attempts to make clear, one interesting property of these mechanisms is that the value guesses from fMRI (or any other measure) do not need to be highly correlated with actual value to induce efficiency, in theory. They just need to have some correlation.
Finally, the point (1) in your original post is incorrect as written and conventionally understood. There *is* statistically significant decoding, although it is indeed modest (around 60% accuracy where 50-50 is chance). Look at Figure 3A in the text; this is essentially the same as Supplemental Table S3 you refer to (except that the text Figure averages two types of classifications reported separately in S3).
This imperfect classification is also significantly better than chance by standard tests.
I certainly agree with the spirit of your conclusion (4) however, that we would not want to use these mechanisms until a lot more research is done and the decoding accuracy improves a lot.