Tyler Cowen blegs for ideas on the economics of randomized trials. Economic theory has one simple and robust insight to offer the design of randomized trials: control incentives in order to reduce ambiguity in the measurement of effectiveness.
Suppose you are testing a new drug that must be taken on a daily basis. A typical problem is that some patients stop taking the drug but for various reasons do not inform the experimenters. The problem is not the attrition per se because if the attrition rate were known, this could be used to identify the take-up rate and thereby the effectiveness of the drug.
The problem is that without knowing the attrition rate in advance there is no way to independently identify it: the uncertainty about the attrition rate becomes entangled with the uncertainty about the drug’s effectiveness. The experimenters could assume some baseline attrition rate, but when the effectiveness results come out on the high side, there is always the possibility that this is just because the attrition rate for this particular experiment was lower than usual.
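To see the entanglement concretely, here is a toy illustration (all numbers invented for the example): the observed mean outcome in the treatment arm mixes the drug's true effect with the unknown attrition rate, so different (attrition, effect) pairs are observationally equivalent.

```python
def observed_mean(baseline, true_effect, attrition_rate):
    """Mean outcome when a fraction `attrition_rate` of patients
    quietly stop taking the drug (and so get no treatment effect)."""
    return baseline + true_effect * (1 - attrition_rate)

# A strong drug with high attrition...
a = observed_mean(baseline=0.0, true_effect=2.0, attrition_rate=0.5)
# ...looks identical to a weaker drug with low attrition.
b = observed_mean(baseline=0.0, true_effect=1.25, attrition_rate=0.2)

print(a, b)  # both equal 1.0 -- the effect is not identified
```

Without an independent handle on the attrition rate, the data alone cannot distinguish these two worlds.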
The simple way to solve this problem is to use selective trials rather than randomized trials: require patients in the study to pay a price to remain in the study and continue to receive the drug. If the price is high enough, only those patients who actually intend to take the drug will pay the price. Thus the attrition rate can be directly observed by noting which patients continued to pay for the drug. This removes the entanglement and allows statistical identification of the effectiveness of the drug.
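A minimal simulation sketch of the selective-trial idea (parameters invented for illustration): patients who actually intend to keep taking the drug pay to stay in, so attrition is observed directly rather than assumed, and the effect among those still taking the drug is identified.

```python
import random

random.seed(0)
N = 10_000            # patients in the treatment arm
TRUE_EFFECT = 2.0     # drug's true effect for a complier
P_COMPLY = 0.7        # unknown to the experimenter in a standard trial

outcomes = []
n_payers = 0
for _ in range(N):
    intends_to_take = random.random() < P_COMPLY
    if intends_to_take:        # only these patients pay to stay in
        n_payers += 1
        outcomes.append(TRUE_EFFECT + random.gauss(0, 1))

# Both quantities are now observed, so they are no longer entangled.
observed_attrition = 1 - n_payers / N
effect_estimate = sum(outcomes) / n_payers
print(f"attrition: {observed_attrition:.2f}, effect: {effect_estimate:.2f}")
```

In the simulation the estimated effect recovers the true per-complier effect because payment reveals exactly who dropped out.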
This is one of a number of ideas in a new paper by Sylvain Chassang, Gerard Padro i Miquel and Erik Snowberg.
Followup: Sylvain Chassang points me to two experimental papers that explore/implement similar ideas:
http://www.dartmouth.edu/~jzinman/Papers/OU_dec08.pdf
http://faculty.chicagobooth.edu/jesse.shapiro/research/commit081408.pdf

9 comments
September 15, 2009 at 11:02 pm
George
Very interesting paper; however, epidemiologists handle the problem using intention-to-treat analysis (http://en.wikipedia.org/wiki/Intention_to_treat_analysis). The fact that this is not discussed in the paper is a serious omission.
September 16, 2009 at 1:24 am
chris
Does that work? You are going to have self-selection: the guys most eager for the drug will take it. This group might be very different from the group eventually getting a prescription for this drug. It looks as if one is giving up the greatest advantage of randomized trials.
September 16, 2009 at 10:02 pm
jeff
In general, yes, there is a trade-off between minimizing ambiguity and introducing selection bias. The authors address this in the paper. But note that attrition bias is a form of selection bias, because those who drop out of the trial will be those who value the drug least. So in one sense, selective trials are a way to actively determine the selection cutoff. If there is going to be bias, then you reduce ambiguity about the bias by identifying exactly which types will stay in and which will quit.
Also, at least for drug trials, there is reason to believe that this type of selection bias is not cause for concern. First, patients probably have limited private information about how effective the drug will be for them specifically. Second, if we want to know how effective the drug will be when it is marketed, then we want to know how effective it will be for those who are willing to pay for it. So in this sense there is a favorable aspect to the selection.
September 16, 2009 at 7:32 am
Brandon
Doesn’t this solution (selective trials) confuse the issue of a sample distribution with that of a particular estimate? Any given random trial may not match the mean; however, in repeated samples the expected difference between the true and estimated effects would be zero. Selective trials would presumably “select” individuals who were most in need of the drug, thus generating a biased estimate of the true effect. Am I wrong on this?
September 16, 2009 at 10:06 pm
jeff
It's a good question. The paper is addressing an issue I call ambiguity. The researcher presumably does not know the population distribution. Let's say he knows that the mean is somewhere in some interval. If the experiment is a success, this could be because the drug is good or it could be that the population mean is on the low end. This identification problem prevents an independent estimate of the population mean.
The selective trial avoids the need to identify which population distribution is active because the experimenter will see the actual attrition decisions in the sample.
September 16, 2009 at 6:05 pm
Mike Yeomans
George Loewenstein and Leslie John (maybe other co-authors, too) have used internet-connected pill boxes that record each time you open them, to increase compliance with prescriptions. They tie the whole thing to a lottery system to incentivize compliance, but I'm sure you could hook the smart pill box up to just about anything. The idea is that there's almost no reason for someone to open the pill box and not take the pills; that would amount to willful fraud of the experiment, which would be impossible to control for in any case.
September 16, 2009 at 10:07 pm
jeff
nice
September 17, 2009 at 11:33 am
Michel
The selection bias can be factored out by the fact that the control group gets the placebo treatment anyway, I would think.
One question, though — if patients have to pay in advance to stay in the trial, how would you compensate the ones that were in the control group? Returning their money with interest would seem like a rather cheap shot.