I was talking to someone about matching mechanisms and the fact that strategy-proof incentives are often incompatible with efficiency. The question came up as to why we insist upon strategy-proofness, i.e. dominant-strategy incentives, as a constraint. If there is a tradeoff between incentives and efficiency, shouldn't that tradeoff appear in the objective function? We could then talk about how much we are willing to compromise on incentives in order to get some marginal improvement in efficiency.

For example, we might think that agents are willing to tell the truth about their preferences as long as manipulating the mechanism doesn't improve their utility by a large amount. Then we should formalize a tradeoff between the epsilon of slack in incentives and the welfare of the mechanism. On this view, the usual method of maximizing welfare subject to an incentive constraint is flawed because it prevents us from thinking about the problem in this way.
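For concreteness, here is a minimal sketch (in Python, with invented mechanism names and numbers) of what putting that tradeoff in the objective might look like: each candidate mechanism is scored by its truthful welfare minus a price on its epsilon, the largest gain any agent could obtain by manipulating.

```python
# Hypothetical illustration only: mechanisms, welfare levels, and epsilons
# are all made up for the sake of the example.

# (mechanism name, welfare under truth-telling, max gain from manipulating)
candidates = [
    ("strategy-proof", 0.80, 0.00),   # dominant-strategy incentives, lower welfare
    ("almost-sp",      0.90, 0.04),   # a little epsilon slack, better welfare
    ("unconstrained",  1.00, 0.40),   # first-best welfare, easy to manipulate
]

LAMBDA = 2.0  # how much welfare we would give up per unit of epsilon slack

def score(welfare, epsilon, lam=LAMBDA):
    """Tradeoff in the objective: welfare minus a price on incentive slack."""
    return welfare - lam * epsilon

best = max(candidates, key=lambda m: score(m[1], m[2]))
print("chosen mechanism:", best[0])   # picks "almost-sp" at this price on slack
```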

That sounded sensible until I thought about it just a little bit longer. If you are a social planner you have some welfare function, let's say V. You want to choose a mechanism so that the resulting outcome maximizes V. And you have a theory about how agents will play any mechanism you choose. Let's say that for any mechanism M, O(M) describes the outcome or possible outcomes according to your theory. This can be very general: O(M) could be the set of outcomes that will occur when agents are epsilon-truth-tellers, or it could be some probability distribution over outcomes, reflecting that you acknowledge your theory is not very precise. And if you have the idea that incentives are flexible, O can capture that: for mechanisms M with very strong incentive properties, O(M) will be a small set, or a degenerate probability distribution, whereas for mechanisms M that compromise a bit on incentives, O(M) will be a larger set or a more diffuse probability distribution. And if you believe in a tradeoff between welfare and incentives, your V applied to O(M) can encode that by quantifying the loss associated with larger sets O(M) compared to smaller sets O(M).
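To make these objects concrete, here is a toy sketch (the mechanisms, outcomes, and welfare numbers are all invented): O returns the prediction set for each mechanism, smaller for mechanisms with stronger incentive properties, and V evaluates a set pessimistically, so a more diffuse prediction automatically carries a welfare cost.

```python
# Toy model: all names and numbers below are assumptions made for illustration.
welfare_of_outcome = {"a": 1.0, "b": 0.9, "c": 0.3, "d": 0.1}

# The planner's theory O: each mechanism maps to the set of outcomes the
# theory predicts; stronger incentive properties give a tighter prediction.
THEORY = {
    "strategy-proof": {"b"},            # tight prediction, a single outcome
    "epsilon-sp":     {"a", "c"},       # weaker incentives, two possible outcomes
    "no-incentives":  {"a", "c", "d"},  # almost anything could happen
}

def O(mechanism):
    return THEORY[mechanism]

def V(outcomes):
    """Worst-case welfare over a predicted set; a bigger set can only hurt."""
    return min(welfare_of_outcome[o] for o in outcomes)

for m in THEORY:
    print(m, V(O(m)))
```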

But whatever your theory is, you can represent it by some O(.) function. Then the simplest formulation of your problem is: choose M to maximize V(O(M)). And then we can equivalently express that problem in our standard way: choose an outcome (or set of outcomes, or probability distribution over outcomes) O to maximize V(O) subject to the constraint that there exists some mechanism M for which O = O(M). That constraint is called the incentive constraint.
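Here is a small self-contained check of that equivalence, using the same sort of toy theory as the sketch above: optimizing V(O(M)) over mechanisms and optimizing V(O) over outcome sets subject to the incentive constraint pick out the same thing.

```python
# Invented toy data; frozensets so that predicted outcome sets are hashable.
welfare_of_outcome = {"a": 1.0, "b": 0.9, "c": 0.3, "d": 0.1}
THEORY = {
    "strategy-proof": frozenset({"b"}),
    "epsilon-sp":     frozenset({"a", "c"}),
    "no-incentives":  frozenset({"a", "c", "d"}),
}

def O(mechanism):
    return THEORY[mechanism]

def V(outcomes):
    return min(welfare_of_outcome[o] for o in outcomes)

# Formulation (1): choose M to maximize V(O(M)).
best_M = max(THEORY, key=lambda m: V(O(m)))

# Formulation (2): choose O to maximize V(O), subject to the incentive
# constraint that O is achievable, i.e. O = O(M) for some mechanism M.
achievable = {O(m) for m in THEORY}
best_O = max(achievable, key=V)

assert O(best_M) == best_O   # the same optimum, written two ways
print(best_M, sorted(best_O))
```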

Incentives appear as a constraint, not in the objective. Once you have decided on your theory O, it makes no sense to talk about compromising on incentives, and there is no meaningful tradeoff between incentives and welfare. While we might, as a purely theoretical exercise, comment on the necessity of such a tradeoff, no social planner would ever care to plot a "frontier" of mechanisms whose slope quantifies a rate of substitution between incentives and welfare.
