Stan Reiter had a standard gripe about statistics/econometrics.  Imagine there is a cave in front of you and you want to map out its dimensions.  There are many ways you could do it.  One thing you could do is go inside and look.  Another thing you could do is stand outside and throw a bunch of super bouncy balls into the cave, and when they bounce out, take careful note of their speed and trajectory in order to infer what walls they must have bounced off of and where.  Stan equated econometrics with the latter.

That’s not what I am going to say, but it is a funny story and it’s the first thought that came to my mind as I began to write this post.

But I do have something, probably even more heretical, to say about econometrics. Suppose I have a hypothesis or a model and I collect some data that is relevant.  If I am an applied econometrician what I do is run some tests on the data and report the results of the tests.  I tell you with my tests how you should interpret the data.

My tests don’t contain any information in them that isn’t in the raw data.  My tests are just a super sophisticated way to summarize the data.  If I just showed you the tables it would be too much information.  So really, my tests do nothing more than save you the work of doing the tests yourself.

But I pick the tests.  You might have picked different tests.  And even if you like my tests you might disagree with the conclusion I draw from them.  I say “because of these tests you should conclude that H is very likely false.”  But that’s a conclusion that follows not just from the data, but also from my prior which you may not share.

What if, instead of giving you the raw data or my test results, I did something like the following.  I give you a piece of software which allows you to enter your prior, and then it tells you what, based on the data and your prior, your posterior should be.  Note that such a function completely summarizes what is in the data.  And it avoids the most common knee-jerk criticism of Bayesian statistics, namely that it depends on an arbitrary choice of prior.  You tell me what your prior is, I will tell you (what the data says is) your posterior.
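To make the idea concrete, here is a minimal sketch in Python of what such a piece of software might look like.  The setting (a coin-flip experiment with a Beta prior), the made-up counts, and the function name are all my own illustrative assumptions, not anything from the post; the point is just that the data enter only through the update rule, and the prior is whatever the reader brings to it.

```python
# A minimal sketch of "enter your prior, get your posterior."
# Hypothetical setting: the data are n coin flips with k heads, and the
# reader's prior over the heads-probability is Beta(a, b).  With a Beta
# prior and binomial data the posterior is Beta(a + k, b + n - k), so
# the whole dataset is summarized by the map below.

def posterior(prior_a: float, prior_b: float, k: int, n: int) -> tuple[float, float]:
    """Return the (a, b) parameters of the reader's posterior Beta distribution."""
    return prior_a + k, prior_b + (n - k)

# The "raw data": say 100 flips, 62 heads (made-up numbers for illustration).
k, n = 62, 100

# Two readers with different priors get different posteriors from the same data.
skeptic = posterior(prior_a=1.0, prior_b=1.0, k=k, n=n)    # flat prior
believer = posterior(prior_a=20.0, prior_b=5.0, k=k, n=n)  # prior tilted toward heads

print("skeptic's posterior:  Beta(%.0f, %.0f)" % skeptic)
print("believer's posterior: Beta(%.0f, %.0f)" % believer)
```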

Pause and notice that this function is exactly what applied statistics aims to be, and think about why, in practice, it doesn’t seem to be moving in this direction.

First of all, as simple as it sounds, computing this function exactly would be infeasible in most practical situations.  But still, an approach to statistics based on such an objective, subject to those technical constraints, would look very different from what is done in practice.

A big part of the explanation is that statistics is a rhetorical practice.  The goal is not just to convey information but rather to change minds.  In an imaginary perfect world there is no distinction between these goals.   If I have data that proves H is false I can just distribute that data, everyone will analyze it in their own favorite way, everyone will come to the same conclusion, and that will be enough.

But in the real world that is not enough.  I want to state in clear, plain language terms “H is false, read all about it” and have that statement be the one that everyone focuses on.  I want to shape the debate around that statement.  I don’t want nuances to distract attention away from my conclusion.  In the real world, with limited attention spans, imperfect reasoning, imperfect common knowledge, and just plain old laziness, I can’t get that kind of focus unless I push the data into the background and my preferred interpretation into the foreground.

I am not being cynical.  All of that is true even if my interpretation is the right one and the most important one.  As a practical matter if I want to maximize the impact of the truth I have to filter it.

Still it’s useful to keep this perspective in mind.