The Economist uncharacteristically published a piece about epistemology and method, though they didn’t come out and say so. They review the pros and cons of using ‘instrumental variables’ in statistics. They give the example of using years of schooling to stand in for “innate scholastic ability” as a variable for predicting potential earnings, which is arguably necessary because something like innate scholastic ability is very difficult to measure. If I’m driving home at night and the guy ahead of me is swerving, I will infer that he’s probably drunk, though I have no way to test that directly: his swerving is my ‘instrumental variable’. The article does mention several criticisms of using them, but it misses several bigger points.
First, in political science these are usually called ‘proxy variables’ because they substitute for something we can’t measure. That is, they’re indirect to start with. They force you to start your analysis somewhere you didn’t want to be.
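The schooling example above can be sketched in a few lines of simulation. This is a toy illustration with made-up numbers, not anything from the Economist piece: “ability” is generated but never handed to the regression, and “schooling”, which merely correlates with it, stands in as the proxy.

```python
# A minimal sketch of a proxy variable, with hypothetical numbers:
# we never observe "ability" directly, only "schooling", which
# correlates with it, yet we still recover an earnings relationship.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

ability = rng.normal(0, 1, n)                        # latent, unobservable
schooling = 12 + 2 * ability + rng.normal(0, 1, n)   # observable proxy
earnings = 30_000 + 5_000 * ability + rng.normal(0, 5_000, n)

# Regress earnings on the proxy alone (simple least squares)
slope, intercept = np.polyfit(schooling, earnings, 1)
print(f"earnings ≈ {intercept:.0f} + {slope:.0f} × schooling")
```

The regression finds a schooling–earnings association without ability ever being measured; that indirectness, as noted above, is exactly what a proxy buys you, and all it buys you.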
Second, the Economist talks about the ability of instrumental variables to tighten control over the relationship of interest: you can add them to your model to make sure you’ve accounted for everything. Can you? Jim Ray, a political scientist, has argued for years that simply cramming control variables into a model actually distorts it. There’s an often-repeated “rule of three” saying that a model with more than three independent/control variables is worthless. A far better way, according to Ray (see the first paper on his site), is to run several tests with few variables and compare them. If you have variables A, B, C, and D, and you run three tests that rank their effects on your dependent variable as ABCD, ABDC, and ADBC, you would know that A is the most important (because it’s always first) and that B is more important than C (because it always comes before C). There are more sophisticated statistical techniques for doing this, but that’s the logic behind them. Economists don’t like to do this because it’s hard and time-consuming, canned software doesn’t do it (or does it badly), and nobody else is doing it, so there’s fear of nonconformity. (Did you ask yourself why the magic number is three? Me too. The reason seems to be that two is too few and four is too many. Brilliant, huh?)
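The compare-the-orderings logic can be sketched quickly. This is a toy version of the idea, not Ray’s actual procedure: rank candidate variables by how strongly each one alone relates to the outcome, repeat on several small subsamples, and see which orderings hold up.

```python
# Toy sketch of the "several small tests" logic (not Ray's procedure):
# rank variables by their individual correlation with the outcome in
# each of several subsamples, then compare the rankings.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
A, B, C, D = rng.normal(size=(4, n))
# Hypothetical outcome: A matters most, then B, then C; D is pure noise.
y = 3 * A + 2 * B + 1 * C + rng.normal(size=n)

candidates = {"A": A, "B": B, "C": C, "D": D}
orderings = []
for trial in range(3):
    idx = rng.choice(n, size=500, replace=False)      # one small "test"
    strength = {name: abs(np.corrcoef(x[idx], y[idx])[0, 1])
                for name, x in candidates.items()}
    orderings.append("".join(sorted(strength, key=strength.get, reverse=True)))

print(orderings)  # A should rank first in every trial, and B before C
```

If A leads every ordering and B consistently precedes C, you reach the same conclusion as the ABCD/ABDC/ADBC example above, without ever fitting one bloated four-variable model.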
I think a deeper problem with statistics in economics, and the social sciences more generally, is that many practitioners have illusions about what statistics can do and how to use them. Statistics can’t show causality. They can only show whether the mutual occurrence of values is something we would expect to see randomly, or whether that mutual occurrence would be odd. That is, if we say that tall people make a lot of money, what counts as “a lot”? Statistics can tell us that people over 6’6” (about 2 m) should earn $X a year if they were like everybody else, but that they actually earn $X + 15,000 a year, and that there’s a 1 in Y chance that what we’re seeing is purely accidental. They can’t tell us why tall people earn more.

As for the proper use of statistics, we often cook the books to find what we were looking for from the beginning: scale a variable here and build an index there until it all fits. That is, we’re hunting for the correlation. The idea should be the opposite. If you find a correlation, try to destroy it. Try to make it disappear. If it stands despite your best efforts to make it go away, it might be worth asking why it won’t go away, and that’s going to require a totally different kind of research. But negative results don’t get published, and it’s hard to figure out what counts as a ‘significant’ negative result, so we ignore them and keep hunting for the correlation. Whatever you do, though, statistics will never be able to indicate causality! If you want to get all huffy about statistical tricks that supposedly indicate causality, like ‘Granger causality’, spare me. Adding lags can show a progression through time, but that still doesn’t count as a mechanism!
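The height-and-earnings point can be made concrete with a permutation test on made-up salaries (the $15,000 gap is baked into the simulation; all numbers are hypothetical). The test answers exactly one question, the “1 in Y chance” above: how often would random group labels produce a gap this large? It says nothing about why the gap exists.

```python
# Permutation test on hypothetical salary data: is the observed gap
# between tall and everyone-else earners plausibly accidental?
import numpy as np

rng = np.random.default_rng(2)
short = rng.normal(50_000, 12_000, 400)   # hypothetical salaries
tall = rng.normal(65_000, 12_000, 40)     # simulated $15,000 gap

observed_gap = tall.mean() - short.mean()
pooled = np.concatenate([short, tall])

trials, count = 5_000, 0
for _ in range(trials):
    rng.shuffle(pooled)                                   # random relabeling
    gap = pooled[:40].mean() - pooled[40:].mean()
    if gap >= observed_gap:
        count += 1
p_value = count / trials   # the "1 in Y chance" from the text

print(f"gap = ${observed_gap:,.0f}, p ≈ {p_value:.4f}")
```

A tiny p-value tells you the gap is not an accident of sampling. It is silent on mechanism, which is the whole point of the rant above.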
To moderate my rant against statistics, I’d also like to point out another of their uses, possibly the best one: counting. We can’t count the fish in the sea (easily), we can’t count the stars in the sky (at all), and we’d have a hard time counting everybody in the world, but we don’t have to. Just as statistics can answer questions like “what’s big?” or “how many is a lot?” very well, they can also answer the simpler question of “what’s there at all?” given a surprisingly small amount of data. Those super-early exit polls are often close to the money, and it is possible to infer a population’s values from those of a small sample using statistics. But those polls will never be able to tell you why any respondent voted or answered as s/he did.