Complicated models
24 Jan 2013

Gelman says:

…it’s just part of a general attitude people have that there is a high-tech solution to any problem. This attitude is not limited to psychologists. For example, Bannerjee and Duflo are extremely well-respected economists but they have a very naive view (unfortunately, a view that is common among economists, especially among high-status economists, I believe, for whom its important to be connected with what they view as the most technically advanced statistics) of what is important in statistics.

What other disciplines find useful from statistics may or may not be interesting (or intelligible!) to you, and that's completely fine. One discipline's overly complex model is another discipline's deeply intuitive bread and butter. Models and methods can be useful whether or not you as a reader understand them.

The Gelman post led Michael Tofias to try to define the complex-model sweet spot.

I generally think Tofias's advice is good for someone using a model. It's your model, and if you break it, you buy it. But the advice gets shaky in at least one common situation. Think of the bootstrap, a clever way of estimating confidence intervals and standard errors by resampling the data. Most people, even those who use it often, have no idea when or why the bootstrap fails. Does that mean people shouldn't use the bootstrap? Probably not.
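To make that concrete, here is a minimal sketch of the nonparametric bootstrap in Python. The simulated exponential data, the choice of the sample mean as the statistic, and the 10,000 resamples are all illustrative choices of mine, not anything from Gelman's or Tofias's posts. The point is that you can run this loop, and rely on its output, without being able to say when the resampling argument breaks down (heavy tails, dependent data, statistics like the sample maximum are the classic trouble spots).

```python
# A minimal sketch of the nonparametric bootstrap: resample the data
# with replacement, recompute the statistic each time, and read a
# standard error and a percentile interval off the resampled values.
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=50)   # placeholder sample, for illustration only

def bootstrap(data, stat, n_boot=10_000, rng=rng):
    """Return n_boot bootstrap replicates of stat applied to the data."""
    n = len(data)
    return np.array([
        stat(data[rng.integers(0, n, size=n)]) for _ in range(n_boot)
    ])

reps = bootstrap(data, np.mean)
se = reps.std(ddof=1)                         # bootstrap standard error
ci = np.percentile(reps, [2.5, 97.5])         # 95% percentile interval

print(f"estimate: {data.mean():.3f}")
print(f"bootstrap SE: {se:.3f}")
print(f"95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```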

There is often a wide chasm between what you consider “simple” and what others consider “simple”. Double the width of that chasm for “intuitive”. It's up to modelers to bridge those gaps. Unfortunately, Gelman's quote above instead widens them by dismissing what's on the other side (the MATH over in that camp, have you seen it?). We roll our eyes at pundits when they rail against using models in politics at all, and yet here is one of our own dismissing more complicated models in academia.