Really great write up. When you initially mentioned the three “Don’t Be Like” candidates this was the one I was most excited to read. You write:
> And while we try to study these things, we have deeply imperfect methodologies and data at our disposal. Part of the danger of imperfect methodologies is not just that we can’t answer questions well, it’s that we tailor our questions only to things our methodologies can plausibly answer. Now, this is not bad in principle—we want to be able to match method and question so we can do good research. But it leads to cases like we see here with Acemoglu where we mistake what question the method can answer for the question we actually want to answer.
This is a *spot on* critique of psychology as well. People only think about the world in terms of moderation (ie interaction effects) and mediation because the most common statistical tools can only do those things. It creates such a terrible feedback loop for thinking about and researching human behavior.
yes, it's a really complex thing how research questions and methodologies intersect, and as you say, an entire field can get stuck in a loop and then you end up thinking that that loop describes reality, rather than being a small slice of reality identifiable by your method.
Shouldn't it be 172k citations, it feels more appropriate with your usage of "staggering"?
his h-index is 172, which means he has 172 papers with at least 172 citations each, which is amazing
Thanks for the clarification.