Professionalization and Marginalization of International Relations Field
The takeover of academic IR study by the stats geeks is complete.
Continuing a running debate at Duck of Minerva and elsewhere, Dan Nexon argues that the takeover of the international relations academy by quantitative researchers has seriously damaged the field but nonetheless advises those who wish to be employed therein to get with the program. The piece is worth reading in full but the upshot is that, while it might be desirable for would-be IR professors to spend their undergraduate years grappling with theory and only move into statistics mode later in their professional progression,
1. It’s next to impossible to get into a top tier IR graduate program without evidence of substantial quantitative background;
2. There’s not much point in developing unique theoretical perspectives, anyway, because they’re only rewarded at a tertiary level even after grad school;
3. There are too many PhDs chasing too few tenure-track jobs, and hiring committees care mostly about the ability to crank out stats-driven articles;
4. The pressure to crank out a steady stream of stats-driven articles has in turn pushed journal editors to publish more, shorter articles that strip out any discussion of theory, since the numbers are the point; and
5. These trends are self-reinforcing, as people who came up in this environment are now at the top of the discipline and regard outliers with grave suspicion if not outright contempt.
These trends were well in evidence when I started my PhD twenty years ago, but the acceleration has been tremendous. The tools were about to take a quantum leap just as I was beginning my dissertation in 1994. At the time, in order to do substantial large-N regression analysis, one had to order tapes of the datasets, arrange for time on the mainframe, and then hand-code the analytics. Not only was this cumbersome but, frankly, I was never confident in the outputs, since it was more than possible that I’d made an error in the coding. By the mid- and certainly the late 1990s, it was possible to do the work on one’s personal computer using pre-loaded formulas. This has made statistical analysis markedly easier but has at the same time raised the threshold: new political science research uses statistical techniques that either hadn’t been invented when I was in grad school or at least hadn’t migrated into the social sciences.
While I’m not a quant guy, there’s a lot of merit to all of this. My interest has always been public policy rather than “political science” per se, but I’ve long since come around to the position John Oneal was arguing two decades ago–that the lack of rigor in much if not most of the public policy analysis field rendered the work rather meaningless. Indeed, I’m shocked at how repetitive the debate is. Arguments that seemed fresh in the early 1990s when applied to the Balkans seem tired now when applied to North Africa–but it’s as if those earlier debates never happened.
The downside, though, is that the academic study of IR has divorced itself from the real-world conduct of international relations. Those who serve in government and work in the IR-focused think tanks tend to come out of the public policy schools rather than mainline PhD programs. And the work being done by academics in IR is largely irrelevant and inaccessible to the policy community. Indeed, I can’t remember the last time I picked up a copy of International Studies Quarterly, much less the American Political Science Review. Frankly, I’m not sure I could read those journals at this point if I wanted to.