Is Academic Publishing BS?
UCLA public policy professor Mark Kleiman pithily summarizes what many have been saying about the academic publishing racket in a post titled “Toward a general theory of academic b.s.”:
1. One measure of success as a professor, and one source of reputation, is the production of successful students, where success is defined as jobs, and eventually tenure, at good places.
2. Jobs and tenure are produced by publications.
3. Saying something new, important, and true is hard. Most people aren’t up to doing it very often. Sturgeon’s Law (“Ninety percent of everything is crap”) cannot be repealed.
4. Therefore, success requires publishing papers that are not new, important, and true, that embody either pure b.s. or Kuhnian “normal science”: that is, solving minor puzzles according to a given paradigm without challenging the paradigm.
5. Therefore, in order to achieve a place within academia, a style of work and thought needs to enable the production of uninteresting papers. Those papers need to be distinguishable as “better” or “worse” by those in the in-group, so they can pretend – to others and to themselves – to maintain standards and reward excellence, but it must be possible to crank those papers out more or less mechanically without any risk of producing a “wrong” result. Ideally, the papers should be written in some incomprehensible language (using either lots of non-standard words or lots of double integral signs and Greek letters) to conceal their vacuity from outsiders.
6. Str**sians, r*tion*l-ch**ce political scientists, r*t**nal-exp*ctations macroeconomists, F**c**ldians, L*c*nians, d*c*nstr*ction*sts, ec*n*metr*cians, and practitioners of “critical” anything have formulas that allow mediocre minds to produce arbitrarily large numbers of papers, and other mediocre minds to sort them out as journal referees. That gives their practitioners an edge when it comes to publishing. And the custom of citing one another gives them a further edge when it comes to citations.
While clever and possessed of several large grains of truth, Kleiman's theory is also far too harsh.
The problem is one of the following:
1. Kleiman has far too demanding definitions of “new,” “important,” “interesting,” and “mediocre minds”;
2. publishing large numbers of new, important, and true papers is a ridiculous standard by which to judge academic researchers; or
3. both of the preceding are true.
I tend to think it’s 3, with 2 being the most prominent explanation.
Mark’s a very bright and productive fellow even by standards of professors at elite universities. Most professors are less bright and productive than he is and most schools less elite than UCLA. Yet all but a relative handful of the college professors I’ve encountered over the past quarter century plus are quite bright and more than adequately competent in their subject matter to teach it to undergraduates and those seeking a master’s credential in social science education. Which, incidentally, is what almost everyone running around with the title “professor” is actually doing at all but the most elite institutions.
But the fact of the matter is that even extraordinarily talented scholars — the best of the best, the ones competing for Nobel Prizes and Fields and John Bates Clark Medals and the like — tend to have only one or two breakthrough ideas and then spend the rest of their careers spinning out minor variations. The William Rikers and Samuel Huntingtons, who seemed to have multiple paradigm-setting insights throughout their careers, are absurdly rare. So, the idea that some guy down the hall from Mark — much less a guy at Northwest Podunk State University teaching a 4/4 load — is going to crank out multiple game-changing articles is just fantastical.
Then again, a system where a full 1/3 of all published articles are never cited (!) is taking it a bit far. But that’s where we are:
- Only 11 papers got more than 1,000 citations
- 245,000 got fewer than 10 citations
- 100,000 got one citation or none
So, does that make the whole thing b.s.?
Not at all. Just because a study isn’t a candidate for a Nobel Prize doesn’t mean it doesn’t advance our understanding of a field or discipline. In American politics, for example, there are always new election data that can be used to test and refine existing theories. There are interestingish and newish ways to slice the salami, applying techniques to different subsets of data of interest to a particular audience.
Should teaching schools abandon the demand for significant publication? It is, after all, a fairly recent requirement, added almost entirely because the oversupply of PhDs made it possible for departments to make the demand. And, frankly, the fact that tens of thousands of people are forced to publish to prove their worth as scholars was the driver for the proliferation of journals that makes it possible to get ever-less-interesting articles published.
I don’t think so. Being forced to continually engage with the literature and one’s colleagues outside the department is vital for keeping current in the field and remaining an active scholar. Otherwise, professors at most institutions would tend to devolve into mere teachers and the stereotype of the old prof lecturing from yellowed notes would be much more real.
It would, however, be nice to break the cycle of creating articles that are readable by and applicable to only a tiny number of one’s colleagues in one’s sub-sub-subfield. More cross-disciplinary work and articles in policy journals or for non-specialized audiences would be a far more productive use of the average scholar’s mind and time. But I haven’t the foggiest idea of how to change the incentive structure to make that happen.