I was recently referred to a blog post discussing citation patterns across the disciplinary spectrum: Poor citation practices are a form of academic self-harm in the humanities and social sciences. Although there is much to discuss there, two issues came to mind immediately.
First, there is not a single line in the whole blog post offering a critical appraisal of the use of citation indexes as proxies for quality. At this point, that is inexcusable, and I won’t go through the trouble of pointing to the accumulated literature that makes the case over and over again. I would just like to call attention to a recently published ranking of the most cited articles (What’s the most cited research paper ever? You’ll be surprised), linked by BoingBoing, which reproduces a revealing commentary: "The exercise revealed some surprises, not least that it takes a staggering 12,119 citations to rank in the top 100 — and that many of the world’s most famous papers do not make the cut. A few that do, such as the first observation of carbon nanotubes (number 36) are indeed classic discoveries. But the vast majority describe experimental methods or software that have become essential in their fields." I do not want to diminish the relevance of new methodologies, but this paragraph in and of itself should be enough to give pause to those eager to use citations as a measuring tape for science.
But where that post really misses the mark, in my opinion, is in suggesting systematic reviews as some form of panacea for the "problem" of citations in the humanities. That is only possible if all the specificities that differentiate those areas of knowledge are ignored. To make just one quick remark: consider Kuhn’s discussion of paradigms and the fact that, unlike the natural sciences, the social sciences are not mono-paradigmatic. This feature alone has two important implications. First, it is much easier to analyze massive numbers of articles when they agree on all kinds of essential aspects, since those aspects need not be addressed; in the social sciences and humanities, by contrast, there are competing schools of thought and it is impossible to adequately address all of them. Second, and because of that, researchers in those areas must spend much more time describing their theoretical and methodological choices, which means more time devoted to smaller volumes of literature and less textual output. Ignoring this is a return to a 19th-century version of a "unified science" bound to the same "universal" method, modeled on the physical sciences.
Compare this with the much more nuanced approach of the LSE blog (Impact of social sciences – 1: What shapes the citing of academic publications), which, while making similar remarks, also points to the same issues I outlined above, for instance:
"In medicine all published papers are written to a word limit of 3,000 words, whereas the norm in the social sciences is for main papers to be around 6,000 to 9,000 words long"
"The differences in citation patterns between the medical/physical sciences and the social sciences and humanities can also be explained by the development of a ‘normal science’ culture in the former – whereas in the social sciences there are still fundamentally opposed theoretical streams across most of the component disciplines. In the social sciences citations can become a way of taking sides on what constitutes a valid argument. All of these features are even more strongly marked in the humanities, where referencing is often a matter of personal choice."
This leads the author(s) of that blog post to a markedly different conclusion: "Cumulatively these effects are more than enough for us to emphasise that no worthwhile comparisons of citation rates or scores achieved by different academics can be made across the major discipline groups recorded in Figure 1.1 The nature of an academic subject, the ways in which it is set up to generate different kinds of publications, and how practices relating to citation and literature reviews have developed over time, are all far too distinctive across major subject groups to make inter-group comparisons legitimate or useful."
I look forward to the day when less lazy attempts to evaluate science are in place, and we can look back at the current mess and laugh at our own simplemindedness…