Metrics, flawed indicators, and the case of philosophy journals
De Marchi and Lorenzetti (Scientometrics 106(1):253-261, 2016) have recently argued that in fields where the journal impact factor (IF) is not calculated, such as the humanities, it is crucial to identify other indicators that would allow the relevant community to assess the quality of scholarly journals and of the research outputs published in them. The authors suggest that information concerning a journal's rejection rate and the number of subscriptions sold is important and should be used for such assessment. The question the authors address is very important, yet their proposed solutions are problematic. Here I point to some of these problems and illustrate them by considering the field of philosophy as a case in point. Specifically, I argue for four main claims. First, even assuming that the IF provides a reliable indicator of journal quality for the assessment of research outputs, De Marchi and Lorenzetti have failed to validate their suggested indicators and proxies. Second, they do not clarify why, in the absence of the IF, other currently available journal-based metrics should not be used instead. Third, the relationship between the IF and the rejection rate is more complex than the authors suggest. Fourth, accepting the number of subscriptions sold as a proxy would discriminate against open access journals. The upshot of my analysis is that the question of how to assess journals and research outputs in the humanities remains far from resolved.