The relevance and challenges of using journal guides to evaluate the quality of journals and papers

Wouldn’t it be great to have a single, clear, decisive list that determines what quality should be attributed to each academic journal? The answer, simply, is: certainly not. To start with, it is not possible. Defining quality is an elusive, perhaps impossible task, as beautifully portrayed in “Zen and the Art of Motorcycle Maintenance” (Pirsig, 1999). Measuring quality is no less challenging. Yet we know quality when we experience it, and different journals do represent different levels of quality. Much of this is due to the rigor they require from submissions and the rigor of their review and decision-making processes. Past success and reputation tend to establish norms, reinforcing the quality of the papers published.

Based on personal experience, veteran scholars may form a well-founded view about which journals are stronger than others. External data, such as rejection rates and typical citation levels (e.g., the Impact Factor), may reinforce these individual perceptions. The quality of the review process, and of the reviews themselves, is another factor.

However, not every scholar benefits from such tacit knowledge – certainly not early-career scholars. They need some form of benchmark in their quest for recognition and career success. They wish to maximise the return on their effort, which means publishing their papers in the best-fitting journals.

Other stakeholders with an interest in knowing the quality of journals are decision makers – they would like metrics to support decisions such as hiring and promotion, awarding accolades, and sometimes bonuses too. Indeed, such decisions should be based on merit and on valid, reliable data.

This is when quality lists have a role, and this is why they emerged years ago.

Now, every list is political and biased – perhaps apart from raw Impact Factor statistics. The worst, in my opinion, are institutional lists, where a small group of people in one organisation, with their own specific interests, decide which journals count as ‘top’, ‘second best’, and so on. In my experience, when certain professors leave the scene, these lists are often revised according to the interests of the new professors.

Other lists are based on the views and evaluations of groups of scholars. They, too, are biased by the evaluators’ own experiences, but it is nevertheless better to have a less-than-perfect list than to rely on a list imposed by tyrants, and certainly better than submitting to a journal chosen at random. The scholarly community needs some way of proposing which journals are more or less ‘worthy’ than others.

The Chartered Association of Business Schools’ Academic Journal Guide (aka the ABS List) is such a list, and in my view one of the better ones, if not the best, to serve as a benchmark for journal quality. This is due to the way it was constructed and is periodically updated. The title, ‘Journal Guide’, is a clear positioning of the list’s role: guidance and advice, not command and control. It suggests which journals may be considered more or less ‘worthy’. The interested reader can learn more about how the list was compiled, and about the process for reviewing it, in its methodology, which is publicly available on the Chartered ABS website.

One significant caveat: the quality of a journal is only a rough proxy for the quality of the typical papers it publishes. A truly excellent paper will sometimes appear in a less reputable journal, and, similarly, mediocre papers can appear in leading journals. The nature of the reviewing process means this will never be a perfect system.

In my view, it is a good idea to consult quality lists before submitting to journals. Authors usually know whether they have a ‘true gem’ of a ground-breaking study, a good study, or simply a nice piece of work with a limited contribution. They should be able to optimise their efforts by directing their papers to journals where they will have a fair chance.

Overall, there is a role for quality lists, and it is a good idea to consult them as a guide to, and indicator of, the quality of journals – not as an arbitrary, mechanical algorithm for determining where to submit any given paper.

Yehuda Baruch is Professor of Management at the University of Southampton.