This month, I would like to discuss a concern shared by many in our community: how research quality is assessed.
As I mentioned in my last message, for companies, such an assessment is relatively easy—you can look at the profit-and-loss column. In academic research, measures of success are more complex; doing justice to the science requires a multidimensional analysis. Unfortunately, those who ultimately decide on research funding are not necessarily experts—and too often, they look primarily at bibliometrics such as the number of citations a publication or researcher has attracted, the researcher’s h-index and so on.
While the number of citations arguably has some meaning as an indication of the influence of a paper or researcher in a given field, the assessment is more difficult for very recent papers that have not had time to attract citations. In this case, one common practice is to look at the citations a particular journal has attracted in the past, using a measure such as the analytics firm Clarivate’s Journal Impact Factor. But the relation between the quality of a research paper and the long-term citation performance of the journal in which it’s published is extremely indirect, to say the least. Nonetheless, some funding agencies require that project results be published only in journals that rank in the top quartile by Impact Factor. That is not a good development.
Eleven years ago, a group of editors and publishers formulated recommendations to funding agencies, academic institutions, journals, organizations that supply metrics, and individual researchers. These were embodied in the San Francisco Declaration on Research Assessment (DORA; sfdora.org). More than 24,000 individuals and organizations (including Optica) in 165 countries have signed the declaration during the past decade. In December 2023, one of the signatories, Sorbonne University in Paris, even discontinued its subscription to the bibliometric tools (including Web of Science) offered by Clarivate. Nonetheless, some still search for easy marks of quality—and the Journal Impact Factor, a doubtful quality criterion for an individual paper, continues to be used.
We can all agree that every scientific paper deserves to be reviewed thoroughly. For most papers, this happens several times, and independently—by the journal in the peer review process; by one or several funding agencies; by search committees; perhaps by others. This duplication of effort does not seem very efficient. Given the apparent, ongoing desire for an easy-to-use quality mark, it would be worthwhile to introduce more efficiency—for example, by reusing the result of the initial peer review process. A number of journals, including some Optica publications, already offer transparent peer review, in which authors and reviewers can agree to make all peer review correspondence available as part of the published paper. While a step in the right direction, this does not provide a quality mark that nonexperts can easily use.
How this situation can be improved should be a topic of interest for everyone in the research community.
—Gerd Leuchs,
Optica President