In science, reproducibility is key to making systematic progress, and scientometrics is no exception. The reproducibility of scientometric research was the topic of a workshop held on the afternoon of 17 October 2017 at the 16th ISSI Conference in Wuhan, China, attracting about 50 participants. The workshop sought to kick off a debate on whether, and in what way, the reproducibility of research in scientometrics may be endangered, and if so, what can be done to address the problem.

The workshop started with a series of short presentations by the workshop organizers, offering a variety of perspectives on reproducibility in scientometrics research:

- Sybille Hinze (DZHW, Germany) reported on the collaborative efforts of the German competence center for bibliometrics around the creation, development and curation of a quality-assured data infrastructure for bibliometric applications.
- Jason Rollins (Clarivate Analytics, USA) presented a vendor's view on reproducibility. He emphasized Clarivate Analytics' readiness to cooperate, e.g. by providing custom datasets of the Web of Science for data challenges, and expressed openness to suggestions on the details of other datasets.
- Jesper Schneider (Aarhus University, Denmark) remarked that the concept of reproducibility is more ambiguous than common-sense understandings suggest. He distinguished between exploratory and confirmatory (or explanatory) research, criticizing that research is too often framed as explanatory or confirmatory when in fact it is only exploratory, leading to issues with replicating the claims made.
- Ludo Waltman (CWTS, Netherlands, and editor of the Journal of Informetrics) argued that, given the differences between psychological research and scientometric research, we should not expect to encounter the same major reproducibility problems; instead, he suggested, the major threat in scientometrics is mistakes made in data analysis.
- Theresa Velden (ZTG, TU Berlin, Germany) discussed concerns about the reliability of computational methods used to map scientific fields, concerns that drive the recent topic extraction challenge (www.topic-challenge.info).
- Katy Börner (Indiana University), in a recorded video message, described various data-centered, tool-based, and training-oriented initiatives she is involved in to improve the reproducibility of research in scientometrics.

In the second part of the workshop, all participants joined break-out groups to discuss three questions:

  1. What threats to the reliability of scientific knowledge in scientometrics exist & why bother?
  2. Should we be more concerned about exact or conceptual reproducibility? (Why?)
  3. Through what measures can these threats be addressed?

The break-out groups were organized around four broad topics. Below are some of the key points they discussed and reported back at the concluding session of the workshop:

- Data (Rapporteur Sybille Hinze): We need good data, since data are foundational for everything that comes after, and one of the key requirements for reproducibility is the stability of data. The group regarded alternative data sources as hugely problematic, as they contain more black boxes than traditional data sources, and emphasized that we must ourselves fulfill the same requirements we expect database vendors to fulfill.

- Computational Methods (Rapporteur Ludo Waltman): We need clear, standardized protocols for checking computations so that the most common errors are avoided, and when tools are used we need better explanations from users and developers of what the tools actually do. The group further suggested calculating scientometric statistics twice to ensure correctness (see the illustrative sketch after this list), and stressed the importance of discussing with users how the statistics have been obtained. The group echoed the need for stable access to data in order to support exact replication.

- Statistical Methods (Rapporteur Jesper Schneider): The group concluded that over-reliance on statistical significance and statistical inference is problematic, and that treating statistics as the prime evidence for knowledge claims is questionable; other forms of evidence need to be used as well. The group called for more openness and better documentation of analyses, and suggested that if findings seem interesting, we should try to reproduce them.

- Interpretation (Rapporteur Alesia Zuccala): This group focused on conceptual replication of findings rather than exact replication using the same method and data. While we need a clear explanation of underlying concepts and assumptions, e.g. for policy recommendations, the group identified as problematic that in our field operationalization is often considered sufficient while theoretical conceptualization remains fuzzy. It was also identified as a threat to conceptual replicability that, too often, the ready availability of data drives how we conceptualize things.
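To make the "calculate twice" suggestion from the computational methods group a little more concrete, here is a minimal sketch (not part of the workshop material; the data and indicator are purely illustrative) in which a simple mean citation score is computed via two independent code paths and the results are compared:

```python
# Illustrative sketch of independent double computation of an indicator.
from statistics import mean

# Toy citation counts per publication (placeholder data).
citations = [12, 0, 3, 7, 25, 1, 4]

# Path 1: explicit loop.
total = 0
for c in citations:
    total += c
mcs_loop = total / len(citations)

# Path 2: independent library implementation.
mcs_lib = mean(citations)

# The two results should agree (up to floating-point tolerance);
# a mismatch would flag an error in one of the computations.
assert abs(mcs_loop - mcs_lib) < 1e-9
print(f"Mean citation score: {mcs_loop:.2f}")
```

The point is not the particular indicator but the practice: two implementations produced independently are unlikely to share the same mistake, so agreement between them is a cheap sanity check.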

Both the group on computational methods and the group on statistical methods, as well as members of the audience, highlighted the important role journals can play in improving reproducibility and setting standards for best practices, e.g. by providing checklists to authors and reviewers for good method descriptions or by granting certificates to articles that support reproducibility.

The workshop organizers envision a continuation of the discussion in the form of a workshop or special track at the upcoming STI conference, 12-14 September 2018, in Leiden (The Netherlands).

The presentations and outcomes of the workshop are available here.
