Webinar 4 – Publish or perish? – an overview of controversies associated with current scientific publishing practices

A total of 15 participants attended the webinar.

1. Webinar Slides

2. Poll results

1. What is your background?

A: Natural sciences: 8 responses

B: Social sciences and the humanities: 2 responses

C: Both

2. Please let us know via the public chat what your area of expertise is

Participants from diverse research fields such as health care, sport science, medicine, science management, biochemistry, social sciences and electrophysics joined the webinar. (At the time of these questions, 12 people were online.)

3. Questions from the audience

Q.1: Isn’t the pre-submission of experimental designs too much work for little reward? Perhaps even unattractive? What is the benefit here?

4. Speaker’s comments and references for further reading

The reproducibility crisis

Publication bias

During the webinar an example was given of how the traditional way of publishing scientific results (namely, considering only statistically significant results) can be biased. Although statistically significant results should give scientists confidence that the sought effects have been found, in practice, because of the large proportion of unpublished studies reporting non-significant results, these confidence levels may not be as high as commonly assumed. The bias is further amplified by the fact that studies reporting significant results are cited far more often than those reporting non-significant results. The problem is particularly pronounced in clinical trials and medical studies, but it certainly extends to other disciplines. Clearly, more awareness on the part of scientists is needed. An interesting article on the matter was discussed during the webinar.
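As an illustration (not part of the webinar material), the following minimal Python sketch, using entirely hypothetical numbers for the proportion of true effects, statistical power and significance threshold, shows how a literature that publishes only significant results can contain a much larger share of false findings than the nominal 5% threshold suggests.

import numpy as np

rng = np.random.default_rng(42)

n_studies = 10_000       # hypothetical number of studies conducted
prop_true_effects = 0.1  # assume only 10% of tested hypotheses are real effects
alpha = 0.05             # conventional significance threshold
power = 0.8              # assumed power to detect a real effect

is_true = rng.random(n_studies) < prop_true_effects
# Probability that a study reports p < 0.05:
# real effects are detected with the assumed power, null effects by chance (alpha)
significant = np.where(is_true,
                       rng.random(n_studies) < power,
                       rng.random(n_studies) < alpha)

published = significant  # journals accept only significant results
false_positive_share = np.mean(~is_true[published])
print(f"Share of published findings that are false positives: {false_positive_share:.2f}")
# Under these assumptions roughly a third of the published record is spurious,
# even though every single study used the conventional 5% threshold.

The numbers are only an assumption for the sake of the example; the point is that the share of false positives among published results depends on how many of the tested hypotheses are true and on what fraction of non-significant studies ever reaches the literature.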

Statistical considerations

Degrees of freedom

The issue of degrees of freedom was also briefly discussed. Degrees of freedom, or in this case researcher degrees of freedom, refer to the choices researchers make during the design, planning, data collection and reporting phases of their studies. These choices are often arbitrary and can lead to an inflation of the error rate. The opportunistic use of these degrees of freedom aims at obtaining statistically significant results, but the practice is problematic because it increases the chance of false positive results. Take care when planning your studies so that your experimental design does not bias their results. An interesting checklist was shared as further reading, which may help you plan your studies better.

P-values and a posteriori hypothesizing

The questionable research practices of p-hacking (or data snooping) and HARKing, an acronym for “hypothesizing after the results are known”, were briefly mentioned. Researchers engage in these practices in their desire to obtain statistically significant p-values, and they are a point of dispute because they can lead to an inflation of the error rate. A recommendation here is to take time to design your experiments. If you are dealing with complex experimental designs, you may want to increase your sample sizes to compensate for this complexity. Think about whether some observations can be left out of your design, and consider carefully which conditions should be combined and which should indeed be compared. If in doubt about your experimental design, get in touch with more experienced peers and colleagues.
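The error-rate inflation mentioned above can be made concrete with a small simulation, again not taken from the webinar and using hypothetical settings (five arbitrary outcome measures, 20 observations per group): if a researcher tests several outcomes and reports only the best p-value, the chance of a spurious “significant” result rises well above the nominal 5%.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_simulations = 5_000
n_per_group = 20
n_outcomes = 5  # hypothetical number of arbitrary outcome choices
alpha = 0.05

false_positives = 0
for _ in range(n_simulations):
    # Both groups are drawn from the same distribution: there is NO real effect.
    p_values = []
    for _ in range(n_outcomes):
        a = rng.normal(size=n_per_group)
        b = rng.normal(size=n_per_group)
        p_values.append(stats.ttest_ind(a, b).pvalue)
    # The questionable practice: keep only the smallest p-value.
    if min(p_values) < alpha:
        false_positives += 1

print(f"False positive rate with {n_outcomes} outcome choices: "
      f"{false_positives / n_simulations:.2f}")  # roughly 0.2 instead of the nominal 0.05

Pre-specifying a single primary analysis, or correcting for the number of analyses actually tried, keeps the error rate at the intended level.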

Pre-registration

In the last part of the seminar the practice of pre-registration was discussed. Although it may seem an unattractive and even risky practice (since you expose your research ideas to a wider audience before publishing), pre-registration is a mechanism that can help you obtain valuable feedback. Normally, pre-registered studies get a timestamp upon registration and are embargoed to avoid exposing the information to a wider audience; however, you can still give your peers and mentors access to this information and in this way obtain crucial feedback from specific colleagues BEFORE you start your experiments. Pre-registration contributes to making your scientific processes transparent and, ultimately, more reproducible. Many journals now offer pre-registration workflows, which provide a way to secure publication of your data. A good experimental design is rewarded here: the rationale is that a well-designed study will yield valuable results either way. With a proper experimental design, you can conduct your experiments knowing that, no matter what the results are, your paper is likely to be published.

Institutional support for a better open science culture

It was stressed that open science and reproducibility cannot be achieved by scientists alone; institutions and other actors also need to play a role in making sure there are enough incentives and mechanisms for rewarding efforts that improve the reproducibility of science.

Data descriptors and data journals

Some examples of relatively new forms of scientific publishing were given. Data descriptors and scientific journals with a focus on publishing data sets are paving the way to a more transparent research data culture and to making scientific data FAIR. The mechanisms for publishing data sets have evolved considerably over the past decade, so that most data journals have by now established peer review processes and recognized impact factors. Publishing data sets will also contribute to raising your scientific profile.

Dr. Axel Kohler – Goethe Research Academy

Dr. Roberto Cozatl| Open Science Team | openscience@bibliothek.uni-halle.de | 26.10.2020 | University and State Library of Saxony-Anhalt