How biased is the literature in Psychological Science?
Disciplines
Other Social Sciences (50%); Psychology (50%)
Keywords
Open Science, Reproducibility, Dissemination Bias, Scientific Integrity, Decline Effect, Research Synthesis
The confidence crisis in psychological science has by now reached the attention of the popular media. In particular, scandals involving data fabrication, such as the prominent research fraud case of Diederik Stapel in the Netherlands, have received considerable public attention. Although such cases attract the most attention, other mechanisms in the scientific process pose a considerably stronger threat to the validity of empirical research results. Non-replicable findings, voodoo correlations, and zombie theories permeate the scientific literature and often lead to the adoption of spurious results. This not only harms the scientific process but can also have dramatic real-world implications. For instance, a large-scale meta-analysis of randomized controlled trials on the effectiveness of clinical interventions showed that initial effect estimates and early published studies reported stronger average treatment effects than subsequently published studies. Such effect overestimations can be observed even when studies have been preregistered, as is typically the case in clinical intervention research. The consequences of effect inflation are exacerbated by the fact that strong, surprising, hypothesis-conforming, and significant results are published more often, more quickly, and more visibly (i.e., in journals with higher impact factors), and are in turn cited more frequently than smaller (i.e., typically more realistic) effect estimates. As a result, false and inflated effects are often prominently communicated in the literature. The scientific community's increased awareness of these problems has led to considerable efforts, particularly in recent years, to increase the transparency and replicability of empirical studies.
The present project aims to extend these efforts and to contribute to estimating the prevalence and strength of bias in the empirical literature. To this end, we intend to assess, extract, and reanalyze data from all published meta-analyses in five of the most authoritative journals in psychology. By applying standard and specialized methods of research synthesis, we plan to achieve five goals: (1) assess the average strength of effect inflation in initial publications compared to meta-analytic summary effects, (2) calculate average annual effect declines, (3) assess moderating influences of study characteristics as well as the visibility and authority of the journals in which initial studies were published, (4) estimate the prevalence of effect misrepresentation in the literature, and (5) provide estimates of the prevalence of dissemination bias based on seven modern methods for bias detection and compare the results with originally reported bias estimates. Our results will inform authors, reviewers, and readers alike about the potential evidential value of initially published novel results and provide a reasonable estimate of the effect changes that can be expected over time.
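One widely used member of the family of bias detection methods referred to above is Egger's regression test for funnel-plot asymmetry. As a minimal illustrative sketch (the function name and example data are hypothetical, not taken from the project), the standardized effect of each study is regressed on its precision; an intercept far from zero suggests small-study effects consistent with dissemination bias:

```python
def egger_test(effects, std_errors):
    """Egger's regression test for small-study effects.

    Regresses each study's standardized effect (effect / SE) on its
    precision (1 / SE) via ordinary least squares. An intercept that
    deviates clearly from zero indicates funnel-plot asymmetry, a
    common signature of publication bias. Returns (intercept, slope).
    """
    z = [e / s for e, s in zip(effects, std_errors)]  # standardized effects
    p = [1.0 / s for s in std_errors]                 # precisions
    n = len(z)
    mean_p = sum(p) / n
    mean_z = sum(z) / n
    sxx = sum((x - mean_p) ** 2 for x in p)
    sxy = sum((x - mean_p) * (y - mean_z) for x, y in zip(p, z))
    slope = sxy / sxx                   # estimate of the underlying effect
    intercept = mean_z - slope * mean_p  # bias indicator: ~0 if symmetric
    return intercept, slope
```

With a perfectly symmetric set of studies (identical true effect, varying standard errors), the intercept is near zero; asymmetric funnels, in which small studies report larger effects, push it away from zero. In practice the intercept is tested against its standard error, which this sketch omits for brevity.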
Due to their nature, results from empirical research studies may paint a more or less accurate picture of reality. Less representative findings may arise from inappropriate study designs, approaches, or interpretations, but may also reflect chance. While peer review is intended to serve as a failsafe against the publication of suboptimally designed or interpreted studies, chance findings are largely immune to detection by peers. In empirical research, chance findings manifest as inaccurate effect estimates, leading to over- or underestimation of the investigated effects or to the passing of a statistical significance threshold for an effect that in reality does not exist. This problem is typically dealt with through independent replications of newly established effects, which yield more accurate effect estimates once sufficient data have accumulated. However, a central assumption of the scientific method is that effect over- and underestimations occur with about equal frequency, neither scenario being more common than the other. At the core of the present project lies the idea that this is not the case, because exploratory studies systematically report a disproportionate number of overestimated compared to underestimated effects throughout the psychological literature, regardless of the research question under investigation. This is particularly problematic because exploratory studies receive more attention and are cited more often, thus achieving a status of unfounded authority compared to replications. By examining the results of more than 570 research syntheses with over 51 million participants published in five flagship journals of psychological science, we showed in our project that decline effects (i.e., overestimation of the initial effect in a given field, leading to decreasing effect sizes over time) are twice as likely to occur in the literature as effect increases.
Moreover, these declines are considerably stronger than the increases. These findings may be attributed to publication-related mechanisms that incentivize the publication of underpowered studies with spectacular (but unlikely) results. This interpretation is corroborated by our observation that the largest (and therefore most spectacular) effects reported in initial studies were the most inaccurate estimates; in other words, the most breathtaking effects were the most likely to be wrong. Although our results are so far based only on studies published in psychology, we expect them to generalize to other empirical disciplines as well. Our findings attest to the importance of modern open science practices in empirical research but also illustrate the need to move beyond merely voluntary preregistration and data sharing. Reforming editorial policies, incentivizing the publication of accurate rather than spectacular effects, and applying state-of-the-art bias detection methods are necessary to improve confidence in empirical research.
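A decline effect of the kind described above is commonly quantified by meta-regressing effect sizes on publication year, weighting each study by its inverse variance. The following is a minimal sketch under that convention (the function name and example values are illustrative, not the project's actual estimation procedure, which uses specialized research-synthesis methods):

```python
def annual_effect_decline(effects, variances, years):
    """Inverse-variance weighted least-squares slope of effect size on year.

    Each study contributes with weight 1/variance (the fixed-effect
    meta-analytic convention). A negative slope indicates a decline
    effect, i.e. effect sizes shrinking as later studies accumulate.
    Returns the estimated change in effect size per year.
    """
    w = [1.0 / v for v in variances]          # precision weights
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, years)) / sw    # weighted mean year
    my = sum(wi * y for wi, y in zip(w, effects)) / sw  # weighted mean effect
    sxx = sum(wi * (x - mx) ** 2 for wi, x in zip(w, years))
    sxy = sum(wi * (x - mx) * (y - my)
              for wi, x, y in zip(w, years, effects))
    return sxy / sxx
```

For a research synthesis whose effects fall from, say, 0.6 in 2000 to 0.3 in 2015, the function returns a slope of about -0.02 per year; averaging such slopes across many meta-analyses gives one possible operationalization of the "average annual effect decline" named among the project's goals.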
- Universität Wien - 100%
- Jelte (J.M.) Wicherts, Tilburg University - Netherlands
Research Output
- 68 Citations
- 4 Publications
- 1 Methods & Materials
2019
Title Directional and regioselective hole injection of spiropyran photoswitches intercalated into A/T-duplex DNA DOI 10.1039/c9cp03398j Type Journal Article Author Avagliano D Journal Physical Chemistry Chemical Physics Pages 17971-17977 Link Publication
2019
Title Effect Declines Are Systematic, Strong, and Ubiquitous: A Meta-Meta-Analysis of the Decline Effect in Intelligence Research DOI 10.3389/fpsyg.2019.02874 Type Journal Article Author Pietschnig J Journal Frontiers in Psychology Pages 2874 Link Publication
2020
Title Times are Changing, Bias isn’t: A Meta-Meta-Analysis on Publication Bias Detection Practices, Prevalence Rates, and Predictors in Industrial/Organizational Psychology DOI 10.31234/osf.io/mtv2h Type Preprint Author Siegel M Link Publication
2022
Title Times Are Changing, Bias Isn’t: A Meta-Meta-Analysis on Publication Bias Detection Practices, Prevalence Rates, and Predictors in Industrial/Organizational Psychology DOI 10.1037/apl0000991 Type Journal Article Author Siegel M Journal Journal of Applied Psychology Pages 2013-2039 Link Publication