Figures
Following a re-analysis and validation of all analyses in this article, a minor discrepancy in a few numbers was uncovered: in a power calculation for two-sample t-tests, (df+2)/2 was used in a formula where df+2 should have been.
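For context, in a two-sample t-test with equal group sizes the total sample size is recovered from the degrees of freedom as N = df + 2 (since df = n1 + n2 − 2), so (df+2)/2 gives the per-group size rather than the total. A minimal sketch of such a power calculation, assuming a two-sided test computed from the noncentral t distribution (the function name and exact parameterization are illustrative, not the authors' actual code):

```python
from scipy.stats import nct, t as t_dist

def two_sample_power(d, df, alpha=0.05):
    """Power of a two-sided, two-sample t-test with equal group sizes."""
    n_total = df + 2           # df = n1 + n2 - 2, so total N = df + 2
    n_per_group = n_total / 2  # (df + 2) / 2 is the per-group size, not N
    # Noncentrality parameter for equal groups: delta = d * sqrt(n / 2)
    ncp = d * (n_per_group / 2) ** 0.5
    t_crit = t_dist.ppf(1 - alpha / 2, df)
    # Two-sided power from the noncentral t distribution
    return (1 - nct.cdf(t_crit, df, ncp)) + nct.cdf(-t_crit, df, ncp)
```

For example, d = 0.5 with 64 participants per group (df = 126) yields power of roughly 0.80; confusing the per-group and total sample sizes in a formula like this is exactly the kind of slip the correction describes.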
The third sentence of the abstract should read: “Median power to detect small, medium, and large effects was 0.12, 0.46, and 0.78, reflecting no improvement through the past half-century.”
In the sixth paragraph of the results section, the third sentence should read: “For example, to detect a small true effect (d = 0.2), 90% of cognitive neuroscience records had power < 0.242.”
The first sentence of the tenth paragraph of the Results section should read: “The somewhat higher power in the journals we classified as more medically oriented was driven by the Journal of Psychiatry Research (JPR in Fig 4; median power to detect small, medium and large effects: 0.24, 0.79, 0.94), which includes more behavioral studies than the other two journals we classified as ‘medical.’”
In the 14th paragraph of the Results section, the fourth sentence onward should read: “In the best case of having H0:H1 odds = 1:1 = 1 and zero bias, FRP is 13.0%. A 10% bias pushes this to 22%. Staying in the optimistic zone, when every second to every sixth hypothesis works out (1 ≤ H0:H1 odds ≤ 5) and with relatively modest 10%–30% experimenter bias, FRP is 22%–70% (median = 50%). That is, between one-quarter and three-quarters of statistically significant results will be false positives. If we now move into the domain of slightly more exploratory research, where even more experimental ideas are likely to be false (5 < H0:H1 odds < 20; bias = 10%–30%), then FRP grows to at least 59%–90% (median = 75%).”
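The FRP figures above follow the positive-predictive-value logic of Ioannidis (2005): with significance level α, power 1−β, pre-study odds R = 1/(H0:H1 odds), and bias u (the fraction of otherwise non-significant analyses reported as significant), FRP = 1 − PPV. A hedged sketch of that model — the exact parameterization used in the article may differ:

```python
def false_report_probability(power, odds_h0_h1, bias=0.0, alpha=0.05):
    """FRP = 1 - PPV under the Ioannidis (2005) model with bias u."""
    R = 1.0 / odds_h0_h1   # pre-study odds that the probed hypothesis is true
    beta = 1.0 - power
    u = bias
    # Expected significant results from true vs. null hypotheses
    true_pos = R * (power + u * beta)
    false_pos = alpha + u * (1.0 - alpha)
    return false_pos / (true_pos + false_pos)
```

For instance, with 1:1 odds, zero bias, α = 0.05, and power 0.33, this gives FRP = 0.05/0.38 ≈ 13%; raising either the bias or the H0:H1 odds raises the FRP, as in the ranges quoted above.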
Similarly, corrected power estimates in Table 1 are 0.11, 0.15, 0.42, 0.46, 0.75, and 0.71 in the first row; 0.16, 0.24, 0.64, 0.63, 0.88, 0.84 in the second row; 0.16, 0.24, 0.62, 0.60, 0.87, 0.82 in the third row; and 0.12, 0.18, 0.46, 0.52, 0.78, 0.76 in the fourth row. All other rows and columns remain unchanged.
Please see corrected Table 1.
The bottom row shows mean power computed from 25 power surveys.
https://doi.org/10.1371/journal.pbio.3001151.t001
Acknowledgments
The authors thank Dr Marjan Bakker (Tilburg University), who kindly re-analysed all analyses in the paper and discovered the mistake described above.
Reference
- 1. Szucs D, Ioannidis JPA (2017) Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature. PLoS Biol 15(3): e2000797. https://doi.org/10.1371/journal.pbio.2000797 pmid:28253258
Citation: Szucs D, Ioannidis JPA (2021) Correction: Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature. PLoS Biol 19(3): e3001151. https://doi.org/10.1371/journal.pbio.3001151
Published: March 5, 2021
Copyright: © 2021 Szucs, Ioannidis. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.