
Analysis preregistration gets a big boost in the form of a million dollars

The Center for Open Science has recently issued the $1,000,000 Preregistration Challenge. What is preregistration? Briefly, the idea is to reduce “researcher degrees of freedom.” Researchers often explore their data first and only then make data-processing and analytical choices, based on what they see in that same data. But this can lead to biased results and incorrect scientific inference. Recent meta-studies have found a surprisingly high level of non-reproducibility in psychology and biomedicine, and this has no doubt spurred increased interest in techniques that can potentially make studies more reproducible. And preregistration is one such technique.

When I first heard of preregistration, I thought, “huh, well, sure, I guess in theory that would be a good thing to do.” But I also thought, “why would I do that? It’s quite a bit of extra work and doing so won’t benefit me in any way.” Like everyone else, I have limited time, and it’s valuable. I do cost-benefit analyses all the time on whether or not to do things. And preregistration wasn’t even close to the break-even line.

But this Preregistration Challenge may tip the scales for me. $1,000 is not to be scoffed at. And 1,000 recipients isn’t a small number of awards. I have a major new project coming up that I haven’t yet started analyses on. And the timeline for awards distribution seems quite reasonable. (In other words, the awards won’t all be won by projects that are fast and in disciplines that have fast publishing turn-around times.)

It’s pretty clear that for preregistration to become the norm, academic culture will need to change. And cultural change is hard. This challenge is a neat way to get a lot of people to experiment with a new practice that they likely wouldn’t have bothered with otherwise.

Oh wait.

Maybe I’ll take it all back. There’s a list of ‘eligible’ journals one can publish results in. And I assumed this list was just to vet journal repute — i.e. you can’t win if you publish in a journal that doesn’t actually do peer-review. But the list is actually journals that are open science friendly. And ecology journals are poorly represented on the list. Which is a problem, of course. But nudging ecology journals towards more open practices is not something I have the time or energy for. Boo. Maybe I’ll preregister and hope that by the time I publish, there are more ecology journals on the list.


Comments

  1. David Mellor (@EvoMellor)

    Thanks for taking a look at the Preregistration Challenge. The eligible journal list is indeed short on ecology journals right now.* However, we have had great initial interest in this competition and in the Transparency and Openness Promotion Guidelines (https://cos.io/TOP) from a group of a few dozen ecology and evolutionary biology journals, and we hope to have them sign on in the near future, which would be a huge boost to open science in the field. We’re encouraging community engagement and hope that interested researchers advocate for these topics among their peers (https://cos.io/getlisted/).

    *For the time being, journals that publish ecological research on the list include the Royal Society journals, the PLOS journals, BMC Ecology, PeerJ, and Collabra.

  2. Koen Hufkens

    My question would be: how does this work in, for example, the world of mechanistic model development?

    The preregistration challenge would work very well in classical manipulation experiments, less so for anything that deviates from that form. Or, you would need to be so broad in your initial statistical framework that anything goes.

    In the case of my recent grassland model, I could have speculated that it would be a coupled pulse-response hydrology-vegetation model including radiation and temperature as co-limiting factors. But this would have been overly broad.

    You could rephrase the question and say that you are improving upon an existing framework and testing for the difference in accuracy using test X. Again, I would consider this overly broad as an up-front definition of your methodology, and useless, as your goal is exactly to pass test X in the first place.

    Thoughts?

    1. Margaret Kosmala

      My guess is that there is less of a bias problem for things like mechanistic model development — or for things like machine learning — where you’ve got a validation dataset built in as part of bias and overfitting prevention. I think the challenge really is focused on traditional hypothesis-driven work, where researchers changing their hypotheses to fit the data can cause bias. I’ll ping David Mellor, above, though. He’s probably thought about it more than I have.

  3. David Mellor (@EvoMellor)

    Preregistration is most useful for checking the biases that occur during hypothesis testing. As you mentioned, the goal of model development is to refine the model to “pass” your test, so preregistration is a bit less relevant. However, one goal of the competition is to encourage this kind of wider experimentation: does specifying ahead of time the exact method you will use in model development help? You can give it a try and see how much your planned procedure differs from what you actually end up doing. The one case you mentioned, comparing the predictions of a developed model to real-world data, does seem very relevant, if a bit broad. I would be happy to hear your thoughts if you want to try it out.

  4. David Mellor

    I know it’s been a while, but I wanted to reach out again on this thread. Recent ecology additions to the list include Ecology Letters, Systematic Botany, Oikos, Ecology and Evolution, and Conservation Biology.

    This is in addition to the already added journals from the Royal Society, AGU, Collabra, PeerJ, all of the PLOS journals, and BMC. Overall, there are 466 journals on the list and we hope that the recent discussions in the field (e.g. http://onlinelibrary.wiley.com/wol1/doi/10.1111/ele.12611/full) will keep that list growing!

    1. Margaret Kosmala

      Hey, thanks for the update!

      I’m just now starting a new project and am working through my planned analysis instead of diving right into the data (like I usually do) so that I can participate in the challenge. I’ll also blog about how that all goes. One challenge I’m finding is time pressure: I have to submit a report on my progress pretty soon, and it would be nice to have some ‘preliminary analyses’ to highlight. Figuring out all the analysis first takes time…

      1. David Mellor

        My best recommendation is to start simple and to preregister only the subset of tests that you are 1) most certain you will conduct, 2) most certain will come out “as expected,” or 3) clearest about in terms of what the alternative explanations would be, whatever the results turn out to be. There is no expectation that unplanned analyses shouldn’t take place (indeed, that would be counterproductive!), but clarifying what was pre-specified versus what resulted from unplanned data exploration is a major goal of the whole process.

    2. Margaret Kosmala

      Got it! Thanks.

  Pingback: Thoughts on preregistering my research » EC0L0GY B1TS

    […] like a Good Thing to Do in the name of Open Science. And the monetary incentive pushed me over the learning-curve barrier and the fact that it involves a bit more work than usual. I consider my preregistration a bit of an experiment. Having written one now, I have […]
