We are seeing a great renaissance in new funding models[^1], grant programs[^2], and (hopefully!) a move back to “fund people, not projects”[^3]. Many of these initiatives bring an easy, non-bureaucratic application process for getting funding. How applicants are selected, however, is still largely the same.
Though no one has yet run a full randomized controlled trial (RCT) of selecting applicants to fund at random, there have been a few attempts at randomized funding:
- Health Research Council of New Zealand: “The selection of successful proposals will not be the same as that for other HRC contracts. All proposals that meet the eligibility criteria will be assessed for compatibility with the scheme’s intent; proposals will not be scored or ranked. All proposals that are considered eligible and compatible will be considered equally eligible to receive funding, and a random process will be used to select the proposals to be offered funding.”
- Swiss National Science Foundation: “Funding agencies rely on peer review and expert panels to select the research deserving funding. Peer review has limitations, including bias against risky proposals or interdisciplinary research. The inter-rater reliability between reviewers and panels is low, particularly for proposals near the funding line. Funding agencies are also increasingly acknowledging the role of chance. The Swiss National Science Foundation (SNSF) introduced a lottery for proposals in the middle group of good but not excellent proposals. In this article, we introduce a Bayesian hierarchical model for the evaluation process. To rank the proposals, we estimate their expected ranks (ER), which incorporates both the magnitude and uncertainty of the estimated differences between proposals. A provisional funding line is defined based on ER and budget. The ER and its credible interval are used to identify proposals with similar quality and credible intervals that overlap with the funding line. These proposals are entered into a lottery.”
- As of 2021, following a successful pilot, SNSF is introducing random selection as a tie-breaker for all funding schemes.
- Volkswagen Foundation’s Experiment! Initiative: “Since 2017, the Volkswagen Foundation is testing a new selection procedure for project applications: In the funding initiative “Experiment!”, part of the funded projects are selected by an independent jury. Additionally, further projects are drawn from those applications that are suitable for the program and eligible for funding.”
- The Austrian Science Fund (FWF) uses a double-blind selection procedure in its 1000 Ideas Programme: “Half of the maximum number of 30 projects that can be funded are selected by lot from a pool that the FWF Board preselects according to strict criteria and which is then evaluated by an interdisciplinary panel of experts. The other half are awarded on the basis of the recommendations of the jury itself. Up to 150,000 euros are available per project. And if a single jury member considers a project to be particularly innovative, but is unable to convince the rest of the panel, he or she can use a wildcard to bring it into the final round. After all, who knows where serendipity will strike? It is important to note that only the very best ideas reach the final round, and lots are drawn only among the finalists.”
- Mavericks and lotteries
- The acceptability of using a lottery to allocate research funding: a survey of applicants
- Research funding randomly allocated? A survey of scientists’ views on peer review and lottery
- Policy Considerations for Random Allocation of Research Funds
- The Randomisation Project by RoRI is really interesting: it not only tracks these attempts but also enables more of them.
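The SNSF-style procedure described above (rank proposals by expected rank, draw a provisional funding line from the budget, and hold a lottery only among proposals whose credible intervals overlap that line) can be sketched in a few lines. This is a simplified illustration with made-up proposals and numbers, not the SNSF's actual Bayesian model:

```python
import random

# Hypothetical proposals: (name, expected_rank, ci_low, ci_high).
# Lower expected rank = better. All numbers are illustrative only.
proposals = [
    ("A", 1.2, 1.0, 2.0),
    ("B", 2.5, 1.5, 4.0),
    ("C", 3.8, 2.5, 5.5),
    ("D", 4.1, 3.0, 6.0),
    ("E", 5.9, 4.5, 7.0),
    ("F", 7.0, 6.0, 8.0),
]

def lottery_tiebreak_selection(proposals, budget, seed=None):
    """Fund proposals clearly above the provisional funding line;
    hold a lottery among those whose credible interval overlaps it."""
    rng = random.Random(seed)
    ranked = sorted(proposals, key=lambda p: p[1])
    # Provisional funding line: expected rank of the last proposal
    # that would fit within the budget.
    line = ranked[budget - 1][1]
    # Clearly funded: whole credible interval is better than the line.
    clearly_funded = [p for p in ranked if p[3] < line]
    # Lottery pool: credible interval overlaps the funding line.
    pool = [p for p in ranked if p[2] <= line <= p[3]]
    slots = budget - len(clearly_funded)
    winners = rng.sample(pool, slots)
    return clearly_funded + winners
```

With a budget of three, proposal A is funded outright (its whole interval sits above the line), while B, C, and D enter the lottery for the remaining two slots; E and F are rejected. The key design choice is that chance operates only where peer review cannot reliably distinguish quality.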
Please send any novel attempts at application selection. This seems like a ripe area for experimentation!
[^1]: FROs, PARPA ↩︎

[^2]: AI Grant, Emergent Ventures, Fast Grants, Helium Grants ↩︎

[^3]: Fund people not projects by John Ioannidis; Fund people, not projects by José Luis Ricón ↩︎