Purpose

The social sciences have witnessed a dramatic shift in the last 25 years: rigorous research designs have drastically improved the credibility of research findings. Less progress has been made, however, in relating experimental findings to the views of the broader scientific community and policy makers. Individual researchers have begun to gather predictions about the effects they might find in their studies, since this information can help in interpreting research results. Still, we could learn more from predictions if they were elicited in a coordinated, systematic way.

This prediction platform will allow for the systematic collection and assessment of expert forecasts of the effects of untested social programs. In turn, this should help both policy makers and social scientists: more accurate forecasts support more effective decision-making as well as better experimental design and analysis.

Benefits of a prediction platform

Measuring and improving the accuracy of forecasts

Collecting and evaluating a body of priors across members of an expert community can expose whether and how predictions are systematically biased or inaccurate, and can allow us to explore adjustments or corrections that yield more reliable forecasts. It can likewise allow us to identify the circumstances under which predictions are more or less accurate, and to identify consistently accurate forecasters (“superforecasters”), guiding efforts to improve the accuracy of future forecasts.

Improving the accuracy of forecasts and identifying consistently accurate forecasters will be particularly valuable to policy makers, who often must rely on expert predictions and scientific consensus when considering policy options, especially when investment decisions must be made in the absence of RCTs or other rigorous evidence. Many policy analysts already make and use predictions informally and could benefit from infrastructure that standardizes the sourcing and cataloguing of this knowledge, much as the American Economic Association has systematized trial registration with its RCT Registry. If forecasts and their accuracy are tracked over time, they can also be combined with newer or crowdsourced predictions, giving more weight to those of superforecasters, to weed out interventions with a lower likelihood of success. There is clear demand from research organizations and funders for this kind of standardization.
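
As a rough illustration (a minimal sketch of one possible weighting scheme, not the platform's actual aggregation method), the Python snippet below pools hypothetical forecasts by weighting each forecaster in inverse proportion to their historical mean absolute error, so consistently accurate forecasters count for more:

    # A minimal sketch of accuracy-weighted forecast pooling.
    # All forecast values and error figures are hypothetical placeholders.
    forecasts = [0.10, 0.25, 0.05]   # predicted effect sizes from three forecasters
    past_mae  = [0.02, 0.10, 0.04]   # each forecaster's historical mean absolute error

    # Weight forecasts by inverse historical error: smaller past errors
    # (i.e., superforecasters) receive larger weights.
    weights = [1.0 / e for e in past_mae]
    pooled = sum(w * f for w, f in zip(weights, forecasts)) / sum(weights)
    print(f"pooled forecast: {pooled:.3f}")  # pulled toward the accurate forecasters' views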

Mitigating publication bias

In cases where null results are known to have been unexpected, we can treat them not merely as null results, but as evidence rejecting previously held priors. We propose to use ex ante priors, such as the mean or median forecast, as an alternative hypothesis against which to test experimental results. Importantly, forecasts must be collected ex ante, since it is all too easy to rationalize any result ex post (hindsight bias).
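
For concreteness, here is a minimal sketch in Python, using entirely hypothetical numbers, of testing an experimental estimate against the mean ex ante forecast rather than against zero; a result that looks “null” relative to zero can nonetheless decisively reject the community's prior:

    # A minimal sketch of testing an estimated effect against an ex ante
    # prior rather than against zero. All numbers are hypothetical.
    from scipy import stats

    estimated_effect = 0.02  # hypothetical point estimate from the experiment
    standard_error   = 0.05  # hypothetical standard error of that estimate
    mean_forecast    = 0.15  # hypothetical mean of the ex ante expert forecasts

    # Conventional test: is the effect distinguishable from zero?
    z_zero = estimated_effect / standard_error
    p_zero = 2 * stats.norm.sf(abs(z_zero))

    # Prior-based test: is the effect distinguishable from the forecast?
    z_prior = (estimated_effect - mean_forecast) / standard_error
    p_prior = 2 * stats.norm.sf(abs(z_prior))

    print(f"vs. zero:     z = {z_zero:5.2f}, p = {p_zero:.3f}")   # a "null" result
    print(f"vs. forecast: z = {z_prior:5.2f}, p = {p_prior:.3f}")  # rejects the prior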

Improving experimental design

The collection of priors, much like pre-analysis plans, focuses attention on the hypotheses to be tested and helps to highlight which research questions have the highest value. It can also improve the efficiency of research by allowing a better allocation of participants across treatment arms, or by focusing attention on those outcome variables for which more data would have the highest value of information.
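
As one heuristic illustration (a sketch of the idea, not a prescribed design rule), the Python snippet below allocates a hypothetical sample across treatment arms in proportion to the spread of elicited priors, so the arms experts disagree about most receive the most observations:

    # A minimal sketch of prior-informed allocation across treatment arms.
    # Standard deviations of elicited forecasts are hypothetical placeholders.
    prior_sd = {"arm_A": 0.05, "arm_B": 0.20, "arm_C": 0.10}
    total_n = 1000  # hypothetical total number of participants

    # Allocate in proportion to forecast dispersion: where experts disagree
    # more, additional data carries a higher value of information.
    total_sd = sum(prior_sd.values())
    allocation = {arm: round(total_n * sd / total_sd) for arm, sd in prior_sd.items()}
    print(allocation)  # {'arm_A': 143, 'arm_B': 571, 'arm_C': 286}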

Call for Projects – Soft Launch

There is a tension between collecting predictions for more studies and risking survey fatigue on the part of expert forecasters. One of the biggest challenges in building a predictions platform for social science research is that it might impose a large burden on forecasters, much as providing referee reports does as a public good. An advantage of a centralized predictions platform is that we can ensure no individual receives an overwhelming number of requests for predictions. To further mitigate this risk, our “soft launch” of the platform will focus on gathering predictions for a few key initial studies. This will enable us to tweak parameters such as the incentives offered and the frequency of prediction requests, and to adapt going forward.

To reduce the risk of forecasts being biased by study results, it is best if data have yet to be collected or if results are not yet available or accessible. For example, projects that have received “in-principle acceptance” in a Registered Reports track but have not yet collected data might be particularly suitable. Forecasts may also be particularly useful for large, flagship projects that are unlikely to be replicated.

Initially, we will focus mainly on topics in economics, with the eventual goal of opening the platform to other social sciences.

The benefits to researchers include:

  • Freedom to describe their project and determine what questions are asked of forecasters.
  • Forecasts elicited from a sample of experts, including (i) disciplinary experts (e.g., pre-screened PhD students), (ii) selected experts curated for particular topics or methods (e.g., senior academics and professionals), and (iii) members of the general public who sign up on the platform.
  • Funding for forecasting incentives and research assistance.
  • Early feedback to help determine which treatments or outcomes to prioritize.
  • Assistance with designing their elicitation surveys.
  • Increased visibility of the research project.

To be notified when the request for proposals is launched, please enter your name and e-mail address here: