Purpose

The social sciences have witnessed a dramatic shift in the last 25 years. Rigorous research designs have drastically improved the credibility of research findings. However, less progress has been made in relating experimental findings to the views of the broader scientific community and policymakers. Individual researchers have begun to gather predictions about the effects they might find in their studies, as this information can be helpful in interpreting research results. Still, we could learn more from predictions if they were elicited in a coordinated, systematic way. See this Science article for a short summary and see our Example Research page for real uses of the SSPP.

This prediction platform will allow for the systematic collection and assessment of expert forecasts of the effects of untested social programs. In turn, this should help both policymakers and social scientists by improving the accuracy of forecasts, allowing for more effective decision-making, and improving experimental design and analysis.

Benefits of a prediction platform

Measuring and improving the accuracy of forecasts

Collecting and evaluating a body of priors across members of an expert community can expose whether and how predictions are systematically biased or inaccurate, and can allow us to explore adjustments or corrections that yield more reliable forecasts. It can similarly allow us to identify the circumstances that affect prediction accuracy, as well as consistently accurate forecasters ("superforecasters"), guiding improvements in the accuracy of future forecasts.
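
To illustrate, here is a minimal sketch (not SSPP code; all forecast values are hypothetical) of how a pool of ex ante forecasts might be scored against a realized effect, separating systematic bias from overall accuracy:

```python
# Minimal sketch (not SSPP code): scoring a pool of ex ante forecasts against a
# realized treatment effect. All numbers are hypothetical.
import statistics

forecasts = [0.12, 0.30, 0.25, 0.05, 0.18]  # predicted effect sizes from five experts
observed_effect = 0.10                      # estimated effect from the completed study

errors = [f - observed_effect for f in forecasts]
bias = statistics.mean(errors)                 # systematic over- or under-prediction
mae = statistics.mean(abs(e) for e in errors)  # typical distance from the realized effect

print(f"mean forecast: {statistics.mean(forecasts):.3f}")
print(f"bias: {bias:+.3f}, mean absolute error: {mae:.3f}")
```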

Improving the accuracy of forecasts and identifying consistently accurate forecasters will be particularly valuable to policymakers, who often must rely on expert predictions and scientific consensus when considering policy options, especially when investment decisions must be made in the absence of RCTs or other rigorous evidence. Many policy analysts already make and use predictions informally and could benefit from infrastructure that standardizes the sourcing and cataloguing of this knowledge, much as the American Economic Association has systematized registration through its RCT Registry. If forecasts and their accuracy are tracked over time, they can also be combined with newer or crowdsourced predictions, giving more weight to the predictions of proven superforecasters, to weed out interventions with a lower likelihood of success. There is clear demand from research organizations and funders for this kind of standardization.
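
As an illustration of this kind of weighting, the following sketch (hypothetical names and numbers, not SSPP functionality) pools forecasts by giving more weight to forecasters with smaller historical errors:

```python
# Minimal sketch (not SSPP code): pooling forecasts with weights based on each
# forecaster's historical accuracy. Names and numbers are illustrative.
forecasts = {"forecaster_a": 0.20, "forecaster_b": 0.35, "forecaster_c": 0.10}
past_mae = {"forecaster_a": 0.05, "forecaster_b": 0.20, "forecaster_c": 0.08}  # past mean absolute errors

# Inverse-error weights: smaller historical errors earn larger weights.
weights = {name: 1.0 / past_mae[name] for name in forecasts}
total_weight = sum(weights.values())
pooled = sum(weights[name] * forecasts[name] for name in forecasts) / total_weight

print(f"accuracy-weighted forecast: {pooled:.3f}")
```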

Mitigating publication bias

In cases where null results are known to have been unexpected, we can treat them not merely as null results, but as rejections of previously held priors. We propose to use ex ante priors, such as the mean or median forecast, as an alternative hypothesis against which to test experimental results. Importantly, forecasts will need to be collected ex ante, since it is all too easy to rationalize any result ex post (hindsight bias).
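
To make the idea concrete, a minimal sketch of such a test, assuming a simple two-sided z-test and purely illustrative numbers, asks both whether an estimate differs from zero and whether it differs from the mean ex ante forecast:

```python
# Minimal sketch (not SSPP code): testing an experimental estimate both against
# zero and against the mean ex ante forecast, using a two-sided z-test.
# All numbers are hypothetical.
import math

estimate, std_error = 0.02, 0.04   # estimated treatment effect and its standard error
mean_forecast = 0.15               # average prediction collected before the study

def two_sided_p(z):
    """Two-sided p-value for a standard normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

z_null = estimate / std_error                     # H0: effect = 0
z_prior = (estimate - mean_forecast) / std_error  # H0: effect = mean ex ante forecast

print(f"vs zero:     z = {z_null:.2f}, p = {two_sided_p(z_null):.3f}")
print(f"vs forecast: z = {z_prior:.2f}, p = {two_sided_p(z_prior):.4f}")
```

In this illustration, the estimate would read as an unremarkable null against zero, yet as a clear rejection of the experts' prior.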

Improving experimental design

The collection of priors, much like pre-analysis plans, focuses attention on the hypotheses to be tested and helps to highlight which research questions have the highest value. It can also make research more efficient, for example by informing the allocation of participants across treatment arms or by focusing data collection on those outcome variables for which additional data would have the highest value of information.
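
As one concrete example of such a design gain, the sketch below (hypothetical numbers, using standard Neyman allocation logic rather than any SSPP procedure) shows how anticipated outcome variances in each arm can guide how participants are split between treatment and control:

```python
# Minimal sketch (not SSPP code): using anticipated outcome variances to split a
# fixed sample across two arms. With unequal variances, allocating participants in
# proportion to each arm's standard deviation (Neyman allocation) minimizes the
# standard error of the estimated difference in means. Numbers are hypothetical.
import math

n_total = 1000
sd_treat, sd_control = 2.0, 1.0  # anticipated outcome standard deviations

def se_difference(n_treat):
    """Standard error of the treatment-control difference in means."""
    n_control = n_total - n_treat
    return math.sqrt(sd_treat**2 / n_treat + sd_control**2 / n_control)

equal_split = n_total // 2
neyman_split = round(n_total * sd_treat / (sd_treat + sd_control))

print(f"SE with equal split ({equal_split}/{n_total - equal_split}):   {se_difference(equal_split):.4f}")
print(f"SE with Neyman split ({neyman_split}/{n_total - neyman_split}): {se_difference(neyman_split):.4f}")
```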