
"Gathering, evaluating, and aggregating social scientific models"

Co-authored with Tara Slough et al.


On what basis can we claim that a scholarly community understands a phenomenon? Social scientists generally advance many rival explanations for what they study, and discriminating between or aggregating these explanations raises myriad questions because we lack standard tools for synthesizing discrete explanations. In this paper, we assemble and test a set of approaches to the selection and aggregation of predictive statistical models that represent different social scientific explanations for a single outcome, drawing on original crowd-sourced predictive models of COVID-19 mortality. We evaluate social scientists' ability to select or discriminate between these models using an expert forecast elicitation exercise. We also provide a framework for aggregating discrete explanations, including via an ensemble algorithm (model stacking). Although the best models outperform benchmark machine learning models, experts are generally unable to identify the models' predictive accuracy. These findings support the use of algorithmic approaches, rather than human judgement or ad hoc processes, for aggregating social scientific explanations.
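
To give a flavor of the aggregation approach, here is a minimal sketch of model stacking using scikit-learn. The base estimators and simulated data are hypothetical stand-ins for illustration only, not the paper's actual crowd-sourced models or data.

```python
# A minimal sketch of model stacking, assuming scikit-learn.
# The base models and data below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))  # stand-in covariates
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + rng.normal(size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each base estimator stands in for one discrete explanation,
# expressed as a predictive model of the outcome.
base_models = [
    ("linear", LinearRegression()),
    ("ridge", Ridge(alpha=1.0)),
    ("tree", DecisionTreeRegressor(max_depth=4)),
]

# The meta-learner weights the base models by their out-of-fold
# predictive performance rather than by expert judgement.
stack = StackingRegressor(
    estimators=base_models,
    final_estimator=LinearRegression(),
    cv=5,
)
stack.fit(X_train, y_train)
print("held-out R^2:", stack.score(X_test, y_test))
```

The key design feature of stacking is that each candidate model's weight is learned from cross-validated predictions, so the ensemble rewards out-of-sample accuracy instead of relying on ad hoc human selection.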

Download PDF (4.87 MB)
