"Gathering, evaluating, and aggregating social scientific models"
Co-authored with Tara Slough et al.
On what basis can we claim a scholarly community understands a phenomenon? Social scientists generally propagate many rival explanations for the phenomena that they study. How best to discriminate between or aggregate these explanations raises myriad questions because we lack standard tools for synthesizing discrete explanations. In this paper, we assemble and test a set of approaches to the selection and aggregation of predictive statistical models representing different social scientific explanations for a single outcome, using original crowd-sourced predictive models of COVID-19 mortality. We evaluate social scientists’ ability to select or discriminate between these models using an expert forecast elicitation exercise. We provide a framework for aggregating discrete explanations, including the use of an ensemble algorithm (model stacking). Although the best models outperform pre-specified benchmark machine learning models, experts are generally unable to identify models’ predictive accuracy. Our findings suggest that algorithmic approaches to the aggregation of social scientific explanations can outperform human judgement or ad hoc processes.
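To give a feel for the stacking step, here is a minimal sketch of model stacking using scikit-learn's StackingRegressor on synthetic data. The base models, meta-learner, and data below are placeholders chosen for illustration; they are not the paper's crowd-sourced models or its COVID-19 mortality data.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split

# Hypothetical placeholder data: rows are units, columns are predictors,
# and y is the outcome to be predicted.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + rng.normal(size=200)

# Rival predictive models stand in as base estimators; names are illustrative.
base_models = [
    ("linear", LinearRegression()),
    ("forest", RandomForestRegressor(n_estimators=100, random_state=0)),
]

# The meta-learner (here a ridge regression) learns weights on the base
# models' out-of-fold predictions, which is the core idea of stacking.
stack = StackingRegressor(estimators=base_models, final_estimator=Ridge(), cv=5)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
stack.fit(X_train, y_train)
print("Stacked model R^2 on held-out data:", stack.score(X_test, y_test))
```

The key design point is that the meta-learner is fit on cross-validated predictions rather than in-sample fits, so the aggregate rewards models for out-of-sample accuracy instead of overfitting.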