Co-authored with Eugenia Nazrullaeva and Dylan Potts
Prior scholarship contends that control over patronage appointments confers an electoral advantage on incumbents. We study the introduction of state-level legislation that abolished patronage appointments to the civil services of the 50 US states between 1900 and 2016. Using recently developed statistical methods appropriate to the reform's staggered introduction, we show that legislators were much less likely to be reelected during the patronage era than after the introduction of civil service reform: reelection rates increase significantly and substantially following reform, and political careers also lengthen. We explore both selection and performance explanations for this surprising result.
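The "recently developed statistical methods" for staggered adoption are typically group-time estimators in the spirit of Callaway and Sant'Anna (2021). As a minimal sketch only, and not the paper's actual estimator, the snippet below computes one group-time average treatment effect by hand on a toy state-year panel; the column names (`state`, `year`, `reform_year`, `reelected`) and all numbers are hypothetical.

```python
# A hand-rolled group-time ATT for a staggered-adoption design.
# Column names and data are illustrative, not from the paper.
import pandas as pd


def group_time_att(panel: pd.DataFrame, group: int, year: int) -> float:
    """ATT(g, t): change in outcome for states first treated in `group`,
    from the pre-period g-1 to `year`, minus the same change among
    not-yet-treated states."""
    base, post = group - 1, year
    treated = panel[panel["reform_year"] == group]
    controls = panel[panel["reform_year"] > post]  # not yet treated by t

    def change(df: pd.DataFrame) -> float:
        wide = df.pivot(index="state", columns="year", values="reelected")
        return (wide[post] - wide[base]).mean()

    return change(treated) - change(controls)


# Toy example: effect for states adopting reform in 1939, measured in 1942.
panel = pd.DataFrame({
    "state": ["A", "A", "B", "B", "C", "C"],
    "year": [1938, 1942] * 3,
    "reform_year": [1939, 1939, 1939, 1939, 1950, 1950],
    "reelected": [0.55, 0.70, 0.50, 0.68, 0.52, 0.54],
})
print(group_time_att(panel, group=1939, year=1942))
```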
We conduct parallel surveys of legislators and citizens in three countries to study their tolerance for corruption. In Italy, Colombia, and Pakistan, legislators and citizens respond similarly to hypothetical scenarios involving trade-offs between, for example, probity and efficiency: both perceive corruption as undesirable but prevalent. These novel descriptive data further reveal that legislators generally hold accurate beliefs about public opinion on corruption and understand its relevance to voters. An informational treatment updates legislators' beliefs about public opinion, producing downward adjustments among legislators who initially overestimated citizens' anti-corruption preferences. We also present descriptive evidence that tolerance of corruption is predicted by politician attributes, most notably motivations for entering politics. Finally, the results reconfirm voters' partisan bias in evaluations of corruption. Overall, the results suggest that barriers to effective anti-corruption policies are unlikely to lie in legislators' lack of information or in their deliberate commitment to corrupt activities.
Updated: Aug 13, 2024
Co-authored with Tara Slough et al.
On what basis can we claim a scholarly community understands a phenomenon? Social scientists generally propagate many rival explanations for the phenomena that they study. How best to discriminate between or aggregate them introduces myriad questions because we lack standard tools that synthesize discrete explanations. In this paper, we assemble and test a set of approaches to the selection and aggregation of predictive statistical models representing different social scientific explanations for a single outcome: original crowd-sourced predictive models of COVID-19 mortality. We evaluate social scientists' ability to select or discriminate between these models using an expert forecast elicitation exercise. We provide a framework for aggregating discrete explanations, including use of an ensemble algorithm (model stacking). Although the best models outperform pre-specified benchmark machine learning models, experts are generally unable to identify which models predict accurately. Our findings suggest that algorithmic approaches for the aggregation of social scientific explanations can outperform human judgement or ad hoc processes.
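As a hedged illustration of the stacking approach mentioned above (not the paper's crowd-sourced pipeline), the sketch below fits scikit-learn's StackingRegressor with two generic base learners on synthetic data; the learners, covariates, and outcome are placeholders for the submitted models and COVID-19 mortality data.

```python
# Illustrative model stacking: two base learners stand in for
# crowd-sourced models; a ridge meta-learner weights their
# out-of-fold predictions. Data are synthetic.
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge, LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                                  # six covariates per unit
y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=500)   # outcome

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("linear", LinearRegression()),
        ("forest", RandomForestRegressor(n_estimators=200, random_state=0)),
    ],
    final_estimator=Ridge(),   # aggregates the base models' predictions
    cv=5,                      # cross-validated predictions avoid leakage
)
stack.fit(X_train, y_train)
print("held-out R^2:", stack.score(X_test, y_test))
```

The design choice stacking embodies is the one the abstract describes: rather than asking experts to pick a single best explanation, a meta-learner assigns data-driven weights to all candidate models based on their out-of-sample performance.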