We address a well-known but infrequently discussed problem in the quantitative study of international conflict: Despite immense data collections, prestigious journals, and sophisticated analyses, empirical findings in the literature on international conflict are often unsatisfying. Many statistical results change from article to article and specification to specification. Accurate forecasts are nonexistent. In this article we offer a conjecture about one source of this problem: The causes of conflict, theorized to be important but often found to be small or ephemeral, are indeed tiny for the vast majority of dyads, but they are large, stable, and replicable wherever the ex ante probability of conflict is large. This simple idea has an unexpectedly rich array of observable implications, all consistent with the literature. We directly test our conjecture by formulating a statistical model that includes these critical features. Our approach, a version of a "neural network" model, uncovers some interesting structural features of international conflict and, as one evaluative measure, forecasts substantially better than any previous effort. Moreover, this improvement comes at little cost, and it is easy to evaluate whether the model is a statistical improvement over the simpler models commonly used.
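To illustrate the kind of model the abstract describes, the sketch below trains a small one-hidden-layer neural network on synthetic dyad data in which conflict risk is concentrated in one region of the feature space, mimicking the conjecture that effects are large only where the ex ante probability of conflict is large. This is a hypothetical illustration, not the authors' actual specification: the features, data, and network size are all assumptions.

```python
# Minimal sketch (not the paper's model): a one-hidden-layer neural
# network classifier trained by full-batch gradient descent on
# synthetic dyad-year data. All feature meanings are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# 500 synthetic dyad-years with 3 covariates (e.g., contiguity,
# alliance, democracy scores -- purely illustrative).
X = rng.normal(size=(500, 3))
# Nonlinear ground truth: conflict occurs only where two conditions
# jointly hold, so effects are "large" only in that region.
y = ((X[:, 0] > 0.5) & (X[:, 1] > 0.5)).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of tanh units, logistic output.
H = 8
W1 = rng.normal(scale=0.5, size=(3, H))
b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=H)
b2 = 0.0

lr = 0.1
for _ in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)       # (500, H)
    p = sigmoid(h @ W2 + b2)       # (500,)
    # Backward pass: gradient of mean cross-entropy loss.
    d_out = (p - y) / len(y)       # (500,)
    gW2 = h.T @ d_out
    gb2 = d_out.sum()
    d_h = np.outer(d_out, W2) * (1 - h**2)
    gW1 = X.T @ d_h
    gb1 = d_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# In-sample classification accuracy at the 0.5 threshold.
acc = ((p > 0.5) == (y > 0.5)).mean()
print(round(acc, 2))
```

Because the hidden layer can represent the interaction between the two covariates, the network fits the conjectured structure, which a logit model with only main effects cannot capture without hand-specified interaction terms.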
Winner of the Gosnell Prize. See also our response to a published comment on this paper: Beck, Nathaniel; Gary King; and Langche Zeng. "Theory and Evidence in International Conflict: A Response to de Marchi, Gelpi, and Grynaviski," American Political Science Review, Vol. 98, No. 2 (May 2004): 379-389; and a related paper: King, Gary; and Langche Zeng. "Improving Forecasts of State Failure," World Politics, Vol. 53, No. 4 (July 2001): 623-658.