Theory and Evidence in International Conflict: A Response to De Marchi, Gelpi, and Grynaviski

American Political Science Review, Vol. 98, No. 2, pp. 379-389, May 2004

11 Pages Posted: 13 Jan 2008


Nathaniel Beck

New York University (NYU) - Wilf Family Department of Politics

Gary King

Harvard University

Langche Zeng

University of California, San Diego

Abstract

We thank Scott de Marchi, Christopher Gelpi, and Jeffrey Grynaviski (2003; hereinafter dGG) for their careful attention to our work (Beck, King, and Zeng, 2000; hereinafter BKZ) and for raising some important methodological issues that we agree deserve readers' attention. We are pleased that dGG's analyses are consistent with the theoretical conjecture about international conflict put forward in BKZ: "The causes of conflict, theorized to be important but often found to be small or ephemeral, are indeed tiny for the vast majority of dyads, but they are large, stable, and replicable whenever the ex ante probability of conflict is large" (BKZ, p. 21). We are also pleased that dGG agree with our main methodological point: out-of-sample forecasting performance should always be one of the standards used to judge studies of international conflict, and indeed of most other areas of political science.

However, dGG frequently err when they draw methodological conclusions. Their central claim involves the superiority of logit over neural network models for international conflict data, as judged by forecasting performance and other properties such as ease of use and interpretation ("neural networks hold few unambiguous advantages . . . and carry significant costs relative to logit"; dGG, p. 14). We show here that this claim, which would be regarded as stunning in any of the diverse fields in which both methods are more commonly used, is false. We also show that dGG's methodological errors, and the restrictive model they favor, cause them to miss and mischaracterize crucial patterns in the causes of international conflict.

We begin in the next section by summarizing the growing support for our conjecture about international conflict. The second section discusses the theoretical reasons why neural networks dominate logistic regression, correcting a number of methodological errors. The third section then demonstrates empirically, on the same data used by BKZ and dGG, that neural networks substantially outperform dGG's logit model. We show that neural networks improve on the forecasts from logit as much as logit improves on a model with no theoretical variables. We also show how dGG's logit analysis assumed, rather than estimated, the answer to the central question about the literature's most important finding: the effect of democracy on war. Since this and other substantive assumptions underlying their logit model are wrong, their substantive conclusion about the democratic peace is also wrong. The neural network models we used in BKZ not only avoid these difficulties, but they (or other available methods that avoid highly restrictive assumptions about the exact functional form) are just what is called for to study the observable implications of our conjecture.
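The methodological point at issue, that a logit which is linear in its inputs cannot capture interactive causes of conflict while a flexible model such as a one-hidden-layer neural network can, may be illustrated with a toy sketch. Everything below is an illustrative assumption, not the BKZ/dGG data or code: the synthetic "dyads," the XOR-style true probability, and the hand-constructed network weights are all invented for the demonstration. The network weights are set by hand (rather than trained) purely to keep the example deterministic.

```python
import numpy as np

# Illustrative sketch only: synthetic "dyad" data, NOT the BKZ/dGG data.
# True conflict probability depends on an XOR-style interaction of two
# binary covariates, which no logit linear in the inputs can capture.
rng = np.random.default_rng(0)
n = 2000
X = rng.integers(0, 2, size=(n, 2)).astype(float)
p_true = np.where(X[:, 0] != X[:, 1], 0.9, 0.1)   # risk high only when x1 != x2
y = rng.binomial(1, p_true).astype(float)
X_tr, y_tr, X_te, y_te = X[:1500], y[:1500], X[1500:], y[1500:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(p, y):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# Logit (linear in x1, x2), fit by gradient descent on the training half.
# On XOR-style data its best achievable prediction is ~0.5 for every dyad.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    g = sigmoid(X_tr @ w + b) - y_tr          # d(log loss)/d(logit)
    w -= 0.5 * X_tr.T @ g / len(y_tr)
    b -= 0.5 * g.mean()
logit_test = log_loss(sigmoid(X_te @ w + b), y_te)

# A one-hidden-layer network CAN represent the interaction.  Weights are
# hand-constructed (h1 ~ OR, h2 ~ AND, output ~ h1 AND NOT h2), so the
# example is deterministic rather than dependent on training.
W1 = np.array([[20.0, 20.0], [20.0, 20.0]])   # each hidden unit sees x1 + x2
b1 = np.array([-10.0, -30.0])                  # thresholds at 0.5 and 1.5
W2 = np.array([4.4, -4.4])                     # logit(0.9) - logit(0.1) = 4.4
b2 = -2.2                                      # logit(0.1) = -2.2
h = sigmoid(X_te @ W1 + b1)
nn_test = log_loss(sigmoid(h @ W2 + b2), y_te)

print(f"out-of-sample log loss  logit: {logit_test:.3f}  network: {nn_test:.3f}")
```

In this sketch the logit's out-of-sample log loss stays near ln 2 (about 0.69, i.e., no better than guessing 0.5), while the network's approaches the irreducible entropy of the data (about 0.33), mirroring the paper's argument that functional-form flexibility, not any exotic property of neural networks, drives the forecasting gains.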

Suggested Citation

Beck, Nathaniel and King, Gary and Zeng, Langche, Theory and Evidence in International Conflict: A Response to De Marchi, Gelpi, and Grynaviski. American Political Science Review, Vol. 98, No. 2, pp. 379-389, May 2004, Available at SSRN: https://ssrn.com/abstract=1082784

Nathaniel Beck (Contact Author)

New York University (NYU) - Wilf Family Department of Politics ( email )

715 Broadway
New York, NY 10003
United States

Gary King

Harvard University ( email )

1737 Cambridge St.
Institute for Quantitative Social Science
Cambridge, MA 02138
United States
617-500-7570 (Phone)

HOME PAGE: http://gking.harvard.edu

Langche Zeng

University of California, San Diego ( email )

9500 Gilman Drive
Code 0521
La Jolla, CA 92093-0521
United States

