Minorization-Maximization (MM) Algorithm for Semiparametric Logit Models: Bottlenecks, Extensions, and Comparisons

53 Pages. Posted: 2 May 2018. Last revised: 23 Jul 2018.

Prateek Bansal

National University of Singapore (NUS)

Ricardo Daziano

Cornell University

Erick Guerra

University of Pennsylvania

Date Written: June 22, 2018

Abstract

Motivated by the promising performance of alternative estimation methods for mixed logit models, in this paper we derive, implement, and test expectation-maximization (EM) and minorization-maximization (MM) algorithms to estimate the semiparametric logit mixed logit (LML) and mixture-of-normals multinomial logit (MON-MNL) models. In particular, we show that the reported computational efficiency of the MM algorithm is lost for large choice sets. Because the logit link that represents the parameter space in LML is intrinsically treated as a large choice set, the MM algorithm for LML becomes infeasible in practice. We thus propose a faster MM algorithm that revisits a simple step-size correction. In a Monte Carlo study, we compare the maximum simulated likelihood estimator (MSLE) with the algorithms that we derive to estimate the LML and MON-MNL models. Whereas for LML estimation the alternative algorithms are computationally uncompetitive with MSLE, the faster-MM algorithm is competitive for MON-MNL estimation. Both algorithms, faster-MM and MSLE, recover parameters and standard errors with similar precision in both models. We further show that parallel computation reduces the estimation time of faster-MM by 45% to 80%. Even though faster-MM does not surpass MSLE with an analytical gradient (because MSLE leverages similar computational gains), parallel faster-MM is a competitive replacement for MSLE in MON-MNL estimation that obviates the computation of complex analytical gradients, which is an attractive feature for integration into flexible estimation software. We also compare the algorithms in an empirical application that estimates consumers' willingness to adopt electric motorcycles in Solo, Indonesia. The results of the empirical application are consistent with those of the Monte Carlo study.
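The step-size idea behind the faster-MM algorithm can be illustrated on a finite mixture of MNLs, where the plain MM (equivalently, EM) fixed-point update of the mixing weights is over-relaxed by a factor alpha > 1 and safeguarded by a likelihood check. The Python sketch below is a minimal illustration of that generic scheme under assumed settings (synthetic data, a fixed grid of support points, alpha = 2, and all function names are hypothetical); it is not the paper's exact derivation.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (hypothetical, for illustration only)
N, J, K, R = 200, 3, 2, 25          # individuals, alternatives, attributes, support points
X = rng.normal(size=(N, J, K))      # alternative attributes
beta_grid = rng.normal(size=(R, K)) # fixed grid of candidate taste vectors (LML-style support)
true_w = rng.dirichlet(np.ones(R))  # true mixing weights

def mnl_probs(X, beta):
    """Multinomial logit choice probabilities for one taste vector."""
    v = X @ beta                               # (N, J) systematic utilities
    v -= v.max(axis=1, keepdims=True)          # numerical stabilization
    ev = np.exp(v)
    return ev / ev.sum(axis=1, keepdims=True)

# Simulate one observed choice per individual from the mixture
draws = rng.choice(R, size=N, p=true_w)
choices = np.array([rng.choice(J, p=mnl_probs(X[n:n+1], beta_grid[r])[0])
                    for n, r in enumerate(draws)])

# P[n, r]: probability of individual n's observed choice under support point r
P = np.stack([mnl_probs(X, beta_grid[r])[np.arange(N), choices] for r in range(R)], axis=1)

def loglik(w):
    return np.log(P @ w).sum()

# MM/EM-style weight update with an over-relaxed step size
w = np.full(R, 1.0 / R)
alpha = 2.0                                    # step-size factor > 1 (assumed acceleration scheme)
for it in range(200):
    post = (P * w) / (P @ w)[:, None]          # posterior responsibilities, shape (N, R)
    w_mm = post.mean(axis=0)                   # plain MM/EM fixed-point update
    w_acc = np.clip(w + alpha * (w_mm - w), 1e-12, None)
    w_acc /= w_acc.sum()                       # project back onto the simplex
    # Safeguard: accept the longer step only if it does not decrease the likelihood
    w_new = w_acc if loglik(w_acc) >= loglik(w_mm) else w_mm
    if np.max(np.abs(w_new - w)) < 1e-8:
        break
    w = w_new

print(f"converged after {it + 1} iterations, log-likelihood {loglik(w):.2f}")

Because each accelerated step is checked against the plain MM step, the sketch keeps the monotone-ascent property of MM while typically needing fewer iterations; the actual correction used in the paper may differ.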

Keywords: discrete choice, semiparametrics, preference heterogeneity, expectation-maximization, minorization-maximization

JEL Classification: C13, C25, Q42

Suggested Citation

Bansal, Prateek and Daziano, Ricardo and Guerra, Erick, Minorization-Maximization (MM) Algorithm for Semiparametric Logit Models: Bottlenecks, Extensions, and Comparisons (June 22, 2018). Available at SSRN: https://ssrn.com/abstract=3156835 or http://dx.doi.org/10.2139/ssrn.3156835

Prateek Bansal (Contact Author)

National University of Singapore (NUS) ( email )

1E Kent Ridge Road
NUHS Tower Block Level 7
Singapore, 119228
Singapore

Ricardo Daziano

Cornell University ( email )

Ithaca, NY 14853
United States

Erick Guerra

University of Pennsylvania ( email )

Philadelphia, PA 19104
United States
