Enhancing discrete choice models with representation learning
In discrete choice modeling (DCM), model misspecification may lead to limited predictive power and biased parameter estimates. In this paper, we propose a new approach for estimating choice models in which we divide the systematic part of the utility specification into (i) a knowledge-driven part and (ii) a data-driven one, which learns a new representation from available explanatory variables. Our formulation increases the predictive power of standard DCMs without sacrificing their interpretability. We show the effectiveness of our formulation by augmenting the utility specification of the Multinomial Logit (MNL) and the Nested Logit (NL) models with a new nonlinear representation arising from a Neural Network (NN), leading to new choice models referred to as the Learning Multinomial Logit (L-MNL) and Learning Nested Logit (L-NL) models. Using multiple publicly available datasets based on revealed and stated preferences, we show that our models outperform the traditional ones, both in terms of predictive performance and accuracy in parameter estimation. The source code of all models is shared to promote open science.
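The hybrid utility described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the dimensions, the one-hidden-layer network, and all variable names (`x`, `q`, `beta`, `W1`, `W2`) are hypothetical, chosen only to show how an interpretable linear-in-parameters term and a learned NN representation add up into the systematic utility of a multinomial logit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: N observations, J alternatives,
# K interpretable attributes, M extra variables for the NN, H hidden units.
N, J, K, M, H = 5, 3, 2, 4, 8

x = rng.normal(size=(N, J, K))  # knowledge-driven attributes per alternative
q = rng.normal(size=(N, M))     # explanatory variables fed to the NN part

beta = rng.normal(size=K)        # interpretable taste parameters
W1 = rng.normal(size=(M, H))     # hypothetical hidden-layer weights
W2 = rng.normal(size=(H, J))     # maps hidden units to one term per alternative

# Systematic utility = knowledge-driven linear part + data-driven NN part.
knowledge = x @ beta                           # shape (N, J)
representation = np.maximum(q @ W1, 0) @ W2    # shape (N, J), ReLU hidden layer
V = knowledge + representation

# Multinomial logit choice probabilities: softmax over alternatives.
expV = np.exp(V - V.max(axis=1, keepdims=True))
P = expV / expV.sum(axis=1, keepdims=True)
```

In an actual estimation, `beta`, `W1`, and `W2` would be fitted jointly by maximizing the log-likelihood of observed choices; `beta` retains its econometric interpretation while the NN term absorbs what the linear specification misses.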