research article

Deep neural networks for choice analysis: Enhancing behavioral regularity with gradient regularization

Feng, Siqi • Yao, Rui • Hess, Stephane
July 27, 2024
Transportation Research Part C: Emerging Technologies

Deep neural networks (DNNs) have been increasingly applied in travel demand modeling because of their automatic feature learning, high predictive performance, and economic interpretability. Nevertheless, DNNs frequently present behaviorally irregular patterns, significantly limiting their practical potential and theoretical validity in travel behavior modeling. This study proposes strong and weak behavioral regularities as novel metrics to evaluate the monotonicity of individual demand functions (known as the "law of demand"), and further designs a constrained optimization framework with six gradient regularizers to enhance DNNs' behavioral regularity. The empirical benefits of this framework are illustrated by applying these regularizers to travel survey data from Chicago and London, which enables us to examine the trade-off between predictive power and behavioral regularity for large versus small sample scenarios and in-domain versus out-of-domain generalizations. The results demonstrate that, unlike models with strong behavioral foundations such as the multinomial logit, the benchmark DNNs cannot guarantee behavioral regularity. However, after applying gradient regularization, we increase DNNs' behavioral regularity by around 6 percentage points while retaining their relatively high predictive power. Gradient regularization is more effective in the small sample scenario than in the large sample scenario, simultaneously improving behavioral regularity by about 20 percentage points and log-likelihood by around 1.7%. Compared with the in-domain generalization of DNNs, gradient regularization works more effectively in out-of-domain generalization: it drastically improves the behavioral regularity of poorly performing benchmark DNNs by around 65 percentage points, highlighting the criticality of behavioral regularization for improving model transferability and applications in forecasting. Moreover, the proposed optimization framework is applicable to other neural network-based choice models such as TasteNets. Future studies could use behavioral regularity as a metric along with log-likelihood, prediction accuracy, and F1 score when evaluating travel demand models, and investigate other methods to further enhance behavioral regularity when adopting complex machine learning models.
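The abstract does not spell out the six regularizers, but the core idea, penalizing gradients of the utility network that violate the law of demand, can be sketched briefly. The following is a minimal illustration assuming a PyTorch-style setup, not the paper's actual implementation; the names ChoiceNet, monotonicity_penalty, lambda_reg, and cost_idx are hypothetical and introduced here only for illustration.

    import torch
    import torch.nn as nn

    class ChoiceNet(nn.Module):
        """Toy utility network: maps alternative attributes to a utility score."""
        def __init__(self, n_features, hidden=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x):
            return self.net(x)

    def monotonicity_penalty(model, x, cost_idx):
        """Average positive gradient of utility w.r.t. the cost attribute.
        A positive dU/dcost means utility rises with cost, violating the
        law of demand, so only those gradients are penalized."""
        x = x.clone().requires_grad_(True)
        utility = model(x).sum()
        grads = torch.autograd.grad(utility, x, create_graph=True)[0]
        return torch.relu(grads[:, cost_idx]).mean()

    # One regularized training step: fit loss plus weighted gradient penalty.
    model = ChoiceNet(n_features=5)
    x = torch.randn(64, 5)                  # 64 observations, 5 attributes (dummy data)
    y = torch.randint(0, 2, (64,)).float()  # binary choice outcomes (dummy data)
    logits = model(x).squeeze(-1)
    fit_loss = nn.functional.binary_cross_entropy_with_logits(logits, y)
    lambda_reg = 1.0                        # hypothetical trade-off weight
    loss = fit_loss + lambda_reg * monotonicity_penalty(model, x, cost_idx=0)
    loss.backward()

Here the weight lambda_reg would govern the trade-off the abstract describes between predictive power and behavioral regularity; the ReLU-of-gradients penalty is just one plausible instantiation of a gradient regularizer, not necessarily one of the six forms proposed in the paper.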

Files

Name: 10.1016_j.trc.2024.104767.pdf
Type: Main Document
Version: Accepted version
Access type: openaccess
License Condition: CC BY
Size: 4.23 MB
Format: Adobe PDF
Checksum (MD5): 208602ff783efff2352a4d48102af6cc
