Abstract

Econometric models play an important role in transportation analysis, but estimating increasingly complex models becomes problematic. The associated log-likelihood function is highly nonlinear and nonconcave, and the complexity of the model requires (possibly nonlinear) constraints on the parameters in order to obtain meaningful values or to overcome model overspecification, which can lead to singularity in the objective function. These difficulties especially arise when estimating advanced discrete choice models. Indeed, recent advances have proposed new models in the Generalized Extreme Value (GEV) family as well as mixtures of GEV models; standard estimation methods can no longer be applied, and specific optimization algorithms must be designed for the estimation of advanced discrete choice models. More precisely, we need optimization algorithms able to deal with (possibly nonlinear) singularities and with nontrivial constraints. In this paper we investigate the case of unconstrained optimization when an affine singularity arises in the objective function, and we present two ways of obtaining a robust method in the presence of a singularity.

First, we propose to perform an eigenstructure analysis of the second derivatives matrix of the objective function, which allows us to characterize the subspace where the singularity lies. Once the singularity has been properly identified, we fix it by adding constraints in order to better guide the algorithm towards a local solution of the optimization problem. As the identification of the singularity is an iterative process taking place within the optimization algorithm, these constraints must be included "on-the-fly" during the optimization process. This is achieved using trust-region based algorithms: in our context we want to maximize the log-likelihood function while also satisfying the adaptive constraints associated with the singularity (see the first sketch below).

Second, we would like to deal with the singularity in the objective function without adding constraints to the problem. We have developed a generalized quasi-Newton method for unconstrained optimization which should be robust in the presence of a singularity. The main idea is to update the second derivatives matrix at each iteration using more information on the function and its derivatives than classical secant methods do. More precisely, we propose a least-squares approach that maintains a population of previous iterates and calibrates the Hessian approximation on the function and gradient values at those iterates (see the second sketch below).

In this paper, we present algorithms as well as algorithmic ideas to solve singular unconstrained optimization problems. We then report tests performed on classical singular problems with the current version of our algorithms, and we conclude with a discussion of future work.
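As an illustration of the first approach, the following minimal sketch shows how the singular subspace can be identified from the eigenstructure of the Hessian. It assumes NumPy and a symmetric second derivatives matrix H; the function name singular_subspace and the relative tolerance tol are our own illustrative choices, not values prescribed by the paper.

    import numpy as np

    def singular_subspace(H, tol=1e-8):
        # Eigenstructure analysis of the symmetric Hessian H: eigenvalues that
        # are negligible relative to the largest one flag directions of
        # (near-)zero curvature, i.e. the subspace where the singularity lies.
        eigval, eigvec = np.linalg.eigh(H)
        scale = max(np.abs(eigval).max(), 1.0)
        near_zero = np.abs(eigval) <= tol * scale
        # The returned columns Z span the identified singular subspace; adaptive
        # linear constraints such as Z.T @ (x - x_k) = 0 could then be imposed
        # within the trust-region iteration to steer the algorithm away from
        # the flat directions.
        return eigvec[:, near_zero]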
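The least-squares calibration underlying the second approach can be sketched in the same spirit. Given steps s_i = x_i - x_k from the current iterate to a population of stored iterates, gradient differences y_i = grad f(x_i) - grad f(x_k), and function-value differences df_i = f(x_i) - f(x_k), a symmetric approximation B is fitted to the conditions B s_i ≈ y_i and 0.5 s_i^T B s_i ≈ df_i - g_k^T s_i. The helper below, including its parameterization of B by upper-triangular entries, is a hypothetical illustration of this idea, not the authors' implementation.

    import numpy as np

    def calibrate_hessian(s_list, y_list, df_list, g_k):
        # Fit the upper-triangular entries of a symmetric matrix B, in the
        # least-squares sense, to gradient and function-value information
        # collected at previous iterates.
        n = len(g_k)
        idx = [(p, q) for p in range(n) for q in range(p, n)]
        rows, rhs = [], []
        for s, y, df in zip(s_list, y_list, df_list):
            # Secant-type conditions: one equation per component of B s = y.
            for j in range(n):
                row = np.zeros(len(idx))
                for k, (p, q) in enumerate(idx):
                    if p == j:
                        row[k] += s[q]
                    if q == j and p != q:
                        row[k] += s[p]
                rows.append(row)
                rhs.append(y[j])
            # Function-value condition: 0.5 * s^T B s = df - g_k^T s.
            row = np.zeros(len(idx))
            for k, (p, q) in enumerate(idx):
                row[k] = 0.5 * s[p] ** 2 if p == q else s[p] * s[q]
            rows.append(row)
            rhs.append(df - g_k @ s)
        b, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
        B = np.zeros((n, n))
        for k, (p, q) in enumerate(idx):
            B[p, q] = B[q, p] = b[k]
        return B

Because the fit draws on both gradient and function values at several iterates, rather than on a single secant pair, the intent is that the approximation retains meaningful curvature information along directions where classical secant updates degenerate.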
