Abstract

In a stochastic interdiction game, a proliferator aims to minimize the expected duration of a nuclear weapons development project, and an interdictor endeavors to maximize the project duration by delaying some of the project tasks. We formulate static and dynamic versions of the interdictor's decision problem, in which the interdiction plan is either precommitted or adapted to new information revealed over time, respectively. The static model gives rise to a stochastic program, whereas the dynamic model is formalized as a multiple optimal stopping problem in continuous time with decision-dependent information. Under a memoryless probabilistic model for the task durations, we prove that the static model reduces to a mixed-integer linear program, whereas the dynamic model reduces to a finite Markov decision process in discrete time that can be solved via efficient value iteration. We then generalize the dynamic model to account for uncertainty in the outcomes of the interdiction actions. We also discuss a crashing game where the proliferator can use limited resources to expedite tasks so as to counterbalance the interdictor's efforts. The resulting problem can be formulated as a robust Markov decision process.
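
As context for the reduction to a finite Markov decision process solvable by value iteration, the sketch below shows value iteration on a generic finite, discrete-time MDP. The transition tensor `P`, reward array `R`, and discount factor `gamma` are illustrative placeholders, not quantities taken from the paper; the paper's specific state space, actions, and rewards for the interdiction game are not reproduced here.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8, max_iter=10_000):
    """Value iteration for a generic finite MDP (illustrative sketch).

    P: array of shape (A, S, S), P[a, s, s'] = transition probability
    R: array of shape (A, S), expected immediate reward of action a in state s
    gamma: discount factor in (0, 1)
    Returns the optimal value function and a greedy policy.
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        # Bellman backup: Q[a, s] = R[a, s] + gamma * sum_{s'} P[a, s, s'] * V[s']
        Q = R + gamma * (P @ V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    # Greedy policy with respect to the converged value function
    Q = R + gamma * (P @ V)
    return V, Q.argmax(axis=0)

# Toy usage on a random 2-action, 4-state MDP (hypothetical data)
rng = np.random.default_rng(0)
P = rng.random((2, 4, 4))
P /= P.sum(axis=2, keepdims=True)  # normalize rows into distributions
R = rng.random((2, 4))
V, policy = value_iteration(P, R)
print(V, policy)
```
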
