Abstract

The Distributed Constraint Optimization (DCOP) framework can be used to model a wide range of optimization problems that are inherently distributed. A distributed optimization problem can be viewed as a problem distributed over a set of agents, where each agent is responsible for finding a solution to its part of the problem. In the DCOP framework, the decisions the agents need to take are modeled using decision variables, and both the local problems and the inter-dependencies are modeled using constraints. In this thesis, we address two problems related to the DCOP framework. In the first part of this thesis, we look at coordination problems with complex, non-trivial preferences. In the DCOP literature, the notion of complex is used for problems where agents need to take multiple decisions and thus own multiple decision variables. To express the fact that agents' preferences are hard to obtain, we introduce the notion of a non-trivial local problem. The existing DCOP literature has largely ignored the origin of preferences, i.e., algorithms assume that the constraints are known a priori. In coordination problems with complex, non-trivial preferences, this assumption no longer holds. In order to investigate the behavior of existing DCOP algorithms on such coordination problems, we first introduce a new DCOP model for such coordination problems, in which we make an explicit distinction between the coordination part of the problem and the local preferences of the agents. We then introduce two new benchmarks with complex, non-trivial local subproblems, and evaluate several complete and non-complete DCOP algorithms. Our results show that the O-DPOP algorithm, which elicits preferences from agents in a best-first order, outperforms both the other complete approaches and the local search approaches.
We show that, although a best-first order cannot always be guaranteed when dealing with non-trivial local problems, O-DPOP still finds solutions that are better than, or on par with, those of local search algorithms. In the second part of this thesis, we introduce a new sampling-based approach to solving DCOPs. Existing DCOP algorithms are either complete but do not scale well, or scale well but are not guaranteed to find a solution. Our approach, inspired by the application of the UCB algorithm for the multi-armed bandit problem to trees, aims to provide a method that scales well. Furthermore, given certain assumptions on the problem structure, our method is guaranteed to find good solutions. We introduce the Distributed UCT (DUCT) algorithm, which uses bounds similar to those of the UCT algorithm, an adaptation of the UCB approach to trees. We introduce and evaluate four variations of the DUCT algorithm, DUCT-A, DUCT-B, DUCT-C, and DUCT-D, which differ in the bounds that they use. To evaluate our algorithms, we compare them with existing DCOP approaches on three different types of problems, both with and without hard constraints. Our results show that, compared with other DCOP algorithms, the DUCT-D variant consistently performs well on these problems with respect to both solution quality and runtime. Furthermore, it consistently sends less information than state-of-the-art DCOP algorithms, and is thus a suitable and practical algorithm for solving DCOPs.
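The UCB idea that the abstract refers to can be illustrated with a minimal sketch of the standard UCB1 rule for a multi-armed bandit (the single-node building block that UCT, and by extension DUCT, applies recursively in a tree). This is an illustrative example, not the thesis's DUCT algorithm: the function names and the Bernoulli-arm simulation are hypothetical, and rewards are assumed to lie in [0, 1].

```python
import math
import random

def ucb1_select(counts, means, total_pulls, c=math.sqrt(2)):
    """Pick the arm maximizing empirical mean + exploration bonus (UCB1)."""
    # Pull every arm once before the bound is well defined.
    for arm, n in enumerate(counts):
        if n == 0:
            return arm
    return max(
        range(len(counts)),
        key=lambda a: means[a] + c * math.sqrt(math.log(total_pulls) / counts[a]),
    )

def run_bandit(arm_probs, steps=2000, seed=0):
    """Simulate Bernoulli arms under UCB1; return pull counts per arm."""
    rng = random.Random(seed)
    counts = [0] * len(arm_probs)
    means = [0.0] * len(arm_probs)
    for t in range(steps):
        arm = ucb1_select(counts, means, t)
        reward = 1.0 if rng.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        # Incremental update of the empirical mean reward of this arm.
        means[arm] += (reward - means[arm]) / counts[arm]
    return counts

counts = run_bandit([0.2, 0.5, 0.8])
# The bonus term shrinks for frequently pulled arms, so exploration
# concentrates on the arm with the highest mean reward (index 2 here).
```

UCT turns this per-node rule into a tree search by treating each child of a node as an arm; DUCT, per the abstract, adapts such confidence bounds to the distributed DCOP setting, with its four variants differing in the exact bounds used.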
