Large scale selective event dissemination

Event-based systems usually denote distributed infrastructures used to asynchronously disseminate information among geographically distant computing units. In these systems, information sources (a.k.a. publishers) produce events and the system delivers these events to destinations (a.k.a. subscribers). Subscribers are "decoupled" from publishers in the sense that they do not need to know each other or even be "up" at the same time. Subscribers can specify the kind of information they would like to receive, so as not to be flooded with events of all kinds. Depending on the application, different requirements are imposed on the event-based system. On the one hand, some applications require very strong reliability guarantees, while others can tolerate a certain percentage of event losses. On the other hand, while some applications can tolerate large delivery latency, others require latency to be small. In certain cases, it is adequate that subscribers receive a particular "type" of event irrespective of its actual content. In others, subscribers would like to receive a given type of event with specific content. The event filtering mechanisms used in these applications need to be decentralized to favor the scalability of the applications. Gossip-based broadcast (also called epidemic) algorithms are particularly appropriate for large-scale event dissemination. In these algorithms, processes forward the events they receive to other processes they know of (most of the time in a randomized manner); this continues until all, or at least a significant percentage, of the processes receive the events. These broadcast algorithms can be tuned to deliver events according to their type, by mapping a broadcast group to each type. However, more elaborate schemes are necessary to deliver events according to their content. This thesis investigates a number of algorithms that can be used in the context of event dissemination.
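The push-style gossip dissemination described above can be sketched as a small simulation. This is an illustrative sketch, not the thesis's algorithm: `gossip_rounds`, the fixed fanout, and the single-publisher setup are all assumptions chosen for brevity.

```python
import random

def gossip_rounds(n_processes, fanout, seed=0):
    """Simulate push gossip: each round, every informed process forwards
    the event to `fanout` peers chosen uniformly at random (possibly
    including already-informed ones). Returns the number of rounds
    until every process holds the event."""
    rng = random.Random(seed)
    informed = {0}                      # process 0 publishes the event
    rounds = 0
    while len(informed) < n_processes:
        targets = set()
        for _ in informed:
            targets.update(rng.sample(range(n_processes), fanout))
        informed |= targets
        rounds += 1
    return rounds

print(gossip_rounds(1000, 5))
```

With a fanout on the order of log(n), such randomized forwarding reaches essentially all processes in a logarithmic number of rounds with high probability, which is what makes gossip attractive at large scale.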
First, we examine two mechanisms for making gossip-based algorithms more reliable for event dissemination. More precisely, we propose a garbage collection technique that can be used to purge events according to their level of dissemination among processes. Furthermore, we introduce an adaptation mechanism to increase the reliability of gossip-based broadcast algorithms: it controls the number of events injected into the system and adjusts the broadcast rate according to the buffer resources available at processes. Then we discuss how clustering processes according to their network locality can be useful in delivering events: such clustering can help reduce congestion in network links and also helps disseminate time-sensitive events. Finally, we propose an algorithm to achieve fine-grained event selection in a completely decentralized environment. To this end, processes use gossip messages to form a structured overlay network and adopt a dictionary-based subscription model when joining the overlay.
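The garbage collection idea above can be illustrated with a minimal buffer that purges events once they have been gossiped long enough. This is a hedged sketch, not the thesis's scheme: the class name `GossipBuffer`, the gossip-age counter used as a proxy for "level of dissemination", and the `max_age` threshold are all illustrative assumptions.

```python
class GossipBuffer:
    """Fixed-capacity event buffer. The age of an event (rounds it has
    been gossiped) serves here as a crude estimate of how widely it has
    been disseminated; events past `max_age` are purged to free buffer
    space for newer ones. Illustrative only."""

    def __init__(self, capacity, max_age):
        self.capacity = capacity
        self.max_age = max_age
        self.events = {}            # event_id -> age in gossip rounds

    def insert(self, event_id):
        # Drop-tail policy when the buffer is full (an assumption).
        if event_id not in self.events and len(self.events) < self.capacity:
            self.events[event_id] = 0

    def end_of_round(self):
        """Age every buffered event; purge those gossiped for so many
        rounds that most processes have likely received them."""
        for eid in list(self.events):
            self.events[eid] += 1
            if self.events[eid] >= self.max_age:
                del self.events[eid]

buf = GossipBuffer(capacity=100, max_age=3)
buf.insert("e1")
buf.end_of_round(); buf.end_of_round(); buf.end_of_round()
print("e1" in buf.events)   # → False: purged after max_age rounds
```

Purging by dissemination level rather than by arrival order lets a process keep scarce buffer space for events that still need relaying, which is the reliability lever the paragraph describes.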

    Thèse École polytechnique fédérale de Lausanne EPFL, n° 3264 (2005)
    Section des systèmes de communication
    Faculté informatique et communications
    Institut d'informatique fondamentale
    Laboratoire de programmation distribuée
    Jury: Anne-Marie Kermarrec, Claude Petitpierre, Emre Telatar, Maarten Van Steen

    Public defense: 2005-06-23



