A distributed algorithm comprises a collection of sequential-code entities spread over different computers connected by a network. The design of a distributed algorithm is shaped by the assumptions made about the computational environment. The model that makes no assumptions, and can therefore fit any real distributed system, is the asynchronous system model: there is no bound on message delays, on clock drift, or on the time a process needs to execute a step. The advantage of the asynchronous model is that any algorithm designed under its assumptions preserves its correctness (safety) in any other system model. It has been argued that asynchronous algorithms cannot be used for time-critical applications. One goal of this thesis is to clarify this point by showing that designing algorithms under the asynchronous system model does not lead to a loss of efficiency.

The performance of distributed algorithms may be roughly approximated by the number of messages exchanged during an execution or by the number of communication steps, but the most precise performance metric is the termination time, i.e., the time needed for the algorithm to finish. The termination time of a distributed algorithm cannot be measured precisely when the computation begins and ends on different processors, because their clocks are not synchronized. A further difficulty with direct measurement is that results are usually biased by unpredictable network and station loads, or by scheduling delays, unrelated to the algorithm under study. The approach taken in this thesis for evaluating the performance of distributed algorithms is therefore based on modelling and simulation. One of the main difficulties, and much of the interest, of performance modelling is that results do not always conform to intuition.
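The bias introduced by unsynchronized clocks can be illustrated with a small Python sketch (the ±20 ms offset bound and the numbers below are illustrative assumptions, not measurements from the thesis): stamping the start of a computation on one processor's clock and the end on another's spreads the measured termination time far around its true value.

```python
import random

def measured_duration(true_duration, offset_a, offset_b):
    """End-to-end time when the start event is stamped on processor A's
    clock and the end event on processor B's clock (offsets are the
    clocks' deviations from real time)."""
    start = 0.0 + offset_a           # start, as read on A's clock
    end = true_duration + offset_b   # end, as read on B's clock
    return end - start

random.seed(1)
true_t = 100.0  # ms, the real termination time (assumed)
# Unsynchronized clocks: each host's clock is off by up to +/-20 ms.
samples = [measured_duration(true_t,
                             random.uniform(-20, 20),
                             random.uniform(-20, 20))
           for _ in range(10_000)]
# The measured values scatter well below and well above the true 100 ms.
print(min(samples), max(samples))
```

The mean of the samples is still close to the true value, but any single cross-processor measurement can be off by the sum of both clock offsets, which is why direct measurement of termination time is unreliable.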
Once an algorithm reaches a certain complexity, it is difficult to predict the impact of the input parameters on its termination time. Analytical methods can be applied successfully when the interactions and interdependencies between the software layers of the algorithm under investigation are not too complex, or can be simplified adequately without falsifying the relevant system characteristics. Discrete event simulation covers precisely those cases that cannot be tackled by analytical methods; by simulation it is possible to evaluate the termination time of an algorithm with good accuracy.

The main purpose of this thesis is to provide a flexible framework for studying the performance of asynchronous distributed algorithms. Several important asynchronous algorithms are considered in the thesis in order to show the applicability and the interest of our framework. We consider algorithms related to fault tolerance. Providing fault tolerance in asynchronous distributed systems is closely related to solving agreement problems. All agreement problems are related to the consensus problem, and agreement protocols can be built using a generic consensus service. This is why we have focused on the behaviour and the performance of the consensus algorithm proposed by Chandra and Toueg. Simulation is used to study the behaviour of the algorithm under different input conditions (with and without crash failures) and to choose optimal values for the system parameters (timeouts). We show that there is an inevitable trade-off between (1) fast reaction to a process crash and (2) good performance in runs without crashes. Moreover, we provide a statistical guarantee that consensus finishes before a certain time with a given probability. This result can be used in soft real-time systems that tolerate infrequent timing failures.
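A toy illustration of this kind of statistical guarantee is sketched below. It is not the thesis's simulator: the round structure (a single coordinator collecting a majority of replies) and the exponential message-delay distribution with a 10 ms mean are assumptions chosen only to show how a Monte Carlo run yields an empirical quantile of the termination time.

```python
import random

def round_trip_times(n, mean_delay):
    """One message delay out plus one back, for each of the n-1 others."""
    return [random.expovariate(1 / mean_delay) +
            random.expovariate(1 / mean_delay)
            for _ in range(n - 1)]

def consensus_time(n, mean_delay):
    """The coordinator decides once a majority of replies are in;
    it counts itself, so it waits for the (majority-1)-th fastest reply."""
    majority = n // 2 + 1
    rtts = sorted(round_trip_times(n, mean_delay))
    return rtts[majority - 2]

def quantile(xs, q):
    xs = sorted(xs)
    return xs[int(q * (len(xs) - 1))]

random.seed(42)
runs = [consensus_time(n=5, mean_delay=10.0) for _ in range(20_000)]
# Statistical guarantee: in ~99% of runs, consensus finishes within t99 ms.
t99 = quantile(runs, 0.99)
print(f"P(T <= {t99:.1f} ms) is approximately 0.99")
```

A soft real-time system that tolerates infrequent timing failures can then budget t99 for the agreement step and accept that roughly one run in a hundred exceeds it.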
The second class of algorithms studied within our framework is replication protocols. These are high-level protocols built on top of the consensus algorithm. To achieve fault tolerance in systems subject to failures, services are replicated on different sites, and specific communication protocols are used to maintain consistency between the replicas. Two replication strategies are usually considered: primary-backup replication and active replication. We compare the performance of the two techniques in two scenarios: a failure-free run, and the worst case, the crash of one replica.
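The structural difference between the two strategies in a failure-free run can be sketched with a deliberately simplified latency model (my assumptions, not the thesis's model: exponential one-way delays with a 5 ms mean, negligible processing time, and the cost of the total-order multicast that active replication needs in practice is ignored).

```python
import random

def one_way():
    """Assumed one-way network delay: exponential, 5 ms mean."""
    return random.expovariate(1 / 5.0)

def primary_backup_latency(n_backups):
    """Client -> primary -> backups (in parallel) -> acks -> reply.
    The primary answers only after the slowest backup has acknowledged."""
    to_primary = one_way()
    backup_round = max(one_way() + one_way() for _ in range(n_backups))
    return to_primary + backup_round + one_way()

def active_replication_latency(n_replicas):
    """Client sends to all replicas; since deterministic replicas all
    compute the same reply, the first answer suffices (ordering ignored)."""
    return min(one_way() + one_way() for _ in range(n_replicas))

random.seed(7)
pb = sum(primary_backup_latency(2) for _ in range(10_000)) / 10_000
ar = sum(active_replication_latency(3) for _ in range(10_000)) / 10_000
print(f"primary-backup mean ~ {pb:.1f} ms, active mean ~ {ar:.1f} ms")
```

Under these assumptions the primary-backup path is longer in the failure-free run, while its behaviour after a crash depends on failure detection and recovery of the primary, which is exactly the kind of trade-off the simulation framework is meant to quantify.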