Evaluating Latency of Distributed Algorithms Using Petri Nets

N. Sergent

The time it takes for a distributed algorithm to finish (the distributed algorithm latency) cannot be measured directly in asynchronous systems when the algorithm starts and ends on different processors, since the system has no global time. A simple method to evaluate this latency is to build and simulate a unified model that includes the network and the distributed algorithm as sub-models. In this paper we introduce a network model for UDP (User Datagram Protocol) that establishes a relationship between the number of messages exchanged in a distributed algorithm and the communication delays. As an application, we consider the two-phase commit algorithm. Numerical results derived from simulating the model are compared with data obtained from performance measurements.
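To illustrate the kind of evaluation the abstract describes, the following is a minimal simulation sketch, not the paper's Petri net model: it assumes a hypothetical UDP delay distribution (`udp_delay`, with made-up parameters) and estimates the end-to-end latency of one two-phase commit round, where each phase completes only when the slowest participant's reply arrives.

```python
import random

def two_phase_commit_latency(n_participants, delay, rng):
    """Simulate one two-phase commit round and return its latency.

    delay: callable returning one simulated one-way message delay (s).
    Each phase ends when the slowest round trip completes, so the
    round latency is the sum over the two phases of the per-phase
    maximum round-trip time.
    """
    # Phase 1: coordinator -> PREPARE -> participant -> VOTE -> coordinator
    phase1 = max(delay(rng) + delay(rng) for _ in range(n_participants))
    # Phase 2: coordinator -> COMMIT/ABORT -> participant -> ACK -> coordinator
    phase2 = max(delay(rng) + delay(rng) for _ in range(n_participants))
    return phase1 + phase2

def udp_delay(rng):
    # Assumed delay model: fixed transmission cost plus exponentially
    # distributed queueing jitter (parameters chosen for illustration).
    return 0.001 + rng.expovariate(1 / 0.0005)

rng = random.Random(42)
samples = [two_phase_commit_latency(3, udp_delay, rng) for _ in range(10_000)]
mean_latency = sum(samples) / len(samples)
print(f"estimated mean 2PC latency, 3 participants: {mean_latency * 1000:.2f} ms")
```

In the paper's approach, the delay function would instead come from the UDP network sub-model, which ties communication delays to the number of messages in flight; the sketch above treats delays as independent draws purely for brevity.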