Self-organized networks require mechanisms to ensure cooperation and fairness in the face of individual utility-maximizing users and potential malicious attacks; otherwise, network performance can deteriorate seriously. One promising approach is decentralized reputation systems. However, these are vulnerable to users with an interest in passing on false information, and robustness against such liars has not yet been analyzed in detail. In this paper, we take a first step toward the robustness analysis of a reputation system based on the deviation test introduced in . Under this test, users accept second-hand information only if it does not differ too much from their own reputation values. We show that the system exhibits a phase transition: in the subcritical regime, the reputation system is robust and lying has no effect; in the supercritical regime, lying does have an impact. We obtain the exact critical values via a mean-field approach and then verify the mean-field results by explicit computation. Thus we can give conditions under which the deviation test makes the reputation system robust, and we obtain quantitative results on what goes wrong in the supercritical regime.
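The acceptance rule described above can be sketched as follows. This is a minimal illustration under assumed conventions: reputation values in [0, 1], an absolute-difference test, and a weighted-average update. The names `deviation_threshold` and `weight` are hypothetical and not taken from the paper.

```python
def deviation_test(own_reputation: float, reported: float,
                   deviation_threshold: float) -> bool:
    """Accept second-hand information only if it does not deviate
    too much from the node's own reputation value."""
    return abs(reported - own_reputation) <= deviation_threshold


def merge_report(own_reputation: float, reported: float,
                 deviation_threshold: float, weight: float = 0.1) -> float:
    """Incorporate an accepted report via a small weighted update;
    ignore reports that fail the deviation test."""
    if deviation_test(own_reputation, reported, deviation_threshold):
        return (1 - weight) * own_reputation + weight * reported
    return own_reputation


# A liar's extreme report is filtered out; a nearby honest report is merged.
after_lie = merge_report(0.8, 0.1, deviation_threshold=0.2)     # unchanged: 0.8
after_honest = merge_report(0.8, 0.7, deviation_threshold=0.2)  # nudged toward 0.7
```

The threshold plays the role of the control parameter: intuitively, a small threshold corresponds to the robust (subcritical) regime where lies are rejected, while a large threshold lets false reports shift reputation values.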