Self-organized networks require some mechanism to ensure cooperation and fairness. A promising approach is the use of decentralized reputation systems. However, their vulnerability to liars has not yet been analyzed in detail. In this paper, we provide a first step towards the robustness analysis of a reputation system based on a deviation test: users accept second-hand information only if it does not differ too much from their own reputation values. We simplify the original system in order to obtain a one-dimensional formulation and show that it exhibits a phase transition. In the subcritical regime, the reputation system is robust; in the supercritical regime, lying has an impact. We obtain the critical values via a mean-field approach and verify the results by explicit computation. Thus, we provide conditions under which the deviation test makes the reputation system robust, as well as quantitative results on what goes wrong in the supercritical regime.
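The deviation test described above can be illustrated with a minimal sketch. The function name, the merge rule, and the parameter values (deviation threshold `d`, merge weight `w`) are illustrative assumptions, not the paper's exact model:

```python
def update_reputation(r, s, d=0.2, w=0.1):
    """Deviation test (illustrative sketch): accept a second-hand
    report s only if it lies within d of the current reputation
    value r; if accepted, merge it with weight w. The threshold d
    and weight w are assumed parameters for this example."""
    if abs(s - r) <= d:
        return (1 - w) * r + w * s  # report passes the test: merge
    return r  # report deviates too much: discard it
```

For example, with `r = 0.5`, a report `s = 0.6` passes the test and nudges the reputation value, while a far-off report `s = 0.9` is rejected and leaves it unchanged. Intuitively, the phase transition concerns whether coordinated liars can stay inside the acceptance window often enough to drag reputation values away from the truth.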