We consider unicast equation-based rate control, where a source estimates the loss event ratio p and, primarily at loss events, adjusts its sending rate to f(p). The function f is assumed to represent the loss-throughput relation that TCP would experience. When no loss occurs, the rate may also be increased by some additional mechanism. We assume that the loss event interval estimator is unbiased. If the loss process is deterministic, the control is TCP-friendly in the long run, i.e., the average throughput does not exceed that of TCP. If, in contrast, losses are random, it is not a priori clear whether this holds, due to the non-linearity of f and a phenomenon similar to Feller's paradox. Our goal is to identify the key factors that determine whether, and to what extent, the control is TCP-friendly in the long run. Since TCP and our source may experience different loss event intervals, we distinguish between TCP-friendly and conservative (throughput does not exceed f(p)). We give a representation of the long-term throughput and derive that conservativeness is primarily influenced by various convexity properties of f, the variability of loss event intervals, and the correlation structure of the loss process. In many cases these factors lead to conservativeness, but we show reasonable lab experiments where the control is clearly non-conservative. However, our analysis also suggests that our source should experience a higher loss event ratio than TCP, which makes non-TCP-friendliness less likely. Our findings provide guidelines that help understand when an equation-based control is indeed TCP-friendly in the long run, and in some cases, excessively so. The effect of round-trip time and its variation is not included in this study.
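The control loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the simplified PFTK/TFRC formula for f, estimates p as the inverse of the mean loss event interval (a real estimator such as TFRC's uses weighted interval averaging), and all numeric parameters (RTT, segment size, retransmit timeout) are illustrative placeholders.

```python
import math

def f(p, rtt=0.1, s=1460, t_rto=0.4):
    """Simplified PFTK response function: expected TCP throughput
    (bytes/s) for loss event ratio p, round-trip time rtt (s),
    segment size s (bytes), and retransmit timeout t_rto (s).
    Parameter values here are illustrative, not from the paper."""
    if p <= 0:
        return float("inf")  # no observed loss: formula is unbounded
    denom = (rtt * math.sqrt(2.0 * p / 3.0)
             + t_rto * 3.0 * math.sqrt(3.0 * p / 8.0) * p * (1.0 + 32.0 * p ** 2))
    return s / denom

def rate_at_loss_event(loss_intervals):
    """At a loss event, estimate p as the inverse of the mean loss
    event interval (in packets) and set the sending rate to f(p).
    An unweighted mean is used purely for illustration."""
    p_hat = len(loss_intervals) / sum(loss_intervals)
    return f(p_hat)
```

Because f is non-linear (convex) in p, feeding it a randomly varying interval estimate need not reproduce TCP's average throughput, which is exactly the effect the analysis quantifies.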