Adaptive networks consist of a collection of agents with adaptation and learning abilities. The agents interact with each other on a local level and diffuse information across the network through their collaboration. In this work, we consider two types of agents: informed agents and uninformed agents. The former receive new data regularly and perform both consultation and in-network processing, while the latter do not collect data and participate only in the consultation tasks. We examine the performance of LMS diffusion strategies for distributed estimation over networks as a function of the proportion of informed agents and their distribution in space. The results reveal some interesting trade-offs between convergence rate and mean-square performance. In particular, among other results, it is shown that the mean-square performance of adaptive networks does not necessarily improve with a larger proportion of informed agents. Instead, it is established that if the set of informed agents is enlarged, the convergence rate of the network becomes faster, albeit at the expense of some deterioration in mean-square performance. The results further establish that uninformed agents play an important role in determining the steady-state performance of the network and that it is preferable to keep some of the highly noisy or highly connected agents uninformed. The arguments reveal an important interplay among three factors: the number and distribution of informed agents in the network, the convergence rate of the learning process, and the estimation accuracy in steady state. Expressions that quantify these relations are derived, and simulations are included to support the theoretical findings. We illustrate the application of the results to two network models, namely, the Erdős–Rényi and scale-free models.
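As a minimal illustration of the setting described above (a sketch, not the paper's exact algorithm or parameters), the following code runs an adapt-then-combine (ATC) diffusion LMS recursion over a hypothetical ring network in which only a subset of agents is informed. The topology, combination weights, step size, and noise level are all assumptions chosen for illustration: informed agents perform the LMS adaptation step on fresh data, while uninformed agents skip adaptation and participate only in the combination (consultation) step.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M = 10, 4                          # number of agents, parameter dimension
w_true = rng.standard_normal(M)       # unknown vector to be estimated

# Hypothetical ring topology: each agent's neighborhood is itself and its
# two adjacent agents, with uniform combination weights (left-stochastic A:
# each column sums to one, so entry A[l, k] weighs agent l's estimate at k).
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, k + 1):
        A[l % N, k] = 1.0
A /= A.sum(axis=0, keepdims=True)

informed = set(range(0, N, 2))        # assumption: every other agent is informed
mu = 0.01                             # step size (assumed)
sigma_v = 0.1                         # observation-noise standard deviation (assumed)

W = np.zeros((N, M))                  # current estimates, one row per agent
for _ in range(2000):
    Psi = W.copy()
    for k in informed:                # adapt step: informed agents only
        u = rng.standard_normal(M)                      # regression vector
        d = u @ w_true + sigma_v * rng.standard_normal()  # noisy measurement
        Psi[k] = W[k] + mu * (d - u @ W[k]) * u
    W = A.T @ Psi                     # combine step: every agent consults neighbors

# Network mean-square deviation (MSD) averaged over all agents.
msd = np.mean(np.sum((W - w_true) ** 2, axis=1))
print(f"network MSD after adaptation: {msd:.6f}")
```

Enlarging the `informed` set in this sketch speeds up convergence of `W` toward `w_true`, which is consistent with the trade-off discussed above: the convergence-rate benefit of more informed agents need not translate into better steady-state mean-square performance.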