Learning by demonstration is a natural and interactive way of learning that non-experts can use to teach behaviors to robots. In this paper we study two learning-by-demonstration strategies that give different answers to the questions of how to encode information and when to learn. The first strategy is based on artificial neural networks and focuses on reactive on-line learning. The second uses Gaussian mixture models built on statistical features extracted off-line from several training datasets. A simple navigation experiment is used to compare the developmental possibilities of each strategy. The two strategies turn out to be complementary, and we highlight that each can be related to a specific memory structure in the brain.