This is the second episode of the Bayesian saga that started with the tutorial on Bayesian probability. Its aim is to show, in very informal terms, how supervised learning can be interpreted from the Bayesian viewpoint, with the focus on supervised learning of neural networks. The traditional approach to supervised neural network training is compared with the Bayesian perspective: a probabilistic interpretation is given to the traditional error function and its minimization, to the phenomenon of overfitting, and to the traditional countermeasures against it. Finally, it is shown how the Bayesian approach solves the problem of assessing the performance of different network structures.
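The probabilistic interpretation of the traditional error function mentioned above can be sketched numerically. The snippet below (an illustrative assumption, not the article's own code) uses a linear model in place of a neural network and shows that the usual sum-of-squares error with a weight-decay penalty equals, up to weight-independent constants, the negative log posterior obtained from a Gaussian likelihood with precision `beta` and a Gaussian prior on the weights with precision `alpha`; the hyperparameter values and data are made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=20)

alpha, beta = 0.5, 4.0  # assumed prior precision and noise precision

def regularized_error(w):
    # traditional training objective: data error plus weight-decay term
    data_error = 0.5 * np.sum((y - X @ w) ** 2)
    weight_decay = 0.5 * np.sum(w ** 2)
    return beta * data_error + alpha * weight_decay

def neg_log_posterior(w):
    # -log p(w|D) under a Gaussian likelihood (precision beta) and a
    # zero-mean Gaussian prior (precision alpha), dropping the
    # normalization constants, which do not depend on w
    log_lik = -beta * 0.5 * np.sum((y - X @ w) ** 2)
    log_prior = -alpha * 0.5 * np.sum(w ** 2)
    return -(log_lik + log_prior)

w = rng.normal(size=3)
# the two objectives coincide (up to an additive constant),
# so minimizing the regularized error is MAP estimation
assert np.isclose(regularized_error(w), neg_log_posterior(w))
```

Because the two objectives differ only by a constant, they share the same minimizer: the traditionally trained network with weight decay is the maximum a posteriori (MAP) solution under these Gaussian assumptions.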