The storage and short-term memory capacities of recurrent neural networks of spiking neurons are investigated. We demonstrate that many superimposed input streams can be processed online, even though the stored information is spread throughout the network. We show that simple output structures are powerful enough to extract this diffuse information from the network state. The dimensional blow-up, which is crucial in kernel methods, is achieved efficiently by the dynamics of the network itself.
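The principle described above can be illustrated with a minimal sketch in the spirit of reservoir computing: a random recurrent network (here a rate-based surrogate for the spiking network, for simplicity) provides the dimensional blow-up, and a simple linear readout extracts a delayed copy of the input from the distributed network state. All parameters below (network size, spectral radius, delay) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not from the paper)
N = 200       # reservoir size: the "dimensional blow-up"
T = 2000      # length of the input stream
delay = 5     # short-term memory task: recall input from 5 steps back

# Random recurrent weights, rescaled so the spectral radius is below 1,
# which gives the network fading memory of past inputs.
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
w_in = rng.normal(size=N)

# Drive the network with a random scalar input stream and record states.
u = rng.uniform(-1, 1, size=T)
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])  # recurrent network dynamics
    states[t] = x

# A simple linear readout, trained by ridge regression, suffices to
# extract the diffuse information about past inputs from the state.
X, y = states[delay:], u[:-delay]
beta = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)

# Correlation between the readout and the delayed input measures recall.
pred = X @ beta
corr = np.corrcoef(pred, y)[0, 1]
print(f"recall correlation at delay {delay}: {corr:.2f}")
```

The key design point mirrors the abstract: no learning happens inside the recurrent network itself; only the linear output weights are trained, yet the delayed input can be recovered from the high-dimensional state.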