On the use of training sequences for channel estimation
Suppose Q is a family of discrete memoryless channels. An unknown member of Q is to be used for communication, with perfect causal output feedback available. We study a scenario in which communication is carried out by first probing the channel with a training sequence, then coding according to the resulting channel estimate. We derive an upper bound on the maximum error exponent achievable by any such coding scheme. For the families of Binary Symmetric and Z channels, this bound is much lower than Burnashev's exponent; in the case of Binary Symmetric Channels, in particular, the bound has a slope that vanishes at capacity. This is to be contrasted with our previous result, which demonstrates the existence of coding schemes that achieve Burnashev's exponent (whose slope at capacity is nonzero) even though the channel is revealed to neither the transmitter nor the receiver. Hence, the present result suggests that, in terms of error exponent, a good universal feedback scheme entangles channel estimation with information delivery rather than separating them.