Caching Gaussians: Minimizing Total Correlation on the Gray–Wyner Network
We study a caching problem that resembles a lossy Gray–Wyner network: A source produces vector samples from a Gaussian distribution, but the user is interested in the samples of only one component. The encoder first sends a cache message without any knowledge of the user’s preference. Upon learning her request, a second message is provided in the update phase so as to attain the desired fidelity on that component. The cache is efficient if it exploits as much of the correlation in the source as possible, which connects to the notions of Wyner’s common information (for high cache rates) and Watanabe’s total correlation (for low cache rates). For the former, we extend known results for two Gaussian variables to the multivariate case by showing that computing the common information reduces to a simple linear program, which can be solved analytically for circulant correlation matrices. Total correlation in a Gaussian setting is less well studied. We show that for bivariate Gaussians with Gaussian auxiliaries it is captured by the dominant eigenvalue of the correlation matrix. In the multivariate case the problem becomes a more difficult optimization over a non-convex domain, but we conjecture that circulant matrices may again be analytically solvable.
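Two of the quantities above have convenient numerical handles, sketched below under stated assumptions: the Wyner common information of a bivariate Gaussian with correlation coefficient ρ has the known closed form ½ log((1+|ρ|)/(1−|ρ|)) nats, and the eigenvalues of a circulant correlation matrix (including the dominant one) are given by the DFT of its first row, which is what makes the circulant case amenable to analytical treatment. The helper names here are illustrative, not from the paper.

```python
import numpy as np

def wyner_common_information_bivariate(rho):
    """Known closed form for a bivariate Gaussian with correlation rho:
    C(X1; X2) = 0.5 * log((1 + |rho|) / (1 - |rho|)), in nats."""
    r = abs(rho)
    return 0.5 * np.log((1 + r) / (1 - r))

def circulant(first_row):
    """Build a circulant matrix from its first row (illustrative helper)."""
    n = len(first_row)
    return np.array([[first_row[(j - i) % n] for j in range(n)]
                     for i in range(n)])

# A symmetric circulant correlation matrix (first row chosen for illustration).
row = [1.0, 0.5, 0.2, 0.5]
R = circulant(row)

# Circulant eigenvalues = DFT of the first row; for a symmetric circulant
# matrix these are real, and the dominant one is the largest DFT coefficient.
eig_dft = np.real(np.fft.fft(row))
eig_num = np.linalg.eigvalsh(R)  # numerical check against the DFT values

print(wyner_common_information_bivariate(0.5))  # 0.5 * log(3)
print(np.max(eig_dft), np.max(eig_num))         # dominant eigenvalue twice
```

The agreement between `eig_dft` and `eig_num` is the elementary fact behind the circulant conjecture: the spectrum, and hence any spectral functional such as the dominant eigenvalue, is available in closed form.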