We propose a simple information-theoretic clustering approach based on maximizing the mutual information $I(\sfx,y)$ between the unknown cluster labels $y$ and the training patterns $\sfx$ with respect to the parameters of specifically constrained encoding distributions. The constraints are chosen so that patterns are likely to be clustered similarly if they lie close to specific (unknown) vectors in the feature space. The method applies naturally to learning an optimal affinity matrix, which corresponds to learning the parameters of the kernelized encoder. The procedure does not require computing eigenvalues or inverting Gram matrices, which makes it potentially attractive for clustering large data sets.
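The core objective described above can be illustrated with a minimal sketch. The code below assumes a softmax encoder $p(y|\sfx)$ whose cluster responsibilities decay with squared distance to a set of prototype vectors (one plausible instance of "patterns clustered similarly if close to specific vectors"; the paper's actual constrained encoder may differ), and evaluates the mutual information as $I(\sfx,y) = H(y) - H(y|\sfx)$:

```python
import numpy as np

def mi_objective(X, prototypes, beta=1.0):
    """Estimate I(x, y) for a hypothetical softmax encoder p(y|x)
    defined via squared distances to prototype vectors.

    X          : (N, D) array of training patterns
    prototypes : (K, D) array of cluster prototype vectors
    beta       : inverse-temperature sharpening the assignments
    """
    # Squared Euclidean distances from each pattern to each prototype, shape (N, K).
    d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    logits = -beta * d2
    logits -= logits.max(axis=1, keepdims=True)   # numerical stabilization
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)             # encoder p(y | x_i)

    p_y = p.mean(axis=0)                          # marginal p(y) over the data
    h_y = -(p_y * np.log(p_y + 1e-12)).sum()      # H(y)
    h_y_given_x = -(p * np.log(p + 1e-12)).sum(axis=1).mean()  # H(y | x)
    return h_y - h_y_given_x                      # I(x, y) = H(y) - H(y|x)
```

Maximizing this quantity over the prototypes (e.g., by gradient ascent) rewards encoders that use all clusters (high $H(y)$) while assigning each pattern confidently (low $H(y|\sfx)$); for two well-separated clusters the objective approaches its upper bound $\log 2$.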