Dynamic textures are image sequences that exhibit temporal regularity; examples include videos of smoke, flames, ocean waves, and wind-shaken forests. Dynamic texture modelling and synthesis has usually been performed on RGB color images. In this paper, we analyze the use of different color encodings that allow luminance and chrominance information to be modelled separately. We find that this separation is more appropriate, since it exploits the distinct spatial and temporal characteristics of the color channels and leads to more flexible and compact representations. We show that, compared to RGB, similar synthesis performance can be achieved with YCbCr or Lab color encodings using half the model coefficients and less computational power.
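The luminance/chrominance separation referred to above can be illustrated with a standard color-space conversion. The sketch below is not the paper's implementation; it simply shows a plain BT.601 RGB-to-YCbCr transform in NumPy, after which a model could allocate most of its coefficients to the detail-rich Y channel and represent the smoother Cb/Cr channels more compactly.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an RGB frame (H, W, 3) with values in [0, 255] to YCbCr (BT.601).

    Y carries the luminance (fine spatial structure); Cb and Cr carry the
    chrominance, which typically varies more smoothly in space and time.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    m = np.array([[ 0.299,     0.587,     0.114    ],   # Y
                  [-0.168736, -0.331264,  0.5      ],   # Cb
                  [ 0.5,      -0.418688, -0.081312 ]])  # Cr
    ycbcr = rgb @ m.T
    ycbcr[..., 1:] += 128.0  # centre the chroma channels around 128
    return ycbcr

# A neutral gray frame has no chroma: Cb = Cr = 128, Y equals the gray level.
frame = np.full((4, 4, 3), 120.0)
out = rgb_to_ycbcr(frame)
```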