Dynamic textures are sequences of images exhibiting temporal regularity; they appear in videos of smoke, flames, flowing water, or moving grass, for instance. Recently, a method based on linear dynamic system theory was proposed to synthesize dynamic textures: a texture is represented as the output of a linear dynamic system, and synthesis reduces to matrix multiplication operations. In this report, we study the problem of implementing this method in fixed-point arithmetic, as required on many portable devices such as PDAs or mobile phones. We do so by jointly evaluating the effect of model-coefficient quantization and of fixed-point arithmetic precision, both of which are sources of error with respect to the floating-point implementation. Our analysis shows that model coefficients quantized with fewer bits yield visual synthesis results comparable to those of the more expensive floating-point implementation, while requiring far less buffer memory and enabling faster synthesis.
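
To make the setting concrete, the following is a minimal sketch (not the report's implementation) of linear-dynamic-system synthesis with quantized coefficients. The model x(t+1) = A·x(t), y(t) = C·x(t), the Q-format fractional bit count, and the helper names are illustrative assumptions; real dynamic-texture models also include a driving noise term, which is omitted here for brevity.

```python
import numpy as np

def to_fixed(x, frac_bits):
    # Quantize a float array to signed fixed point with `frac_bits`
    # fractional bits (illustrative Q-format, not the report's exact one).
    return np.round(np.asarray(x, dtype=np.float64) * (1 << frac_bits)).astype(np.int64)

def from_fixed(x, frac_bits):
    # Convert a fixed-point array back to floating point for display.
    return x.astype(np.float64) / (1 << frac_bits)

def synthesize(A, C, x0, steps, frac_bits=None):
    """Run the LDS x(t+1) = A x(t), y(t) = C x(t) for `steps` frames.

    With frac_bits=None the recursion is done in floating point;
    otherwise A, C, and the state are quantized and every matrix
    product is rescaled by an arithmetic right shift."""
    if frac_bits is None:
        x, frames = np.asarray(x0, dtype=np.float64), []
        for _ in range(steps):
            frames.append(C @ x)   # output frame (pixels) from hidden state
            x = A @ x              # advance the hidden state
        return frames
    Aq, Cq = to_fixed(A, frac_bits), to_fixed(C, frac_bits)
    xq, frames = to_fixed(x0, frac_bits), []
    for _ in range(steps):
        # Each product of two Q-format numbers doubles the fractional
        # bits, so shift right by frac_bits to restore the format.
        frames.append(from_fixed((Cq @ xq) >> frac_bits, frac_bits))
        xq = (Aq @ xq) >> frac_bits
    return frames
```

With a stable transition matrix (spectral radius below one) and a moderate number of fractional bits, the fixed-point trajectory stays close to the floating-point one, which is the trade-off the report quantifies.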