Autocorrelations have previously been used as features for 1D and 2D signal classification in a wide range of applications, including texture classification, face detection and recognition, and EEG signal classification. In almost all cases, however, the high computational cost has prevented their extension beyond the second order. In this paper we present an efficient method for using higher-order autocorrelation functions in pattern recognition. We will show that although the autocorrelation feature vectors (described below) are elements of a very high dimensional space, their explicit computation can be avoided whenever the method employed can be expressed in terms of inner products of input vectors. We will present several typical scenarios for using autocorrelations and show that the order of the autocorrelations is no longer an obstacle.
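To make the inner-product idea concrete, the following sketch (not the paper's code; it assumes circular shifts and a 1D signal) compares the explicit order-n autocorrelation feature vector, whose dimension grows as N^n, with a direct evaluation of the same inner product via the circular cross-correlation c(d) of the two signals, using the identity <r_x, r_y> = sum_d c(d)^(n+1):

```python
import itertools
import numpy as np

def autocorr_features(x, order):
    """Explicit order-n circular autocorrelation feature vector.

    Entry (tau_1, ..., tau_n) is sum_t x[t] * x[t+tau_1] * ... * x[t+tau_n]
    (indices mod N), so the vector has N**order components --
    exponential in the order, which is what makes explicit
    computation impractical beyond order two.
    """
    N = len(x)
    feats = []
    for shifts in itertools.product(range(N), repeat=order):
        prod = x.copy()
        for tau in shifts:
            prod = prod * np.roll(x, -tau)  # roll(-tau)[t] == x[(t+tau) % N]
        feats.append(prod.sum())
    return np.array(feats)

def autocorr_kernel(x, y, order):
    """Inner product of the order-n autocorrelation vectors of x and y,
    computed without ever building them: sum_d c(d)**(order+1), where
    c(d) = sum_u x[u] * y[(u+d) % N] is the circular cross-correlation.
    The cost is polynomial in N and independent of the feature dimension.
    """
    N = len(x)
    c = np.array([np.dot(x, np.roll(y, -d)) for d in range(N)])
    return (c ** (order + 1)).sum()
```

For small signals one can check that `autocorr_features(x, n) @ autocorr_features(y, n)` and `autocorr_kernel(x, y, n)` agree, while only the latter remains feasible as the order grows; any kernel-based method (e.g. an SVM) can then use `autocorr_kernel` directly in place of the explicit feature map.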