Recent years have seen increasing interest in sparse representations for image classification and object recognition, probably motivated by evidence from the analysis of the primate visual cortex. It is still unclear, however, whether sparsity actually helps classification. In this paper we evaluate its impact on the recognition rate using a shallow modular architecture, adopting both standard filter banks and filter banks learned in an unsupervised way. In our experiments on the CIFAR-10 and Caltech-101 datasets, enforcing sparsity constraints does not improve recognition performance. This has an important practical impact on image descriptor design, as enforcing these constraints can carry a heavy computational cost.