In this work we explore the potential of a representational framework based on Matching Pursuit (MP) for the decomposition of audio-visual signals over redundant dictionaries. It is relatively easy for a human to correctly interpret a scene consisting of a combination of acoustic and visual stimuli, and to exploit both sources of information to experience a richer perception of the world. In contrast, computer systems have considerable difficulty dealing with multimodal signals, and the information that each component carries about the others is usually discarded. This is mainly due to the complexity of the dependencies between audio and video signals, and to the signal representations typically adopted when attempting to combine them in multimodal fusion systems. Redundant decompositions describe audio-visual sequences in an extremely concise fashion while preserving good representational properties, thanks to the use of well-designed redundant dictionaries. This allows us to overcome two typical problems of multimodal fusion algorithms: the high dimensionality of the signals involved, and the limitations of classical representation techniques, such as pixel-based measures (for video) or Fourier-like transforms (for audio), which take the physics of the problem only marginally into account. The experimental results we obtain using MP decompositions over redundant codebooks are encouraging, and suggest that this research direction could open a new way to represent multimodal signals.
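To make the core idea concrete, the following is a minimal sketch of the generic Matching Pursuit loop: at each iteration the atom most correlated with the current residual is selected, its projection is subtracted, and the pair (atom index, coefficient) is recorded. The toy overcomplete dictionary below (identity atoms plus a constant atom) and all names are illustrative assumptions, not the actual audio-visual dictionaries used in this work.

```python
import numpy as np

def matching_pursuit(x, D, n_iter=10):
    """Greedy MP decomposition of x over a dictionary D of unit-norm column atoms."""
    residual = x.astype(float).copy()
    coeffs = []
    for _ in range(n_iter):
        corr = D.T @ residual             # correlation of the residual with every atom
        k = int(np.argmax(np.abs(corr)))  # index of the best-matching atom
        c = corr[k]
        residual -= c * D[:, k]           # remove the selected atom's contribution
        coeffs.append((k, c))
    return coeffs, residual

# Toy redundant (overcomplete) dictionary: 8 identity atoms plus one constant atom.
n = 8
D = np.hstack([np.eye(n), np.ones((n, 1)) / np.sqrt(n)])
x = 3.0 * D[:, 2] + 1.5 * D[:, n]         # a sparse combination of two atoms
coeffs, res = matching_pursuit(x, D, n_iter=5)
```

After a few iterations the residual energy shrinks and the first few (index, coefficient) pairs form the concise description of the signal; for well-designed dictionaries, a handful of atoms can capture most of the structure.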