This report extends to the case of sparse approximations our previous study on the effects of introducing a priori knowledge into the recovery of sparse representations over overcomplete dictionaries. Greedy algorithms and Basis Pursuit Denoising are considered in this work. Theoretical results show how the use of "reliable" a priori information (which here takes the form of weights) can improve the performance of these methods. In particular, we generalize the sufficient conditions established by Tropp and by Gribonval and Vandergheynst, which guarantee retrieval of the sparsest solution, to the case where a priori information is used. We prove how the use of prior models at the signal decomposition stage influences these sufficient conditions. The results of this work reduce to the classical conditions of Tropp and of Gribonval and Vandergheynst when no a priori information about the signal is available. Finally, examples validate and illustrate the theoretical results.
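To make the idea concrete, the following is a minimal sketch of how a priori weights can enter a greedy method such as Orthogonal Matching Pursuit: per-atom weights scale the correlations at the selection step, so atoms the prior deems more likely are favored. The function name `weighted_omp`, the dictionary sizes, and the weighting rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def weighted_omp(D, y, w, k):
    """Greedy sparse recovery with prior knowledge encoded as per-atom
    weights w (larger w[i] = atom i believed more likely to be active).
    D: (m, n) dictionary with unit-norm columns; y: (m,) signal; k: sparsity.
    Returns the selected support and the corresponding coefficients.
    NOTE: illustrative sketch, not the exact algorithm analyzed in the paper.
    """
    residual = y.copy()
    support = []
    for _ in range(k):
        # Prior enters here: weight the correlations at the selection step.
        corr = np.abs(D.T @ residual) * w
        corr[support] = 0.0                     # never reselect an atom
        support.append(int(np.argmax(corr)))
        # Least-squares fit on the current support, then update the residual.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    return support, coeffs

# Toy example: y is an exact combination of atoms 0 and 2.
rng = np.random.default_rng(0)
D = rng.standard_normal((50, 8))
D /= np.linalg.norm(D, axis=0)                  # unit-norm columns
y = 2.0 * D[:, 0] - 1.5 * D[:, 2]
w = np.ones(8)                                  # uniform weights = plain OMP
support, coeffs = weighted_omp(D, y, w, k=2)
print(sorted(support))
```

With uniform weights the selection rule coincides with standard OMP; increasing `w[i]` for atoms known a priori to be active biases the greedy search toward them, which is the mechanism the sufficient conditions in this report quantify.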