Popular transforms, such as the discrete cosine transform and the wavelet transform, owe their success to the fact that they promote sparsity: they extract the structure of a large class of signals and represent them with only a few transform coefficients. They therefore play a major role in the compression of speech, audio, image, and video signals. Beyond data compression, sparsity also has a major impact on the way we acquire signals. Sparsity suggests sampling signals in a compact form, with a number of samples as close as possible to the information rate of the signal. This insight gave rise to new research fields such as finite-rate-of-innovation sampling and compressed sensing, with applications including medical and seismic imaging, distributed and remote sensing, and analog-to-information conversion.

The goal of this dissertation is to advance acquisition and reconstruction techniques for sparse signals from both theoretical and practical points of view, paying special attention to applications. We start with a brief review of finite-rate-of-innovation sampling for continuous-time periodic streams of Diracs and provide two extensions to this framework, namely the equal-weight and multiframe scenarios, designing dedicated sampling and reconstruction algorithms for each. Our theoretical analysis and simulation results indicate that the proposed algorithms improve robustness to uncertainties in the measurements. We then turn our attention to discrete-time periodic streams of Diracs and investigate the corresponding support recovery problem in the noisy setting. Adopting an estimation-theoretic approach, we derive conditions on the number of measurements required for support recovery with lowpass Fourier and i.i.d. standard Gaussian measurement matrices.
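To make the annihilating-filter machinery behind finite-rate-of-innovation sampling concrete, the following is a minimal NumPy sketch of the classical noiseless recovery of K periodic Diracs from 2K+1 lowpass Fourier coefficients. The function name `fri_recover` and all variable names are illustrative, not taken from the dissertation, and the sketch omits the noise-robustness refinements discussed in the text.

```python
import numpy as np

def fri_recover(X, K, tau):
    """Recover K Diracs (locations t_k in [0, tau), amplitudes a_k)
    from consecutive Fourier coefficients X[m], m = -M//2..M//2,
    via the classical annihilating-filter method (noiseless sketch)."""
    M = len(X)
    # Annihilation equations: sum_j h[j] X[m - j] = 0 for all valid m.
    A = np.array([[X[i + K - j] for j in range(K + 1)]
                  for i in range(M - K)])
    # The filter h is a null-space vector of A (last right singular vector).
    h = np.linalg.svd(A)[2][-1].conj()
    # The filter's roots encode the locations: u_k = exp(-2j*pi*t_k/tau).
    t = np.mod(-np.angle(np.roots(h)) * tau / (2 * np.pi), tau)
    # Amplitudes follow from a Vandermonde least-squares fit.
    m = np.arange(-(M // 2), M // 2 + 1)
    V = np.exp(-2j * np.pi * np.outer(m, t) / tau) / tau
    a = np.real(np.linalg.lstsq(V, X, rcond=None)[0])
    idx = np.argsort(t)
    return t[idx], a[idx]
```

With exact coefficients this recovers the Diracs perfectly; the equal-weight and multiframe extensions above modify how the filter is estimated to gain robustness when the coefficients are noisy.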
Building on these classical settings, we then study the problem of non-adaptive distributed sensing and centralized reconstruction of two signals that may not themselves be sparse but are linked by an unknown sparse linear time-invariant filter. We analyze two sampling strategies together with the achievable sampling pairs, and propose an efficient and robust reconstruction algorithm based on annihilating filters.

The last two chapters of the thesis are devoted to two applications whose solutions involve sparsity constraints. First, we study the effectiveness of sparsity-based regularization for breast screening using ultrasound tomography. Ultrasound signals are sent through the object and received by a set of receivers placed around the region of interest. The propagation pattern varies as a function of the unknown sound-speed distribution, which is assumed to be sparse in an appropriate basis. Since the reconstruction is a large-scale optimization problem, we employ dedicated forward modeling and sparsity-regularized reconstruction techniques that not only improve the robustness of the results but also have reasonable complexity. Second, we consider an epidemiology problem in which the goal is to design sampling and recovery mechanisms that identify a sparse set of virally infected people in a large population from a few collective measurements. We acquire collective samples by sending agents into the population whose task is to contact people. Since disease transmission is a probabilistic phenomenon, the measurement process is not fully known to the decoder. Building on a variation of classical non-adaptive group testing, we propose sampling and recovery procedures with which the decoder can overcome the probabilistic nature of the measurements and successfully identify the infected people.
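For readers unfamiliar with group testing, the following sketch shows the classical noiseless non-adaptive setting that the epidemiology chapter generalizes: random pooled tests decoded by the simple COMP rule (anyone who appears in a negative pool is healthy). All parameter values are illustrative, and the sketch deliberately omits the probabilistic disease-transmission model that distinguishes the thesis's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 200, 3, 60                # population, infections, pools (illustrative)
x = np.zeros(n, dtype=bool)         # true infection status
x[rng.choice(n, size=k, replace=False)] = True

# Each of the m agents contacts a random ~5% subset of the population.
A = rng.random((m, n)) < 0.05
# A pooled test is positive iff the agent met at least one infected person.
y = (A.astype(int) @ x.astype(int)) > 0

# COMP decoding: anyone appearing in at least one negative pool is healthy;
# everyone else is conservatively declared infected.
healthy = A[~y].sum(axis=0) > 0
x_hat = ~healthy
```

In the noiseless model COMP never misses an infected person, since an infected person can only appear in positive pools; the thesis's setting is harder because a contact transmits a positive result only probabilistically, so the decoder must be made robust to this uncertainty.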