Optical diffraction tomography (ODT) provides three-dimensional (3D) refractive index (RI) distributions of transparent samples. Because RI values differ across materials, they serve as endogenous contrast; ODT therefore enables label-free imaging and avoids staining procedures that can disturb samples during measurement. It has been applied in various fields to study hematology, morphological parameters, biochemical information, and more.

The fundamental principle of ODT reconstruction is to recover 3D information from multiple 2D measurements. Although a complete reconstruction would require 2D measurements that fully scan the sample, some measurements are inaccessible because of the limited numerical apertures (NAs) of the optical system. This is known as the missing cone problem, since the regions of the frequency domain not covered by the NAs form cone shapes. The missing cone problem degrades the final reconstruction: RI values are underestimated and, more severely, images are elongated along the optical axis.

Another challenge in ODT reconstruction is modeling the nonlinear relationship between a sample and the measurements. To linearize this relationship, only the first order of scattering is commonly considered while the higher orders are neglected; however, the final reconstruction degrades as the higher orders of scattering become more pronounced.

In this thesis, we aim to solve these challenges in ODT reconstruction to provide more accurate quantitative information, namely RI distributions. The first approach is based on model-based iterative reconstruction. We choose the beam propagation method (BPM) as the forward model in order to capture the higher orders of scattering. Because the multi-layer structure of the BPM resembles that of the neural networks used in deep learning, we call this scheme learning tomography (LT). We rigorously compare the performance of LT with that of the conventional linear model-based reconstruction scheme.
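To make the multi-layer structure of the BPM concrete, the following is a minimal split-step sketch in one transverse dimension: each slice of the sample applies a thin phase screen proportional to the local RI contrast, and the field diffracts between slices via the angular spectrum. This is an illustrative sketch only, not the implementation used in the thesis; the function name, grid parameters, plane-wave input, and immersion index are assumptions.

```python
import numpy as np

def bpm_forward(delta_n, dz, dx, wavelength, n0=1.33):
    """Split-step beam propagation through a sample described by slices of
    RI contrast delta_n[z, x] (sample RI minus immersion medium n0).
    Returns the complex field exiting the sample for a unit plane-wave input.
    Units of dz, dx, and wavelength must match (e.g. micrometers)."""
    nz, nx = delta_n.shape
    k0 = 2 * np.pi / wavelength              # vacuum wavenumber
    km = n0 * k0                             # wavenumber in the medium
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)

    # Angular-spectrum propagation kernel for one slice of thickness dz
    kz = np.sqrt(np.maximum(km**2 - kx**2, 0.0) + 0j)
    H = np.exp(1j * kz * dz)
    H[kx**2 > km**2] = 0.0                   # drop evanescent components

    field = np.ones(nx, dtype=complex)       # unit-amplitude plane wave
    for z in range(nz):
        field = np.fft.ifft(H * np.fft.fft(field))  # diffraction step
        field *= np.exp(1j * k0 * delta_n[z] * dz)  # refraction (phase screen)
    return field
```

The alternation of a linear propagation step and a sample-dependent pointwise phase modulation is what gives the BPM its layered, network-like structure: each slice acts like one layer whose "weights" are the unknown RI values, which is what LT optimizes by backpropagating the mismatch with the measurements.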
Furthermore, by adopting a more advanced BPM as the forward model, we improve LT further and demonstrate dramatically better performance in both simulations and experiments.

The second approach statistically learns the artifacts present in final reconstructions using a deep neural network (DNN) trained on a large dataset. Unlike the previous approaches, which require iterations, the DNN reconstructs RI distributions instantly. We demonstrate the DNN approach on red blood cells, which are highly distorted by the missing cone problem. To overcome the lack of ground truth in 3D ODT reconstruction, we digitally generate a synthetic dataset. The network produces highly accurate reconstructions on the synthetic test set. Most importantly, we obtain high-fidelity reconstructions of experimental data using the network trained only on synthetic data.

Unlike many other imaging modalities, ODT provides 3D quantitative information without labeling. To fully benefit from this capacity for quantitative imaging, it is critical to resolve the existing challenges in ODT reconstruction and produce high-fidelity results. In this thesis, we address the major challenges in ODT reconstruction using various learning approaches, and we believe this work can further establish ODT as a powerful tool for a broad range of applications.
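To illustrate where the missing cone artifact that the DNN learns to correct comes from, the following sketch builds the binary support of 3D spatial frequencies accessible under the first-order (Born/Rytov) model: each illumination within the NA contributes an Ewald-sphere cap k(s_out − s_in), and the union of caps leaves an empty cone around the optical (kz) axis. This is a coarse illustration under stated assumptions; the function name, grid size, sampling density, and sphere-shell tolerance are all choices made for this sketch, not values from the thesis.

```python
import numpy as np

def odt_support_mask(n_grid, k, na, n0=1.33):
    """Binary mask of 3D spatial frequencies accessible to first-order ODT
    with illumination and detection both limited to numerical aperture na.
    The grid spans [-2k, 2k] along each axis; each illumination direction
    s_in contributes the Ewald-sphere cap k * (s_out - s_in)."""
    sin_max = na / n0                        # maximum sine of the ray angle
    ax = np.linspace(-2 * k, 2 * k, n_grid)
    KX, KY, KZ = np.meshgrid(ax, ax, ax, indexing="ij")
    mask = np.zeros((n_grid,) * 3, dtype=bool)
    for sx in np.linspace(-sin_max, sin_max, 25):
        for sy in np.linspace(-sin_max, sin_max, 25):
            s2 = sx**2 + sy**2
            if s2 > sin_max**2:
                continue                     # illumination outside the NA cone
            sz = np.sqrt(1.0 - s2)
            # Scattered direction s_out = K/k + s_in must lie on the unit
            # sphere (forward hemisphere) and within the detection NA.
            ox, oy, oz = KX / k + sx, KY / k + sy, KZ / k + sz
            on_sphere = np.abs(ox**2 + oy**2 + oz**2 - 1.0) < 0.05
            in_na = ox**2 + oy**2 < sin_max**2
            mask |= on_sphere & in_na & (oz > 0)
    return mask
```

Frequencies with large axial and small lateral components are never covered by any cap, which is exactly why reconstructions are elongated along the optical axis and RI values are underestimated; the DNN is trained to restore the information lost in this cone.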