Title: Three-dimensional tomography of red blood cells using deep learning
Authors: Lim, Joowon; Ayoub, Ahmed B.; Psaltis, Demetri
Date issued: 2020-03-01
Date deposited: 2021-06-05
DOI: 10.1117/1.AP.2.2.026001
Handle: https://infoscience.epfl.ch/handle/20.500.14299/178489
Web of Science: WOS:000648539600004
Type: Journal article (research article)
Subject: Optics
Keywords: optical diffraction tomography; three-dimensional imaging; machine learning; deep learning; image reconstruction; red blood cell; missing cone problem; principles

Abstract: We accurately reconstruct three-dimensional (3-D) refractive index (RI) distributions from highly ill-posed two-dimensional (2-D) measurements using a deep neural network (DNN). Reconstructions obtained with the Wolf transform inversion method suffer from strong distortions because the limited numerical aperture (NA) of the optical system renders the measurements ill-posed. Despite the recent success of DNNs in solving ill-posed inverse problems, their application to 3-D optical imaging is particularly challenging because of the lack of ground truth. We overcome this limitation by generating digital phantoms that serve as samples for the discrete dipole approximation (DDA), which produces multiple 2-D projection maps over a limited range of illumination angles. The samples presented are red blood cells (RBCs), whose morphology makes them strongly affected by the ill-posed problem. The network, trained on synthetic measurements from the digital phantoms, successfully removes the introduced distortions. Most importantly, we obtain high-fidelity reconstructions from experimentally recorded projections of a real RBC sample using the network trained on digitally generated RBC phantoms. Finally, we confirm the reconstruction accuracy by using the DDA to calculate 2-D projections of the 3-D reconstructions and comparing them with the experimentally recorded projections.
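The training strategy summarized in the abstract (synthetic phantoms with known ground truth, distorted reconstructions as network input) can be sketched compactly. The following is a minimal, illustrative Python/PyTorch sketch, not the authors' code: the ellipsoidal phantom generator, the Fourier-domain missing-cone simulation, and the small residual 3-D CNN are assumptions standing in for the RBC phantom model, the DDA forward calculation, and the network architecture used in the paper.

# Minimal sketch (not the authors' code): train a small 3-D CNN to undo
# missing-cone distortions, using synthetic phantoms as ground truth.
# Phantom generator, network size, and the missing-cone simulation are
# illustrative assumptions, not details taken from the paper.
import numpy as np
import torch
import torch.nn as nn

def make_phantom(n=32):
    """Random ellipsoidal refractive-index phantom (stand-in for an RBC model)."""
    z, y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j, -1:1:n*1j]
    a, b, c = np.random.uniform(0.3, 0.8, 3)
    ri = 1.34 + 0.04 * ((x/a)**2 + (y/b)**2 + (z/c)**2 < 1.0)
    return ri.astype(np.float32)

def missing_cone(vol, half_angle_deg=35.0):
    """Zero out a cone of axial frequencies to mimic limited-NA tomography."""
    n = vol.shape[0]
    f = np.fft.fftfreq(n)
    fz, fy, fx = np.meshgrid(f, f, f, indexing="ij")
    fr = np.sqrt(fx**2 + fy**2) + 1e-12
    cone = np.abs(fz) / fr > np.tan(np.deg2rad(half_angle_deg))
    spec = np.fft.fftn(vol)
    spec[cone] = 0.0
    return np.real(np.fft.ifftn(spec)).astype(np.float32)

class Corrector3D(nn.Module):
    """Tiny residual 3-D CNN mapping distorted volumes to corrected ones."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1),
        )
    def forward(self, x):
        return x + self.net(x)   # residual correction of the distorted volume

model = Corrector3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(20):                                  # toy training loop
    gt = np.stack([make_phantom() for _ in range(4)])   # ground-truth phantoms
    distorted = np.stack([missing_cone(v) for v in gt]) # simulated distorted inputs
    gt_t = torch.from_numpy(gt)[:, None]                # add channel dimension
    in_t = torch.from_numpy(distorted)[:, None]
    opt.zero_grad()
    loss = loss_fn(model(in_t), gt_t)
    loss.backward()
    opt.step()

In this toy setup the "measurement" step is collapsed into a single Fourier-domain masking operation; in the paper the distorted inputs come from Wolf-transform reconstructions of DDA-simulated projections over a limited range of illumination angles.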