Abstract

Fourier ptychography (FP) involves the acquisition of several low-resolution intensity images of a sample under varying illumination angles. These images are then combined into a high-resolution complex-valued image by solving a phase-retrieval problem. The objective in dynamic FP is to obtain a sequence of high-resolution images of a moving sample. In that setting, standard frame-by-frame reconstruction methods limit the temporal resolution because of the large number of measurements that must be acquired for each frame. In this work, we instead propose a neural-network-based reconstruction framework for dynamic FP. Specifically, each reconstructed image in the sequence is the output of a shared deep convolutional network fed with an input vector that lies on a one-dimensional manifold that encodes time. We then optimize the parameters of the network to fit the acquired measurements. The architecture of the network and the constraints on the input vectors impose a spatiotemporal regularization on the sequence of images, which enables our method to achieve high temporal resolution without compromising the spatial resolution. The proposed framework requires no training data and also recovers the pupil function of the microscope. Through numerical experiments, we show that our framework paves the way for high-quality ultrafast FP.
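To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' code) of the main ingredients: a shared convolutional generator that maps a time-encoded latent vector to a complex high-resolution image, a toy FP forward model that renders low-resolution intensities for each illumination angle, and a joint optimization of the network parameters, the latent anchors, and the pupil function against the acquired measurements. All names, layer sizes, and the linear latent interpolation are illustrative assumptions.

import torch
import torch.nn as nn
import torch.fft as fft

class Generator(nn.Module):
    """Shared convolutional network: latent code -> complex high-resolution image."""
    def __init__(self, latent_dim=32, size=256):
        super().__init__()
        self.size = size
        self.fc = nn.Linear(latent_dim, 64 * (size // 8) ** 2)
        self.net = nn.Sequential(
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(32, 2, 3, padding=1),  # [real, imag]
        )

    def forward(self, z):
        x = self.fc(z).view(1, 64, self.size // 8, self.size // 8)
        x = self.net(x)
        return torch.complex(x[:, 0], x[:, 1])  # shape (1, H, W), complex-valued

def fp_forward(obj, pupil, shift, lr_size):
    """Toy FP forward model: shift the object spectrum according to the illumination
    angle, crop to the pupil support, apply the pupil, and return a low-res intensity.
    Assumes the shifted crop stays inside the spectrum."""
    spec = fft.fftshift(fft.fft2(obj))
    cy = spec.shape[-2] // 2 + shift[0]
    cx = spec.shape[-1] // 2 + shift[1]
    h = lr_size // 2
    patch = spec[..., cy - h:cy + h, cx - h:cx + h] * pupil
    lowres = fft.ifft2(fft.ifftshift(patch))
    return lowres.abs() ** 2

# Latent codes on a one-dimensional manifold encoding time: here, a line segment
# between two learnable anchors (one simple choice among several possible ones).
latent_dim, n_frames, lr_size = 32, 16, 64
z0 = nn.Parameter(torch.randn(latent_dim))
z1 = nn.Parameter(torch.randn(latent_dim))
gen = Generator(latent_dim)
pupil = nn.Parameter(torch.ones(lr_size, lr_size, dtype=torch.cfloat))  # recovered jointly
opt = torch.optim.Adam([*gen.parameters(), z0, z1, pupil], lr=1e-3)

def fit_step(measurements, shifts):
    """One gradient step on the data-fidelity loss over all frames and angles.
    measurements[t][k]: acquired low-res intensity for frame t and illumination k;
    shifts[k]: corresponding Fourier-domain shift (both assumed given)."""
    loss = torch.tensor(0.0)
    for t in range(n_frames):
        alpha = t / (n_frames - 1)
        z_t = ((1 - alpha) * z0 + alpha * z1).unsqueeze(0)  # point on the 1-D manifold
        obj = gen(z_t)
        for k, shift in enumerate(shifts):
            pred = fp_forward(obj, pupil, shift, lr_size)
            loss = loss + ((pred - measurements[t][k]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

Constraining the latent codes to a segment between two learnable anchors is one simple way to realize a one-dimensional manifold that encodes time; the shared generator then couples the frames spatially, which is what supplies the spatiotemporal regularization described above.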
