
Article

Sub-Diffraction Visible Imaging Using Macroscopic Fourier Ptychography and Regularization by Denoising

Zhixin Li, Desheng Wen, Zongxi Song, Gang Liu, Weikang Zhang and Xin Wei
1 Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(9), 3154; https://doi.org/10.3390/s18093154
Submission received: 17 August 2018 / Revised: 9 September 2018 / Accepted: 16 September 2018 / Published: 18 September 2018
(This article belongs to the Special Issue Optical Sensing and Imaging, from UV to THz Range)

Abstract

Imaging past the diffraction limit is of great significance for an optical system. Fourier ptychography (FP) is a novel coherent imaging technique that can achieve this goal, and it is widely used in microscopic imaging. Most phase retrieval algorithms for FP reconstruction are based on Gaussian measurements and cannot be extended straightforwardly to a long-range, sub-diffraction imaging setup because of laser speckle noise corruption. In this work, a new FP reconstruction framework is proposed for macroscopic visible imaging. Compared with existing research, the reweighted amplitude flow algorithm is adopted for better signal modeling, and the Regularization by Denoising (RED) scheme is introduced to reduce the effects of speckle. Experiments demonstrate that the proposed method obtains state-of-the-art recovered results on both visual and quantitative metrics without increasing computational cost, and that it is flexible for real imaging applications.

1. Introduction

Improving the resolution of an imaging system is a long-standing goal in imaging science. It is of great importance in many optical applications and in computer vision, including medical imaging, remote sensing, and surveillance. In long-range imaging, however, the limited angular extent of the aperture results in low spatial resolution. Several methods have been proposed to prevent this resolution loss; the most direct is to increase the input aperture by using a large lens. Physically increasing the lens diameter is not an ideal solution, since it leads to expensive and heavy setups, and many corrective optical elements are required to counteract the aberrations of such larger lenses. Super-resolution reconstruction is a computational imaging technique that improves spatial resolution by capturing and processing a collection of low-resolution (LR) images. These LR images are subsampled as well as shifted with subpixel precision, so each one contains new information that can be exploited to obtain a high-resolution (HR) image [1]. In effect, it overcomes pixel-limited resolution rather than diffraction blur, and pixel sampling limits are no longer critical for many current imaging applications because sensor pixels are approaching the diffraction limit of visible light [2]. Synthetic aperture radar (SAR) techniques, which work in long-wavelength regimes, are another way to improve imaging resolution. For SAR, the full complex field (amplitude and phase) can be recorded directly because the antenna has picosecond timing resolution, and the radar returns can then be stitched together to form an HR image. This approach does not transfer to visible-light imaging, because current camera sensors measure only the intensity of the optical field and lose all phase information [3].
Recent work in ptychography demonstrates that phase information can be recovered from intensity measurements, allowing imaging past the diffraction limit of an optical system. Existing research focuses on microscopy, imaging thin samples with a smooth phase [4]. It is worth noting that the imaged samples must be thin for Fourier ptychographic microscopy (FPM): if this assumption is not satisfied, the LR images at different incident angles cannot be uniquely mapped to different passbands of the spectrum, and the panning spectrum constraint cannot be accurately imposed to recover the HR complex image [5,6]. Dong et al. first extended ptychography to macroscopic imaging, recovering a super-resolution image of an object placed in the far field by scanning the camera over different x-y positions [6]. However, the samples in long-distance imaging are real, everyday objects with optically rough surfaces, which leads to strong speckle noise under coherent illumination.
In this work, we make two contributions toward sub-diffraction imaging with the FP method. First, we show how the reweighted amplitude flow scheme can be applied to Fourier ptychographic reconstruction. Second, we show how the Regularization by Denoising (RED) framework can be plugged into our long-range sub-diffraction imaging model. The recovered results are compared with those of other methods, showing that the proposed framework offers excellent reconstruction performance under different noise types.

2. Related Work

2.1. Macroscopic Fourier Ptychography

Fourier ptychography is a recently reported computational imaging technique that enables imaging past the diffraction limit of an optical system. It is essentially a synthetic aperture technology, but it does not require direct measurement of phase information. FP illuminates the object from different angles and correspondingly captures multiple LR images describing different spatial spectrum bands of the object. These captured intensity images have a high overlap ratio between adjacent measurements, which permits recovery of the lost phase information by a reconstruction algorithm. A large field-of-view (FOV), high-resolution image of the object can then be obtained by stitching the spectrum bands together in Fourier space.
Fourier ptychography for macroscopic imaging was first proposed by Dong et al. [6]. They developed a camera-scanning FP platform that replaces the angle-varied illumination, circumventing the thin-specimen assumption mentioned above and extending the FP concept to macroscopic imaging settings. Holloway et al. proposed a prototype for macroscopic FP in a reflection imaging geometry and demonstrated its potential for improving the spatial resolution of long-distance imaging of rough objects [3]. They adopted the alternating projection (AP) algorithm [7,8], which alternately imposes constraints in the spatial and Fourier domains. Although it achieves higher resolution gains than traditional super-resolution reconstruction algorithms, this method is weakly robust to noise corruption, especially the laser speckle noise caused by the coherence of the illumination source. Stronger regularization and a new reconstruction algorithm are therefore required for long-range macroscopic Fourier ptychography.

2.2. Coherent Illumination and Speckle Phenomena

Comparing the transfer functions for incoherent and coherent illumination, the cutoff frequency of coherent illumination is half that of incoherent illumination for a diffraction-limited system with a circular entrance pupil [9]. Figure 1a shows the optical transfer function (OTF) of an imaging system with incoherent illumination, and Figure 1b shows the coherent transfer function (CTF) of the same system with coherent illumination. The CTF transmits all spatial frequencies within its cutoff without attenuation, whereas in the incoherent system many important spatial frequencies are attenuated despite the higher cutoff frequency. For the macroscopic Fourier ptychography setup, the CTF is shifted in the Fourier domain by linearly translating the imaging aperture (perpendicular to the optical axis). The Fourier ptychography transfer function (FPTF) is therefore a summation over all shifted CTFs, which synthesizes a larger aperture and transmits all spatial frequencies without attenuation [10].
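To make this comparison concrete, the short Python/NumPy sketch below (not taken from the paper) builds a binary circular CTF and derives the corresponding incoherent OTF as the normalized autocorrelation of the pupil; the grid size and cutoff value are arbitrary illustrative assumptions.

```python
# Minimal sketch: circular coherent transfer function (CTF) and the incoherent OTF,
# computed as the normalized autocorrelation of the pupil. Values are illustrative.
import numpy as np

n = 256                                   # frequency-grid samples per axis
fx = np.fft.fftshift(np.fft.fftfreq(n))   # normalized frequency coordinates
FX, FY = np.meshgrid(fx, fx)
cutoff = 0.1                              # coherent cutoff frequency (arbitrary units)

ctf = (np.sqrt(FX**2 + FY**2) <= cutoff).astype(float)   # binary low-pass pupil

# Incoherent OTF = autocorrelation of the pupil, normalized to 1 at DC.
pupil_ft = np.fft.fft2(ctf)
otf = np.fft.fftshift(np.real(np.fft.ifft2(np.abs(pupil_ft) ** 2)))
otf /= otf.max()

# The OTF support is twice as wide as the CTF, but its high frequencies are attenuated,
# while the CTF passes everything inside its (smaller) cutoff without attenuation.
print(ctf.sum(), otf[n // 2, n // 2])
```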
Most real-world materials are optically rough on the scale of the optical wavelength, and their surfaces fluctuate randomly. Under coherent illumination, the scattering points along the surface act as secondary sources, and the laser light scattered from these points interferes, producing speckle on the image plane of the optical sensor. Strictly speaking, speckle is not noise in the conventional sense, since it contains high-resolution information about the object; however, as recorded by the sensor it limits the resolution of a coherent imaging system. In this work, speckle is therefore regarded as “coherent noise” following a negative exponential distribution [11]. There are two families of methods to suppress speckle: optical processing and digital image processing [12]. Optical processing methods suppress speckle by reducing the spatial coherence of the illumination or by diversifying the wavelength and the illumination angle [13], which may affect the reconstruction results of FP. Digital image processing methods are widely used in SAR and holography to reduce speckle, including the median filter [14], the Lee filter [15], the nonlocal means algorithm (NLM) [16], and the block-matching three-dimensional (3D) algorithm (BM3D) [17,18]. However, most existing FP research avoids the speckle problem by using thin biological samples with a smooth phase. Holloway et al. first proposed a simple wavelet-based method to suppress laser speckle in long-range FP imaging [3]. Removing the speckle from LR images captured by a macroscopic FP setup requires stronger regularization and a more effective reconstruction algorithm.

2.3. Phase Retrieval

Phase retrieval algorithms, which recover the input signal from only the intensity of the output, play an important role in many modern computational imaging systems [19,20,21]. FP reconstruction can be viewed as a typical phase retrieval problem: the magnitudes of the complex field recorded by the image sensor are used to retrieve the missing phase information and reconstruct the HR image.
Several algorithms have been proposed to solve this problem. The AP algorithm is simple to use and fast to converge, but it is sensitive to measurement noise and requires long exposure times to capture high signal-to-noise-ratio (SNR) inputs, which restricts its range of application. Bian et al. extended Wirtinger flow optimization to FP reconstruction [22]. Wirtinger flow optimization for Fourier ptychography (WFP) minimizes the intensity error between estimated LR images and the corresponding measurements using a gradient scheme and Wirtinger calculus [23,24]. Compared with WFP, reshaped Wirtinger flow optimization is based on the amplitude error, which has advantages in statistical and computational efficiency [25]. Poisson maximum likelihood has been proposed for better signal modeling in FPM [26]. However, recent research by Yeh et al. shows that the gradient of the Poisson log-likelihood function is very similar to the amplitude-based cost function [27], and the experiments of Metzler et al. find that it performs slightly worse than the amplitude-based cost function [28]. Reweighted amplitude flow (RAF) is an amplitude-based phase retrieval algorithm that makes no extra assumption on the signal to be recovered [29]. Moreover, it reweights the gradient of the loss function in each iteration to obtain reliable directions pointing toward the true value.

3. Methods

3.1. Image Formation Model

We first describe the image formation model of the proposed sub-diffraction imaging system. As in other FP methods, a series of LR images is captured under active illumination. Consider an object described by a complex reflection function $r(x_0, y_0)$, where $x_0$ and $y_0$ are lateral coordinates on the object plane. The limited camera aperture is described by the pupil function $P(x, y)$, where $x$ and $y$ denote the frequency coordinates in the Fourier plane. For a diffraction-limited setup with a circular entrance pupil, $P(x, y)$ can be regarded as a binary low-pass filter with $P(x, y) = 1$ when the radius is less than or equal to the cutoff frequency $\xi_c$; that is, it transmits all spatial frequencies within a radius of $\xi_c$. In Fourier space, the coherent field distribution is given by
$$U(x, y) = \mathcal{F}[r(x_0, y_0)]\, P(x, y), \qquad (1)$$
where $\mathcal{F}$ is the Fourier transform operator. To capture a series of band-limited LR images, the aperture pupil is recentered at $m$ different locations $(c_x^{(i)}, c_y^{(i)})$ in the Fourier domain, $i = 1, 2, \dots, m$. Since real imaging sensors record only the (squared) magnitudes of the image pixels and lose the phase information, the captured images of the Fourier ptychographic setup follow
$$I(x, y) = \left| \mathcal{F}^{-1}\!\left( U(x - c_x^{(i)}, y - c_y^{(i)}) \right) \right|^2 = \left| \mathcal{F}^{-1}\!\left\{ P(x - c_x^{(i)}, y - c_y^{(i)})\, \mathcal{F}[r(x_0, y_0)] \right\} \right|^2, \qquad (2)$$
where $\mathcal{F}^{-1}$ denotes the inverse Fourier transform operator. Once the complex field $\mathcal{F}[r(x_0, y_0)]$ is recovered, it is straightforward to reconstruct the HR image of the object. The whole imaging process is shown in Figure 2.
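A minimal Python/NumPy sketch of this forward model (Equations (1) and (2)) is given below. It is not the authors' code: the object, pupil radius, and scan grid are toy assumptions chosen only to illustrate the pupil-shifting and intensity-only capture.

```python
# Sketch of Eqs. (1)-(2): a circular pupil is recentered at m positions in the
# Fourier domain and only intensities are recorded. Sizes/positions are illustrative.
import numpy as np

def capture_lr_images(reflectance, pupil_radius, centers):
    """Simulate band-limited, intensity-only LR captures for each pupil center."""
    n = reflectance.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(reflectance))    # F[r(x0, y0)], DC centered
    yy, xx = np.mgrid[0:n, 0:n]
    images = []
    for (cx, cy) in centers:
        pupil = ((xx - cx) ** 2 + (yy - cy) ** 2) <= pupil_radius ** 2   # shifted P
        field = np.fft.ifft2(np.fft.ifftshift(spectrum * pupil))         # F^-1{P*F[r]}
        images.append(np.abs(field) ** 2)                                # intensity only
    return np.asarray(images)

# Toy example: random complex reflectance, 3x3 grid of overlapping apertures.
rng = np.random.default_rng(0)
obj = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
grid = [(64 + dx, 64 + dy) for dx in (-20, 0, 20) for dy in (-20, 0, 20)]
lr = capture_lr_images(obj, pupil_radius=30, centers=grid)
print(lr.shape)   # (9, 128, 128)
```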

3.2. Optimization Framework

In this subsection, we introduce the new optimization framework for long range FP reconstruction. We first review the reweighted amplitude flow (RAF) algorithm, then introduce the RAF formulation into macroscopic FP reconstruction, and incorporate the Regularization by Denoising (RED) constraint.

3.2.1. Review of Reweighted Amplitude Flow Algorithm

The reweighted amplitude flow algorithm is a recently reported phase retrieval technique that finds an $n$-dimensional solution $x$ to a system of quadratic equations of the form $y_i = |\langle a_i, x \rangle|^2$ [29], where $a_i$ are feature/sensing vectors, $x$ is the unknown signal vector, and $y_i$ are the given observations. Following the least-squares criterion, solving this system of quadratic equations is recast as the minimization problem
$$L(x) = \frac{1}{2m} \sum_{i=1}^{m} \left( \sqrt{y_i} - |\langle a_i, x \rangle| \right)^2, \qquad (3)$$
where $m$ is the number of measurements. RAF develops a flexible scheme that reweights the gradients of the loss function, and $x$ is updated in a gradient descent manner as
$$x^{t+1} = x^{t} - \mu_t \nabla L_{rw}(x^{t}) = x^{t} - \frac{\mu_t}{m} \sum_{i=1}^{m} w_i^{t}\, \nabla L(x^{t}; y_i), \qquad (4)$$
where $\nabla L_{rw}(x^{t})$ is the reweighted gradient at the current point $x^{t}$, $\mu_t$ is the step size, $\nabla L(x^{t}; y_i)$ is the gradient contributed by the $i$-th measurement, and $\{w_i\}_{1 \le i \le m}$ are the weights.

3.2.2. Reweighted Amplitude Flow for Fourier Ptychography (RAFP)

To introduce the RAF framework into FP reconstruction, we rewrite the imaging model (Equation (2)) as
$$y_i = |A_i z|^2, \qquad (5)$$
where $y_i$ denotes the LR intensity measurements $I(x, y)$, $A_i$ is a linear transform matrix incorporating the inverse Fourier transform $\mathcal{F}^{-1}$ and the low-pass filtering by the pupil function $P(x, y)$, and $z$ is the HR spectrum of the object. The reconstruction of the HR image thus translates into a standard phase retrieval problem. Following [29], we derive the RAF optimization algorithm for this imaging model. The model becomes
$$\min_{z}\; L(z) = \frac{1}{2m} \sum_{i=1}^{m} \left( |A_i z| - \sqrt{y_i} \right)^2. \qquad (6)$$
Similar to Equation (4), this leads to the update formula
$$z^{t+1} = z^{t} - \mu_t \nabla L_{rw}(z^{t}) = z^{t} - \frac{\mu_t}{m} \sum_{i=1}^{m} w_i^{t} \left( A_i z - \sqrt{y_i}\, \frac{A_i z}{|A_i z|} \right) A_i^{H}, \qquad (7)$$
where $t$ is the iteration count, $A_i^{H}$ is the conjugate transpose of $A_i$, and the convention $\frac{A_i z}{|A_i z|} = 0$ is adopted when $A_i z = 0$.
There are, however, two main differences from the original RAF algorithm: the choice of the initial value and the choice of the weights. For initialization, the root-mean-squared measurements estimator is adopted instead of spectral initialization. Experiments demonstrate that, although spectral initialization succeeds for Gaussian measurements [23,25,29,30,31], it fails for the FP setup (see [32] for details); this surprising behavior is also noted in [22,33]. Bian et al. adopt the central LR image as the initial value for FPM imaging; here, the root-mean-squared measurements are employed as the initial estimate, which experiments show to be slightly better than the central image. Next, we discuss the choice of the weights for the gradients.
Assume that $\phi$ is the unknown signal vector and $z$ is the current recovered value. Unfortunately, the averaged gradient in Equation (7) may not point toward the true $\phi$ when the current iterate $z$ is not close to $\phi$. Worse still, the summands $\left( A_i z - \sqrt{y_i}\, \frac{A_i z}{|A_i z|} \right) A_i^{H}$ can give rise to misleading search directions due to the erroneously estimated signs $\frac{A_i z}{|A_i z|} \neq \frac{A_i \phi}{|A_i \phi|}$, which may drag $z$ away from $\phi$. To address this challenge, the RAFP algorithm introduces suitable weights into the Fourier ptychography framework. The weight at the current point $z^{t}$ is given by
$$w_i = \frac{|A_i z|^2 / y_i}{|A_i z|^2 / y_i + \beta_i}, \qquad (8)$$
where $\{\beta_i\}_{i=1}^{m}$ are pre-selected parameters, $y_i$ is the intensity measurement, and $|A_i z|^2$ is the recovered value corresponding to it. The ratio $|A_i z|^2 / y_i$ can be seen as a confidence score on the reliability of the corresponding gradient and satisfies $|A_i z|^2 / y_i \le 1$ because of the presence of noise. The larger the confidence score $|A_i z|^2 / y_i$, the closer the recovered value is to the measurement. This is a simple way to assign small weights to spurious gradients and large weights to reliable ones, so that Equation (7) points toward the true value with high probability. In this work, intensity-based weights are adopted instead of amplitude-based weights to avoid the effect of phase noise on the gradient direction. Figure 3 compares the peak signal-to-noise ratio (PSNR) of the RAFP algorithm with different weights; the intensity-based weights improve the PSNR by about 7 dB compared with unweighted gradients.
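As a concrete illustration of Equations (7) and (8), the Python/NumPy sketch below (not the authors' MATLAB code) computes the weighted gradient contribution of a single pupil position. It models $A_i$ as "mask the centered spectrum with the i-th pupil, then inverse FFT" and applies the weight of Equation (8) pixel-wise; these choices, along with the parameter values, are our assumptions.

```python
# Hedged sketch of one weighted gradient term of Eq. (7) using the weight of Eq. (8).
import numpy as np

def weighted_gradient_term(z, pupil_i, y_i, beta=0.8, eps=1e-12):
    """Return w_i * (A_i z - sqrt(y_i) * A_i z / |A_i z|) mapped back by A_i^H."""
    Az = np.fft.ifft2(np.fft.ifftshift(z * pupil_i))      # A_i z: low-pass field
    amp = np.abs(Az) + eps
    ratio = amp**2 / (y_i + eps)                          # |A_i z|^2 / y_i
    w = ratio / (ratio + beta)                            # Eq. (8), pixel-wise
    residual = Az - np.sqrt(y_i) * Az / amp               # amplitude mismatch
    return np.fft.fftshift(np.fft.fft2(w * residual)) * pupil_i   # apply A_i^H
```

Summing such terms over all pupil positions and scaling by $\mu_t/m$ reproduces the update in Equation (7).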

3.2.3. Fourier Ptychography via Regularization by Denoising

Here we show how the Regularization by Denoising (RED) framework [34] can be adapted to Fourier ptychography. RED is an efficient approach to imaging inverse problems that can incorporate an arbitrary denoiser to regularize an arbitrary inverse problem. It uses the denoising engine to define the regularization term, making the overall objective function clearer and better defined. To apply RED to Fourier ptychography, we construct an energy function for the proposed imaging model of the form
$$E(z) = L(z) + \lambda R(z), \qquad (9)$$
where $L(z)$ is a data-fidelity term and $R(z)$ is a regularization term. The data-fidelity term $L(z)$ follows Equation (6), which encourages $|A_i z|$ to match the captured images $y_i$. Following [34], the RED framework defines the regularization term $R(z)$ as
$$R(z) = \frac{1}{2} z^{H} \left[ z - f(z) \right]. \qquad (10)$$
In this expression, $f(z)$ is the denoising engine; any filter can be plugged into the regularization term. Observing Equation (10), we note that this regularizer not only penalizes the residual difference between the recovered value and its denoised self, but also penalizes the correlation between the recovered value and that residual, which prevents $f(z)$ from removing structure from $z$ [28]. The energy function of FP reconstruction with the RED framework can then be written as
$$\arg\min_{z}\; E(z) = \frac{1}{2m} \sum_{i=1}^{m} \left( |A_i z| - \sqrt{y_i} \right)^2 + \frac{\lambda}{2} z^{H} \left[ z - f(z) \right]. \qquad (11)$$
There are several methods to solve this problem, including gradient descent, ADMM, and a fixed-point strategy. Given the gradient of the energy function $E(z)$, gradient descent is the simplest option and is adopted here. The gradient of Equation (11) is readily available as
$$\nabla E(z) = \nabla_{w} L(z) + \lambda \nabla R(z), \qquad (12)$$
where the first term is the reweighted gradient of the data-fidelity term and the gradient of the regularization term is $\lambda \nabla R(z) = \lambda \left( z - f(z) \right)$ (see [34] for details). The final update formula is then
$$z^{t+1} = z^{t} - \mu_t \nabla E(z^{t}) = z^{t} - \mu_t \left[ \frac{1}{m} \sum_{i=1}^{m} w_i \left( A_i z - \sqrt{y_i}\, \frac{A_i z}{|A_i z|} \right) A_i^{H} + \lambda \left( z - f(z) \right) \right]. \qquad (13)$$
The whole imaging reconstruction framework can be readily summarized in Algorithm 1.
Algorithm 1 The imaging reconstruction framework
Input: Captured LR images $\{y_i\}_{i=1}^{m}$; sampling matrices $\{A_i\}_{i=1}^{m}$.
Output: Recovered spectrum $z$.
1: Parameters: maximum number of iterations $T$; step size $\mu$; weighting parameters $\beta_i = 0.8$; regularization parameter $\lambda = 0.01$.
2: Initialization: $z^{0} = \mathcal{F}\left( \sqrt{\frac{1}{m} \sum_{i=1}^{m} y_i} \right)$.
3: Loop: for $t = 0$ to $T - 1$:
   $z^{t+1} = z^{t} - \mu_t \left[ \frac{1}{m} \sum_{i=1}^{m} w_i \left( A_i z - \sqrt{y_i}\, \frac{A_i z}{|A_i z|} \right) A_i^{H} + \lambda \left( z - f(z) \right) \right]$,
   where $w_i = \frac{|A_i z|^2 / y_i}{|A_i z|^2 / y_i + \beta_i}$ for all $1 \le i \le m$.
4: end
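For concreteness, the sketch below mirrors Algorithm 1 in Python/NumPy with a median filter standing in for the denoiser f(·). It is not the authors' released MATLAB implementation; in particular, applying the RED term to the spatial-domain estimate, the pixel-wise weights, and the fixed step size are our assumptions.

```python
# Hedged NumPy sketch of Algorithm 1: RAFP gradient steps plus a RED term with a
# median-filter denoiser. y is an (m, N, N) stack of LR intensities and pupils is
# an (m, N, N) stack of binary masks for the shifted CTF; both are assumed inputs.
import numpy as np
from scipy.ndimage import median_filter

def denoise(img, size=3):
    """Denoiser f(.): median filter applied to real and imaginary parts."""
    return median_filter(img.real, size) + 1j * median_filter(img.imag, size)

def reconstruct(y, pupils, T=100, mu=0.2, beta=0.8, lam=0.01, eps=1e-12):
    m = len(y)
    # Step 2: initialize the spectrum from the root-mean-squared measurements.
    z = np.fft.fftshift(np.fft.fft2(np.sqrt(np.mean(y, axis=0))))
    for _ in range(T):                                   # Step 3: main loop
        grad = np.zeros_like(z)
        for pupil, yi in zip(pupils, y):
            Az = np.fft.ifft2(np.fft.ifftshift(z * pupil))
            amp = np.abs(Az) + eps
            ratio = amp**2 / (yi + eps)
            w = ratio / (ratio + beta)                   # weights of Eq. (8)
            residual = Az - np.sqrt(yi) * Az / amp
            grad += np.fft.fftshift(np.fft.fft2(w * residual)) * pupil
        # RED term lam * (x - f(x)), evaluated on the spatial estimate and mapped
        # back to the Fourier domain (one reasonable reading of f(z)).
        x = np.fft.ifft2(np.fft.ifftshift(z))
        red = np.fft.fftshift(np.fft.fft2(x - denoise(x)))
        z = z - mu * (grad / m + lam * red)
    return np.fft.ifft2(np.fft.ifftshift(z))             # recovered HR complex image
```

Any other denoiser (wavelet, Lee, BM3D) could be substituted for the median filter in denoise() without changing the rest of the loop.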

4. Experiments and Results

In this section, a series of experiments is conducted to demonstrate the performance of the proposed algorithm.

4.1. Numerical Simulation

We now test the efficacy of the proposed method through a number of simulated image reconstruction experiments. The parameters of the simulated sub-diffraction imaging setup are set as follows: the illumination wavelength is $\lambda = 632.8\ \mathrm{nm}$, the aperture diameter is $D = 2.5\ \mathrm{mm}$, the focal length of the lens is $f = 75\ \mathrm{mm}$, the number of measurements is $m = 15 \times 15$ (vertical $\times$ horizontal), and the overlap ratio between two adjacent aperture positions is 65%. We assume that the object (a $512 \times 512$ pixel 'Lena' image) is $6\ \mathrm{cm} \times 6\ \mathrm{cm}$ in size and is located $d = 5\ \mathrm{m}$ from the camera. The cutoff frequency of the coherent transfer function (CTF) is then $D / (2 \lambda d) \approx 395\ \mathrm{m}^{-1}$, badly blurring the important features of the object. The simulated low-resolution images are obtained according to Equation (2).
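The quoted cutoff can be verified with a one-line computation, assuming the coherent cutoff $f_c = D/(2\lambda d)$ for this geometry:

```python
# Quick check of the coherent cutoff frequency for the simulated setup.
D, wavelength, d = 2.5e-3, 632.8e-9, 5.0       # aperture (m), wavelength (m), distance (m)
f_c = D / (2 * wavelength * d)                 # object-space cutoff in cycles per meter
print(round(f_c))                              # ~395 m^-1
```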
Gaussian noise and speckle noise are introduced to simulate noise corruption. Adding Gaussian noise to the captured LR images is straightforward. Speckle, however, arises because the underlying random phase of the object distorts the intensity field, so it is not additive noise in the usual sense. To acquire intensity images with speckle-like features, a Gaussian random surface is simulated for the object. Following previous research [35,36], the height function of the surface is varied to introduce a rapidly varying random phase for the object, and Equation (2) is then used to sample the object and obtain LR images corrupted by "speckle noise".
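A hedged sketch of this speckle-like simulation is shown below; the correlation length, the roughness scale, and the round-trip phase model 4πh/λ are our assumptions rather than the exact surface statistics used in [35,36].

```python
# Sketch: give the object a rapidly varying random phase from a smoothed random
# height map, so that sampling it with Eq. (2) yields speckle-like LR intensities.
import numpy as np
from scipy.ndimage import gaussian_filter

def add_rough_surface_phase(amplitude, wavelength=632.8e-9, sigma_height=2e-6,
                            corr_pixels=2.0, seed=0):
    """Return a complex object whose phase models an optically rough surface."""
    rng = np.random.default_rng(seed)
    height = gaussian_filter(rng.standard_normal(amplitude.shape), corr_pixels)
    height *= sigma_height / height.std()          # scale to the desired roughness
    phase = 4.0 * np.pi * height / wavelength      # round-trip path difference
    return amplitude * np.exp(1j * phase)
```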
The pipeline of numerical simulation experiments is described in Figure 4.

4.2. Criteria

In addition to visual results, two objective quantitative indexes are used in the experiments: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Both are widely adopted to assess the quality of processed images against a benchmark. PSNR describes the intensity difference between the recovered image and the ground truth and is larger for higher-quality images. SSIM measures the structural closeness between two images and is lower when the two images share less structural information [37].
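For reference, both metrics are available in scikit-image; the short sketch below (an assumption about tooling, not the authors' evaluation code) computes them for images normalized to a known peak value.

```python
# Compute PSNR and SSIM against a ground-truth image using scikit-image.
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(recovered, ground_truth, peak=1.0):
    """Return (PSNR in dB, SSIM) for images with the given peak value."""
    # PSNR = 10 * log10(peak^2 / MSE); skimage computes the same quantity.
    psnr = peak_signal_noise_ratio(ground_truth, recovered, data_range=peak)
    ssim = structural_similarity(ground_truth, recovered, data_range=peak)
    return psnr, ssim
```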

4.3. Results

First, different denoisers $f(z)$ are plugged into the RED framework to study their effect on the algorithm's performance. The simulated captured images are corrupted with Gaussian noise and with laser speckle noise, respectively. The former is common in real imaging systems and is mostly caused by the photoelectric effect and dark current. Laser speckle noise is caused by the spatial-temporal coherence of the illumination source; we simulate it by imaging an optically rough object, i.e., by introducing a random phase during the imaging process. Speckle noise is in fact the dominant noise in a long-distance sub-diffraction imaging system based on Fourier ptychography [3]. Specifically, the standard deviation $\sigma$ of the Gaussian noise ranges from 0.001 to 0.005, and the strength $\alpha$ of the speckle noise ranges from 1 to 5. Four denoisers are plugged into the proposed scheme: the median filter, the wavelet filter, the Lee filter, and BM3D. Table 1 shows the quantitative results of the proposed method under varying amounts of Gaussian noise. All denoisers produce similar reconstruction results; even the simplest median filter acts as a reasonable regularizer for our reconstruction problem, and BM3D is slightly better than the other denoising engines in both PSNR and SSIM. Table 2 shows the quantitative results under varying amounts of speckle noise. RED significantly improves the reconstruction compared with omitting the regularization term: BM3D improves the PSNR by about 10 dB and the SSIM by about 0.2, and it remains robust even at high noise strengths. This is because BM3D combines the advantages of transform-domain denoising and local averaging, achieving state-of-the-art denoising performance in terms of both PSNR and SSIM. In short, the RED framework can efficiently suppress different types of noise by plugging in a reasonable denoising engine. The regularizer not only penalizes the residual difference between the recovered image and its denoised self, but also penalizes the correlation between the recovered image and that residual (Equation (10)).
We then compare the proposed method (RAFP with RED) against alternating projection (AP) [7,8], Wirtinger flow optimization for Fourier ptychography (WFP) [22], and truncated amplitude flow (TAF) [31]. AP is the baseline algorithm for Fourier ptychographic reconstruction and the only one previously used for long-distance sub-diffraction imaging. WFP and TAF are both recent phase retrieval algorithms; Wirtinger flow optimization is based on intensity measurements, while TAF is based on amplitude measurements. Because the gradient of the Poisson log-likelihood function is very similar to the amplitude-based cost function [27,28], PWFP [33] is not included in the comparison. For the proposed algorithm, the median filter and BM3D are used as the denoisers, respectively (RAFP-median and RAFP-BM3D).
Figure 5a,b show the quantitative reconstruction results of the above methods under Gaussian noise and speckle noise, respectively. The AP algorithm degrades significantly as the noise increases. Figure 5a shows that WFP is robust under Gaussian noise corruption and achieves reliable results even at high noise levels. Figure 5b shows that WFP performs poorly under speckle noise because of its Gaussian noise assumption: it cannot recognize and filter out the multiplicative noise. TAF can be seen as a special case of the reweighted amplitude flow (RAF) algorithm with a particular choice of weights $w_i$ (taking 0/1 values), so its performance is worse than that of our proposed method. The results show that the proposed algorithm clearly outperforms the state of the art because it incorporates the advantages of RAFP and the RED framework: it reweights the gradients of the loss function to reconstruct the true solution with high probability, and it recognizes and filters outliers through the RED term. Comparing RAFP-BM3D with RAFP-median, the choice of filter does not have a significant impact on the quantitative results, which reduces the complexity of our reconstruction model.
The running time of each method is listed in Table 3. All algorithms are implemented in MATLAB R2015a on a computer with an Intel Xeon 1.9 GHz CPU, 16 GB RAM, and a 64-bit Windows 7 system. As with other computational imaging techniques, the spatial resolution is increased at the expense of relatively high computational cost; FP reconstruction requires a large amount of computation, which limits its application scope to a certain extent, especially for long-distance real imaging implementations. Table 3 shows that the proposed methods (RAFP-median and RAFP-BM3D) converge faster than WFP and TAFP but are more time-consuming than AP. RAFP-BM3D achieves the best reconstruction results but requires the most running time, whereas RAFP-median still achieves satisfactory results at a relatively low runtime and may therefore have wider applicability.
Figure 6 shows a visual comparison of the different methods. Here, a complex object is used to validate their performance: the 'Lena' image and the 'Peppers' image are used as the object's amplitude and phase, respectively, the Gaussian noise level is fixed at $\sigma = 0.004$, and the speckle noise level is fixed at $\alpha = 3$. The captured LR images have very low SNR due to the noise corruption, as shown in Figure 6a. Figure 6b presents the visual results of the above methods under Gaussian noise and speckle noise, which are consistent with the quantitative findings. The reconstruction results of AP are very noisy, although its running time is the shortest. WFP is based on a Gaussian assumption, which leads to satisfactory results under Gaussian noise corruption, but it is not effective at recognizing and filtering speckle noise. Compared with WFP, TAFP better preserves image details but leaves serious artifacts. The proposed method outperforms the other three methods by jointly performing reconstruction and filtering, achieving state-of-the-art visual results even with the simplest median filter. RAFP-BM3D removes noise more thoroughly, at the expense of higher computational cost.

5. Discussion and Conclusions

In this work, a new FP reconstruction method is proposed for sub-diffraction imaging, enabling imaging beyond the diffraction limit of an optical system. First, the RAF scheme is applied to FP: it reweights the gradients of the loss function and provides reliable directions pointing toward the true value with high probability. Second, the RED framework is adapted to our reweighted amplitude-based loss function: it reduces noise during the recovery process by plugging an arbitrary denoiser into the regularization term, and the denoiser can be chosen according to the noise type in the measurements, which is very flexible. The performance of our framework is compared with three existing algorithms: AP, WFP, and TAFP. The experimental results show that the proposed method not only improves the PSNR and SSIM of the recovered image but is also robust to different types of noise. In real sub-diffraction imaging implementations in particular, where the main corruption is laser speckle rather than Gaussian noise, the proposed method outperforms the other algorithms and achieves the best results, making it well suited for FP reconstruction.
The choice of denoiser greatly affects the running time of the proposed method. As a state-of-the-art filter, BM3D requires the most running time while achieving the best results. As shown in Section 4, RAFP-median performs only slightly worse than RAFP-BM3D but saves a great deal of running time, so the denoiser can be chosen according to the system's requirements on time and precision, which is flexible and convenient in practical applications. Compared with the AP algorithm, however, the proposed method is still limited in running efficiency. Shortening its running time is therefore future work; parallel computation techniques will be employed to accelerate the proposed method.

Author Contributions

Conceptualization, D.W. and Z.S.; Methodology, Z.L.; Software, W.Z. and Z.L.; Validation, G.L. and X.W.; Writing-Original Draft Preparation, Z.L.; Writing-Review & Editing, Z.S.

Funding

This research was funded by National High Technology Research and Development Program of China 863 Program (No. Y512171800) and Youth Innovation Promotion Association (No.1188000111).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kang, M.; Chaudhuri, S. Super-resolution image reconstruction. IEEE Signal Process. Mag. 2003, 20, 19–20.
  2. Holloway, J.; Asif, M.S.; Sharma, M.K.; Matsuda, N.; Horstmeyer, R.; Cossairt, O.; Veeraraghavan, A. Toward long distance, sub-diffraction imaging using coherent camera arrays. IEEE Trans. Comput. Imaging 2016, 2, 251–265.
  3. Holloway, J.; Wu, Y.; Sharma, M.K.; Cossairt, O.; Veeraraghavan, A. SAVI: Synthetic apertures for long-range, subdiffraction-limited visible imaging using Fourier ptychography. Sci. Adv. 2017, 3, e1602564.
  4. Maiden, A.M.; Humphry, M.J.; Zhang, F.; Rodenburg, J.M. Superresolution imaging via ptychography. JOSA A 2011, 28, 604–612.
  5. Zheng, G.; Horstmeyer, R.; Yang, C. Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photonics 2013, 7, 739.
  6. Dong, S.; Horstmeyer, R.; Shiradkar, R.; Guo, K.; Ou, X.; Bian, Z.; Xin, H.; Zheng, G. Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging. Opt. Express 2014, 22, 13586–13599.
  7. Fienup, J.R. Phase retrieval algorithms: A comparison. Appl. Opt. 1982, 21, 2758–2769.
  8. Fienup, J.R. Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint. JOSA A 1987, 4, 118–123.
  9. Goodman, J.W. Introduction to Fourier Optics; Roberts and Company Publishers: Greenwood Village, CO, USA, 2005; pp. 31–118.
  10. Pacheco, S.; Salahieh, B.; Milster, T.; Rodriguez, J.J.; Liang, R. Transfer function analysis in epi-illumination Fourier ptychography. Opt. Lett. 2015, 40, 5343–5346.
  11. Goodman, J.W. Speckle Phenomena in Optics: Theory and Application; Roberts and Company Publishers: Greenwood Village, CO, USA, 2007; pp. 1–57.
  12. Huang, X.; Jia, Z.; Zhou, J.; Yang, J.; Kasabov, N. Speckle reduction of reconstructions of digital holograms using Gamma-correction and filtering. IEEE Access 2018, 6, 5227–5235.
  13. Di, C.G.; El, M.A.; Ferraro, P.; Dale, R.; Coppola, G.; Dale, B.; Coppola, G.; Dubois, F. 4D tracking of clinical seminal samples for quantitative characterization of motility parameters. Biomed. Opt. Express 2014, 5, 690–700.
  14. Garcia-Sucerquia, J.; Ramirez, J.A.H.; Prieto, D.V. Reduction of speckle noise in digital holography by using digital image processing. Optik 2005, 116, 44–48.
  15. Lee, J.S. Digital image enhancement and noise filtering by use of local statistics. IEEE Trans. Pattern Anal. Mach. Intell. 1980, PAMI-2, 165–168.
  16. Buades, A.; Coll, B.; Morel, J.-M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005.
  17. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
  18. Metzler, C.A.; Maleki, A.; Baraniuk, R.G. BM3D-PRGAMP: Compressive phase retrieval based on BM3D denoising. In Proceedings of the 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Seattle, WA, USA, 11–15 July 2016.
  19. Shechtman, Y.; Eldar, Y.C.; Cohen, O.; Chapman, H.N.; Miao, J.; Segev, M. Phase retrieval with application to optical imaging: A contemporary overview. IEEE Signal Process. Mag. 2015, 32, 87–109.
  20. Netrapalli, P.; Jain, P.; Sanghavi, S. Phase retrieval using alternating minimization. IEEE Trans. Signal Process. 2015, 18, 4814–4826.
  21. Katkovnik, V. Phase retrieval from noisy data based on sparse approximation of object phase and amplitude. arXiv 2017, arXiv:1709.01071.
  22. Bian, L.; Suo, J.; Zheng, G.; Guo, K.; Chen, F.; Dai, Q. Fourier ptychographic reconstruction using Wirtinger flow optimization. Opt. Express 2015, 23, 4856–4866.
  23. Candes, E.J.; Li, X.; Soltanolkotabi, M. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Trans. Inf. Theory 2015, 61, 1985–2007.
  24. Cai, T.T.; Li, X.; Ma, Z. Optimal rates of convergence for noisy sparse phase retrieval via thresholded Wirtinger flow. Ann. Stat. 2016, 44, 2221–2251.
  25. Zhang, H.; Liang, Y. Reshaped Wirtinger flow for solving quadratic system of equations. In Proceedings of the Advances in Neural Information Processing Systems 29 (NIPS 2016), Barcelona, Spain, 5–10 December 2016.
  26. Chen, Y.; Candes, E. Solving random quadratic systems of equations is nearly as easy as solving linear systems. Commun. Pure Appl. Math. 2017, 70, 822–883.
  27. Yeh, L.H.; Dong, J.; Zhong, J.; Tian, L.; Chen, M.; Tang, G.; Soltanolkotabi, M.; Waller, L. Experimental robustness of Fourier ptychography phase retrieval algorithms. Opt. Express 2015, 23, 33214–33240.
  28. Metzler, C.A.; Schniter, P.; Veeraraghavan, A.; Baraniuk, R.G. prDeep: Robust phase retrieval with flexible deep neural networks. arXiv 2018, arXiv:1803.00212.
  29. Wang, G.; Giannakis, G.B.; Saad, Y.; Chen, J. Phase retrieval via reweighted amplitude flow. IEEE Trans. Signal Process. 2018, 66, 2818–2833.
  30. Wang, G.; Giannakis, G.B.; Eldar, Y.C. Solving systems of random quadratic equations via truncated amplitude flow. IEEE Trans. Inf. Theory 2018, 64, 773–794.
  31. Wang, G.; Zhang, L.; Giannakis, G.B.; Akcakaya, M.; Chen, J. Sparse phase retrieval via truncated amplitude flow. IEEE Trans. Signal Process. 2017, 66, 479–491.
  32. Jagatap, G.; Chen, Z.; Hegde, C.; Vaswani, N. Sub-diffraction imaging using Fourier ptychography and structured sparsity. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing, Calgary, AB, Canada, 15–20 April 2018.
  33. Bian, L.; Suo, J.; Chung, J.; Ou, X.; Yang, C.; Chen, F.; Dai, Q. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient. Sci. Rep. 2017, 6, 27384.
  34. Romano, Y.; Elad, M.; Milanfar, P. The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci. 2017, 10, 1804–1844.
  35. Cheng, C.F.; Qi, D.P.; Liu, D.L.; Teng, S.Y. The computational simulations of the Gaussian correlation random surface and its light-scattering speckle field and the analysis of the intensity probability density. Acta Phys. Sin. 1999, 48, 1643–1648.
  36. Fujii, H.; Uozumi, J.; Asakura, T. Computer simulation study of image speckle patterns with relation to object surface profile. JOSA 1976, 66, 1222–1236.
  37. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. Transfer functions of the imaging system with incoherent illumination, coherent illumination and macroscopic Fourier ptychography, respectively.
Figure 2. The imaging process of Fourier ptychography.
Figure 3. Peak signal to noise ratio (PSNR) for Reweighted Amplitude Flow for Fourier Ptychography (RAFP) algorithm with different weights.
Figure 4. Block scheme of the simulation experiments.
Figure 5. Quantitative comparison of the reconstruction results by different methods under Gaussian noise and speckle noise.
Figure 6. Visual comparison of the reconstruction results by different methods under Gaussian noise and speckle noise.
Table 1. PSNR (dB) and structure similarity (SSIM) of the proposed algorithm with different denoisers and varying amounts of Gaussian noise.

              σ = 0.001      σ = 0.002      σ = 0.003      σ = 0.004      σ = 0.005
              PSNR   SSIM    PSNR   SSIM    PSNR   SSIM    PSNR   SSIM    PSNR   SSIM
Without RED   29.72  0.90    28.93  0.88    28.43  0.87    28.10  0.86    27.86  0.85
RED-median    30.10  0.91    29.32  0.89    28.85  0.88    28.47  0.87    28.15  0.86
RED-wavelet   30.12  0.90    28.98  0.88    28.57  0.87    28.18  0.86    27.92  0.85
RED-BM3D      30.15  0.90    29.54  0.89    29.21  0.88    28.94  0.88    28.76  0.87
Table 2. PSNR (dB) and SSIM of the proposed algorithm with different denoisers and varying amounts of speckle noise.

                 α = 1          α = 2          α = 3          α = 4          α = 5
                 PSNR   SSIM    PSNR   SSIM    PSNR   SSIM    PSNR   SSIM    PSNR   SSIM
Without RED      24.08  0.82    21.36  0.74    19.20  0.66    17.78  0.60    16.70  0.54
RED-median       27.31  0.91    26.93  0.89    26.09  0.86    25.03  0.83    24.07  0.81
RED-Lee filter   26.93  0.89    26.69  0.88    25.38  0.84    24.46  0.81    23.52  0.78
RED-BM3D         28.45  0.91    27.25  0.88    26.30  0.86    25.91  0.84    25.52  0.82
Table 3. Comparison of running time between different algorithms.

                   AP    WFP   TAFP   RAFP-median   RAFP-BM3D
Iterations         100   350   300    200           140
Running time (s)   25    332   294    206           630
