Article

Long-Distance Sub-Diffraction High-Resolution Imaging Using Sparse Sampling

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 College of Materials Science and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(11), 3116; https://doi.org/10.3390/s20113116
Submission received: 13 April 2020 / Revised: 25 May 2020 / Accepted: 28 May 2020 / Published: 31 May 2020
(This article belongs to the Section Optical Sensors)

Abstract

How to image beyond the diffraction limit has always been an essential subject in the study of optical systems. One effective way to achieve this is Fourier ptychography (FP), which has been widely used in microscopic imaging. However, this microscopic measurement technique cannot be directly extended to imaging macroscopic objects at long distances. In this paper, a reconstruction algorithm is proposed that removes the need for oversampled low-resolution images, and it is successfully applied to macroscopic imaging. Compared with traditional FP, the proposed sub-sampling method significantly reduces the number of iterations in reconstruction. Experiments show that the proposed method can reconstruct the low-resolution images captured by the camera and achieve high-resolution imaging of long-range macroscopic objects.

1. Introduction

Many visible-light imaging applications, such as remote sensing and reconnaissance, require long-range imaging. Imaging from longer distances usually results in lower spatial resolution [1]. In this case, the resolution is no longer limited by the magnification of the system itself, but rather by the diffraction of the imaging system. To push back the diffraction limit, we can increase the lens aperture or the focal length. However, a larger lens makes the optical system more cumbersome, degrades the performance of the whole system, and requires additional optical elements to compensate for the extra aberrations of the larger aperture; a longer focal length rapidly raises the cost and system weight. Therefore, physically enlarging the lens aperture is not the best solution. To improve the spatial resolution, super-resolution reconstruction can instead be introduced into computational imaging, where the acquired low-resolution images are processed to obtain high-resolution images [2].
Recently, the Fourier ptychography (FP) technique [3,4,5,6,7] was proposed; it illuminates an object with light waves at different angles and records all of the resulting low-resolution diffraction patterns. Because of the various illumination angles, the spectrum collected in the frequency domain is equivalent to the spectrum obtained at different spatial positions of the sample [8]. In traditional imaging, because the camera aperture cannot be infinite, the objective lens acts as a low-pass filter: the spatial spectrum of the object is cut off at the cutoff frequency, resulting in low image resolution. FP combines aperture synthesis [9,10,11] and phase retrieval [12,13,14,15,16]. The high-frequency information of the object is collected by the camera under tilted illumination. Similar to a synthetic aperture, FP collects the information images and stitches them together in Fourier space to expand the passband of the optical system, which is equivalent to enlarging the effective numerical aperture (NA) [17]. In addition, phase imaging is achieved through LED arrays [18]. However, instead of directly recording phase information, FP recovers the lost phase through a phase retrieval algorithm. Finally, all low-resolution images are combined by the phase retrieval algorithm to recover a high-resolution image. In recent years, this method has provided a new approach to quantitative phase measurement [7], optical aberration correction [19,20], and high-resolution imaging [21]. It has been widely used in quantitative phase microscopy [18], high-speed in vitro live-cell imaging [22], super-resolution fluorescence microscopy [23], and other fields.
Recent work shows that FP can recover the lost phase information from intensity measurements alone and thereby bypass the diffraction limit of the optical system [17]. At present, this technique is mostly applied to microscopy [24,25,26,27], where the sample must be very thin [3] so that each illumination angle maps a unique passband of the spectrum onto a low-resolution image. To impose the translational spectrum constraints accurately without the thin-sample assumption, Dong [28] recovered a super-resolution image of an object placed in the far field simply by scanning the camera across different positions, and Holloway [29,30] used a coherently illuminated camera array to improve the resolution of long-distance imaging.
The biggest problem with FP is that it requires oversampling of the observations: the number of observations must exceed the dimensionality of the problem, sometimes by a large margin. This places severe demands on computer storage and processing, especially when the number of targets and the problem dimensions are large.
To meet the oversampling requirement, the method adopted in [29] is to tile the camera array densely on the same plane so that two adjacent cameras overlap by more than 60%. This is difficult to achieve in a single imaging pass and challenging to realize with production cameras. If the cameras do not overlap, the experimental results are far from ideal.
Building on Reference [29], we intend to use very few observation samples, in particular far fewer samples than the problem dimension, to address the oversampling problem in FP. We therefore propose a sub-sampling method and apply it to the existing FP framework to image macroscopic objects at long distances.

2. Principle

2.1. Optical Path Setting

Typical FP is based on a 4f system, whose optical path structure is shown in Figure 1. Suppose the amplitude function of the object is $g(x, y)$; after passing through lens 1, its expression on the Fourier spectrum plane is $G(f_x, f_y)$. If a light-passing hole of aperture size $d(f_x, f_y)$ is placed on the spectrum plane of the 4f system, then part of $G(f_x, f_y)$ is selectively transmitted to the charge-coupled device (CCD) by moving the circular hole. For the pinhole at position $i$, the intensity of the image obtained from the object is $I_i = \big|\mathcal{F}\big[d_i(f_x, f_y) \cdot G(f_x, f_y)\big]\big|^2$, where $\mathcal{F}$ is the Fourier transform operator. By processing these captured low-resolution images in the frequency domain, a high-resolution image of the object is finally obtained.
The physical way to realize the Fourier transform of a planar transmitting object is Fraunhofer diffraction [31]. To observe the far-field diffraction pattern of an object at a short distance, a traditional optical element, namely a lens, is usually required; the lens then plays the role of the Fourier transform of the object. For long-distance imaging systems, no such lens needs to be added [28,32]. Therefore, the optical path of Figure 1 is changed to that of Figure 2, and the aperture of the camera lens replaces the small hole in Figure 1 as the actual stop for far-field imaging. Simply moving the camera then synthesizes a broader passband in Fourier space, thereby bypassing the diffraction limit on the optical camera's resolution.
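To make this forward model explicit, the short Python sketch below simulates how a circular camera aperture placed at position $(c_x, c_y)$ in the Fourier plane low-pass filters the object spectrum and produces one low-resolution intensity image. The grid size, aperture radius, and the use of an inverse FFT for the final propagation (which flips the image relative to a second forward transform) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def circular_aperture(n, radius, cx, cy):
    """Binary pupil mask of the given radius centered at (cx, cy) on an n x n grid."""
    yy, xx = np.mgrid[0:n, 0:n]
    return ((xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2).astype(float)

def capture_lr_image(obj, radius, cx, cy):
    """Intensity recorded when the camera aperture sits at (cx, cy) in Fourier space."""
    G = np.fft.fftshift(np.fft.fft2(obj))             # object spectrum G(fx, fy)
    d_i = circular_aperture(obj.shape[0], radius, cx, cy)
    field = np.fft.ifft2(np.fft.ifftshift(d_i * G))   # field reaching the sensor plane
    return np.abs(field) ** 2                         # I_i = |F[d_i * G]|^2 up to an image flip

# Hypothetical example: a 512 x 512 object and a pupil roughly 75 pixels in diameter
obj = np.random.rand(512, 512)
I_center = capture_lr_image(obj, radius=37, cx=256, cy=256)
```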

2.2. Imaging Model

The computational reconstruction process of macro Fourier ptychography imaging requires the intensity information of the object recorded by the image sensor to restore the complete spectrum information of the object. This is mathematically equivalent to the phase recovery problem. To ensure that the problem is solvable, the Fourier ptychography reconstruction algorithm uses low-resolution intensity constraints and overlap rate constraints to reconstruct the object’s spectrum.
The basic principle of the phase recovery algorithm used is to iteratively update the low-resolution image to obtain the synthesized spectrum and obtain the high-resolution image. Figure 3 shows the reconstruction process of Fourier ptychography.
The reconstruction steps are as follows:
(1) Assume an initial sample spectrum $\varphi_0(x, y)$.
(2) Let the aperture function $D(x - x_i, y - y_i)$ represent the lens aperture with the camera at the $i$-th position; multiplying it by the object spectrum gives the spectrum intercepted by the aperture: $\varphi_i(x, y) = \varphi_0(x, y) \cdot D(x - x_i, y - y_i)$.
(3) Inverse Fourier transform the intercepted spectrum to the spatial domain, $\tilde{\varphi}_i = \mathcal{F}^{-1}(\varphi_i)$; replace the amplitude at the corresponding position with the measured amplitude (the square root of the intensity collected by the detector) while retaining the phase: $\tilde{\varphi}_i = \sqrt{I_i}\,\frac{\tilde{\varphi}_i}{|\tilde{\varphi}_i|}$.
(4) Fourier transform $\tilde{\varphi}_i$ back to the frequency domain to obtain $\varphi_i' = \mathcal{F}(\tilde{\varphi}_i)$ and update the spectrum: $\phi_i = \phi_0 + \eta\,\frac{D_i^{*}\,|D_i|}{(|D_i|^2 + \gamma)\,\max(|D_i|)}\,(\varphi_i' - \varphi_0)$. Here, $\eta$ is the forgetting factor, which sets the weight between the previous and the next iteration value and affects the convergence rate, and $\gamma$ is an adjustment factor that keeps the denominator from being zero.
(5) Update the spectrum at the $(i+1)$-th position, and so on, until the spectrum at all positions has been updated; this completes one iteration.
(6) Continue iterating until the preset number of iterations $t$ is reached or the iteration error falls below the threshold.
(7) Take the final synthesized spectrum $\phi_t$ and the squared modulus of its inverse Fourier transform to obtain the reconstructed image: $I_t = |\mathcal{F}^{-1}(\phi_t)|^2$.
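To make the loop in steps (1)-(7) concrete, the following Python sketch implements the update with NumPy. It is an illustration only: for simplicity the low-resolution images are assumed to live on the same grid as the high-resolution spectrum, and the parameter values ($\eta$, $\gamma$, iteration count) are placeholders rather than the settings used later in the experiments.

```python
import numpy as np

def fp_reconstruct(lr_images, pupils, hr_shape, eta=1.0, gamma=1e-3, n_iter=30):
    """Fourier ptychographic reconstruction following steps (1)-(7).

    lr_images : sequence of measured low-resolution intensity images I_i
    pupils    : sequence of aperture functions D(x - x_i, y - y_i) on the spectrum grid
    """
    phi = np.ones(hr_shape, dtype=complex)                     # (1) initial spectrum phi_0
    for _ in range(n_iter):                                    # (6) repeat up to the preset iteration count
        for I_i, D_i in zip(lr_images, pupils):
            phi_i = phi * D_i                                  # (2) spectrum seen through aperture i
            psi = np.fft.ifft2(phi_i)                          # (3) back to the spatial domain
            psi = np.sqrt(I_i) * psi / (np.abs(psi) + 1e-12)   # (3) keep the phase, replace the amplitude
            phi_i_new = np.fft.fft2(psi)                       # (4) back to the frequency domain
            num = eta * np.conj(D_i) * np.abs(D_i)             # (4) weighted spectrum update
            den = (np.abs(D_i) ** 2 + gamma) * np.abs(D_i).max()
            phi = phi + num / den * (phi_i_new - phi_i)        # (5) update at position i
    return np.abs(np.fft.ifft2(phi)) ** 2                      # (7) reconstructed intensity image
```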

3. Reconstruction Algorithm

In this section, a new phase retrieval algorithm is proposed. It is obtained by integrating alternating minimization phase retrieval (AMPR) [13,33] with sparse phase retrieval by truncated amplitude flow (SPARTA) [34]. The method in [13,33] requires a sufficiently accurate initial signal at the start; the method in [34] moves closer to the true signal after each iteration, but requires the samples to be updated again at every step. The integrated method avoids the shortcomings of each algorithm.
Suppose $y_i = |\langle a_i, x\rangle|$ for $i = 1, 2, \ldots, m$; our purpose is to use the measurement matrix $A = [a_1, a_2, \ldots, a_m]$ to recover the complex vector $x$, where $y$ is the amplitude information recorded by the detector and $x$ belongs to the complex field. In other words, recovering $x$ given $y$ and $A$ is called phase retrieval. The optical path of Figure 2 can then be expressed mathematically as $A_i = R_i \mathcal{F}^{-1} D_i$, where $R_i$ is the sub-sampling operator, which retains the measured amplitude values, and $D_i$ is the pupil at the $i$-th position, which intercepts different bandwidths in the Fourier domain. Assuming that the signal is sparse, when only the amplitude values $y$ are given to compute the signal $x$, we adopt the least-squares criterion, so the problem becomes solving the phaseless equations for $x$ given the known sparsity $k$. We then take as the signal estimate the solution of the non-convex optimization $\min_x \; \frac{1}{2m}\sum_{i=1}^{m}\big(|\langle a_i, x\rangle| - y_i\big)^2$, where $m$ is the number of measurements. To introduce this scheme into the FP framework, we rewrite the imaging expression $I_i = |\mathcal{F}[d_i(f_x, f_y) \cdot G(f_x, f_y)]|^2$ as $y_i = |A_i x|$, where $y$ represents the amplitude of the low-resolution image obtained by the camera, i.e., $y_i = \sqrt{I_i}$, $A_i$ represents the transform matrix of the inverse Fourier transform, and $x$ corresponds to the high-resolution spectrum. The problem then becomes a standard phase retrieval problem. If $P$ denotes the diagonal matrix containing the phases of $A^T x$, the problem can be expressed as $P y = A^T x$.
Since we cannot measure $P$ directly, we must find a suitable phase retrieval method. Considering that the traditional FP algorithm is not well suited to measuring or recovering the phase of a sparse signal and that its reconstruction is unsatisfactory, we apply the theory of compressed sensing to long-range high-resolution imaging: an image that is sparse in a transform domain is recovered from a lower sampling rate. This effectively reduces device storage as well as the complexity and time cost of sampling. The structural framework of the algorithm is shown in Figure 4. We therefore combine phase retrieval by alternating minimization [33] with sparse phase retrieval by truncated amplitude flow [34] to find the phase information $P$ and thereby realize the imaging of long-distance macroscopic objects. The specific implementation steps are given in Algorithm 1.
The main problem with AMPR is that its initialization is relatively weak and the solution may not converge; a sufficiently good initial estimate is required. Therefore, in the initialization stage we use the root-mean-square of the measurements in place of the singular-value estimate for the initial guess of the signal. After solving the initialization problem and obtaining a simple initial estimate, we use an alternating minimization algorithm to improve this estimate. Specifically, in each iteration the initial phase is estimated according to $A_i = R_i \mathcal{F}^{-1} D_i$ and assigned to the intensity information collected by the detector; a sparse recovery algorithm is then used to obtain the next signal estimate.
We combined and improved the two methods, phase retrieval by alternating minimization and sparse phase retrieval by truncated amplitude flow, and applied the result to long-range high-resolution imaging. In the following, the effectiveness of the algorithm is demonstrated through experiments.
In order to keep the notation concise, we make the following definitions and remarks: the signal power $\psi^2$ is well approximated by the average power of the measurements; $x_0$ gives a good initial approximation of the real signal $x$; $A^T$ is the transpose of the measurement matrix $A$; we define $\mathrm{Ph}(z) \stackrel{\mathrm{def}}{=} z/|z|$; and the hard-thresholding operator $\mathcal{H}_k(u)$ sets all entries of $u$ to zero except for the $k$ entries of largest magnitude.
Algorithm 1 Long-distance sub-diffraction high-resolution imaging for sparse sampling
Input: sampling matrices $A_i$, captured LR images $y_i$
  • Parameters: maximum number of iterations $t$, step size $\mu$, sparsity level $k$
  • Set: let $S$ contain the indices corresponding to the $k$ largest instances, and estimate the signal power $\psi^2 = \frac{1}{m}\sum_{i=1}^{m} y_i^2$. Compute the principal eigenvector $z$ of the matrix $Y = \frac{1}{m}\sum_{i=1}^{m} y_i^2 A_{i,S} A_{i,S}^{T}$.
  • Initialize: $x_0 \leftarrow \sqrt{\tfrac{1}{m}\sum_{i=1}^{m} y_i^2}\; z$   {initial approximation}; $t = 0$.
  • Loop:
        $t \leftarrow t + 1$
        $\varphi_{t+1} \leftarrow \mathrm{diag}\big(\mathrm{Ph}(A^{T} x_t)\big)$   {current phase estimate}
        $x_{t+1} \leftarrow \mathcal{H}_k\!\Big(x_t - \frac{\mu}{m}\sum_{i=1}^{m}\big(A^{T} x_t - \varphi_{t+1}\, y_i\big) A\Big)$   {obtain the next signal estimate}
Output: Recovered spectrum $x_t$.
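A compact Python sketch of Algorithm 1 with a generic dense measurement matrix is given below. The interface (a single stacked matrix A instead of the per-camera operators $A_i = R_i \mathcal{F}^{-1} D_i$), the simplified spectral initialization without the restriction to the support $S$, and the parameter values are all simplifying assumptions for illustration, not a faithful reimplementation of the authors' code.

```python
import numpy as np

def ph(z):
    """Ph(z) = z / |z| (elementwise phase), guarded against division by zero."""
    return z / (np.abs(z) + 1e-12)

def hard_threshold(u, k):
    """Keep the k entries of largest magnitude and set all others to zero."""
    out = np.zeros_like(u)
    idx = np.argsort(np.abs(u))[-k:]
    out[idx] = u[idx]
    return out

def sparse_phase_retrieval(A, y, k, mu=0.5, n_iter=100):
    """Core of Algorithm 1: power/spectral initialization + thresholded amplitude-flow updates.

    A : (m, n) complex measurement matrix with rows a_i; y : (m,) measured amplitudes |a_i x|.
    """
    m, n = A.shape
    psi = np.sqrt(np.mean(y ** 2))                     # signal power estimate psi
    Y = (A.conj().T * (y ** 2)) @ A / m                # measurement-weighted correlation matrix
    eigvals, eigvecs = np.linalg.eigh(Y)
    x = hard_threshold(psi * eigvecs[:, -1], k)        # x_0: scaled principal eigenvector, kept k-sparse
    for _ in range(n_iter):
        Ax = A @ x
        phase = ph(Ax)                                 # phi_{t+1}: current phase estimate
        grad = A.conj().T @ (Ax - y * phase) / m       # gradient of the least-squares objective
        x = hard_threshold(x - mu * grad, k)           # thresholded signal update
    return x
```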

4. Experimental Verification

4.1. Experimental Design

For optical imaging systems, imaging distant targets results in lower spatial resolution because of the limited lens aperture. To avoid an excessively low resolution, a large-diameter lens is usually used, which increases the weight of the lens and the corresponding cost. Therefore, in the experiment we synthesize a larger aperture computationally. In the simulation, we use a 512 × 512 USAF resolution chart, shown in Figure 5a. The chart contains line pairs with widths ranging from 20 pixels to 1 pixel. The illumination wavelength is 550 nm, the focal length is 800 mm, and the aperture is 18 mm. The resolution chart is 64 mm × 64 mm and is located 50 m from the camera. The pixel width of the image sensor is assumed to be 2 µm. It should be noted that, in the simulation, we treat the blur of the image as the only limitation on the resolution of the long-range image.
In the first simulation experiment, we capture a 17 × 17 grid of images with 61% overlap between neighboring images. Due to the low-pass filtering of the aperture, high-frequency information is lost in each capture. By scanning the lens over the spectrum plane, we artificially synthesize a larger aperture. We define the synthetic aperture ratio (SAR) as the ratio between the synthetic aperture diameter and the lens aperture diameter. In the first experiment, the aperture is 18 mm and the synthetic aperture is 130.32 mm, so SAR = 7.24. Figure 5b shows that the image obtained from the center camera has lost features of 14 pixels and smaller because of the low-pass filtering of the aperture. With FP, the resolution is greatly improved, and features as small as 2 pixels can be recovered, as shown in Figure 5c. The remaining task is to reconstruct high-resolution images from a small amount of observation data.
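As a quick check on these numbers, the synthetic aperture diameter follows directly from the lens aperture, the grid size, and the overlap between neighboring apertures. The small sketch below reproduces the values quoted above; it is an arithmetic illustration, not part of the reconstruction code.

```python
def synthetic_aperture(aperture_mm, grid, overlap):
    """Diameter of the synthetic aperture swept out by a grid x grid scan of a circular aperture."""
    step = (1.0 - overlap) * aperture_mm           # center-to-center shift between neighboring apertures
    diameter = aperture_mm + (grid - 1) * step     # overall edge-to-edge extent of the scan
    return diameter, diameter / aperture_mm        # (synthetic aperture, SAR)

print(synthetic_aperture(18.0, 17, 0.61))  # (130.32, 7.24): the first experiment
print(synthetic_aperture(18.0, 17, 0.70))  # (104.4, 5.8): the 70% overlap experiments below
```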
We divide the verification into several experiments. First, we evaluate the probability that the algorithm recovers a signal in the sparse case; then, we build a sub-sampling scheme to recover the signal under sparsity constraints; finally, we test whether the image can still be recovered when the amount of aperture overlap is reduced.

4.2. Success Probability of Algorithm Recovery

In the first experiment, the phase transition was used to evaluate the algorithm's probability of successful signal recovery. The signal length was set to $n = 2000$ and the sparsity to $k = 20$ and $k = 30$. The recovery probabilities obtained with Algorithm 1 are shown in Figure 6. As the signal sparsity increases, the phase transition shifts to the right. Compared with the SPARTA algorithm, the transition of SPARTA moves to the right more markedly than that of Algorithm 1 as $k$ increases, indicating that Algorithm 1 requires a lower sample complexity.
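A minimal version of this success-probability test can be scripted as follows. The trial count, the success tolerance, the real Gaussian measurement model, and the call to a generic `recover` routine (for example, the Algorithm 1 sketch in Section 3) are assumptions made for illustration.

```python
import numpy as np

def success_rate(recover, n=2000, k=20, m_list=(200, 400, 600, 800), trials=20, tol=1e-3):
    """Empirical recovery probability versus the number of measurements m for a k-sparse signal."""
    rates = []
    for m in m_list:
        ok = 0
        for _ in range(trials):
            x = np.zeros(n)
            support = np.random.choice(n, k, replace=False)
            x[support] = np.random.randn(k)                  # k-sparse ground-truth signal
            A = np.random.randn(m, n) / np.sqrt(m)           # random measurement matrix
            y = np.abs(A @ x)                                # phaseless (amplitude-only) measurements
            x_hat = recover(A, y, k)
            err = min(np.linalg.norm(x_hat - x), np.linalg.norm(x_hat + x))  # global sign ambiguity
            ok += err / np.linalg.norm(x) < tol
        rates.append(ok / trials)
    return rates
```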

4.3. Subsampling

4.3.1. Pixel Uniform Sub-Sampling

A sub-sampling template was designed whose elements follow a Bernoulli (0/1) distribution. The template enters the matrix operation only through its 1 and 0 entries: a 1 keeps the corresponding pixel and a 0 discards it, so only the pixels corresponding to 1 remain on the pupil. Sub-sampling is performed over the $N$ cameras that acquire intensity information, and the number of retained values is $\mathrm{No.} = p(nN)$, where $p$ represents the proportion of retained samples and $N$ represents the total number of camera lenses.
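A pixel-level sub-sampling template of this kind can be generated as below; the only ingredient is an independent 0/1 draw per pixel that keeps it with probability p. The array shapes and the number of cameras are illustrative (taken from the parameter list that follows), and the routine is a sketch rather than the authors' code.

```python
import numpy as np

def pixel_subsampling_masks(num_cameras, lr_shape, p, seed=0):
    """One binary template R_i per camera: 1 keeps a measured pixel, 0 discards it.

    The expected number of retained measurements is No. = p * (pixels per image * N).
    """
    rng = np.random.default_rng(seed)
    return (rng.random((num_cameras,) + lr_shape) < p).astype(float)

masks = pixel_subsampling_masks(num_cameras=289, lr_shape=(512, 512), p=0.3)
print(masks.mean())   # fraction of retained pixels, close to p = 0.3
```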
The United States Air Force (USAF) resolution chart is selected as the sparse experimental object because it makes the details recovered by each algorithm easy to observe and compare.
The parameters of the algorithm are as follows: an $n = 512^2$ (512 × 512) image (the USAF resolution chart) serves as the ground truth; this image has amplitude only and no phase information. The camera array consists of $N = 289$ (17 × 17) cameras; the camera aperture size is 75 pixels; the overlap rate between two adjacent cameras is set to 0.7; Gaussian noise is added to each image (SNR = 30 dB); the number of iterations in the phase recovery loop is set to 30; and the sub-sampling proportion is 0.3 of the original measurement data.
To achieve this, the sub-sampling template $R_i$ described above was used. For Algorithm 1, the sparsity is set to $k = 0.25n$, and the Structural Similarity Index (SSIM) is used as the evaluation metric. On the basis of FP, Algorithm 1 is used for sparse phase retrieval, and the results are compared with the alternating minimization phase retrieval (AMPR) method of Reference [29]. The comparison of the recovered images is shown in Figure 7. To show the local details, panels (c) and (d) of Figure 7 are enlarged in Figure 8.
At the same time, we studied the effect of varying the sub-sampling ratio, again using SSIM to evaluate the quality of image restoration. The results are shown in Figure 9: whatever the sampling proportion p, Algorithm 1 restores the image better than AMPR.
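The sweep behind Figure 9 can be reproduced along the following lines. Here `reconstruct` is a placeholder for a phase retrieval routine (AMPR or Algorithm 1), the stacked-array shapes are assumptions, and the SSIM call uses scikit-image.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def ssim_vs_sampling_rate(ground_truth, lr_stack, pupils, reconstruct,
                          rates=(0.1, 0.2, 0.3, 0.4, 0.5), seed=0):
    """Evaluate reconstruction quality (SSIM) as the sub-sampling proportion p varies."""
    rng = np.random.default_rng(seed)
    scores = []
    for p in rates:
        masks = (rng.random(lr_stack.shape) < p).astype(float)   # pixel sub-sampling templates R_i
        recon = reconstruct(lr_stack * masks, masks, pupils)      # recover the HR image from the kept pixels
        scores.append(ssim(ground_truth, recon,
                           data_range=ground_truth.max() - ground_truth.min()))
    return scores
```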

4.3.2. Using Camera Sub-Sampling

The camera sub-sampling method differs from pixel-uniform sub-sampling: some cameras are randomly left on and the rest are turned off, and a corresponding template $R_i$ is designed. The camera states under this sub-sampling scheme are shown in Figure 10, where the center camera is turned on by default. Out of a total of 289 cameras, 146 are on and 143 are off, so about half of the cameras are active. Algorithm 1 is executed with the same parameters as in Section 4.3.1. The comparison of the recovered images is shown in Figure 11. To show the local details, panels (c) and (d) of Figure 11 are enlarged in Figure 12.
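The camera on/off pattern of Figure 10 can be drawn in the same spirit: the center camera is always on and roughly half of the remaining cameras are switched on at random. The sketch below produces one possible realization of that pattern; the seed and index layout are arbitrary.

```python
import numpy as np

def camera_switch_mask(grid=17, n_on=146, seed=0):
    """Binary grid x grid camera-state mask: 1 = on, 0 = off; the center camera is always on."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(grid * grid, dtype=int)
    center = (grid * grid) // 2                               # index of the central camera
    others = np.delete(np.arange(grid * grid), center)
    mask[rng.choice(others, n_on - 1, replace=False)] = 1     # randomly switch on the remaining cameras
    mask[center] = 1
    return mask.reshape(grid, grid)

mask = camera_switch_mask()
print(mask.sum())   # 146 cameras active out of 289
```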

4.4. Reduce the Case of Aperture Overlap

The defect of Reference [29] mentioned in Section 1 is that a large overlap of the camera array is required. For common phase retrieval methods, oversampling is necessary: if the overlap rate is less than 0.6, the reconstruction is very unsatisfactory. However, a large overlap rate means that more cameras must overlap on the same plane, which cannot be realized in practice. To this end, sparsity constraints are added to the algorithm to achieve high-quality restoration of the target. In the experiment, the overlap between two adjacent cameras was reduced from 0.7 to 0.2 (with p = 1), and the other parameters were unchanged. The reconstructed images for an overlap rate of 0.2 are shown in Figure 13. The results show that, even with reduced overlap, Algorithm 1 achieves a better reconstruction than AMPR.

5. Conclusions

In this paper, we propose and demonstrate a reconstruction method for pixel super-resolution imaging of macroscopic objects. The proposed method avoids the oversampling problem of traditional FP. Compared with the original application of FP to macroscopic imaging, it greatly reduces the need to keep the camera fixed on a platform and acquire an image sequence over tens of minutes. On the one hand, it removes the long acquisition time caused by oversampling and thus improves imaging speed; on the other hand, it raises the SSIM of the restored image and recovers more details of the real scene from the low-resolution images captured by the camera. The experimental results show that, in both visual quality and quantitative metrics, our method uses fewer low-resolution images, consumes less time, and still obtains a better reconstructed image. We therefore expect our approach to be applicable to many other problems that require phase retrieval.

Author Contributions

Conceptualization, D.W.; validation, T.F. and G.B.; formal analysis, D.W.; investigation, T.F. and L.J.; original draft preparation, D.W.; review and editing, D.W. and X.Z.; funding acquisition, G.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 61801455; Key Science and Technology Project of Jilin Science and Technology Department, grant number 20170204029GX.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Li, Z.; Wen, D.; Song, Z.; Liu, G.; Zhang, W.; Wei, X. Sub-Diffraction Visible Imaging Using Macroscopic Fourier Ptychography and Regularization by Denoising. Sensors 2018, 18, 3154.
2. Baker, S.; Kanade, T. Limits on super-resolution and how to break them. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 1167–1183.
3. Zheng, G.; Horstmeyer, R.; Yang, C. Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photonics 2013, 7, 739.
4. Tian, L.; Li, X.; Ramchandran, K.; Waller, L. Multiplexed coded illumination for Fourier Ptychography with an LED array microscope. Biomed. Opt. Express 2014, 5, 2376–2389.
5. Bian, L.; Suo, J.; Zheng, G.; Guo, K.; Chen, F.; Dai, Q. Fourier ptychographic reconstruction using Wirtinger flow optimization. Opt. Express 2015, 23, 4856–4866.
6. Chung, J.; Lu, H.; Ou, X.; Zhou, H.; Yang, C. Wide-field Fourier ptychographic microscopy using laser illumination source. Biomed. Opt. Express 2016, 7, 4787–4802.
7. Sun, J.; Chen, Q.; Zhang, J.; Fan, Y.; Zuo, C. Single-shot quantitative phase microscopy based on color-multiplexed Fourier ptychography. Opt. Lett. 2018, 43, 3365–3368.
8. He, X.; Liu, C.; Zhu, J. Single-shot Fourier ptychography based on diffractive beam splitting. Opt. Lett. 2018, 43, 214–217.
9. Mico, V.; Zalevsky, Z.; García-Martínez, P.; García, J. Synthetic aperture superresolution with multiple off-axis holograms. JOSA A 2006, 23, 3162–3170.
10. Hillman, T.R.; Gutzler, T.; Alexandrov, S.A.; Sampson, D.D. High-resolution, wide-field object reconstruction with synthetic aperture Fourier holographic optical microscopy. Opt. Express 2009, 17, 7873–7892.
11. Gutzler, T.; Hillman, T.R.; Alexandrov, S.A.; Sampson, D.D. Coherent aperture-synthesis, wide-field, high-resolution holographic microscopy of biological tissue. Opt. Lett. 2010, 35, 1136–1138.
12. Di, J.; Zhao, J.; Jiang, H.; Zhang, P.; Fan, Q.; Sun, W. High resolution digital holographic microscopy with a wide field of view based on a synthetic aperture technique and use of linear CCD scanning. Appl. Opt. 2008, 47, 5654–5659.
13. Maiden, A.M.; Rodenburg, J.M. An improved ptychographical phase retrieval algorithm for diffractive imaging. Ultramicroscopy 2009, 109, 1256–1262.
14. Granero, L.; Micó, V.; Zalevsky, Z.; García, J. Synthetic aperture superresolved microscopy in digital lensless Fourier holography by time and angular multiplexing of the object information. Appl. Opt. 2010, 49, 845–857.
15. Candes, E.J.; Li, X.; Soltanolkotabi, M. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Trans. Inf. Theory 2015, 61, 1985–2007.
16. Waldspurger, I.; d'Aspremont, A.; Mallat, S. Phase recovery, maxcut and complex semidefinite programming. Math. Program. 2015, 149, 47–81.
17. Pacheco, S.; Salahieh, B.; Milster, T.; Rodriguez, J.J.; Liang, R. Transfer function analysis in epi-illumination Fourier ptychography. Opt. Lett. 2015, 40, 5343–5346.
18. Ou, X.; Horstmeyer, R.; Yang, C.; Zheng, G. Quantitative phase imaging via Fourier ptychographic microscopy. Opt. Lett. 2013, 38, 4845–4848.
19. Ou, X.; Zheng, G.; Yang, C. Embedded pupil function recovery for Fourier ptychographic microscopy: Erratum. Opt. Express 2015, 23, 33027.
20. Chan, A.C.S.; Shen, C.; Williams, E.; Lyu, X.; Lu, H.; Ives, C.; Hajimiri, A.; Yang, C. Extending the wavelength range of multi-spectral microscope systems with Fourier ptychography. In Proceedings of the Label-Free Biomedical Imaging and Sensing (LBIS), San Francisco, CA, USA, 2–5 February 2019; p. 108902O.
21. Ou, X.; Horstmeyer, R.; Zheng, G.; Yang, C. High numerical aperture Fourier ptychography: Principle, implementation and characterization. Opt. Express 2015, 23, 3472–3491.
22. Tian, L.; Liu, Z.; Yeh, L.-H.; Chen, M.; Zhong, J.; Waller, L. Computational illumination for high-speed in vitro Fourier ptychographic microscopy. Optica 2015, 2, 904–911.
23. Dong, S.; Nanda, P.; Shiradkar, R.; Guo, K.; Zheng, G. High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography. Opt. Express 2014, 22, 20856–20870.
24. Zhanghao, K.; Chen, L.; Yang, X.-S.; Wang, M.-Y.; Jing, Z.-L.; Han, H.-B.; Zhang, M.Q. Super-resolution dipole orientation mapping via polarization demodulation. Light Sci. Appl. 2016, 5, e16166.
25. Zhang, J.; Xu, T.; Wang, X.; Chen, S.; Ni, G. Fast gradational reconstruction for Fourier ptychographic microscopy. Chin. Opt. Lett. 2017, 15, 111702.
26. Öztürk, H.; Yan, H.; He, Y.; Ge, M.; Dong, Z.; Lin, M.; Nazaretski, E.; Robinson, I.K.; Chu, Y.S.; Huang, X. Multi-slice ptychography with large numerical aperture multilayer Laue lenses. Optica 2018, 5, 601–607.
27. Cheng, Y.F.; Strachan, M.; Weiss, Z.; Deb, M.; Carone, D.; Ganapati, V. Illumination pattern design with deep learning for single-shot Fourier ptychographic microscopy. Opt. Express 2019, 27, 644–656.
28. Dong, S.; Horstmeyer, R.; Shiradkar, R.; Guo, K.; Ou, X.; Bian, Z.; Xin, H.; Zheng, G. Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging. Opt. Express 2014, 22, 13586–13599.
29. Holloway, J.; Asif, M.S.; Sharma, M.K.; Matsuda, N.; Horstmeyer, R.; Cossairt, O.; Veeraraghavan, A. Toward long-distance subdiffraction imaging using coherent camera arrays. IEEE Trans. Comput. Imaging 2016, 2, 251–265.
30. Holloway, J.; Wu, Y.; Sharma, M.K.; Cossairt, O.; Veeraraghavan, A. SAVI: Synthetic apertures for long-range, subdiffraction-limited visible imaging using Fourier ptychography. Sci. Adv. 2017, 3, e1602564.
31. Smith, W.J. Modern Optical Engineering; Tata McGraw-Hill Education: New York, NY, USA, 2008.
32. Wu, Y.; Sharma, M.K.; Veeraraghavan, A. WISH: Wavefront imaging sensor with high resolution. Light Sci. Appl. 2019, 8, 1–10.
33. Netrapalli, P.; Jain, P.; Sanghavi, S. Phase retrieval using alternating minimization. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 5–10 December 2013; pp. 2796–2804.
34. Wang, G.; Zhang, L.; Giannakis, G.B.; Akçakaya, M.; Chen, J. Sparse phase retrieval via truncated amplitude flow. IEEE Trans. Signal Process. 2017, 66, 479–491.
Figure 1. Classic 4f optical system. Two lenses with focal length f are required.
Figure 2. Optical path of the 4f system simplified by far-field diffraction.
Figure 3. Entire reconstruction process of Fourier ptychography.
Figure 4. Framework for implementing long-range macroscopic imaging.
Figure 5. Reconstruction of a long-range image using Fourier ptychography (FP). (a) Ground truth: we simulate imaging a 64 × 64 mm resolution target 50 m away using a sensor with a pixel pitch of 2 µm; the width of a bar in group 20 is 2.5 mm. (b) Center image: the target is observed with a lens of 800 mm focal length and 18 mm aperture. The aperture is scanned over a 17 × 17 grid (61% overlap), creating a synthetic aperture of 130.32 mm with a synthetic aperture ratio (SAR) of 7.24; features of 14 pixels and smaller are lost due to the low-pass filtering of the aperture. (c) Recovered image: using FP to restore the high-frequency information, the resolution is improved and features as small as 2 pixels are recovered.
Figure 6. Phase transition diagram, signal length $n = 2000$: (a) sparsity $k = 20$; (b) sparsity $k = 30$.
Figure 7. Experimental results using pixel-uniform sub-sampling. (a) Ground truth: we simulate imaging a 64 × 64 mm resolution target 50 m away using a sensor with a pixel pitch of 2 µm. (b) Center image: Structural Similarity Index (SSIM) = 0.2866; the target is observed with a lens of 800 mm focal length and 18 mm aperture, and the aperture is scanned over a 17 × 17 grid (70% overlap), creating a synthetic aperture of 104.4 mm with a synthetic aperture ratio (SAR) of 5.8. (c) Alternating minimization phase retrieval (AMPR): SSIM = 0.3968. (d) Algorithm 1: SSIM = 0.7932. Algorithm 1 improves the SSIM from 0.3968 to 0.7932 compared with AMPR.
Figure 8. Partial enlargement of Figure 7. (a) Enlarged AMPR result; (b) enlarged Algorithm 1 result. Under sub-sampling, the details recovered by AMPR are noticeably blurrier than those recovered by Algorithm 1, showing that Algorithm 1 improves the reconstruction quality.
Figure 9. SSIM at different sampling rates.
Figure 10. Schematic diagram of the camera switch states. The red center camera is always on by default, the black cameras are off, and the white cameras are on.
Figure 11. Experimental results using camera sub-sampling with 50% of the cameras active. (a) Ground truth: we simulate imaging a 64 × 64 mm resolution target 50 m away using a sensor with a pixel pitch of 2 µm. (b) Center image: SSIM = 0.3198; the image is acquired by the central camera, and the aperture is scanned over a 17 × 17 grid (70% overlap), creating a synthetic aperture of 104.4 mm with a synthetic aperture ratio (SAR) of 5.8. (c) AMPR: SSIM = 0.4384. (d) Algorithm 1: SSIM = 0.8899. Algorithm 1 improves the SSIM from 0.4384 to 0.8899 compared with AMPR.
Figure 12. Partial enlargement of Figure 11. (a) Enlarged AMPR result; (b) enlarged Algorithm 1 result. Under camera sub-sampling, the two line pairs are blurred in the AMPR reconstruction, whereas Algorithm 1 shows essentially no blur and resolves them well, indicating that Algorithm 1 reconstructs the image better than AMPR.
Figure 13. The overlap rate is reduced from 0.7 to 0.2. (a) Reconstruction using the AMPR algorithm, SSIM = 0.2143; (b) reconstruction using Algorithm 1, SSIM = 0.5941.
