Fast-Converging Algorithm for Wavefront Reconstruction based on a Sequence of Diffracted Intensity Images
  • CC BY-NC (non-commercial)
KEYWORDS
Phase retrieval, Wave-front sensing, Diffraction theory, Image reconstruction techniques
I. INTRODUCTION

Optical wavefronts reflected by or transmitted through an object experience amplitude and phase modulation by the object. If the light field passes through an object, the measured wavefront carries information about the phase delay caused by the object, which depends on the object's refractive index and shape; thus, once the wavefront is measured, the thickness and shape of the object can be obtained. If the light field is reflected by an object, the wavefront carries information about the object's topography, and thus can be used to measure its surface profile. Hence wavefront measurements can be utilized in many areas, including imaging, surface profiling, adaptive optics, astronomy, ophthalmology, microscopy, and atomic physics [1, 2]. However, the phase of light cannot be detected directly, because its oscillation frequency is too high for modern detectors to follow. Therefore, many attempts to acquire phase information have been made. These techniques can mainly be categorized into two types: one based on interferometry, and the other a phase-retrieval approach based on capturing the intensity of the object beam. The interferometric method, generally referred to as holography, records the interference pattern of an object beam and a reference beam, converting phase information into intensity modulation [3]. However, introducing a reference beam requires a complicated interferometric setup and induces further problems in hologram reconstruction, such as the DC-term and twin-image problems, whose suppression complicates the setup even more [4, 5].

Wavefront reconstruction using phase-retrieval techniques does not require any reference beam; generally, several diffracted intensity images and some digital post-processing are needed. Deterministic phase retrieval using the transport-of-intensity equation (TIE) works only under the paraxial approximation, which limits its general application [6-8]. Iterative phase-retrieval methods mainly involve the Gerchberg-Saxton (GS) and Yang-Gu (YG) algorithms, which use prior knowledge of the object as constraints [9-11]. When the object constraint is unknown, it is difficult to reconstruct the wavefront using the GS and YG algorithms, and various approaches have been proposed to solve this problem. One approach is to characterize the object indirectly, using an aperture to capture small images and then obtain the object constraint [12], or using a known illumination pattern as a constraint instead of the object constraint [13]. Another approach is to use two or more intensity images with variations between them, which can be produced by capturing intensity images with different focuses [14-16], by translating an aperture transversely [17], or by shifting the illumination [18]. However, all of these approaches require additional optical components and involve complicated digital post-processing.

Pedrini et al. used images captured at axially translated planes, i.e., single-beam multiple-intensity reconstruction (SBMR) [19]. The SBMR method has the simplest capture setup of the phase-retrieval methods based on multiple intensity images [14-18]: It uses only one camera. It has also been applied in other research areas, such as measuring the shape, deformation, and angular displacement of a three-dimensional object [20, 21]. Phase retrieval using multiple intensity images captured at a series of planes can detect spatial frequencies with different sensitivities; the significant amount of data carried by these intensity images therefore makes this approach very robust and rather stable against the effects of noise [22]. Figure 1 shows the experimental scheme for capturing intensity images by the SBMR method. The object is illuminated by a collimated light source. The charge-coupled device (CCD) is shifted along the optical axis and captures the diffracted intensity images, with an interval Δz between adjacent images; the total number of intensity images is K.

It is known that the number of captured images affects the reconstruction quality, with more intensity images yielding higher-quality reconstructions [23]. However, capturing more intensity images requires more movement steps of the camera or the object, which makes the captured images sensitive to small misalignments in the experimental setup, so the noise in the captured intensity images becomes more serious. Furthermore, capturing more intensity images takes more experimental time, which makes the method unsuitable for dynamic-object or real-time applications. To improve the capabilities of SBMR [19], beam splitters [24], spatial light modulators (SLMs) [25, 26], and deformable mirrors (DMs) [27] have been used to achieve single-shot or single-plane intensity-capture processes. However, beam splitters attenuate the illuminating light, and in the cases of the SLM and DM the simplicity of the experimental setup is sacrificed. A random amplitude mask [28] and a random phase plate [29] have also been used to transform low-frequency components into rapidly varying high frequencies, which requires fewer iterations for reconstruction. However, an amplitude mask diminishes the light energy, and a phase plate is difficult to fabricate.

Multiscale signal representation is a very efficient tool in signal and image processing [30, 31]. It represents signals or images at different resolutions: The high-resolution images carry the high spatial frequencies of the original image, and the low-resolution images carry the low spatial frequencies. An example is shown in Fig. 2: Figure 2(a) shows an image with 128 sampling points, and Fig. 2(b) shows a scaled image with 64 sampling points, which looks like a low-pass-filtered version of Fig. 2(a), i.e. Fig. 2(b) contains the low frequencies of the image [32]. In the conventional phase-retrieval method, because the high-frequency components of the object vary faster than the low-frequency components, the variation between two diffracted intensity images is larger for the higher-frequency components of the object than for the low-frequency components. Therefore, fewer iterations are needed to recover the high-frequency components than the low-frequency components. This is why the convergence proceeds rapidly at the beginning, and then becomes very slow.

In this study, we optimize the SBMR method by using a multiscale technique. The captured diffracted intensity images are resampled into several images with different resolutions. Phase retrieval is first performed on the images resampled at a lower resolution, and then transferred to the images resampled at a higher resolution. This corresponds to performing more iterations for the low-frequency components than for the high-frequency components. Suppose a captured image is resampled into some number A of level images, and the number of iterations at each level is B; then for the low-frequency components of the lowest-resolution image the total number of iterations is AB, while for the high-frequency components of the highest-resolution image it is just B. Moreover, because an image resampled at a low resolution contains fewer pixels, one iteration on it takes less time than one iteration on an image resampled at a higher resolution. Because the resampling process corresponds to separating the high-frequency components from the low-frequency components, different numbers of iterations can be performed for the two separately. Therefore, although we perform more iterations for the images resampled at low resolutions than for those at high resolutions, the total time consumed does not increase by as much as it would in the conventional method. Furthermore, a large variation between intensity images results in fast convergence of the iteration process [33], and the variation between images resampled at low resolution is increased. For these reasons, the proposed algorithm converges faster than the conventional method. We describe the principle of the proposed method in Section II, and the results of computer simulations and experiments that verify it in Sections III and IV.

    II. PRINCIPLE

In our proposed algorithm the capturing process is the same as that of the conventional method, as shown in Fig. 1. A series of speckle diffraction intensity images I_k is captured along the optical axis at different positions, where k is an integer from 1 to K. When the captured images have a lateral resolution of 2^M pixels, each captured intensity image can be resampled into M different level images, where the resampled image of the m-th level has 2^m sampling points and m is an integer from 1 to M. The resampling process starts from the captured image, which has the most sampling points, i.e. the level m = M. The next sampling level can be expressed as

$$I_k^{(m)} = Z\left( I_k^{(m+1)},\ 2^m \right) \qquad (1)$$

where Z denotes bicubic interpolation, in which the value of the pixel to be calculated is determined as a weighted average of its 16 neighboring pixels. The weight of each neighboring pixel is obtained from 16 equations given by the gradients in the horizontal and vertical directions and the cross derivatives at each of the four corners of the pixel square [34, 35]. The second parameter of the Z operation is the image size after interpolation. This process is repeated until the first resampling level, m = 1, is reached. The whole resampling process is performed for all of the captured intensity images I_1, ..., I_K. After this resampling we have a total of M×K intensity images, which are grouped by resampling level; for example, the m-th group of resampled intensity images is G_m = {I_1^(m), I_2^(m), ..., I_K^(m)}. Bicubic interpolation is used because it preserves fine detail better than other common interpolation algorithms, such as bilinear or nearest-neighbor interpolation [34, 35]. 'Sinc' interpolation might be thought the most appropriate for band-limited optical fields, but oscillations at the signal borders can hamper the image processing; particularly in small images, noticeable ripples from the borders may occupy a substantial part of the image [36-38]. Considering the reduced size of the resampled images in our approach, i.e. 2^1 to 2^M pixels, sinc interpolation is not appropriate in our method. Figure 3 shows an example of the resampling scheme for an image with 2^4 pixels.
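To make Eq. (1) concrete, the following is a minimal Python sketch of the level-by-level resampling, using SciPy's cubic-spline zoom as a stand-in for the 16-neighbor bicubic interpolation described above; the image sizes and the I_stack example data are hypothetical.

```python
import numpy as np
from scipy.ndimage import zoom

def build_pyramid(captured, M):
    """Resample one captured 2^M x 2^M intensity image into M level images.

    levels[m - 1] holds the m-th level image of 2^m x 2^m pixels,
    so levels[M - 1] is the captured image itself (m = M).
    """
    levels = [None] * M
    levels[M - 1] = captured.astype(np.float64)
    for m in range(M - 1, 0, -1):                    # level M-1 down to level 1
        prev = levels[m]                             # the (m+1)-th level image
        factor = 2 ** m / prev.shape[0]              # Eq. (1): resize to 2^m
        levels[m - 1] = zoom(prev, factor, order=3)  # cubic interpolation
    return levels

# Group the M x K resampled images by level: G[m-1] = {I_1^(m), ..., I_K^(m)}.
K, M = 5, 8
I_stack = [np.random.rand(2 ** M, 2 ** M) for _ in range(K)]  # stand-in data
pyramids = [build_pyramid(I, M) for I in I_stack]
G = [[pyramids[k][m] for k in range(K)] for m in range(M)]
```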

Figure 4 shows the flowchart of our proposed algorithm. The circled numbers in the figure denote the steps of the procedure. In the flowchart, N is the number of iterations, and f is a binary value of 1 or -1, where 1 denotes forward propagation and -1 denotes backward propagation. The conventional iteration method is first performed at the first resampling level, i.e. m = 1 and group G_1. The solution at this level is interpolated to the next resampling level (m = 2) and used as the initial solution there. This process is continued until the highest resampling level, G_M, is reached. It should be noted that this differs from optical ptychography, which simply uses part of the intensity images and extends the image areas until the full intensity images are addressed [39].

    The details of the iterative procedure are as follows.

Step 1: Set the initial values. The procedure starts from the first image of the first level, i.e. k = 1 and m = 1. The initial complex field is a composite of an amplitude and a random initial phase: the square root of the intensity is used as the amplitude, and the initial phase is a random distribution of 2^1 pixels, which is the size of the lowest sampling level.

Step 2: Check the current iteration number. When the iteration number reaches the maximum value, go to the next level, i.e. go to step 7; otherwise, perform an iteration at the current level, i.e. go to step 3.

Step 3: Decide which process to continue, according to whether the current image is the last, the first, or neither in the current group. If neither, do the forward propagation to the next image, i.e. go to step 5; otherwise, do the backward propagation, i.e. go to step 4.

Step 4: Reverse the propagation direction.

Step 5: Build the complex wave field, using the square root of the current intensity distribution as the amplitude and the previously calculated phase as the phase: $u_k^{(m)} = \sqrt{I_k^{(m)}}\,\exp(i\phi_k^{(m)})$. Then propagate this wave field to the next plane: $u_{k+f}^{(m)} = P\{u_k^{(m)}\}$. Here P indicates the angular-spectrum propagation of the Rayleigh-Sommerfeld diffraction [40]:

$$u_{k+f}^{(m)}(x, y) = \mathcal{J}^{-1}\left\{ \mathcal{J}\left\{ u_k^{(m)}(x, y) \right\} \exp\left[ i f \Delta z \frac{2\pi}{\lambda} \sqrt{1 - (\lambda u)^2 - (\lambda v)^2} \right] \right\} \qquad (2)$$

where $\mathcal{J}$ is the Fourier-transform operation, λ is the illumination wavelength, and (u, v) are the spatial-frequency coordinates.

Step 6: Update the amplitude of the calculated complex wavefront at the next plane, using the square root of the intensity image at that plane: $u_{k+f}^{(m)} \leftarrow \sqrt{I_{k+f}^{(m)}}\,\exp(i \arg u_{k+f}^{(m)})$. Also update the indices of the iteration and intensity image, and shift the process to the next image plane, i.e. go to step 2.

Step 7: If the current level is the last level, retain the phase calculated in step 6 as the final solution. Otherwise, use the solution from step 6 as the initial phase of the next level and go to step 8.

Step 8: Interpolate the phase calculated in step 6 to the size of the next level using Eq. (1); here the second parameter of that equation should be 2^(m+1). Perform steps 2 to 8 until the previously set iteration number is reached.
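The propagation operator P of Eq. (2) can be sketched with NumPy's FFT as follows. This is a minimal angular-spectrum implementation in which evanescent components are simply suppressed; the full band-limiting of [40] is omitted for brevity.

```python
import numpy as np

def propagate(field, wavelength, dx, dz):
    """Angular-spectrum propagation of a square complex field over a
    signed distance dz (the sign plays the role of f in Eq. (2))."""
    n = field.shape[0]
    freqs = np.fft.fftfreq(n, d=dx)              # spatial frequencies (1/m)
    u, v = np.meshgrid(freqs, freqs, indexing="ij")
    arg = 1.0 - (wavelength * u) ** 2 - (wavelength * v) ** 2
    arg = np.clip(arg, 0.0, None)                # drop evanescent waves
    H = np.exp(1j * (2 * np.pi / wavelength) * dz * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```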

From the final calculated wave field, the wavefront at any of the captured image planes can be obtained by propagating this field, and the object can be reconstructed by back-propagating the wavefront to the object plane.
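Putting steps 1 through 8 together, the whole procedure can be condensed into a sketch like the one below, reusing the build_pyramid and propagate helpers sketched above. The scaling of the effective pixel pitch with the resampling level, and the counting of one propagation per iteration, are illustrative assumptions rather than details fixed by the text.

```python
import numpy as np
from scipy.ndimage import zoom

def multiscale_sbmr(I_stack, M, B, wavelength, dx, dz):
    """Multiscale SBMR sketch: B propagation steps at each of the M levels."""
    K = len(I_stack)
    pyr = [build_pyramid(I, M) for I in I_stack]       # see sketch above
    G = [[pyr[k][m] for k in range(K)] for m in range(M)]

    phi = 2 * np.pi * np.random.rand(2, 2)             # step 1: 2^1-pixel phase

    for m in range(M):                                 # levels 1 .. M
        dx_m = dx * 2 ** (M - 1 - m)                   # assumed effective pitch
        k, f = 0, 1                                    # start forward (f = +1)
        for _ in range(B):                             # step 2: per-level budget
            # Step 5: measured amplitude + calculated phase, then propagate.
            u = np.sqrt(G[m][k]) * np.exp(1j * phi)
            u = propagate(u, wavelength, dx_m, f * dz) # see sketch above
            phi = np.angle(u)                          # step 6: keep the phase
            k += f
            if k in (0, K - 1):                        # steps 3-4: reverse at ends
                f = -f
        if m < M - 1:                                  # steps 7-8: phase upscaling
            phi = zoom(phi, 2, order=3)                # wrapped phase, simplified
    return np.sqrt(G[M - 1][k]) * np.exp(1j * phi)     # final complex field
```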

As presented in the introduction, images sampled with fewer points carry the low-frequency components of the image, which require fewer iterations in the phase reconstruction, and vice versa. This is illustrated in Fig. 5. The two images with 128 and 64 sampling points in Figs. 2(a) and 2(b) are reconstructed using the conventional error-reduction method, and the normalized root-mean-square errors (RMSEs) are plotted against the iteration number. The red dashed and blue solid lines indicate the convergence of the iterations for Figs. 2(a) and 2(b), respectively. It can be seen that the iterative convergence for the image sampled with 64 points is faster than for that with 128 sampling points.
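One common definition of the normalized RMSE used in such convergence plots is sketched below; the exact normalization behind Fig. 5 may differ.

```python
import numpy as np

def nrmse(recon, ref):
    """RMSE between amplitudes, normalized by the reference dynamic range."""
    err = np.sqrt(np.mean((np.abs(recon) - np.abs(ref)) ** 2))
    return err / (np.abs(ref).max() - np.abs(ref).min())
```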

Resampling the images also produces increased variation between them. Figure 6 shows an example of the variation between diffracted images along the optical axis at different resampling levels. Figure 2(a) is used to generate two diffracted images at planes separated by a depth difference, and each of the two images is resampled to 8 levels. The variation between the two images is then calculated at each level; the horizontal axis represents the level number, and the vertical axis the variation at the corresponding sampling level. The figure demonstrates that the variation between images at two planes along the optical axis increases with decreasing sampling number, which is exactly what is required for fast convergence in iterative phase-retrieval techniques: The amount of defocusing between the diffracted intensity images affects the accuracy of the phase retrieval [41], and in our method the effect of a larger depth difference is obtained simply by post-resampling the diffracted images.
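The "variation" of Fig. 6 could be quantified, for instance, as the mean absolute pixel difference between the two resampled diffraction images at each level; the metric below is an illustrative assumption, since the text does not specify one.

```python
import numpy as np

def level_variation(I_a, I_b):
    """Mean absolute pixel difference between two same-level images."""
    return float(np.mean(np.abs(I_a - I_b)))

# Example: pyr_a and pyr_b are pyramids of the two diffracted images
# (see the build_pyramid sketch), giving one variation value per level.
# variations = [level_variation(a, b) for a, b in zip(pyr_a, pyr_b)]
```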

Based on the two points presented above, we can expect that resampling the diffracted intensity images to different levels makes a faster-converging iterative phase retrieval possible.

    III. NUMERICAL SIMULATIONS

In the simulation, the object is a composite of the amplitude pattern shown in Fig. 7(a) and a random phase distribution ranging from 0 to 2π. The size of the object is 256×256 pixels. The object is padded with zeroes to avoid energy leakage from the edges of the captured intensity images; in an experiment, this corresponds to keeping almost all of the intensity distribution within the camera sensor area. The object is illuminated by a light source with a wavelength of 532 nm. Five intensity images are captured by a CCD camera with a pixel pitch of 4.65 μm. The nearest capture plane is located at a distance z = 30 mm from the object plane, and the interval between two adjacent capture planes is Δz = 5 mm. Figure 7(b) shows the calculated diffracted intensity image at the first plane, and Fig. 7(c) shows eight resampled images of the first image I_1.
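Under these parameters, the simulated capture could be reproduced with a sketch like the following, assuming the propagate helper from Section II; the rectangular amplitude pattern is only a stand-in for the object of Fig. 7(a).

```python
import numpy as np

wavelength, dx = 532e-9, 4.65e-6        # 532 nm source, 4.65 um pixel pitch
z0, dz, K = 30e-3, 5e-3, 5              # nearest plane 30 mm, interval 5 mm

amp = np.zeros((256, 256))
amp[64:192, 64:192] = 1.0               # stand-in for the Fig. 7(a) pattern
phase = 2 * np.pi * np.random.rand(256, 256)
obj = np.pad(amp * np.exp(1j * phase), 128)   # zero-pad against edge leakage

I_stack = [np.abs(propagate(obj, wavelength, dx, z0 + k * dz)) ** 2
           for k in range(K)]           # the five simulated intensity images
```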

The original image is reconstructed by both the conventional and the proposed methods. The normalized RMSE is used to evaluate the results. Figure 8 shows the RMSE with respect to time consumption; the proposed and conventional reconstructions are plotted as a red dashed line and a blue solid line, respectively. Figure 8(b) is a magnification of the square area marked by the blue dashed lines in Fig. 8(a). From Fig. 8(b) it can be seen that the conventional method reached its minimum RMSE value at about 4.5 seconds, while the proposed method arrived at the same RMSE value in about 2.2 seconds, which is about 51% faster.

    IV. EXPERIMENTAL RESULTS

In the experiment, a laser with a wavelength of 532 nm was used to illuminate a Newport USAF 1951 resolution chart. In this resolution chart the background area is opaque, while the line area is transparent; the light wave after the object is thus modulated in the line area and blocked in the opaque area, so the optical wavefront just after the object reflects the shape of the resolution chart. By reconstructing the wavefront at the location of the object, the shape of the object can be recovered. Seven diffracted intensity images were captured by a Point Grey FL2-14S3M/C CCD camera. The first image was located at a distance of 40 mm from the target, and the interval between two adjacent images was 8 mm. Figure 9 shows the experimental setup. Figure 10(a) shows the directly captured image of the resolution chart, while Figs. 10(b)-(h) show the captured diffracted intensity images. The wavefront at the first plane was calculated by both the conventional and the proposed methods, and the object was reconstructed by digitally propagating the calculated wavefront.

Three hundred iterations were performed with both the conventional and proposed methods; the corresponding time consumption is shown in Fig. 11. The horizontal dashed lines are the timelines chosen for convergence comparison, and the red dashed and blue solid lines represent the proposed and conventional methods, respectively. The horizontal-axis values of the two intersection points on each dashed timeline indicate the numbers of iterations performed by the conventional and proposed methods, and are listed in Table 1. The corresponding reconstructed images for the iteration numbers in Table 1 are shown in Fig. 12. Profiles along the lines crossing the character “2” are plotted in Fig. 13(b); Fig. 13(a) shows the corresponding profile of the directly captured object, i.e. Fig. 10(a), which is used as the reference. From Fig. 13 it can be seen that, with the conventional method, about 80 seconds were needed to obtain the best reconstruction. To find how much time the proposed method needs to reconstruct an image of similar quality, we compared this image with the proposed reconstructions for time consumptions of 10 s, 20 s, 30 s, 40 s, 50 s, 60 s, 70 s, and 80 s. The correlation coefficients are plotted in Fig. 14. From this figure, a similar image was obtained with the proposed method within about 40 seconds, which is about 50% faster than the conventional method for our example images. This result is also in agreement with the simulation. The small stagnation in the reconstructions by the proposed method is induced by noise in the experimental images.
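The similarity comparison of Fig. 14 presumably uses a standard correlation coefficient between reconstructed amplitudes; a minimal sketch:

```python
import numpy as np

def image_correlation(a, b):
    """Pearson correlation coefficient between two images' amplitudes."""
    return float(np.corrcoef(np.abs(a).ravel(), np.abs(b).ravel())[0, 1])
```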

[TABLE 1.] Iteration requirements for the crossed points in Fig. 11

    V. CONCLUSION

We have proposed a fast-converging wavefront-reconstruction algorithm based on resampling diffracted intensity images. Both simulation and experimental results show that the convergence of the proposed method is about 50% faster than that of the conventional method for our test images. We expect that, with further increases in computer processing speed, the proposed method can be applied to real-time phase-retrieval applications.

REFERENCES
  • 1. Joo W.-D. 2012 “Wavefront sensitivity analysis using global wavefront aberration in an unobscured optical system,” [J. Opt. Soc. Korea] Vol.16 P.228-235
  • 2. Vest C. M. 1979 Holographic Interferometry
  • 3. Gabor D. 1948 “A new microscopic principle,” [Nature] Vol.161 P.777-778
  • 4. Leith E. N., Upatnieks J. 1963 “Wavefront reconstruction with continuous-tone objects,” [J. Opt. Soc. Am.] Vol.53 P.1377-1381
  • 5. Kreis T., Juptner W. 1997 “Suppression of the dc term in digital holography,” [Opt. Eng.] Vol.36 P.2357-2360
  • 6. Teague M. R. 1982 “Irradiance moments: Their propagation and use for unique retrieval of phase,” [J. Opt. Soc. Am.] Vol.72 P.1199-1209
  • 7. Gureyev T. E., Roberts A., Nugent K. A. 1995 “Partially coherent fields, the transport of intensity equation, and the phase uniqueness,” [J. Opt. Soc. Am. A] Vol.12 P.1942-1946
  • 8. Paganin D., Nugent K. A. 1998 “Noninterferometric phase imaging with partially coherent light,” [Phys. Rev. Lett.] Vol.80 P.2586-2589
  • 9. Gerchberg R. W., Saxton W. O. 1972 “A practical algorithm for the determination of phase from image and diffraction plane pictures,” [Optik] Vol.35 P.227-246
  • 10. Fienup J. R. 1982 “Phase retrieval algorithms: A comparison,” [Appl. Opt.] Vol.21 P.2758-2769
  • 11. Yang G. Z., Dong B. Z., Gu B. Y., Zhuang J. Y., Ersoy O. K. 1994 “Gerchberg-Saxton and Yang-Gu algorithms for phase retrieval in a nonunitary transform system: A comparison,” [Appl. Opt.] Vol.33 P.209-218
  • 12. Fienup J. R., Kowalczyk A. M. 1990 “Phase retrieval for a complex-valued object by using a low-resolution image,” [J. Opt. Soc. Am. A] Vol.7 P.450-458
  • 13. Fienup J. R. 2006 “Lensless coherent imaging by phase retrieval with an illumination pattern constraint,” [Opt. Express] Vol.14 P.498-508
  • 14. Saxton W. O. 1980 “Correction of artefacts in linear and nonlinear high resolution electron micrographs,” [J. Microsc. Spectrosc. Electron.] Vol.5 P.661-670
  • 15. Misell D. L. 1973 “A method for the solution of the phase problem in electron microscopy,” [J. Phys. D: Appl. Phys.] Vol.6 P.L6-L9
  • 16. Misell D. L. 1973 “An examination of an iterative method for the solution of the phase problem in optics and electron optics,” [J. Phys. D: Appl. Phys.] Vol.6 P.2200-2225
  • 17. Brady G. R., Guizar-Sicairos M., Fienup J. R. 2009 “Optical wavefront measurement using phase retrieval with transverse translation diversity,” [Opt. Express] Vol.17 P.624-639
  • 18. Rodenburg J. M., Faulkner H. M. L. 2004 “A phase retrieval algorithm for shifting illumination,” [Appl. Phys. Lett.] Vol.85 P.4795-4797
  • 19. Pedrini G., Osten W., Zhang Y. 2005 “Wave-front reconstruction from a sequence of interferograms recorded at different planes,” [Opt. Lett.] Vol.30 P.833-835
  • 20. Anand A., Chhaniwal V. K., Almoro P., Pedrini G., Osten W. 2009 “Shape and deformation measurements of 3D objects using volume speckle field and phase retrieval,” [Opt. Lett.] Vol.34 P.1522-1524
  • 21. Almoro P. F., Pedrini G., Anand A., Osten W., Hanson S. G. 2009 “Angular displacement and deformation analyses using a speckle-based wavefront sensor,” [Appl. Opt.] Vol.48 P.932-940
  • 22. Nugent K. A. 2007 “X-ray noninterferometric phase imaging: A unified picture,” [J. Opt. Soc. Am. A] Vol.24 P.536-547
  • 23. Almoro P., Pedrini G., Osten W. 2006 “Complete wavefront reconstruction using sequential intensity measurements of a volume speckle field,” [Appl. Opt.] Vol.45 P.8596-8605
  • 24. Almoro P., Maallo A. M. S., Hanson S. G. 2009 “Fast-convergent algorithm for speckle-based phase retrieval and a design for dynamic wavefront sensing,” [Appl. Opt.] Vol.48 P.1485-1493
  • 25. Camacho L., Mico V., Zalevsky Z., Garcia J. 2010 “Quantitative phase microscopy using defocusing by means of a spatial light modulator,” [Opt. Express] Vol.18 P.6755-6766
  • 26. Agour A., Almoro P. F., Falldorf C. 2012 “Investigation of smooth wave fronts using SLM-based phase retrieval and a phase diffuser,” [J. Eur. Opt. Soc.-Rapid] Vol.7 P.12046
  • 27. Almoro P., Gluckstad J., Hanson S. G. 2010 “Single-plane multiple speckle pattern phase retrieval using a deformable mirror,” [Opt. Express] Vol.18 P.19304-19313
  • 28. Anand A., Pedrini G., Osten W., Almoro P. 2007 “Wavefront sensing with random amplitude mask and phase retrieval,” [Opt. Lett.] Vol.32 P.1584-1586
  • 29. Almoro P. F., Hanson S. G. 2008 “Random phase plate for wavefront sensing via phase retrieval and a volume speckle field,” [Appl. Opt.] Vol.47 P.2979-2987
  • 30. Crowley J. L., Sanderson A. C. 1987 “Multiple resolution representation and probabilistic matching of 2-D gray-scale shape,” [IEEE Trans. Pattern Anal. Mach. Intell.] Vol.9 P.113-121
  • 31. Lindeberg T. 1990 “Scale-space for discrete signals,” [IEEE Trans. Pattern Anal. Mach. Intell.] Vol.12 P.234-254
  • 32. Saleh B. E. A., Teich M. C. 2007 Fundamentals of Photonics
  • 33. Teague M. R. 1982 “Irradiance moments: Their propagation and use for unique retrieval of phase,” [J. Opt. Soc. Am.] Vol.72 P.1199-1209
  • 34. Acharya T., Tsai P. S. 2007 “Computational foundations of image interpolation algorithms,” [ACM Ubiquity] Vol.8 P.4
  • 35. Miller F. P., Vandome A. F., McBrewster J. 2010 Bicubic Interpolation
  • 36. Fraser D. 1989 “Interpolation by the FFT revisited - An experimental investigation,” [IEEE Trans. Acoust. Speech Signal Process.] Vol.37 P.665-675
  • 37. Smith T., Smith M. S., Nichols S. T. 1990 “Efficient sinc function interpolation technique for center padded data,” [IEEE Trans. Acoust. Speech Signal Process.] Vol.38 P.1512-1517
  • 38. Yaroslavsky L. 1997 “Efficient algorithm for discrete sinc interpolation,” [Appl. Opt.] Vol.36 P.460-463
  • 39. Maiden A. M., Rodenburg J. M., Humphry M. J. 2010 “Optical ptychography: A practical implementation with useful resolution,” [Opt. Lett.] Vol.35 P.2585-2587
  • 40. Matsushima K., Shimobaba T. 2009 “Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields,” [Opt. Express] Vol.17 P.19662-19673
  • 41. Mayo S., Miller P., Wilkins S., Davis T., Gao D., Gureyev T., Paganin D., Parry D., Pogany A., Stevenson A. 2002 “Diversity selection for phase-diverse phase retrieval,” [J. Microscopy] Vol.207 P.79-96
IMAGES / TABLES
  • [FIG. 1.] Capturing process for the speckle intensity images for SBMR.
  • [FIG. 2.] Example of resampling an image (a) with 128 sampling points and (b) with 64 sampling points.
  • [FIG. 3.] Example of the resampling process for an image with M = 4.
  • [FIG. 4.] Flow chart of the proposed algorithm.
  • [FIG. 5.] Iterative convergence for images with different sampling points.
  • [FIG. 6.] Pixel variation between two diffracted images along the optical axis at each resampling level.
  • [FIG. 7.] Example of the resampling of an intensity image: (a) amplitude image of the object used in the numerical simulation, (b) the first captured intensity image, and (c) resampled images.
  • [FIG. 8.] RMSE with respect to time consumption. Part (b) is the magnification of the square area crossed by blue lines in part (a).
  • [FIG. 9.] Experimental setup.
  • [FIG. 10.] Captured images: (a) the directly captured image of the object; (b)-(h) the captured diffracted intensity images.
  • [FIG. 11.] Time consumption versus the number of iterations.
  • [TABLE 1.] Iteration requirements for the crossed points in Fig. 11.
  • [FIG. 12.] The conventional and proposed reconstructions at the timelines.
  • [FIG. 13.] Plots along the lines (a) on the directly captured image of Fig. 10(a) and (b) on the reconstructed images of Fig. 12.
  • [FIG. 14.] The correlation coefficients between each proposed reconstructed image and the conventional image reconstructed with a time consumption of 40 seconds.