Distance Extraction by Means of Photon-Counting Passive Sensing Combined with Integral Imaging
  • Non-commercial (CC BY-NC)
KEYWORDS
Distance extraction, Integral imaging, Image reconstruction, Photon-counting, Passive sensing
  • I. INTRODUCTION

    Distance information extraction has been the subject of research for numerous applications [1-3]. For depth information to be extracted, three-dimensional (3D) information needs to be acquired and processed. For example, stereo images or a sequence of images are often matched to each other to extract depth information based on pixel disparity [2, 3].

    Integral imaging (II) is primarily a 3D display technique, but it has been widely adopted for information processing tasks such as depth extraction and object recognition [4-11]. During II recording, an elemental image array is generated, which contains different views of the object. One advantage of II is that only a single exposure is required to obtain 3D information; no calibration is needed, unlike stereo imaging, and no active illumination is needed, unlike holography or light detection and ranging (LIDAR) [12, 13]. Depth extraction techniques using elemental images have been studied in [8-11]. In [10], depth is extracted by means of one-dimensional elemental image modification and a correlation-based multi-baseline stereo algorithm. In [11], the depth level of the reconstruction plane is estimated by minimizing the sum of the standard deviations of the corresponding pixels' intensities.

    Photon-counting imaging has been developed for low-light-level imaging applications such as night vision, laser radar, and radiological and stellar imaging [14-18]. Advanced photon-counting imaging technology can register a single photo-event at each pixel. In that case, photo-detection is carried out in binary mode, generating a binary dotted image. Object recognition with nonlinear matched filtering is proposed in [19], and II reconstruction with maximum likelihood estimation (MLE) is proposed in [20]. Stereoscopic photon-counting sensing has been proposed for distance information extraction [21].

    This paper proposes the use of photon-counting passive sensing combined with integral imaging for distance information extraction under low-light-level conditions. Photon-limited imagery is reconstructed with MLE. The MLE for photon-limited scene reconstruction in 3D space was derived with the Poisson distribution in [20]; here, the probability model is modified according to the low-light-level conditions. It is shown that the MLE is merely the average of the photo-events in the elemental image array that are associated with pixels corresponding to a specific point in 3D space. The estimated depth level is the distance that minimizes the sum of the standard deviations of the corresponding photo-counts. The sum of the standard deviations represents the uncertainty of the sampled information. Similar uncertainty minimization has been used to reconstruct an occluded scene in [1] and an intensity elemental image array in [11].

    Photon-limited elemental images are simulated on a computer while varying the expected total number of photo-events, and the performance is evaluated accordingly. We also compare distance extraction between photon-limited and intensity elemental images. The uncertainty minimization is applied to both the photo-event and intensity cases, and consistent results are obtained from both. The experimental results confirm that the proposed method can extract distance information under low-light-level conditions. To the best of the authors' knowledge, this is the first report on distance extraction by use of photon-counting passive sensing combined with integral imaging.

    The rest of the paper is organized as follows. Section II describes the photo-event model and the distance information extraction algorithm. The experimental and simulation results are presented in Section III. Conclusions follow in Section IV.

    II. DISTANCE INFORMATION EXTRACTION WITH PHOTON-COUNTING INTEGRAL IMAGING

    The II recording system generates an elemental image array as illustrated in Fig. 1. The microlens array is composed of a large number of small convex lenslets, and the ray information captured by each lenslet appears as an elemental image, which has a different view of the object.

    Under low-light-level conditions, a photo-detector can register a single photo-event and generate a binary dotted elemental image array. It can be assumed that the probability of a photo-event is proportional to the intensity of the pixel at a low-light level [15]. Thus, the following probability model is valid:

    P(y_i; n_i) = n_i^{y_i} (1 - n_i)^{1 - y_i},   y_i ∈ {0, 1},          (1)

    n_i = N_p x_i,          (2)

    where y_i indicates a single photo-event at pixel i, N_p is the expected total number of photo-counts in the scene, and x_i is the normalized intensity at pixel i, i.e.,

    x_i = I_i / Σ_{j=1}^{N_t} I_j,

    where I_i is the intensity at pixel i and N_t is the total number of pixels in the scene. It can be seen that E(y_i) = n_i and

    Var(y_i) = n_i (1 - n_i) ≈ n_i   (since n_i << 1 at low light levels).
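Under this model, a photon-limited elemental image can be simulated from an intensity image by drawing at most one photo-event per pixel, with per-pixel probability proportional to intensity, n_i = N_p x_i. The sketch below is a minimal illustration of that sampling step; the function name and the use of NumPy are our own, not from the paper.

```python
import numpy as np

def photon_limited_image(intensity, n_p, rng=None):
    """Simulate a binary photon-limited image from an intensity image.

    Each pixel registers at most one photo-event, with probability
    n_i = N_p * x_i, where x_i is the pixel intensity normalized so
    that the x_i sum to one over the scene.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = intensity / intensity.sum()      # normalized intensity x_i
    n = np.clip(n_p * x, 0.0, 1.0)       # photo-event probability n_i
    return (rng.random(intensity.shape) < n).astype(np.uint8)
```

When N_p is much smaller than the number of bright pixels, the output is a sparse binary dotted image of the kind described above.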

    Let y_i, where i = 1, 2, …, K, be the photo-count detected at pixel i corresponding to x_i; K is the number of lenslets that capture the point A in Fig. 1. Since the photo-events are registered independently, the joint probability distribution of y_1, …, y_K becomes

    P(y_1, …, y_K; n_1, …, n_K) = Π_{i=1}^{K} n_i^{y_i} (1 - n_i)^{1 - y_i}.          (3)

    It can be assumed that all n_i's are equal and proportional to the intensity n_A at the point A in the reconstruction plane, as illustrated in Fig. 1(b); thus Eq. (3) simplifies to

    P(y_1, …, y_K; n_A) = n_A^{Σ_{i=1}^{K} y_i} (1 - n_A)^{K - Σ_{i=1}^{K} y_i}.          (4)

    The maximum likelihood estimate of n_A in Eq. (4) is obtained as

    n̂_A = (1/K) Σ_{i=1}^{K} y_i,          (5)

    which is the average of the photo-counts originating from the point A.
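That the sample mean maximizes the likelihood can also be checked numerically. The sketch below scans candidate values of n_A under the binary photo-event model; the photo-counts are hypothetical values, not measured data.

```python
import numpy as np

# Hypothetical photo-events y_i registered by the K lenslets that see point A.
y = np.array([1, 0, 1, 1, 0, 1, 0, 1])
K = y.size

def log_likelihood(n_a):
    # Log of the joint probability of the binary photo-events:
    # sum_i [ y_i log(n_a) + (1 - y_i) log(1 - n_a) ]
    return y.sum() * np.log(n_a) + (K - y.sum()) * np.log(1.0 - n_a)

# Scan candidate values of n_A and pick the maximizer.
grid = np.linspace(0.01, 0.99, 99)
n_a_hat = grid[np.argmax(log_likelihood(grid))]
# The maximizer agrees with the average of the photo-events, Eq. (5).
```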

    The sum of the standard deviations of the photo-counts over the reconstruction plane is chosen as our metric. It is assumed that the true distance minimizes the sum of the standard deviations of the corresponding photo-counts. Therefore, the depth z of the reconstruction plane is estimated as

    ẑ = arg min_z Σ_{j=1}^{N_r} σ_j(z),          (6)

    σ_j(z) = sqrt{ (1/K_j) Σ_{i=1}^{K_j} [y_{j,i}(z) - ȳ_j(z)]^2 },          (7)

    ȳ_j(z) = (1/K_j) Σ_{i=1}^{K_j} y_{j,i}(z),          (8)

    where N_r is the number of voxels in the reconstruction plane, K_j is the number of lenslets capturing voxel j, and y_{j,i} is the photo-count in the imaging plane associated with voxel j.
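The estimator amounts to a search over candidate depths: for each depth, gather the photo-counts that map to the same voxel, sum the per-voxel standard deviations, and keep the depth with the smallest sum. A minimal sketch of that search follows; the data layout and function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def depth_by_ssd(counts_per_depth, depths):
    """Estimate depth by minimizing the sum of standard deviations (SSD).

    counts_per_depth[z] holds one array per voxel j: the photo-counts
    y_{j,i} gathered from the K_j lenslets imaging voxel j when the
    reconstruction plane is placed at candidate depth depths[z].
    """
    ssd = [sum(np.std(np.asarray(y_j)) for y_j in voxels)  # per-voxel std, summed
           for voxels in counts_per_depth]                 # one SSD value per depth
    return depths[int(np.argmin(ssd))]                     # depth minimizing the SSD
```

At the correct depth, the pixels gathered for a voxel come from the same object point, so their counts agree and the per-voxel standard deviations shrink.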

    Eqs. (6)-(8) are the counterparts of Eqs. (9)-(11), which extract distance information from conventional intensity elemental images as follows [11]:

    ẑ = arg min_z Σ_{j=1}^{N_r} σ^I_j(z),          (9)

    σ^I_j(z) = sqrt{ (1/K_j) Σ_{i=1}^{K_j} [I_{j,i}(z) - Ī_j(z)]^2 },          (10)

    Ī_j(z) = (1/K_j) Σ_{i=1}^{K_j} I_{j,i}(z),          (11)

    In the next section, we evaluate the performance of the depth extraction using Eq. (6) for different expected total numbers of photo-events. The distance extracted from intensity images with Eq. (9) is also compared with that from the photon-limited images.

    III. EXPERIMENTAL RESULTS

    The II recording system is composed of a microlens array and a pick-up camera. The pitch of each lenslet is 1.09 mm, and the focal length of each lenslet is about 3.3 mm. One toy car is used in the experiments. Figure 2 shows the elemental image array. The size of the elemental image array is 1419 × 1161 pixels and the number of elemental images is 22 × 18. One hundred photon-limited elemental image arrays are generated using a pseudo-random number generator on a computer. Figures 3(a)-(d) show examples of the photon-limited images for varying photo-counts; N_p is 1 × 10^6, 5 × 10^6, 1 × 10^7, and 5 × 10^7.

    Figure 4 is a plot of the sum of the standard deviations obtained from a gray-scale image according to Eq. (9) [11]. The sum of the standard deviations is minimized at the depth level of 84 mm. Figures 5(a)-(d) display the average and the error bar of the sum of the standard deviations over 100 photon-limited images, obtained by Eq. (6). As more photo-events are acquired, the photon-limited image starts to resemble the intensity image in Fig. 3, and the results in Fig. 5 approach those of Fig. 4. In this experiment, the depth level can be extracted when the photo-counts exceed 5 × 10^6, as illustrated in Fig. 5(b).
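The averages and error bars of Fig. 5 can be reproduced in outline by stacking one SSD-versus-depth curve per photon-limited trial and taking the mean and standard deviation across trials. The curves below are synthetic stand-ins with a minimum planted at 84 mm, not the measured data.

```python
import numpy as np

rng = np.random.default_rng(1)
depths = np.arange(60, 110, 2)          # candidate depth levels (mm)

# Synthetic SSD curves: rows = 100 photon-limited trials, columns = depths.
ssd_curves = 1.0 + 0.05 * (depths - 84.0) ** 2 + 0.5 * rng.random((100, depths.size))

mean_ssd = ssd_curves.mean(axis=0)      # the plotted average over trials
err_ssd = ssd_curves.std(axis=0)        # the plotted error bar per depth
z_hat = depths[int(np.argmin(mean_ssd))]  # estimated depth level
```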

    IV. CONCLUSIONS

    In this paper, a photon-counting integral imaging method for distance information extraction is proposed. The depth level of the object is determined as the distance at which the sum of the standard deviations of the photo-counts is minimized. The method is based on a compact system that requires only a single exposure in passive mode to obtain 3D information. Experimental and simulation results confirm that the proposed method can extract the distance to an object at low light levels. It was confirmed that the extracted depth level is the same as that obtained from the intensity images. Further investigation of distance information extraction for multiple or occluded objects under low-light-level conditions remains for future study.

REFERENCES
  • 1. Schechner Y. Y., Kiryati N. (1998) "Depth from defocus vs. stereo: how different really are they?" [Proc. International Conference on Pattern Recognition] P.1784-1786
  • 2. Lee S.-W., Kim N. (2006) "A method for precise depth detection in stereoscopic display" [J. Opt. Soc. Korea] Vol.10 P.37-41
  • 3. Dalmia A. K., Trivedi M. (1996) "Depth extraction using a single moving camera: an integration of depth from motion and depth from stereo" [Machine Vision and Applications] Vol.9 P.43-55
  • 4. Lippmann G. (1908) "La photographie integrale" [C. R. Acad. Sci.] Vol.146 P.446-451
  • 5. Son J.-Y., Saveljev V. V., Choi Y.-J., Bahn J.-E., Kim S.-K., Choi H. (2003) "Parameters for designing autostereoscopic imaging systems based on lenticular, parallax barrier, and integral photography plates" [Opt. Eng.] Vol.42 P.3326-3333
  • 6. Park S.-G., Song B.-S., Min S.-W. (2010) "Analysis of image visibility in projection-type integral imaging system without diffuser" [J. Opt. Soc. Korea] Vol.14 P.121-126
  • 7. Park J.-H., Kim J., Lee B. (2005) "Three-dimensional optical correlator using a sub-image array" [Opt. Express] Vol.13 P.5116-5126
  • 8. Hwang D.-C., Shin D.-H., Kim S.-C., Kim E.-S. (2008) "Depth extraction of three-dimensional objects in space by the computational integral imaging reconstruction technique" [Appl. Opt.] Vol.47 P.D128-D135
  • 9. Wu C., McCormick M., Aggoun A., Kung S. Y. (2008) "Depth mapping of integral images through viewpoint image extraction with a hybrid disparity analysis algorithm" [Journal of Display Technology] Vol.4 P.101-108
  • 10. Park J.-H., Jung S., Choi H., Kim Y., Lee B. (2004) "Depth extraction by use of a rectangular lens array and one-dimensional elemental image modification" [Appl. Opt.] Vol.43 P.4882-4895
  • 11. Lee D., Yeom S., Kim S., Son J.-Y. (2008) "Occluded object reconstruction and recognition with computational integral imaging" [Hankook Kwanghak Hoeji (Korean J. Opt. Photon.)] Vol.19 P.270-274
  • 12. Shaked N. T., Katz B., Rosen J. (2009) "Review of three-dimensional holographic imaging by multiple-viewpoint-projection based methods" [Appl. Opt.] Vol.48 P.H120-H136
  • 13. Degnan J. J. (2010) "Photon counting lidars for airborne and spaceborne topographic mapping" [Proc. Applications of Lasers for Sensing and Free Space Communications (LSC)]
  • 14. Hecht E. (2001) Optics
  • 15. Goodman J. W. (1985) Statistical Optics
  • 16. Refregier Ph., Goudail F., Delyon G. (2004) "Photon noise effect on detection in coherent active images" [Opt. Lett.] Vol.29 P.162-164
  • 17. Morris G. M. (1984) "Scene matching using photon-limited images" [J. Opt. Soc. Am. A] Vol.1 P.482-488
  • 18. Watson E. A., Morris G. M. (1990) "Comparison of infrared upconversion methods for photon-limited imaging" [J. Appl. Phys.] Vol.67 P.6075-6084
  • 19. Yeom S., Javidi B., Watson E. (2005) "Photon counting passive 3D image sensing for automatic target recognition" [Opt. Express] Vol.13 P.9310-9330
  • 20. Yeom S., Javidi B., Lee C.-W., Watson E. (2007) "Photon-counting passive 3D image sensing for reconstruction and recognition of partially occluded objects" [Opt. Express] Vol.15 P.16189-16195
  • 21. Yeom S. (2011) "Stereoscopic photon counting passive sensing for extraction of distance information" [3D Res.] Vol.2 P.03003
FIGURES
  • [ FIG. 1 ]  Integral imaging system with (a) CCD camera, (b) photo-detector.
  • [ FIG. 2 ]  An elemental image array.
  • [ FIG. 3 ]  Photon-limited elemental image arrays when N_p is (a) 1 × 10^6, (b) 5 × 10^6, (c) 1 × 10^7, (d) 5 × 10^7.
  • [ FIG. 4 ]  Sum of standard deviations (SSD) with a gray-scaled elemental image array.
  • [ FIG. 5 ]  Average and error bar of SSD over 100 photon-limited images when N_p is (a) 1 × 10^6, (b) 5 × 10^6, (c) 1 × 10^7, (d) 5 × 10^7.