3D Integral Imaging Display using Axially Recorded Multiple Images
  • CC BY-NC (non-commercial)
ABSTRACT
KEYWORDS
3D display, Depth extraction, Multiple images, Elemental images
    I. INTRODUCTION

    Integral imaging (II) has been actively researched as one of the next-generation 3D display techniques [1-9], because it provides full parallax, continuous viewing points, and full-color 3D images. II also has a simpler structure in both the pickup and display stages than other glasses-free 3D displays. It uses a lenslet array to capture and display 3D information: a 3D object is recorded through a lenslet array, and the recorded images are called elemental images. The 3D images are then formed by integrating the rays from the 2D elemental images through a lenslet array.

    Recently, several 3D imaging methods have been studied to extract high-resolution 3D information from a 3D scene. Among them, an axially distributed image sensing (ADS) method [10-14] has been proposed, in which a single camera is translated along its optical axis so that longitudinal perspective information of a 3D scene is obtained. Compared with other 3D imaging techniques, it offers a simple structure (a single translation of one camera) and high-resolution elemental images. It can therefore be used to extract a high-resolution depth map in a practical environment.

    In this paper, we propose a 3D display method that combines a pickup process using axially recorded multiple images with an integral imaging display process. First, we extract the color and depth information of the 3D objects from the axially recorded multiple 2D images. Next, using the extracted depth map and color images, elemental images are computationally synthesized based on a ray mapping model between 3D space and the elemental image plane. Finally, we display the 3D images optically with an integral imaging system using a lenslet array. We perform a preliminary experiment and present the experimental results.

    II. 3D INTEGRAL IMAGING DISPLAY USING AXIALLY RECORDED MULTIPLE 2D IMAGES

    In this section, we present a 3D integral imaging display system using axially recorded multiple 2D images, as shown in Fig. 1. It consists of four processes: the ADS pickup process, the depth extraction process, the computational elemental image synthesis process, and the integral imaging display process.

       2.1. ADS Pickup Process

    Figure 2 illustrates the ADS pickup stage. Multiple 2D images with slightly different perspectives are recorded by translating a single camera along its optical axis [10]. The focal length of the imaging lens is g. When the camera moves along the z axis (its optical axis), multiple 2D images with different perspectives are captured (e.g. the first camera position is z1 and the last camera position is zN, where z1<zN). Thus, N images can be recorded by shifting the camera N-1 times. The separation distance between two adjacent camera positions is ∆z in Fig. 2. The i-th camera position is then given by zi=z1+(i-1)∆z, where z1 is the distance between the origin (z=0) and the first image sensor. Therefore, each recorded 2D image has a different magnification ratio; the 2D image with the smallest magnification ratio is obtained when i=1, because that camera position is farthest from the 3D object. This magnification difference between the recorded 2D images is useful for estimating or extracting the depth information of the 3D object.
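    As a rough illustration of this pickup geometry, the short Python sketch below computes the camera positions zi and the magnification of a single object point at depth Z using the pinhole relation of Section 2.2; the numerical values of z1, ∆z, N, g and Z are illustrative assumptions, not values prescribed by the method.

import numpy as np

# Illustrative ADS pickup geometry (all numbers are assumptions for this sketch).
z1, dz, N = 100.0, 5.0, 41          # first sensor position [mm], axial step [mm], number of shots
g = 50.0                            # focal length of the imaging lens [mm]
Z = 600.0                           # depth of an example 3D object point [mm]

z = z1 + np.arange(N) * dz          # z_i = z_1 + (i - 1) * dz, i = 1 ... N
m = g / (Z - z - g)                 # per-camera magnification of the point (pinhole relation, Section 2.2)

# The camera at i = 1 is farthest from the object, so its image has the
# smallest magnification; the change of m across i encodes depth information.
print(m[0], m[-1])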

       2.2. Depth Extraction using Profilometry

    In this section, we explain the point-based depth extraction method that uses profilometry for axially recorded multiple images. The basic principle of our depth extraction method is to find the depth information of 3D objects from the statistical variance of the intensities of the integrated rays generated from all the recorded multiple images. That is, rays from a 3D object point are recorded into different images according to the camera position, as shown in Fig. 3. The corresponding pixel of the 3D object point in each camera is recorded with the same intensity level. When we reconstruct the integrated rays at the original position of the 3D object point, as shown in Fig. 4(a), all ray intensities become the same. On the other hand, the rays have different intensities when the estimation position is not equal to the original position of the 3D object point, as shown in Fig. 4(b). Based on this principle, we can estimate the depth information using the statistical variance of the intensity distribution of the integrated rays. However, this point-based method does not consider the spatial variation of pixel intensity in the local area around the 3D object point, so noise caused by point-wise depth fluctuations may appear in the extracted depth map.

    Let the reconstruction point be (x, y, Z). Then, we can find the corresponding pixel for each camera and obtain its intensity value. The position of each corresponding pixel differs according to the distance between the reconstruction point and the image sensor. Let the local coordinates and the intensity of the corresponding pixel on the i-th image sensor be denoted by (ξi, ηi) and Ii(ξi, ηi, zi), respectively. Here, zi is the position of the i-th image sensor from the origin (z=0). As shown in Fig. 3, ξi=-gx/(Z-zi-g) and ηi=-gy/(Z-zi-g), so (ξi, ηi) varies according to Z. To decide whether the reconstruction point is a 3D object point or not, the statistical variance of the intensity values over all cameras is used. First, we average the intensities of all corresponding pixels at the reconstruction point (x, y, Z). It is calculated by the following:

    Ī(x, y, Z) = (1/N) Σ_{i=1}^{N} Ii(ξi, ηi, zi).                       (1)
    Then, we define the variance metric D as

    D(x, y, Z) = (1/N) Σ_{i=1}^{N} [Ii(ξi, ηi, zi) - Ī(x, y, Z)]².        (2)
    From Eq. (2), it is seen that the variance metric D varies according to the Z distance. By finding the local minimum of the variance metric D, the 3D object point can be estimated. This can be formulated as

    Ẑ(x, y) = arg min_Z D(x, y, Z).                                       (3)
    Finally, when the local minimum of the variance metric is found, the intensity value of the 3D object point is obtained as the mean value, as described in the following equation:

    Î(x, y) = Ī(x, y, Ẑ(x, y)).                                           (4)
    To obtain the final depth map and the color image of the 3D object, this depth estimation is repeated for all x, y and Z. The depth search range of Z may be limited by the system performance, because a large search range imposes a heavy computation load.
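    A minimal Python sketch of this point-wise search is given below. It assumes grayscale images, a known sensor pixel pitch, and a simple centred sensor-coordinate-to-pixel conversion; the function and parameter names are illustrative and not taken from the paper.

import numpy as np

def extract_depth(images, z_cams, g, pixel_pitch, z_search, x_grid, y_grid):
    """Point-wise depth extraction by minimizing the variance of the rays
    integrated from all axially recorded images (Eqs. (1)-(4)).

    images      : list of N grayscale images (H x W arrays)
    z_cams      : N sensor positions z_i measured from the origin z = 0
    g           : focal length of the imaging lens
    pixel_pitch : sensor pixel size (assumed; converts metric xi, eta to pixels)
    z_search    : candidate depths Z
    x_grid, y_grid : lateral coordinates at which depth is estimated
    """
    H, W = images[0].shape
    depth = np.zeros((len(y_grid), len(x_grid)))
    color = np.zeros_like(depth)
    for iy, y in enumerate(y_grid):
        for ix, x in enumerate(x_grid):
            best = (np.inf, 0.0, z_search[0])           # (variance, mean, depth)
            for Z in z_search:
                vals = []
                for img, zi in zip(images, z_cams):
                    xi = -g * x / (Z - zi - g)          # corresponding pixel, Fig. 3
                    eta = -g * y / (Z - zi - g)
                    col = int(round(xi / pixel_pitch)) + W // 2
                    row = int(round(eta / pixel_pitch)) + H // 2
                    if 0 <= row < H and 0 <= col < W:
                        vals.append(float(img[row, col]))
                if len(vals) < 2:
                    continue
                vals = np.asarray(vals)
                D = vals.var()                          # variance metric, Eq. (2)
                if D < best[0]:                         # minimum over Z, Eq. (3)
                    best = (D, vals.mean(), Z)
            depth[iy, ix] = best[2]                     # estimated depth
            color[iy, ix] = best[1]                     # intensity from the mean, Eq. (4)
    return depth, color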

       2.3. Generation Process of Elemental Images

    Now, we present the generation process of elemental images for the 3D display from the calculated depth map and color image. The generation of elemental images is based on the geometry of pixel mapping between 3D object points and the elemental image plane through the lenslet array (in fact, a virtual pinhole array), as shown in Fig. 5. The rays from a 3D object point pass through the center of each lenslet (virtual pinhole) and are then recorded into the corresponding pixels in the elemental image plane. The pixel coordinates of the recorded pixels in each elemental image are given by

    where (x, y) is the rescaled version of (x, y) in Eq. (3), obtained by the scale change of the 3D images in the display space, f is the focal length of the lenslet, and Pk,h and Pk,v are the horizontal and vertical positions of the k-th lenslet, respectively. The final elemental images are calculated by iterating this pixel mapping process over all pixels.
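    The following Python sketch illustrates this pixel mapping through a virtual pinhole array. The pinhole-centre formula, the sign conventions and the metric-to-pixel conversion are assumptions made for this sketch (occlusion between object points is also ignored); they are not quoted from the paper.

import numpy as np

def synthesize_elemental_images(points, colors, pitch, f, n_lens, px_per_lens):
    """Map rescaled 3D object points onto the elemental image plane through a
    virtual pinhole array (Fig. 5).

    points      : (M, 3) array of object points (x, y, z) in display space
    colors      : (M,) intensities (or (M, 3) RGB) from the extracted color image
    pitch       : lenslet pitch; f : gap between pinhole array and image plane
    n_lens      : lenslets per side; px_per_lens : pixels behind one lenslet
    """
    size = n_lens * px_per_lens
    E = np.zeros((size, size) + np.shape(colors)[1:])
    px = pitch / px_per_lens                                   # display pixel size
    centers = (np.arange(n_lens) - (n_lens - 1) / 2) * pitch   # assumed pinhole centres
    for (x, y, z), c in zip(points, colors):
        if z == 0:
            continue                                           # point in the lenslet plane: skip
        for kv, Pkv in enumerate(centers):                     # vertical lenslet index
            for kh, Pkh in enumerate(centers):                 # horizontal lenslet index
                # ray through the pinhole hits the image plane at distance f behind it
                u = Pkh - f * (x - Pkh) / z
                v = Pkv - f * (y - Pkv) / z
                col = kh * px_per_lens + px_per_lens // 2 + int(round((u - Pkh) / px))
                row = kv * px_per_lens + px_per_lens // 2 + int(round((v - Pkv) / px))
                # keep the pixel only if it stays inside this elemental image
                if kh * px_per_lens <= col < (kh + 1) * px_per_lens and \
                   kv * px_per_lens <= row < (kv + 1) * px_per_lens:
                    E[row, col] = c
    return E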

       2.4. Display of 3D Images in DPII System

    In a typical II system, depth-priority integral imaging (DPII) and resolution-priority integral imaging (RPII) are distinguished by the gap g between the lenslet array and the display panel [4,5]. In DPII, g equals the focal length of the lenslet (f). Since the displayed (reconstructed) voxel size equals the lenslet size in DPII, the lateral resolution is low. However, a large depth range can be provided, because the depth of focus of the lenslet in DPII is large and the 3D image can be displayed in the real and virtual image fields simultaneously. On the other hand, in RPII g is not equal to the focal length of the lenslet. Since the voxel size is small in RPII, high lateral resolution can be provided, but the 3D image has a shallow depth because the depth of focus of the lenslet is small. In this paper, we use the DPII system for the display in order to provide a large depth range for the 3D object.

    Figure 6 illustrates the DPII system setup used to display 3D images from the elemental images computationally synthesized from the axially recorded multiple images. To implement the DPII system, the elemental images from the projector are focused onto the focal plane of the lenslet array. Then, 3D images are displayed in free space by displaying the elemental images through the lenslet array. We capture the 3D images at different viewpoints to verify that the DPII system provides full parallax of the 3D objects.

    III. EXPERIMENTS AND RESULTS

    To demonstrate the proposed scheme, we performed preliminary experiments. Figure 7 shows the entire experimental setup, including the ADS pickup process and the DPII display process. The 3D object is composed of two toys, a ‘car’ and a ‘sign board’, as shown in Fig. 7. They have different shapes and colors, and are located at 300 mm and 530 mm from the image sensor, respectively. In the ADS pickup process, the objects should be located off the optical axis, because little perspective information is collected for 3D objects near the optical axis. Therefore, the two toys are located approximately 80 mm from the optical axis. Their size is approximately 40 mm×30 mm. We record multiple 2D images by moving a single image sensor along its optical axis. We use a Nikon camera (D70) whose resolution is 2184(H)×1426(V) pixels. An imaging lens with a focal length of 50 mm is used in this experiment. The camera is translated in ∆z=5 mm increments for a total of N=41 images, giving a total displacement of 200 mm. Two of the recorded 2D images (the 1st and the 41st) are shown in Figs. 7(b) and (c).

    For the synthesis of elemental images, we extract the depth map and color image using the depth extraction process (i.e. profilometry) described in Section 2.2. The depth search range of Z was from 200 mm to 600 mm with a step of 10 mm. The extracted color image and depth map are shown in Fig. 8; their resolution is 640(H)×480(V).

    Then, the extracted depth map and color image are used to synthesize the elemental images for the DPII display, as explained in Fig. 5. The 3D object points are located within 20 mm(H)×20 mm(V). To fit the depth range between the real 3D objects and the displayed 3D images, we used the rescale process [4]. That is, we relocated the ‘car’ object at 30 mm and the ‘sign’ object at -20 mm in the display space. In addition, depth inversion was applied to avoid the pseudoscopic image problem in the integral imaging display. The computationally modeled lenslet array has 45(H)×45(V) lenslets, and each elemental image is mapped with 20(H)×20(V) pixels through a single lenslet. Thus, we synthesize an elemental image array of 900(H)×900(V) pixels, as shown in Fig. 8(c).

    Finally, we carried out optical experiments to display 3D images using the synthesized elemental image array of Fig. 8(c). In the experimental setup shown in Fig. 6, the elemental images are displayed through a projector. The lenslet array has 45(H)×45(V) lenslets whose focal length and diameter are 3 mm and 1.08 mm, respectively. The size of the reconstructed images is approximately 5 mm. The ‘car’ image is observed at z = 30 mm (real image) and the ‘sign board’ image is observed at z = -20 mm (virtual image). Figure 9 shows the experimental results of the displayed 3D images. The measured viewing angle is approximately 16°. These results show that the 3D images can be observed correctly in both the real and virtual image fields of the DPII system.
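    For reference, the viewing angle of a lenslet-based display is commonly approximated by ψ ≈ 2 tan⁻¹(p/2g), where p is the lenslet pitch and g is the gap between the lenslet array and the display plane; taking p ≈ 1.08 mm (assuming the pitch is close to the lenslet diameter) and g = f = 3 mm gives roughly 20°, which is of the same order as the measured 16°.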

    IV. CONCLUSION

    In conclusion, a novel integral imaging display method using axially recorded multiple images has been proposed. The proposed method extracts 3D information based on the statistical variance of the rays from a 3D object point. With the extracted 3D information of the objects, the elemental images are computationally synthesized based on a ray mapping model between 3D space and the elemental image plane. We then display the 3D images optically in a depth-priority integral imaging system with a lenslet array. The experimental results show that the proposed method can display 3D images using elemental images synthesized from axially recorded multiple 2D images.

References
  • 1. Lippmann G. 1908 “La photographie integrale,” [C. R. Acad. Sci.] Vol.146 P.446-451
  • 2. Stern A., Javidi B. 2006 “Three-dimensional image sensing, visualization, and processing using integral imaging,” [Proc. IEEE] Vol.94 P.591-607
  • 3. Park J.-H., Hong K., Lee B. 2009 “Recent progress in three-dimensional information processing based on integral imaging,” [Appl. Opt.] Vol.48 P.H77-H94
  • 4. Shin D.-H., Lee S.-H., Kim E.-S. 2007 “Optical display of true 3D objects in depth-priority integral imaging using an active sensor,” [Opt. Commun.] Vol.275 P.330-334
  • 5. Jang J.-S., Javidi B. 2002 “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” [Opt. Lett.] Vol.27 P.324-326
  • 6. Okano F., Hoshino H., Arai J., Yuyama I. 1997 “Real-time pickup method for a three-dimensional image based on integral photography,” [Appl. Opt.] Vol.36 P.1598-1603
  • 7. Yeom S., Woo Y.-H., Beak W.-W. 2011 “Distance extraction by means of photon-counting passive sensing combined with integral imaging,” [J. Opt. Soc. Korea] Vol.15 P.357-361
  • 8. Kim S.-C., Kim C.-K., Kim E.-S. 2011 “Depth-of-focus and resolution-enhanced three-dimensional integral imaging with non-uniform lenslets and intermediate-view reconstruction technique,” [3D Research] Vol.2 P.6
  • 9. Li G., Kwon K.-C., Shin G.-H., Jeong J.-S., Yoo K.-H., Kim N. 2012 “Simplified integral imaging pickup method for real objects using a depth camera,” [J. Opt. Soc. Korea] Vol.16 P.381-385
  • 10. Schulein R., DaneshPanah M., Javidi B. 2009 “3D imaging with axially distributed sensing,” [Opt. Lett.] Vol.34 P.2012-2014
  • 11. Xiao X., Javidi B. 2011 “Axially distributed sensing for three-dimensional imaging with unknown sensor positions,” [Opt. Lett.] Vol.36 P.1086-1088
  • 12. Shin D., Javidi B. 2011 “3D visualization of partially occluded objects using axially distributed sensing,” [J. Disp. Technol.] Vol.7 P.223-225
  • 13. Shin D., Javidi B. 2012 “Visualization of 3D objects in scattering medium using axially distributed sensing,” [J. Disp. Technol.] Vol.8 P.317-320
  • 14. Hong S.-P., Shin D., Lee B.-G., Kim E.-S. 2012 “Depth extraction of 3D objects using axially distributed image sensing,” [Opt. Express] Vol.20 P.23044-23052
Images / Tables
  • [ FIG. 1. ]  Procedure of the proposed 3D display method.
  • [ FIG. 2. ]  Pickup process to obtain axially recorded multiple images.
  • [ FIG. 3. ]  Ray model for recording of a 3D object point.
  • [ FIG. 4. ]  (a) Ray relation at the original position of the 3D object point. (b) Ray relation at a different position.
  • [ FIG. 5. ]  Ray mapping for elemental image generation.
  • [ FIG. 6. ]  Optical integral imaging display with large depth.
  • [ FIG. 7. ]  (a) Experimental setup. (b) 1st recorded elemental image. (c) 41st recorded elemental image.
  • [ FIG. 8. ]  (a) Extracted color image. (b) Extracted depth map. (c) Generated elemental images.
  • [ FIG. 9. ]  Experimental results: (a) left view (-8 deg), (b) right view (8 deg).