Viewing Angle-Improved 3D Integral Imaging Display with Eye Tracking Sensor
  • Non-commercial license (CC BY-NC)
KEYWORDS
Eye tracking, Integral imaging, Projective transformation, 3D space calibration
I. INTRODUCTION

    Recently, integral imaging (InIm) has been considered an effective technology for next-generation three-dimensional (3D) displays [1-10]. In general, the pickup part of InIm is composed of a lenslet array and a two-dimensional (2D) image sensor. The optical rays coming from a 3D object are picked up by the lenslet array and recorded with the 2D image sensor as elemental images, each of which has its own perspective of the 3D object. The display part of InIm, on the other hand, is the reverse of the pickup process: the elemental images are displayed in front of the lenslet array to reconstruct 3D images. InIm has several merits; it can provide both horizontal and vertical parallax, color images, and quasi-continuous views to observers [1-4]. However, it also has the drawbacks of low-resolution 3D images, a narrow viewing angle, and a small depth range, and many researchers have been working to solve these problems [5-10]. As a solution to the abovementioned limitation of the viewing angle, the computer vision technique of eye tracking has been applied to the InIm display system [5]. In that work, the researchers proposed a tracking InIm system with an infrared (IR) camera and IR light-emitting diodes that can track the viewers' exact positions. However, since the user has to wear headgear with IR diodes to indicate his or her position, this system is not suitable for practical applications.

    In this paper, we propose an improved InIm system with an eye tracking method based on the Kinect sensor [11,12]. Such a system requires tracking so that the elemental images can be changed dynamically when the viewer's position changes. In the proposed method, we introduce a 3D space calibration process for obtaining a more exact 3D position of the observer's eyes. The use of eye tracking technology in InIm can provide a wider viewing angle for 3D image observation.

    II. PROPOSED INIM DISPLAY SYSTEM WITH THE KINECT SENSOR

      >  A. System Structure

    Fig. 1 shows the principle of the InIm method. An InIm system is composed of two processes, pickup and display, as shown in Fig. 1(a) and (b), respectively. In InIm, a lenslet array is used in both processes to capture 3D objects and to reconstruct 3D images. In the pickup process, the light rays coming from 3D objects pass through the lenslet array and are recorded by an image sensor as a set of images with different perspectives. These recorded images are referred to as elemental images (EIs). In the reconstruction process, a lenslet array similar to that used in the pickup process is used, and the 3D images are formed at the location where they were picked up by back-propagating the light rays of the EIs through the lenslet array.

    In general, the viewing angle ψ of the InIm system is defined by

    $$\psi = 2\tan^{-1}\!\left(\frac{p}{2g}\right) \qquad (1)$$

    where p is the lens pitch (the diameter of each lens) and g is the gap between the lens array and the display panel.

    From Eq. (1), we can see that the viewing angle is dependent on the diameter of each lens and the gap between the lens array and the display panel. In general, the viewing angle is small because of the use of the lens array with a large f-number. For example, when p = 1 mm and g = 3 mm, the viewing angle becomes 18.925°.
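
    As a quick numerical check of Eq. (1) and the example above, the short Python snippet below (not part of the original paper) reproduces the 18.925° figure for p = 1 mm and g = 3 mm.

```python
import math

def viewing_angle_deg(p_mm: float, g_mm: float) -> float:
    """Viewing angle of Eq. (1): psi = 2 * arctan(p / (2g)), returned in degrees."""
    return math.degrees(2.0 * math.atan(p_mm / (2.0 * g_mm)))

print(viewing_angle_deg(1.0, 3.0))  # ~18.92 degrees, matching the value in the text
```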

    In this study, we improve the viewing angle by using eye tracking technology based on the Kinect sensor. The structure of the proposed system is shown in Fig. 2. The Kinect sensor is placed on top of the display panel to detect the eye position of the observer. The proposed tracking method proceeds as follows: first, for the given InIm system, we calibrate the 3D space so that the exact 3D location of the observer's eyes can be calculated. Second, we find the 3D location of the observer's eyes by using the tracking technology. Third, we generate elemental images that take this eye location into account and display the 3D images at real-time speed.

      >  B. 3D Space Calibration and the Coordinate Conversion Matrix

    In general, a tracked object is represented in the image's 2D coordinate system and displayed on a 2D display. In the real world, however, positions are represented in 3D space. Therefore, we need to calibrate the 3D space and perform a coordinate conversion between the image plane and the real 3D space. This calibration requires the position information of the observer and of the target object; to obtain it, we choose a 3D camera and use it as the tracking system.

    Various 3D cameras and different types of methods have been developed to obtain 3D information in a real 3D space [11-14]. One of the most widely used sensors is the Kinect from Microsoft, a structured IR-light pattern sensor that can provide color and depth information simultaneously [11]. However, the physical separation between the color and depth cameras causes a registration problem. To overcome this problem, we use the Microsoft software development kit (SDK); in particular, the function 'NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution()' provides the color pixel coordinates corresponding to the depth pixels [11]. Fig. 3 shows the processing result. In this way, we obtain color and depth information that are mapped to each other in real time.
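
    As an illustration of how the mapped frames can be fused, the following sketch (not the authors' code) assumes the SDK call has already produced, for every depth pixel, the coordinate of the matching color pixel, stored in a hypothetical array color_coords; the color sample and the depth value are then gathered into one registered frame.

```python
import numpy as np

def fuse_color_depth(color_img, depth_mm, color_coords):
    """color_img: (Hc, Wc, 3); depth_mm: (Hd, Wd); color_coords: (Hd, Wd, 2) of
    (x, y) color-pixel coordinates reported for every depth pixel (illustrative)."""
    h, w = depth_mm.shape
    fused = np.zeros((h, w, 4), dtype=np.float32)
    xs = np.clip(color_coords[..., 0], 0, color_img.shape[1] - 1).astype(int)
    ys = np.clip(color_coords[..., 1], 0, color_img.shape[0] - 1).astype(int)
    fused[..., :3] = color_img[ys, xs]   # color sampled at the mapped coordinate
    fused[..., 3] = depth_mm             # registered depth kept in the last channel
    return fused
```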

    Then, we calibrate the 3D space. To do so, we use a calibrator with a regular pattern so that its feature (corner) points can be detected accurately [14]. In the proposed system, we use a calibrator of the same size as the monitor display, as shown in Fig. 4, which gives the specific dimensions of the calibrator. Next, we extend the monitor's coordinate system by using the calibrator. We mount the Kinect on top of the monitor and capture the calibrator while moving it in steps of 10 cm. The initial distance is 80 cm, which is the shortest depth distance the Kinect sensor can capture, and the maximum distance is 150 cm, chosen because the corner points can still be found accurately at this distance. After capturing the points at each distance, we build a parallel listing of the corner points, i.e., the calibrator's corner points in the image coordinates and the calibrator's physical corner positions in the world coordinates. Fig. 5 shows the corresponding processing result obtained by using OpenGL.
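
    A minimal sketch of how such a paired listing could be assembled at one capture distance is shown below. It assumes the calibrator is a chessboard-style pattern detectable with OpenCV, that the registered depth image is available, and that the corner ordering matches the world grid; pattern_size, square_mm, and capture_distance_mm are illustrative parameters, not the values actually used in the paper.

```python
import cv2
import numpy as np

def collect_correspondences(gray_image, depth_mm, pattern_size=(9, 6),
                            square_mm=100.0, capture_distance_mm=800.0):
    """Return paired lists: image-side corner points (u, v, depth) and the
    calibrator's physical corner positions (X, Y, Z) for one captured frame."""
    found, corners = cv2.findChessboardCorners(gray_image, pattern_size)
    if not found:
        return None
    corners = corners.reshape(-1, 2)                        # detected corners in pixels
    depths = depth_mm[corners[:, 1].astype(int), corners[:, 0].astype(int)]
    image_pts = np.column_stack([corners, depths])          # (u, v, depth) per corner
    xs, ys = np.meshgrid(np.arange(pattern_size[0]), np.arange(pattern_size[1]))
    world_pts = np.column_stack([xs.ravel() * square_mm,    # physical corner grid on the
                                 ys.ravel() * square_mm,    # calibrator plane, placed at
                                 np.full(xs.size, capture_distance_mm)])  # this distance
    return image_pts, world_pts
```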

    Using the listed corner points, we can estimate the projective transformation over the 3D feature points by least squares. For the given n point correspondences, the vector equations are as follows:

    $$\lambda_i \mathbf{b}_i = \mathbf{P}\,\mathbf{a}_i, \qquad i = 1, \ldots, n \qquad (2)$$

    where $\mathbf{a}_i$ and $\mathbf{b}_i$ are the homogeneous coordinates of the i-th pair of corresponding points in the two coordinate systems and $\lambda_i$ is an unknown scale factor.

    In this case, $\lambda_1$ is normally set to 1 [15]. P is the projective transformation matrix; since it acts on 3D points, it is represented by a 4 × 4 matrix as follows:

    $$\mathbf{P} = \begin{bmatrix} p_{11} & p_{12} & p_{13} & p_{14} \\ p_{21} & p_{22} & p_{23} & p_{24} \\ p_{31} & p_{32} & p_{33} & p_{34} \\ p_{41} & p_{42} & p_{43} & p_{44} \end{bmatrix} \qquad (3)$$

    These equations can be simplified as follows:

    $$\mathbf{A}\mathbf{x} = \mathbf{b} \qquad (4)$$

    Here, A is a 4n × (15 + n) matrix and b is a 4n vector; the unknown vector x collects the 16 entries of P together with the remaining scale factors $\lambda_i$. Using the paired and listed correspondences, we calculate x as the least-squares solution given in Eq. (5) and then read P out of x.

    $$\mathbf{x} = \left(\mathbf{A}^{T}\mathbf{A}\right)^{-1}\mathbf{A}^{T}\mathbf{b} \qquad (5)$$
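
    The following NumPy sketch illustrates Eqs. (2)-(5). The ordering of the unknowns in x (the 16 entries of P in row-major order followed by λ2, ..., λn) and the function name are assumptions made here for illustration, not the authors' implementation.

```python
import numpy as np

def estimate_projective_matrix(a_pts: np.ndarray, b_pts: np.ndarray) -> np.ndarray:
    """a_pts, b_pts: (n, 4) homogeneous point pairs with lambda_1 fixed to 1.
    Returns the 4 x 4 projective transformation matrix P."""
    n = a_pts.shape[0]
    A = np.zeros((4 * n, n + 15))           # 16 entries of P + (n - 1) scale factors
    b = np.zeros(4 * n)
    for i in range(n):
        for r in range(4):                  # four equations per correspondence
            row = 4 * i + r
            A[row, 4 * r:4 * r + 4] = a_pts[i]         # coefficients of row r of P
            if i == 0:
                b[row] = b_pts[0, r]                   # lambda_1 = 1 puts b_1 on the RHS
            else:
                A[row, 16 + (i - 1)] = -b_pts[i, r]    # unknown scale lambda_i
    x, *_ = np.linalg.lstsq(A, b, rcond=None)          # Eq. (5): least-squares solution
    return x[:16].reshape(4, 4)                        # P read back in row-major order
```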

    However, the columns of the data matrix have quite different magnitudes, so we need to normalize the values. Normalizing the data before the least-squares estimation is known to yield better results for the projective transformation matrix [16]. In our system, we set the center coordinates of the image to (0, 0). Following Eq. (6), the normalized coordinates range from (−1.0, −1.0) to (1.0, 1.0), where xMax and yMax denote the full resolution of the image.

    $$x' = \frac{x - x_{Max}/2}{x_{Max}/2}, \qquad y' = \frac{y - y_{Max}/2}{y_{Max}/2} \qquad (6)$$
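
    A small helper implementing this normalization (under the assumption that x and y are raw pixel coordinates measured from the image corner) might look like the following:

```python
import numpy as np

def normalize_image_points(pts_xy: np.ndarray, x_max: int, y_max: int) -> np.ndarray:
    """Map (n, 2) pixel coordinates in [0, x_max] x [0, y_max] to [-1, 1] x [-1, 1],
    so that the image center becomes (0, 0) as in Eq. (6)."""
    half = np.array([x_max / 2.0, y_max / 2.0])
    return (pts_xy - half) / half     # (0, 0) -> (-1, -1), (x_max, y_max) -> (1, 1)
```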

    By following this normalization process, we obtain a stable and accurate projective transformation matrix P, which is the coordinate conversion matrix between the Kinect and the monitor.

      >  C. Estimating 3D Eye Position and Its Coordinate Conversion

    The Kinect SDK includes a face tracking library called 'FaceTrackLib,' which can detect a face with 121 vertices and also provides the translation, rotation, and scale factors [11]. Among these, we only need the eye positions, which are described by 18 of the vertices. However, the Kinect SDK provides only the 2D coordinates of these vertices from the tracked face information. Therefore, we use the color-depth mapping described above to obtain the eye's correct depth distance from the Kinect sensor on the basis of the mapped depth information. Fig. 6 illustrates our process for detecting the 3D position of the observer's eyes.
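
    One common way to lift a tracked 2D eye vertex and its registered depth to a 3D point is a pinhole back-projection; the paper does not give the exact formula, so the intrinsic parameters (fx, fy, cx, cy) and the function below are illustrative assumptions.

```python
import numpy as np

def eye_pixel_to_3d(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project a tracked eye vertex (u, v) in depth-image pixels, with its
    registered depth in millimetres, to a 3D point in Kinect coordinates."""
    z = float(depth_mm)
    x = (u - cx) * z / fx             # pinhole model: lateral offsets scale with depth
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```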

    However, the detected eye position is still expressed in the Kinect's own coordinate system. Therefore, we need to convert the coordinates from the Kinect system to the monitor system. Since the projective transformation matrix P was already calculated in the previous section, the eye's 3D coordinates can be converted into the monitor-centered coordinate system by using P. Fig. 7 shows the experimental results: Fig. 7(a) shows the scene in the coordinate system of the Kinect, and Fig. 7(b)-(d) illustrate the result of the conversion into the center coordinates of the monitor system. In this way, we can convert coordinates between the Kinect and the monitor.
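
    A minimal sketch of this conversion, assuming P was estimated as above (the function and variable names are hypothetical), is:

```python
import numpy as np

def kinect_to_monitor(P: np.ndarray, eye_kinect_xyz: np.ndarray) -> np.ndarray:
    """Convert a 3D eye position from Kinect coordinates to monitor-centered
    coordinates with the estimated 4 x 4 matrix P."""
    homogeneous = np.append(eye_kinect_xyz, 1.0)   # promote to a homogeneous 4-vector
    mapped = P @ homogeneous                       # apply the projective transformation
    return mapped[:3] / mapped[3]                  # divide out the scale factor lambda
```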

      >  D. Computational Generation of Elemental Integral Image for InIm Display

    To display 3D images in the InIm system, we need elemental images. These images can be obtained by using either optical pickup or computational pickup. In this study, we use computational pickup, which is suitable for real-time generation of elemental images. For the sake of simplicity, we use two different images, namely a background image and a foreground image, as shown in Fig. 8. Each image is converted into the corresponding elemental images. Then, they are merged into the final elemental images and displayed through the InIm display system. Fig. 8 shows the concept of the generation process of elemental images, and the final generated elemental image for the InIm display is shown on the right side of Fig. 8.
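
    As a rough illustration of the merging step (the exact compositing rule is not specified in the paper), the foreground elemental image can simply overwrite the background one wherever the foreground has content:

```python
import numpy as np

def merge_elemental_images(background_ei: np.ndarray, foreground_ei: np.ndarray) -> np.ndarray:
    """Composite two (H, W, 3) elemental-image arrays: foreground pixels win
    wherever the foreground image has any content (non-black), as a simple rule."""
    mask = foreground_ei.any(axis=2, keepdims=True)       # True where foreground exists
    return np.where(mask, foreground_ei, background_ei)   # else keep the background EI
```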

    III. EXPERIMENTS AND RESULTS

    To show the usefulness of our system, we performed preliminary experiments. We used an ultra-high-definition (UHD) monitor as the display panel and placed the lens array in front of this panel. Kinect was placed at the top of the monitor. Fig. 9 shows an overview of our system environment. The specifications of this system are presented in Table 1.

    [Table 1.] Specifications of the proposed implementation system

    For the given system, we calibrated the 3D space by using the calibrator shown in Fig. 4. We then used the Kinect sensor to track the observer and find the 3D position of the observer's eyes. Based on the 3D position of the tracked eyes, elemental images for two different image planes (background and foreground images) were generated and merged at real-time speed.

    We checked the computational speed of the elemental image generation process for the real-time display system. Table 2 presents the measured generation speeds. The measurement consisted of two steps: first, we measured the generation speed of each single elemental image, and then we measured the speed of generating and merging the two elemental images. We obtained generation speeds of 10 frames per second (FPS) for an image resolution of 1500 × 1500 and 5 FPS for an image resolution of 3000 × 3000.

    [Table 2.] Evaluation of the generation speed of elemental images

    Next, we experimentally measured the viewing angle of the display system. Fig. 10 shows a comparison of the conventional InIm display system and the proposed method. The observer was placed at a distance of 1 m from the display panel. When the observer moves 10 cm to the left of the center of the monitor, parallel to the display, the conventional method produces the flip effect, as shown in Fig. 10(a). In contrast, the proposed method provides an enhanced viewing angle, and there is no flip effect in the displayed image, as shown in Fig. 10(b). Furthermore, with the proposed method there is no flip effect when the observer moves in other directions either, as shown in Fig. 10(d). From the results shown in Fig. 10, we conclude that the proposed system improves the viewing angle compared with the conventional method.

    IV. CONCLUSION

    The proposed system combines an InIm display with a tracking system to overcome the InIm display's limitations of a narrow viewing angle and image flipping. We conducted two rounds of calibration to calibrate the 3D space. We then demonstrated the enhancement of the viewing angle of the InIm display for a dynamically changing eye position by using a face tracking system based on the Kinect. In the experiments, the displayed 3D image could be seen clearly from various positions, and the results revealed that the implemented system effectively overcomes the limitations of the conventional InIm system.

REFERENCES
  • 1. Stern A., Javidi B. 2006 "Three-dimensional image sensing, visualization, and processing using integral imaging" [Proceedings of the IEEE] Vol.94 P.591-607
  • 2. Okano F., Hoshino H., Arai J., Yuyama I. 1997 "Real-time pickup method for a three-dimensional image based on integral photography" [Applied Optics] Vol.36 P.1598-1603
  • 3. Jang J. S., Javidi B. 2002 "Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics" [Optics Letters] Vol.27 P.324-326
  • 4. Kim Y., Hong K., Lee B. 2010 "Recent researches based on integral imaging display method" [3D Research] Vol.1 P.17-27
  • 5. Park G., Jung J. H., Hong K., Kim Y., Kim Y. H., Min S. W., Lee B. 2009 "Multi-viewer tracking integral imaging system and its viewing zone analysis" [Optics Express] Vol.17 P.17895-17908
  • 6. Martinez-Corral M., Javidi B., Martinez-Cuenca R., Saavedra G. 2004 "Integral imaging with improved depth of field by use of amplitude-modulated microlens arrays" [Applied Optics] Vol.43 P.5806-5813
  • 7. Park J. H., Kim J., Kim Y., Lee B. 2005 "Resolution-enhanced three-dimension/two-dimension convertible display based on integral imaging" [Optics Express] Vol.13 P.1875-1884
  • 8. Shin D. H., Lee S. H., Kim E. S. 2007 "Optical display of true 3D objects in depth-priority integral imaging using an active sensor" [Optics Communications] Vol.275 P.330-334
  • 9. Jang J. Y., Shin D., Lee B. G., Kim E. S. 2014 "Multi-projection integral imaging by use of a convex mirror array" [Optics Letters] Vol.39 P.2853-2856
  • 10. Oh Y., Shin D., Lee B. G., Jeong S. I., Choi H. J. 2014 "Resolution-enhanced integral imaging in focal mode with a time-multiplexed electrical mask array" [Optics Express] Vol.22 P.17620-17629
  • 11. Kinect for Windows SDK [Internet]
  • 12. Kramer J., Burrus N., Echtler F., Daniel H. C., Parker M. 2012 "Multiple Kinects," in Hacking the Kinect. P.207-246
  • 13. Winscape [Internet]
  • 14. Bradski G., Kaehler A. 2008 Learning OpenCV: Computer Vision with the OpenCV Library.
  • 15. Zhang Z. 2010 "Estimating projective transformation matrix (collineation, homography)" Microsoft Research
  • 16. Hartley R. I. 1997 "In defense of the eight-point algorithm" [IEEE Transactions on Pattern Analysis and Machine Intelligence] Vol.19 P.580-593
IMAGES / TABLES
  • [Fig. 1.] Integral imaging system: (a) pickup and (b) display.
  • [Fig. 2.] System structure.
  • [Fig. 3.] Mapping the color and depth information with the given function from the Kinect software development kit (SDK).
  • [Fig. 4.] Specifications of the calibrator.
  • [Fig. 5.] (a) Extending the monitor's coordinate system and (b) the corresponding processing result.
  • [Fig. 6.] Finding the 3D position of both eyes by using the face tracking information.
  • [Fig. 7.] Coordinate conversion results between the Kinect (a) and the monitor (b-d).
  • [Fig. 8.] Result of merging two different elemental images.
  • [Fig. 9.] Overview of the proposed implementation system.
  • [Table 1.] Specifications of the proposed implementation system.
  • [Table 2.] Evaluation of the generation speed of elemental images.
  • [Fig. 10.] Results of a comparison of the conventional method and the proposed method. (a) Left view (conventional method). (b) Left view (proposed method). (c) Top view (conventional method). (d) Top view (proposed method).