3D Visualization of Partially Occluded Objects Using Axially Distributed Image Sensing With a Wide-Angle Lens
  • CC BY-NC (Non-commercial)
KEYWORDS
3D imaging, Axially distributed sensing, Camera calibration, Wide-angle lens
  • I. INTRODUCTION

    The visualization of partially occluded 3D objects has long been considered one of the most challenging problems in the 3D-vision field [1, 2]. To address it, several multiperspective imaging approaches, including integral imaging and axially distributed image sensing (ADS), have been studied [3-9]. Integral imaging uses a planar pickup grid or a camera array. In contrast, the ADS method, implemented by translating a camera along its optical axis, records elemental images (EIs) from which plane images can be digitally refocused for 3D visualization of partially occluded objects [10-14]. This method provides a relatively simple architecture for capturing the longitudinal perspective information of a 3D object.

    However, the performance of the ADS method depends on how far an object lies from the optical axis: objects located close to the axis yield relatively little parallax information. Wide-area elemental images are therefore needed to reconstruct better 3D slice images over a large field of view (FOV).

    In this paper, in order to capture a wide-area scene of 3D objects, we propose axially distributed image sensing using a wide-angle lens (WAL). With this type of lens we can collect a large amount of parallax information. The wide-area EIs are recorded by translating the wide-angle camera along its optical axis. These EIs are then calibrated to compensate for radial distortion. With the calibrated EIs, we generate volumetric slice images using a computational reconstruction algorithm based on ray back-projection. To verify our idea, we carried out optical experiments to visualize a partially occluded 3D object.

    II. SYSTEM CONFIGURATION

    In general, a camera calibration process is needed for images captured with a camera fitted with a WAL. Therefore, to visualize correct 3D slice images in our ADS method, we introduce a camera calibration process to generate calibrated elemental images. Such a calibration process has not previously been applied to the conventional ADS system.

    Figure 1 shows the scheme of the proposed ADS system with a WAL. It is composed of three different subsystems: (1) ADS pickup, (2) a calibration process for elemental images, and (3) a digital reconstruction process.

       2.1. ADS Pickup Process

    The ADS pickup of 3D objects in the proposed method is shown in Fig. 2. In contrast to the conventional method, a camera mounted with a WAL is translated along the optical axis. Let us define the focal length of the WAL as f. When 3D objects are located at a distance Z − z1 away from the first camera position, the wide-area EIs are captured by moving the camera along the optical axis. A total of K EIs can be recorded by moving the wide-angle camera K − 1 times. Here Δz is the separation between two adjacent camera positions, so the kth EI is recorded at the camera position zk = z1 + (k − 1)Δz. Since we capture each EI at a different camera position, each contains the object's image at a different scale.
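    As a small illustration, the camera positions follow directly from the relation zk = z1 + (k − 1)Δz. The Python sketch below computes them, using the pickup parameters of the experiment in Section III (the variable names are ours):

```python
import numpy as np

# Pickup parameters taken from Section III; names are illustrative.
z1 = 0.0       # first camera position along the optical axis [mm]
delta_z = 1.0  # separation between adjacent camera positions [mm]
K = 150        # total number of elemental images

# z_k = z_1 + (k - 1) * delta_z, for k = 1 .. K
z = z1 + np.arange(K) * delta_z
print(z[0], z[-1])  # 0.0 mm to 149.0 mm: total displacement of 149 mm
```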

       2.2. Calibration Process

    In a typical imaging system, lens distortion can usually be classified into three types: radial distortion, decentering distortion, and thin prism distortion [15]. For most lenses, however, the radial component is predominant. We therefore assume that our WAL produces predominantly radial distortion, and ignore the other distortions in a recorded image. The image distortion must be corrected by a calibration process before the digital reconstruction used in the proposed method. Our calibration process is composed of two steps. In the first step, the radial distortion model is considered. We suppose that the center of distortion is (cx,cy) in the recorded image with radial distortion. Let Id be the distorted image and Iu the undistorted image. To correct the distorted image, the distorted point located at (xd,yd) in Id has to move to the undistorted point at (xu,yu) in Iu. If rd and ru are respectively defined as the distance between (cx,cy) and (xd,yd) and the distance between (cx,cy) and (xu,yu), the coordinates (xu,yu) can be calculated by [16, 17]

    $$x_u = c_x + (x_d - c_x)\left(1 + k_1 r_d^2 + k_2 r_d^4 + k_3 r_d^6\right), \qquad y_u = c_y + (y_d - c_y)\left(1 + k_1 r_d^2 + k_2 r_d^4 + k_3 r_d^6\right) \tag{1}$$

    From Eq. (1) we can see that the distortion model has a set of five parameters Θd = [cx, cy, k1, k2, k3].

    In the second step, the point with coordinates (xu,yu) is projected to a new point (xp,yp) in the desired image using a projective transformation, which is the most general transformation that maps lines into lines. The new coordinates of (xp,yp) are given by [16]

    $$x_p = \frac{m_0 x_u + m_1 y_u + m_2}{m_6 x_u + m_7 y_u + 1}, \qquad y_p = \frac{m_3 x_u + m_4 y_u + m_5}{m_6 x_u + m_7 y_u + 1} \tag{2}$$

    Here, it is seen that the projection parameters are Θp = [m0, m1, m2, m3, m4, m5, m6, m7]. Therefore, the parameter sets Θd and Θp must be found to obtain the corrected images.
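    To make the two mappings concrete, the following Python sketch implements Eqs. (1) and (2) as reconstructed above. The function names are ours, and the parameter values would come from the calibration step described next; this is an illustration, not the paper's implementation:

```python
import numpy as np

def undistort_points(xd, yd, theta_d):
    """Radial distortion correction, Eq. (1): scale points about the
    distortion center (cx, cy) by a polynomial in the distorted radius."""
    cx, cy, k1, k2, k3 = theta_d
    rd2 = (xd - cx) ** 2 + (yd - cy) ** 2          # squared radius r_d^2
    s = 1.0 + k1 * rd2 + k2 * rd2 ** 2 + k3 * rd2 ** 3
    return cx + (xd - cx) * s, cy + (yd - cy) * s  # (xu, yu)

def project_points(xu, yu, theta_p):
    """Projective transformation, Eq. (2), with parameters m0..m7."""
    m0, m1, m2, m3, m4, m5, m6, m7 = theta_p
    w = m6 * xu + m7 * yu + 1.0
    return (m0 * xu + m1 * yu + m2) / w, (m3 * xu + m4 * yu + m5) / w
```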

    Before recording 3D objects, we must find the two parameter sets Θd and Θp for a given system. To do so, a chessboard pattern is used to apply the point-correspondences method [16]. Figure 3 shows the ADS pickup of the chessboard pattern for the calibration process. With the chessboard pattern located at a fixed position, EIs are recorded by moving the wide-angle camera through its total range of motion, as shown in Fig. 3. The recorded EIs clearly exhibit radial image distortion. In this paper, the chessboard image is used to verify that the coordinate mapping of the distorted elemental images corrects the coordinates properly. The flowchart of the calibration process, based on the recorded chessboard images, is shown in Fig. 4. The first step is to extract the corner feature points from the recorded chessboard pattern. Using the extracted feature points, we recover the mapping from the distorted EIs to the undistorted EIs. Applying the Gauss-Newton method, we find the two parameter sets Θd and Θp for the radial distortion and projective transformation [16]. The computed parameters are then used to correct the image distortion in the wide-area EIs of the desired 3D objects. After repeating this computation for each EI of the chessboard pattern, we store the resulting sets of calibration parameters in the computer. Based on the stored parameter sets, the recorded EIs of 3D objects are corrected.
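    As a sketch of the parameter search, the two sets Θd and Θp can be stacked into a single vector and fitted to the corner correspondences by nonlinear least squares. The paper uses the Gauss-Newton method of [16]; the illustration below substitutes SciPy's Levenberg-Marquardt solver (a damped Gauss-Newton variant) and reuses the two functions from the previous sketch:

```python
import numpy as np
from scipy.optimize import least_squares
# Uses undistort_points / project_points from the sketch above.

def residuals(params, pts_d, pts_ideal):
    """Reprojection residual between corrected chessboard corners and the
    ideal grid. params = [cx, cy, k1, k2, k3, m0, ..., m7] (13 values)."""
    theta_d, theta_p = params[:5], params[5:]
    xu, yu = undistort_points(pts_d[:, 0], pts_d[:, 1], theta_d)
    xp, yp = project_points(xu, yu, theta_p)
    return np.concatenate([xp - pts_ideal[:, 0], yp - pts_ideal[:, 1]])

def calibrate(pts_d, pts_ideal, width, height):
    """Fit (theta_d, theta_p) from detected corners pts_d (N x 2) and the
    corresponding ideal grid positions pts_ideal (N x 2)."""
    # Initial guess: distortion center at the image center, zero distortion,
    # identity projective transform.
    x0 = np.array([width / 2, height / 2, 0, 0, 0,
                   1, 0, 0, 0, 1, 0, 0, 0], dtype=float)
    fit = least_squares(residuals, x0, args=(pts_d, pts_ideal), method="lm")
    return fit.x[:5], fit.x[5:]  # theta_d, theta_p
```

    One pair of parameter sets would be estimated per camera position, mirroring the per-EI parameter storage described above.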

       2.3. Digital Reconstruction Process

    The final process of our wide-angle ADS method is digital reconstruction using the calibrated EIs described in Section 2.2. In this process we generate a slice-plane image according to the reconstruction distance. Figure 5 shows the digital reconstruction process, which is based on an inverse-mapping procedure through a pinhole model [7]. Each wide-angle camera is modeled as a pinhole camera with its calibrated EI located at a distance g from the pinhole. We assume that the reconstruction plane is located at a distance z = L. Each EI is inversely projected through its corresponding pinhole onto the reconstruction plane at L; the ith inversely projected EI is magnified by Mi = (L − zi)/g. At the reconstruction plane, all inversely mapped EIs are superimposed upon each other with their different magnification factors. In Fig. 5, Ei is the ith EI with a size of p × q, where p and q are the pixel counts for width and height, and IL is the superposition of all the inversely mapped EIs at the reconstruction plane L. IL can be calculated by the following equation:

    $$I_L(x, y) = \frac{1}{K}\sum_{i=1}^{K} U_i\{E_i\}(x, y) \tag{3}$$

    where Ui is the upsampling factor for magnification of Ei at the reconstruction plane L, and the size of IL is M1p × M1q.

    To reduce the computational load imposed by the large magnification factors, Eq. (3) is modified by introducing a downsampling operator Dr, which reduces the image by a factor of r. The superimposed image is then given by

    $$I_L(x, y) = \frac{1}{K}\sum_{i=1}^{K} D_r\{U_i\{E_i\}\}(x, y) \tag{4}$$

    In Eq. (4), IL is the reconstructed plane image after superimposing all EIs at the reconstruction distance L. To generate 3D volume information, we reconstruct plane images for the desired depth range by repeating the digital reconstruction process over the given range of distances.
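    The reconstruction of Eqs. (3) and (4) can be sketched as follows: each calibrated EI is magnified by Mi = (L − zi)/g, the magnified images are accumulated on a common plane, and the sum is averaged; the downsampling factor r simply reduces the working resolution. This is a minimal Python illustration (OpenCV resizing and centered superposition are our choices, assuming z1 is the position farthest from the objects so that M1 is the largest factor):

```python
import numpy as np
import cv2  # OpenCV, used here only for resampling

def reconstruct_plane(eis, z, g, L, r=1):
    """Slice image I_L by ray back-projection, Eqs. (3)-(4).

    eis : list of calibrated elemental images (grayscale, q x p pixels)
    z   : camera position z_i of each EI along the optical axis
    g   : distance from each pinhole to its elemental image
    L   : reconstruction distance
    r   : downsampling factor of Eq. (4); r = 1 reproduces Eq. (3)
    """
    h, w = eis[0].shape[:2]
    M = [(L - zi) / g for zi in z]                    # magnification M_i
    H, W = int(round(M[0] * h / r)), int(round(M[0] * w / r))
    acc = np.zeros((H, W), np.float64)
    for ei, Mi in zip(eis, M):
        mh, mw = int(round(Mi * h / r)), int(round(Mi * w / r))
        mag = cv2.resize(ei.astype(np.float64), (mw, mh),
                         interpolation=cv2.INTER_LINEAR)
        y0, x0 = (H - mh) // 2, (W - mw) // 2         # center on the plane
        acc[y0:y0 + mh, x0:x0 + mw] += mag            # superimpose
    return acc / len(eis)                             # averaged slice image I_L

# Repeating the call over a range of L values yields the volumetric slices.
```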

    III. EXPERIMENTS AND RESULTS

    We performed preliminary experiments to demonstrate our proposed ADS system for visualization of partially occluded objects. Figure 6 shows the experimental setup we implemented. As shown in Fig. 6, we captured two scenes at the same time. The first scene contains a single object, a chessboard pattern with a square size of 100 mm × 100 mm. The second scene contains two objects, a tree as the occluder and 'DSU' letter objects with a letter size of 100 mm × 70 mm, to demonstrate visualization of a partially occluded object. The chessboard pattern and the 'DSU' letters are located 350 mm from the first wide-angle camera position, as shown in Fig. 6. The occluder is located 150 mm in front of the 'DSU' object.

    We used a 1/4.5-inch CMOS camera with a resolution of 640 × 480 pixels. The WAL has a focal length f = 1.79 mm and a maximum FOV angle of 131°. The wide-angle camera was translated in Δz = 1 mm increments, giving a total of K = 150 EIs over a total displacement of 149 mm. Examples of the recorded EIs are shown in Fig. 7.

    After recording the EIs with the wide-angle camera, we applied the calibration process to them; each EI was corrected using its corresponding calibration parameters. The computed parameters for the radial distortion and projective transformation (Θd and Θp) are shown in Tables 1 and 2, respectively. The calibrated EIs are shown in Fig. 8. From the result in the top left of Fig. 8, it can be seen that the projection parameters were computed correctly. Based on these parameters, the EIs of the 3D objects were calibrated. In our calibration process the calibrated images were cropped for the subsequent digital reconstruction.

    [TABLE 1.] Computed parameters Θd for radial distortion

    [TABLE 2.] Computed parameters Θp for projective transformation

    With the calibrated EIs of Fig. 8, we reconstructed slice-plane images of the 3D objects at different reconstruction distances. The 150 calibrated EIs were used in the digital reconstruction algorithm employing Eq. (4). The slice image at the original position of the 3D objects is shown in Fig. 9. For comparison, we include the result of the conventional ADS method without a calibration process. From the experimental results, we can see that our method successfully visualizes a partially occluded object.

    IV. CONCLUSION

    In conclusion, we have presented a wide-angle ADS system to capture a wide-area scene of 3D objects. Using a WAL we can collect a large amount of parallax information for a large scene. A calibration process was introduced to compensate for the image distortion caused by this type of lens. We performed a preliminary experiment on partially occluded 3D objects and successfully demonstrated our idea.

REFERENCES
  • 1. Stern A., Javidi B. 2006, "Three-dimensional image sensing, visualization, and processing using integral imaging," Proc. IEEE 94, 591-607.
  • 2. Park J.-H., Hong K., Lee B. 2009, "Recent progress in three-dimensional information processing based on integral imaging," Appl. Opt. 48, H77-H94.
  • 3. Hong S.-H., Javidi B. 2005, "Three-dimensional visualization of partially occluded objects using integral imaging," J. Display Technol. 1, 354.
  • 4. DaneshPanah M., Javidi B., Watson E. A. 2008, "Three dimensional imaging with randomly distributed sensors," Opt. Express 16, 6368-6377.
  • 5. Maycock J., McElhinney C. P., Hennelly B. M., Naughton T. J., McDonald J. B., Javidi B. 2006, "Reconstruction of partially occluded objects encoded in three-dimensional scenes by using digital holograms," Appl. Opt. 45, 2975-2985.
  • 6. Shin D.-H., Lee B.-G., Lee J.-J. 2008, "Occlusion removal method of partially occluded 3D object using sub-image block matching in computational integral imaging," Opt. Express 16, 16294-16304.
  • 7. Zhou Z., Yuan Y., Bin X., Wang Q. 2011, "Enhanced reconstruction of partially occluded objects with occlusion removal in synthetic aperture integral imaging," Chin. Opt. Lett. 9, 041002.
  • 8. Yeom S.-W., Woo Y.-H., Baek W.-W. 2011, "Distance extraction by means of photon-counting passive sensing combined with integral imaging," Journal of the Optical Society of Korea 15, 357-361.
  • 9. Rivenson Y., Rot A., Balber S., Stern A., Rosen J. 2012, "Recovery of partially occluded objects by applying compressive Fresnel holography," Opt. Lett. 37, 1757-1759.
  • 10. Shin D., Javidi B. 2011, "3D visualization of partially occluded objects using axially distributed sensing," J. Display Technol. 7, 223-225.
  • 11. Shin D., Javidi B. 2012, "Three-dimensional imaging and visualization of partially occluded objects using axially distributed stereo image sensing," Opt. Lett. 37, 1394-1396.
  • 12. Hong S.-P., Shin D., Lee B.-G., Kim E.-S. 2012, "Depth extraction of 3D objects using axially distributed image sensing," Opt. Express 20, 23044-23052.
  • 13. Piao Y., Zhang M., Shin D., Yoo H. 2013, "Three-dimensional imaging and visualization using off-axially distributed image sensing," Opt. Lett. 38, 3162-3164.
  • 14. Cho M., Shin D. 2013, "3D integral imaging display using axially recorded multiple images," Journal of the Optical Society of Korea 17, 410-414.
  • 15. Stein G. P. 1997, "Lens distortion calibration using point correspondences," Proc. CVPR, 602-608.
  • 16. Romero L., Gomez C., Stolkin R. 2007, "Correcting radial distortion of cameras with wide angle lens using point correspondences," in Scene Reconstruction, Pose Estimation and Tracking, 530.
  • 17. Kim N.-W., Lee S.-J., Lee B.-G., Lee J.-J. 2007, "Vision based laser pointer interaction for flexible screens," Lecture Notes in Computer Science 4551, 845-853.
Images / Tables
  • [FIG. 1.] Scheme of the proposed ADS method.
  • [FIG. 2.] Pickup process for 3D objects by moving a camera with a wide-angle lens according to the proposed ADS method.
  • [FIG. 3.] Optical pickup process to capture the elemental images, using a chessboard pattern for camera calibration.
  • [FIG. 4.] Flowchart of the calibration process to find the best parameters for the radial and projective transformations.
  • [FIG. 5.] Digital reconstruction process based on ray back-projection in the proposed ADS method.
  • [FIG. 6.] Experimental setup to capture the elemental images of 3D objects.
  • [FIG. 7.] Examples of the recorded elemental images with image distortion: (a) chessboard pattern for the calibration process, (b) 3D objects.
  • [TABLE 1.] Computed parameters Θd for radial distortion.
  • [TABLE 2.] Computed parameters Θp for projective transformation.
  • [FIG. 8.] Experimental results: (a) 150th recorded elemental image of the chessboard pattern and 3D objects before the calibration process, (b) 150th calibrated elemental image of the chessboard pattern and 3D objects after the calibration process.
  • [FIG. 9.] 3D slice images reconstructed at the original position of the 'DSU' object: (a) conventional method without calibration process, (b) proposed method with calibration process.