Depth Extraction of Partially Occluded 3D Objects Using Axially Distributed Stereo Image Sensing
  • CC BY-NC (non-commercial)
ABSTRACT
KEYWORD
Axially distributed image sensing, Depth extraction, Elemental images, Stereo imaging
  • I. INTRODUCTION

    Depth extraction from three-dimensional (3D) objects in the real world is attracting a great deal of interest in diverse fields such as computer vision, 3D display, and 3D recognition [1-3]. In particular, 3D passive imaging makes it possible to extract depth information by recording different perspectives of 3D objects [3-9]. Among 3D passive imaging technologies, axially distributed stereo image sensing (ADSS) has recently been studied [8], in which a stereo camera is translated along its optical axis to collect many elemental image pairs of a 3D scene. The collected images are reconstructed as 3D slice images using a computational reconstruction algorithm based on ray back-projection. ADSS is attractive because it provides high-resolution elemental images and requires only a simple axial movement.

    Partial occlusion has long been one of the challenging problems in 3D image processing. In conventional stereo imaging [1,2], two different perspective images are used to extract 3D information with a corresponding-pixel matching technique. However, it is difficult to obtain accurate 3D information for partially occluded objects, because parts of the object image may be hidden by the occlusion and the correspondences are lost.

    In this paper, we present a computational method for depth extraction of partially occluded 3D objects using ADSS. In the proposed method, high-resolution elemental image pairs (EIPs) are recorded by simply moving the stereo camera along the optical axis, and the recorded EIPs are used to reconstruct 3D slice images with the computational reconstruction algorithm. To extract the depth of a partially occluded 3D object, we apply an edge-enhancement filter and a block matching algorithm to the reconstructed slice image pairs. To demonstrate the proposed method, we carried out preliminary experiments, and the results are presented.

    II. DEPTH EXTRACTION OF PARTIALLY OCCLUDED 3D OBJECTS USING ADSS

      >  A. System Structure

    The proposed depth extraction method using ADSS for partially occluded 3D objects is shown in Fig. 1. It consists of three processes: the ADSS pickup process, the digital reconstruction process, and the depth extraction process.

      >  B. ADSS Pickup

    Fig. 2 shows the recording process of 3D objects in the ADSS system. We record EIPs by moving the stereo camera along its optical axis. We suppose that the optical axis passes through the center of the two cameras, which are separated along the x axis, and that the distance between the imaging lens and the sensor is g. The stereo camera records a pair of elemental images at each position on the z axis, giving two different perspectives of the 3D objects. When the 3D objects are located at a longitudinal distance z0 from the first stereo camera position, as shown in Fig. 2, we capture K EIPs. The kth EIP is recorded at a distance zk = z0 - (k-1)Δz from the object, where Δz is the axial separation between two adjacent stereo camera positions. Since each EIP is captured at a different distance, the object is recorded on each EIP with a different magnification factor.
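The pickup geometry above can be sketched numerically. The helper below (a hypothetical illustration, not the authors' code) lists, for each of the K positions, the object distance zk = z0 - (k-1)Δz and the corresponding lateral magnification g/zk:

```python
def eip_distances_and_magnifications(z0, dz, g, K):
    """Return (z_k, magnification) for each of the K stereo positions.

    z0: distance from the first camera position to the object (mm)
    dz: axial separation between adjacent positions (mm)
    g:  lens-to-sensor distance (mm)
    """
    out = []
    for k in range(1, K + 1):
        z_k = z0 - (k - 1) * dz      # object distance for the kth EIP
        out.append((z_k, g / z_k))   # lateral magnification m_k = g / z_k
    return out

# With the values used later in the experiments (z0 = 430 mm, dz = 5 mm,
# g = 50 mm, K = 41), the magnification grows as the camera approaches:
pairs = eip_distances_and_magnifications(430.0, 5.0, 50.0, 41)
```

This makes the statement above concrete: the last (closest) EIP records the object at a larger magnification than the first.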

      >  C. Digital Reconstruction in ADSS

    The second process of the proposed method is computational reconstruction using the recorded EIPs. Fig. 3 illustrates the digital reconstruction process, which is based on the inverse ray projection model [8]. We suppose that the reconstruction plane is at distance zr. In Fig. 3, we assume that the first camera is located at z = 0 and the kth camera at z = (k-1)Δz. Let EkL and EkR be the left and right images of the kth EIP, respectively. The inversely mapped kth EIP can be represented by

    IkL(x, y, zr) = EkL(x/Mk, y/Mk),  IkR(x, y, zr) = EkR(x/Mk, y/Mk)        (1)

    where

    Mk = zrk / g

    and

    zrk = zr - (k-1)Δz.

    Then, the reconstructed 3D slice image at (x, y, zr) is the summation of all the inversely mapped EIPs:

    IL(x, y, zr) = (1/K) Σ(k=1..K) IkL(x, y, zr),  and similarly IR(x, y, zr) from the IkR.        (2)
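As a rough NumPy illustration (a sketch under simplifying assumptions, not the authors' implementation), each elemental image can be back-projected by sampling it at coordinates scaled by its magnification Mk = (zr - (k-1)Δz)/g, using nearest-neighbour interpolation for brevity, and the results averaged:

```python
import numpy as np

def reconstruct_slice(eis, z_r, dz, g):
    """Average the inversely mapped elemental images at the plane z_r.

    eis: list of K 2-D arrays (one channel of each elemental image),
         all assumed to have the same shape for simplicity.
    """
    h, w = eis[0].shape
    ys, xs = np.mgrid[0:h, 0:w]
    acc = np.zeros((h, w), dtype=np.float64)
    for k, E in enumerate(eis, start=1):
        M = (z_r - (k - 1) * dz) / g          # magnification of the kth EIP
        # inverse map: sample E at (x / M, y / M), clamped to the image
        sx = np.clip((xs / M).astype(int), 0, w - 1)
        sy = np.clip((ys / M).astype(int), 0, h - 1)
        acc += E[sy, sx]
    return acc / len(eis)
```

In a full implementation the rescaled images would be interpolated and cropped to a common field of view; the sketch only conveys the scale-and-sum structure of Eqs. (1) and (2).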

      >  D. Depth Extraction Process

    In the depth extraction process of the proposed method, we extract depth information from the slice images produced by the digital reconstruction process. Each reconstructed slice image is a mixture of focused and blurred image regions, depending on the reconstruction distance.
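Because focused regions carry more high-frequency content than blurred ones, edge enhancement helps isolate them before matching. The paper does not specify its filter; a common generic choice is a 3×3 Laplacian sharpening kernel, sketched here as one plausible option:

```python
import numpy as np

# Laplacian sharpening kernel (weights sum to 1, so flat regions pass through
# unchanged while edges are amplified). This is an assumed filter, not the one
# used in the paper.
LAPLACIAN_SHARPEN = np.array([[ 0, -1,  0],
                              [-1,  5, -1],
                              [ 0, -1,  0]], dtype=np.float64)

def enhance_edges(img, kernel=LAPLACIAN_SHARPEN):
    """Apply a 3x3 sharpening kernel with edge-replicated borders."""
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * pad[dy:dy + h, dx:dx + w]
    return out
```

On a flat (blurred) region the output equals the input, while isolated intensity changes are boosted, which is the behaviour the depth extraction step relies on.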

    The focused image of a 3D object is reconstructed only at the original position of the object, while blurred images appear at other positions. Based on this principle, we find the focused image parts in the reconstructed slice images. To do so, we first apply an edge-enhancement filter to the slice images. Next, we extract depth information from the edge-enhanced images; the depth extraction algorithm used in this paper is shown in Fig. 4. In general, block matching minimizes a measure of matching error between two images. The matching error between the block at position (x, y) in the left image IL and the candidate block at position (x+u, y+v) in the right image IR is usually defined as the sum of absolute differences (SAD):

    SAD(x, y; u, v) = Σ(i=0..B-1) Σ(j=0..B-1) | IL(x+i, y+j) - IR(x+u+i, y+v+j) |        (3)

    where the block size is B×B. The best estimate is then defined as the (u, v) that minimizes the SAD, found by computing and comparing the SADs over all search positions {(x+u, y+v)} in the right image IR. That is,

    (u*, v*) = argmin(u,v) SAD(x, y; u, v).        (4)
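Eqs. (3) and (4) translate directly into a brute-force search. The sketch below (illustrative only; the search-window size is an assumed parameter) computes the SAD of one B×B block against every candidate shift and keeps the minimizer:

```python
import numpy as np

def best_match(I_L, I_R, x, y, B=8, search=16):
    """Return ((u*, v*), SAD) for the BxB block at (x, y) of I_L,
    searched over shifts |u|, |v| <= search in I_R (Eqs. (3)-(4))."""
    block = I_L[y:y + B, x:x + B].astype(np.float64)
    best, best_sad = (0, 0), np.inf
    h, w = I_R.shape
    for v in range(-search, search + 1):
        for u in range(-search, search + 1):
            xx, yy = x + u, y + v
            if xx < 0 or yy < 0 or xx + B > w or yy + B > h:
                continue                      # candidate block leaves the image
            cand = I_R[yy:yy + B, xx:xx + B].astype(np.float64)
            sad = np.abs(block - cand).sum()  # Eq. (3)
            if sad < best_sad:                # Eq. (4): argmin over (u, v)
                best_sad, best = sad, (u, v)
    return best, best_sad
```

Sliding this search over all block positions of the left slice image yields the disparity map from which depth is read off.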

    III. EXPERIMENTS AND RESULTS

    To demonstrate our depth extraction method using ADSS, we performed preliminary experiments with partially occluded 3D objects. The experimental setup is shown in Fig. 5(a). The 3D object is a toy car positioned 430 mm away from the first stereo camera position. The occlusion is a tree located at 300 mm. Two cameras are used to form the stereo camera, with a baseline of 65 mm. Each camera has an image sensor of 2184×1456 pixels. Two lenses with focal length f = 50 mm were used in these experiments, so g becomes 50 mm for the computational reconstructions. The stereo camera was translated in Δz = 5 mm increments for a total of K = 41 EIPs.
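For a rectified stereo pair, the disparity found by the block matcher converts to metric depth via z = f·b/(d·p), where f is the focal length, b the baseline, d the disparity in pixels, and p the pixel pitch. The f = 50 mm and b = 65 mm values come from the setup above; the pixel pitch below is a hypothetical placeholder, since the paper does not state it:

```python
def depth_from_disparity(d_px, f_mm=50.0, b_mm=65.0, pitch_mm=0.01):
    """Depth (mm) from disparity (pixels).

    f_mm and b_mm match the experimental setup; pitch_mm (pixel pitch)
    is an assumed value, not given in the paper.
    """
    return f_mm * b_mm / (d_px * pitch_mm)
```

With these numbers, larger disparities map to nearer objects, which is how the occluding tree and the car separate in depth.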

    Next, to reconstruct slice images of the partially occluded 3D objects, the 41 recorded EIPs were applied to the computational reconstruction algorithm illustrated in Fig. 3. 2D slice images of the 3D objects were obtained at the various reconstruction distances; some of them are shown in Fig. 5(b). The reconstructed slice image is focused on the car object when the reconstruction plane is 430 mm from the sensor, where the 'car' object is originally located.

    Finally, we estimated the depth of the object using the depth extraction process described in Eqs. (3) and (4). We applied the edge-enhancement filter to the slice image shown in Fig. 5(b) and then extracted depth information from the edge-enhanced images with a block size of 8×8. The estimated depths are shown in Fig. 6(b). This result reveals that the proposed method can extract the 3D information of the partially occluded object shown in Fig. 6(a).

    IV. CONCLUSIONS

    In conclusion, we have presented a depth extraction method using ADSS. In the proposed method, high-resolution EIPs were recorded by simply moving the stereo camera along the optical axis, and the recorded EIPs were used to generate a set of 3D slice images with the computational reconstruction algorithm. To extract the depth of a 3D object, an edge-enhancement filter and a block matching algorithm were applied to the slice image pairs. To demonstrate the method, we performed preliminary experiments with partially occluded 3D objects. The experimental results reveal that depth information can be extracted because ADSS provides clear images of partially occluded 3D objects.

REFERENCES
  • 1. Okoshi T., Three-Dimensional Imaging Techniques, 1976.
  • 2. Ku J. S., Lee K. M., Lee S. U., "Multi-image matching for a general motion stereo camera model," Pattern Recognition, vol. 34, pp. 1701-1712, 2001.
  • 3. Stern A., Javidi B., "Three-dimensional image sensing, visualization, and processing using integral imaging," Proceedings of the IEEE, vol. 94, pp. 591-607, 2006.
  • 4. Passalis G., Sgouros N., Athineos S., Theoharis T., "Enhanced reconstruction of three-dimensional shape and texture from integral photography images," Applied Optics, vol. 46, pp. 5311-5320, 2007.
  • 5. Park J. H., Hong K., Lee B., "Recent progress in three-dimensional information processing based on integral imaging," Applied Optics, vol. 48, pp. H77-H94, 2009.
  • 6. DaneshPanah M., Javidi B., Watson E. A., "Three dimensional imaging with randomly distributed sensors," Optics Express, vol. 16, pp. 6368-6377, 2008.
  • 7. Shin D., Javidi B., "3D visualization of partially occluded objects using axially distributed sensing," Journal of Display Technology, vol. 7, pp. 223-225, 2011.
  • 8. Shin D., Javidi B., "Three-dimensional imaging and visualization of partially occluded objects using axially distributed stereo image sensing," Optics Letters, vol. 37, pp. 1394-1396, 2012.
  • 9. Cho M., Shin D., "Depth resolution analysis of axially distributed stereo camera systems under fixed constrained resources," Journal of the Optical Society of Korea, vol. 17, pp. 500-505, 2013.
IMAGES / TABLES
  • [ Fig. 1. ]  Proposed depth extraction method using ADSS.
  • [ Fig. 2. ]  System structure of ADSS pickup process.
  • [ Fig. 3. ]  Ray diagram of digital reconstruction process.
  • [ Fig. 4. ]  Depth extraction process for edge-enhanced images. (a) Left image and (b) right image.
  • [ Fig. 5. ]  (a) Experimental structure. (b) Reconstructed slice images.
  • [ Fig. 6. ]  Depth extraction results for partially occluded car object. (a) 2D image and (b) extracted depth.