Computational Integral Imaging Reconstruction of a Partially Occluded Three-Dimensional Object Using an Image Inpainting Technique
  • License: CC BY-NC (non-commercial)
ABSTRACT
KEYWORDS
Integral imaging, Elemental images, Occlusion removal, Image inpainting, 3D visualization
I. INTRODUCTION

Computational integral imaging (CII) can capture and reconstruct partially occluded 3D objects for 3D visualization and recognition [1-19]. CII consists of a pickup process followed by a computational integral imaging reconstruction (CIIR) process. In the pickup process, a 3D object is recorded as elemental images through a lenslet array. In CIIR, the elemental images are processed digitally, so that 3D images can easily be reconstructed at any desired reconstruction plane. This CII approach is very useful for the visualization and recognition of 3D objects that are partially occluded by interposed objects [13-20].

In the CII method for partially occluded 3D objects, the generally uncharacterized occlusion seriously degrades the resolution of the computationally reconstructed plane images, since it hides the 3D object to be recognized. Studies addressing this problem [14-18] have been based on removing the occlusion from the recorded elemental images to obtain plane images of good visual quality. However, after the occluding object has been removed, corresponding holes remain in the elemental images, which may prevent good visual quality in the reconstructed images. In 2010, Jung et al. proposed an interesting reconstruction method for occlusion holes using optical flow and triangular mesh reconstruction [18]. Even though reconstructed images can be obtained with high accuracy, the depth-extraction algorithm is quite complex, consisting of image rectification, image-sequence generation, depth segmentation, depth extraction, and triangular mesh reconstruction. This may impose high computational cost and slow reconstruction speed.

In this paper we propose an improved CII method that utilizes an image inpainting technique for visualization of a partially occluded object. The inpainting technique, which is widely used in two-dimensional image and video processing, is a simple and robust method that allows for real-time calculation [21, 22]. The proposed method fills in the missing data of the occluded region using information from the neighboring pixels, so that the occluded region is recovered. With the missing data filled in, CIIR can reconstruct a 3D image that is faithful to the original, unoccluded 3D image. To verify the validity of the proposed system, we perform preliminary experiments in imaging faces and present the results.

    II. CIIR METHOD USING IMAGE INPAINTING

    We want to improve the visual quality of the 3D plane images obtained from the CIIR process, for applications involving 3D visualization and recognition of partially occluded objects. To do so, we use both an occlusion removal technique [20] and an image inpainting technique [21] to generate modified elemental images without occlusion. The proposed method is mainly composed of four different processes, as shown in Fig. 1.

       2.1. Pickup of a Partially Occluded 3D Object

The first process is the pickup of the partially occluded 3D object, as shown in Fig. 2(a). The 3D object of interest and the occluding object are located at two different, arbitrary distances. They are recorded through the lenslet array using a digital camera; these recordings are called elemental images.

       2.2. Estimation of the Occluded Area

    The second process in the proposed method is the estimation of the occluded area in the elemental images. Here we want to estimate the position of the occlusion to generate the weight mask pattern used in the next process. To do this, the occlusion is estimated by the method in [20], which uses two different kinds of CIIR. In this method two different sets of plane images are generated, and the absolute differences between the two plane images are calculated. In this paper, the CIIR methods based respectively on square mapping and round mapping are used. These CIIR methods are well described in Ref. [20].

    The main principle of depth estimation is as follows: The elemental images are processed by the two different CIIR methods, as shown in Fig. 3. Then we can generate two different plane image sets, using the square mapping and round mapping methods. After obtaining the plane images we calculate the absolute difference images between the two plane image sets, as shown in Fig. 3. Next we use the difference images to estimate the depth of the occlusion, and then we find the best estimate of the depth of the occlusion, based on the fact that the plane image reconstructed at the original position of a 3D object is obtained clearly, regardless of the mapping model. That is, the differential image for a clearly reconstructed plane image is composed of many low-intensity pixels; on the other hand, the differential image for a blurred plane image has many high-intensity pixels. Based on this principle, we can estimate the depth of a 3D object by searching for the minimum pixel intensity among the differential images. Finally, we can find the occlusion’s position and shape from the estimated depth using a proper threshold value, and generate the mask’s elemental images via the computational pickup process of integral imaging [15].
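To make this search concrete, a minimal Python sketch is given below (hypothetical function and variable names; the two plane-image stacks are assumed to have been generated already by the square- and round-mapping CIIR methods):

```python
import numpy as np

def estimate_occlusion_depth(planes_square, planes_round, depths):
    """Pick the depth at which the two CIIR reconstructions agree best.

    planes_square, planes_round: (n_depths, H, W) plane-image stacks from the
    square- and round-mapping CIIR methods; depths: candidate distances.
    """
    diff = np.abs(planes_square.astype(float) - planes_round.astype(float))
    # A clearly reconstructed plane yields a low-intensity difference image,
    # so the depth with the minimum mean difference is the best estimate.
    scores = diff.mean(axis=(1, 2))
    k = int(np.argmin(scores))
    return depths[k], diff[k]
```

Thresholding then isolates the occlusion's position and shape, as described above.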

       2.3. CIIR Using Image Inpainting

As the next step, we introduce an image inpainting technique to the recorded elemental images. Image inpainting refers to the technique that automatically reconstructs a region from which image information is missing [21]. This technique uses information from the boundary that surrounds the missing region. The central problem of image inpainting can be divided into two main parts: first, how to define the missing region, and second, how to utilize the boundary information to fill in the missing region. In this study we use the image inpainting technique to fill in the unwanted occluded region with reliable data, using the data from the outer boundary of the occluded region [21]. Since the result of image inpainting depends on the boundary data, it is crucial to define the outer boundary well. We use the following steps to obtain a reliable boundary.

1. Perform thresholding on the depth map u obtained by the method in Section 2.2, to obtain a binary map that describes the occluded region Ω.
2. Perform a binary dilation operation on the binary map, to obtain an extended occluded region Ω+.
3. Obtain the boundary ∂Ω+ of the region Ω+.

    The first step, i.e. obtaining the occluded region Ω, is accomplished using the following thresholding process:

$$\Omega(x, y) = H\big(\max\left(u(x, y) - Th,\ 0\right)\big) \tag{1}$$

where max(a, b) is the operator that keeps the greater of the arguments a and b, Th is a predefined threshold value, and H(·) is the Heaviside step function, defined below:

$$H(t) = \begin{cases} 1, & t > 0 \\ 0, & t \le 0 \end{cases} \tag{2}$$

    Equation (1) extracts the region for which the depth map has a value larger than the predefined threshold value Th.

    After obtaining the binary domain Ω, we dilate it to obtain the region Ω+:

$$\Omega^{+} = \Omega \oplus S \tag{3}$$

where ⊕ denotes binary (morphological) dilation and S is the structuring element.

Finally, the boundary ∂Ω+ can be obtained by performing any kind of edge-detection algorithm on Ω+, e.g. Canny edge detection:

$$\partial\Omega^{+} = \mathrm{edge}\big(\Omega^{+}\big) \tag{4}$$

where edge(·) denotes the chosen edge-detection operator.

The reason to extract the boundary of the dilated region Ω+, rather than of the region Ω, is to ensure that the boundary contains no data from the occluding object, i.e. to ensure that only reliable data is painted into the occluded region. Figure 4 illustrates the relationships between u, Ω, Ω+, and the extended boundary ∂Ω+.
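The three steps above map directly onto standard morphological routines. The following Python sketch (threshold and dilation settings are illustrative; a morphological boundary stands in for the Canny detector mentioned above) realizes Eqs. (1)-(4):

```python
import numpy as np
from scipy import ndimage

def occlusion_mask(u, th, dilate_iter=2):
    """Build the regions of Eqs. (1)-(4) from a depth map u."""
    omega = u > th                                      # Eq. (1): thresholding
    omega_plus = ndimage.binary_dilation(
        omega, iterations=dilate_iter)                  # Eq. (3): dilation
    # Eq. (4): boundary of the dilated region, here taken as the set
    # difference between the region and its erosion instead of an edge detector.
    boundary = omega_plus & ~ndimage.binary_erosion(omega_plus)
    return omega, omega_plus, boundary
```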

With Ω+ and its boundary ∂Ω+ obtained, the image inpainting process can now be applied to fill in the missing region.

In other words, we fill in the region of the elemental image array corresponding to the region Ω+ of the mask image, using the data from the extended boundary ∂Ω+. We utilize the method proposed in Ref. [21], a fast and robust smoothing method based on the discrete cosine transform (DCT). Let y denote the elemental image array with missing values in the occluded region, and let ŷ denote the elemental image array with the missing region filled in with reliable data via the following inpainting procedure. Inpainting is performed by minimizing the following energy function with respect to ŷ:

$$F(\hat{y}) = \big\| W^{1/2}(\hat{y} - y) \big\|^{2}_{B(\Omega^{+})} + s\,\big\| D\hat{y} \big\|^{2}_{B(\Omega^{+})} \tag{5}$$

    Here D is a tridiagonal square matrix for performing a second-order divided difference, i.e.

$$D = \begin{bmatrix} -1 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -1 \end{bmatrix} \tag{6}$$

and W is a diagonal weight matrix containing the weight values W_{i,i} ∈ [0, 1] in its diagonal elements. These elements take the value 0 if the corresponding pixel in y is inside the occluded region, and the value 1 otherwise. Thus, referring to Eq. (1), W_{i,i} can be expressed as

$$W_{i,i} = 1 - H\big(\max(u_i - Th,\ 0)\big) \tag{7}$$

where u_i denotes the depth-map value at the pixel corresponding to index i.

The norm ‖·‖_{B(Ω+)} is the L2 norm calculated over the region B(Ω+), which is the union of all subregions containing the region Ω+ in every elemental image; it is smaller than the entire elemental image array region. We calculate over a region smaller than the whole array in order to reduce the computation time. The minimization of the functional in Eq. (5) finds a solution that fills in the missing region in y.

The parameter s is a real, positive scalar that controls the degree of smoothing. It is difficult to choose the value of s automatically, and choosing s is the main issue in the work of Ref. [21]. The cost of determining s is high, since it cannot be calculated via the DCT when the weight matrix is used. However, when there is no abrupt noise in the data, the smoothing is not very sensitive to s, so there is little need to adjust it precisely. We therefore simply fix the value of s to 1.

The minimizer ŷ can be obtained by a DCT-based iterative method:

$$\hat{y}^{(k+1)} = \mathrm{IDCT}_N\Big( \Gamma_N \circ \mathrm{DCT}_N\big( W \circ (y - \hat{y}^{(k)}) + \hat{y}^{(k)} \big) \Big) \tag{8}$$

where DCT_N and IDCT_N denote the N-dimensional discrete cosine transform and inverse discrete cosine transform respectively, k denotes the kth iteration step, and ∘ stands for the Schur (elementwise) product. Γ_N is a tensor of rank N defined by

$$\Gamma_N = \mathbf{1}_N \div \big( \mathbf{1}_N + s\,\Lambda_N \circ \Lambda_N \big) \tag{9}$$

where the operator ÷ denotes element-by-element division, 1_N is a rank-N tensor of ones, and Λ_N is a rank-N tensor with elements

$$\big(\Lambda_N\big)_{i_1,\dots,i_N} = \sum_{j=1}^{N} \left( -2 + 2\cos\frac{(i_j - 1)\pi}{n_j} \right) \tag{10}$$

where n_j denotes the size of Λ_N along the jth dimension.
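For concreteness, the sketch below implements the iteration of Eqs. (8)-(10) for a single 2D image using SciPy's DCT routines; the initial guess and iteration count are our own illustrative choices, and s is fixed to 1 as in the text:

```python
import numpy as np
from scipy.fft import dctn, idctn

def inpaint_dct(y, w, s=1.0, n_iter=100):
    """Fill the region where w == 0 via the DCT-based iteration of Eq. (8).

    y: 2D image with unreliable values in the occluded region.
    w: weight array per Eq. (7), 0 inside the occluded region and 1 outside.
    """
    n1, n2 = y.shape
    # Eq. (10): eigenvalues of the second-difference operator, per dimension.
    lam = (-2 + 2 * np.cos(np.pi * np.arange(n1) / n1))[:, None] \
        + (-2 + 2 * np.cos(np.pi * np.arange(n2) / n2))[None, :]
    gamma = 1.0 / (1.0 + s * lam ** 2)           # Eq. (9), elementwise division
    y_hat = np.where(w > 0, y, y[w > 0].mean())  # rough initial guess
    for _ in range(n_iter):
        # Eq. (8): keep reliable data where w = 1, smooth into the hole elsewhere.
        y_hat = idctn(gamma * dctn(w * (y - y_hat) + y_hat, norm='ortho'),
                      norm='ortho')
    return y_hat
```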

After performing the image inpainting expressed in Eq. (8) and applying the resulting weight mask pattern to the original elemental images, we obtain modified elemental images in which the occluded region is recovered using information consistent with the original 3D object.

The final process of the proposed method is to reconstruct the 3D plane images for 3D visualization. The modified elemental images are fed into the CIIR method, as shown in Fig. 2(b), and improved plane images of good visual quality are reconstructed.
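To fix ideas about this last step, here is a minimal pixel-domain back-projection of the kind CIIR performs: each elemental image is shifted in proportion to its lenslet index and the overlaps are averaged. This is a simplified stand-in with hypothetical parameters, not the exact square- or round-mapping models of Refs. [3, 20]:

```python
import numpy as np

def ciir_plane(elemental, n_lens, shift):
    """Shift-and-superimpose reconstruction of one plane image.

    elemental: (n_lens*h, n_lens*w) elemental image array;
    shift: per-lenslet disparity in pixels for the chosen reconstruction depth.
    """
    h = elemental.shape[0] // n_lens
    w = elemental.shape[1] // n_lens
    out = np.zeros((h + shift * (n_lens - 1), w + shift * (n_lens - 1)))
    cnt = np.zeros_like(out)
    for i in range(n_lens):
        for j in range(n_lens):
            tile = elemental[i*h:(i+1)*h, j*w:(j+1)*w]
            out[i*shift:i*shift + h, j*shift:j*shift + w] += tile
            cnt[i*shift:i*shift + h, j*shift:j*shift + w] += 1
    return out / np.maximum(cnt, 1)  # average over overlapping contributions
```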

    III. EXPERIMENTS AND RESULTS

To verify the proposed method, we performed preliminary experiments. Figure 5 shows the experimental setup, which we modeled computationally; the pickup experiments were performed by computer. We used two images (‘ball’ and ‘tree’) as the unknown occlusions and ten ‘face’ images as the target 3D objects. The two occlusion images and ten ‘face’ images are shown in Fig. 6; they were located 30 mm and 55 mm from the lenslet array, respectively. First we recorded elemental images for the partially occluded ‘face’ object, using a lenslet array with 6×6 lenslets, each 5 mm in diameter. With this lenslet array we obtained elemental images with 900×900 pixels in total, i.e. 150×150 pixels per lenslet, as shown in Fig. 5(b).

The recorded elemental images were used in the second process, occlusion estimation, to obtain the binary weight mask pattern for the subsequent image inpainting. Here we generated two sets of plane images using two different CIIR methods, based respectively on the square and round mapping models. The weight mask pattern from our depth-estimation process is shown in Fig. 5(c). Next, the image inpainting technique was applied to the elemental images on the basis of the weight mask pattern. Figures 7(a) and 7(c) show the original elemental images for the ‘tree’ and ‘ball’ occlusions, while the elemental images obtained from the proposed image inpainting technique are shown in Figs. 7(b) and 7(d), respectively. With these elemental images we reconstructed the 3D plane images using the CIIR methods.

Figure 8 shows plane images for the first ‘face’ image, for the two different occlusions. In the results of Fig. 8(a), reconstructed from the occluded elemental images using the original CIIR [3], we can see that both occlusions produce serious image noise in the reconstructed images. For comparison, we reconstructed 3D plane images from the occlusion-removed elemental images using the CIIR method of Ref. [20], which utilizes a modified CIIR with a normalization process after occlusion removal. The performance of that method is very sensitive to the occlusion pattern, and thus may produce serious image errors if the occlusion is heavy or complex. The plane images reconstructed using the method of Ref. [20] are shown in Fig. 8(b). When the ‘tree’ occlusion was used, the reconstructed image was similar to that from our method; however, serious distortions appear when the ‘ball’ occlusion is used. This is because the normalization process of Ref. [20] cannot reconstruct a proper plane image for a totally occluded spot in the elemental images. In contrast, our proposed method reconstructs plane images of good visual quality, as seen in Fig. 8(c).

To evaluate our method objectively, the peak signal-to-noise ratio (PSNR) was calculated for all reconstructed plane images. Figure 9 shows the average PSNR results for the ten face images with the two different occlusions; four different cases are presented. Compared to reconstruction from occluded elemental images, we obtained a PSNR improvement of approximately 5.66 dB on average. In addition, the PSNR was improved by approximately 0.88 dB over the method of Ref. [20]. The results in Figs. 8 and 9 show that our method is superior to the conventional methods in terms of visual quality.
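The PSNR here is the standard definition; a minimal sketch, assuming 8-bit images (peak value 255) and an unoccluded reference image:

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio between reference and reconstructed images."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```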

    IV. CONCLUSION

In conclusion, we have proposed an improved method using an image inpainting technique to obtain the CIIR image of a partially occluded object. Since occlusion produces serious image noise in the reconstructed image, we removed the occlusion and inpainted reliable boundary information into the occlusion holes using the image inpainting technique. By doing so, the proposed method dramatically improves the quality of the reconstructed image. To verify our method, we performed preliminary experiments on images of faces. From the experimental results, we obtained a PSNR improvement of approximately 5.66 dB on average. We expect that the proposed method can be useful in numerous applications, such as 3D object visualization and recognition.

REFERENCES
  • 1. Stern A., Javidi B. 2006 “Three-dimensional image sensing, visualization, and processing using integral imaging,” [Proc. IEEE] Vol.94 P.591-607
  • 2. Park J.-H., Hong K., Lee B. 2009 “Recent progress in three-dimensional information processing based on integral imaging,” [Appl. Opt.] Vol.48 P.H77-H94
  • 3. Hong S.-H., Jang J.-S., Javidi B. 2004 “Three-dimensional volumetric object reconstruction using computational integral imaging,” [Opt. Express] Vol.12 P.483-491
  • 4. Shin D.-H., Kim E.-S., Lee B. 2005 “Computational reconstruction technique of three-dimensional object in integral imaging using a lenslet array,” [Jpn. J. Appl. Phys.] Vol.44 P.8016-8018
  • 5. Shin D.-H., Kim M.-W., Yoo H., Lee J.-J., Lee B., Kim E.-S. 2007 “Improved viewing quality of 3-D images in computational integral imaging reconstruction based on round mapping model,” [ETRI J.] Vol.29 P.649-654
  • 6. Shin D.-H., Kim E.-S. 2008 “Computational integral imaging reconstruction of 3D object using a depth conversion technique,” [J. Opt. Soc. Korea] Vol.12 P.131-135
  • 7. Yoo H. 2011 “Artifact analysis and image enhancement in three-dimensional computational integral imaging using smooth windowing technique,” [Opt. Lett.] Vol.36 P.2107-2109
  • 8. Jang J.-Y., Ser J.-I., Cha S., Shin S.-H. 2012 “Depth extraction by using the correlation of the periodic function with an elemental image in integral imaging,” [Appl. Opt.] Vol.51 P.3279-3286
  • 9. Jang J.-Y., Shin D., Kim E.-S. 2014 “Optical three-dimensional refocusing from elemental images based on a sifting property of the periodic δ-function array in integral-imaging,” [Opt. Express] Vol.22 P.1533-1550
  • 10. Shin D.-H., Yoo H. 2008 “Scale-variant magnification for computational integral imaging and its application to 3D object correlator,” [Opt. Express] Vol.16 P.8855-8867
  • 11. Hwang D.-C., Shin D.-H., Kim S.-C., Kim E.-S. 2008 “Depth extraction of three-dimensional objects in space by the computational integral imaging reconstruction technique,” [Appl. Opt.] Vol.47 P.D128-D135
  • 12. Kim S.-C., Park S.-C., Kim E.-S. 2009 “Computational integral-imaging reconstruction-based 3-D volumetric target object recognition by using a 3-D reference object,” [Appl. Opt.] Vol.48 P.H95-H104
  • 13. Cho M., Javidi B. 2010 “Three-dimensional visualization of objects in turbid water using integral imaging,” [J. Display Technol.] Vol.6 P.544-547
  • 14. Zhang M., Piao Y., Kim E.-S. 2010 “Occlusion-removed scheme using depth-reversed method in computational integral imaging,” [Appl. Opt.] Vol.49 P.2571-2580
  • 15. Shin D.-H., Lee B.-G., Lee J.-J. 2008 “Occlusion removal method of partially occluded 3D object using sub-image block matching in computational integral imaging,” [Opt. Express] Vol.16 P.16294-16304
  • 16. Lee B.-G., Shin D. 2010 “Enhanced computational integral imaging system for partially occluded 3D objects using occlusion removal technique and recursive PCA reconstruction,” [Opt. Commun.] Vol.283 P.2084-2091
  • 17. Lee J.-J., Lee B.-G., Yoo H. 2011 “Image quality enhancement of computational integral imaging reconstruction for partially occluded objects using binary weighting mask on occlusion areas,” [Appl. Opt.] Vol.50 P.1889-1893
  • 18. Jung J.-H., Hong K., Park G., Chung I., Park J.-H., Lee B. 2010 “Reconstruction of three-dimensional occluded object using optical flow and triangular mesh reconstruction in integral imaging,” [Opt. Express] Vol.18 P.26373-26387
  • 19. Jang J.-Y., Hong S.-P., Shin D., Lee B.-G., Kim E.-S. 2014 “3D image correlator using computational integral imaging reconstruction based on modified convolution property of periodic functions,” [J. Opt. Soc. Korea] Vol.18 P.388-394
  • 20. Lee J.-J., Shin D., Yoo H. 2013 “Image quality improvement in computational reconstruction of partially occluded objects using two computational integral imaging,” [Opt. Commun.] Vol.304 P.96-101
  • 21. Garcia D. 2010 “Robust smoothing of gridded data in one and higher dimensions with missing values,” [Comput. Stat. Data An.] Vol.54 P.1167-1178
  • 22. Patwardhan K. A., Sapiro G., Bertalmio M. 2006 “Video inpainting under constrained camera motion,” [IEEE Trans. Image Process.] Vol.16 P.545-553
Images / Tables
  • [ FIG. 1. ] Proposed method using an image inpainting technique.
  • [ FIG. 2. ] Principles of computational integral imaging: (a) pickup, (b) reconstruction.
  • [ FIG. 3. ] Occlusion estimation using two different CIIR methods.
  • [ FIG. 4. ] Relationship between the depth map u, the regions Ω and Ω+, and the boundary ∂Ω+.
  • [ FIG. 5. ] (a) Experimental setup, (b) elemental images of the first ‘face’, and (c) estimated weight mask pattern.
  • [ FIG. 6. ] Test images: (a) two occlusion images, (b) ten ‘face’ images.
  • [ FIG. 7. ] Elemental images (a) before and (b) after image inpainting when the ‘tree’ occlusion is used, and (c) before and (d) after image inpainting when the ‘ball’ occlusion is used.
  • [ FIG. 8. ] (a) Reconstructed plane images from elemental images with occlusion using the original method of Ref. [3]. (b) Reconstructed plane images using the method of Ref. [20]. (c) Reconstructed plane images using the proposed method.
  • [ FIG. 9. ] Average PSNR results for ten test images.