3D Image Correlator using Computational Integral Imaging Reconstruction Based on Modified Convolution Property of Periodic Functions
 Author: Jang JaeYoun, Shin Donghak, Lee ByungGook, Hong SukPyo, Kim EunSoo
 Publish: Current Optics and Photonics, Volume 18, Issue 4, pp. 388-394, 25 Aug 2014

ABSTRACT
In this paper, we propose a three-dimensional (3D) image correlator that uses computational integral imaging reconstruction based on a modified convolution property of periodic functions (CPPF) for recognition of partially occluded objects. In the proposed correlator, elemental images of the reference and target objects are picked up by a lenslet array and subsequently transformed into a subimage array, which contains different perspectives according to the viewing direction. The modified version of the CPPF is applied to the subimages. This enables us to produce plane subimage arrays without the magnification and superimposition processes used in conventional methods. With the modified CPPF and the subimage arrays, we reconstruct the reference and target plane subimage arrays according to the reconstruction plane. 3D object recognition is performed through cross-correlations between the reference and the target plane subimage arrays. To show the feasibility of the proposed method, preliminary experiments on the target objects are carried out and the results are presented. The experimental results reveal that the use of plane subimage arrays improves the correlation performance compared to the conventional method using the computational integral imaging reconstruction algorithm.

KEYWORD
Integral imaging, 3D correlation, Elemental images, Lenslet array

I. INTRODUCTION
Occluded objects have been an interesting research topic in the field of three-dimensional (3D) object visualization and recognition. Recently, several approaches have employed 3D imaging techniques to solve the occlusion problem [1-9]. Among them, integral imaging can be considered one of the promising solutions, and some techniques based on integral imaging have been proposed for 3D recognition of partially occluded objects. Integral imaging is a technique for recording and displaying 3D scenes with a lenslet array; the recorded elemental images consist of several two-dimensional (2D) images with different perspectives [10-17]. In 2007, a 3D image correlator using the computational integral imaging reconstruction (CIIR) method was proposed [6]. In CIIR, plane images are generated by magnifying and superimposing the elemental images. Each plane image contains both focused and defocused contributions from the occlusion and target objects, which enables recognition of targets behind occlusion. However, the CIIR process may add blur noise to the plane images because of the interpolated images, and it incurs high computational loads. To overcome this problem, a modified CIIR-based 3D correlator using smart pixel mapping, which reduces the magnification factor and thus improves the correlation performance, was proposed in 2008 [7]. However, this method also relied on the magnification process to generate the depth-converted plane images.
Recently, a depth slicing method using a convolution property between periodic functions (CPPF) has been proposed to produce a plane image array similar to the plane images of the previous CIIR method [18, 19]. In integral imaging with a lenslet array, an elemental image has a periodic property that depends on object depth, and this method exploits the periodicity of the spatial frequency of an elemental image. The CPPF is performed by convolution between the elemental image and a 2D δ function array whose spatial period corresponds to the target depth. As a result, the image at the targeted depth can be reconstructed in the form of plane image arrays with a shorter calculation time than the previous CIIR method. In addition, the CPPF technique can be applied to an optical 3D refocusing display with a lenslet array [20], where 3D objects with their own perspectives can be reconstructed so as to be refocused at their depths in space for viewers.

In this paper, we propose a 3D image correlator using computational reconstruction based on the modified CPPF for partially occluded object recognition. We introduce a modified CPPF (MCPPF) method for subimages to produce plane subimage arrays (PSAs) without the magnification and superimposition processes used in conventional methods. In the proposed correlator, elemental images of the reference and target objects are captured by the image sensor through the lenslet array, and the recorded elemental images are transformed into subimages, which contain different perspectives according to the viewing direction. The proposed MCPPF method is applied to these subimage arrays to reconstruct the 3D PSAs. Only the target PSA reconstructed on the plane where the target object was originally located contains clearly focused perspectives; the target PSAs reconstructed on the other planes mix focused and blurred images. 3D object recognition is performed through cross-correlations between the reference and target PSAs. To show the feasibility of the proposed method, preliminary experiments on the target objects are carried out and the results are presented.
II. PROPOSED MCPPFBASED 3D IMAGE CORRELATOR
Figure 1 shows the schematic diagram of the proposed method, which is divided into four processes: (1) pickup, (2) subimage transform, (3) computational reconstruction using MCPPF, and (4) recognition.
2.1. Pickup Process
The first process of the proposed method is the pickup of 3D objects, which is the same as the conventional pickup process in integral imaging. A lenslet array is used to capture the 3D objects. We assume that a reference object R(x_r, y_r, z_r) is located at the known distance z_r in front of the lenslet array in the pickup system, as shown in Fig. 1(a). A target object O(x_o, y_o, z_o), to be recognized, is partially occluded by the occlusion object and located at an arbitrary distance z_o in front of the lenslet array. Elemental images of the reference and target objects are then recorded through the lenslet array by the 2D image sensor and stored for use in the next process.

The conventional pickup process of an integral imaging system using the direct pickup method is based on ray optics. The geometrical relationship between a point object and its corresponding point images on the elemental image plane is shown in Fig. 2(a). In the conventional integral imaging system with a planar lenslet array, the geometrical relation in 2D form is given by Eq. (1).

In Fig. 2(a) and Eq. (1), the origin of the coordinates is the edge of the elemental lens located at the bottom of the lenslet array. The object point (x_o, y_o, z_o) gives the position along the x, y, and z axes, respectively. P represents the distance between the optical centers of neighboring elemental lenses and is assumed to equal the lateral length of an elemental image. f is the focal length of an elemental lens in the planar lenslet array. x_Ek and y_El are the image points formed by the k-th and l-th elemental lenses, respectively. The imaging distance of a point object measured from the lenslet array is z_E = z_O f / (z_O + f). X_Zo is the depth-dependent spatial period. However, due to the limited resolution of a pickup device, Eq. (1) should be converted into pixel units; the pixelated version of Eq. (1) is denoted by Eq. (2), where N, P, and k_max are the lateral resolution of the elemental image array in the one-dimensional (1D) case, the diameter of an elemental image, and the lateral number of elemental lenses, respectively.

From the geometrical relationship, the 1D spatial period depending on object depth is given by x_E_i − x_E_(i−1), 2 ≤ i ≤ k_max. The depth-dependent spatial period is then x_E_i − x_E_(i−1) = z_O P / (z_O + f), and its pixelated version is denoted by Eq. (3). Equation (3) implies that the intensity of the elemental image corresponding to an object at a specific depth is imaged with the same interval on the imaging plane, as described in Fig. 2(b). Thus, a lenslet array in an integral imaging system may be regarded as a converter between depth information in 3D space and 2D periodic information, and vice versa.
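The depth-to-period mapping above can be sketched numerically. The following is a minimal illustration, not the authors' code; the default parameter values (lenslet pitch P = 5 mm, focal length f = 15 mm, 30 pixels per elemental image) are taken from the pickup conditions quoted later in Section 2.3.

```python
def spatial_period_px(z_o_mm, f_mm=15.0, pitch_mm=5.0, n_px=30):
    """Depth-dependent spatial period of an elemental image, in pixels.

    Follows the text's relation X_Zo = z_O * P / (z_O + f), converted to
    pixel units with n_px pixels per elemental image (n_px / P px per mm).
    """
    period_mm = z_o_mm * pitch_mm / (z_o_mm + f_mm)
    return period_mm * n_px / pitch_mm

# A nearer object maps to a shorter period, so period encodes depth:
assert spatial_period_px(100.0) < spatial_period_px(200.0)
```

This is the sense in which the lenslet array "converts" depth into a 2D periodic quantity: each candidate depth z_O corresponds to exactly one spatial period on the sensor.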
2.2. Subimage Transform
The recorded elemental images do not have the characteristic of shift invariance; that is, the small images projected into the elemental images change scale when the 3D object is located at different positions. For this reason, we apply the subimage transform to the recorded elemental images. The subimage array provides both shift invariance and the whole projected image from the elemental images [5]. The subimage transform can be performed based on either single-pixel extraction or multiple-pixel extraction. In this paper, our intention is to apply the concept of the conventional CPPF to subimages for the proposed 3D image correlator; therefore, the conventional CPPF technique is modified for subimages.
We now explain the principle of the proposed MCPPF method for the subimage array. The lateral resolution of the elemental image array N is given by N = n × k_max when an elemental image is composed of n pixels. The pixel x_CEk may be represented as the ordered pair (k, m), where k is the index of an elemental image and m is given by m = x_CEk − (k − 1)n; m is the order of the pixel in the k-th elemental image, 1 ≤ m ≤ n. The lateral resolution of the subimage array is the same as that of the elemental image array, N. If the number of subimages is j_max and each subimage is composed of ξ pixels, then N can be written as N = ξ × j_max. A pixel in the subimage array may likewise be represented as the ordered pair (j, η), where j is the subimage number and η is the order of the pixel in the j-th subimage, 1 ≤ η ≤ ξ. With this correspondence of variables, we obtain the relationship of Eq. (4), where c is the number of pixels extracted for the subimage transform.

To show the periodic property of the subimage array clearly, let us consider the condition c = 1, as shown in Fig. 3. The pixel correspondence between the elemental image array and the subimage array is then given by Eq. (5), and a pixel position on the subimage array can be represented as the ordered pair (j, η) by Eq. (6). Hence, the 1D spatial period in the subimage domain is given by s_i − s_(i−1), 2 ≤ i ≤ j_max; the detailed form of this spatial period is given by Eq. (7).

For example, in Fig. 3, suppose n and k_max are 10 pixels and 9, respectively. Then ξ and j_max are 9 pixels and 10, respectively. Under this condition, if the depth-dependent spatial period of the elemental image, X_Zo, is 11 pixels, then that of the subimage array, X_sub, is 10 pixels.

In this paper, the multiple-pixel extraction method is used to increase the resolution of the subimages [4]. Since multiple-pixel extraction is based on combining blocks from each elemental image, the spatial period of each pixel within a block is the same as in the single-pixel extraction method. Therefore, Eq. (7) can also be applied to subimages generated by multiple-pixel extraction.
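The pixel re-indexing described above can be sketched as a 2D array rearrangement. The function below is an illustrative implementation of the idea, not the authors' code: for an extraction factor c, the c × c block at the same position inside every elemental image is gathered into one subimage, so each subimage shows the whole scene from a single viewing direction.

```python
import numpy as np

def subimage_transform(elemental, n, c=1):
    """Subimage transform by single- (c=1) or multiple-pixel (c>1) extraction.

    `elemental` is a 2D array of shape (k_max*n, k_max*n) holding a
    k_max x k_max grid of n x n elemental images.  Returns an array of
    shape (j_max, j_max, k_max*c, k_max*c) with j_max = n // c.
    """
    k_max = elemental.shape[0] // n
    # Split into the grid of elemental images: axes (ky, kx, py, px).
    grid = elemental.reshape(k_max, n, k_max, n).transpose(0, 2, 1, 3)
    j_max = n // c
    subs = np.empty((j_max, j_max, k_max * c, k_max * c), dtype=elemental.dtype)
    for jy in range(j_max):
        for jx in range(j_max):
            # Same c x c block from every elemental image ...
            block = grid[:, :, jy * c:(jy + 1) * c, jx * c:(jx + 1) * c]
            # ... tiled back into one subimage.
            subs[jy, jx] = block.transpose(0, 2, 1, 3).reshape(k_max * c, k_max * c)
    return subs
```

With n = 10 and k_max = 9 as in the Fig. 3 example and c = 1, this yields a 10 × 10 array of 9 × 9 subimages, matching ξ = 9 and j_max = 10 above.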
2.3. Computational Reconstruction using MCPPF
In the third process of the proposed method, we apply the MCPPF technique to the transformed subimages in order to generate the 3D PSAs for object recognition. In the transformed subimage array, each subimage has a spatially periodic property, and the spatial period depends on the depth of an object.
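The depth-discriminating convolution at the heart of MCPPF, convolving the periodic subimage signal with a δ function array whose period matches a candidate depth, can be sketched in 1D. This is an illustrative toy model under the geometrical-optics assumption, not the authors' implementation:

```python
import numpy as np

def delta_comb(length, period):
    """Periodic delta-function array: 1 at every `period`-th sample."""
    comb = np.zeros(length)
    comb[::period] = 1.0
    return comb

def mcppf_reconstruct(signal, period):
    """Convolve the (sub)image signal with a delta comb of the given
    spatial period.  Components whose period matches the comb are
    reinforced coherently; mismatched periods spread out as blur."""
    comb = delta_comb(len(signal), period)
    return np.convolve(signal, comb, mode="same")
```

Sweeping `period` over the values given by Eq. (7) for each candidate depth plays the role of the reconstruction-plane sweep in the 2D method: the plane whose period matches the object's true depth produces the sharpest response.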
In most methods, the depth extraction or recognition process follows the reverse of the process by which multiple 2D images are obtained from a 3D scene. The MCPPF, in contrast, uses the superimposing property of convolution between a set of periodic functions and a periodic δ function array to discriminate the target periodic spatial information from the input function. The 3D image reconstruction method using MCPPF is as follows.

The intensity of a 2D image may be written as g(x_E) = f(x_E) ⊕ h(x_E), where ⊕ denotes convolution, x_E represents the x coordinate on the imaging plane, f(x_E) represents the scaled object intensity, and h(x_E) represents the intensity impulse response. Moreover, the depth-dependent image intensity of a 3D object can be represented as g(x_E)_Zo = f(x_E)_Zo ⊕ h(x_E)_Zo. Assuming the geometrical-optics condition (λ → 0), the intensity impulse response h(x_E)_Zo can be represented by a δ function. Thus, the intensity of the subimage array at the corresponding object depth z_O may be written as Eq. (8).

Although the object intensity is continuously distributed along the z axis, the intensity impulse response of the subimage array is represented as a discrete spatial period because of the limited resolution of the pickup sensor. Hence, the subimage array may be represented as G(x_E) = Σ g(x_E)_Zo. To discriminate the depth information in a subimage array, the MCPPF method performs G(x_E) ⊕ h(x_E)_Zo, that is, it exploits the convolution property between periodic δ function arrays.

The 3D image reconstruction process using MCPPF is illustrated in Fig. 4. The pickup conditions for the elemental images are as follows. The focal length and diameter of an elemental lens are 15 mm and 5 mm, respectively. The size of the lenslet array is 30×30, and the resolution of the elemental image array is 900×900 pixels. The 3D object is located along the z axis. The right side of the arrow mark in Fig. 4 represents the convolution process between the subimage array and a δ function array. The pixel extraction factor in Eq. (4) is c = 5, so the resolution of a subimage is 150×150 pixels. X_sub is the spatial period of the δ function array, and this spatial period varies with the target depth. The images arranged in the middle of Fig. 4 are the output results of the MCPPF according to spatial period, at 1-pixel intervals. For clearer observation, enlarged images are provided at the bottom of Fig. 4.

2.4. Recognition Process
From the computational reconstruction based on MCPPF, we can reconstruct the reference and target PSAs. First, the reference PSA is reconstructed on the known z_r plane of the reference object; this is called the reference template. Next, for the partially occluded target object, a set of target PSAs is reconstructed on each reconstruction plane by changing the distance z from the lenslet array, as shown in Fig. 1(c). Among them, the target PSA reconstructed at the z_o plane, the original position of the target object during pickup, has highly focused images of the target object, whereas the target PSAs reconstructed on z planes far from the z_o plane become out of focus.

Once a target PSA P_O(x, y, z), reconstructed at an arbitrary distance z using the MCPPF technique, has been generated, the correlation process can be performed between the target PSA P_O(x, y, z) and the reference template P_R(x, y, z_r). Accordingly, we calculate the correlation result using the correlation operation given by Eq. (9), where * denotes the complex conjugate. The correlation peak C(z) is a function of the distance z, and the behavior of C(z), such as its sharpness, can serve as a performance measure for 3D object recognition. This correlation process is repeated for all target PSAs reconstructed on z planes by varying the distance z within the desired depth range. Since the target PSA with the most highly focused target image is reconstructed at the z_o plane, a sharp correlation peak is expected to occur on the z_o plane through the correlations between the known reference template and the target PSAs.

III. EXPERIMENTAL RESULTS AND DISCUSSIONS
To demonstrate the feasibility of the proposed 3D image correlator, a preliminary experiment on 3D objects in space was performed. In the experiment, a 3D object (a car) was employed as the reference test object, as illustrated in Fig. 5(a), and the occlusion (a bundle of plants) is shown in Fig. 5(b). Figure 5(c) shows the camera-view image with the object and the occlusion arranged in the object field.
The experimental structure is shown in Fig. 6. The 3D object and occlusion are located at 200 mm and 100 mm from the lenslet array, respectively. Here the lenslet array is composed of 30×30 lenslets, and the lenslet diameter and the focal length are 5 mm and 15 mm, respectively.
Using the experimental setup of Fig. 6, we first recorded the elemental images of the reference object, which was located at the known position z = 200 mm in front of the lenslet array. We then captured the elemental image array of the unknown target object, which was located at an arbitrary position behind the occlusion. Figures 7(a) and (b) show the captured elemental images for the reference object and the partially occluded object, respectively, and the corresponding subimages after the subimage transform are shown in Figs. 7(c) and (d). The resolutions of the elemental images and subimages are 900×900 pixels.

Figure 8(a) shows the reconstructed PSA of the reference object, obtained using the computational reconstruction based on MCPPF from the subimages of Fig. 7(c), where the reconstruction depth was 200 mm. Using the subimages of the partially occluded object shown in Fig. 7(d), we generated the target PSAs for distances from 170 mm to 230 mm in steps of 5 mm. Some examples are shown in Fig. 8(b).

With the reference PSA and the target PSAs, the correlation performance of the proposed method was measured using the correlation operation of Eq. (9). Correlation results were calculated by correlating the reference PSA with the series of target PSAs; the results are shown in Figs. 9 and 10. Figure 9(a) shows the autocorrelation results for the reference object shown in Fig. 5(a). A correlation array is obtained because of the image array structure of the PSAs; among its elements, the maximum correlation peak was used for object recognition. The curve of correlation peaks from the proposed method along the distance z is shown in Fig. 9(b). This result indicates that the 'car' object is located at z = 200 mm, because the maximum correlation peak is obtained at this distance. The correlation result for the partially occluded objects shown in Fig. 5(c) is given in Fig. 10(a), and the curves of maximum correlation peaks for two different cars (true car and false car) are presented in Fig. 10(b).
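The peak-versus-depth search described above can be sketched as follows. This is a minimal illustration of picking the depth whose PSA correlates most strongly with the reference template; the FFT-based correlation and the array names are our own, not the paper's code, and a practical implementation would normalize the correlation as well.

```python
import numpy as np

def correlation_peak(reference, target):
    """Peak of the circular cross-correlation between two real images,
    computed via the correlation theorem: IFFT( F{target} * conj(F{reference}) )."""
    Fr = np.fft.fft2(reference)
    Ft = np.fft.fft2(target)
    corr = np.fft.ifft2(Ft * np.conj(Fr)).real
    return corr.max()

def recognize_depth(reference, target_psas, depths):
    """Return the depth whose target PSA gives the largest correlation
    peak with the reference template, i.e. the argmax of the C(z) curve."""
    peaks = [correlation_peak(reference, psa) for psa in target_psas]
    return depths[int(np.argmax(peaks))]
```

Sweeping `depths` over 170 mm to 230 mm in 5 mm steps, as in the experiment, the PSA reconstructed at the object's true depth dominates the C(z) curve.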
In addition, we compared the proposed method with the conventional method based on CIIR [7]. The comparison experiments were performed under the same conditions. The recorded elemental images were used to generate the CIIR images; the CIIR images for the reference object and the unknown target object are shown in Fig. 11. We compared the correlation peaks for the partially occluded object between the proposed and conventional methods, as shown in Fig. 12. Each correlation peak was normalized by the maximum correlation peak obtained with the proposed method; for the conventional method, the maximum correlation peak was approximately 0.94. It is seen that the proposed method provides not only a higher correlation peak but also a sharper peak characteristic than the conventional method, because our method avoids the blurring caused by the magnification and superimposing processes of CIIR.
IV. CONCLUSIONS
In conclusion, we have proposed a new 3D image correlator using computational reconstruction based on MCPPF for partially occluded object recognition. In the proposed correlator, PSAs generated by MCPPF were used for 3D object recognition, so that a partially occluded 3D object can be recognized from the maximum correlation peaks with the reference object. Preliminary experiments show the usefulness of the proposed method in comparison with the conventional method using CIIR.

[FIG. 1.] Schematic diagram of the proposed method: (a) pickup, (b) subimage transform, (c) computational reconstruction using MCPPF, and (d) recognition process.

[FIG. 2.] Schematic diagram of the computational reconstruction using CPPF.

[FIG. 3.] Schematic diagram of subimage transform.

[FIG. 4.] 3D image reconstruction process using MCPPF applied in a subimage array.

[FIG. 5.] Experimental conditions (a) 3D object to be recognized (b) Occlusion object (c) Observed scene for occlusion and 3D object.

[FIG. 6.] Experimental setup.

[FIG. 7.] (a) Elemental images for 3D object (b) Elemental images for partially occluded 3D object (c) Subimages for 3D object (d) Subimages for partially occluded 3D object.

[FIG. 8.] Reconstructed images using computational reconstruction based on MCPPF (a) Reference PSA, (b) Target PSAs at different depths.

[FIG. 9.] (a) Autocorrelation output result of the reference object (b) Graph of maximum correlation peaks according to the different reconstruction distances.

[FIG. 10.] (a) Correlation output result of unknown object (b) Graph of maximum correlation peaks of unknown true and false object according to the different reconstruction distances.

[FIG. 11.] Reconstructed plane images using the conventional CIIR method.

[FIG. 12.] Comparison of correlation peaks of partially occluded object between the proposed and conventional methods.