Resolution Analysis of Axially Distributed Image Sensing Systems under Equally Constrained Resources

  • ABSTRACT

    In this paper, a unifying framework for evaluating the depth resolution of axially distributed image sensing (ADS) systems under fixed resource constraints is proposed. The framework enables one to evaluate system performance as a function of system parameters such as the number of cameras, the number of pixels, and the pixel size. Monte Carlo simulations are carried out to evaluate ADS system performance as a function of these parameters. To the best of our knowledge, this is the first report on a quantitative analysis of ADS systems under fixed resource constraints.


  • KEYWORD

    3D imaging, Axially distributed image sensing, Depth resolution, Multiple images, Elemental images

  • I. INTRODUCTION

    Axially distributed image sensing (ADS) systems are capable of acquiring 3D information for 3D objects or partially occluded 3D objects [1-6]. The ADS scheme provides a simple recording architecture in which a single camera is translated along its optical axis. Compared with the conventional integral imaging method [6-16], the recorded high-resolution elemental images generate clearer 3D slice images of partially occluded 3D objects. The capacity of this method to collect 3D information depends on the distance of the object from the optical axis; 3D information cannot be collected when the object lies close to the optical axis (i.e., on axis).

    Recently, resolution analysis methods to evaluate the performance of 3D integral imaging systems under equally constrained resources have been reported [17-19]. These studies analyzed the lateral and longitudinal resolutions of given 3D integral imaging systems by considering several system parameters such as the number of sensors, pixel size, imaging optics, and relative sensor configuration.

    In this paper, we present a framework for performance evaluation of ADS systems under equally constrained resources such as a fixed total number of pixels, a fixed moving distance, and a fixed pixel size. For the given ADS framework, we analyze the depth and lateral resolutions based on the two-point-source resolution criterion [17], in which two point sources and a spatial ray projection model are used. Monte Carlo simulations are carried out for this analysis and the simulation results are presented.

    II. RESOLUTION ANALYSIS FOR ADS SYSTEM

    The typical ADS system is shown in Fig. 1, where a single camera is moved along the optical axis. Different elemental images are captured along the optical axis when 3D objects are located at a certain distance, and each elemental image records the 3D objects at a different scale. The digital reconstruction process is the inverse of pickup: 3D images are obtained using computational reconstruction based on inverse mapping through a virtual pinhole model [3]. The ADS pickup structure strongly affects the resolution of the reconstructed 3D images. In this paper, we evaluate the performance of ADS according to the pickup structure.

    Figure 2 illustrates the general framework of an ADS system with N cameras for the resolution analysis. To impose equally constrained resources regardless of the number of cameras N, we assume that the total number of pixels (K), the pixel size (c), and the moving range (R) are fixed. Thus, the number of pixels in each camera becomes K/N. The imaging lenses, whose focal length and diameter are f and A, respectively, are identical because the same camera moves along the optical axis.

    In the ADS framework shown in Fig. 2, we can vary the number of cameras N to implement generalized ADS systems. For example, when N = 2 (two cameras), the conceptual design is shown in Fig. 3, where each image sensor is composed of K/2 pixels and the moving step between cameras is R. More generally, the N-camera ADS system can be designed as shown in Fig. 2, where each camera has an image sensor with K/N pixels and the moving step is R/(N-1).
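    The resource split described above can be sketched in a few lines of code; `ads_layout` is a hypothetical helper written for illustration, not part of the paper:

```python
def ads_layout(K, R, N):
    """Per-camera pixel count and pinhole positions for an N-camera ADS
    system sharing a fixed total of K pixels over a fixed moving range R."""
    assert N >= 2, "at least two camera positions are assumed"
    assert K % N == 0, "K is assumed divisible by N for an even split"
    pixels_per_camera = K // N          # each camera gets K/N pixels
    step = R / (N - 1)                  # axial spacing between cameras
    positions = [i * step for i in range(N)]
    return pixels_per_camera, positions
```

    For instance, `ads_layout(1200, 90.0, 4)` distributes 1,200 pixels as 300 per camera, with pinholes at 0, 30, 60, and 90 along the axis, so the total pixel budget and moving range stay constant as N changes.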

    To analyze the resolution performance of the ADS framework of Fig. 2, we utilize the resolution analysis method based on the two-point-source resolution criterion [17], which is defined as the ability to distinguish two closely spaced point sources. The principle of the resolution analysis can be explained in three steps, as shown in Fig. 4. For simplicity, we use one-dimensional notation and assume that there are two closely spaced point sources in space.

    The first step is to find the mapping pixel index for the first point source located at (x1, z1), as shown in Fig. 4(a). The point spread function (PSF) of the first point source is recorded by each image sensor, and its center position in the ith image sensor is given by

    where f is the focal length of the imaging lens and Pi is the position of the ith imaging lens. The recorded point source is then pixelated, and its pixel index can be calculated from the following equation:

    where round(·) denotes the nearest-integer rounding operator.
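    The displayed equations did not survive extraction in this copy. Under a standard one-dimensional pinhole projection model, with the camera pinholes distributed along the optical axis, the two quantities described above would take the following form; this is an assumed reconstruction, not the paper's verbatim formulas:

    \[
    u_i = \frac{f\,x_1}{z_1 - P_i}, \qquad
    k_i = \operatorname{round}\!\left(\frac{u_i}{c}\right),
    \]

    where \(u_i\) is the PSF center position on the ith sensor, \(k_i\) is its pixel index, and \(c\) is the pixel size.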

    The second step is to calculate the unresolvable pixel area around the mapping pixels found in the first step. The unresolvable pixel area determines whether the two point sources can be separated. When the second point source is located close to the first, the two sources can be resolved if the second source is registered by at least one sensor in a pixel adjacent to the pixel that registered the first PSF. If, however, the second PSF falls on the same pixel that recorded the first PSF in every sensor, the two sources cannot be resolved. In this analysis, the unresolved area is used to quantify the resolution. Thus, the possible mapping pixel range of the second point source is given by

    We can calculate the unresolvable pixel area of the second point source by back-projecting the mapping pixel onto the plane of the two point sources. It is given by

    In the third step of the resolution analysis, we find the common area among the unresolvable pixel areas calculated for all cameras. The depth resolution is the common intersection of all unresolvable pixel ranges. Thus, the depth resolution becomes
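    Since the equations for steps two and three were likewise lost in extraction, the following is a plausible reconstruction under the same assumed pinhole projection \(u_i = f\,x_1/(z' - P_i)\); treat it as an illustration of the intersection step rather than the paper's exact formulas. The unresolvable depth interval seen by the ith sensor, and the resulting depth resolution, would read:

    \[
    \mathcal{Z}_i = \left\{\, z' \;\middle|\; \left(k_i - \tfrac{1}{2}\right)c \;\le\; \frac{f\,x_1}{z' - P_i} \;\le\; \left(k_i + \tfrac{1}{2}\right)c \,\right\},
    \qquad
    \Delta z = \left|\, \bigcap_{i=1}^{N} \mathcal{Z}_i \,\right|,
    \]

    i.e., a second source deeper than the first remains unresolvable only while it falls in pixel \(k_i\) for every sensor, so \(\Delta z\) is the length of the common intersection of all \(\mathcal{Z}_i\).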

    III. SIMULATIONS AND RESULTS

    Computational experiments were carried out to determine the depth resolution of various ADS frameworks. The two-point-source resolution analysis method described in the previous section was used to calculate the depth resolutions.

    Based on the two-point-source resolution analysis, Monte Carlo simulations are performed to statistically compute the depth resolution for various ADS frameworks. Two point sources are placed longitudinally to determine the depth resolution: the first point source is located randomly in space and the second point source is moved in the longitudinal direction. The experimental conditions for the Monte Carlo simulation are shown in Table 1.
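    The Monte Carlo procedure can be sketched as follows. This is a minimal illustration, not the paper's code: it assumes the one-dimensional pinhole mapping u = f·x/(z − P), searches for the smallest depth offset that changes a pixel index on at least one sensor, and uses made-up parameter values in place of Table 1.

```python
import random

def pixel_index(x, z, P, f, c):
    """Pixel index registering a point source at lateral offset x and depth z,
    for a pinhole at axial position P, focal length f, and pixel pitch c."""
    u = f * x / (z - P)            # image position on the sensor (pinhole model)
    return round(u / c)

def depth_resolution(x, z, positions, f, c, dz_step=0.01, dz_max=500.0):
    """Smallest depth offset dz at which a second source at (x, z + dz)
    maps to a different pixel on at least one sensor."""
    base = [pixel_index(x, z, P, f, c) for P in positions]
    dz = dz_step
    while dz < dz_max:
        if any(pixel_index(x, z + dz, P, f, c) != b
               for P, b in zip(positions, base)):
            return dz               # resolvable: some sensor sees a new pixel
        dz += dz_step
    return dz_max                   # unresolved within the search range

# Monte Carlo over randomly placed first point sources
random.seed(0)
N, R, f, c = 4, 100.0, 50.0, 0.01   # illustrative values (mm), not Table 1
positions = [i * R / (N - 1) for i in range(N)]
trials = [depth_resolution(random.uniform(5, 50), random.uniform(500, 1000),
                           positions, f, c)
          for _ in range(200)]
mean_dz = sum(trials) / len(trials)  # sample-mean depth resolution
```

    The sample mean `mean_dz` plays the role of the averaged depth resolution reported in the figures; sweeping N, f, c, or the lateral offset range in this sketch reproduces the kind of parameter study described below.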

    The Monte Carlo simulations were repeated for 2,000 trials in which the locations of the two point sources were selected randomly, and the depth resolutions were calculated as the sample mean. The simulation results for the ADS frameworks were obtained while changing several system parameters. First, the simulation results for depth resolution according to the number of cameras are shown in Fig. 6(a), where several distances (g) between the optical axis and the point source plane were investigated. The results indicate that the depth resolution improves as both N and g become larger, because more 3D perspective information can be recorded when large N and g are used. The calculated depth resolutions when the focal length of the imaging lens, the sensor pixel size, and the total number of sensor pixels are varied are shown in Fig. 6(b)-(d). In Fig. 6(b), the focal length (f) of the imaging lens was varied; for a large focal length, we obtained a small depth resolution (∆z) due to the small ray sampling interval. When the pixel size (c) of the image sensor was varied, the depth resolution improved for small pixel sizes, because a smaller pixel size provides more perspective information for 3D objects. Finally, in Fig. 6(d), we investigated the effect of the total number of sensor pixels; it is seen that the total number of sensor pixels is not related to the depth resolution.

    IV. CONCLUSION

    In conclusion, a resolution analysis of various ADS frameworks under fixed resource constraints has been presented. System performance in terms of depth resolution was evaluated as a function of system parameters such as the number of cameras, the number of pixels, pixel size, and focal length. It has been shown that the depth resolution of an ADS system improves with a larger number of cameras and a larger distance between the optical axis and the point source plane. We expect the proposed method to be useful for designing practical ADS systems.

  • 1. Schulein R., DaneshPanah M., Javidi B. 2009 “3D imaging with axially distributed sensing,” [Opt. Lett.] Vol.34 P.2012-2014
  • 2. Xiao X., Javidi B. 2011 “Axially distributed sensing for three-dimensional imaging with unknown sensor positions,” [Opt. Lett.] Vol.36 P.1086-1088
  • 3. Shin D., Javidi B. 2011 “3D visualization of partially occluded objects using axially distributed sensing,” [J. Disp. Technol.] Vol.7 P.223-225
  • 4. Shin D., Javidi B. 2012 “Visualization of 3D objects in scattering medium using axially distributed sensing,” [J. Disp. Technol.] Vol.8 P.317-320
  • 5. Hong S.-P., Shin D., Lee B.-G., Kim E.-S. 2012 “Depth extraction of 3D objects using axially distributed image sensing,” [Opt. Express] Vol.20 P.23044-23052
  • 6. Shin D., Javidi B. 2012 “Three-dimensional imaging and visualization of partially occluded objects using axially distributed stereo image sensing,” [Opt. Lett.] Vol.37 P.1394-1396
  • 7. Stern A., Javidi B. 2006 “Three-dimensional image sensing, visualization, and processing using integral imaging,” [Proc. IEEE] Vol.94 P.591-607
  • 8. Park J.-H., Hong K., Lee B. 2009 “Recent progress in three-dimensional information processing based on integral imaging,” [Appl. Opt.] Vol.48 P.H77-H94
  • 9. Kim S.-C., Kim C.-K., Kim E.-S. 2011 “Depth-of-focus and resolution-enhanced three-dimensional integral imaging with non-uniform lenslets and intermediate-view reconstruction technique,” [3D Research] Vol.2 P.6
  • 10. Yeom S.-W., Woo Y.-H., Baek W.-W. 2011 “Distance extraction by means of photon-counting passive sensing combined with integral imaging,” [J. Opt. Soc. Korea] Vol.15 P.357-361
  • 11. Kakeya H. 2011 “Realization of undistorted volumetric multiview image with multilayered integral imaging,” [Opt. Express] Vol.19 P.20395-20404
  • 12. Yoo H. 2011 “Artifact analysis and image enhancement in three-dimensional computational integral imaging using smooth windowing technique,” [Opt. Lett.] Vol.36 P.2107-2109
  • 13. Jang J.-Y., Ser J.-I., Cha S. 2012 “Depth extraction by using the correlation of the periodic function with an elemental image in integral imaging,” [Appl. Opt.] Vol.51 P.3279-3286
  • 14. Kavehvash Z., Martinez-Corral M., Mehrany K., Bagheri S., Saavedra G., Navarro H. 2012 “Three-dimensional resolvability in an integral imaging system,” [J. Opt. Soc. Am. A] Vol.29 P.525-530
  • 15. Luo C.-G., Wang Q.-H., Deng H., Gong X.-X., Li L., Wang F.-N. 2012 “Depth calculation method of integral imaging based on Gaussian beam distribution model,” [J. Display Technol.] Vol.8 P.112-116
  • 16. Li G., Kwon K.-C., Shin G.-H., Jeong J.-S., Yoo K.-H., Kim N. 2012 “Simplified integral imaging pickup method for real objects using a depth camera,” [J. Opt. Soc. Korea] Vol.16 P.381-385
  • 17. Shin D., Daneshpanah M., Javidi B. 2012 “Generalization of 3D N-ocular imaging systems under fixed resource constraints,” [Opt. Lett.] Vol.37 P.19-21
  • 18. Shin D., Javidi B. 2012 “Resolution analysis of N-ocular imaging systems with tilted image sensors,” [J. Display Technol.] Vol.8 P.529-533
  • 19. Cho M., Javidi B. 2012 “Optimization of 3D integral imaging system parameters,” [J. Display Technol.] Vol.8 P.357-360
  • [FIG. 1.] Typical ADS pickup and reconstruction.
  • [FIG. 2.] Framework for ADS with N cameras.
  • [FIG. 3.] ADS system when N = 2.
  • [FIG. 4.] (a) Calculation of pixel position for each camera of the first point source and (b) calculation of unresolvable pixel area using second point source.
  • [FIG. 5.] Procedure for calculation of the depth resolution using two point sources resolution analysis.
  • [TABLE 1.] Experimental constraints for Monte Carlo simulation.
  • [FIG. 6.] Simulation results according to (a) the distance between optical axis and point source plane, (b) the focal length of imaging lens, (c) the pixel size of image sensor, and (d) total number of sensor pixels.