Reference Functions for Synthesis and Analysis of Multiview and Integral Images
 Author: Saveljev Vladimir, Kim SungKyu
Published: Current Optics and Photonics, Volume 17, Issue 2, pp. 148-161, 25 Apr 2013

ABSTRACT
We propose one- and two-dimensional reference functions for the processing of integral/multiview images. The functions provide the synthesis/analysis of the integral image by distance, as an alternative to the composition/decomposition by view images (directions). The synthesized image was observed experimentally. In the analysis, confirmed qualitatively by simulation, the distance was obtained by convolution of the integral image with the reference functions.

KEYWORD
Integral imaging, Multiview image, Image synthesis, Image analysis, Depth extraction

I. INTRODUCTION
In multiview and integral imaging, the image in the image plane can be treated as a two-dimensional representation of the three-dimensional objects to be displayed. This image is called an integral image [1]; it consists of multiple elemental images [1] (image cells) and, with proper optics, it can be observed from different directions [2]. Each cell corresponds to a light source or its optical equivalent (lens, pinhole, barrier stripe, etc.). In turn, the content of a cell can be treated as a set of directional (view) images [3,4]. Correspondingly, based on the concept of view images, an integral image can be decomposed into a set of view images; conversely, a set of view images can be combined into a single integral image [5]. This is the known technique to expand the integral image into view images; it is schematically illustrated in Fig. 1, where the cells are shown by segments of bold lines and the directions are indicated by lines with arrowheads. Note that throughout this paper, the projected coordinates are used; Figs. 1 and 2 are the only illustrations in which the regular (non-projected) coordinates are in use.
Due to the similarity [6-8], and in spite of possible geometric distortions [9], in this paper we intentionally do not distinguish between integral imaging and multiview imaging, two methods of autostereoscopic imaging. The particular features of each method do not seem to be essential for the current content. The similarity allows us to describe an integral image in terms of multiview imaging and vice versa. Thus, in this paper we prefer to consider the common features of integral imaging and multiview imaging. In particular, the image in the image plane is referred to as the integral image in both cases.
On the other hand, the useful and well-known concept of view images [10, 11] is not the only possible idea for describing the integral image. Generally, alternative principles for representing the same phenomenon may also exist. In this paper, an alternative concept for the integral image is proposed and illustrated with examples.
Considering the integral image itself, one can recognize similar patterns in various integral images. For instance, peculiar patterns consisting of regularly repeated patches can often be recognized in various integral images; see, e.g., the illustrations in papers by other authors: Fig. 12(b) in [12], Figs. 2 and 3 in [13], Fig. 2 in [14], Fig. 9(f) in [15], and Fig. 7(a) in [16]. Among them, most patterns seem to represent a point of an object; however, Fig. 12(b) in [12] shows a line. These and many other integral and multiview images suggest that the integral image may consist of certain “elemental particles”.
Based on that idea, a pattern-based representation can also be built in parallel to the existing concepts of view images in multiview displays and elemental images in integral imaging. In particular, we propose reference functions which represent basic three-dimensional objects (points) as they are mapped onto the two-dimensional image plane. Such functions (their physical meaning is the brightness or transparency distribution across the image plane) look like a set of repeated patches distributed across several cells.
The paper is organized as follows. Sections II and III introduce the one- and two-dimensional reference functions for a square (rectangular) layout of the image cells. Two general applications of the reference functions, image analysis and synthesis, together with numerical experiments, are presented in Sections IV and V. The related effect of the discrete (pixelated) integral image is considered in Sec. VI. Sec. VII contains a discussion covering, in particular, the reference functions for hexagonal and random layouts. The conclusions and acknowledgements complete the paper.
II. ONE-DIMENSIONAL REFERENCE FUNCTIONS
In this section, the one-dimensional construction blocks (the units, bricks, or even “elemental particles”) of integral and multiview imaging are introduced. This description is based on the linear geometry model [17]. The model describes the geometry of a three-dimensional autostereoscopic display based on two planes (the image plane and the light source plane) and two regions (the image region and the observer region). The former region is located near the image plane (a screen), the latter at a certain distance in front of it. In Euclidean coordinates, the quasi-horizontal cross-sections of the regions have a deltoid shape, and their geometrical structure is not uniform.
To obtain an extendable form, projective transformations are used. It turns out that it is possible to build a transformation which yields a square-shaped region with a periodic structure. In this case, the target plane is perpendicular to the quasi-horizontal source plane, with the centers of projection located on the normals to the centers of the light source array and of the sweet spot (where the best visual image can be observed), as shown in Fig. 2. Note that the plane xz, which will be used in the current section, coincides with the plane x_0y_0 in Fig. 2.
The projective form is especially convenient for collinear points on parallel lines; in particular, it ensures the uniform locations of the discrete distance planes (depth planes), as shown in Fig. 3, where k is the index of the distance plane. The physical meaning of the index is the distance. The plane k = 0 (z = 0) is the plane of the light sources, while one of the planes k = ±1 is the image plane; the integral image is located at this plane. In this paper, x, y, and z denote the projected coordinates; the corresponding regular coordinates (x_0, y_0, and z_0 in Figs. 1 and 2) can be found using the inverse matrix [17].
Mathematically, these “particles” are described through the reference functions (in two dimensions, the patterns; see Sec. III) of the integral image. The proposed reference functions (in this paper, step functions with two levels) are based on the rectangular unit impulse function. Essential in this description is that the number of unit impulses in a pattern distinctly depends on the distance. For the sake of simplicity, only the shapes are considered, not amplitudes or colors. Therefore, throughout this paper, all reference functions are of unity height, and all images are also step functions with two levels, i.e. black-and-white images without intermediate grades.
Similarly to the view images, the representation by the reference functions also works in two directions, composition and decomposition. These operations can be expressed through the convolution of the integral image with the reference functions. The convenience of this approach is that it provides both synthesis and analysis of the integral image in quite a similar way.
A three-dimensional object is “split” into slices by the discrete distance planes defined in the model. In the projected form [17], all distance planes are equidistant.
The integral image can be considered as a result of a transformation which maps three-dimensional objects from the image region onto the two-dimensional surface of the image plane. Once the mapping is explicitly found, the reference functions can be built.
A spatial point can be mapped onto the image plane in various ways, e.g., based on the visibility of the light sources. This gives a key to building the impulse mapping functions [18,19] consisting of rectangular impulses of identical amplitude; see Fig. 4. The shape of the functions depends on the index k of the distance plane, i.e. on the distance. A similar geometry is described in [13]. Only the discrete distance planes located at the nodal lines [17] are considered here, supposing that any distance can be approximated by the discrete planes with certain accuracy. In this paper, the distance between the discrete planes is assumed to be less than the required accuracy.
Based on the geometry of Fig. 3, the visibility of the light sources can be estimated as follows. A displayed object is located in the image region. For a given location in three-dimensional space within the projected image region between the distance planes k - 1 and k, the light sources visible from that location are represented in the image plane by k separate points. This is because these light sources are visible from the mentioned location through k cells in the image plane; see Fig. 3. Therefore, the mapped points are distributed in the image plane across k image cells.
Generally, the points of an object are themselves infinitely small; they have zero effective area in the cross-section of the image region, and the number of required locations is virtually infinite. In order to reduce this number, a discrete representation based on line segments can be used instead of the continuous representation. In this case, the effective area becomes larger but remains limited (finite), and the number of required locations is reduced. This way, functions with different numbers of separated points for different distance planes can be built.
Under the above assumptions, the k-th function is a set of |k| impulses (an impulse train). It looks like a sequence of a finite number of squeezed rectangular unit impulses r(x) across several cells.
For example, the second function F_2 (Figs. 3(a), 4) represents a line segment of the second distance plane; the corresponding pattern occupies two cells in the image plane. More generally, the k-th reference function represents a segment of the k-th distance plane, as shown in Fig. 3(b) for k = 3. The width of all impulses is identical, as is the gap between them. Fig. 4 shows examples of the one-dimensional reference functions for |k| ≤ 3. It should be mentioned that in the description of the functions in this paper, the local coordinates are used, which are biased as compared to the model. Namely, the biased index of the distance plane k runs from -m to m, where the incremented m (i.e., m + 1) is equal to the number of the light sources; in the model, the index of the nodal line i runs from 0 to 2m. Also, for each function, the origin lies at its center, as shown in Fig. 4; the position of the origin in the model [17] is shown in Fig. 2.
The period of the impulses of the k-th series can be obtained from the geometry of Fig. 3, Eq. (1), where w is the period of the projected light sources, c = b_0/a_0 is a constant, b_0 is the width of the observer region (sweet spot), and a_0 is the width of the light source array. Note that here a_0, b_0, and c relate to the regular (non-projected) coordinates. In the projected form, the period of the nodes w is the same for all projected planes [17]; in particular, it is equal to the periods of the projected screen cells and light sources.
The width of the impulse involves an adjustment parameter σ, Eq. (2), where 0 < σ ≤ 1 is a positive parameter which controls the accuracy of the representation of spatial objects in the image plane; σw is the adjusted width of a cell.
The center of the i-th cell and the center of the i-th impulse lie at the distances from the local origin (from the center of the impulse pattern) given by Eqs. (3) and (4), where i = 0, …, k - 1.
Based on Eqs. (1) - (4), the k-th function can be written as a sum of impulses of unit amplitude, Eq. (5) (see Fig. 4), where r(x) is the rectangular unit impulse function (the unit impulse of unit width centered at the origin). The depth dependence in (5) is determined by k, whose physical meaning can be expressed explicitly through Eq. (6) in terms of the distance between the projected nodal planes, with g_0 = b_0/2; the square brackets [ ] in (6) denote the integer part. The lateral displacement of an object point by w causes an equal lateral shift of the reference function. In this paper, we imply such a discrete lateral displacement with its step equal to w (the cell size in the projected form); this will be used in Sections IV and V.
The total area of all impulses of any function is the same and equal to σw, i.e. to the adjusted cell width of the single impulse with k = 1. Examples are shown in Fig. 4 for σ = 0.75 and 1, with max |k| = 3.
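To make the construction concrete, the k-th one-dimensional reference function can be sketched on a pixel grid: k impulses, one per cell, each of width σw/k, so that the total area stays σw for every k, with the pattern centered at its local origin. Note a loud assumption: the in-cell placement below uses a hypothetical scale factor (k - 1)/k standing in for the geometry of Eqs. (1) - (4), so the impulse positions are illustrative only; the area bookkeeping is the part grounded in the text.

```python
import numpy as np

def reference_function_1d(k, n_px=60, sigma=1.0):
    """Sketch of a 1-D two-level reference function F_k on a pixel grid.

    Grounded in the text: the k-th pattern has k impulses, one per cell,
    spread across k cells of width w (= n_px pixels); each impulse has
    width sigma*w/k, so the total area stays sigma*w for every k.

    Assumption (not the paper's Eqs. (1)-(4)): the impulse centers are
    the cell centers scaled toward the local origin by the hypothetical
    factor (k - 1)/k, used here only to make the sketch concrete.
    """
    scale = (k - 1) / k if k > 1 else 0.0
    length = k * n_px                     # the pattern occupies k cells
    f = np.zeros(length, dtype=np.uint8)
    width = sigma * n_px / k              # impulse width shrinks as 1/k
    origin = length / 2                   # local origin at pattern center
    for i in range(k):
        cell_center = (i - (k - 1) / 2) * n_px   # center of the i-th cell
        x = origin + scale * cell_center         # assumed impulse center
        f[int(x - width / 2):int(x + width / 2)] = 1
    return f
```

With σ = 1 and a cell size divisible by k, the impulse areas come out exact, which anticipates the divisibility argument of Sec. VI.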
k =k _{0} andk = k _{0}.The influence of the parameter
σ basically looks like follows. Whenσ is small (σ ≪ 1), the mapping actually represents a continuous object, and therefore requires very many distance planes and a tiny lateral displacement; however, with a largerσ ？ 1 the mapping represents the discrete effective areas in the image region (mapping units with the segments of nodal lines being their diagonals) and much fewer distance planes and displacements. This is a sort of a discrete approximation. For example, a halfσ requires a half lateral displacement and a double amount of the distance planes; otherwise there would be some uncovered gaps between the mapping units. In the case ofσ = 1, the nodal lines and the lateral displacementw areonly needed; this particular case is considered in Sections IV and V. In such a way, the size of the mapping unit is controlled; the spatial objects are split in smaller or larger parts and thus can be described with different accuracy for different
σ . This is schematically shown in Fig. 5, where the effective areas in the image region are shown for two values ofσ . Their area size can be roughly estimated as (σw )^{2}, which tends to zero withσ → 0, and to the area of the cellw ^{2} withσ → 1.The onedimensional reference functions (5) can be used for transparency
T (in a reflecting screen, or brightnessB in a light emitting screen) of points in the image plane of the onedimensional (horizontal parallax only, HPO) integral/ multiview imaging. Functions (5) represent the discrete spatial points (voxels) in the image plane. In principle, basing on (5), any threedimensional object can be represented in the image plane point by point. The resulting image makes the threedimensional objects available for the visual observation in an autostereoscopic display device.III. TWODIMENSIONAL REFERENCE FUNCTIONS
In displays, the one-dimensional functions describe stereoscopic images with one-dimensional parallax (horizontal parallax only). For two-dimensional (full-parallax) imaging, two-dimensional reference functions are needed. A straightforward extension of (5) into two dimensions is the multiplication of the one-dimensional functions, as graphically shown in Fig. 6.
This extension is based on the 90° rotation symmetry of the optical element, for example, crossed lenticular plates of identical periods arranged orthogonally. Another example is a lens array with the lenses arranged in a square matrix. Fig. 6 shows examples of patterns of 3 × 3 and 7 × 7 cells, which will be used for the synthesis and analysis of integral images in Sections IV and V.
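For a sampled two-level function, this separable extension reduces to an outer product. A minimal sketch with a toy 1-D function (its impulse positions are arbitrary, not those of Eq. (5)):

```python
import numpy as np

# A toy 1-D two-level function: 2 impulses across 2 cells of 6 pixels
# (positions are illustrative only, not the paper's Eq. (5)).
f1 = np.array([0, 0, 0, 1, 1, 1,
               1, 1, 1, 0, 0, 0], dtype=np.uint8)

# Separable 2-D extension, as in (7): F(x, y) = F(x) * F(y), which for
# sampled two-level functions is an outer product.
f2 = np.outer(f1, f1)   # a 12 x 12 pattern spanning 2 x 2 cells
```

The 2-D pattern stays two-level, and its total area is the square of the 1-D area, mirroring the area invariance of Sec. II in each dimension.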
For the two-dimensional functions (7), the width, period, and displacement of the impulses in the x- and y-directions are the same as in (1) - (4).
In this paper, we intentionally use simple patterns like those of Fig. 6 to explain the principle. In Fig. 6 and the related illustrations below, white means a zero value (zero brightness or an opaque area, depending on the type of a particular screen), while black means one unit (maximum brightness, a completely transparent area). For instance, the functions in Figs. 6(b) and 6(c) have zero values at the perimeter and one unit in the center.
IV. IMAGE SYNTHESIS
What can be done using these elemental particles? It appears that both synthesis and analysis of integral images are possible using the same functions. This means that the composition and decomposition of the integral image can be conveniently expressed through the same operation, the convolution of the integral image with the proposed reference functions. Preliminary results were reported in [18-20]. The particular significance of the proposed approach is that it provides both synthesis and analysis of the integral image in quite a similar way.
The two-dimensional patterns (7) (see also Fig. 6) allow synthesizing images. Since the brightness model assumed in this paper is based on step functions with two levels (zero and one), occlusion cannot actually be supported, and the brightness of the resulting image can be written as the point-by-point logical summation (logical OR) of the reference two-level patterns, Eq. (8),
where the integer numbers k, l, and m run through all points of the projected object, while ∪ denotes the logical summation. In particular, l and m affect the lateral displacements, while k affects the distance.
By using the patterns (7) as construction blocks, an integral image of any three-dimensional object can be built based on, e.g., its wireframe model. The merged patterns (8) form the resulting integral image.
In the simulation examples, the spatial objects are constructed from two non-intersecting skew lines in different planes. For the first object, the first line connects the points (4, 8, 1) and (8, 4, 1) in the first plane, and the second line connects (4, 4, 3) and (8, 8, 3) in the third plane; for the second object, the same x and y locations are used for the third and seventh planes (the above integer coordinates are expressed in cells and nodal lines). The testing images are built point by point using the patterns (7) merged by (8). An integral image may look like a complicated mixture of dots of various sizes. Fig. 7 shows the synthesized integral images of the two mentioned objects; the image in Fig. 7(a) is composed from the patterns of Figs. 6(a) and 6(b), while Fig. 7(b) from those of Figs. 6(b) and 6(c). The integral image of any objects (two lines in this numerical experiment) always lies in the same plane (k = 1 in this example).
To confirm that the testing image represents the intended three-dimensional object, the synthesized image can be printed on paper and displayed using a pinhole array, a lens array, or two crossed lenticular plates. In our experiments, two lenticular plates of 25 lenticules per inch with focal distance 3.63 mm were used. Such a layout is typical for autostereoscopic displays.
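The merging step (8) can be sketched as two-level patterns placed on a common canvas and combined by a point-by-point logical OR. The (row, column) top-left placement convention below is our assumption for illustration; the paper specifies the displacements through the indices k, l, and m.

```python
import numpy as np

def merge_patterns(canvas_shape, patterns_with_offsets):
    """Point-by-point logical OR of two-level patterns, as in Eq. (8).

    patterns_with_offsets: iterable of (pattern, (row, col)) pairs, where
    (row, col) is the assumed top-left pixel of the pattern on the canvas.
    """
    canvas = np.zeros(canvas_shape, dtype=bool)
    for p, (r, c) in patterns_with_offsets:
        h, w = p.shape
        canvas[r:r + h, c:c + w] |= p.astype(bool)  # OR: overlaps saturate at 1
    return canvas.astype(np.uint8)
```

Overlapping patterns saturate at one unit, matching the absence-of-overlap convention of the two-level model.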
In the displayed testing image, the two skew lines can be clearly localized visually by distance, although they look crossed in each photograph of Fig. 8. The visible regular grid in Fig. 8 is formed by the lenticules comprising the square cells.
V. IMAGE ANALYSIS
The reference functions also make it possible to analyze the integral image by distance; in other words, to find the distance from the image plane to this or that point of an object. Similarly to the patterns of certain three-dimensional objects, the reference functions can be treated as the patterns of the most basic objects, the points lying at predefined distances. Together with the known methods, e.g., [21, 22], the reference functions can be applied to extract the distance to three-dimensional objects or their parts based on the integral image. This can be done, for instance, by convolution of the integral image with the reference functions. The convolution is a mathematical operation on two functions defined by an integral [23]; in the one-dimensional case it takes the form of Eq. (9).
In the two-dimensional convolution analysis, the function f in (9) is the integral image under test, while g is the reference function (7). Generally, the convolution shows the similarity between the image and the reference pattern. The convolution analysis of the synthesized testing images of Fig. 7 can be made using the two-dimensional reference functions across a set of fixed discrete locations, i.e., with the lateral step equal to the cell size w. Each plane is analyzed separately with its own pattern (the corresponding reference function).
The resulting surface of the convolution can be represented in various ways: in cross-sections, in three-dimensional projections, or in shades of gray. In the latter case, the values of the functions are encoded by gray levels, similarly to Fig. 6, with additional levels representing the intermediate values. Note that the gray levels are used in this paper only to represent the results of the convolution analysis in the current section and to draw the example of partial pixels in Sec. VI; the reference functions and the images are always step functions with two levels. An example of the image analysis within a distance plane is shown in Fig. 9.
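A sketch of this analysis step: the integral image is matched against one reference pattern on the lattice of cell-sized lateral steps. Cross-correlation is used here, which coincides with the convolution of Eq. (9) for symmetric patterns; the function name and interface are ours, not the paper's.

```python
import numpy as np

def cell_step_correlation(image, pattern, cell=6):
    """Match an integral image against one reference pattern, evaluated
    on the lattice of cell-sized lateral steps (Sec. V). Returns a 2-D
    array of unnormalized correlation values; the maximum over the
    patterns of different k indicates the distance plane."""
    H, W = image.shape
    h, w = pattern.shape
    rows = (H - h) // cell + 1
    cols = (W - w) // cell + 1
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = image[r * cell:r * cell + h, c * cell:c * cell + w]
            out[r, c] = np.sum(patch * pattern)   # unnormalized match score
    return out
```

An absolute match yields the pattern's own area, which, by the area invariance of Sec. II, is the same for all planes; this is what makes a single threshold usable across planes.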
Furthermore, the image can be analyzed across several planes using several patterns corresponding to the different distance planes. An example is shown in Fig. 10 for five planes (and five patterns, three of which are shown in the illustration). In this case, besides the features of each plane, the local maxima of the convolution across several planes indicate the distances of the three-dimensional points of interest.
In the numerical simulation, the testing integral images of the skew lines (like Fig. 7) were analyzed by convolution as described above. In this analysis, the reference functions (7) with k = 1, …, 7 were used. Examples of the convolution analysis for two planes are given in Fig. 11.
The cross-sections of the convolution surface along the ±45° diagonals (i.e., along the original lines) are shown in Fig. 12 for planes 1 through 7. The area where the calculated convolution was not identically zero appeared to be 9 × 9 cells for the first testing image of this simulation and 11 × 11 cells for the second.
As shown in Fig. 12, the convolution with the proper reference function gives a relatively flat response (its non-flatness is about 2 - 7%). This could be a distinctive feature in recognizing line segments within a plane.
The convolution with a neighboring reference function (i.e. the first pattern for the third plane, the fifth pattern for the third and seventh planes, etc.) may sometimes exceed a certain critical value (for example, the level of 60% of the maximum convolution was used in this numerical experiment). Generally, this may lead to inexact recognition. However, the indication of an improper distance plane caused by this reason happened only rarely in the numerical experiments, namely in 2 cells for the first image and in 4 for the second, i.e., only in 2.5% and 3.3% of the whole tested area, respectively. In this example, the cell size was 53 pixels, a value which may produce some inexactness due to the pixelated structure; see Sec. VI.
The convolution of the integral image with the reference functions indicates, by the maximum of the convolution, the points which lie in particular planes; this effectively separates the distance planes. The original planes of the two lines appear to be restored correctly in both testing images of Fig. 7. Similar-looking spatial distributions (see Figs. 4, 6 - 9) can be found, e.g., in [13, 16]. In common with the patterns of digital holography [24, 25], the proposed reference functions depend on the distance only; a discrete displacement within a plane does not affect their shape. Especially important for this analysis is the projection, because the period of the cells is identical for all planes in the projected form but varies from plane to plane in the regular coordinates.
In order to further confirm the above statements, a supplementary computer simulation was performed. In it, two integral images of the diagonals of an 8 × 8 × 8 cube were synthesized for |k| ≤ 5 as described in Sec. IV; these diagonals do not lie in a single distance plane, as was the case in the previous simulation example (but similarly to it, the above integer dimensions of the cube are expressed in cells and nodal lines). The two synthesized images were: four diagonals of the faces in the first image, and two spatial diagonals crossing each other in the center of the cube in the second image; see Fig. 13. In both images, the lines connect the positive and the negative planes, so both the positive and the negative reference functions are involved. The corresponding integral images are shown in Fig. 14. It is essential that in this example the cell size is 60 pixels; the number 60 is divisible by all numbers from 2 to 5, which completely eliminates the effect of the pixelated structure (see Sec. VI).
For the case of the spatial diagonals, the results of the convolution analysis (restored cross-sections) are shown in Fig. 15 by planes.
Figure 15 shows the cross-sections of the cube, and the two crossing diagonals can be recognized in these successive cross-sections. Alternatively, several cross-sections superposed with some artificial displacements are shown together in Fig. 16; the front and rear faces of the cube are highlighted, as well as the spatial diagonals.
In Fig. 16, the spatial structure of the crossed spatial diagonals of the cube is clearly recognizable.
In this example of the convolution analysis, we intentionally used the exact integer patterns (see Sec. VI), and no errors or displaced locations were found; all points of the face/spatial diagonals were restored correctly in both images; refer to Figs. 15 and 16. The related effect of non-integer patterns may produce some errors, as happened in the previous example of this section. The effect of the integer and non-integer patterns is partially covered in the next section.
Nevertheless, the convolution results can be estimated in terms of the signal-to-noise ratio (SNR). In particular, the estimated average SNR of the restored face diagonals is 5.3 (varying from 4.42 to 7.16 in particular planes); for the restored image of the spatial diagonals, the SNR is 5.8 (between 5.18 and 6.43). These values show that in this simulation the desired signal is notably above the noise level and is therefore well recognizable.
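Such SNR values could be estimated along the following lines; the estimator below (mean response over the known object points divided by the mean over the rest of the surface) is an assumption for illustration, not necessarily the estimator used by the authors.

```python
import numpy as np

def snr_estimate(conv, object_mask):
    """Hypothetical SNR estimator for a convolution surface: mean response
    over the known object points divided by the mean response elsewhere.
    conv: 2-D array of convolution values; object_mask: boolean array
    marking the points of the restored object."""
    signal = conv[object_mask].mean()
    noise = conv[~object_mask].mean()
    return signal / noise
```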
For the practical analysis, it is essential that the total area of all impulses of any reference function is the same (see Sec. II). Therefore, the maximum value of the convolution (meaning an absolute match) is the same for all planes. In the recognition of points, lines, etc., this is convenient, because the same criterion can be applied to any plane. To consider a location as a recognized point in the second (supplementary) simulation of this section, the level of 80% of the maximum convolution was used for all planes.
VI. PIXELATED STRUCTURE
In the previous sections, the description of the reference functions was given with no relation to the pixels of the screen where the images are displayed. At present, a typical case is a relatively small number of pixels along one dimension of a cell [26], and therefore the discreteness of the digital screen may become significant.
To represent a connected three-dimensional object occupying a volume between the -k-th and +k-th depth planes, all intermediate planes are needed. The 2k + 1 functions for these planes consist of rectangular impulses whose width is in inverse ratio to k by (2). In the two-dimensional case, a cell of the +k-th and -k-th patterns is split into k^2 equal parts (impulses) distributed across k^2 cells; see Fig. 6. Also, the partition k = 4 is shown in Fig. 17(a). An image cell of exactly 4 pixels is shown separately in Fig. 17(b). The patterns for the positive and negative planes are divided into the same number of smaller parts but differ in the local inversion of the coordinates within the cells. Therefore, there are exactly k principally different ways to split the cell into parts (partitions).
To be displayed on a digital screen, all parts must be expressed in the screen units, the pixels. How can the k partitions of the cell (1, 1/2, …, 1/k) be implemented on a digital screen? A possible solution could be to choose the cell size N divisible by all integer numbers between 1 and k. In this case, N must be the lowest common multiple (LCM) of the numbers 1, 2, …, k.
For instance, the sequence A003418 [27] has the value LCM(1, 2, …, 10) = 2520 for k = 10. Therefore, in order to represent all patterns up to |k| = 10 exactly (without distortions), the image cell must contain 2520 pixels in one dimension. This requires 2520^2 = 6,350,400 pixels for a single cell. Practically, the number of pixels in a single cell cannot be this large in any existing screen. Therefore, using LCM values for the cell sizes does not look practical, although this is the exact formal solution of the partition problem.
It means that, strictly speaking, it is impossible to build all the pulses (2) on a digital screen exactly. Therefore, an approximate solution should be found, and care should be taken about the non-exact partitions. An example of a non-exact 1/3 of a cell and a possible way to approximate partial pixels by gray levels are given in Fig. 18.
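The LCM argument can be checked directly; `math.lcm` (Python 3.9+) reproduces the terms of A003418:

```python
import math
from functools import reduce

def lcm_upto(k):
    """Smallest cell size (in pixels) for which all the partitions
    1, 1/2, ..., 1/k are exact: LCM(1, 2, ..., k), sequence A003418."""
    return reduce(math.lcm, range(1, k + 1), 1)
```

For example, `lcm_upto(5)` gives 60, the cell size used in the cube simulation of Sec. V, while `lcm_upto(10)` gives the impractical 2520 quoted above.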
The inexactness may lead to visual distortions (a less sharp three-dimensional visual image). The partitions should be evaluated from this perspective, expecting that some numbers could produce less distortion.
Among all the k partitions, there can be exact and inexact ones (depending on whether N is divisible by k or not). The divisor function σ_0(n) characterizes the divisors of an integer number and gives the total number of divisors, including 1 and the number itself; refer to [27, 28]. Thus, the number of exact partitions is given by σ_0(n). For example, for n = 12 there are σ_0(12) = 6 exact partitions, Eq. (10). The behavior of the divisor function σ_0(n) is illustrated in Fig. 19.
To estimate the number of exact partitions for various cell sizes in an approximately equal manner, the following relative divisor function can be used, Eq. (11).
This function is derived from Dirichlet's formula for the average order of the divisor function [28]; it gives an asymptotic estimate of the share of exact partitions among all partitions for a given n; see Fig. 20.
In Fig. 20, the relatively high local maxima are indicated by bold dots; for them, the relative divisor function (11) is higher than 2. The corresponding list of the abscissas of the relatively high maxima of
σ_r(n) is given by Eq. (12).
The preferable values (12) provide the highest share of exact partitions; this leads to more accurate reference functions and, therefore, to fewer visual distortions. From (12) and Fig. 20, one may observe that the listed values are 4-fold and 6-fold numbers, at least in the interval from 3 to 240. Consequently, the 12-fold numbers are automatically included in the list. However, not all 4- and 6-fold numbers are in the list (12), because for some of them the value of σ_r(n) is less than 2. Based on that, the preferable values can be expressed by the formula (13), where p ≤ 20 is an integer. (The above numerical value 20 is not a principal limit; it only represents the length of the investigated interval.) The formula (13) describes the preferable cell size of an autostereoscopic display device.

VII. DISCUSSION
Strictly speaking, the definition of the reference functions (5) does not provide a mechanism to distinguish between the +1st and -1st planes. Strange as it seems, this is correct, because the ±1st functions are actually defined in a separate way from the others. A similar situation occurs with the 0th function, which is assigned identically zero by definition. This inexactness may result in some ambiguity of the image analysis near k = 0. Nevertheless, this formal issue is of almost no practical value, because it is really difficult to distinguish visually between the plane of the light sources and the screen from the optimal viewing distance, and therefore the three mentioned planes are visually attributed to the 0th plane. In other words, the mentioned ambiguity just spreads the light sources visually between the +1st and -1st planes, the two possible locations of the screen.
The two-level brightness model is used here for the sake of simplicity. In practice, the number of levels can be increased. Then, in the image generation, the patterns from different planes should overlap each other (replace point by point), starting from the farthest distance plane. This will guarantee the occlusion conditions. The logical summation in (8) is the formal representation of the absence of overlapping in the two-level model.
The pixelated structure is a serious limitation of the discontinuous rectangular pulses [13]. For instance, not more than 20% of all partitions (for image cells of fewer than 60 pixels) can be exact; the other 80% cannot be expressed in integer numbers and are inherently inexact. Thus, most partitions are essentially non-integer, even with the preferable numbers (13). This means that the problem of discreteness is only partially solved here and needs further investigation.
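The share of exact partitions can be estimated directly, counting a partition of an n-pixel cell into k equal parts as exact when k divides n. The ranges below are our assumptions; the paper states only that fewer than 20% of partitions are exact for cells below 60 pixels.

```python
def exact_fraction(max_cell=59, max_parts=None):
    """Fraction of partitions of an n-pixel cell into k equal parts
    that are exact, i.e. k divides n.  Cells run from 3 to `max_cell`
    pixels; part counts from 2 up to `max_parts` (default: up to n)."""
    exact = total = 0
    for n in range(3, max_cell + 1):
        for k in range(2, (max_parts or n) + 1):
            total += 1
            if n % k == 0:
                exact += 1
    return exact / total
```

Under these assumed ranges the fraction indeed stays well below the 20% bound quoted above.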
The formula (13) clearly states that 12-fold image cells are preferable. Generally speaking, formulas similar to (13) can be written for 6- and 4-fold numbers too, but they would be valid only conditionally. From the perspective of fewer visual distortions, an autostereoscopic display device with 12 or 24 pixels (or subpixels) per cell width is definitely better than a hypothetical three-dimensional device with all other parameters identical but, say, 11, 13, or 23 pixels per cell. At the same time, it should not be forgotten that local maxima also exist for some 6- and 4-fold numbers which satisfy the criterion (a relative divisor function higher than 2).
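The preference for 12-fold cell widths can be made concrete with the ordinary divisor-count function σ_0(n) of Fig. 19 (the relative divisor function σ_r(n) of Eq. (11) is not reproduced here, so this is only an illustration of why 12-fold numbers are divisor-rich):

```python
def sigma0(n):
    """Divisor-count function sigma_0(n), as plotted in Fig. 19."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

# Preferable cell widths per formula (13): n = 12*p, p = 1..20.
preferable = [12 * p for p in range(1, 21)]

# A 12-pixel cell splits exactly into 2, 3, 4, 6, or 12 equal parts,
# whereas a neighboring width such as 11 or 13 (prime) splits exactly
# only into itself.
```

For example, sigma0(12) = 6 while sigma0(11) = sigma0(13) = 2, mirroring the claim that 12 pixels per cell is better than 11 or 13.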
When designing the two-dimensional functions, we implicitly relied upon the 90°-rotation symmetry, which, of course, is not the general case. This means that other layouts of optical elements in three-dimensional displays may require different functions, although we expect them to be similar to the reference functions (5). The basic properties of the functions (such as the area) are preserved for any layout of microlenses or pinholes. The number of split impulses depends only on the distance, not on the layout of the cells (light sources). This is a permanent distinctive feature of the geometry of the reference functions.
A particular geometry of the pinhole array affects the shape of the two-dimensional reference functions. As can be seen from (3) and (4), the distances from the center of the pattern to the center of the i-th cell c_{i,k} and to the center of the i-th impulse x_{i,k} are proportional to each other for any given k, because their ratio (14) does not depend on i or on any variable other than k. According to (14), the layouts of the cells and of the impulses (namely, the centers of the cells and pulses) are geometrically similar to each other. This means that one of them can be transformed into the other by uniform scaling (resizing) with the coefficient (14). This suggests the concept of the relative distance. The relative distance in the image plane is the ratio of the lengths of two collinear vectors from the origin to the center of a cell and to the center of the impulse; this ratio is the same for all spatial points at the same distance, i.e. for the same k, as illustrated in Fig. 21(a). In this figure, the vectors to the centers of the cells are shown by thick lines with arrowheads, and the collinear vectors to the centers of the impulses by thin lines. The principle of the relative distance is valid for the rectangular layout and can be applied to other layouts; specifically, this is shown in Fig. 21(b) for the hexagonal layout.

Furthermore, the relative distance gives a key to building the reference functions for an arbitrary (irregular) layout of cells, as shown in Fig. 22, where circular cells of equal area are distributed randomly across the plane. In this example, roughly speaking, the radial displacement of the impulses within the cells is approximately equal to the cell size for the cells crossing the larger reference circle, and to one half of that for the cells crossing the smaller circle (of half radius).
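The uniform-scaling construction can be sketched in a few lines. The exact dependence of the scaling coefficient on the plane number k follows Eqs. (3), (4), and (14), which are not reproduced here, so the coefficient is passed in as a parameter rather than computed:

```python
import numpy as np

def impulse_centers(cell_centers, s):
    """Map cell centers to impulse centers by uniform scaling about the
    pattern origin.  `s` is the scaling coefficient of Eq. (14); it is
    the same for every cell of a given distance plane, which is exactly
    the 'relative distance' property of Fig. 21."""
    cell_centers = np.asarray(cell_centers, dtype=float)
    return s * cell_centers

# Because the mapping is a pure scaling, it applies unchanged to
# rectangular, hexagonal, or even random layouts (Fig. 22): each
# impulse is displaced radially within its cell by the same ratio.
```

After the mapping, the ratio of the impulse-vector length to the cell-vector length equals s for every cell, regardless of the layout.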
For future work, we intend to investigate more complicated grayscale testing images and the influence of the adjustment parameter σ; then, a solution of the problem of discreteness should be found, and the occlusion issues should be considered as well. Each topic is worth examining in detail. For now, we may only assume that the discreteness problem could require a modified representation of the reference functions on the integer grid.

VIII. CONCLUSION
In order to describe an integral image by elemental building blocks, we propose the one- and two-dimensional reference functions. The proposed functions provide the synthesis/analysis of an integral image by distance with controllable accuracy, as an alternative to the known technique of composition/decomposition by view images (directions). The results are confirmed in simulation. It is experimentally confirmed that the synthesized image can be displayed on an autostereoscopic display. In the simulation and the experiments, step functions with two levels are used for the reference functions and for the testing geometric objects. Beyond the general interest (structural elements of multiview and integral images), the proposed reference functions can be used in practical applications such as depth extraction [14], three-dimensional shape extraction [15], transformations of integral images, and so on. The effect of discreteness due to the finite size of pixels is analyzed, and the preferable sizes of cells are determined. Layouts other than rectangular are also discussed.
The proposed analysis can probably substitute for the search for corresponding points in rectified images, which is aimed at the same goal: to reconstruct a three-dimensional structure from images. Also, a direct measurement of depth in a three-dimensional scene photographed through a lens array (instead of a regular camera lens) could be useful.
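The depth-extraction idea behind this comparison — convolve the integral image with the reference function of each distance plane and keep the plane with the strongest correlation peak (cf. Figs. 9–12) — can be sketched as follows. This is a simplified sketch with hypothetical toy patterns, not the paper's actual reference functions:

```python
import numpy as np

def plane_response(image, pattern):
    """Peak of the 2-D cross-correlation of the integral image with one
    reference pattern, over all valid placements of the pattern."""
    H, W = image.shape
    h, w = pattern.shape
    best = -np.inf
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            best = max(best, np.sum(image[r:r + h, c:c + w] * pattern))
    return best

def estimate_plane(image, patterns_by_k):
    """Assign the image to the distance plane whose reference pattern
    gives the strongest correlation peak; `patterns_by_k` maps plane
    numbers k to their (here hypothetical) reference patterns."""
    return max(patterns_by_k, key=lambda k: plane_response(image, patterns_by_k[k]))
```

In the paper's full analysis, the convolution surface for each plane is examined as a whole (its peaks locate the object within the plane); the sketch keeps only the peak value per plane for brevity.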

[FIG. 1.] Cells and directions of the integral image. The composition of the integral image is performed from the view images; the decomposition, into the view images. The location and the number of the light sources are also shown.

[FIG. 2.] Layout of centers and planes of projections. Here, a0 is the array of light sources, b0 is the sweet spot lying at the optimal viewing distance; a and b are their projected counterparts.

[FIG. 3.] Equidistant planes in the projected image region. Mapping of points from the second and third distance planes (in (a) and (b) for k = 2 and 3) onto the image plane (k = 1).

[FIG. 4.] One-dimensional reference functions for distance planes k = −3, …, +3 with σ = 0.75 in (a) and σ = 1.0 in (b). Some of p_k, c_{k,i}, and x_{k,i} are indicated for Eqs. (1), (3), and (4).

[FIG. 5.] The difference in accuracy for different σ: 0.5 and 0.9 in (a) and (b), resp. The smaller σ requires smaller discrete step in both directions. The effective areas are schematically shown by solid squares.

[FIG. 6.] Two-dimensional reference functions for the first, third, and seventh distance planes (3×3, 3×3, 7×7 cells, resp.): (a) k = 1, (b) k = 3, (c) k = 7. The boundaries between the cells are shown for illustration purposes only.

[FIG. 7.] Synthesized testing integral images (13×13 cells) of two spatial skew lines ±45°, in the first and third planes in (a), and in the third and seventh planes in (b); the images are built using the reference functions of Fig. 6.

[FIG. 8.] Photographs of the displayed synthesized image of two spatial skew lines ±45° of Fig. 7(a) in the first and third planes (13×13 cells). The image is displayed using lenticular plates. The photographs are taken by a camera moved horizontally.

[FIG. 9.] Image analysis in a single plane based on a single pattern. The result is represented in two forms: as a surface and in shades of gray.

[FIG. 10.] The convolution analysis for 5 planes (with 5 patterns, 3 of which are shown).

[FIG. 11.] Convolution analysis of testing image Fig. 7(a) (13×13 cells) with the reference functions from Fig. 6 (σ = 1.0, 3×3 cells each): (a) for the first plane (k = 1), (b) for the third plane (k = 3). The boundaries between the cells are not shown, only the cell numbers.

[FIG. 12.] Cross-sections of the surface of convolution for the image of Fig. 7(a) along the diagonals +45° in (a) and −45° in (b), and for Fig. 7(b) along the same diagonals ±45°, in (c) and (d), resp.

[FIG. 13.] The scheme of the simulated diagonals of the cube: (a) four face diagonals, (b) two spatial diagonals.

[FIG. 14.] Synthesized integral images: (a) face diagonals, (b) spatial diagonals.

[FIG. 15.] The convolution analysis of the integral image with spatial diagonals. Shown are the planes k = −5, −3, 3, 5.

[FIG. 16.] The restored structure of the spatial diagonals.

[FIG. 17.] (a) A pattern 4 × 4 cells with cells 4 × 4 pixels; (b) one exact 1/4 of cell of 4 pixels at the corner magnified.

[FIG. 18.] (a) Inexact 1/3 partition of a cell of 4×4 pixels; (b) its representation by gray levels.

[FIG. 19.] The divisor function σ0(n).

[FIG. 20.] The relative divisor function defined by (11).

[FIG. 21.] Two-dimensional reference functions: (a) for rectangular crossed lenticular plates and (b) for hexagonal spherical lenses, based on the relative displacement.

[FIG. 22.] The reference functions for a random pinhole screen based on the relative displacement.