Color Pattern Recognition with Recombined Single Input Channel Joint Transform Correlator
  • License: CC BY-NC (non-commercial)
ABSTRACT
KEYWORD
Color pattern recognition, Joint transform correlator, Color difference, (070.0070) Fourier optics and signal processing, (070.5010) Pattern recognition
    I. INTRODUCTION

    Pattern recognition can be applied to many fields, especially facial appearance, fingerprint, handwriting, and character recognition [1-7]. The joint transform correlator (JTC) has shown remarkable achievements in real-time pattern recognition and target-tracking applications [8-16]. The introduction of color information into pattern recognition has become increasingly important, especially because of the widespread use of electronic image-acquisition devices such as color CCD cameras. Color information is usually introduced into optical-correlation pattern recognition by means of a multi-channel correlation technique that decomposes the source and target color images into three channels: red, green, and blue (RGB). The correlation is performed separately for each channel, and arithmetic or logical point-wise operations can be used to derive the final output [17-19]. The concept of separation into color channels led to the development of multi-channel optical color pattern-recognition systems. The multi-channel methods perform the correlation process in parallel for all the color channels that compose the image, using coherent optical correlators that are illuminated simultaneously with several coherent sources, each having a different wavelength. The output plane consists of a set of superimposed correlation distributions that must be analyzed independently and combined to render the detection decision. A common way in which objects are optically recognized is by use of a multi-channel JTC in which a filter matched to the target is used in each channel. A typical advantage of the JTC is that it is a real-time optical system that quantitatively analyzes and compares the color images by measuring the correlation peak and the peak signal-to-noise ratio (PSNR). A disadvantage of this approach is that it requires three different channels, which increases the system cost: a spatial light modulator (SLM) and two lenses are needed for each channel, and an extra beam splitter and a mirror are also needed. The multi-channel single-output color JTC configuration was initially proposed by Deutsch et al. to overcome these disadvantages [20]. However, the multi-channel single-output color JTC has to take into account the separation between the input target image and the reference image. Deutsch et al. [21] found that the separation between the input target image and the reference image must be the same for each color, and also larger than the sum of the widths of these images, in order to prevent unwanted overlapping between correlation outputs. To yield sharp correlation peaks, Alam et al. proposed a fringe-adjusted JTC based on the Newton-Raphson algorithm [22]. However, they still used a multi-channel single-output JTC to improve correlation discrimination.

    We propose a new technique of color pattern recognition that decomposes the color image into three color components and recombines those components into a single gray image in the input plane. This new technique needs a single input channel instead of three input channels and a single output CCD camera, so a simple JTC can be used. We present various simulated results to show that the proposed technique can accurately recognize and discriminate color differences. In Section II, the conventional multi-channel color pattern-recognition JTC and its basic theory are presented. In Section III, the proposed method of a single-input-channel, single-output color pattern-recognition JTC system is described in detail. Section IV describes simulation results for the color pattern recognition, and finally some comments are given in the conclusion.

    II. MULTI-CHANNEL SINGLE OUTPUT JTC

    The arrangement of the multi-channel single-output JTC is shown in Fig. 1. The input joint image is a monochromatic image that contains six gray-scale images. The red image (rR) for the red channel, the green image (rG) for the green channel, and the blue image (rB) for the blue channel of the color reference image are placed on the left side of the input joint image. The red image (tr) for the red channel, the green image (tg) for the green channel, and the blue image (tb) for the blue channel of the color target image are placed on the right side of the input joint image as three other gray-scale images. The input joint image i(x,y) can be defined as

    i(x, y) = \sum_{m} r_m(x + x_0, y - y_m) + \sum_{n} t_n(x - x_0, y - y_n),  m \in \{R, G, B\},  n \in \{r, g, b\}    (1)

    where r_m represents the three color components (rR, rG, rB) of the reference image, t_n represents the three color components (tr, tg, tb) of the target image, 2x_0 is the horizontal separation between the reference and target columns, and y_m and y_n are the vertical positions of the color channels, chosen so that corresponding channels of the reference and target lie at the same height. After the input joint color image on the LCD1 passes through the lens L1, i(x,y) will be Fourier transformed as Eq. (2).

    I(u, v) = \sum_{m} R_m(u, v) \exp[j(x_0 u - y_m v)] + \sum_{n} T_n(u, v) \exp[-j(x_0 u + y_n v)]    (2)

    The intensity of this Fourier-transformed signal forms an interference fringe pattern called the joint power spectrum (JPS). The JPS is acquired with a light detector (CCD) on the Fourier plane. Therefore, the JPS can be expressed as Eq. (3).

    JPS(u, v) = |I(u, v)|^2 = I(u, v) I^*(u, v)    (3)

    Here the symbol * denotes the complex conjugate. The cross-correlation output can be obtained by inverse Fourier transforming the JPS through the lens L2. There exist at the output plane 36 correlation terms grouped into 15 correlation locations, each location consisting of the coherent addition of a few overlapping cross-correlation terms. Figure 2 shows these 36 correlation outputs and their 15 different locations. If we remove the zero-order (DC) components, the cross-correlation output displayed on the output plane through the inverse Fourier transform lens L2 can be expressed as

    o(x, y) = \sum_{m} \sum_{n} c_{mn}(x, y) \otimes [\delta(x - 2x_0, y + y_n - y_m) + \delta(x + 2x_0, y - y_n + y_m)]    (4)

    where c_mn = r_m(x, y) ⊙ t_n(x, y), and the symbols ⊙ and ⊗ denote correlation and convolution, respectively. The correlation locations of interest are the areas around (-2x_0, 0) and (2x_0, 0); these two locations contain the six cross-correlation terms between the corresponding RGB color channels of the reference image and the input target image, three of which are coherently added at each location. For example, the correlation peak located at (2x_0, 0) has a field distribution expressed as Eq. (5).

    c_{Rr}(x - 2x_0, y) + c_{Gg}(x - 2x_0, y) + c_{Bb}(x - 2x_0, y)    (5)
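
    As an illustration of Eqs. (1)-(5), the following minimal numerical sketch builds the six-image joint input, forms the JPS, and inverse-transforms it to the correlation plane. It is not the authors' simulation code: the image sizes, the offsets x0 and y0, and the use of synthetic random images are illustrative assumptions, and the peak is simply read from a window around (2x0, 0).

```python
import numpy as np

def multichannel_jtc(ref_rgb, tgt_rgb, x0=96, y0=80, pad=512):
    """Sketch of the multi-channel single-output JTC of Fig. 1 (Eqs. (1)-(5)).

    ref_rgb, tgt_rgb: (h, w, 3) float arrays with values in [0, 1].
    The three reference channels are placed at (-x0, {-y0, 0, y0}) and the
    three target channels at (+x0, {-y0, 0, y0}) on a pad x pad input plane.
    """
    h, w, _ = ref_rgb.shape
    cy, cx = pad // 2, pad // 2
    joint = np.zeros((pad, pad))
    for idx, dy in enumerate((-y0, 0, y0)):              # R, G, B channels
        rows = slice(cy + dy - h // 2, cy + dy - h // 2 + h)
        joint[rows, cx - x0 - w // 2: cx - x0 - w // 2 + w] += ref_rgb[..., idx]
        joint[rows, cx + x0 - w // 2: cx + x0 - w // 2 + w] += tgt_rgb[..., idx]

    jps = np.abs(np.fft.fft2(joint)) ** 2                # Eqs. (2)-(3)
    # inverse transform of the JPS gives the correlation plane (the terms of
    # Eq. (4) plus the zero-order term, which is not blocked in this sketch)
    out = np.fft.fftshift(np.abs(np.fft.ifft2(jps)))
    # Eq. (5): coherent sum of the channel cross-correlations near (2x0, 0)
    roi = out[cy - h: cy + h, cx + 2 * x0 - w: cx + 2 * x0 + w]
    return out, roi.max()

if __name__ == "__main__":
    # The 6 image positions produce 6 x 6 = 36 correlation terms that fall on
    # 15 distinct locations (cf. Fig. 2):
    x0, y0 = 96, 80
    pos = [(sx, dy) for sx in (-x0, x0) for dy in (-y0, 0, y0)]
    print(len({(a[0] - b[0], a[1] - b[1]) for a in pos for b in pos}))  # -> 15

    rng = np.random.default_rng(0)
    ref = rng.random((48, 48, 3))
    print("match peak:   ", multichannel_jtc(ref, ref)[1])
    print("mismatch peak:", multichannel_jtc(ref, rng.random((48, 48, 3)))[1])
```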

    III. SINGLE-CHANNEL SINGLE OUTPUT JTC

    We present a new single-channel single-output JTC for the color pattern recognition of a color image. This method achieves color pattern recognition by decomposing the color image into three color components (red, green, and blue) and recombining those components into a single gray image in the input plane, instead of using the three color components separately. This new technique needs a single input channel instead of three input channels and a single output CCD camera, so a simple JTC can be used. Figure 3 shows the optical structure of the basic JTC system for the single-channel, single-output color pattern recognition. Thus, the input joint image i(x,y) can be expressed in a simple form as

    i(x, y) = r_{RGB}(x + x_0, y) + t_{rgb}(x - x_0, y)    (6)

    where r_RGB is the recombined gray image of the three color components rR, rG, rB of the reference image, and t_rgb is the recombined gray image of the three color components tr, tg, tb of the target image. After the input joint image on the LCD1 passes through the lens L1, i(x,y) will be Fourier transformed as Eq. (7).

    I(u, v) = R_{RGB}(u, v) \exp(j x_0 u) + T_{rgb}(u, v) \exp(-j x_0 u)    (7)

    Therefore, the JPS, the intensity of the interfered light, is captured by a light detector on the Fourier plane. The JPS contains noise and DC components that degrade the correlation signal obtained on the output plane in the second stage. The DC components should be blocked to obtain a well-correlated signal; their blocking may be implemented by use of a Fourier-plane image-subtraction technique. This paper also uses the non-linear JTC, which introduces a non-linearity parameter k to compensate for the non-linear response of elements such as the digital camera [15]. Equation (8) expresses the JPS without the DC components and with the non-linearity parameter k.

    JPS_k(u, v) = \{|I(u, v)|^2 - |R_{RGB}(u, v)|^2 - |T_{rgb}(u, v)|^2\}^k = \{R_{RGB}(u, v) T_{rgb}^*(u, v) e^{j 2 x_0 u} + R_{RGB}^*(u, v) T_{rgb}(u, v) e^{-j 2 x_0 u}\}^k    (8)
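
    A short numerical sketch of this Fourier-plane processing step is given below. It assumes that the DC blocking is done by subtracting the separately recorded power spectra of the reference and the target, and that the non-linearity is a power law applied to the subtracted spectrum; the value k = 0.3 is an illustrative choice, not a value taken from the paper.

```python
import numpy as np

def nonlinear_dc_blocked_jps(joint, ref_only, tgt_only, k=0.3):
    """Eq. (8): joint power spectrum with the DC terms removed by
    Fourier-plane image subtraction and a non-linearity parameter k.
    joint, ref_only and tgt_only are equally sized input-plane images
    (the joint image, and the reference/target displayed alone)."""
    jps = np.abs(np.fft.fft2(joint)) ** 2        # |I(u,v)|^2
    dc  = np.abs(np.fft.fft2(ref_only)) ** 2 \
        + np.abs(np.fft.fft2(tgt_only)) ** 2     # |R|^2 + |T|^2
    sub = jps - dc                               # DC-blocked JPS
    return np.sign(sub) * np.abs(sub) ** k       # sign-preserving power law
```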

    The cross-correlation output is obtained by inverse Fourier transforming the JPS through the lens L2; the output displayed on the output plane can be expressed as Eq. (9).

    o(x, y) = [r_{RGB} \odot t_{rgb}](x - 2x_0, y) + [t_{rgb} \odot r_{RGB}](x + 2x_0, y)    (9)

    Only two correlation terms exist at the output plane, in comparison with the 36 correlation terms of the conventional three-input-channel JTC shown in Fig. 2.
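
    The whole single-channel procedure can be sketched end to end as below. The recombination weights are an assumption: the standard luminance weights (0.299, 0.587, 0.114) are used here because the excerpt does not spell out the exact recombination formula; the offsets, image sizes, and k are likewise illustrative.

```python
import numpy as np

# Assumed recombination weights (ITU-R BT.601 luma); the paper only states
# that the three color components are recombined into a single gray image.
RGB_WEIGHTS = np.array([0.299, 0.587, 0.114])

def recombine_to_gray(rgb):
    """Recombine the R, G, B components of a color image into one gray image."""
    return rgb @ RGB_WEIGHTS

def single_channel_jtc(ref_rgb, tgt_rgb, x0=96, pad=512, k=0.3):
    """Sketch of the proposed single-input-channel, single-output JTC
    (Eqs. (6)-(9)): gray reference on the left, gray target on the right,
    DC-blocked non-linear JPS, inverse FFT, correlation peak near (2x0, 0)."""
    r, t = recombine_to_gray(ref_rgb), recombine_to_gray(tgt_rgb)
    h, w = r.shape
    cy, cx = pad // 2, pad // 2

    def place(img, xc):                      # put an image at column center xc
        plane = np.zeros((pad, pad))
        plane[cy - h // 2: cy - h // 2 + h, xc - w // 2: xc - w // 2 + w] = img
        return plane

    ref_plane = place(r, cx - x0)
    tgt_plane = place(t, cx + x0)
    joint = ref_plane + tgt_plane            # Eq. (6)

    jps = np.abs(np.fft.fft2(joint)) ** 2
    sub = jps - np.abs(np.fft.fft2(ref_plane)) ** 2 \
              - np.abs(np.fft.fft2(tgt_plane)) ** 2          # DC blocking
    jps_k = np.sign(sub) * np.abs(sub) ** k                  # Eq. (8)

    out = np.fft.fftshift(np.abs(np.fft.ifft2(jps_k)))       # cf. Eq. (9)
    roi = out[cy - h: cy + h, cx + 2 * x0 - w: cx + 2 * x0 + w]
    return roi.max()                         # one of the two correlation terms

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ref = rng.random((48, 48, 3))
    hue_shift = ref[..., [2, 0, 1]]          # permute the channels (hue change)
    print("match:   ", single_channel_jtc(ref, ref))
    print("mismatch:", single_channel_jtc(ref, hue_shift))
```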

    IV. SIMULATED RESULTS

    To evaluate the performance of the proposed single-input-channel, single-output JTC for color pattern recognition, we prepared a 256×256 color image of fruit. We did several simulations to present detailed results concerning the performance of our new method, grouping the color images of the fruit to test the discrimination of color when the hue, saturation, and brightness of the color images change. First, three images of the fruit are shown in Fig. 4 for the discrimination of color when the hue changes. Figure 4(a) is the reference image, and sample 1 (Fig. 4(b)) and sample 2 (Fig. 4(c)) are the target images, which differ from the reference image in hue. The color of the pepper on the right is red for the match case, and its color was modified into green and blue for the mismatch cases. The white rectangle indicates the 32×32 pixels used for the calculation of the average color value of the three color components (R, G, and B) and the RGB color difference. The RGB color difference was calculated from the formula

    \Delta E = \sqrt{(\bar{R}_r - \bar{R}_t)^2 + (\bar{G}_r - \bar{G}_t)^2 + (\bar{B}_r - \bar{B}_t)^2}    (10)

    where \bar{R}_r, \bar{G}_r, and \bar{B}_r are the average values of the three color components R, G, and B of the reference color image, respectively, and \bar{R}_t, \bar{G}_t, and \bar{B}_t are the average values of the three color components R, G, and B of the target color image, respectively. We introduced a correlation peak difference ratio ΔCP to determine the threshold value between match and mismatch. ΔCP can be defined in a simple form as

    \Delta CP = \frac{CP_r - CP_t}{CP_r}    (11)

    where CP_r and CP_t are the correlation peaks of the reference and target color images, respectively. Table 1 shows the correlation peaks, correlation peak difference ratio, average color value, and color difference of the color images when their hue changes. Table 1 indicates that the correlation peaks decrease considerably when the red pepper of the fruit image shown in Fig. 4(a) changes into green and blue. The color differences of the green (sample 1) and blue pepper (sample 2) images compared with the red pepper (reference) image are about 96.16 and 149.36, respectively. The correlation peak of the match case is about 1.1487×10^7, but the correlation peaks of the mismatch cases, when the green (sample 1) and blue pepper (sample 2) images are compared with the red pepper image (reference), are about 1.0908×10^7 and 0.8948×10^7, respectively. ΔCP is about 0.0504 and 0.2210 for the green and blue images, respectively.

    Second, six color images of the fruit are shown in Fig. 5 for the discrimination of color when the saturation changes. Figure 5(a) is the same as the reference image, and the others are target images that differ from the reference image in saturation. The color of the pepper on the right of Fig. 5(a) is red for the match case, and its saturation was modified step by step for the mismatch cases. Again, 32×32 pixels of the images were selected for the calculation of the average color value of the three color components (R, G, and B) and the RGB color difference. Table 2 shows the correlation peaks, correlation peak difference ratio, average color value, and color difference for the six color images when their saturation changes. Table 2 indicates that the correlation peaks decrease slowly when the red pepper of the fruit image shown in Fig. 5(a) changes to a different saturation. Moreover, it is clear that the change of the correlation peak with the variation of the saturation of the color is almost linear. Also, the color differences of the five target images compared with the reference image shown in Fig. 5(a) are indicated in Table 2.
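
    As a quick arithmetic check, the ΔCP values quoted above can be reproduced from Eq. (11) and the correlation peaks of Table 1; the numbers below are the ones reported in the text.

```python
# Correlation peaks reported in Table 1 (hue-change experiment)
cp_reference = 1.1487e7   # match case (red pepper)
cp_green     = 1.0908e7   # sample 1 (green pepper)
cp_blue      = 0.8948e7   # sample 2 (blue pepper)

def delta_cp(cp_r, cp_t):
    """Correlation peak difference ratio, Eq. (11)."""
    return (cp_r - cp_t) / cp_r

print(round(delta_cp(cp_reference, cp_green), 4))  # -> 0.0504
print(round(delta_cp(cp_reference, cp_blue), 4))   # -> 0.221 (0.2210 in the text)
```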

    [Table 1.] Correlation peak, average color value, and color difference when the hue of color changes

    [Table 2.] Correlation peak, average color value, and color difference when the saturation of color changes

    [Table 3.] Correlation peak and color difference when the brightness of color changes

    Third, we prepared six sets of input joint color images to evaluate the discrimination of the color images when the brightness of the input joint color images changes. In this paper we propose the single-input-channel technique, in which the input joint images consist of reference and target gray images; in other words, the three color values of the color image are converted into the brightness of each color. Thus, we want to know whether the discrimination of the color images fails when the brightness of the color images becomes high. The color image shown in Fig. 6(a) is the same as the reference image (the reference image in Tables 1 and 2). We increased the brightness of the color image of Fig. 6(a) to a considerable level, as shown in Figs. 6(b) to 6(f). These six images were used as the reference images for each brightness level. Also, we prepared another six target images, obtained by increasing the brightness of the target image (sample 3 in Table 2) shown in Fig. 5(b). Table 3 presents the correlation peaks, correlation peak difference ratio, and color difference of the color images at each brightness level. In Table 3, the match case represents the case when the reference image is also used as the target image, and the mismatch case represents the case when the images obtained from sample 3 are used as target images for each brightness level. Again, 32×32 pixels of the images were selected for the calculation of the color difference. Table 3 indicates that the discrimination is successful at various brightness levels. In addition, as we expect, Table 3 indicates that the color difference is almost the same in spite of the change of the brightness of the input joint color images. However, ΔCP approaches zero little by little as the brightness level is increased step by step, so discrimination may be difficult at considerably high brightness levels. Therefore, we conclude that it would be better to decrease the brightness level of very bright input joint color images before the color recognition. We obtained 3-D diagrams of the correlation peaks for all of the simulations, and Fig. 7 shows the 3-D diagram of the correlation peaks for our first experiment, when the hue changes. The correlation peak of the red pepper (match case) is shown in Fig. 7(a), and the correlation peaks of the mismatch cases, when the green (sample 1) and blue pepper (sample 2) images are used as the target image, are shown in Figs. 7(b) and 7(c), respectively.
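
    A sketch of how the 32×32-pixel window statistics and the RGB color difference of Eq. (10) could be computed is shown below; the window coordinates, the synthetic test images, and the recolored patch are illustrative assumptions, since the excerpt does not give the exact window position.

```python
import numpy as np

def region_average_rgb(img, top, left, size=32):
    """Average R, G and B over a size x size window (the white rectangle in
    Figs. 4-6); img is an (H, W, 3) array."""
    patch = img[top:top + size, left:left + size, :].astype(float)
    return patch.reshape(-1, 3).mean(axis=0)

def rgb_color_difference(ref_img, tgt_img, top, left, size=32):
    """RGB color difference of Eq. (10): Euclidean distance between the
    window-averaged (R, G, B) values of the reference and target images."""
    d = region_average_rgb(ref_img, top, left, size) \
      - region_average_rgb(tgt_img, top, left, size)
    return float(np.sqrt(np.sum(d ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    reference = rng.integers(0, 256, (256, 256, 3)).astype(float)
    target = reference.copy()
    target[96:176, 144:224, :] = target[96:176, 144:224, ::-1]  # recolor a patch
    # window placed inside the recolored patch (illustrative coordinates)
    print(rgb_color_difference(reference, target, top=112, left=160))
```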

    V. CONCLUSION

    We proposed a new technique of color pattern recognition that decomposes the input color image into three color components (red, green, and blue) and recombines those components into a single gray image in the input plane. A conventional color JTC needs three input channels, so 36 correlation outputs appear at 15 different locations. However, our new single-input-channel JTC has only two correlation outputs, so a simple JTC can be used. We presented various simulated results concerning the variation of the correlation peaks with the hue, saturation, and brightness of the color images. The simulated results showed that the proposed technique can successfully achieve color pattern recognition over a wide dynamic range of color images.

References
  • 1. Gianino P. D., Horner J. L. 1984 Phase-only matched filtering [Appl. Opt.] Vol.23 P.812-816
  • 2. Mu G. G., Wang X. M., Wang Z. Q. 1988 Amplitude-compensated matched filtering [Appl. Opt.] Vol.27 P.3461-3463
  • 3. Weaver C. J., Goodman J. W. 1966 A technique for optically convolving two functions [Appl. Opt.] Vol.5 P.1248-1249
  • 4. Goodman J. W. 1996 Introduction to Fourier Optics
  • 5. Lee H. C., Gaensslen R. E. 1991
  • 6. Jain A., Hong L., Bolle R. 1997 On-line fingerprint verification [IEEE Trans. Pattern Analysis and Machine Intell.] Vol.19 P.302-314
  • 7. Jeong M. H. 2009 Analysis of fingerprint recognition characteristics based on new CGH direct comparison method and nonlinear joint transform correlator [J. Opt. Soc. Korea] Vol.13 P.445-450
  • 8. Alam M. S., Awwal A. A. S., Karim M. A. 1991 Improved correlation discrimination using joint Fourier transform optical correlator [Microwave & Opt. Technol. Lett.] Vol.4 P.103-106
  • 9. Javidi B., Li J., Fazollahi A. H., Horner J. 1995 Binary nonlinear joint transform correlator performance with different thresholding methods under unknown illumination conditions [Appl. Opt.] Vol.34 P.886-890
  • 10. Jeong M. H. 2010 New random and additional phase adjustment of joint transform correlator [J. Opt. Soc. Korea] Vol.14 P.90-96
  • 11. Urcid-S G., Padilla-V A., Cornejo-Rodriguez A., Ba'ez-Rojas J. 2001 Analysis of the joint Fourier spectrum for dual-input single channel rotation [Proc. SPIE] Vol.4419 P.620-623
  • 12. Chen X. W., Karim M. A., Alam M. S. 1998 Distortion invariant fractional power fringe adjusted joint transform correlation [Opt. Eng.] Vol.37 P.138-143
  • 13. Bal A., El-Sada A. M., Alam M. S. 2005 Improved fingerprint identification with supervised filtering enhancement [Appl. Opt.] Vol.44 P.647-654
  • 14. Jeong M. H. 2010 New iterative filter for fringe adjustment of joint transform correlator [J. Opt. Soc. Korea] Vol.14 P.33-37
  • 15. Javidi B. 1989 Nonlinear joint power spectrum based optical correlators [Appl. Opt.] Vol.28 P.2358-2367
  • 16. Jeong M. H. 2010 Binary nonlinear joint transform correlator with sinusoidal iterative filter in spectrum domain [J. Opt. Soc. Korea] Vol.14 P.357-362
  • 17. Yu F. T. S., Yang Z., Pan K. 1994 Polychromatic target identification with a color liquid-crystal-TV-based joint-transform correlator [Appl. Opt.] Vol.33 P.2170-2172
  • 18. Mendlovic D., Garcia-Martinez P., Garcia J., Ferreira C. 1995 Color encoding for polychromatic single-channel optical pattern recognition [Appl. Opt.] Vol.34 P.7538-7544
  • 19. Corbalan M., Millan M. S. 1996 Color image acquisition by charge-coupled device cameras in polychromatic pattern recognition [Opt. Eng.] Vol.35 P.754-758
  • 20. Deutsch M., Garcia J., Mendlovic D. 1996 Multichannel single-output color pattern recognition by use of a joint transform correlator [Appl. Opt.] Vol.35 P.6976-6982
  • 21. Deutsch M., Garcia J., Mendlovic D. 1996 Multichannel single-output color pattern recognition by use of a joint transform correlator [Appl. Opt.] Vol.35 P.6976-6982
  • 22. Alam M. S., Goh S. F., Dacharaju S. Three-dimensional color pattern recognition using fringe-adjusted joint transform correlation with CIE lab coordinates [IEEE Trans. Instrum. Meas.] Vol.59 P.2176-2184
Images / Tables
  • [ FIG. 1. ] Arrangement of the basic JTC for the multi-channel single-output color pattern recognition.
  • [ FIG. 2. ] Correlation outputs and their locations for the multi-channel single-output JTC.
  • [ FIG. 3. ] Arrangement of the basic NJTC for the single-channel single-output color pattern recognition.
  • [ FIG. 4. ] 256×256 color images for the discrimination of the colors when their hue changes; (a) reference (b) sample 1 (c) sample 2.
  • [ FIG. 5. ] 256×256 color images for the discrimination of the color when their saturation changes; (a) reference (b) sample 3 (c) sample 4 (d) sample 5 (e) sample 6 (f) sample 7.
  • [ Table 1. ] Correlation peak, average color value, and color difference when the hue of color changes
  • [ Table 2. ] Correlation peak, average color value, and color difference when the saturation of color changes
  • [ Table 3. ] Correlation peak and color difference when the brightness of color changes
  • [ FIG. 6. ] 256×256 color images for the discrimination of the color when their brightness changes; (a) sample 8 (b) sample 9 (c) sample 10 (d) sample 11 (e) sample 12 (f) sample 13.
  • [ FIG. 7. ] 3-D diagram of the correlation peaks when the hue changes; (a) reference (b) sample 1 (c) sample 2.