Precise Detection of Car License Plates by Locating Main Characters
  • Non-commercial CC BY-NC
KEYWORD
License plate detection, Character localization, Segmentation, Multiple thresholds, Neural network, (100.5010) Pattern recognition and feature extraction, (330.1880) Detection
    I. INTRODUCTION

    Automatic license plate recognition (ALPR) is used in various applications for vehicle identification, e.g., traffic enforcement, parking management and electronic toll collection, in intelligent transportation systems (ITS) [1, 2]. In ALPR systems, car images are acquired from a camera, and the images are processed by various image processing and optical pattern recognition methods [3]. Commonly, an ALPR system consists of three main processes: detection of plate regions, character isolation and character recognition. Prior to the character recognition step, the license plate (LP) regions must be detected and the character regions must be isolated. Since overall performance and processing speed are influenced by these tasks, they are the most significant steps in ALPR systems [3].

    In the literature, there are a large number of methods to detect LPs, using features such as texture, frame edges and color [4]. The texture-based methods use distinctive gray levels such as local variances [5], gradient density [6-8], frequencies [9] and uniformity [3]. These methods show high detection performance, but many non-plate regions are also detected. To overcome this drawback, methods combined with morphological steps were proposed in [10-12].

    The methods using frame edges [13-17] detect frame boundaries by edge or line information. These methods may detect frame regions correctly; however, plates whose boundaries have low contrast cannot be detected.

    The color-based methods [18-22] use a unique color or color sequence of the plate regions. These colors vary greatly due to different illumination conditions, and vehicles may have a color similar to that of the LPs. Thus, color information cannot be directly employed to detect LP regions.

    In most applications, character isolation is performed by projections [8] or connected-region analyses [22, 23] of a binarized LP image after the LP detection. Thus, even if an LP is correctly detected, its characters may not be isolated.

    To develop the most appropriate LP detection method for ALPR, we propose a novel method which can detect LP regions precisely. In the proposed method, we detect LP regions by locating main characters (MCs), which are printed with a large font size. To detect MCs, we binarize an input image with multiple thresholds and select character regions by a criterion of size and compactness. MCs are then grouped as candidates, and a neural network using the relation of the MCs and the intensity statistics is employed to reject non-LP regions. Therefore, our method can detect LP regions precisely, and it is well suited to character recognition because MC regions are isolated in the detection process.

    II. ORGANIZATION OF PROPOSED METHOD

    The overall scheme of our method is shown in Fig. 1. We first apply a DC notch filter, which compensates for the variation of illumination, as shown in Fig. 1. The DC notch filtering is calculated by

    g(x, y) = f(x, y) - f_m(x, y),

    where f and g are the original and the filtered images, respectively, and f_m is the local DC (direct current, average intensity) component at (x, y). The size of the window that calculates the DC may be somewhat larger than the largest character expected to be detected [24].
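    Below is a minimal sketch of this filtering step, assuming NumPy/SciPy; the window size win is a hypothetical value chosen according to the guideline above rather than taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def dc_notch(f: np.ndarray, win: int = 31) -> np.ndarray:
    """Subtract the local DC (average intensity) component from an image.

    win is a hypothetical window size; per the text it should be somewhat
    larger than the largest character expected to be detected.
    """
    f = f.astype(np.float32)
    f_m = uniform_filter(f, size=win)   # local average intensity around (x, y)
    return f - f_m                      # g(x, y) = f(x, y) - f_m(x, y)
```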

    MC regions are detected by selecting from isolated binary regions obtained with multiple thresholds and shape information. Then, we group the MC regions into main character group (MCG) candidates, where an MCG is a group of MCs on an LP. MCGs are detected without any character matching, so many MCG candidates are detected. The characters of an MCG, however, are arranged according to a predefined regulation, so we use a neural network to reject non-MCGs from the candidates, where the relation of the character regions and the intensity statistics are used as input features of the neural network.

    III. DETECTION OF MCG CANDIDATES

    Vehicle images, which contain LPs, are acquired in various places at various times; thus, a single threshold cannot segment character regions correctly. For this reason, we apply multiple thresholds, and then select character regions.

    To detect MCGs, we first segment isolated MC candidates as shown in Fig. 1. Binary images are generated using multiple thresholds, and from among the binary images we select character regions, which are the largest regions not exceeding the size limit for characters. In addition, we use a compactness measure to reject non-character regions. The process proceeds according to the following steps (a short code sketch of the threshold selection follows the steps).

    Step 1: Initialize threshold levels T[0], T[1], …, T[n] associated with higher percentages of the intensity histogram of the image.

    Step 2: Binarize the image with T[0], T[1], …, T[n].

    Step 3: (For each kth binary image) Delete long horizontal lines.

    Step 4: (For each kth binary image) Perform the image labeling on the binary image. Then store the bounding boxes of connected regions.

    Step 5: (For each kth binary image) Select the largest regions satisfying the size of an MC. If a selected region has a lower compactness than the region which is constructed by the lower threshold level (k − 1) and is included in the selected region, select the region constructed by k − 1 instead.

    Step 6: Generate a binary image from the labels selected from all binary images at Step 5.

    Step 7: Label the binary image and then store the bounding boxes of the bright character regions (C_i^b).

    Step 8: For threshold levels associated with lower percentages of the histogram, repeat step 1 to step 7, and store the dark character regions (C_i^d).
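    The following sketch illustrates Steps 1 and 2 for bright characters, under the assumption that the threshold percentages match those shown in Fig. 2 (1.0% to 20.0%); the function names are hypothetical and NumPy is assumed:

```python
import numpy as np

def threshold_levels(img: np.ndarray, percents=(1.0, 2.5, 5.0, 7.5, 10.0, 20.0)):
    """Step 1: threshold levels T[0..n] associated with the higher
    percentages of the intensity histogram (percentages as in Fig. 2)."""
    # Intensity value such that the brightest p% of pixels lie above it.
    return [np.percentile(img, 100.0 - p) for p in percents]

def binarize(img: np.ndarray, thresholds):
    """Step 2: one binary image per threshold level; bright pixels are foreground."""
    return [(img > t).astype(np.uint8) for t in thresholds]
```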

    The characters may appear merged with the frames of LPs because of the camera angle. Thus, in step 3, before labeling, we delete long horizontal lines, i.e., runs of connected pixels whose length is greater than τ_l. In step 5, the compactness (α) is defined as the ratio of the character region to its bounding box region. The compactness is calculated by

    α = N / M,

    where N is the number of pixels in the segmented region, and M is the area of the bounding box enclosing the segmented region. A label merged with the boundary of the plate has a low compactness, so it is rejected by comparing it with the compactness of the included region constructed at the lower threshold level.
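    A minimal sketch of this compactness computation, assuming a binary mask for one labeled region (the function name is hypothetical):

```python
import numpy as np

def compactness(region_mask: np.ndarray) -> float:
    """alpha = N / M: pixels in the segmented region over its bounding-box area."""
    ys, xs = np.nonzero(region_mask)
    if ys.size == 0:
        return 0.0
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    return ys.size / float(height * width)
```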

    In Fig. 2(c) to (h), the gray color shows the regions deleted as long horizontal lines, and the regions surrounded by dotted boxes show regions rejected as non-character regions because their compactness is lower than the compactness of the included regions constructed at the lower threshold levels.

    Figure 3 shows an example of the detection of MC candidates. Figures 3(c) and 3(d) are the binary images generated with higher and lower percentages of the histogram, respectively.

    In common LPs, including those of Korea, the MCG consists of four equally spaced numerals. Thus, we detect MCGs by grouping four horizontally adjacent MCs as follows:

    Step 1: For each bright character region (C_i^b), extend the grouping region to EC_i^b such that

    image

    where H_{EC_i^b} and W_{EC_i^b} denote the height and the width of EC_i^b, respectively, and H_{C_i^b} is the height of C_i^b. EC_i^b is left-aligned with C_i^b and centered on its vertical middle.

    Step 2: Detect the three nearest character regions included in EC_i^b, and group the union of the three rectangular regions and C_i^b as the kth MCG candidate for bright characters (G_k^b).

    Step 3: For all dark character regions (C_i^d), repeat step 1 and step 2, and create the kth MCG candidate for dark characters (G_k^d).

    Examples of the detection of MCG candidates are shown in Fig. 4. C_2^b does not have three regions included in EC_2^b, so no MCG can be created from it; however, three MCGs are created by C_1^b, C_3^b and C_4^b, as in the grouping sketch below.
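    A sketch of this grouping, using axis-aligned bounding boxes; the search-region factors k_w and k_h are hypothetical because the extension equation for EC_i^b is not reproduced here:

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: int  # left
    y: int  # top
    w: int  # width
    h: int  # height

    @property
    def cx(self): return self.x + self.w / 2.0
    @property
    def cy(self): return self.y + self.h / 2.0

def group_mcg_candidates(boxes, k_w=8.0, k_h=1.5):
    """Group each character box with its three nearest neighbors found inside
    an extended search region (Steps 1-2); k_w and k_h are assumed factors."""
    candidates = []
    for i, c in enumerate(boxes):
        # Hypothetical search region: left-aligned with c, centered on c's vertical middle.
        ex, ew = c.x, k_w * c.h
        ey, eh = c.cy - 0.5 * k_h * c.h, k_h * c.h
        inside = [b for j, b in enumerate(boxes)
                  if j != i and ex <= b.cx <= ex + ew and ey <= b.cy <= ey + eh]
        if len(inside) < 3:
            continue  # e.g., C_2^b in Fig. 4: no MCG candidate is created
        nearest = sorted(inside, key=lambda b: abs(b.cx - c.cx))[:3]
        members = [c] + nearest
        x0 = min(b.x for b in members); y0 = min(b.y for b in members)
        x1 = max(b.x + b.w for b in members); y1 = max(b.y + b.h for b in members)
        candidates.append(Box(x0, y0, x1 - x0, y1 - y0))
    return candidates
```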

    A large number of MCG candidates may be detected when there are many horizontally adjacent regions, as shown in Fig. 5. For this reason, we apply a neural network to reject non-MCGs.

    IV. MCG CLASSIFICATION BY NEURAL NETWORK

    Although the LP may be oriented at a small angle in the image, the horizontal distances between neighboring characters retain a regular arrangement. Therefore, we extract the ratios of the horizontal distances between adjacent characters to the width of the MCG; they are used as input features for classification and are calculated by

    I_i = D_i^h / W,   i = 1, 2, 3,

    where W denotes the width of the MCG, and D_i^h denotes the horizontal distances between adjacent characters, as shown in Fig. 6(a). Because the vertical positions of the characters have similar characteristics, we also use three ratios (I_4, I_5 and I_6) of the vertical distances between adjacent characters to the height of the MCG as input features of the neural network. The ratios are calculated by

    I_{i+3} = D_i^w / H,   i = 1, 2, 3,

    where H denotes the height of the MCG, and D_i^w denotes the vertical distances between adjacent characters.

    The intensities of the background and the character regions are different, and both the background and the character regions are characterized by uniform intensity, as shown in Fig. 6(b). Therefore, we extract the average intensity (I_7) and the standard deviation (I_8) of the background region, and the average intensity (I_9) and the standard deviation (I_10) of the character region. The type of the MCG (1 for G^b and 0 for G^d) is used as the feature (I_11) to indicate whether the MCG is bright or dark.
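    A sketch of assembling the eleven features I_1 to I_11 for one candidate; the box ordering, the re-thresholding of the region of interest, and the function name are assumptions made for illustration:

```python
import numpy as np

def mcg_features(char_boxes, gray, is_bright):
    """Build the 11-element feature vector (I_1..I_11) for an MCG candidate.

    char_boxes: four (x, y, w, h) boxes of the MCs, gray: grayscale image,
    is_bright: True for a bright-character candidate G^b, False for G^d.
    """
    boxes = sorted(char_boxes, key=lambda b: b[0])               # left to right
    xs = [b[0] for b in boxes]; ys = [b[1] for b in boxes]
    ws = [b[2] for b in boxes]; hs = [b[3] for b in boxes]
    W = (xs[-1] + ws[-1]) - xs[0]                                # MCG width
    top = min(ys); H = max(y + h for y, h in zip(ys, hs)) - top  # MCG height

    # I_1..I_3: horizontal distances between adjacent characters over W.
    hor = [(xs[i + 1] - (xs[i] + ws[i])) / W for i in range(3)]
    # I_4..I_6: vertical distances between adjacent characters over H.
    ver = [abs(ys[i + 1] - ys[i]) / H for i in range(3)]

    # Character/background masks over the MCG box (approximated here by a
    # simple re-threshold; the paper reuses the segmented character regions).
    roi = gray[top:top + H, xs[0]:xs[0] + W].astype(np.float32)
    char = roi > roi.mean() if is_bright else roi < roi.mean()
    bg = ~char

    # I_7..I_10: intensity statistics of background and character regions.
    stats = [roi[bg].mean(), roi[bg].std(), roi[char].mean(), roi[char].std()]
    # I_11: MCG type, 1 for bright (G^b) and 0 for dark (G^d).
    return np.array(hor + ver + stats + [1.0 if is_bright else 0.0], dtype=np.float32)
```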

    Using the extracted input features, we employ a neural network to reject non-MCGs, as shown in Fig. 7, where the input nodes are the features, and the output nodes classify whether the region is an MCG or not.
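    As a sketch of this classification stage, a small multilayer perceptron could be trained on the eleven features; the hidden-layer size and the scikit-learn usage below are assumptions, since the network architecture is not detailed in this section:

```python
from sklearn.neural_network import MLPClassifier

# Inputs are the 11 features I_1..I_11; the output indicates MCG (1) or non-MCG (0).
# The hidden-layer size is an assumed value, not taken from the paper.
clf = MLPClassifier(hidden_layer_sizes=(10,), activation='logistic',
                    max_iter=2000, random_state=0)

# Hypothetical training data: X_train has shape (num_candidates, 11),
# y_train holds 0/1 labels for the candidates of the training images.
# clf.fit(X_train, y_train)
# is_mcg = clf.predict(features.reshape(1, -1))[0] == 1
```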

    V. EXPERIMENTAL RESULTS

    Our method was tested on 1000 vehicle images taken in Korea. Four hundred forty-five images were used as training images: 115 LP images and 330 non-LP images; many non-LP images are needed for high performance because non-LP regions that resemble LP regions vary greatly. The original image size is 1024×1024. The images are then downsized to 512×512 and the upper region is truncated, so the final image has a size of 512×256 when it is processed. Figure 8 shows an example of the results. Figure 8(a) shows non-MCGs rejected by the neural network, and Fig. 8(b) shows a detected MCG. Figure 9 shows correctly detected MCGs with whole plate regions for various types of LPs.

    In the experiments, various types of LP regions with poor quality were correctly detected, as shown in Fig. 10. Out of the 5664 candidates detected, 4650 candidates were rejected, and 34 non-MCGs were not rejected (false positive rate: 0.6%). Out of the 987 MCGs detected, 980 MCGs were correctly detected (detection rate: 98.0%), and 7 false rejections were observed (true negative rate: 0.1%). The rejected non-MCGs are shown in Fig. 11. The images with undetected MCGs have poor image quality, as shown in Fig. 12. The number of falsely detected MCGs is 34; they are exemplified in Fig. 13. These regions may be rejected by detecting the whole set of license characters. Figure 14 shows MCGs falsely rejected by the neural network. Character isolation and recognition can be performed correctly by additional processes for these images; however, overall performance may worsen because the additional processes may affect clean images. Although various types of plates are used in Korea, as shown in Figs. 9 and 10, high detection performance was achieved because the employed neural network nonlinearly classifies the feature space.

    On a 2.4 GHz Pentium PC, the average processing time for a single image was about 57.5 ms. This is fast enough to realize a commercial ALPR system.

    There are many methods to detect LPs; however, their performance rates are often overestimated for promotional reasons. In the literature, detection rates are often reported as shown in Table 1, but the data sets are limited in quantity and restricted in image quality. For this reason, it is very difficult to compare performance objectively. Although the detection rate of the proposed method is lower than the results of some methods, the proposed method has the following advantages over other methods: stained LP regions whose characters cannot be isolated are not detected, and MC regions are located during LP detection. Therefore, our method may be robust and precise.

    [TABLE 1.] Comparison of detection performance.


    VI. CONCLUSIONS

    In this paper, we proposed a novel method which detects car LPs by locating MCGs. To account for various operating conditions, multiple thresholds are used to detect MCs. To reject non-MCGs from the candidates, the relation of the character regions and the intensity statistics are applied to the input nodes of a neural network.

    If characters cannot be segmented, it is impossible to recognize an LP even though the LP is detected. Character isolation is therefore the most important part of ALPR. Common ALPR systems perform detection of LPs and character isolation separately. In contrast, we detect LP regions directly by locating MCGs, so character isolation is performed within the detection step. The detection rate is 98.0%; however, almost all non-detected plates are so stained that their characters could not be isolated for character recognition anyway, and the average processing time is about 57.5 ms. Therefore, the proposed method can be applied to ALPR systems with high performance and fast processing time. Moreover, by modifying the number of grouped characters and the training data, this method is appropriate for multinational LP detection.

REFERENCES
  • 1. Park S. W, Hwang S. C, Park J. W 2003 The extraction of vehicle license plate region using edge directional properties of wavelet subband [IEICE Trans. Inf. & Syst.] Vol.E86-D P.664-669
  • 2. Anagnostopoulos C.-N. E, Anagnostopoulos I. E, Psoroulas I. D, Loumos V, Kayafas E 2008 License plate recognition from still images and video sequences: a survey [IEEE Trans. Intell. Transp. Syst.] Vol.9 P.377-391
  • 3. Jia W, Zhang H, He X 2007 Region-based license plate detection [J. Netw. Comput. Appl.] Vol.30 P.1324-1333
  • 4. Jia W, He X, Piccardi M Jun. 2004 Automatic license plate recognition: a review [Proc. Int. Conf. Imaging Science, Systems and Technology] P.43-49
  • 5. Gao D. S, Zhou J Aug. 2000 Car license plates detection from complex scene [Proc. 5th Int. Conf. Signal Processing] P.1409-1414
  • 6. Zhang H, Jia W, He X, Wu Q 2006 Real-time license plate detection under various conditions [Lecture Notes in Computer Science] Vol.4159 P.192-199
  • 7. Shapiro V, Dimov D, Bonchev S, Velichkov V, Gluhchev G Jun. 2004 Adaptive license plate image extraction [Proc. 5th Int. Conf. Computer Systems and Technologies] P.1-7
  • 8. Zheng D, Zhao Y, Wang J 2005 An efficient method of license plate location [Pattern Recognit. Lett.] Vol.26 P.2431-2438
  • 9. Parisi R, Claudio E. D. D, Lucarelli G, Orlandi G May 1998 Car plate recognition by neural networks and image processing [Proc. IEEE Int. Symp. Circuits and Systems] P.195-198
  • 10. Martin F, Garcia M, Alba J. L Jun. 2002 New methods for automatic reading of VLP's (Vehicle License Plates) [Proc. IASTED Int. Conf. SPPRA]
  • 11. Hongliang B, Changping L Aug. 2004 A hybrid license plate extraction method based on edge statistics and morphology [Proc. Int. Conf. Pattern Recognition] P.831-834
  • 12. Kasaei S. H. M, Kasaei S. M. M, Monadjemi S. A 2009 A novel morphological method for detection and recognition of vehicle license plates [Amer. J. of Appl. Sci.] Vol.6 P.2066-2070
  • 13. Abolghasemi V, Ahmadyfard A 2007 A fast algorithm for license plate detection [Lecture Notes in Computer Science] Vol.4782 P.468-477
  • 14. Yanamura Y, Goto M, Nishiyama D Jun. 2003 Extraction and tracking of the license plate using Hough transform and voted block matching [Proc. IEEE Intelligent Vehicles Symp.] P.9-11
  • 15. Sarfraz M, Ahmed M. J, Ghazi S. A Jul. 2003 Saudi Arabian license plate recognition system [Proc. 2003 Int. Conf. Geometric Modeling and Graphics] P.36-41
  • 16. Yu M, Kim Y. D Oct. 2000 An approach to Korean license plate recognition based on vertical edge matching [Proc. 2000 IEEE Int. Conf. Systems, Man and Cybernetics] P.2975-2980
  • 17. Duan T. D, Du T. L. H, Phuoc T. V, Hoang N. V Feb. 2005 Building an automatic vehicle license-plate recognition system [Proc. Int. Conf. Computer Science] P.59-63
  • 18. Kim K. I, Jung K, Kim J. H 2002 Color texture-based object detection: an application to license plate localization [Lecture Notes in Computer Science] Vol.2388 P.321-335
  • 19. Kim S. K, Kim D. W, Kim H. J Sep. 1996 A recognition of vehicle license plate using a genetic algorithm based segmentation [Proc. Int. Conf. Image Processing] P.661-664
  • 20. Cao G, Chen J, Jiang J Nov. 2003 An adaptive approach to vehicle license plate localization [Proc. 29th Annual Conf. IECON] P.1786-1791
  • 21. Deb K, Lim H, Kang S.-J, Jo K.-H 2009 An efficient method of vehicle plate detection based on HSI color model and histogram [Lecture Notes in Computer Science] Vol.5579 P.66-75
  • 22. Chang S.-L, Chen L.-S, Chung Y.-C, Chen S.-W 2004 Automatic license plate recognition [IEEE Trans. Intell. Transp. Syst.] Vol.5 P.42-53
  • 23. Naito T, Tsukada T, Yamada K, Kozuka K, Yamamoto S 2000 Robust license plate recognition method for passing vehicles under outside environment [IEEE Trans. Vehic. Techn.] Vol.49 P.2309-2319
  • 24. Minor L. G, Sklansky J 1981 The detection and segmentation of blobs in infrared images [IEEE Trans. Sys. Man and Cybern.] Vol.SMC-11 P.194-201
Images / Tables
  • [ FIG. 1. ] Overall scheme of proposed method.
  • [ FIG. 2. ] Multiple thresholds: (a) original image, (b) binary image and detected MC regions, (c), (d), (e), (f), (g) and (h) binary images by thresholds associated with the higher 1.0%, 2.5%, 5.0%, 7.5%, 10.0% and 20.0% of the intensity histogram, respectively.
  • [ FIG. 3. ] Detection of MC candidates: (a) original image, (b) filtered image, (c) and (d) generated binary images with higher and lower percentages of the intensity histogram, respectively.
  • [ FIG. 4. ] Detection of MCG candidates.
  • [ FIG. 5. ] Detected MCG candidates of Fig. 3(a).
  • [ FIG. 6. ] Features for classification.
  • [ FIG. 7. ] Classification of MCGs.
  • [ FIG. 8. ] MCG classification of Fig. 5: (a) non-MCGs and (b) detected MCG.
  • [ FIG. 9. ] Correctly detected MCGs.
  • [ FIG. 10. ] Detected MCG images.
  • [ FIG. 11. ] Rejected MCG candidate images.
  • [ FIG. 12. ] Non-detected MCG images.
  • [ FIG. 13. ] Falsely detected MCG images.
  • [ FIG. 14. ] Falsely rejected MCG images.
  • [ TABLE 1. ] Comparison of detection performance.