An Automatic Road Sign Recognizer for an Intelligent Transport System
  • License: CC BY-NC (non-commercial)
KEYWORDS
Intelligent transport system, RGB, threshold techniques, traffic sign recognition
  • I. INTRODUCTION

    In traffic environments, traffic sign recognition (TSR) is used to identify traffic signs, warn the driver, and command or prohibit certain actions. A fast, real-time, and robust automatic traffic sign detection and recognition system can support the driver and significantly increase driving safety and comfort. Generally, traffic signs provide the driver with various types of information for safe and efficient navigation. Automatic recognition of traffic signs is therefore important for automated intelligent vehicles and driver assistance systems. However, identifying traffic signs under varying natural background and viewing conditions remains a challenging task. TSR systems are usually developed in two phases [1,2]. The first is the detection of traffic signs in a video sequence or image using image processing; the second is the recognition of the detected signs. The detection algorithms are normally based on shape or color segmentation, and the segmented candidate regions are passed to the recognition stage. The efficiency and speed of the detection play an important role in the overall system. Various methods for automatic traffic sign identification have been developed and have shown promising results.

    In recent studies, the detection and recognition of traffic signs have been under development in many research centers. A vision system for TSR integrated into an autonomous vehicle was developed as part of the European research project PROMETHEUS at the Daimler-Benz Research Center [3-6]. Moreover, many techniques have been developed for road sign recognition; for example, Pacheco et al. [7] used special colored barcodes under road signs to detect them in a vision-based system.

    A tough problem in classifying traffic signs is image degradation. Since it is difficult and unrealistic to collect training data under all possible conditions, Ishida et al. [8] proposed a generative learning method that automatically generates training data matching the actual degradation by estimating the generative parameters.

    Nunn et al. [9] proposed a novel region-of-interest selection concept based on the known real-world positions of traffic signs in different driving situations.

    In [10], de la Escalera et al. developed a model that represented a sign at a fixed distance, perpendicular to the optical axis, located at the center of the image.

    Fleyeh [11] developed a fuzzy approach to traffic sign recognition that combines color and shape information.

    The majority of recently published sign detection approaches make use of color information [7,12-15]. They share a common two-step strategy: first, pre-segmentation is performed by a thresholding operation in each author's preferred color representation. Some authors work directly in RGB space; others apply linear or nonlinear transformations of it. A final detection decision is then obtained from shape-based features applied only to the pre-segmented regions.

    This paper proposes segmentation techniques, namely line segmentation and single sign segmentation, to improve the efficiency and speed of the system. In the detection phase, the acquired image is processed, enhanced, and finally segmented according to the color and shape properties of the sign. The segmentation accuracy depends on the number of correct segments obtained relative to the number of correct segments expected. In single sign segmentation, the segmentation accuracy is 99.04%. Therefore, automatic road sign recognizer systems can easily detect traffic sign images in a complex background.

    The remainder of this paper is organized as follows: Section II presents a system overview. Section III gives a detailed description of the image processing stages. Line segmentation and single sign segmentation are described in Sections IV and V, respectively. Sections VI and VII present the experimental results and conclusions, respectively.

    II. SYSTEM OVERVIEW

    The overall system contains two major parts: image segmentation and object recognition [13,16], as shown in Fig. 1. We perform the overall task in four phases. In the first phase, a list of traffic signs is processed and stored in the database.

    In the second phase, the location of the sign in the image captured by a video camera is detected. In the third phase, the images are preprocessed using image processing techniques such as thresholding. In the fourth phase, recognition of the sign takes place. Recognition of the traffic road sign patterns uses concavity measurement techniques.

    III. IMAGE PROCESSING

    Two types of image processing are discussed in this paper. In the first type, the image selected by the video camera is processed through several steps against the stored list of road signs. The input image from the video sequence, showing the sign against a natural background, is fed into the system, as shown in Fig. 2.

    The image is first enhanced. Grayscale conversion is one of the simplest image enhancement techniques and can be performed using the following function:

    $$g = \frac{1}{3}\left(x_R + x_G + x_B\right)$$

    where $x$ is the original image, with channels $x_R$, $x_G$, and $x_B$, and $g$ is the converted gray-level image.

    Linear conversion is given as

    $$g = a\,x + c$$

    where $a$ is the gain and $c$ is the offset.

    A contrast stretch is further performed by linear conversion such that we have

    $$g = (x - x_{\min})\,\frac{g_{\max} - g_{\min}}{x_{\max} - x_{\min}} + g_{\min}$$

    where $x_{\max}$ and $x_{\min}$ are the maximum and minimum values of the original image, and $g_{\max}$ and $g_{\min}$ are the maximum and minimum values of the gray-level image, respectively.
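    As an illustration, this enhancement stage can be sketched in a few lines of NumPy. This is a minimal sketch under the equations above; the function names and the 0-255 output range are our assumptions, not the authors' implementation.

```python
import numpy as np

def to_grayscale(rgb):
    """Simple grayscale conversion: average the R, G, B channels."""
    return rgb.astype(np.float64).mean(axis=2)

def contrast_stretch(x, g_min=0.0, g_max=255.0):
    """Linear contrast stretch mapping [x_min, x_max] onto [g_min, g_max]."""
    x_min, x_max = float(x.min()), float(x.max())
    if x_max == x_min:                 # flat image: nothing to stretch
        return np.full_like(x, g_min)
    return (x - x_min) * (g_max - g_min) / (x_max - x_min) + g_min

# Example on a stand-in camera frame (H x W x 3, uint8)
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
gray = contrast_stretch(to_grayscale(frame)).astype(np.uint8)
```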

    Steps:

    1. Color segmentation.

    2. Detect edges with an edge detection algorithm.

    3. Remove objects with fewer than 30 red pixels.

    4. Mark the bounding boxes of the objects.

    5. Remove objects whose highest red pixel lies below row 310 of the original image, with the origin (0, 0) of the coordinate system at the upper-left corner.

    6. Remove objects whose height/width ratio is outside the allowed range.

    7. Check the existence of the corners of each object.

    7.1. Find the red pixel with the smallest row number. When there are many such pixels, we choose the pixel with the smallest column number, as shown in Fig. 3.

    7.2. Find the red pixels with the smallest and the largest column numbers. If there are multiple choices, choose those with the largest row numbers.

    7.3. Mark the locations of these three pixels in the imaginary nine equal regions, setting their corresponding containing regions to 1, as shown in Fig. 4.

    7.4. Remove the object if these pixels do not form any of the patterns listed in the database.

    8. For each surviving bounding box, extract the corresponding rectangular, circular, or triangular area from the original image and save it into the region of interest (RoI) list, as shown in Fig. 5.
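    Steps 1-7 can be summarized in code. The following Python sketch uses NumPy and OpenCV connected components; the red-dominance threshold, the height/width bounds, and all function names are illustrative assumptions, since the paper does not specify them.

```python
import numpy as np
import cv2  # OpenCV, for connected-component analysis

def detect_sign_candidates(bgr, min_red_pixels=30, horizon_row=310,
                           ratio_bounds=(0.5, 2.0)):
    """Steps 1-6: red segmentation, then filter objects by size,
    image position, and height/width ratio."""
    r = bgr[:, :, 2].astype(np.int16)
    g = bgr[:, :, 1].astype(np.int16)
    b = bgr[:, :, 0].astype(np.int16)
    # Step 1: crude red segmentation -- red dominates the other channels.
    red_mask = ((r - g > 40) & (r - b > 40)).astype(np.uint8)

    n, _, stats, _ = cv2.connectedComponentsWithStats(red_mask, connectivity=8)
    boxes = []
    for i in range(1, n):                     # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < min_red_pixels:             # step 3: fewer than 30 red pixels
            continue
        if y > horizon_row:                   # step 5: highest red pixel below row 310
            continue
        if not ratio_bounds[0] <= h / w <= ratio_bounds[1]:   # step 6
            continue
        boxes.append((x, y, w, h))            # step 4: bounding box
    return red_mask, boxes

def corner_pattern(red_mask, box):
    """Steps 7.1-7.3: mark the three extreme red pixels of an object in a
    3x3 grid of nine equal regions; step 7.4 would compare this grid
    against the stored database patterns."""
    x, y, w, h = box
    ys, xs = np.nonzero(red_mask[y:y + h, x:x + w])
    pix = list(zip(ys.tolist(), xs.tolist()))
    top = min(pix)                                   # 7.1: min row, tie -> min column
    left = min(pix, key=lambda p: (p[1], -p[0]))     # 7.2: min column, tie -> max row
    right = max(pix, key=lambda p: (p[1], p[0]))     # 7.2: max column, tie -> max row
    grid = np.zeros((3, 3), dtype=np.uint8)
    for rr, cc in (top, left, right):
        grid[min(3 * rr // h, 2), min(3 * cc // w, 2)] = 1
    return grid
```

    Step 8 then clips each surviving bounding box from the original image into the RoI list.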

    IV. LINE SEGMENTATION

    In searching for the beginning of a line, the system looks for the occurrence of a 0 in the bitmap. If one or more 0s are found in the row under consideration, this row is accepted as the starting row of the line. The process continues through the consecutive rows until a row with no 0s is found; the row immediately before it is considered to be the ending row of the line, as shown in Fig. 6.

    To find the starting column of the line, the system scans the binary file vertically, from the starting row to the ending row of the line. The scan begins at the first column of the binary file and proceeds through the consecutive columns. If the system finds one or more 0s in a column, this column is considered to be the starting column of the line.
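    A minimal sketch of this row scan, assuming a NumPy 0/1 bitmap in which 0 marks sign pixels (the function name is ours):

```python
import numpy as np

def find_lines(bitmap):
    """Return (start_row, end_row) for each line in a 0/1 bitmap."""
    has_zero = (bitmap == 0).any(axis=1)   # rows that contain at least one 0
    lines, start = [], None
    for row, flag in enumerate(has_zero):
        if flag and start is None:
            start = row                    # first row with a 0 starts a line
        elif not flag and start is not None:
            lines.append((start, row - 1)) # row before an all-1 row ends it
            start = None
    if start is not None:                  # line reaches the last row
        lines.append((start, len(has_zero) - 1))
    return lines
```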

    V. SINGLE SIGN SEGMENTATION

    Single sign segmentation is performed by scanning vertically, column by column, within each line. If a column contains one or more 0s, this column is considered to be the starting column of a sign. The process continues through the consecutive columns until a column with no 0s is found; the column immediately before it is considered to be the ending column of the sign. Fig. 7 shows the starting and ending columns of each sign.
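    The same scan applied column-wise within one line strip gives single sign segmentation (again a sketch with assumed names, matching the line-segmentation sketch above):

```python
import numpy as np

def find_signs(bitmap, start_row, end_row):
    """Return (start_col, end_col) for each sign within one line strip."""
    strip = bitmap[start_row:end_row + 1, :]
    has_zero = (strip == 0).any(axis=0)    # columns that contain at least one 0
    signs, start = [], None
    for col, flag in enumerate(has_zero):
        if flag and start is None:
            start = col                    # first column with a 0 starts a sign
        elif not flag and start is not None:
            signs.append((start, col - 1)) # column before an all-1 column ends it
            start = None
    if start is not None:
        signs.append((start, strip.shape[1] - 1))
    return signs
```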

    VI. EXPERIMENTAL RESULTS

    In our experiments, the developed system was tested on a collection of 200 sign images. The objective of this section is to study the segmentation and recognition performance of the system, as in [17].

    Each type of traffic sign consists of an original image and a binary image with a specific pattern. If the original image of the traffic sign is effectively extracted, it provides a stable basis for traffic sign recognition. Once the traffic sign region has been detected in the original image, the binary image of the traffic sign can be extracted by line and single sign segmentation. The formulas of image segmentation for extraction of the binary image are given as follows:

    $$T = \alpha R_{avg} + \beta R_{min}$$

    $$B(p, q) = \begin{cases} 1, & R(p + R_{top},\, q + C_{left}) \le T \\ 0, & \text{otherwise} \end{cases}$$

    where $R_{top}$ and $C_{left}$ represent the row and column coordinates of the top-left corner of the traffic sign region, respectively; $R(p + R_{top}, q + C_{left})$ denotes the R-channel gray value at location $(p + R_{top}, q + C_{left})$ of the original image; $R_{avg}$ and $R_{min}$ represent the average and minimum gray values over all pixels of the traffic sign region; and $\alpha$ and $\beta$ are coefficients taking values in the interval [0, 1].
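    Under the reconstruction above, the extraction can be sketched as follows; the threshold form and the default α, β values are our assumptions:

```python
import numpy as np

def binarize_sign(red_channel, r_top, c_left, height, width,
                  alpha=0.5, beta=0.5):
    """Binarize a detected sign region using T = alpha*R_avg + beta*R_min."""
    region = red_channel[r_top:r_top + height,
                         c_left:c_left + width].astype(np.float64)
    t = alpha * region.mean() + beta * region.min()
    return (region <= t).astype(np.uint8)   # 1 = sign pixel, 0 = background
```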

    The results of the different modules of our system are given below. The percentage accuracy of single sign segmentation, $\eta$, was calculated using the following equation:

    $$\eta = \frac{N_{obt}}{N_{exp}} \times 100\%$$

    where $N_{obt}$ and $N_{exp}$ represent the number of correct segments obtained and expected, respectively. Table 1 shows that the proposed scheme provides 99.04% single segmentation accuracy.

    The percentage accuracy of the extracted signs, $\lambda$, was calculated using the following equation:

    $$\lambda = \frac{N_{extract\_sign}}{N_{clipping\_image}} \times 100\%$$

    where $N_{extract\_sign}$ and $N_{clipping\_image}$ represent the number of extracted signs and of clipped images, respectively. Table 2 shows that the proposed scheme achieves 87.3% sign extraction accuracy.

    The percentage accuracy of recognition, $\delta$, was calculated using the following equation:

    $$\delta = \frac{N_{reg\_sign}}{N_{test\_sign}} \times 100\%$$

    where $N_{reg\_sign}$ and $N_{test\_sign}$ represent the number of recognized signs and tested signs, respectively. Table 3 shows that the proposed scheme achieves 94% recognition accuracy.
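    All three metrics are simple ratios; for completeness, a tiny sketch (the counts below are hypothetical, not the paper's data):

```python
def percent(numerator, denominator):
    """Accuracy as a percentage, as in the equations for eta, lambda, delta."""
    return 100.0 * numerator / denominator

eta = percent(206, 208)    # hypothetical: correct segments obtained / expected
lam = percent(175, 200)    # hypothetical: extracted signs / clipped images
delta = percent(188, 200)  # hypothetical: recognized signs / tested signs
```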

    [Table 1.] Single segmentation accuracy results

    [Table 2.] Sign extraction accuracy result

    [Table 3.] Recognition accuracy results

    VII. CONCLUSIONS

    Road signs are deliberately designed and positioned to make it easy for humans to detect and recognize them. It remains a challenge for a computer to perform as well as humans, especially over the full range of possible signs. While sign detection methods are robust and accurate, sign recognition suffers because the extracted sign regions are small and often blurred. In this paper, we have implemented the proposed scheme and obtained numerical results. First, we took a list of traffic signs, processed them, and stored them in a database. If two signs are close together, the segmentation is less accurate because overlap may occur. In single sign segmentation, the segmentation accuracy is 99.04%. Traffic signs are extracted accurately from the image selected by the camera. Therefore, automatic road sign recognizer systems can easily detect traffic sign images in a complex background.

REFERENCES
  • 1. Vicen-Bueno R., Gil-Pita R., Jarabo-Amores M. P., Lopez-Ferreras F., 2005, "Complexity reduction in neural networks applied to traffic sign recognition tasks," in Proceedings of the 13th European Signal Processing Conference.
  • 2. Liu H. X., Ran B., 2001, "Vision-based stop sign detection and recognition system for intelligent vehicles," Transportation Research Record, vol. 1748, pp. 161-166.
  • 3. Gavrila D. M., 1999, "Traffic sign recognition revisited," in Proceedings of the 21st DAGM Symposium for Mustererkennung, pp. 86-93.
  • 4. Fleyeh H., Dougherty M., 2005, "Road and traffic sign detection and recognition," in Proceedings of the 10th Meeting and 16th Mini EURO Conference of the Euro Working Group on Transportation.
  • 5. Yuille A. L., Snow D., Nitzberg M., 1998, "Signfinder: using color to detect, localize and identify informational signs," in Proceedings of the 6th International Conference on Computer Vision, pp. 628-633.
  • 6. Huang C. L., Hsu S. H., 2000, "Road sign interpretation using matching pursuit method," in Proceedings of the 15th International Conference on Pattern Recognition, pp. 329-333.
  • 7. Pacheco L., Batlle J., Cufi X., 1994, "A new approach to real time traffic sign recognition based on colour information," in Proceedings of the Intelligent Vehicles Symposium, pp. 339-344.
  • 8. Ishida H., Takahashi T., Ide I., Mekada Y., Murase H., 2006, "Identification of degraded traffic sign symbols by a generative learning method," in Proceedings of the 18th International Conference on Pattern Recognition, pp. 531-534.
  • 9. Nunn C., Kummert A., Muller-Schneiders S., 2008, "A novel region of interest selection approach for traffic sign recognition based on 3D modeling," in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 654-659.
  • 10. de la Escalera A., Armingol J. M., Pastor J. M., Rodriguez F. J., 2004, "Visual sign information extraction and identification by deformable models for intelligent vehicles," IEEE Transactions on Intelligent Transportation Systems, vol. 5, pp. 57-68.
  • 11. Fleyeh H., 2008, "Traffic sign recognition by fuzzy sets," in Proceedings of the IEEE Intelligent Vehicles Symposium, pp. 422-427.
  • 12. de la Escalera A., Armingol J. M., Mata M., 2003, "Traffic sign recognition and analysis for intelligent vehicles," Image and Vision Computing, vol. 21, pp. 247-258.
  • 13. de la Escalera A., Moreno L. E., Salichs M. A., Armingol J. M., 1997, "Road traffic sign detection and classification," IEEE Transactions on Industrial Electronics, vol. 44, pp. 848-859.
  • 14. Paclik P., Novovicova J., Pudil P., Somol P., 2000, "Road sign classification using Laplace kernel classifier," Pattern Recognition Letters, vol. 21, pp. 1165-1173.
  • 15. Torresen J., Bakke J. W., Sekanina L., 2004, "Efficient recognition of speed limit signs," in Proceedings of the 7th International IEEE Conference on Intelligent Transportation Systems, pp. 652-656.
  • 16. Zadeh M. M., Kasvand T., Suen C. Y., 1997, "Localization and recognition of traffic signs for automated vehicle control systems," Intelligent Transportation Systems, Proceedings of the SPIE, vol. 3207, pp. 272-282.
  • 17. Bénallal M., Meunier J., 2003, "Real-time color segmentation of road signs," in Proceedings of the IEEE Canadian Conference on Electrical and Computer Engineering, pp. 1823-1826.
FIGURES
  • Fig. 1. Block diagram of the overall system.
  • Fig. 2. The picture selected by the camera.
  • Fig. 3. Gray-level image.
  • Fig. 4. Marked locations of pixels.
  • Fig. 5. Extracted traffic sign.
  • Fig. 6. Binary representation of a sign using line segmentation.
  • Fig. 7. Single sign separation from the line-representation binary image.