Lane Detection Algorithm for Night-time Digital Image Based on Distribution Feature of Boundary Pixels
  • CC BY-NC (Non-Commercial)
KEYWORD
Lane detection, Hough transformation, Pattern recognition, Machine vision, Intelligent transportation system
    I. INTRODUCTION

    Lane detection is a well-known research field of computer vision with wide applications in autonomous vehicles and advanced driver support systems. Several researchers around the world have been developing vision-based systems for lane detection. Most of the typical systems, such as ARCADE [1], GOLD [2], and RALPH [3], present limitations in situations involving shadows, varying illumination conditions, poorly maintained road markings and other image artifacts.

    Much research on lane detection by machine vision has been carried out. Broadly speaking, the algorithms related to lane detection can be divided into two categories: feature-based and model-based. The feature-based methods extract and analyze local lane features, for instance color features and intensity gradient features, from a digital image acquired by a CCD camera to identify lane markers or lane boundaries. Otsuka et al. [4] introduce a lane recognition algorithm which can find lane markers regardless of their type, such as white lines or raised pavement markers. Turchetto et al. [5] present an algorithm for the visual-based localization of curbs in an urban scenario; the curb localization depends on both photometry and range information. In [6], Wang et al. present a lane edge feature extraction algorithm based on the observation that zooming towards the vanishing point and comparing the zoomed image with the original allows most of the unwanted features to be removed from the lane feature map. In [7], Teng et al. propose an algorithm that integrates multiple cues, including a bar filter, which is efficient for detecting bar-shaped objects such as lanes, and a color cue.

    On the other hand, the model-based methods utilize only a small number of parameters to represent the lane boundaries. Compared with the feature-based technique, this approach is much more robust against noise. Much research on model-based lane detection has been conducted. In Ref. [8], Lakshmanan addresses the problem of locating two straight and parallel road edges in images that are acquired from a stationary millimeter-wave radar platform positioned near ground level. In Ref. [9], Truong et al. use the Non-Uniform B-Spline (NUBS) interpolation method to construct the road lane boundaries: a Canny filter is employed to obtain the edge map, and a parallel thinning algorithm is introduced to acquire the skeleton image. To estimate the parameters of the lane model, techniques such as the likelihood function [10], the Hough transform [11], and chi-square fitting [12] are applied to lane detection. However, most lane models focus only on a certain lane shape and therefore lack the flexibility to model arbitrary road shapes.

    The methods proposed in the references above can detect lane boundaries or lane markers with good results under specified conditions. However, there are still some limitations, such as the failure of road recognition under severe changes of light, especially at nighttime. In detail, the average gray value of images acquired at nighttime is notably lower than in daytime, which leads to low contrast between the lane boundary (lane marker) and the road background. In addition, the distribution of gray values is very nonuniform because of the influence of various light sources, for example automobile headlights and street lamps. Consequently, the lane detection algorithms discussed earlier become invalid under these conditions. Motivated by these drawbacks, a detection algorithm is introduced to recognize lane markers painted on the road at nighttime. The algorithm studies the imaging characteristics and analyzes the distribution features of boundary pixels in order to detect lane markers robustly. Combining the intensity map and the edge map processed by the directional Sobel operator, we construct 4 feature sets for these pixels. Then, a multiple-direction searching method is conducted to suppress false lane boundary points. Finally, an adapted Hough transformation is used to obtain the feature parameters of the lane edge.

    This article is organized as follows. Following the introduction, the lane detection algorithm for night-time images is presented to extract features of lane markers in Section II. Section III illustrates the lane detection results and Section IV concludes the paper. The overall algorithm flowchart is shown in Fig. 1.

    II. LANE DETECTION

    The whole lane detection process is divided into 5 main steps: lane modeling, image filtering, image segmentation, feature extraction, and parameter estimation for lane boundaries. Each of the steps is discussed in detail in the following subsections.

       2.1. Characteristics of Night Lane Images

    In complicated situations where the lane is cluttered with heavy shadows or noise from other objects, the image content at various locations shares similar properties with the lane boundaries. At night this becomes worse. If the feature extraction algorithm for lane detection is applied to only a local image region, unexpected feature points of lane boundaries will be detected. These features will then be interpolated during the parameter estimation process, which leads to inaccuracies and errors. Therefore, such feature points are not desired candidates and should be removed before the parameter estimation stage. It is thus vitally important to analyze the characteristics of night lane images before lane detection. Compared to optical imaging in daylight, input images from a Charge-Coupled Device (CCD) in a night scenario have notably complicated characteristics, as follows.

    [1] The influence of other light sources. It is well known that various human-made light sources, such as street lamps, headlights, advertising lamps and so on, have been deployed to ensure safe driving at night by improving drivers' vision and the visibility of obstacles. However, the illumination from these light sources is much weaker than sunshine in daytime. Consequently, the intensity contrast between lane boundaries and background becomes much lower. The mean intensity of a nighttime image is normally so much lower that lane boundaries become blurred, which makes feature extraction more difficult.

    [2] More uneven intensity distribution. The intensity distribution of images taken from a CCD in a nighttime scenario is much more uneven because of the large number of external light sources. Specifically, it can be observed experimentally that the region near the middle of a nighttime image has a higher average intensity, while the other regions have lower ones. Even worse, several light spots or bands are formed in a nighttime image owing to illumination from light sources in different directions.

    [3] The influence of billboards or traffic signs. As shown in Fig. 2, there exist large-sized billboards or traffic signs coated with reflective material or a reflective layer on either side of a lane, which leads to several large-sized light spots in nighttime images.

       2.2. Analysis for Lane Modeling

    So far all kinds of lane models have been introduced, ranging from the straight line segment model to flexible spline models. However, simplified models cannot describe lane shapes accurately, especially in the far field. On the other hand, complex lane models lead to much heavier computational costs and may even increase the detection error.

    Actually, the imaging characteristics discussed above inevitably introduce a large amount of noise, which increases the difficulty of lane detection. To make our proposed algorithm practical, it is necessary to define reasonable assumptions and appropriate constraints beforehand.

    [1] The case that almost all pixels in the area of interest (AOI, discussed in subsection 2.6) are white due to light saturation is omitted in this paper.

    [2] The case that almost all pixels in AOI are black because of no illumination is ignored.

    [3] Lane markers are mainly painted in white or yellow with highly reflective material, which distinguishes them from the background. Hence, lane markers painted on a lane have a brighter intensity level than the mean intensity of the background.

    [4] To ensure safe driving at high speed, lanes in China generally have a very large curvature radius, especially on freeways. Accordingly, the orientation of lane markers varies smoothly along the lane. From frame to frame, the lane markers move backwards as the vehicle travels forward, but since the color and the width of the lane markers are similar, the driver does not perceive this phenomenon and regards the lane markers as static objects [13]. Additionally, the edge orientation of the lane should not be close to horizontal or vertical.

    According to these assumptions, the left and right lane boundaries can be approximately regarded as straight lines in our system. The line model has been the most widely utilized in existing lane detection systems [14,15] because it combines parallel-line and planar-ground-surface constraints, which suit most freeway applications. In addition, the model requires fewer parameters, which leads to more accurate and faster parameter estimation. Therefore, the line model for lane detection is also exploited in our work.

    Assuming a flat lane, a lane image input from the CCD has M rows and N columns, and the intensity level of pixel (m, n) is denoted I(m, n). The transformed line model for lane detection is shown in Eq. (1) and Eq. (2).

    $$n = k_l\, m + b_l \qquad (1)$$

    $$n = k_r\, m + b_r \qquad (2)$$

    where k and b stand for the slope and intercept of the lane boundaries, respectively, and the subscripts ‘l’ and ‘r’ represent left and right. Lane detection is thus converted into line recognition in a nighttime image. The line model for lane detection is shown in Fig. 3.

       2.3. Neighborhood Average Filtering

    As discussed in the above section, the imaging environment at nighttime is very complicated with respect to image processing, and too many negative factors prevent image processing from being applied consistently. Specifically, the uneven illumination produces irregular light spots or light bands in nighttime lane images, which contain a large amount of noise. Furthermore, other kinds of noise, such as that caused by sensors, transmission channels, voltage fluctuations, quantization, etc., are extremely strong. As a result, if noise in such a complicated imaging environment cannot be effectively suppressed or even eliminated, the final results of lane detection will become inaccurate. From experimental observation, noise affecting pixels in nighttime images can be treated as isolated; hence, the intensity level of a pixel exposed to noise differs significantly from those of its neighbors. Based on this analysis, a neighborhood average filter (NAF) capable of suppressing and reducing noise is utilized in this paper. As shown in Eq. (3), I represents the input image and g is the output image, where I(x, y) is an arbitrary pixel whose horizontal and vertical coordinates in the lane coordinate system are x and y, respectively. M is the size of the neighborhood S; for a 5×5 neighborhood, M is 25. Typically, the neighborhood S is rectangular, centered on (x, y), and much smaller than the whole nighttime lane image. As shown in Fig. 4, we move the origin of the neighborhood from pixel to pixel and apply Eq. (3) to the pixels in the neighborhood to yield the output image at that location. Thus, for any specific location (x, y), the value of the output image g at those coordinates equals the result of applying Eq. (3) in the neighborhood with origin (x, y) in the lane image I. The process typically starts at the top left of the input image and proceeds pixel by pixel in a horizontal scan, one row at a time. For computational convenience, the neighborhood average filter is transformed into the spatial mask operation illustrated in detail in Fig. 5.

    $$g(x, y) = \frac{1}{M}\sum_{(s,\,t)\in S} I(s, t) \qquad (3)$$
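    For illustration, a minimal Python sketch of Eq. (3) follows, assuming a 5×5 neighborhood (M = 25) realized as a uniform convolution mask; the function and variable names are ours, not the paper's.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_average_filter(I, size=5):
    """Eq. (3): replace each pixel with the mean of its size x size
    neighborhood S, i.e. divide the local sum by M = size * size."""
    # uniform_filter scans the image pixel by pixel and computes the
    # local mean, matching the raster-scan procedure described above.
    return uniform_filter(I.astype(np.float32), size=size)

# Usage on a 240 x 320 nighttime frame held in `frame` (hypothetical):
# g = neighborhood_average_filter(frame, size=5)   # M = 25
```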

       2.4. Lane Boundaries Extraction Based on Directional Sobel Operator

    This subsection focuses on extracting the lane boundaries from the lane background. It is generally known that the edge is the most fundamental feature in an image, widely existing between targets and background. In our work, we demonstrate the effect of the proposed algorithm operating on a feature map: the Sobel gradient map.

    Generally, in image processing, the edge is mathematically defined via the gradient of the intensity function. For an original lane image I(x, y), the gradient at coordinate (x, y) is defined as the two-dimensional column vector of Eq. (4). The vector has the important geometrical property that it points in the direction of the greatest rate of change of I(x, y) at location (x, y).

    $$\mathbf{G} = \begin{bmatrix} G_x \\ G_y \end{bmatrix} = \begin{bmatrix} \partial I / \partial x \\ \partial I / \partial y \end{bmatrix} \qquad (4)$$

    The components of the vector G are approximated by Eq. (5) and Eq. (6), using a 3×3 Sobel spatial mask [16].

    $$G_x = \left[I(x{+}1, y{-}1) + 2I(x{+}1, y) + I(x{+}1, y{+}1)\right] - \left[I(x{-}1, y{-}1) + 2I(x{-}1, y) + I(x{-}1, y{+}1)\right] \qquad (5)$$

    $$G_y = \left[I(x{-}1, y{+}1) + 2I(x, y{+}1) + I(x{+}1, y{+}1)\right] - \left[I(x{-}1, y{-}1) + 2I(x, y{-}1) + I(x{+}1, y{-}1)\right] \qquad (6)$$

    To reduce the computation of the vector G, we approximate the square-root operation with absolute values. Hence, the magnitude of G is computed by Eq. (7).

    $$|\mathbf{G}| \approx |G_x| + |G_y| \qquad (7)$$

    In addition, the orientation of the gradient is given by Eq. (8).

    $$\theta(x, y) = \arctan\!\left(\frac{G_y}{G_x}\right) \qquad (8)$$

    For the gradient map, based on the discussion in the last paragraph, we seek an effective and robust algorithm for lane boundary extraction. Many edge extraction operators, such as the Roberts, Sobel, Prewitt, Laplacian, and Canny operators [17], are widely utilized to pick out points on lane boundaries from a cluttered background. Although the Sobel operator, which enhances the vertical and horizontal edge features, is one of the most classic algorithms for producing gradient maps of images acquired in daytime, it yields highly inaccurate edge extraction, or even fails completely, because of the much heavier noise in nighttime lane images. To address this failure, the directional Sobel operator (DSO), which highlights image edge features in eight directions, is introduced. It is composed of 8 spatial operators, Mi (i = 0, 1, 2, …, 7), which take the edge characteristics in 8 directions into consideration. Fig. 6 illustrates the definitions of the DSO. Though the DSO costs slightly more computation than the other edge extraction operators, it removes most of the noise and significantly contributes to the accuracy of nighttime lane detection.
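    Since Fig. 6 is not reproduced here, the following sketch assumes the 8 masks Mi are successive 45° compass rotations of the standard 3×3 Sobel mask, and it keeps, per pixel, the maximum absolute response over all directions in the spirit of the |·| approximation of Eq. (7); the actual masks in Fig. 6 may differ.

```python
import numpy as np
from scipy.ndimage import convolve

# Base 3x3 Sobel mask; M1..M7 are assumed to be successive 45-degree
# compass rotations of M0 (the actual definitions are given in Fig. 6).
M0 = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=np.float32)

def rotate45(m):
    """Shift the ring of 8 border coefficients of a 3x3 mask one
    position clockwise, i.e. rotate the compass mask by 45 degrees."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    out = m.copy()
    for (r0, c0), (r1, c1) in zip(ring, ring[1:] + ring[:1]):
        out[r1, c1] = m[r0, c0]
    return out

def directional_sobel(I):
    """DSO sketch: maximum absolute response over 8 directional masks,
    following the absolute-value approximation of Eq. (7)."""
    masks, m = [], M0
    for _ in range(8):
        masks.append(m)
        m = rotate45(m)
    I = I.astype(np.float32)
    return np.max([np.abs(convolve(I, k)) for k in masks], axis=0)
```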

    Experimental results of the proposed DSO-based feature extraction approach, and a comparison between the DSO-based and other filter-based feature maps, can be found in subsection 3.1.

       2.5. Thresholding Based on Maximum Entropy

    Pixels from boundaries in a nighttime lane image have a large gradient magnitude, and they are in the minority compared with other pixels. Hence, it is necessary to eliminate pixels whose magnitude is smaller than a threshold value.

    Consider an image IDSO(x, y) produced by the DSO. One obvious way to extract the lane boundaries from the background is to search for a threshold T that separates points on lane boundaries from the background. Specifically, the segmented image Iseg(x, y) is defined by Eq. (9).

    $$I_{seg}(x, y) = \begin{cases} 1, & I_{DSO}(x, y) > T \\ 0, & I_{DSO}(x, y) \le T \end{cases} \qquad (9)$$

    In our work, we use Otsu's method, one of the most widely utilized thresholding techniques in image analysis; it has shown great success in image enhancement and segmentation [18]. As demonstrated in [19], it is an automatic thresholding strategy; we exploit this automatic capability in designing the unsupervised classification strategy, justifying its choice.

    Let {0, 1, 2, …, 255} denote the 256 distinct intensity levels in a lane image of size M × N pixels, and let ni stand for the number of pixels with intensity level i. The total number of pixels in a lane image is MN = n0 + n1 + … + n255.

    The normalized histogram has components pi = ni/MN, from which we can conclude

    $$\sum_{i=0}^{255} p_i = 1, \qquad p_i \ge 0 \qquad (10)$$

    where pi is the probability of gray level i in the image. Suppose that a threshold T(k) = k, 0 < k < 255, is selected and the input image is thresholded into two classes, C1 and C2, where C1 includes all pixels with intensity values in the range [0, k] and C2 contains those in the range [k+1, 255]. Then the probability P1(k) that a pixel is assigned to class C1 is given by the cumulative sum

    $$P_1(k) = \sum_{i=0}^{k} p_i \qquad (11)$$

    Similarly, the probability distribution of class C2 is shown as Eq. (12).

    $$P_2(k) = \sum_{i=k+1}^{255} p_i = 1 - P_1(k) \qquad (12)$$

    The mean intensity value of the pixels assigned to class C1 is

    $$m_1(k) = \frac{1}{P_1(k)}\sum_{i=0}^{k} i\, p_i \qquad (13)$$

    Similarly, the mean intensity value of the pixels assigned to class C2 is

    $$m_2(k) = \frac{1}{P_2(k)}\sum_{i=k+1}^{255} i\, p_i \qquad (14)$$

    The cumulative mean up to level k is given by

    $$m(k) = \sum_{i=0}^{k} i\, p_i \qquad (15)$$

    And the average intensity of the entire image is given by

    $$m_G = \sum_{i=0}^{255} i\, p_i \qquad (16)$$

    In order to evaluate the ‘goodness’ of the threshold at level k we use the normalized and dimensionless metric

    $$\eta = \frac{\sigma_B^2}{\sigma_G^2} \qquad (17)$$

    where $\sigma_G^2$ is the global variance

    $$\sigma_G^2 = \sum_{i=0}^{255} (i - m_G)^2\, p_i \qquad (18)$$

    and $\sigma_B^2$ is the between-class variance, defined as

    $$\sigma_B^2 = P_1 (m_1 - m_G)^2 + P_2 (m_2 - m_G)^2 \qquad (19)$$

    Eq. (19) can also be rewritten as Eq. (20).

    $$\sigma_B^2 = P_1 P_2 (m_1 - m_2)^2 \qquad (20)$$

    Reintroducing k, we have the final results

    $$\eta(k) = \frac{\sigma_B^2(k)}{\sigma_G^2} \qquad (21)$$

    and

    $$\sigma_B^2(k) = \frac{\left[m_G P_1(k) - m(k)\right]^2}{P_1(k)\left[1 - P_1(k)\right]} \qquad (22)$$

    Then the optimum threshold is the value, k*, that maximizes $\sigma_B^2(k)$.

    $$\sigma_B^2(k^*) = \max_{0 \le k \le 255} \sigma_B^2(k) \qquad (23)$$

    Subsequently, once the value k* has been obtained, the input image IDSO(x, y) is segmented into Iseg(x, y).

    $$I_{seg}(x, y) = \begin{cases} 1, & I_{DSO}(x, y) > k^* \\ 0, & I_{DSO}(x, y) \le k^* \end{cases} \qquad (24)$$

    Finally, after the thresholding stated above, most of the unexpected feature points of lane boundaries are excluded from the cluttered background, and the remaining feature points are labeled with ‘1’s.
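    The derivation above maps directly to code. The sketch below computes k* by maximizing σB²(k) via Eq. (22) over the normalized histogram and then applies Eq. (24); scaling the DSO magnitudes to [0, 255] is our assumption (an equivalent route is OpenCV's built-in Otsu mode of cv2.threshold).

```python
import numpy as np

def otsu_threshold(img):
    """Return k* maximizing the between-class variance, Eqs. (22)-(23)."""
    # Quantize to 256 levels; DSO magnitudes assumed scaled to [0, 255].
    img = np.clip(img, 0, 255).astype(np.uint8)
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                  # p_i = n_i / MN, Eq. (10)
    P1 = np.cumsum(p)                      # P1(k), Eq. (11)
    m = np.cumsum(np.arange(256) * p)      # cumulative mean m(k), Eq. (15)
    mG = m[-1]                             # global mean, Eq. (16)
    denom = P1 * (1.0 - P1)
    denom[denom == 0] = np.nan             # skip degenerate splits
    sigma_b2 = (mG * P1 - m) ** 2 / denom  # Eq. (22)
    return int(np.nanargmax(sigma_b2))     # k*, Eq. (23)

def segment(I_dso, k_star):
    """Eq. (24): keep boundary candidates as 1s, background as 0s."""
    return (I_dso > k_star).astype(np.uint8)
```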

       2.6. AOI Construction

    This section focuses on reducing the computational cost using techniques based on the area of interest (AOI). This also compensates for the extra computation introduced by the DSO-based feature extraction.

    In our work, we install a digital CCD on a test vehicle. The optical axis of the CCD coincides with the centerline of the car body, and the roll and tilt angles of the CCD installation are close to 0 degrees. Consequently, the intersection point of the two lane boundaries appears at the center in the vertical direction. Since visible lane boundaries generally lie in the lower part of an image, it is reasonable to restrict the processing area to below the intersection point of the two lane boundaries. In addition, we assume that no horizontal or vertical lane boundaries appear in nighttime lane images.

    Owing to the deliberate installation requirements of the CCD, AOI techniques composed of a coarse AOI (C-AOI) and a refined AOI (R-AOI) are realized. Specifically, we design the C-AOI for lane boundary detection within the gray areas shown in Fig. 7. By constructing the C-AOI, we reduce the search time for feature extraction and sharply improve the robustness of lane localization in nighttime images.

    On the other hand, the R-AOI is built to further suppress the influence of the heavy noise caused by uneven illumination from various light sources and to reduce the computational load. The R-AOI is composed of 4 search regions, Zi (i = 1, 2, 3, 4), for lane detection, each of which yields its own threshold value k* using the Otsu-based segmentation algorithm described in subsection 2.5.
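    A hedged sketch of this per-region strategy follows, reusing the otsu_threshold and segment helpers sketched above; the exact geometry of Z1–Z4 follows Fig. 7 and is assumed here to be four strips spanning the lower half of the image.

```python
import numpy as np

def segment_r_aoi(I_dso, regions):
    """Threshold each R-AOI search region Z_i with its own k*, so that
    uneven illumination in one region cannot bias the others."""
    out = np.zeros(I_dso.shape, dtype=np.uint8)
    for rows, cols in regions:
        zi = I_dso[rows, cols]
        out[rows, cols] = segment(zi, otsu_threshold(zi))
    return out

# Assumed layout (the true regions follow Fig. 7): four strips spanning
# the lower half of a 240 x 320 image.
H, W = 240, 320
regions = [(slice(H // 2, H), slice(i * W // 4, (i + 1) * W // 4))
           for i in range(4)]
```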

    An important observation is that any lane marker has inside and outside boundaries, as shown in Fig. 8. Accordingly, 4 boundary feature sets, Blo, Bli, Bro and Bri, are carefully selected as feature parameters, where the uppercase letter ‘B’ denotes boundaries of lane markers, the subscripts ‘l’ and ‘r’ represent the left and right boundaries, and the subscripts ‘o’ and ‘i’ denote the outside and inside boundaries, respectively. Fig. 8 illustrates the definition of the boundary sets.

   2.7. Lane Detection Based on Distribution of Boundary Characteristics

    So far we have completed the preprocessing for lane detection and the construction of the AOI and the boundary sets, which play a significant role in lane recognition. However, how to explicitly detect lane boundaries from the background and precisely describe the detected lane features parametrically has not yet been addressed. Accordingly, it is vitally helpful to analyze how the pixels on boundaries are distributed. In this section, we propose a processing method based on the distribution features of boundary pixels.

    Let the current pixel be I, whose horizontal location in the nighttime lane image is x. We calculate the coefficient E using the following equation.

    $$E = \sum_{i=1}^{n} w_i\, (I_i - I) \qquad (25)$$

    where n is the number of pixels involved in the computation of Eq. (25), Ii is an intensity value around the current pixel I, and wi is a weighting coefficient related to the distance from Ii to I: the farther the distance to I, the smaller the weight. From experimental analysis, we conclude that if I is a possible candidate, the absolute value of the coefficient E should be greater than or equal to some constant Cx. In light of that, our algorithm assigns each candidate boundary pixel, according to the distribution characteristics of the lane boundaries, to the following categories. The signs of Gx and Gy play an important role in classifying points on boundaries.

    [1] For the lane boundary pixels in the left half-plane:

    A. if E ≥ Cx ∧ Gx > 0 ∧ Gy > 0, then I ∈ Blo.

    B. if −E ≥ Cx ∧ Gx < 0 ∧ Gy < 0, then I ∈ Bli.

    [2] For the lane boundary pixels in the right half-plane:

    A. if E ≥ Cx ∧ Gx > 0 ∧ Gy < 0, then I ∈ Bri.

    B. if −E ≥ Cx ∧ Gx < 0 ∧ Gy > 0, then I ∈ Bro.

    Further experiments indicate that a possible candidate on the lane boundaries may still be misclassified by the method stated above, which leads to false boundary points and even failure to detect the lane. To address this misjudgment and improve the robustness of boundary searching, an algorithm called multi-direction searching is performed. For example, assume that an arbitrary pixel I lies in the left outer boundary set, Blo. I should be checked and validated since some false boundary pixels exist. Specifically, we seek a corresponding possible candidate within a certain range, usually the lane width, in the vertical, horizontal and diagonal directions; the search directions are shown in Fig. 9. If I is a true candidate on the lane boundaries, a corresponding point will be found in the left inner boundary set Bli. Otherwise, I is removed from Blo (marked by ‘0’s) because it is a false boundary point. Similarly, whether a pixel lies in the boundary sets Bli, Bro and Bri can be verified according to the search rule mentioned above.

       2.8. Lane Marker Detection Based on Hough Transformation

    After feature extraction, the lane model parameters need to be optimized to generate an accurate representation of the lane boundaries. Finally, the detected results are characterized by a few mathematical model parameters. In our work, the parameter optimization stage applies the Hough Transformation (HT) [20,21], which is widely implemented in other lane detection systems. Given boundary points in a lane image, we wish to find the subsets of these pixels that lie on straight lines. One possible method is to examine all lines determined by every pair of points and then find all subsets of points that are close to particular lines. However, this is a computationally prohibitive task in all but the most trivial applications. Consider a boundary point (xi, yi) and the general equation of a straight line in slope-intercept form, yi = a xi + b. Infinitely many lines pass through (xi, yi), but they all satisfy this equation for varying values of a and b. Accordingly, a parameter space is established. In fact, all the boundary points on a certain line correspond to lines in parameter space that intersect at the same point. So, for each divided area, the HT is conducted by Eq. (26).

    $$\rho = x\cos\theta + y\sin\theta \qquad (26)$$

    where (x, y) is the coordinate of a specific pixel. Fig. 10 illustrates the geometrical interpretation of the parameters ρ and θ. A horizontal line has θ = 0°, with ρ equal to the positive x-intercept. Similarly, a vertical line has θ = 90°, with ρ equal to the positive y-intercept, or θ = −90°, with ρ equal to the negative y-intercept.

    Firstly, the coordinate system is established. We divide the R-AOI into left and right regions, fix the origin at the center of the bottom line, and construct the two coordinate systems shown in Fig. 11. (ρmin, ρmax) and (θmin, θmax) are the expected ranges of the parameter values: −90° ≤ θ ≤ 90° and −D ≤ ρ ≤ D, where D is the maximum distance along the diagonal direction of the lane image. For an arbitrary boundary point (xk, yk), we let θ take each of the allowed subdivision values and solve for the corresponding ρ using the equation ρ = xk cos θ + yk sin θ.
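    As a sketch of this accumulation step (coordinate conventions are assumed; the paper fixes the origin at the center of the bottom line), each boundary point votes for all (ρ, θ) cells it can lie on:

```python
import numpy as np

def hough_accumulate(points, H, W, n_theta=180):
    """Accumulate boundary points into a (rho, theta) array via
    Eq. (26): rho = x*cos(theta) + y*sin(theta)."""
    D = int(np.ceil(np.hypot(H, W)))        # max diagonal distance
    thetas = np.deg2rad(np.arange(-90, 90, 180 / n_theta))
    acc = np.zeros((2 * D + 1, len(thetas)), dtype=np.int32)
    for x, y in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + D, np.arange(len(thetas))] += 1   # offset rho by D
    return acc, thetas

# The (rho, theta) cell with the largest count gives the line parameters;
# in our setting this is done separately for each boundary set.
```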

    In order to enhance computational efficiency, the searching constraints are given as follows.

    [1] The left lane boundary lies in the left half-plane of a lane image, and the right lane boundary occupies the right half-plane. Therefore, the left and right lane boundaries are each sought in the corresponding half-plane.

    [2] Since the edge orientation should not be close to horizontal or vertical, as illustrated in Fig. 11, the angle between the left lane boundary and the x-axis is α, and the angle between the right lane boundary and the x-axis is β. The calculation ranges of the two parameters α and β are restricted to 90°∼180° and 0°∼90°, respectively.

    [3] Eq. (26) is applied to each pixel in the boundary sets Blo, Bli, Bro and Bri, yielding 4 accumulator values, Alo, Ali, Aro and Ari, corresponding to the 4 boundary sets.

    [4] To further reduce the computational load and enhance computational speed, we search for boundary points downward from one-third of the height of the lane image. By selecting the cell with the local maximum statistics for each lane boundary, we obtain a feature vector with 8 parameters, (ρli, θli, ρlo, θlo, ρri, θri, ρro, θro)T, in which the meanings of the subscripts are identical to the definitions in subsection 2.6; the (ρ, θ) pairs can be mapped back to the line model, as sketched below.
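    For completeness, a (ρ, θ) peak can be converted back to the slope k and intercept b of Eqs. (1) and (2); this hypothetical helper assumes sin θ ≠ 0, which the orientation constraint [2] above guarantees.

```python
import numpy as np

def polar_to_slope_intercept(rho, theta):
    """Map a Hough peak (rho, theta) to slope k and intercept b.
    From rho = x*cos(theta) + y*sin(theta):
        y = -(cos(theta)/sin(theta)) * x + rho/sin(theta)
    Requires sin(theta) != 0 (non-vertical line)."""
    k = -np.cos(theta) / np.sin(theta)
    b = rho / np.sin(theta)
    return k, b
```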

    III. EXPERIMENTAL RESULTS

    The lane detection algorithm was tested with images acquired by a digital CCD installed on an experimental vehicle. Lane detection experiments were performed on highways paved with asphalt and cement while the experimental vehicle drove at a velocity of around 80 km/h. Tests in the laboratory were also conducted by recording video sequences of nighttime lane scenarios. The image size is 320×240 pixels. The development platform is the Windows XP operating system with an Intel Pentium 4 3-GHz CPU and 2 GB RAM.

       3.1. Lane Feature Extraction Results

    The feature extraction performance of our proposed algorithm is compared with other traditional methods based on spatial operators, for instance Prewitt, zero-cross, and LoG, by which various types of gradient maps can be obtained. Some example feature extraction results are shown in Fig. 12. As the figures demonstrate, the feature extraction results acquired using the Prewitt, zero-cross and LoG operators include numerous unwanted feature points (as shown in Fig. 12(b), (c) and (d)). The Roberts and Canny operators can effectively extract lane features, but they remove too much edge information that is useful for lane detection (as shown in Fig. 12(e) and (f)). Compared with these results, the 8 spatial Sobel operators show a significant improvement, since they preserve as much edge information as possible while including fewer unwanted features (as shown in Fig. 12(g)).

   3.2. Lane Feature Extraction for Multiple Images

    Figure 13 shows the results for 12 successive images, and Fig. 14 provides the feature parameters corresponding to the detected images. Lane representation based on the line model is illustrated in Fig. 15, where Fig. 15(a) is for the distance ρ and Fig. 15(b) is for the angle θ. The processing time for the proposed algorithm is less than 60 ms. Once the R-AOI is constructed, the time is significantly reduced to 15 ms/frame. Furthermore, the correct lane detection rate is more than 90%. These results demonstrate that the proposed approach improves lane feature extraction at nighttime and produces high-quality output.

    IV. CONCLUSION

    A lane detection algorithm is presented in this article. One of its significant contributions lies in the proposed lane feature extraction algorithm. Lane boundaries are modeled with straight lines, and neighborhood average filtering is used to suppress the influence of noise in lane images taken from a digital CCD. According to the discussion in the preceding sections, the lane detection results for nighttime images are good. The algorithm based on the directional Sobel operator is conducted to improve feature extraction of the lane edge. One of the most important results is the extraction of lane features based on the distribution features of boundary points. It is capable of enhancing most existing feature maps by removing the irrelevant feature pixels produced by unwanted objects and sorting the extracted pixels into 4 sets of lane boundaries, which supply sufficient data on lane boundaries to recognize lane markers more robustly. We showed the successful results of the proposed algorithm on real images at nighttime.

REFERENCES
  • 1. Kluge K. (1994) “Extracting road curvature and orientation from image edge points without perceptual grouping into features” [Proc. Intelligent Vehicles Symposium] Vol.94 P.109-114
  • 2. Bertozzi M., Broggi A. (1998) “GOLD: a parallel real-time stereo vision system for generic obstacle and lane detection” [IEEE Transactions on Image Processing] Vol.7 P.62-81
  • 3. Pomerleau D. (1995) “RALPH: rapidly adapting lateral position handler” [Proc. Intelligent Vehicles Symposium] Vol.95 P.506-511
  • 4. Otsuka Y., Muramatsu S., Takenaga H., Kobayashi Y., Monj T. (2002) “Multitype lane markers recognition using local edge direction” [Proc. IEEE Intelligent Vehicle Symposium] P.604-609
  • 5. Turchetto R., Manduchi R. (2003) “Visual curb localization for autonomous navigation” [Proc. Intelligent Robots and Systems] P.1336-1342
  • 6. Wang Y., Dahnoun N., Achim A. (2012) “A novel system for robust lane detection and tracking” [Signal Processing] Vol.92 P.319-334
  • 7. Teng Z., Kim J. H., Kang D. J. (2010) “Real-time lane detection by using multiple cues” [Proc. International Conference on Control Automation and Systems (ICCAS)] P.2334-2337
  • 8. Lakshmanan S., Grimmer D. (1996) “A deformable template approach to detecting straight edges in radar images” [IEEE Transactions on Pattern Analysis and Machine Intelligence] Vol.18 P.438-443
  • 9. Truong Q. B., Lee B. R., Heo N. G., Yum Y. J., Kim J. G. (2008) “Lane boundaries detection algorithm using vector lane concept” [Proc. 10th International Conference on Control, Automation, Robotics and Vision] P.2319-2325
  • 10. Andreone L., Antonello P. C., Bertozzi M., Broggi A., Fascioli A., Ranzato D. (2002) “Vehicle detection and localization in infra-red images” [Proc. IEEE 5th International Conference on Intelligent Transportation Systems] P.141-146
  • 11. Roushdy M. (2007) “Detecting coins with different radii based on Hough transform in noisy and deformed image” [GVIP Journal] Vol.7 P.25-29
  • 12. Keyou G., Na L., Mo Z. (2011) “Lane detection based on the random sample consensus” [Proc. International Conference on Information Technology, Computer Engineering and Management Sciences (ICM)] P.38-41
  • 13. Kim J., Oh J., Kang H., Lee H., Kim J. (2009) “Detection of tendon tears by degree of linear polarization imaging” [J. Opt. Soc. Korea] Vol.13 P.472-477
  • 14. Bertozzi M., Broggi A., Conte G., Fascioli A. (1997) “Obstacle and lane detection on ARGO” [Proc. IEEE Conference on Intelligent Transportation System] P.1010-1015
  • 15. Lopez A., Serrat J., Canero C., Lumbreras F., Graf T. (2010) “Robust lane markings detection and road geometry computation” [International Journal of Automotive Technology] Vol.11 P.395-407
  • 16. Song D., Chang J., Cao J., Zhang L., Wen Y., Wei A., Li J. (2011) “Airborne infrared scanning imaging system with rotating drum for fire detection” [J. Opt. Soc. Korea] Vol.15 P.340-344
  • 17. Li G., Kwon K.-C., Shin G.-H., Jeong J.-S., Yoo K.-H., Kim N. (2012) “Simplified integral imaging pickup method for real objects using a depth camera” [J. Opt. Soc. Korea] Vol.16 P.381-385
  • 18. Oh S., Hong J., Park J., Lee B. (2004) “Efficient algorithms to generate elemental images in integral imaging” [J. Opt. Soc. Korea] Vol.8 P.115-121
  • 19. Lee B., Hong J., Kim J., Park J. (2004) “Analysis of the expressible depth range of three-dimensional integral imaging system” [J. Opt. Soc. Korea] Vol.8 P.65-71
  • 20. Ghazali K., Xiao R., Ma J. (2012) “Road lane detection using H-maxima and improved Hough transform” [Proc. Fourth International Conference on Computational Intelligence, Modelling and Simulation (CIMSiM)] P.205-208
  • 21. Lee D., Park Y. (2011) “Discrete Hough transform using line segment representation for line detection” [Opt. Eng.] Vol.50 P.87004-87008
IMAGES / TABLES
  • [ FIG. 1. ]  Flowchart of the presented algorithm.
  • [ FIG. 2. ]  Original lane image at nighttime taken from the CCD.
  • [ FIG. 3. ]  Linear model for lane detection in the image coordinate system.
  • [ FIG. 4. ]  A 5×5 neighborhood about a point (x, y) in a lane image.
  • [ FIG. 5. ]  Spatial mask for the neighborhood average filter.
  • [ FIG. 6. ]  8 spatial Sobel operators considering the characteristics of different directions.
  • [ FIG. 7. ]  Coarse AOI for lane detection.
  • [ FIG. 8. ]  Image searching space and definition of inside and outside boundaries of the lane.
  • [ FIG. 9. ]  Searching orientation of false boundary points.
  • [ FIG. 10. ]  (ρ, θ) parameterization of a line in the lane plane.
  • [ FIG. 11. ]  Coordinate system for the Hough transformation.
  • [ FIG. 12. ]  Comparison of feature extraction achieved by applying classical gradient operators.
  • [ FIG. 13. ]  Lane feature extraction for 12 successive images.
  • [ FIG. 14. ]  Feature parameters corresponding to the detected images.
  • [ FIG. 15. ]  Lane representation based on the line model.