Development of an Edge-Based Algorithm for Moving-Object Detection Using Background Modeling
  • License: CC BY-NC (non-commercial)
KEYWORDS
Background modeling, Edge detector, Moving-object detection
I. INTRODUCTION

Moving-object detection methods detect moving objects by subtracting the background from the current image, so detection performance depends on the background modeling. A background model should overcome illumination variation and noise in the background. Modeling methods can be classified into two types, namely pixel-based and edge-based methods, depending on the features used for detecting a moving object. Pixel-based background modeling considers changes in illumination and noise; such changes can produce moving edges in the background, which degrade moving-object detection. A large body of work has addressed background-model representation and adaptation for pixel-based methods [1-5]. Edge-based methods use edges, which are less sensitive to intensity changes [6-10], and work with fewer pixels than pixel-based methods.

Dailey et al. [6] presented an algorithm that uses the interframe differences of three consecutive frames to obtain two difference images. A Sobel edge detector was applied to these images, and a threshold was then applied to produce binary images. Finally, the two binary images were intersected to obtain a moving-edge map. Kim and Hwang [7] presented an algorithm for segmenting moving objects with a robust double-edge map derived from the difference between two successive frames. After removing the edge points belonging to the previous frame, the remaining edge map and the moving-edge map were combined to compute the final moving-edge map. Absolute background edges could be extracted from the first frame or by counting the number of edge occurrences for each pixel over the first several frames. This background initialization creates false-positive edges because it is impossible to obtain a background free of moving objects in a real-world environment. Furthermore, both methods are sensitive to variations in the shape of moving objects and to noise, and because they apply no background modeling, they cannot detect slow-moving objects. These limitations can be overcome by background modeling.

In this paper, we present an edge-based background modeling method based on the edge map of an interframe difference image that overcomes the problems of illumination variation, moving objects, and spurious edges. We use a Canny edge detector to extract the edges of the image frames and create two edge maps: a changing moving-edge map and a stationary moving-edge map. These two maps are combined to compute the ultimate moving-edge map. A temporary background-edge map is then created by selecting all edge pixels of the current frame that lie beyond a distance threshold of the ultimate moving edges. The frequency of each temporary background-edge pixel is counted over several frames; pixels whose frequency exceeds a threshold are treated as background-edge pixels and stored as the new background. Using this updated background, we can detect stationary moving edges efficiently.

    The rest of this paper is organized as follows: In Section II, we discuss the details of the proposed algorithm. Section III presents the results of the performance evaluation, followed by the overall conclusion in Section IV.

    II. PROPOSED METHOD

The first stage of the proposed background modeling is edge detection. We used a Canny edge detector [11] in this stage, which executes five steps: smoothing, finding gradients, non-maximum suppression, double thresholding, and edge tracking by hysteresis. In the smoothing step, the image is blurred to remove noise. In the gradient step, a gradient operator is applied to the Gaussian-convolved image G*F. Non-maximum suppression is applied to the gradient magnitude to thin the edges, and double thresholding with hysteresis is applied to detect and link edges. The Canny edge map can be expressed as follows:

En = ϕ(Fn), where ϕ(·) denotes the Canny edge operator applied to frame Fn.

Edge extraction from the difference image of successive frames results in a noise-robust difference edge map DEn because the Gaussian convolution included in the Canny operator suppresses the noise in the luminance difference:

DEn = ϕ(|Fn − Fn−1|)
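As a rough sketch (not the authors' implementation), the two edge maps can be computed with OpenCV, whose cv2.Canny routine bundles the smoothing, gradient, non-maximum suppression, and hysteresis steps described above; the hysteresis thresholds lo and hi below are illustrative values, not parameters from the paper:

```python
import cv2

def edge_map(frame_gray, lo=50, hi=150):
    """phi(F): Canny edge map of a grayscale frame (binary 0/255)."""
    return cv2.Canny(frame_gray, lo, hi)

def difference_edge_map(frame_gray, prev_gray, lo=50, hi=150):
    """DEn = phi(|Fn - Fn-1|): Canny edges of the inter-frame difference."""
    diff = cv2.absdiff(frame_gray, prev_gray)  # |Fn - Fn-1|
    return cv2.Canny(diff, lo, hi)
```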

Fig. 1 shows the block diagram of the proposed moving-object detection algorithm. We extract the moving edges MEn of the current frame Fn using the difference edge map DEn, the current frame's edge map En = ϕ(Fn), and the background-edge map Eb. Before the background has been modeled, the background-edge map is extracted from the first frame.

We define the edge map En = {e1, e2, e3, …, ek} as the set of all edge points detected by the Canny operator in the current frame Fn. Similarly, we denote the set of l moving-edge points as MEn = {m1, m2, …, ml}, where l ≤ k and MEn ⊆ En. The moving-edge points in MEn capture both the interior edge points and the boundary edge points of the moving object. DEn denotes the set of all pixels belonging to the edge map of the difference image. The moving-edge map generated by edge changing is given by selecting all edge pixels within a small distance Tchange of DEn, i.e.,

MEchange = {e ∈ En | dist(e, DEn) ≤ Tchange}, where dist(e, DEn) denotes the distance from e to the nearest point of DEn.
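The paper does not specify how the "within a small distance Tchange" test is implemented; one common realization for binary edge maps, sketched below, dilates the reference map by the distance threshold and intersects the result with the current edge map (the names e_n, de_n, and t_change are placeholders):

```python
import cv2

def edges_within(edge_map, reference_map, t):
    """Edge pixels of edge_map lying within distance ~t of any pixel of reference_map."""
    k = 2 * t + 1
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
    near = cv2.dilate(reference_map, kernel)  # marks every pixel within ~t of the reference edges
    return cv2.bitwise_and(edge_map, near)

# Changing moving edges: current-frame edges close to the difference edge map.
# me_change = edges_within(e_n, de_n, t_change)
```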

For selecting the stationary moving edges, all edge points of the current frame that belong to the previous moving-edge map are removed. We define the stationary moving-edge map as the set of all edges of the current frame that lie within the distance Tstationary of the previous moving-edge map MEn−1:

MEstationary = {e ∈ En | dist(e, MEn−1) ≤ Tstationary}

The ultimate moving-edge map for the current frame is obtained by combining the two maps:

MEn = MEchange ∪ MEstationary
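Reusing the edges_within helper sketched above, the three maps might be combined as follows; again, this is a sketch with placeholder names rather than the authors' code:

```python
import cv2

def ultimate_moving_edges(e_n, de_n, me_prev, t_change, t_stationary):
    """MEn = MEchange ∪ MEstationary, as a pixel-wise OR of binary edge maps."""
    me_change = edges_within(e_n, de_n, t_change)             # edges near DEn
    me_stationary = edges_within(e_n, me_prev, t_stationary)  # edges near MEn-1
    return cv2.bitwise_or(me_change, me_stationary)
```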

The temporary background-edge map Etb is given by selecting all edge pixels of the current frame beyond the distance Tback of MEn, i.e.,

Etb = {e ∈ En | dist(e, MEn) > Tback}
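The same dilation trick can be inverted to select edges farther than Tback from MEn; a sketch with placeholder names:

```python
import cv2

def temporary_background_edges(e_n, me_n, t_back):
    """Etb: current-frame edges farther than ~t_back from the ultimate moving edges."""
    k = 2 * t_back + 1
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (k, k))
    near_moving = cv2.dilate(me_n, kernel)  # pixels within ~t_back of MEn
    return cv2.bitwise_and(e_n, cv2.bitwise_not(near_moving))
```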

For modeling the background, we count the frequencies of the temporary background-edge pixels over 200 frames. If the frequency of an edge pixel exceeds the threshold, the pixel is considered a background-edge pixel and stored in the new background-edge map:

Eb = {e | C(e) > Tf}, where C(e) is the number of frames, out of the 200, in which e appears in Etb, and Tf is the frequency threshold.
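A minimal sketch of this frequency count; the 200-frame window follows the text, while min_count stands in for the frequency threshold Tf, whose value is not given in the excerpt above:

```python
import numpy as np

class BackgroundEdgeModel:
    """Counts, over a 200-frame window, how often each pixel occurs in Etb."""

    def __init__(self, shape, window=200, min_count=100):
        self.counts = np.zeros(shape, dtype=np.uint16)
        self.window = window        # 200 frames, as in the text
        self.min_count = min_count  # assumed frequency threshold Tf
        self.seen = 0

    def update(self, e_tb):
        """Accumulate one binary (0/255) temporary background-edge map."""
        self.counts += (e_tb > 0).astype(np.uint16)
        self.seen += 1
        if self.seen < self.window:
            return None
        # Pixels seen often enough become the new background-edge map Eb.
        e_b = np.where(self.counts >= self.min_count, 255, 0).astype(np.uint8)
        self.counts[:] = 0
        self.seen = 0
        return e_b
```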

This updated background is used for detecting the stationary moving-edge map. For the extraction of moving objects, we use the connected-component algorithm of [6]. After the moving object has been extracted, morphological operations are applied in post-processing to remove noise regions.
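The extraction step follows [6]; as a generic stand-in, the sketch below uses OpenCV's connected-component analysis with an assumed minimum area, followed by a morphological opening for the post-processing step:

```python
import cv2
import numpy as np

def extract_objects(me_n, min_area=50):
    """Keep connected edge components above min_area, then open to remove noise."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(me_n, connectivity=8)
    mask = np.zeros_like(me_n)
    for i in range(1, n):  # label 0 is the image background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            mask[labels == i] = 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```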

    III. RESULTS AND ANALYSIS

We used dataset 3 (DS3) and dataset 4 (DS4) of the Performance Evaluation of Tracking and Surveillance (PETS) 2001 benchmark [12], and views 1, 5, and 6 of PETS 2009 [13]. The PETS 2001 benchmark comprises five separate data sequences; all are multi-view (two cameras) and contain moving people and vehicles. DS3 is the more challenging sequence in terms of multiple targets and significant lighting variation. The PETS 2009 datasets are multi-sensor sequences containing different crowd activities (walking around, standing, etc.) captured from eight viewpoints. The PETS 2009 datasets lack ideal frames and provide more challenging sequences for background modeling [14]. Using the ground truth [15], we built the ground-truth edges of these datasets. We compared our test results with those of two other edge-based methods, Dailey et al. [6] and Kim and Hwang [7], as shown in Fig. 2.

Fig. 2 shows a column-wise comparison of the proposed method with the methods of Dailey et al. [6] and Kim and Hwang [7]. The first column shows the image frames; the second, the ground-truth images; the third, the object-edge maps detected by the method of Dailey et al. [6]; the fourth, the object-edge maps detected by the method of Kim and Hwang [7]; and the last, the object-edge maps detected by the proposed method. The comparison shows that the methods of Dailey et al. [6] and Kim and Hwang [7] produce more scattered edges than the proposed method. Through background modeling, the proposed method suppresses false-positive edges and removes scattered edges from the object-edge map.

    IV. CONCLUSIONS

We proposed an edge-based method for modeling the background for moving-object detection. We applied background modeling and updated the background after every 200 frames. The method is more robust than the earlier methods in detecting slow-moving objects, handling shape changes, and suppressing noise. The proposed algorithm can be used in applications such as surveillance and content-based video coding.

REFERENCES
  • 1. Gutchess D., Trajkovic M., Cohen-Solal E., Lyons D., Jain A. K., "A background model initialization algorithm for video surveillance," in Proceedings of the 8th IEEE International Conference on Computer Vision, 2001, pp. 733-740.
  • 2. Wren C. R., Azarbayejani A., Darrell T., Pentland A. P., "Pfinder: real-time tracking of the human body," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 780-785, 1997.
  • 3. Elgammal A., Harwood D., Davis L., "Non-parametric model for background subtraction," in Computer Vision - ECCV 2000, pp. 751-767, 2000.
  • 4. Haritaoglu I., Harwood D., Davis L. S., "W4: real-time surveillance of people and their activities," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, pp. 809-830, 2000.
  • 5. Li L., Huang W., Gu I. Y., Tian Q., "Statistical modeling of complex backgrounds for foreground object detection," IEEE Transactions on Image Processing, vol. 13, pp. 1459-1472, 2004.
  • 6. Dailey D. J., Cathey F. W., Pumrin S., "An algorithm to estimate mean traffic speed using uncalibrated cameras," IEEE Transactions on Intelligent Transportation Systems, vol. 1, pp. 98-107, 2000.
  • 7. Kim C., Hwang J. N., "Fast and automatic video object segmentation and tracking for content-based applications," IEEE Transactions on Circuits and Systems for Video Technology, vol. 12, pp. 122-129, 2002.
  • 8. Jain V., Kimia B., Mundy J., "Background modeling based on subpixel edges," in Proceedings of the IEEE International Conference on Image Processing, 2007, pp. 321-324.
  • 9. Yokoyama M., Poggio T., "A contour-based moving object detection and tracking," in Proceedings of the 2nd IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance, 2005, pp. 271-276.
  • 10. Yang Q. Y., Gao X. Y., "Tracking on motion of small target based on edge detection," in Proceedings of the 2009 WRI World Congress on Computer Science and Information Engineering, 2009, pp. 619-622.
  • 11. Canny J., "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, pp. 679-698, 1986.
  • 12. Performance Evaluation of Tracking and Surveillance (PETS) 2001 dataset [Internet].
  • 13. Performance Evaluation of Tracking and Surveillance (PETS) 2009 dataset [Internet].
  • 14. Ferryman J., Shahrokni A., "PETS2009: dataset and challenge," in Proceedings of the 12th IEEE International Workshop on Performance Evaluation of Tracking and Surveillance, 2009, pp. 1-6.
  • 15. Laboratory for Image and Media Understanding [Internet].
Fig. 1. Block diagram of the proposed moving-object detection algorithm.

Fig. 2. Results of the comparison of the proposed method and the methods developed by Dailey et al. [6] and Kim and Hwang [7]. Each column represents one method: (a) frame 1397 from dataset 3 of Performance Evaluation of Tracking and Surveillance (PETS) 2001, (b) frame 1464 from dataset 3 of PETS 2001, (c) frame 1067 from dataset 4 of PETS 2001, (d) frame 468 from view 1 of PETS 2009, (e) frame 716 from view 1 of PETS 2009, (f) frame 12 from view 5 of PETS 2009, and (g) frame 32 from view 5 of PETS 2009.