Vision-Based Indoor Localization Using Artificial Landmarks and Natural Features on the Ceiling with Optical Flow and a Kalman Filter
  • CC BY-NC
ABSTRACT

This paper proposes a vision-based indoor localization method for autonomous vehicles. A single upward-facing digital camera was mounted on an autonomous vehicle and used as a vision sensor to identify artificial landmarks and natural corner features on the ceiling. An interest point detector was used to find the natural features, and an optical flow algorithm provided information about the direction and translation of the vehicle, which was used to track its movements. Random noise caused by uneven lighting disrupted the calculation of the vehicle translation, so a Kalman filter was used to estimate the vehicle position. These algorithms were tested on a vehicle in a real environment. The image processing method recognized the landmarks precisely, while the Kalman filter algorithm estimated the vehicle’s position accurately. The experimental results confirmed that the proposed approaches can be implemented in practical situations.


KEYWORDS
Artificial landmark, Natural features, Localization, Navigation, Optical flow, Kalman filter
  • 1. Introduction

    Landmarks are essential for autonomous vehicle localization and navigation: they serve as references when a robot localizes or navigates within its environment. Many different types of landmark are used by autonomous vehicles and robots to determine their position. Landmark placements can be divided into vertical and horizontal views. The advantage of using ceiling landmarks, or vertical views, is their tolerance of dynamic obstacles and rearrangements of the environment, so many researchers have used ceiling landmarks as references for robot navigation [1-7].

    Vision is a good option for robot sensing because it is flexible enough to detect or recognize features of any color and size. Vision systems, alone or in combination with other sensors, have been used in many localization and navigation systems [1,5,8-12]. Lee et al. [8] used a combination of vision and a range sensor for mobile robot localization. Park et al. [11] used a charge-coupled device (CCD) camera to calculate the zero moment point, which was measured from the reference object image (the camera was located on the robot’s head).

    Ceiling lamps in an office were used as landmarks by Panzieri et al. [5] because they have the same shape, are spaced in a regular pattern, and can be observed easily without obstacles blocking the robot vision system. However, ceiling light-based positioning methods assume that all of the lights are lit, so there are problems when a light is damaged or switched off; in addition, a light might not appear in the camera view. Xu et al. [6] used natural features on ceilings to estimate the initial orientation and position of an autonomous vehicle in the world frame of a specific block on the ceiling using a perspective n-point-based positioning method; the global orientation was estimated based on a point feature on the ceiling. Wu and Tsai [7] attached a circular artificial landmark to the ceiling because the circular shape becomes an irregular ellipse when viewed from below, and the parameters of the ellipse were used to estimate the vehicle location with good precision during navigation.

    The Kalman filter (KF) and extended Kalman filter (EKF) have been used in many areas, such as automotive engineering, communications, and simultaneous localization and mapping (SLAM) [1,2,11-16]. Myung et al. [13] used a KF to estimate parameters subject to physical constraints using a general constrained optimization technique. Kim and Hong [14] used an EKF and an unscented KF for vehicle tracking in an automated container terminal. In our previous research, Rusdinar et al. [1] used an EKF to estimate the vehicle position using artificial landmarks and odometry sensors; the EKF was employed when the artificial landmark was not detectable by the vehicle. The EKF has also been used in robot formation control to estimate the virtual target point position [15,16].

    In this paper, we propose a ceiling-based localization method that combines artificial and natural landmarks with a KF. Previously, we used artificial landmarks to localize a mobile robot and odometry sensors to track the vehicle displacement between one landmark and the next. To improve on that method, we propose natural feature recognition and use it to track vehicle movements. Corner detection algorithms can detect natural features on ceilings; thus, we used a corner detection algorithm and removed the odometry sensors.

    The contributions of this study are as follows: 1) optical flow algorithms replaced the encoder and gyro sensor; 2) a robust and fast image processing algorithm was developed, which can detect and recognize landmarks in real time in areas with uneven lighting; and 3) natural landmarks were used when the artificial landmarks could not be detected. We expect that the proposed system will help engineers to develop simple, low-cost systems without reducing the capacity for autonomous vehicle guidance during localization and navigation in an indoor environment.

    The paper is structured as follows. In Section 2, we describe the system configuration and image processing. In Section 3, we present the localization method based on optical flow and the KF. In Section 4, our experimental results are described. In Section 5, we present our conclusions.

    2. System Configuration and Image Processing

    Figure 1 shows the autonomous vehicle with an upward-facing digital camera, which was used in the experiments, and the landmark on the ceiling. A Compaq Presario CQ20 (Hewlett-Packard, Palo Alto, CA, USA) notebook running a C++ program was used to handle the image processing routines and to control the vehicle movements. Two direct-current motors propelled the vehicle. The motors were controlled by the notebook via a USB DAQ (National Instruments, Austin, TX, USA) motor driver.

    During image processing, the object recognition algorithm used corner features on the ceiling as landmarks. The corner features were used as references to track the vehicle’s displacement. Harris corner detection was used to recognize the corner features of the ceiling. Optical flow was used to calculate the displacement distance.

       2.1 Corner Feature Detection and Optical Flow

    The corner response at an image point is defined by

    $$C = \det M - k\,(\mathrm{trace}\, M)^2,$$

    where M is a 2 × 2 structure matrix and k is a scale constant with a range of 0.004 < k < 0.006. The determinant and trace of M are defined by the eigenvalues (λ) of M, where det M = λ1λ2 and trace M = λ1 + λ2. The structure matrix M is defined by

    $$M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix},$$

    where w(x, y) is a weight function and Ix and Iy are the partial derivatives of the image intensity I in the x and y directions, respectively. The camera movement can be determined using

    $$E(u,v) = \sum_{x,y} w(x,y)\,\bigl[I_2(x,y) - I_1(x,y)\bigr]^2,$$

    where E is the difference between the original window and the moved window; I1 and I2 are the intensity at image position (x, y) and the intensity of the moved window, respectively. I2 is defined by

    $$I_2(x,y) = I_1(x+u,\; y+v),$$

    where u and v are the window displacements in the x and y directions, respectively.
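
    A minimal sketch of these two steps is shown below, assuming OpenCV's Harris-based feature detector and pyramidal Lucas-Kanade tracker; the specific implementation and all parameter values are illustrative assumptions, not those used in the experiments.

    // Minimal sketch (not the original implementation): detect Harris corners on
    // the ceiling image and track them into the next frame with pyramidal
    // Lucas-Kanade optical flow. All parameter values are illustrative assumptions.
    #include <opencv2/imgproc.hpp>
    #include <opencv2/video/tracking.hpp>
    #include <vector>

    void trackCeilingFeatures(const cv::Mat& prevGray, const cv::Mat& currGray,
                              std::vector<cv::Point2f>& prevPts,
                              std::vector<cv::Point2f>& currPts)
    {
        // Harris corner detection; k plays the role of the scale constant in the
        // corner response C = det M - k (trace M)^2 (value within the range given above).
        cv::goodFeaturesToTrack(prevGray, prevPts,
                                /*maxCorners=*/200, /*qualityLevel=*/0.01,
                                /*minDistance=*/10, cv::noArray(),
                                /*blockSize=*/3, /*useHarrisDetector=*/true,
                                /*k=*/0.005);

        // Lucas-Kanade tracking; (u, v) for each feature is currPts[i] - prevPts[i].
        std::vector<uchar> status;
        std::vector<float> err;
        cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err,
                                 cv::Size(21, 21), /*maxLevel=*/3);

        // Keep only the feature pairs that were tracked successfully.
        size_t kept = 0;
        for (size_t i = 0; i < currPts.size(); ++i) {
            if (!status[i]) continue;
            prevPts[kept] = prevPts[i];
            currPts[kept] = currPts[i];
            ++kept;
        }
        prevPts.resize(kept);
        currPts.resize(kept);
    }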

       2.2 Calculation of the Angular Velocity

    The camera is located exactly at the middle of the axle, so the vehicle movements are defined by two modes: straight motion and rotation. These movement modes simplify the angular velocity calculations. Figure 2 shows the robot movement modes and the camera position.

    To calculate the angular velocity of the mobile robot from the image processing results, the image is divided into three regions of interest (ROIs): the left, front, and right ROIs. Each ROI has an angular velocity ω. Figures 3 and 4 show the angular velocity calculations relative to the image feature translation. The angular velocity in each ROI is defined by

    $$\omega = \frac{V_\perp}{R},$$

    where R is the distance between the center of the image and the center of the ROI, and V⊥ is the cross-radial component, i.e., the component of the translation perpendicular to the radius R. R and V⊥ are defined by

    $$R = \sqrt{(x_r - x_0)^2 + (y_r - y_0)^2},$$
    $$V_\perp = V_i \cos\varphi,$$

    where (xr, yr) is the center of the ROI in the image frame, (x0, y0) is the center of the image, and Vi and φ are the speed of translation from point t−1 to t and the angle between Vi and the cross-radial component, respectively. The angle φ is defined by

    image

    where β is defined by

    image
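
    A minimal sketch of this calculation for a single ROI is shown below; it projects the tracked feature's translation onto the direction perpendicular to the radius, which is equivalent to V⊥ = Vi cos φ, so φ and β need not be evaluated explicitly. The data structure and the time step dt are illustrative assumptions.

    // Sketch: angular velocity of a single ROI, following omega = V_perp / R.
    #include <cmath>

    struct Point2 { double x, y; };

    double roiAngularVelocity(Point2 roiCenter, Point2 imageCenter,
                              Point2 featurePrev, Point2 featureCurr, double dt)
    {
        // R: distance from the image center to the ROI center.
        double rx = roiCenter.x - imageCenter.x;
        double ry = roiCenter.y - imageCenter.y;
        double R  = std::sqrt(rx * rx + ry * ry);

        // V_i: translation speed of the tracked feature from frame t-1 to t.
        double vx = (featureCurr.x - featurePrev.x) / dt;
        double vy = (featureCurr.y - featurePrev.y) / dt;

        // Unit vector perpendicular to the radius (the cross-radial direction).
        double tx = -ry / R;
        double ty =  rx / R;

        // V_perp = V_i cos(phi): component of the translation along that direction.
        double vPerp = vx * tx + vy * ty;

        return vPerp / R;   // omega, in the pixel frame
    }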

    The translation of the center point is used to calculate the translation distance. The translation distance in the image is defined by

    $$l = \sqrt{(x_u - x_c)^2 + (y_u - y_c)^2},$$

    where (xc, yc) is the center of the image, defined by (image_width/2, image_height/2), while (xu, yu) is defined by (3). All of the calculations are performed in pixel units. The artificial landmark from our previous study [1] is used to convert pixels to a metric format. The artificial landmark comprises circles arranged in a defined pattern; the pattern encodes the landmark identity and directional information. A comparison of the real diameter of the circles with their diameter in the image is used to compute the pixel-to-metric conversion during image processing.
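
    A minimal sketch of this pixel-to-metric conversion is shown below; the function name and the centimetre units are illustrative assumptions.

    // Sketch: convert a displacement measured in pixels to centimetres using the
    // known real diameter of a landmark circle and its measured diameter in the image.
    double pixelsToCentimetres(double displacementPx,
                               double circleDiameterCm,   // real circle diameter
                               double circleDiameterPx)   // diameter measured in the image
    {
        double cmPerPixel = circleDiameterCm / circleDiameterPx;
        return displacementPx * cmPerPixel;
    }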

    3. Vehicle Localization and Navigation

       3.1 Vehicle Movements

    The proposed motion model is defined by

    image

    where xpt, ypt, and θt are the (x, y) vehicle position and the heading at each step, respectively; p indicates that x, y, and θ are derived from the optical flow algorithm; θt−1 is the robot heading at step t−1; ωt is the angular velocity calculated from the optical flow during image processing; and s is the translation distance between one point and the next according to the optical flow algorithm. The translation distance is defined by

    image

    where l is defined by (11) and vp is the vehicle speed, which is determined based on the speed of translation from one point to another point in the image. The vehicle speed is defined by

    image

    where ts is the time sampling rate of the image processing, which is calculated when the algorithm detects the original corners from the translated corners, and tb is the looping time of the image processing, which is measured from the end of one sampling interval ts to the start of the next. A time function in C++ is used to calculate the time sampling rate.
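
    A minimal sketch of a pose update consistent with these quantities is shown below; it assumes the common formulation in which the heading is first updated by ωt and the position is then advanced by s along the new heading, which is an assumption for illustration rather than the exact equation above.

    // Sketch of a pose update built from the quantities in Section 3.1
    // (an assumed standard form, not necessarily the exact motion model above).
    #include <cmath>

    struct Pose { double x, y, theta; };

    Pose updatePose(const Pose& prev, double omega, double s)
    {
        Pose next;
        next.theta = prev.theta + omega;               // heading from the optical-flow angular velocity
        next.x     = prev.x + s * std::cos(next.theta);
        next.y     = prev.y + s * std::sin(next.theta);
        return next;
    }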

       3.2 Implementation of the Kalman Filter

    During image processing, noise corrupts the computed vehicle translation and direction, which causes the estimated vehicle position to drift away from the real position. The KF is a tool that can estimate the variables used in the localization calculations during image processing. In this study, a KF was used to estimate the vehicle position based on its inputs (translation and direction). The KF state is modeled by

    $$X_t = A X_{t-1} + B u_t,$$

    and the measurements are taken from the robot translation calculation produced by the optical flow, where x, y, and θ can be measured directly:

    $$z_t = H X_t + v_t,$$

    where Xt and Xt−1 are state vectors at time t and t−1, respectively.

    In the simulation, the state is defined as

    $$X_t = \begin{bmatrix} x_t & y_t & \theta_t \end{bmatrix}^T.$$

    A is an n × n matrix that relates the state at time t−1 to the state at time t, which is defined by

    image

    B is the n × m matrix that relates the control input u to the state, which is defined by

    image

    and zt is the measurement vector.
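
    One plausible choice of A and B, consistent with a state Xt = [xt, yt, θt]T and a control input ut built from the translation st and the angular velocity ωt, is shown below; this is an illustrative assumption rather than the exact matrices given above.

    $$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad
    B\,u_t = \begin{bmatrix} s_t \cos\theta_{t-1} \\ s_t \sin\theta_{t-1} \\ \omega_t \end{bmatrix}.$$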

    The KF process comprises two stages: the prediction stage and the measurement update stage. The prediction stage is given by

    $$X^-_t = A X_{t-1} + B u_t,$$
    $$P^-_t = A P_{t-1} A^T + E_x,$$

    where Pt− and Pt are the a priori and a posteriori estimates of the error covariance, respectively, and Ex is the process noise covariance with respect to Xt.

    The measurement update stage is given by

    $$K_t = P^-_t H^T \bigl(H P^-_t H^T + E_z\bigr)^{-1},$$
    $$X_t = X^-_t + K_t \bigl(z_t - H X^-_t\bigr),$$
    $$P_t = (I - K_t H)\, P^-_t,$$

    where K is the Kalman gain, Ez is the measurement noise covariance with respect to vt, and I is an identity matrix. The KF reduces the noise in the estimated position. H is a Jacobian matrix that contains the partial derivatives of the measurement function h(x) with respect to the state Xt, where

    $$H = \frac{\partial h(x)}{\partial X_t}.$$
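
    A minimal sketch of the predict/update cycle defined by these equations is shown below, assuming OpenCV's cv::Mat for the matrix algebra and a three-dimensional state [x, y, θ]T; the matrices must be initialized by the caller with CV_64F entries, and none of the values are the tuned parameters used in the experiments.

    // Sketch of the KF predict/update cycle described above (not the original code).
    #include <opencv2/core.hpp>

    struct KF {
        cv::Mat X;   // state estimate (3x1): [x, y, theta]^T
        cv::Mat P;   // error covariance (3x3)
        cv::Mat A;   // state transition matrix (3x3)
        cv::Mat H;   // measurement matrix (3x3): x, y, theta are measured directly
        cv::Mat Ex;  // process noise covariance (3x3)
        cv::Mat Ez;  // measurement noise covariance (3x3)
    };

    void predict(KF& kf, const cv::Mat& Bu)           // Bu: control contribution B*u_t (3x1)
    {
        kf.X = kf.A * kf.X + Bu;                       // a priori state estimate
        kf.P = kf.A * kf.P * kf.A.t() + kf.Ex;         // a priori error covariance
    }

    void update(KF& kf, const cv::Mat& z)              // z: measurement [x, y, theta]^T
    {
        cv::Mat S = kf.H * kf.P * kf.H.t() + kf.Ez;    // innovation covariance
        cv::Mat K = kf.P * kf.H.t() * S.inv();         // Kalman gain
        kf.X = kf.X + K * (z - kf.H * kf.X);           // a posteriori state estimate
        kf.P = (cv::Mat::eye(3, 3, CV_64F) - K * kf.H) * kf.P;  // a posteriori covariance
    }

    In use, predict would be called on every image processing loop with the control contribution B ut computed from the optical flow, and update would be called with the measurement zt taken from the translation calculation, as described above.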

    4. Experimental Results

    We tested the proposed algorithm in experiments using our mobile robot. The mobile robot was moved, and the upward-facing camera captured images of the ceiling while tracking the robot movements. The experiments were conducted in two areas. In the first experiment, the vehicle was moved from the study room, through the meeting room of our laboratory, into the hall on the third floor of our building. Figure 5 shows a map of our laboratory and the track of the vehicle's movements. The vehicle moved straight from the start point in the study room, turned right after moving 350 cm, turned left after 570 cm, and stopped outside the laboratory, in the hall. The robot trajectory was plotted using the optical flow and the artificial landmark. Figure 6 shows the vehicle movements and the position estimated using only the optical flow. The trajectory calculated using the optical flow alone deviated far from the real position of the vehicle, which was attributable to noise during image processing.

    Figure 7 shows the vehicle trajectory calculated using the artificial landmark and the KF, which could be used to estimate the vehicle position. The figure compares the localization results obtained using optical flow alone with those obtained using the KF and an artificial landmark. The noise that interfered with the position calculation was eliminated by the KF, and the vehicle orientation error was corrected when the vehicle detected the artificial landmark.

    In the second experiment, the vehicle moved along the corridor of our building. Figure 8 shows a map of the experimental environment in the hall. This environment was similar to that in which we tested the encoder and gyro in our previous studies [7,8]. The figure shows that there were still errors in the tracking algorithm. These errors were due to light malfunctions along the corridor and caused the vehicle's position to deviate far from the original trajectory; they could be reduced by the KF. Figure 9 shows the trajectory of the vehicle without the KF and artificial landmarks. The estimated trajectory differed greatly from the real trajectory because there was no correction process when the vehicle moved along the corridor. Figure 10 shows the trajectory of the vehicle using the KF and artificial landmarks, where the KF corrected the positional error caused by noise.

    5. Conclusion

    In this study, we developed a vehicle tracking method based on an optical flow algorithm that uses natural features on the ceiling for real-time image processing. The results confirmed that our tracking algorithms can replace odometry sensors. Using a combination of artificial landmarks and natural features on the ceiling, the vehicle could localize and navigate autonomously with a single low-cost digital camera. The natural landmarks could be recognized by an artificial neural network based on the shape and color of the image.

References
  • 1. Rusdinar A., Kim J., Lee J., Kim S. 2011 “Implementation of real-time positioning system using extended Kalman filter and artificial landmark on ceiling” [Journal of Mechanical Science and Technology] Vol.26 P.949-958
  • 2. Kim J. M. “Optical guidance: ceiling”
  • 3. Fukuda T., Ito S., Arai F., Yokoyama Y., Abe Y., Tanaka K., Tanaka Y. 1995 “Navigation system based on ceiling landmark recognition for autonomous mobile robot: landmark detection based on fuzzy template matching (FTM)” [in Proceedings of 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems] P.150-155
  • 4. Jeong W. Y., Lee K. M. 2005 “CV-SLAM: a new ceiling vision-based SLAM technique” [in Proceedings of 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems] P.3195-3200
  • 5. Panzieri S., Pascucci F., Setola R., Ulivi G. 2001 “A low cost vision based localization system for mobile robots” [Target] Vol.4
  • 6. Xu D., Han L., Tan M., Li Y. F. 2009 “Ceiling-based visual positioning for an indoor mobile robot with monocular vision” [IEEE Transactions on Industrial Electronics] Vol.56 P.1617-1628
  • 7. Wu C. J., Tsai W. H. 2009 “Location estimation for indoor autonomous vehicle navigation by omni-directional vision using circular landmarks on ceilings” [Robotics and Autonomous Systems] Vol.57 P.546-555
  • 8. Lee Y. J., Yim B. D., Song J. B. 2009 “Mobile robot localization based on effective combination of vision and range sensors” [International Journal of Control, Automation and Systems] Vol.7 P.97-104
  • 9. Kim M. Y., Cho H. 2006 “An active trinocular vision system of sensing indoor navigation environment for mobile robots” [Sensors and Actuators A: Physical] Vol.125 P.192-209
  • 10. Menegatti E., Maeda T., Ishiguro H. 2004 “Image-based memory for robot navigation using properties of omnidirectional images” [Robotics and Autonomous Systems] Vol.47 P.251-267
  • 11. Park S., Han Y., Hahn H. 2009 “Balance control of a biped robot using camera image of reference object” [International Journal of Control, Automation and Systems] Vol.7 P.75-84
  • 12. Jin T. S., Morioka K., Hashimoto H. 2004 “Appearance based object identification for mobile robot localization in intelligent space with distributed vision sensors” [International Journal of Fuzzy Logic and Intelligent Systems] Vol.4 P.165-171
  • 13. Myung H., Lee H. K., Choi K., Bang S. 2010 “Mobile robot localization with gyroscope and constrained Kalman filter” [International Journal of Control, Automation and Systems] Vol.8 P.667-676
  • 14. Kim Y. S., Hong K. S. 2005 “A tracking algorithm for autonomous navigation of AGVs in an automated container terminal” [Journal of Mechanical Science and Technology] Vol.19 P.72-86
  • 15. Rusdinar A., Kim S. 2012 “Modeling of vision based robot formation control using fuzzy logic controller and extended Kalman filter” [International Journal of Fuzzy Logic and Intelligent System] Vol.12 P.238-244
  • 16. Lee H. “FormationEKF”
Figures
  • [ Figure 1. ]  Mobile robot with an upward-facing camera and the landmark on the ceiling.
  • [ Figure 2. ]  Vehicle movements and the upward-facing camera position.
  • [ Figure 3. ]  Calculation of the angular velocity.
  • [ Figure 4. ]  Image processing to calculate the optical flow.
  • [ Figure 5. ]  Map of the experimental environment (laboratory to hall).
  • [ Figure 6. ]  Vehicle trajectory based on optical flow alone.
  • [ Figure 7. ]  Vehicle trajectory using optical flow, artificial landmarks, and the Kalman filter.
  • [ Figure 8. ]  Map of the experimental environment (hall).
  • [ Figure 9. ]  Experimental results in the hall using optical flow alone.
  • [ Figure 10. ]  Experimental results in the hall using artificial landmarks, optical flow, and the Kalman filter.