Analysis of Indoor Robot Localization Using Ultrasonic Sensors
  • License: CC BY-NC (non-commercial)
ABSTRACT
KEYWORDS
Localization, Mobile robot, Motion model, Measurement model, Monte Carlo localization, Range sensor
  • 1. Introduction

    The estimation of position and orientation is vital for navigation of a mobile robot. The estimation of location, called localization, has been studied extensively, and many methods have been proposed and implemented, including simple dead reckoning, least squares, and more complicated filtering approaches.

    The most intuitive, though impractical, method is dead reckoning, which integrates the velocity over time to determine the change in robot position from its starting position. Other localization systems use beacons [1-5] placed at known positions in the environment and measure the ranges between the beacons and the robot using ultrasonic or radio frequency signals. A least squares or filtering method then uses the range data to estimate the position of the mobile robot. Uncertainty in robot motion and noise in the range measurements affect the performance of the estimation; additionally, some control parameters of these methods should be adjusted according to the levels of uncertainty and noise.
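
    For reference, a minimal sketch of dead reckoning for a differential drive robot is given below; it assumes encoder-derived velocities (v, w) sampled at a fixed time step, and the function and variable names (dead_reckon, pose, dt) are illustrative rather than taken from the paper.

        import math

        def dead_reckon(pose, v, w, dt):
            """Propagate a pose (x, y, theta) by integrating the measured
            translational velocity v and rotational velocity w over dt."""
            x, y, theta = pose
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
            theta += w * dt
            return (x, y, theta)

        # Example: accumulate encoder readings; the error grows without bound
        # because no exteroceptive correction is applied.
        pose = (0.0, 0.0, 0.0)
        for v, w in [(0.3, 0.0), (0.3, 0.1), (0.3, 0.1)]:
            pose = dead_reckon(pose, v, w, dt=0.1)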

    There have been several filtering approaches for localization. The major approaches include the Kalman filter (KF), extended Kalman filter (EKF) [6], unscented Kalman filter (UKF), and particle filter (PF) [7-9]. All of these filters follow the Bayesian filter approach. The Kalman filter variants assume that the uncertainties in robot motion and measurement are Gaussian.

    The pure KF uses only linear models of robot motion and sensor measurement. To handle nonlinearity in robot motion and sensor measurement, the EKF approximates the nonlinearity with a first-order linear model. The UKF, a more recent development of the KF, does not approximate the robot motion and sensor measurement; instead, it uses the nonlinear motion and measurement models as they are, employing samples called sigma points to describe the probabilistic properties of the robot motion and sensor measurement. Like the other KF variants, the UKF assumes that all the uncertainty involved in the system is Gaussian.

    Though EKF has been used widely and successfully for mobile robot localization, it sometimes provides unacceptable results because real data can be very complex, involving elements of non-Gaussian uncertainty, high dimensionality, and nonlinearity. Moreover, EKF requires derivation of a highly complicated Jacobian for linearization. Therefore, Monte Carlo methods have been introduced for localization.

    Monte Carlo localization (MCL) is based on a particle filter [10-12]. MCL can solve the global localization and kidnapped robot problems in a highly robust and efficient way. The method uses a set of samples, called particles, to depict the probabilistic features of the robot position. In other words, rather than approximating probability distributions in parametric form, as is the case for KFs, it describes the probability distribution as it is using the particles. MCL therefore has the great advantage of not being subject to linearity or Gaussian constraints on the robot motion and sensor measurement.

    This paper is concerned with the estimation of robot position and orientation in an indoor environment. We have used a sensor system composed of static ultrasonic beacons and one mobile receiver installed on the robot. The robot navigates through predefined path points. The exteroceptive measurement information is the range data between the robot and the beacons. In this experiment, we have used a differential drive mobile robot. The paper contributes to understanding the effect of control parameters on localization performance: how the uncertainty in robot motion and sensor measurement affects the location estimate is investigated through experiments.

    The remainder of this paper is organized as follows. In Section 2, MCL and its fundamentals are discussed. Section 3 illustrates details of the experiment and analysis of MCL. Section 4 covers the discussion of the experiments, and Section 5 concludes the paper.

    2. Monte Carlo Localization and its Models

    The MCL method iterates sampling and importance resampling in the frame of the Bayesian filter [13] approach for localization of a mobile robot. It is alternatively known as the bootstrap filter [14], the Monte Carlo filter [15], the condensation algorithm [16], or the survival of the fittest algorithm [17]. All of these methods are generally known as particle filters.

    The MCL method can approximate almost any probability distribution of practical importance. It is not bound to a limited parametric subset of distributions, as an EKF localization method is to the Gaussian distribution. Increasing the total number of particles increases the accuracy of the approximation; however, a large number of particles degrades the computational efficiency that is needed for real-time application of MCL. The idea of MCL is to represent the belief about the robot position with a particle set X_t = {x_t[1], ..., x_t[P]}, each particle representing a hypothesis on the robot pose (x, y, θ).

    Monte Carlo localization repeats three steps: 1) application of a motion model, 2) application of a measurement model, and 3) resampling of particles. These three steps are explained in Table 1 using pseudocode.

    [Table 1.] Monte Carlo localization (MCL) algorithm

    In Table 1, the prediction phase starts from the particle set X_{t-1} of the previous time step and applies the motion model to each particle, yielding the predicted particle set. In the measurement model, the importance factor w_t[p], sometimes called the belief of each particle, is determined; the information in the measurement z_t is incorporated into the particle set via this importance factor. After the belief calculation, resampling is performed on the basis of the beliefs. Resampling transforms the particle set into another particle set of the same size, which finally yields the estimated particle set X_t for time t.

    The resulting sample set usually contains many duplicates, which refocuses the particle set on a region of high posterior probability. The particles that are not retained in X_t have lower belief. It should be noted that the resampling process neither includes particles strictly in order of highest belief nor excludes them strictly in order of lowest belief. Thus, the set X_t consists of P particles that represent the probable locations of the robot at time t.
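
    To make the structure of Table 1 concrete, the sketch below expresses one MCL iteration as a function of three user-supplied models. It is a hypothetical Python rendering of the steps described above, not the authors' implementation; the motion model, measurement model, and resampler are passed in as arguments and are sketched in the following subsections.

        def mcl_step(particles, control, measurement, motion_model, measurement_model, resample):
            """One MCL iteration (cf. Table 1): predict, weight, resample."""
            # 1) Prediction: apply the motion model to every particle.
            predicted = [motion_model(p, control) for p in particles]
            # 2) Correction: compute the importance factor (belief) of each particle.
            weights = [measurement_model(p, measurement) for p in predicted]
            # 3) Resampling: draw a new particle set of the same size,
            #    with probability proportional to the weights.
            return resample(predicted, weights)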

       2.1 Motion Model

    A motion model is used to predict the pose of the robot from the previous pose using the control command or proprioceptive motion sensing (v, w). Table 2 shows the proposed motion model that was used in MCL [6].

    In Table 2, x_t[p], y_t[p], and θ_t[p] constitute a particle that represents the pose of the robot. Δt is the algorithm time step, and v and w are the translational and rotational velocities measured by the wheel encoders of the robot. The variable motionpara consists of α1, α2, α3, α4, α5, and α6, which represent the motion uncertainty. The function sample(α_i v^2 + α_j w^2) generates a random number from a zero-mean Gaussian random variable with variance α_i v^2 + α_j w^2.
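
    A minimal sketch of such a sampling-based velocity motion model is given below, following the standard form in [8]; the parameters alpha mirror α1-α6 above, while the exact combination of noise terms is an assumption based on that reference rather than a transcription of Table 2.

        import math, random

        def sample(variance):
            """Draw from a zero-mean Gaussian with the given variance."""
            return random.gauss(0.0, math.sqrt(variance))

        def motion_model(pose, v, w, dt, alpha):
            """Propagate one particle (x, y, theta) with noisy velocities,
            in the style of the velocity motion model of [8]."""
            a1, a2, a3, a4, a5, a6 = alpha
            v_hat = v + sample(a1 * v**2 + a2 * w**2)   # noisy translational velocity
            w_hat = w + sample(a3 * v**2 + a4 * w**2)   # noisy rotational velocity
            g_hat = sample(a5 * v**2 + a6 * w**2)       # final orientation perturbation
            x, y, theta = pose
            if abs(w_hat) > 1e-9:
                r = v_hat / w_hat                       # turning radius
                x += -r * math.sin(theta) + r * math.sin(theta + w_hat * dt)
                y += r * math.cos(theta) - r * math.cos(theta + w_hat * dt)
            else:                                       # straight-line motion
                x += v_hat * math.cos(theta) * dt
                y += v_hat * math.sin(theta) * dt
            theta += w_hat * dt + g_hat * dt
            return (x, y, theta)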

       2.2 Measurement Model

    In the measurement model, the belief of the predicted particles is computed using the range information received from the beacons. Table 3 shows the measurement model that we have implemented in MCL. In Table 3, d_i[p] denotes the distance between the predicted particle and beacon i.

    prob(r_i − d_i[p], δ_r) in Table 3 is the Gaussian probability of the difference between the measured range r_i and the predicted distance d_i[p], evaluated under a zero-mean Gaussian distribution whose standard deviation δ_r models the measurement noise in the range information. The measurement noise can be caused by unexpected objects, crosstalk between different signals, and specular reflection of the signals.
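
    A minimal sketch of a beacon-range measurement model of this kind is shown below. The Gaussian form of prob follows the description above; the names (measurement_model, ranges, beacons, delta_r), the planar range computation, and the handling of missing readings by simply skipping them are assumptions, since Table 3 is not reproduced here.

        import math

        def prob(error, delta_r):
            """Probability density of `error` under a zero-mean Gaussian
            with standard deviation delta_r."""
            return math.exp(-0.5 * (error / delta_r) ** 2) / (delta_r * math.sqrt(2.0 * math.pi))

        def measurement_model(pose, ranges, beacons, delta_r):
            """Importance factor of one particle: product of the range
            likelihoods over all beacons with a valid measurement."""
            x, y, _ = pose
            weight = 1.0
            for r_meas, (bx, by) in zip(ranges, beacons):
                if r_meas is None:                     # detection failure: skip this beacon (assumption)
                    continue
                r_pred = math.hypot(bx - x, by - y)    # beacon height ignored in this planar sketch (assumption)
                weight *= prob(r_meas - r_pred, delta_r)
            return weight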

       2.3 Resampling

    Finally, all particles are resampled, i.e., a new set of particles is drawn from the current set on the basis of the beliefs w_t[p]. We use systematic resampling, also known as stochastic universal resampling, because it is fast and simple to implement.
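
    A minimal sketch of systematic (stochastic universal) resampling is given below: it draws P particles using a single random offset and P evenly spaced pointers over the cumulative weights. This is the textbook form of the method, not a transcription of the authors' code.

        import random

        def resample(particles, weights):
            """Systematic resampling: keep the particle count fixed and draw
            particles with probability proportional to their weights, using
            one random offset and P evenly spaced pointers."""
            n = len(particles)
            total = sum(weights)
            if total <= 0.0:                         # degenerate case: keep the set unchanged
                return list(particles)
            step = total / n
            pointer = random.uniform(0.0, step)      # single random start in [0, total/n)
            resampled, cumulative, i = [], weights[0], 0
            for _ in range(n):
                while pointer > cumulative:          # advance to the particle whose weight bin contains the pointer
                    i += 1
                    cumulative += weights[i]
                resampled.append(particles[i])
                pointer += step
            return resampled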

    3. Experiment and Analysis

    The experiments are conducted in a classroom with chairs and tables on which computers and monitors are located. The four ultrasonic beacons are installed on the ceiling.

    Figures 1 and 2 show the experimental setup, indicating the trajectory of the robot motion and the locations of the ultrasonic beacons. The positions of the beacons and way points are provided in Tables 4 and 5, respectively.

    [Table 4.] Locations of the beacons

    [Table 5.] Locations of the way points

    We use the MRP-NRLAB02 differential drive robot (see Figure 3) manufactured by RED ONE technologies and the USAT A105 ultrasonic sensor system (see Figure 4) from Korea LPS. The work area for the experiment is 14.5 m × 7.25 m. The receiver of the sensor system is mounted on the robot, and the beacons are attached to the ceiling of the room.

    The robot is controlled by a joystick and uses the wheel encoder data to calculate the translational velocity v and rotational velocity w. The ranges between the robot and the beacons are measured by the ultrasonic sensor system to correct the predicted robot location. The initial pose (x0, y0, θ0) of the robot is (5.3 m, 1.2 m, 0.00 rad). The robot navigates with a translational velocity of v = 0.3 m/s and a rotational velocity of w = 0.1 rad/s. MCL is implemented in MATLAB with 300 particles. Table 6 lists the control parameter values that are used to investigate the performance of MCL.
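
    For illustration, the experimental run could be configured as in the sketch below. The particle count, initial pose, and commanded velocities are taken from the text; the time step, the α values, and δ_r are placeholders, since the Table 6 values are not reproduced here.

        import random

        # Experiment setup taken from the text; the Table 6 values are not
        # reproduced here, so ALPHA and DELTA_R below are placeholders only.
        NUM_PARTICLES = 300
        INITIAL_POSE = (5.3, 1.2, 0.0)                 # (x0, y0, theta0) in m, m, rad
        V, W = 0.3, 0.1                                # commanded velocities in m/s and rad/s
        DT = 0.1                                       # algorithm time step Delta t (assumed)
        ALPHA = (0.1, 0.1, 0.1, 0.1, 0.1, 0.1)         # motion uncertainty parameters (placeholder)
        DELTA_R = 0.05                                 # range noise standard deviation in m (placeholder)

        def init_particles(pose, n, spread=0.1):
            """Scatter n particles around the known initial pose."""
            x, y, th = pose
            return [(x + random.gauss(0.0, spread),
                     y + random.gauss(0.0, spread),
                     th + random.gauss(0.0, spread)) for _ in range(n)]

        particles = init_particles(INITIAL_POSE, NUM_PARTICLES)
        # One hypothetical MCL iteration, wiring together the sketches of Section 2
        # (kept as comments so that this block stands alone):
        # predicted = [motion_model(p, V, W, DT, ALPHA) for p in particles]
        # weights   = [measurement_model(p, ranges, beacons, DELTA_R) for p in predicted]
        # particles = resample(predicted, weights)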

    [Table 6.] Control parameter values

    During the experiment, the ultrasonic range measurement system often fails to detect some of the ranges. Because of these failures, ambiguity and large estimation errors are observed in the path segments from way point 3 to 4, 4 to 5, and 5 to 6.

    Figure 5 plots the estimated trajectories as colored lines for the control parameter values given in Table 6. The asterisks represent the positions of the beacons. Figures 6 and 7 illustrate the distance error and orientation error between the estimated and real robot trajectories. Tables 7 and 8 list the mean and standard deviation of the distance error and orientation error of the estimation, respectively. From the experimental results, the estimation error of the robot is observed to decrease when appropriate control parameter values are used.

    [Table 7.] Mean of distance error and orientation error

    [Table 8.] Standard deviation (SD) of distance error and orientation error

    4. Discussion

    A variety of techniques and suggestions have been proposed for mobile robot localization [18-24]. A comparison of different techniques is difficult because of the lack of commonly accepted test standards and procedures. We have therefore developed our own experimental environment using four ultrasonic beacons and six way points for robot navigation. An analysis of MCL was performed for three different cases, as given in Table 6. Each case in Table 6 is categorized according to low, medium, and high values of motion uncertainty and measurement noise. The analysis of MCL was based on the following:

    1) plotting the trajectory of the estimated position against the real robot trajectory, 2) calculating and plotting the distance error and orientation error between the real robot position and the estimated position, and 3) evaluating the mean and standard deviation of the distance error and orientation error (a sketch of these error computations follows this list).
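
    As a minimal sketch of items 2) and 3), the distance and orientation errors and their statistics could be computed as follows; the angle-wrapping step is an assumption the paper does not spell out, included so that orientation differences near ±π are not overstated.

        import math

        def distance_error(est, true):
            """Euclidean distance between estimated and true (x, y) positions."""
            return math.hypot(est[0] - true[0], est[1] - true[1])

        def orientation_error(est, true):
            """Absolute heading difference, wrapped to [0, pi]."""
            d = (est[2] - true[2]) % (2.0 * math.pi)
            return min(d, 2.0 * math.pi - d)

        def mean_and_sd(values):
            """Mean and (population) standard deviation of a list of errors."""
            m = sum(values) / len(values)
            var = sum((v - m) ** 2 for v in values) / len(values)
            return m, math.sqrt(var)

        # Example with hypothetical estimated/true poses (x, y, theta):
        est_poses = [(5.31, 1.22, 0.02), (5.62, 1.19, 0.05)]
        true_poses = [(5.30, 1.20, 0.00), (5.60, 1.20, 0.03)]
        d_err = [distance_error(e, t) for e, t in zip(est_poses, true_poses)]
        o_err = [orientation_error(e, t) for e, t in zip(est_poses, true_poses)]
        print(mean_and_sd(d_err), mean_and_sd(o_err))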

    Large estimation errors are observed in Figure 5a and b, which correspond to cases (i) and (ii), respectively. Figure 5c, which corresponds to case (iii), shows the smallest estimation error. The results also suggest that the motion error and measurement noise of the experiment are relatively high.

    Figures 6 and 7 show the distance error and orientation error for the experiments, and the mean and standard deviation of the errors are listed in Tables 7 and 8, respectively. The results shown in Figures 5 through 7 and Tables 7 and 8 indicate that the proper selection of control parameters is crucial for the use of MCL. MCL fails if the control parameter values assume that the uncertainty of the robot motion is lower than the actual uncertainty. Likewise, assuming that the measurement uncertainty is lower than the actual measurement noise deteriorates the estimation performance.

    5. Conclusion

    This paper has presented results of MCL applied to the localization of a mobile robot in an indoor environment. The experiment uses a differential drive robot that relies on wheel encoder data and range data from four fixed beacons.

    The experiments compare three different cases, which represent localization under different control parameter values. The control parameters adjust the assumed uncertainty of the robot motion and the sensor measurement noise. From the comparison, it is concluded that assuming that the motion uncertainty and measurement noise are lower than their actual values causes poor estimation performance. This parallels KF approaches, where a mismatch between the assumed and actual process and measurement error levels causes poor estimation performance.

    Conflict of Interest

    No potential conflict of interest relevant to this article was reported.

References
  • 1. Jin T. S., Kim H. S., Kim J. W. (2010). "Landmark detection based on sensor fusion for mobile robot navigation in a varying environment," International Journal of Fuzzy Logic and Intelligent Systems, vol. 10, pp. 281-286.
  • 2. Peca M., Gottscheber A., Obdrlek D., Schmidt C. (2010). "Ultrasonic localization of mobile robot using active beacons and code correlation," in Research and Education in Robotics - EUROBOT 2009, Communications in Computer and Information Science, vol. 82, pp. 116-130.
  • 3. Kang S. C., Jin T. S. (2007). "Global map building and navigation of mobile robot based on ultrasonic sensor data fusion," International Journal of Fuzzy Logic and Intelligent Systems, vol. 7, pp. 198-204.
  • 4. Moreno L., Armingol J., Garrido S., de la Escalera A., Salichs M. (2002). "A genetic algorithm for mobile robot localization using ultrasonic sensors," Journal of Intelligent and Robotic Systems, vol. 34, pp. 135-154.
  • 5. Leonard J. J., Durrant-Whyte H. F. (1991). "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, vol. 7, pp. 376-382.
  • 6. Welch G., Bishop G. (2001). "An introduction to the Kalman filter," in Proceedings of SIGGRAPH '01, The 28th International Conference on Computer Graphics and Interactive Techniques, August 12-17.
  • 7. Jeong W., Kim Y. J., Lee J. O., Lim M. T. (2006). "Localization of mobile robot using particle filter," in SICE-ICASE International Joint Conference, October 18-21, pp. 3031-3034.
  • 8. Thrun S., Burgard W., Fox D. (2005). Probabilistic Robotics. Cambridge, MA: MIT Press.
  • 9. Thrun S. (2002). "Particle filters in robotics," in Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, pp. 511-518.
  • 10. Thrun S., Fox D., Burgard W., Dellaert F. (2001). "Robust Monte Carlo localization for mobile robots," Artificial Intelligence, vol. 128, pp. 99-141.
  • 11. Fox D., Burgard W., Dellaert F., Thrun S. (1999). "Monte Carlo localization: efficient position estimation for mobile robots," in Proceedings of the Sixteenth National Conference on Artificial Intelligence and Eleventh Conference on Innovative Applications of Artificial Intelligence, July 18-22, pp. 343-349.
  • 12. Dellaert F., Fox D., Burgard W., Thrun S. (1999). "Monte Carlo localization for mobile robots," in Proceedings of the IEEE International Conference on Robotics and Automation, May 10-15, pp. 1322-1328.
  • 13. Doucet A., Godsill S., Andrieu C. (2000). "On sequential Monte Carlo sampling methods for Bayesian filtering," Statistics and Computing, vol. 10, pp. 197-208.
  • 14. Gordon N. J., Salmond D. J., Smith A. F. M. (1993). "Novel approach to nonlinear/non-Gaussian Bayesian state estimation," IEE Proceedings F: Radar and Signal Processing, vol. 140, pp. 107-113.
  • 15. Kitagawa G. (1996). "Monte Carlo filter and smoother for non-Gaussian nonlinear state space models," Journal of Computational and Graphical Statistics, vol. 5, pp. 1-25.
  • 16. Isard M., Blake A. (1998). "CONDENSATION: conditional density propagation for visual tracking," International Journal of Computer Vision, vol. 29, pp. 5-28.
  • 17. Kanazawa K., Koller D., Russell S. (1995). "Stochastic simulation algorithms for dynamic probabilistic networks," in Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, August 18-20, pp. 346-351.
  • 18. Kim D. W., Igor Y., Kang E. S., Jung S. (2013). "Design and control of an omni directional cleaning robot based landmarks," Journal of Korean Institute of Intelligent Systems, vol. 23, pp. 100-106.
  • 19. Jeong S. K., Kim T. G., Ko N. Y. (2013). "Programming toolkit for localization and simulation of a mobile robot," Journal of Korean Institute of Intelligent Systems, vol. 23, pp. 332-340.
  • 20. Van Q. N., Eum H. M., Lee J., Hyun C. H. (2013). "Vision sensor-based driving algorithm for indoor automatic guided vehicle," International Journal of Fuzzy Logic and Intelligent Systems, vol. 13, pp. 140-146.
  • 21. Edlinger T., Von Puttkamer E. (1994). "Exploration of an indoor-environment by an autonomous mobile robot," in Proceedings of the IEEE/RSJ/GI International Conference on Intelligent Robots and Systems (Advanced Robotic Systems and the Real World), September 12-16, pp. 1278-1284.
  • 22. Talluri R., Aggarwal J. K., Chen C. H., Pau L. F., Wang P. S. P. (1993). "Position estimation techniques for an autonomous mobile robot: a review," pp. 769-801.
  • 23. Durieu C., Clergeot H., Monteil F. (1989). "Localization of a mobile robot with beacons taking erroneous data into account," in Proceedings of the IEEE International Conference on Robotics and Automation, May 14-19, pp. 1062-1068.
  • 24. Betke M., Gurvits L. (1997). "Mobile robot localization using landmarks," IEEE Transactions on Robotics and Automation, vol. 13, pp. 251-263.
Images / Tables
  • [Table 1.] Monte Carlo localization (MCL) algorithm
  • [Table 2.] Motion model
  • [Table 3.] Measurement model
  • [Figure 1.] Capstone design laboratory (environment for robot navigation experiment).
  • [Figure 2.] Experimental setup.
  • [Table 4.] Locations of the beacons
  • [Table 5.] Locations of the way points
  • [Figure 3.] Differential drive robot MRP-NRLAB02.
  • [Figure 4.] Ultrasonic sensor system USAT A105.
  • [Table 6.] Control parameter values
  • [Figure 5.] Comparison of estimated trajectories using different parameter values.
  • [Figure 6.] Distance error between estimated and actual robot trajectories.
  • [Figure 7.] Orientation error between estimated and actual robot orientation.
  • [Table 7.] Mean of distance error and orientation error
  • [Table 8.] Standard deviation (SD) of distance error and orientation error