Vision Sensor-Based Driving Algorithm for Indoor Automatic Guided Vehicles
  • CC BY-NC (non-commercial)
ABSTRACT
KEYWORD
Automatic guided vehicle, Marker, Path following control, Vision sensor
  • 1. Introduction

    Automatic guided vehicles (AGVs) have been a growing research area over the last two decades. AGVs play important roles in the design of new factories and warehouses, and in the safe movement of goods to their correct destinations. AGVs have been used in assembly lines or production lines in many factories, e.g., automobile, food processing, woodworking, and pharmaceutical plants. Many researchers have developed and designed AGVs to suit their applications, which are typically shaped by the major problems encountered in a factory working space.

    There are two main types of navigation methods for AGVs: guidance with lines and guidance without lines. In the first type of system, most AGVs track buried cables or guide paths painted on the floor. One of the most popular forms of navigation is based on the use of magnetic tape as a guide path, where the AGV is fitted with an appropriate guide sensor so it can follow the path of the tape [1]. However, it is not easy to change the layout of these guide paths, and any break in the wires makes it impossible to detect the route. The other form of AGV navigation system uses no paths or guidelines on the floor but instead relies on laser targets and inertial navigation [2,3]. Vehicles navigate using a laser scanner, which measures the angles and distances to reflectors mounted on the walls and machines. The use of lasers provides maximum flexibility to make easy guidance path changes, but this type of system is expensive to construct.

    Image processing techniques have also been of interest for detecting and recognizing guide paths during the development of AGVs [4-10]. In many studies related to vision-based orientation, various methods have been introduced for use in navigation systems, e.g., a feedback optimal controller [11], a fuzzy logic controller [12], neural networks, and interactive learning navigation. The use of vision systems has a significant effect on the development of systems that are flexible and efficient, with low maintenance costs, especially for mobile equipment. AGV systems that utilize a vision-based sensor and computer image processing for navigation can improve the flexibility of the system because more environmental information can be observed, which makes the system robust [13-15].

    In this study, we implemented a vision sensor-based driving algorithm that allows AGVs to follow a guide path defined by markers installed on the floor, which facilitates high reliability and rapid execution. This method requires only minor modifications to adapt to any layout. Using two cameras, we can reduce the position errors and allow the AGV to change the driving algorithm smoothly while following the desired path. A camera sensor on the top of the AGV is used to identify the next distant navigation marker. This camera is attached to the vehicle so it subtends an angle α with the floor. The other camera, which is mounted at half the distance from the top of the robot to the floor, is positioned perpendicular to the floor and is used to detect the nearest marker. These two cameras also provide angle and distance information between the center of the robot and the center of the marker, which is very helpful for allowing the AGV to track the guide path robustly. These vision sensors can also be used to detect other AGVs, thereby preventing collisions.

    The paper is organized as follows. Section 2 presents the AGV platform configuration. The marker recognition algorithm is described in Section 3. Section 4 explains the driving algorithm for tracking the desired path of an AGV. The experimental results using the two-wheeled mobile robot Stella B2 are given in Section 5, which are followed by the conclusions in Section 6.

    2. AGV Platform Configuration

    This section describes the configuration of the AGV in detail, where a two-wheeled mobile Stella B2 robot is used as the platform, with two universal serial bus (USB) cameras, vision and movement control software, and a laptop computer.

    The Stella B2 robot (NTREX, Incheon, Korea) is a commercial mobile robot system, which comprises a frame with three plate layers, two assembled nonholonomic wheels powered by two DC motors attached to an encoder, a motor driver, one caster wheel, a power module, and a battery (12 V, 7 A). A Lenovo ThinkPad (1.66 GHz, 1 GB of RAM; Lenovo, Morrisville, NC, USA) laptop computer is used as the controller device. A Microsoft Foundation Classes (MFC) control application was built for image processing and movement control. The computer and mobile robot communicate via a USB communication (COM) port.

    Two USB cameras are installed on the second and third plate layers. The camera on the second plate layer subtends an angle of 90° with the floor. The other camera is placed on top of the third plate layer so that the angle subtended between its view and the floor is α.

    An overview of the AGV navigation control system is shown in Figure 1. The laptop computer obtains environmental information from the front of the robot using the two USB cameras. The scene information extracted by the image processing algorithm is used as input by the driving navigation module, which sets a suitable velocity based on the current AGV environment.
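    To make this flow concrete, the following Python sketch outlines one possible shape of such a control loop using OpenCV and pySerial. The marker-analysis and velocity-selection functions, camera indices, port name, and command format are all illustrative assumptions; the authors' actual implementation is an MFC application.

        import cv2
        import serial

        # Sketch of the navigation control loop (assumed structure, not the
        # authors' MFC implementation): two USB cameras feed the image
        # processing step, whose output selects the wheel velocities that
        # are sent to the motor controller.
        top_cam = cv2.VideoCapture(0)         # bird's-eye camera, third layer
        bottom_cam = cv2.VideoCapture(1)      # perpendicular camera, second layer
        port = serial.Serial("COM3", 115200)  # USB COM port (assumed name/baud)

        def analyze_markers(far_frame, near_frame):
            # Placeholder for the marker recognition algorithm of Section 3.
            return None, 0.0                  # (marker_type, angle_to_marker)

        def select_velocities(marker_type, angle):
            # Placeholder for the driving algorithm of Section 4.
            return 0, 0                       # (left_speed, right_speed)

        while True:
            ok1, far_frame = top_cam.read()
            ok2, near_frame = bottom_cam.read()
            if not (ok1 and ok2):
                continue
            marker, angle = analyze_markers(far_frame, near_frame)
            left, right = select_velocities(marker, angle)
            # The command format below is purely illustrative.
            port.write(f"V {left} {right}\n".encode())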

    3. Marker Recognition Algorithm

    We use markers to provide directional information to the AGV, instead of using the line tracking method. The markers are detected and recognized according to the following steps [16-19]; a simplified sketch of this pipeline is given after the list.

    1) Learn the marker images and classify them using a support vector machine (SVM).

    2) Acquire images from two cameras: camera 1 for a bird's-eye view and camera 2 for a perpendicular view.

    3) HSV (hue, saturation, and value) color-based image calibration.

    4) Noise reduction with median filters.

    5) Conversion to binary images.

    6) Marker extraction using the histogram of oriented gradients (HOG) algorithm.
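    A minimal Python/OpenCV sketch of steps 3)-6) for a single frame is shown below. The HSV color bounds, the 64x128 patch size, and the pre-trained SVM file name are illustrative assumptions; the paper does not report its parameter values.

        import cv2
        import numpy as np

        # Hypothetical HSV range for the marker color (step 3 assumption).
        HSV_LOW = np.array([0, 80, 80], dtype=np.uint8)
        HSV_HIGH = np.array([20, 255, 255], dtype=np.uint8)
        hog = cv2.HOGDescriptor()                # default 64x128 HOG window
        svm = cv2.ml.SVM_load("marker_svm.xml")  # SVM from step 1 (assumed file)

        def recognize_marker(frame_bgr):
            # Step 3: HSV color-based segmentation of candidate marker pixels.
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, HSV_LOW, HSV_HIGH)
            # Step 4: median filtering to suppress salt-and-pepper noise.
            mask = cv2.medianBlur(mask, 5)
            # Step 5: the thresholded mask is binary; keep the largest blob.
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                return None
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            patch = cv2.resize(frame_bgr[y:y + h, x:x + w], (64, 128))
            # Step 6: HOG features of the candidate, classified by the SVM.
            gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
            features = hog.compute(gray).reshape(1, -1).astype(np.float32)
            _, label = svm.predict(features)
            return int(label[0, 0])    # e.g., straight/left/right/stop class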

    Using the marker recognition algorithm shown in Figure 2, the markers are first detected, and the directional and yaw angle information obtained from the detected markers is provided to the AGV. This information is used for AGV path tracking control.

    4. Driving Algorithm for Tracking the Desired Path

    In this section, we introduce the AGV driving algorithm based on vision sensors, including the movement patterns and the path following algorithm that allows an AGV to track the guide path correctly and robustly while avoiding collisions.

       4.1 Movement Patterns

    We use five main movement patterns to ensure the smooth performance of the AGV: starting, moving straight, pre-turning, left/right turning, and stopping. The movement pattern changes according to the navigation marker detected by the AGV.

    4.1.1 Starting

    The AGV does not move forward at high speed immediately; instead, it starts gradually and increases its speed until it reaches the desired velocity. This start process protects the two motors from damage and also reduces the vibration caused by sudden changes in velocity.
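    As a minimal sketch, assuming a send_speed callback that commands both wheels and purely illustrative speed units, the ramp might look as follows.

        import time

        def ramp_up(send_speed, target=300, step=20, dt=0.05):
            # Raise the wheel speed gradually instead of jumping to the
            # target, protecting the motors and reducing vibration. The
            # units and rates are assumptions, not the paper's values.
            speed = 0
            while speed < target:
                speed = min(speed + step, target)
                send_speed(speed, speed)    # equal speed on both wheels
                time.sleep(dt)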

    4.1.2 Moving straight

    The two AGV wheels move forward at the same speed. When the camera on top of the AGV detects the next straight navigation marker, the AGV increases its speed to the next speed level. If the next marker on the floor is still straight ahead, the AGV does not change its speed. While the AGV moves straight ahead, it also uses the path following algorithm to ensure that it remains in the center of the guide path. This algorithm is described in more detail in Section 4.2.

    4.1.3 Pre-turning

    The turning marker on the floor is first detected by the camera on the top of the AGV. Immediately after its detection, the AGV reduces its speed but keeps moving forward until the lower camera detects the turning marker. At this point, the AGV automatically changes to the turning pattern. In this movement pattern, the path following algorithm still keeps the AGV in its exact position on the guide path.

    4.1.4 Left/right turning

    The AGV reduces its speed continuously and stops when the center of the AGV is close to the center of the turning marker, before making a turn of 90° from its current position (by keeping the speeds of the two wheels equal but moving them in opposite directions).
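    For a differential-drive platform such as the Stella B2, turning in place with the wheels at +v and -v gives a yaw rate of ω = 2v/L, where L is the distance between the wheels, so the 90° turn takes (π/2)/ω seconds. The numbers below are assumptions for illustration, not the robot's actual dimensions.

        import math

        v = 0.10     # wheel rim speed in m/s (assumed)
        L = 0.30     # wheel-to-wheel distance in m (assumed)

        omega = 2.0 * v / L              # yaw rate in rad/s
        t_90 = (math.pi / 2.0) / omega   # time to rotate 90 degrees
        print(f"turn in place for {t_90:.2f} s")   # about 2.36 s here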

    4.1.5 Stopping

    Immediately after the AGV detects a stop marker, the overall velocity of the vehicle is reduced so that it is equal to zero when the AGV reaches the stop sign. The path following algorithm is also used to drive the AGV to the correct marker position.
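    The five patterns can be read as a small state machine driven by the markers seen by the two cameras. The following sketch is a simplified reading of Sections 4.1.1-4.1.5, not the authors' exact transition logic; the marker names are hypothetical labels.

        # far_marker: marker type seen by the top (bird's-eye) camera;
        # near_marker: marker type seen by the lower perpendicular camera.
        def next_state(state, far_marker, near_marker):
            if state == "starting":
                return "straight"               # ramp-up finished
            if state == "straight":
                if far_marker == "turn":
                    return "pre_turning"        # slow down, keep moving
                if far_marker == "stop":
                    return "stopping"
                return "straight"
            if state == "pre_turning":
                # Keep moving until the lower camera reaches the marker.
                return "turning" if near_marker == "turn" else "pre_turning"
            if state == "turning":
                return "straight"               # after the 90-degree turn
            return "stopped"                    # stopping -> stopped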

       4.2 Path Following

    The path following algorithm uses the information the AGV obtains from the vision sensors. The most important information required by the algorithm is the angle between the center of the AGV and the center of the navigation marker on the floor. The AGV analyzes this angle and gives the two motors an appropriate velocity so it can follow the guide path smoothly and robustly. Both the upper and lower cameras feed this algorithm to ensure more precise path following.

    We use nine angle intervals for the AGV, which correspond to nine velocity levels: four angle intervals for turning left, four for turning right, and one for moving straight, which together allow it to track the guide path adequately. The AGV velocity is selected based on the angle between the center of the marker and the center view of the AGV. If the navigation marker is on the center view of the camera, or very close to it within interval A0, the AGV moves straight ahead. However, if the angle between the marker and the camera view center is not in the A0 interval, the velocities of the two wheels change to make the AGV turn back to the correct track. From angle interval A0 out to A4 (or A'0 to A'4), the angle between the AGV and the marker becomes progressively larger, as shown in Figure 3.

    Let us assume that the AGV is in a position that subtends an angle β with the marker in interval A'4, as shown in Figure 4. Thus, the AGV needs to turn right so it can move back to the guide path. The AGV does not turn back to the track abruptly; instead, it gradually turns right so that angle β lies in interval A'3. When the AGV is in A'3, the velocity difference between the left wheel and the right wheel is reduced slightly to ensure that the movement is smooth. Using a similar procedure, the AGV turns right through A'2 and A'1 until it reaches interval A0, after which it moves straight ahead. When angle β is instead on the right side, from A1 to A4, we use the same algorithm except that the AGV executes a left turn.
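    This interval rule can be sketched as a lookup from the measured angle β to a wheel speed pair, where the speed differential shrinks level by level as the AGV converges on A0, reproducing the gradual A'4 → A'3 → ... → A0 behavior described above. The interval boundaries and speed values are illustrative assumptions; the paper does not report its numeric settings.

        # Boundaries (degrees) of intervals A0..A3; beyond the last is A4.
        BOUNDS = [2.0, 6.0, 12.0, 20.0]     # assumed values
        DIFFS = [0, 10, 25, 45, 70]         # speed differential per level
        BASE = 200                          # nominal forward speed (arbitrary)

        def wheel_speeds(beta_deg):
            # beta_deg > 0: marker left of center (A1..A4) -> steer left;
            # beta_deg < 0: marker right of center (A'1..A'4) -> steer right.
            # The sign convention is an assumption for illustration.
            level = 4                       # default: outermost interval
            for i, bound in enumerate(BOUNDS):
                if abs(beta_deg) <= bound:
                    level = i
                    break
            diff = DIFFS[level]
            if beta_deg > 0:
                return BASE - diff, BASE + diff   # slow the left wheel
            if beta_deg < 0:
                return BASE + diff, BASE - diff   # slow the right wheel
            return BASE, BASE                     # A0: go straight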

       4.3 AGV Collision Avoidance

    Factories often use several AGVs to increase performance. Each AGV works in a different area, but two or more AGVs may occasionally work in one area. To avoid collisions, we constructed a collision avoidance algorithm to protect the AGVs. Based on the information obtained from the two cameras, an AGV can detect whether another AGV is close by, so it can stop moving until the other AGV moves out of the way.
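    In outline, this behavior reduces to a guard in the drive loop, as in the sketch below; the AGV detector itself is left as a placeholder because the paper does not specify how other AGVs are recognized in the images.

        def drive_step(frames, detect_agv, send_speed, normal_speeds):
            # frames: current images from the two cameras;
            # detect_agv: callable returning True when another AGV is close;
            # send_speed: motor command callback (both are placeholders).
            if any(detect_agv(f) for f in frames):
                send_speed(0, 0)            # stop and wait until clear
            else:
                send_speed(*normal_speeds)  # resume the movement pattern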

    5. Experimental Results

    This section presents our experimental results using a two-wheeled Stella B2 mobile robot. Each experiment was conducted in a controlled environment, including the specification of the layout design and the lighting source. The system captured images directly from the two USB cameras in real time, processed the images with the MFC program, analyzed the information using the proposed control algorithms, and sent the results to the mobile robot motor controller via the serial interface of the laptop computer. The overall control system comprised the navigation marker, the AGV detection system, and the driving algorithm. In this experimental study, the control system was evaluated separately according to each of the following criteria: 1) marker detection experiment, 2) AGV detection experiment, and 3) path following experiment.

       5.1 Navigation Marker Detection Algorithm Experimental Results

    We tested the capacity for autonomous navigation using navigation markers installed on the floor, where the navigation path was set up as shown in Figure 5.

    Figure 6 shows a sample image result, which was acquired after the marker detection process. In this image, the marker on the floor is straight ahead, while the negative sign of the angle value indicates that the AGV is deviating to the right from the center of the marker.

    The AGV moved from the start position by detecting the sign markers using the driving algorithm, until it reached the final target.

       5.2 AGV Detection Algorithm Experimental Results

    In this experiment, the effectiveness of the proposed AGV detection algorithm was tested and evaluated using two Stella B2 mobile robots, which moved in opposite directions. When the two mobile robots were sufficiently close to detect each other, they stopped to avoid a collision. Figure 7 shows the experimental results obtained with the AGV detection algorithm.

       5.3 Path Following Algorithm Experimental Results

    The path following algorithm was constructed to ensure that the AGV would move exactly toward the center of the navigation marker or return to the correct guide path if the AGV was not in the center of the navigation marker. In this experiment, a Stella B2 mobile robot was placed on the floor so it subtended an angle β relative to the center of the navigation marker. The results of this experiment showed that the proposed algorithm could track the guide path smoothly and precisely. Figure 8 shows actual images of the Stella B2 mobile robot while it was following the desired path.

    We produced a graph of the path tracking process by logging the angle data while the AGV was running. Figure 9 shows the path tracking control results for the AGV. We compared the path tracking performance of a controller using the proposed driving algorithm with that of a controller that lacked the proposed driving algorithm. Figure 9 shows that the proposed driving algorithm converged on the desired path with tracking errors of approximately 10%.

    6. Conclusion

    The experimental results showed that the vision sensor-based driving algorithm for AGVs was implemented successfully in a real path guidance system platform in a laboratory environment.

    The vision-based marker following method for AGVs worked perfectly using two low-cost USB cameras. The combination of information from the two cameras allowed the AGV to operate highly efficiently and smoothly. The AGV followed the guide path correctly and moved as close as possible to the center of the navigation marker using the vision sensors. The AGVs could avoid collisions when more than one AGV worked in the same space. This control system does not require the destination target to be programmed because it relies entirely on signs placed on the floor. Thus, AGVs can operate in a flexible manner using any layout.

References
  • 1. Lee Y. J., Ryoo Y. J. (2011). "Navigation of unmanned vehicle using relative localization and magnetic guidance," Journal of Korean Institute of Intelligent Systems, vol. 21, pp. 430-435.
  • 2. Alenya G., Escoda J., Martinez A. B., Torras C. (2005). "Using laser and vision to locate a robot in an industrial environment: a practical experience," in Proceedings of the 2005 IEEE International Conference on Robotics and Automation, pp. 3528-3533.
  • 3. Lee M., Han J., Jang C., Sunwoo M. (2013). "Information fusion of cameras and laser radars for perception systems of autonomous vehicles," Journal of Korean Institute of Intelligent Systems, vol. 23, pp. 35-45.
  • 4. Desouza G. N., Kak A. C. (2002). "Vision for mobile robot navigation: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, pp. 237-267.
  • 5. Badal S., Ravela S., Draper B., Hanson A. (1994). "A practical obstacle detection and avoidance system," in Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, pp. 97-104.
  • 6. Christensen H. I., Kirkeby N. O., Kristensen S., Knudsen L., Granum E. (1994). "Model-driven vision for in-door navigation," Robotics and Autonomous Systems, vol. 12, pp. 199-207.
  • 7. Dao N. X., You B. J., Oh S. R. (2005). "Visual navigation for indoor mobile robots using a single camera," in Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1992-1997.
  • 8. Park J., Rasheed W., Beak J. (2008). "Robot navigation using camera by identifying arrow signs," in Proceedings of the 3rd International Conference on Grid and Pervasive Computing Workshops, pp. 382-386.
  • 9. Hu H., Gu D. (2000). "Landmark-based navigation of industrial mobile robots," International Journal of Industrial Robot, vol. 27, pp. 458-467.
  • 10. Chao M. T., Braunl T., Zaknich A. (1999). "Visually-guided obstacle avoidance," in Proceedings of the 6th International Conference on Neural Information Processing, pp. 650-655.
  • 11. Lee J. W., Choi S. U., Lee C. H., Lee Y. J., Lee K. S. (2001). "A study for AGV steering control and identification using vision system," in Proceedings of the 2001 IEEE International Symposium on Industrial Electronics, pp. 1575-1578.
  • 12. Cho J. T., Nam B. H. (2000). "A study on the fuzzy control navigation and the obstacle avoidance of mobile robot using camera," in Proceedings of the 2000 IEEE International Conference on Systems, Man, and Cybernetics, pp. 2993-2997.
  • 13. Jin T. S., Morioka K., Hashimoto H. (2011). "Appearance based object identification for mobile robot localization in intelligent space with distributed vision sensors," International Journal of Fuzzy Logic and Intelligent Systems, vol. 4, pp. 165-171.
  • 14. Lee W. H., Lee H. W., Kim S. H., Jung J. Y., Roh T. J. (2004). "Moving path following and high speed precision control of autonomous mobile robot using fuzzy," Journal of Korean Institute of Intelligent Systems, vol. 14, pp. 907-913.
  • 15. Min D. H., Jung K. W., Kwon K. Y., Park J. Y. (2011). "Application of recent approximate dynamic programming methods for navigation problems," Journal of Korean Institute of Intelligent Systems, vol. 21, pp. 737-742.
  • 16. Chapman D. (1998). Teach Yourself Visual C++ 6 in 21 Days.
  • 17. Gonzalez R. C., Woods R. E. (2007). Digital Image Processing.
  • 18. Szeliski R. (2010). Computer Vision: Algorithms and Applications.
  • 19. Baxes G. A. (1994). Digital Image Processing: Principles and Applications.
Images / Tables
  • Figure 1. Automatic guided vehicle (AGV) navigation control system.
  • Figure 2. Flow chart of the vision-based marker recognition algorithm.
  • Figure 3. Angle intervals between the automatic guided vehicle and the navigation marker.
  • Figure 4. Path following by an automatic guided vehicle.
  • Figure 5. Navigation path on the floor.
  • Figure 6. The navigation marker detection result.
  • Figure 7. Automatic guided vehicle collision avoidance experiment.
  • Figure 8. Path following algorithm experiment using a Stella B2 mobile robot.
  • Figure 9. Automatic guided vehicle path tracking using the path following algorithm; sample time: sec.