Kuroda Yoji
   Department   Undergraduate School, School of Science and Technology
   Position   Professor
Language English
Publication Date 2014/04
Type Academic Journal
Peer Review Peer reviewed
Title Pre-Driving Needless System for Autonomous Mobile Robots Navigation in Real World Robot Challenge 2013
Contribution Type Co-authored (other than first author)
Journal Journal of Robotics and Mechatronics
Journal Type Japan
Publisher JSME/Fuji Technology Press
Volume, Issue, Page 26(2), pp.185-195
Author and coauthor M. Saito, K. Kiuchi, S. Shimizu, T. Yokota, Y. Fujino, T. Saito, and Y. Kuroda
Details This paper describes the navigation system for autonomous mobile robots taking part in the real-world Tsukuba Challenge 2013 robot competition. Tsukuba Challenge 2013 allows any information on the route to be collected beforehand and used on the day of the challenge. At the same time, however, autonomous mobile robots should function appropriately in daily human life even in areas where they have never been before. Our system therefore does not require pre-driving to capture environmental details. We analyzed traverses of complex urban areas without prior environmental information using light detection and ranging (LIDAR). We also determined robot status, such as position and orientation, using Gauss maps derived from LIDAR rather than gyro sensors. Dead reckoning combined wheel odometry with the orientation estimate described above. We corrected 2D robot poses by matching against electronic maps from the Web. Because drift, wheel slippage, and other failures inevitably cause errors, our robot also traced waypoints derived beforehand from the same electronic map, so localization remains consistent even if we do not drive through an area ahead of time. Trajectory candidates are generated along global planning routes based on these waypoints, and an optimal trajectory is selected. Tsukuba Challenge 2013 required that robots find specified human targets indicated by features released on the Web. To find the targets correctly without driving in Tsukuba beforehand, we searched for point cloud clusters similar to the specified human targets based on the predefined features. These point clouds were then projected onto the camera image at that time, and we extracted local features such as SURF to apply fast appearance-based mapping (FAB-MAP). This enabled us to find the specified targets with high accuracy. To demonstrate the feasibility of our system, experiments were conducted on a route through our university and on the Tsukuba Challenge route.
URL for researchmap https://www.fujipress.jp/jrm/rb/robot002600020185
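The dead-reckoning step outlined in the abstract, which fuses wheel-odometry translation with a heading estimated from LIDAR Gauss maps in place of a gyro, can be illustrated with a minimal sketch. The function name, interface, and sample values below are assumptions made for illustration only, not the authors' implementation.

    import math

    def dead_reckon_step(pose, wheel_distance, lidar_heading):
        # pose: (x, y, theta) in a fixed 2D frame.
        # wheel_distance: translation from wheel odometry for this step [m].
        # lidar_heading: absolute heading [rad] estimated from the LIDAR
        #   Gauss map, used in place of a gyro as described in the paper.
        x, y, _ = pose
        x += wheel_distance * math.cos(lidar_heading)
        y += wheel_distance * math.sin(lidar_heading)
        return (x, y, lidar_heading)

    # Hypothetical usage: integrate a few odometry/heading samples.
    pose = (0.0, 0.0, 0.0)
    for d, th in [(0.10, 0.00), (0.10, 0.02), (0.10, 0.05)]:
        pose = dead_reckon_step(pose, d, th)
    print(pose)

A complete system would additionally apply the 2D pose correction against the Web electronic map and the waypoint-based global planning described in the abstract; this sketch covers only the odometry update.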