Layering Laser Rangefinder Points onto Images for Obstacle Avoidance by a Neural Network
Obstacle avoidance is essential to autonomous robot navigation, but maneuvering around an obstacle causes the system to deviate from its normal path. Oftentimes, these deviations carry the robot into new regions that lack the path's usual or meaningful features. This is problematic for vision-based steering controllers, including convolutional neural networks (CNNs), which depend on recognizable patterns being present in the camera images. Without the path in view, the images offer no consistent, noticeable pattern for the neural network, which usually leads to erroneous steering commands. In this paper, we mitigate this problem by superimposing points from a two-dimensional (2D) scanning laser rangefinder (LRF) onto camera images using the Open Source Computer Vision (OpenCV) library. The visually encoded LRF data gives the CNN a new pattern to recognize, helping the robot avoid obstacles and rediscover its path. In contrast, existing approaches to robot navigation do not use a single CNN to perform both line following and obstacle avoidance. Using our approach, we trained a CNN to follow a lined path and avoid obstacles with a reliability rate of nearly 60% on a complex course and over 80% on a simpler course.
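The overlay step described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the camera intrinsics (`FX`, `FY`, `CX`, `CY`), the assumption that the LRF and camera share an origin, and the assumption that the scan plane aligns with the camera's optical axis are all simplifications introduced here. It projects polar LRF returns into pixel coordinates with a pinhole model and marks them as bright pixels; in practice the points would be drawn with an OpenCV routine such as `cv2.circle`.

```python
import numpy as np

# Hypothetical pinhole-camera intrinsics (assumed values for illustration).
FX, FY = 500.0, 500.0      # focal lengths in pixels
CX, CY = 320.0, 240.0      # principal point (image center)
IMG_W, IMG_H = 640, 480

def project_lrf_points(ranges, angles, img):
    """Project 2D LRF returns (range, bearing) into the image plane and
    mark them as bright pixels, approximating the overlay step.

    Assumes the LRF and camera share an origin and the scan plane is
    level with the camera's optical axis (a simplifying assumption).
    """
    # Polar -> Cartesian in the camera frame: x right, z forward.
    x = ranges * np.sin(angles)          # lateral offset (m)
    z = ranges * np.cos(angles)          # depth along optical axis (m)
    valid = z > 0.1                      # keep points in front of the camera
    u = (FX * x[valid] / z[valid] + CX).astype(int)
    v = np.full_like(u, int(CY))         # level scan plane maps to one image row
    inside = (u >= 0) & (u < IMG_W)
    img[v[inside], u[inside]] = 255      # draw overlay (cv2.circle in practice)
    return img

img = np.zeros((IMG_H, IMG_W), dtype=np.uint8)
ranges = np.array([2.0, 2.5, 3.0])       # example ranges in meters
angles = np.radians([-10.0, 0.0, 10.0])  # example bearings
out = project_lrf_points(ranges, angles, img)
print(int(out.sum() // 255))             # number of pixels drawn -> 3
```

The augmented image, with the bright overlay encoding obstacle range and bearing, would then be fed to the CNN in place of the raw camera frame.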
Cameras, Robot vision systems, Collision avoidance, Navigation, Lasers
N. P. Ges, W. C. Anderson and C. J. Lowrance, "Layering Laser Rangefinder Points onto Images for Obstacle Avoidance by a Neural Network," 2019 SoutheastCon, Huntsville, AL, USA, 2019, pp. 1-6, doi: 10.1109/SoutheastCon42311.2019.9020359.