Application of Convolutional Neural Network Image Classification for a Path-Following Robot
In discrete control systems, a variety of sensors are used to ensure that a robot completes the desired task as robustly as possible. Many tasks that a human operator could readily complete typically require a complex combination of sensors and code for a robot to complete autonomously. The purpose of this research was to develop a system in which a robot learned to follow a line the same way that a human would. This was accomplished by training a classification model on images recorded during tele-operation trials by a human operator. The training images were obtained from a forward-facing camera, and each image was automatically labeled as one of three steering classes (i.e., straight, left turn, or right turn) based on the button pressed on the joystick at the time the image was recorded. The convolutional neural network was built and trained using the Keras Python library with TensorFlow as its backend. The results showed that a model trained to 82% accuracy was able to navigate the course in all 40 trials that were conducted. The ground truth of each trial was recorded via a motion capture system, and a comparison of the human operator trials and the network trials showed little difference between the paths taken. The results also showed that the numerical accuracy of the network was not the best indicator of how well it would imitate human operation.
Training, Educational robots, Navigation, Control systems, Sensors, Convolutional neural networks, Task analysis
W. K. Born and C. J. Lowrance, "Application of Convolutional Neural Network Image Classification for a Path-Following Robot," 2018 IEEE MIT Undergraduate Research Technology Conference (URTC), Cambridge, MA, USA, 2018, pp. 1-4, doi: 10.1109/URTC45901.2018.9244781.