A Machine Learning Framework for Building Passive Surveillance Photogrammetry Models
Determining the geographic location of an object using two-dimensional (2D) images recorded at high-oblique angles is a nontrivial problem. Existing methods rely on parameters that are either difficult to measure or based on assumptions. This paper investigates the accuracy of building photogrammetric models using machine learning. Our novel approach collects training examples and then applies supervised learning to build a nonlinear, multitarget prediction model. We collected training examples using an unmanned ground vehicle (UGV) that moved throughout the fields of view of multiple cameras. The UGV was tracked and bounded using existing computer vision techniques. In each image frame, the center pixel position (x, y image coordinates) of the vehicle and its bounding box area (in pixels) were mapped to the vehicle's current GPS coordinates. Multiple machine learning models were created using various combinations of cameras to determine the key features for building accurate photogrammetric models. Data were collected under realistic conditions for ground-based surveillance systems, which may require cameras to be placed at low elevations and high-oblique angles. The prediction error of our models ranged from 0.58 to 3.54 meters, depending on a number of factors, including the locations, heights, and orientations of the cameras used.
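To make the mapping concrete, the sketch below illustrates the kind of nonlinear, multitarget regression the abstract describes: image-space features (center pixel x, y and bounding-box area) are mapped to two targets (latitude, longitude). The paper does not specify its learning algorithm, so a simple k-nearest-neighbors regressor and all data values here are illustrative assumptions, not the authors' actual model or measurements.

```python
import math

def knn_predict(train_X, train_Y, query, k=3):
    """Multitarget prediction: average the targets (e.g. lat/lon) of the
    k training examples nearest to the query in pixel-feature space."""
    ranked = sorted((math.dist(x, query), y) for x, y in zip(train_X, train_Y))
    neighbors = [y for _, y in ranked[:k]]
    n_targets = len(neighbors[0])
    return tuple(sum(nb[t] for nb in neighbors) / k for t in range(n_targets))

# Hypothetical training examples from one camera:
# features = (center_pixel_x, center_pixel_y, bbox_area_pixels)
train_X = [
    (120, 340, 900), (200, 330, 850), (310, 300, 700),
    (400, 280, 600), (520, 260, 500),
]
# targets = (latitude, longitude) logged from the UGV's GPS (made-up values)
train_Y = [
    (34.7301, -86.5850), (34.7303, -86.5846), (34.7306, -86.5840),
    (34.7309, -86.5835), (34.7312, -86.5829),
]

# Predict the GPS position for a newly detected bounding box
lat, lon = knn_predict(train_X, train_Y, (210, 325, 840), k=3)
print(lat, lon)
```

In practice a model with more capacity (e.g. a neural network or gradient-boosted trees) would better capture the nonlinear perspective distortion of high-oblique cameras, and features from multiple cameras could be concatenated into one feature vector, as the paper's camera-combination experiments suggest.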
Index Terms: Cameras, Robot vision systems, Training, Robot kinematics, Machine learning, Global Positioning System
E. M. Sturzinger, B. L. Whitehall, J. C. Tyler and C. J. Lowrance, "A Machine Learning Framework for Building Passive Surveillance Photogrammetry Models," 2019 SoutheastCon, Huntsville, AL, USA, 2019, pp. 1-6, doi: 10.1109/SoutheastCon42311.2019.9020437.