Airborne sensor fusion: Expected accuracy and behavior of a concurrent adjustment
Tightly-coupled sensor orientation, i.e. the simultaneous processing of temporal (GNSS and raw inertial) and spatial (image and lidar) constraints in a common adjustment, has demonstrated significant improvement in the quality of attitude determination with small inertial sensors. This is particularly beneficial in kinematic laser scanning on lightweight aerial platforms, such as drones, which employ direct sensor orientation for the spatial interpretation of laser vectors. In this study, previously reported preliminary results are extended to assess the gain in accuracy of sensor orientation through leveraging all available spatio-temporal constraints in a dynamic network i) with a commercial IMU for drones and ii) with the simultaneous processing of raw observations from several low-quality IMUs. Additionally, we evaluate the influence of different types of spatial constraints (2D image and 3D point-cloud tie-points) and flight geometries (with and without a cross flight line). We present the newly implemented estimation of confidence levels and compare these with the observed residual errors. The empirical evidence demonstrates that the use of spatial constraints increases the attitude accuracy of the derived trajectory by a factor of 2–3, both for the commercial and the low-quality IMUs, while at the same time reducing the dispersion of geo-referencing errors, resulting in a considerably more precise and self-consistent geo-referenced point-cloud. We further demonstrate that the use of image constraints (in addition to lidar constraints) stabilizes the in-flight lidar boresight estimation by a factor of 3–10, establishing the feasibility of such estimation even in the absence of dedicated calibration patterns or targets.
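To convey the idea of combining temporal and spatial constraints in a single adjustment, the following purely illustrative Python sketch (not the dynamic-network implementation used in the study) solves a toy 1-D weighted least-squares problem. Per-epoch "attitude" states are observed through noisy absolute pseudo-observations (standing in for GNSS/inertial information) and, in a second solution, additionally through more precise relative constraints between epochs (standing in for image/lidar tie-points). All noise levels, the observation models, and the resulting improvement factors are assumptions chosen for illustration only.

```python
# Illustrative toy example: joint adjustment of "temporal" (absolute, per-epoch)
# and "spatial" (relative, between-epoch) constraints on 1-D attitude states.
# Values and models are assumptions; this is NOT the study's implementation.
import numpy as np

rng = np.random.default_rng(0)

n_epochs = 50
true_att = 0.02 * np.sin(np.linspace(0, 4 * np.pi, n_epochs))  # true angles [rad]

def solve_wls(A, obs, w):
    """Weighted least squares: estimate and formal (predicted) std. dev. per state."""
    W = np.diag(w)
    N = A.T @ W @ A
    x = np.linalg.solve(N, A.T @ W @ obs)
    sigma = np.sqrt(np.diag(np.linalg.inv(N)))
    return x, sigma

# --- temporal constraints only: noisy absolute attitude pseudo-observations
sigma_t = 0.01
A_t = np.eye(n_epochs)
obs_t = true_att + rng.normal(0.0, sigma_t, n_epochs)
x_t, s_t = solve_wls(A_t, obs_t, np.full(n_epochs, sigma_t ** -2))

# --- add spatial constraints: tie-point-like observations constrain the
#     *difference* between two epochs' attitudes much more precisely
pairs = [(i, i + 5) for i in range(n_epochs - 5)]
sigma_s = 0.002
A_s = np.zeros((len(pairs), n_epochs))
obs_s = np.empty(len(pairs))
for k, (i, j) in enumerate(pairs):
    A_s[k, i], A_s[k, j] = 1.0, -1.0
    obs_s[k] = (true_att[i] - true_att[j]) + rng.normal(0.0, sigma_s)

A = np.vstack([A_t, A_s])
obs = np.concatenate([obs_t, obs_s])
w = np.concatenate([np.full(n_epochs, sigma_t ** -2),
                    np.full(len(pairs), sigma_s ** -2)])
x_ts, s_ts = solve_wls(A, obs, w)

rms = lambda e: float(np.sqrt(np.mean(e ** 2)))
print(f"RMS attitude error, temporal only      : {rms(x_t - true_att):.5f} rad")
print(f"RMS attitude error, temporal + spatial : {rms(x_ts - true_att):.5f} rad")
print(f"mean formal sigma, temporal only       : {s_t.mean():.5f} rad")
print(f"mean formal sigma, temporal + spatial  : {s_ts.mean():.5f} rad")
```

Running the sketch shows both the empirical attitude error and the estimated (formal) confidence level shrinking once the relative constraints are included, which parallels, at a toy level, the comparison between predicted confidence levels and observed residual errors made in the study.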
Volume 12, April 2024 · Article 100057 · 2024-02-22 · Status: Reviewed
| Funder         | Grant Number |
| -------------- | ------------ |
| H2020          | 101004255    |
| CTI/Innosuisse | 53622.1      |