High fidelity validation of vision-based sensors and algorithms for spaceborne navigation
- Connor Robert Beierle.
- [Stanford, California] : [Stanford University], 2019.
- Physical description: 1 online resource.
- Beierle, Connor Robert, author.
- D'Amico, Simone, degree supervisor.
- Macintosh, Bruce, 1966-, degree committee member.
- Schwager, Mac, degree committee member.
- Stanford University. Department of Aeronautics and Astronautics.
- ["This dissertation addresses the development of new hardware-in-the-loop testbeds for spaceborne vision-based navigation. The miniaturization of satellites and the trend towards distributed space systems place demanding requirements on navigation capabilities. In particular, next-generation miniature space systems require high levels of autonomy using limited onboard resources. In this context, vision-based sensors play an important role in providing a robust and passive means to perform both inertial navigation and relative navigation with respect to space-resident objects. Their characteristic low cost, low power consumption, small form factor, and high dynamic range make vision-based sensors the ideal tool to operate in space. Current testbeds used for the validation of vision-based sensors and navigation algorithms are affected by a number of limitations, such as limited geometric resolution, limited radiometric dynamic range, or limited scope in terms of range of operations and functional modes. To overcome these limitations, this research addresses the design, calibration, and utilization of two complementary hardware-in-the-loop testbeds which make use of a combination of virtual reality and robotics. The virtual reality testbed is a variable-magnification optical stimulator consisting of a pair of actuated lenses that magnify a monitor. High-fidelity synthetic scenes of the space environment are rendered to the monitor in real time and in closed loop to stimulate a vision-based sensor test article. This is done using both computer graphics and machine learning techniques. The physical reality testbed consists of a 7-degree-of-freedom robotic arm, which positions and orients a vision-based sensor with respect to a target object or scene. Custom illumination devices simulate Earth albedo and sunlight with high fidelity to emulate the illumination conditions present in space. After design and calibration, the resulting testbeds are used in conjunction to validate the performance of new navigation algorithms and to train neural networks for navigation of next-generation spacecraft."]
- Submitted to the Department of Aeronautics and Astronautics.
- Thesis (Ph.D.)--Stanford University, 2019.