Disciplines
Electrical Engineering, Electronics, Information Engineering (20%); Computer Sciences (80%)
Keywords
- Optimal Control
- Artificial Intelligence
- UAV
- Machine Learning
- Continual Learning
In this research project, a drone will teach itself how to fly. Starting from the simplest tasks, such as hovering, the drone should gradually explore its motor skills, learn to understand the cause and effect of its motor control, and gain more experience and skill through constant practice until even challenging movements become possible. In essence, we are mimicking the development of human motor skills and proprioception, which occurs continuously and gradually, increasing in ability and complexity over time and building on past experience. To that end, we will combine elements of continual learning with novel, AI-based algorithms.

The innovative and high-risk element is that everything happens live on the drone. This approach carries significant risks: drones require continuous control inputs to stabilize in the air, and crashes are almost always catastrophic to the system's hardware. At the same time, the computing resources available onboard are limited, because weight and power consumption directly reduce flight time. This makes the use of the latest AI algorithms a challenge and makes training advanced AI algorithms on the device seemingly impossible.

Nevertheless, we believe that with our approach it will be possible to teach a drone to fly complex and fast manoeuvres with better precision and greater agility than previously possible. In addition, we hypothesize that rather than simply learning to repeat the desired behaviour for each new manoeuvre from scratch, the system will be able to build on its experience and reason about the optimal control sequence even for new manoeuvres not previously encountered. The project has the potential to initiate a paradigm shift in autonomous system navigation and control, away from the current trend of big-data-driven, offline-trained algorithms with black-box character and towards a more hardware-aware and task-oriented design of AI algorithms.
The ability to use self-learned knowledge about oneself to master new tasks can pave the way for the next generation of intelligent mechatronic systems beyond the scope of drones.
The goal of the project is to enable a drone to teach itself how to fly with the help of Artificial Intelligence (AI). The main innovative element is that everything happens live and directly on the drone; computations are not offloaded to a powerful workstation. In essence, the drone shall learn movement patterns similar to how a human develops motor skills: by trial and error and by building on past experience. This approach is inherently challenging, as modern AI algorithms are typically trained from a large set of prerecorded experiences on powerful GPU workstations, and risky, as wrong predictions of the AI model can lead to catastrophic crashes.

In the course of the project, we have investigated two different approaches for learning-based motor control to reach desired target positions: one based on popular reinforcement learning techniques and one based on a fuzzy controller. The results have shown that both types of controllers achieve superior results compared to the standard controllers available on today's drones, in particular with respect to disturbances present in the environment such as wind gusts. The algorithms have first been developed and tested in a simulation environment to ensure proper performance before being implemented live on the drone; the latter is the subject of ongoing work.

In a different yet related line of work, we have investigated methods to improve the localization of drones with AI methods. Accurate localization is a key prerequisite for drone navigation and is tightly linked with control. To that end, we have trained an AI model to preprocess and clean noisy inertial data and, in turn, significantly improve the localization of the drones. Significant work has been spent on making this algorithm online- and real-time-capable. First results with training these models directly on the drone during flight show promising performance.
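To illustrate the second control approach, a minimal fuzzy controller for altitude hold could look as follows. This is only a sketch of the fuzzify/infer/defuzzify structure; the membership functions, rule base, and output scaling are illustrative assumptions, not the controller actually deployed in the project:

```python
def tri(x, a, b, c):
    """Triangular membership function: peaks at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_thrust(altitude_error):
    """Map altitude error (target_z - z, in metres) to a thrust adjustment.

    Sugeno-style inference with singleton outputs: fuzzify the error into
    'below', 'on target', and 'above', then return the weighted average of
    the per-rule thrust corrections. All numbers are illustrative.
    """
    e = max(-1.5, min(1.5, altitude_error))  # clamp so some rule always fires
    below = tri(e, 0.0, 1.0, 2.0)        # drone is below the setpoint
    on_target = tri(e, -1.0, 0.0, 1.0)
    above = tri(e, -2.0, -1.0, 0.0)      # drone is above the setpoint
    # Rules: below -> +0.2 thrust, on target -> 0.0, above -> -0.2 thrust.
    num = 0.2 * below + 0.0 * on_target - 0.2 * above
    den = below + on_target + above
    return num / den
```

A full controller would add inputs such as vertical velocity and run at the control-loop rate; the single-input version above only demonstrates how a hand-readable rule base replaces the fixed gains of a standard controller.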
Combining localization and control, we have also developed a reinforcement learning-based approach that explicitly takes potentially faulty localization into account, thus improving the overall performance in realistic, real-world use cases where accurate localization cannot be guaranteed. Future work will build on these fundamental results to further push the boundaries of drone localization and control with AI models trained from scratch and on the drone during flight.
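One simple way to make a learned controller aware of faulty localization is to expose the state estimator's reported uncertainty in the policy observation, so the policy can learn to act more conservatively when the position estimate degrades. The sketch below shows only this observation construction; the names and shapes are assumptions for illustration, not the project's implementation:

```python
import numpy as np

def build_observation(est_pos, est_vel, pos_cov, target_pos):
    """Assemble a policy observation that exposes localization quality.

    est_pos, est_vel : estimated position / velocity, shape (3,)
    pos_cov          : position covariance from the state estimator, (3, 3)
    target_pos       : desired position, shape (3,)
    """
    pos_err = target_pos - est_pos        # vector towards the setpoint
    pos_std = np.sqrt(np.diag(pos_cov))   # per-axis 1-sigma uncertainty
    return np.concatenate([pos_err, est_vel, pos_std]).astype(np.float32)

# During training, the covariance can be inflated at random to simulate
# degraded localization, so the policy sees both good and bad estimates.
obs = build_observation(
    est_pos=np.zeros(3),
    est_vel=np.zeros(3),
    pos_cov=0.04 * np.eye(3),             # 0.2 m std-dev per axis
    target_pos=np.array([1.0, 0.0, 2.0]),
)
```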
- Universität Klagenfurt - 100%
Research Output
- 3 Publications
- 2 Disseminations
- 1 Funding
- 2024: AIVIO: Closed-Loop, Object-Relative Navigation of UAVs With AI-Aided Visual Inertial Odometry. Jantos T. Journal Article, IEEE Robotics and Automation Letters. DOI: 10.1109/lra.2024.3479713
- 2023: Deep Neural Networks and Statistical Estimators for Robot Perception and State Estimation. Jan Steinbrener. Postdoctoral Thesis.
- 2023: Deterministic Framework based Structured Learning for Quadrotors. Singh R. Conference Proceeding, pp. 99-104. DOI: 10.1109/mmar58394.2023.10242440
- 2022: Bridge Ausschreibung 2022. Research grant (including intramural programme). Start of funding: 2022. Funder: Austrian Research Promotion Agency.