Parameterized Analysis in Artificial Intelligence
Disciplines
Computer Sciences (100%)
Keywords
- Parameterized Complexity
- Fixed-Parameter Tractability
- Artificial Intelligence
- Machine Learning
Finding efficient and rigorous ways to solve difficult computational problems is one of the most fundamental tasks in computer science. In general, there are two prominent approaches to such problems: we can either aim for solutions which are nearly optimal (leading to so-called approximation algorithms), or we can perform a fine-grained analysis of the problem to identify properties that will help us find optimal solutions. Indeed, while optimal solutions for many important computational problems cannot be computed efficiently in general, the real-world instances we actually care about often contain hidden structure that can be exploited to solve the problem. This idea led to the introduction of the parameterized complexity paradigm at the turn of the 21st century, and systematic research in this direction has since not only produced faster algorithms for fundamental problems in numerous areas of computer science, but also revealed the precise conditions under which such problems can be tackled efficiently. And yet, in the context of artificial intelligence, an area which has become a ubiquitous part of today's society, we often see a distinct lack of research targeting the fine-grained, parameterized complexity of problems arising in that field. The goal of this project is to change that by establishing the foundations for parameterized complexity theory in state-of-the-art research on artificial intelligence (AI), a task which will drastically expand our understanding of which AI problems can actually be solved efficiently, and under what conditions.
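To make the idea of exploiting hidden structure concrete, the following sketch (not taken from the project itself) shows the textbook fixed-parameter-tractable algorithm for Vertex Cover: a bounded search tree that decides in roughly O(2^k · m) time whether a graph with m edges has a vertex cover of size at most k. The exponential cost depends only on the parameter k, not on the size of the graph.

```python
# Illustrative sketch: the classic bounded-search-tree algorithm for
# Vertex Cover, a standard example of fixed-parameter tractability.

def has_vertex_cover(edges, k):
    """Return True iff the graph given by `edges` (a list of pairs)
    has a vertex cover of size at most k."""
    # If no edge remains uncovered, the cover built so far suffices.
    if not edges:
        return True
    # No budget left, but an edge is still uncovered.
    if k == 0:
        return False
    u, v = edges[0]
    # Any vertex cover must contain u or v; branch on both choices.
    for chosen in (u, v):
        remaining = [(a, b) for (a, b) in edges if a != chosen and b != chosen]
        if has_vertex_cover(remaining, k - 1):
            return True
    return False
```

For instance, the path on four vertices with edges (1,2), (2,3), (3,4) admits a vertex cover of size 2 (e.g. {2, 4}) but not of size 1, so the search tree explores at most 2^k branches regardless of how large the graph is.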
Crucially, the project will not only apply the established parameterized complexity paradigm to computational problems that arise in state-of-the-art artificial intelligence, but will also develop a theory of parameterized sample complexity: a set of mathematical tools and techniques for obtaining fine-grained bounds on how much data learning algorithms actually need to perform well. This comes with specific and non-trivial challenges, but the pay-off is equally significant: the ultimate outcome will be not only smarter and faster algorithms, but also a deeper understanding of which problems can be handled efficiently by machines.
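For context, a standard (non-parameterized) example of the kind of bound such a theory refines is the classical PAC sample-complexity bound for a finite hypothesis class $H$: in the realizable setting, with probability at least $1-\delta$, a consistent learner outputs a hypothesis with error at most $\varepsilon$ once the number of samples $m$ satisfies

\[
m \;\ge\; \frac{1}{\varepsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right).
\]

A parameterized sample-complexity theory, as envisioned here, would aim to replace the coarse dependence on $|H|$ with fine-grained dependencies on structural parameters of the learning problem.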
- Technische Universität Wien - 100%
- Mikko Koivisto, University of Helsinki - Finland
- Daniel Marx, CISPA Helmholtz Center for Information Security - Germany
- Iyad Kanj, DePaul University - USA
- Sebastian Ordyniak, University of Leeds - United Kingdom
Research Output
- 124 Citations
- 73 Publications