Disciplines
Computer Sciences (70%); Mathematics (30%)
Keywords
Machine Learning, Algebraic Topology, Persistent Homology, Deep Learning
Over the past decade, concepts from algebraic topology have evolved into computationally practical methods for analyzing data from a topological perspective, a field now broadly known as topological data analysis (TDA). Arguably the most widely used tool from TDA is persistent homology, which offers a concise summary of homological information at different scales, such as the number of connected components, holes, or voids. While summaries of this kind can be highly informative and potentially useful for learning purposes, the connection between TDA and machine learning is still in its infancy. The goal of this project, Deep Homological Learning, is to develop novel and theoretically well-founded approaches to bridge the gap between TDA and recent advances in learning with deep neural networks. This includes (1) leveraging information from persistent homology as an additional data source for learning, (2) learning filtrations for persistent homology from data, and (3) using concepts from algebraic topology to study neural network architectures, their capacity, and their learning progress. We will contribute to the theoretical foundation of learning with persistent homology and to a deeper understanding of neural network capacity and learning behavior from a topological perspective. Advances along the lines proposed in this project (1) have great potential to yield better guidelines for neural network design, for example, informed by topological properties of the data, and (2) will eventually lead to practically useful diagnostic tools for analyzing learning progress.
The project's overall goal was to establish a solid, theoretically well-founded bridge between machine learning methods (neural networks in particular) and the relatively young subfield of topological data analysis, focusing primarily on persistent homology. Over the course of the project, we realized several such bridges. Most notably, we (1) introduced novel construction schemes for so-called "barcode vectorizations", i.e., representations of barcodes (the prevalent summary of topological features in data) that can readily serve as novel input layers to neural networks, and (2) demonstrated that persistent homology can be used during end-to-end training of neural networks, e.g., to promote specific topological properties of a network's internal representation of the data. The latter point, in fact, opened up a path to novel regularizers and established a way forward for studying generalization in neural networks from a topological perspective.
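The core difficulty a barcode vectorization solves is that barcodes are variable-size multisets of intervals, while network layers expect fixed-size inputs. The sketch below is a simplified, fixed (non-learned) stand-in for the learnable input layers developed in the project: each bar is evaluated against a set of Gaussian centers in the (birth, persistence) plane and the responses are summed. The centers and width are hard-coded here; in a learnable layer they would be trained parameters.

```python
import math

def vectorize_barcode(bars, centers, sigma=0.5):
    """Map a barcode (list of (birth, death) pairs with finite deaths)
    to a fixed-size vector: evaluate each bar against Gaussian centers
    in the (birth, persistence) plane and sum the responses.

    `centers` and `sigma` are illustrative fixed choices; a learnable
    vectorization layer would optimize them during training.
    """
    vec = [0.0] * len(centers)
    for b, d in bars:
        p = d - b                      # persistence (lifespan) of the bar
        for k, (cb, cp) in enumerate(centers):
            vec[k] += math.exp(-((b - cb) ** 2 + (p - cp) ** 2)
                               / (2 * sigma ** 2))
    return vec

# Two toy barcodes with different numbers of bars: both map to
# vectors of the same length, ready to feed a standard network.
centers = [(0.0, 0.1), (0.0, 1.0), (0.0, 5.0)]
v1 = vectorize_barcode([(0.0, 0.1), (0.0, 0.12)], centers)
v2 = vectorize_barcode([(0.0, 4.8)], centers)
print(len(v1), len(v2))
```

Summing Gaussian responses makes the output invariant to the ordering of bars and insensitive to small perturbations of birth/death values, which is what makes such vectorizations compatible with gradient-based training.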
- Universität Salzburg - 100%
Research Output
- 95 Citations
- 19 Publications
- 1 Dataset & model
- 2 Disseminations
- 2021: Graf F. Dissecting Supervised Contrastive Learning. Preprint. DOI 10.48550/arxiv.2102.08817
- 2021: Greer H. ICON: Learning Regular Maps Through Inverse Consistency. Preprint. DOI 10.48550/arxiv.2105.04459
- 2022: Graf F. On Measuring the Excess Capacity of Neural Networks. Conference Proceeding, Advances in Neural Information Processing Systems (NeurIPS), pp. 10164-10178
- 2022: Tian L. GradICON: Approximate Diffeomorphisms via Gradient Inverse Consistency. Preprint. DOI 10.48550/arxiv.2206.05897
- 2019: Niethammer M. Metric Learning for Image Registration. Conference Proceeding, pp. 8455-8464. DOI 10.1109/cvpr.2019.00866
- 2019: Niethammer M. Metric Learning for Image Registration. Preprint. DOI 10.48550/arxiv.1904.09524
- 2019: Hofer C. Connectivity-Optimized Representation Learning via Persistent Homology. Preprint. DOI 10.48550/arxiv.1906.09003
- 2020: Graf F. Graph Filtration Learning. Conference Proceeding, Proceedings of the 37th International Conference on Machine Learning, pp. 4314-4323
- 2020: Graf F. Topologically Densified Distributions. Conference Proceeding, Proceedings of the 37th International Conference on Machine Learning, pp. 4304-4313
- 2020: Kwitt R. A Shooting Formulation of Deep Learning. Conference Proceeding, Advances in Neural Information Processing Systems (NeurIPS), pp. 11828-11838
- 2020: Vialard F. A Shooting Formulation of Deep Learning. Preprint. DOI 10.48550/arxiv.2006.10330
- 2023: Inverse Consistency by Construction for Multistep Deep Registration. Book Chapter in: Medical Image Computing and Computer Assisted Intervention (MICCAI 2023), 26th International Conference, Vancouver, BC, Canada, October 8-12, 2023, Proceedings, Part X. Springer Nature Switzerland. DOI 10.1007/978-3-031-43999-5_65
- 2023: Greer H. Inverse Consistency by Construction for Multistep Deep Registration. Preprint. DOI 10.48550/arxiv.2305.00087
- 2019: Hofer C. Learning Representations of Persistence Barcodes. Journal Article, Journal of Machine Learning Research, pp. 1-45
- 2019: Hofer C. Connectivity-Optimized Representation Learning via Persistent Homology. Conference Proceeding, Proceedings of the 36th International Conference on Machine Learning, pp. 2751-2760
- 2021: Graf F. Topological Attention for Time Series Forecasting. Conference Proceeding, Advances in Neural Information Processing Systems (NeurIPS), pp. 24871-24882
- 2021: Graf F. Dissecting Supervised Contrastive Learning. Conference Proceeding, Proceedings of the 38th International Conference on Machine Learning, pp. 3821-3830
- 2021: Greer H. ICON: Learning Regular Maps Through Inverse Consistency. Conference Proceeding, pp. 3376-3385. DOI 10.1109/iccv48922.2021.00338
- 2023: Greer H. GradICON: Approximate Diffeomorphisms via Gradient Inverse Consistency. Conference Proceeding, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 18084-18094. DOI 10.1109/cvpr52729.2023.01734