Lifting-based regularization for dynamic image data
Disciplines
Computer Sciences (20%); Mathematics (80%)
Keywords
Variational Image Processing, Regularization Functionals, Functional Lifting, Inverse Problems, Dynamic Image Data, Convex Relaxation
Digital image data is nowadays an integral part of human culture. It stores our holiday memories, visualizes nano-scale structures in cells and allows an examination of the interior of the human body. Recording a digital image of an object always involves measuring a physical quantity and generating the image from this measurement via mathematical operations. While image generation is comparatively easy when the image data is observed directly, as with a digital camera, indirect or corrupted measurements of the object of interest make this process much harder. In most scientific and clinical applications of image processing, however, only the second type of measurement is available. Important examples are tomographic imaging techniques such as magnetic resonance (MR) imaging, where one can only measure the current that hydrogen protons induce in receiver coils in order to reconstruct an image of the inside of the body.

When indirect or corrupted data prevents a direct reconstruction of images, mathematical modeling comes into play. A mathematical model of expected image structures can provide the missing information necessary to convert corrupted measurements into clean image data. In variational image processing, this is achieved by solving a minimization problem that involves the measured data and the image model in the form of a regularization functional. In this context, it is crucial that the regularization functional strikes a delicate balance between computational realizability and realistic modeling of the data. The underlying concept of regularization functionals is to measure typical properties of image data with mathematical operations that are simple enough to be realized in practice. State-of-the-art approaches for still images, for instance, measure the smoothness of the image via differentiation operations.
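The minimization problem described above can be sketched as follows; the notation is generic rather than specific to this project: f denotes the measured data, A the forward operator modeling the measurement process, R the regularization functional and λ > 0 a weight balancing the two terms.

```latex
\min_{u} \; \frac{1}{2}\|Au - f\|^{2} + \lambda\, R(u),
\qquad \text{e.g.}\quad
R(u) = \mathrm{TV}(u) = \int_{\Omega} |\nabla u| \,\mathrm{d}x ,
```

where the total variation TV is a classical example of a differentiation-based regularizer for still images.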
With dynamic data, the distinctive role of the temporal dimension makes the development of realistic yet tractable models more challenging. In addition, the human eye is very sensitive to any variation in time, and hence an accurate reconstruction of image dynamics is crucial for the perceived quality of the results. On the other hand, temporal correspondences between frames generically constitute a strong source of redundancy that can be exploited for regularization. The goal of the proposed project is to develop and employ new mathematical tools for the regularization of dynamic image data. To this end, we will transfer the image data to a higher-dimensional space where essential properties are more easily accessible with simple mathematical operations. This will allow us to obtain new regularization approaches for dynamic data that describe image dynamics better than existing ones while remaining computationally tractable. These new techniques will then also be employed for the reconstruction of dynamic MR data, contributing to high-resolution imaging of the beating heart.
A problem setting central to science and engineering is to determine properties of a (physical or virtual) object for which only indirect measurements are available. Examples of such properties include (material) parameters in physical systems, interior body measurements in medical imaging, but also virtual parameters in computer programs (such as neural networks) that one aims to tune so that the program can complete certain tasks. While the measurements, as well as a mathematical model of how they arose, can often be assumed to be given, determining the unknown object properties that caused these measurements is rather difficult. The main challenge in many applications is an ambiguous and/or unstable dependence of the unknown object on the measured data. The latter means that even the slightest deviation in the measured data (e.g. due to measurement noise) leads to a dramatic change in the object properties that can be directly associated with the measurement. A very successful mathematical technique for overcoming these difficulties is regularization. Regularization means that one aims to determine the unknown object properties not by relying solely on the measured data, but also by incorporating some prior knowledge about them. Such prior knowledge is typically formulated as a second mathematical model. Using a numerical algorithm, the unknown object properties solving the original problem are then obtained as those properties that provide an optimal balance between explaining the measured data on the one hand and fitting the mathematical model of prior knowledge on the other.
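The instability described above, and how regularization counters it, can be demonstrated numerically. The following is a minimal sketch, not the project's actual method: the forward operator (a Gaussian blur), the noise level and the simple Tikhonov penalty ||x||² as "prior knowledge" are all illustrative choices.

```python
import numpy as np

# Indirect measurement model: b = A @ x_true + noise, where A is a
# badly conditioned Gaussian blur, so direct inversion amplifies noise.
rng = np.random.default_rng(0)
n, s = 20, 2.0  # problem size and blur width (larger s -> worse conditioning)
A = np.array([[np.exp(-((i - j) ** 2) / (2 * s ** 2)) for j in range(n)]
              for i in range(n)])
x_true = np.zeros(n)
x_true[5:15] = 1.0  # a simple piecewise-constant "object"
b = A @ x_true + 1e-3 * rng.standard_normal(n)

# Naive reconstruction: solve A x = b directly from the noisy data.
x_naive = np.linalg.solve(A, b)

# Regularized reconstruction: min_x ||A x - b||^2 + lam * ||x||^2,
# i.e. Tikhonov regularization with the prior "x has small norm".
lam = 1e-3
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
print(err_naive, err_reg)  # the regularized error is far smaller here
```

Even though the prior "small norm" is a crude model of the true object, balancing it against the data term stabilizes the reconstruction, which is exactly the trade-off discussed above.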
In order to make the above-described regularization approach work in practice, an important task is the development of mathematical models for particular problem settings that i) make it possible to overcome an ambiguous and/or unstable dependence of the solution on the measured data, ii) are sufficiently rich to realistically describe a large class of objects, and iii) are sufficiently simple that a numerical realization is feasible in practice. The goal of this project was to develop new mathematical models fulfilling these properties, with a focus on applications in mathematical image processing, where the unknown object properties are described by image data. In order to fulfill in particular requirements ii) and iii) above, a technique used in the project was functional lifting. This means that rich but numerically difficult models are transferred to a higher-dimensional space in such a way that they become numerically tractable there. Using these techniques, as a main result of the project, we were able to develop a mathematical model for repeating patterns in image data that is numerically feasible and allows the reconstruction of unknown images from given, indirect measurements while simultaneously and automatically learning such patterns. In addition, by analyzing this model, we were able to prove mathematically that the model indeed also satisfies requirement i) above. Further, using numerical implementations, we showed that the proposed model improves significantly upon existing techniques in different problem settings. These results can be expected to yield practical improvements in various application-driven problems in science and engineering whenever the unknown object properties contain repeating patterns, a class of data for which a comprehensive mathematical model had so far been missing.
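As an illustration of the lifting idea, here is a classical construction from convex relaxation, given as a generic sketch rather than the specific model developed in the project: a scalar image u : Ω → [0,1] is replaced by the characteristic function of its subgraph,

```latex
v(x,t) \;=\; \mathbf{1}_{\{u(x) > t\}},
\qquad (x,t) \in \Omega \times [0,1],
```

so that certain functionals that are nonconvex in u become convex (in fact linear) in the higher-dimensional variable v, at the price of working with one additional dimension.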
- Ecole Polytechnique - 100%
- University of Cambridge - 100%
- Gabriel Peyré, Ecole Normale Supérieure de Paris - France
- Florian Knoll, Friedrich-Alexander-Universität Erlangen-Nürnberg - Germany
- Benedikt Wirth, Universität Münster - Germany
Research Output
- 68 Citations
- 3 Publications
- Holler M (2018). Coupled regularization with multiple data discrepancies. Inverse Problems, 084003. DOI: 10.1088/1361-6420/aac539
- Chambolle A (2019). A Convex Variational Model for Learning Convolutional Image Atoms from Incomplete Data. Journal of Mathematical Imaging and Vision, 417-444. DOI: 10.1007/s10851-019-00919-7
- Huber R (2019). Total generalized variation regularization for multi-modal electron tomography. Nanoscale, 5617-5632. DOI: 10.1039/c8nr09058k