Real-time Three-Dimensional Diminished Reality
Disciplines
Computer Sciences (100%)
Keywords
Mixed Reality, Image-Based Rendering, Inpainting, Diminished Reality
Real-time Three-Dimensional Diminished Reality (3DDR) develops a novel software framework for real-time diminished reality (DR), a variant of mixed reality (MR) that changes a user's real-time perception so that unwanted objects disappear. Our new approach does this in full 3D and in real time, so that users can move freely and see through or behind objects. Previous techniques assumed that the image region to be diminished (replaced) is flat, which is often not the case in real-world applications. Our method restores a three-dimensional, non-flat background, and it can also fill in the background where no previous observation exists. This type of filling -- a three-dimensional variant of so-called "inpainting" -- must be temporally coherent, so that perspective changes caused by a moving user do not destroy the illusion of DR. Our project lends itself to visual inspection tasks, aesthetic improvements of mixed reality scenes, and telepresence applications with high visual quality.
Diminished reality (DR) is a technique for removing objects from a streaming video shown on a display while the camera remains in motion. DR applications include removing pedestrians from a scene, simulating changes to urban environments by removing buildings, creating educational videos by removing surgeons, and eliminating unwanted objects during filming. One way to achieve DR is to install multiple cameras in the environment; however, this multi-view approach cannot restore areas that none of the cameras observe. To solve this issue, inpainting is used to hallucinate the missing areas by copying pixels from nearby regions, which is especially useful when advance scene preparation is not possible. Because inpainting has mostly been performed in image space, previous inpainting-based DR systems approximated the 3D scene by a flat plane, which is only applicable if the scene is sufficiently flat and reliably tracked.

To overcome this flat-world assumption, we developed a new approach called InpaintFusion, which removes objects in 3D space. We perform inpainting on both color and depth images while scanning the scene. Since this process can take a few seconds, we run it on a background thread (see the first sketch below). Once the inpainting is ready, we project the 2D result back into 3D space (second sketch below). The system continues to inpaint newly revealed, still-unobserved areas to fill in any remaining gaps. The merged 3D information allows for 3D interaction and for effects such as changing the lighting, unaffected by the removed objects.

We discovered that the quality of 3D inpainting depends significantly on the selection of the initial frame. To help novice users choose the right frame, we developed the "Good Keyframes to Inpaint" approach. We examined six heuristics used in InpaintFusion and derived simple criteria that can be measured during scene scanning, approximating the suitability of a first frame by a robust linear combination of these criteria (third sketch below). To calibrate the coefficients, we collected real-world data and repeatedly applied inpainting to it. This data-driven frame selection leads to more stable and higher-quality inpainting than random selection.

Modern multi-layer images support lightweight, wide-field-of-view 3D rendering. Since a multi-layer image provides a canonical space for inpainting, coherent inpainting over multiple frames becomes unnecessary (fourth sketch below). We developed a new inpainting algorithm to remove objects in this image modality. Additionally, we investigated multi-layer scene representations to better understand the conditions for robust multi-layer scene generation, as well as depth-map creation from a collection of defocused images, for future 3D DR approaches.
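As a concrete illustration of the background-thread step, the following minimal Python sketch runs a slow color-and-depth inpainting pass off the main tracking loop. It is a toy, not the project's implementation: naive_inpaint is a deliberately trivial stand-in for the actual patch-based inpainter, and all image sizes are assumed values.

```python
import threading
import numpy as np

def naive_inpaint(color, depth, mask):
    # Trivial stand-in for the real patch-based inpainter: fill the
    # masked pixels with the mean of the unmasked ones, in both the
    # color image and the depth image.
    color, depth = color.copy(), depth.copy()
    color[mask] = color[~mask].mean(axis=0)
    depth[mask] = depth[~mask].mean()
    return color, depth

class BackgroundInpainter:
    # Runs the seconds-long inpainting pass on a worker thread so the
    # tracking/rendering loop keeps its real-time budget.
    def __init__(self, inpaint_fn):
        self.inpaint_fn = inpaint_fn
        self.result = None  # holds (color, depth) once the pass is done

    def submit(self, color, depth, mask):
        def work():
            self.result = self.inpaint_fn(color, depth, mask)
        threading.Thread(target=work, daemon=True).start()

    def ready(self):
        return self.result is not None

# Toy usage: inpaint a small masked region of a synthetic keyframe.
color = np.random.rand(480, 640, 3)
depth = np.full((480, 640), 2.0)
mask = np.zeros((480, 640), dtype=bool)
mask[200:232, 300:332] = True

worker = BackgroundInpainter(naive_inpaint)
worker.submit(color, depth, mask)
# ...the interactive loop keeps running; once worker.ready() is True,
# the inpainted color/depth keyframe is projected back into 3D.
```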
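Once a keyframe has been inpainted in color and depth, projecting the 2D result back into 3D amounts to standard pinhole unprojection. The second sketch shows that step in isolation; the intrinsics matrix K and the image resolution are assumed example values, not values from the project.

```python
import numpy as np

def unproject_depth(depth, K):
    # Lift a (possibly inpainted) depth image into a camera-space
    # point cloud using pinhole intrinsics K (3x3).
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pixels @ np.linalg.inv(K).T   # one back-projected ray per pixel
    return rays * depth.reshape(-1, 1)   # scale each ray by its depth

# Example with assumed intrinsics for a 640x480 camera.
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])
points = unproject_depth(np.full((480, 640), 2.0), K)  # flat wall at 2 m
```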
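Mechanically, the data-driven frame selection reduces to a weighted sum of per-frame heuristic measurements. The third sketch shows only this mechanism: the feature values and coefficients below are invented for illustration, while the actual six criteria and their robustly fitted weights are those reported in the "Good Keyframes to Inpaint" paper.

```python
import numpy as np

def pick_first_frame(features, weights):
    # Score every candidate frame as a linear combination of its
    # heuristic measurements and return the best-scoring index.
    return int(np.argmax(features @ weights))

# Toy example: three candidate frames, six heuristic values each
# (e.g. mask size, surrounding texture, viewing angle, ...). Both the
# measurements and the coefficients are made up for illustration.
features = np.array([[0.2, 0.8, 0.5, 0.1, 0.9, 0.4],
                     [0.7, 0.6, 0.9, 0.3, 0.8, 0.5],
                     [0.1, 0.2, 0.3, 0.9, 0.1, 0.2]])
weights = np.array([0.30, 0.20, 0.25, -0.10, 0.20, 0.15])
print("start inpainting from frame", pick_first_frame(features, weights))
```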
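As background on why a multi-layer image gives a canonical inpainting space: the image is rendered by compositing its fronto-parallel layers back to front with the standard "over" operator, so content painted into one layer reprojects consistently to nearby viewpoints. The fourth sketch implements this generic compositing rule, not the project's specific inpainting algorithm.

```python
import numpy as np

def composite_mpi(colors, alphas):
    # Back-to-front "over" compositing of fronto-parallel layers.
    # colors: (D, H, W, 3) RGB layers, index 0 = farthest plane.
    # alphas: (D, H, W, 1) per-layer opacities in [0, 1].
    out = np.zeros_like(colors[0])
    for rgb, a in zip(colors, alphas):
        out = rgb * a + out * (1.0 - a)
    return out

# Toy example: a far opaque gray layer behind a half-transparent
# near white layer, both 4x4 pixels.
H, W = 4, 4
colors = np.stack([np.full((H, W, 3), 0.2), np.full((H, W, 3), 0.9)])
alphas = np.stack([np.ones((H, W, 1)), np.full((H, W, 1), 0.5)])
image = composite_mpi(colors, alphas)  # -> uniform 0.55 gray
```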
- Technische Universität Graz - 100%
- Vincent Lepetit, Université Paris-Est - France
- Hideo Saito, Keio University - Japan
Research Output
- 7 Publications
- 2 Datasets & models
- 3 Scientific Awards
Publications
- Saito H (2024): "Toward Multi-Plane Image Reconstruction from a Casually Captured Focal Stack". Conference Proceeding, pp. 72-82. DOI: 10.5220/0012438700003660
- Mori S (2024): "A Way Out of the Replication Crisis in Diminished Reality Research". Conference Proceeding, pp. 35-36. DOI: 10.1109/ismar-adjunct64951.2024.00018
- Horikawa K (2025): "Dense Depth from Event Focal Stack". Conference Proceeding, pp. 4545-4553. DOI: 10.1109/wacv61041.2025.00446
- Bergfelt M (2023): "Compacting Singleshot Multi-Plane Image via Scale Adjustment". Conference Proceeding, pp. 549-554. DOI: 10.1109/ismar-adjunct60411.2023.00117
- Ishikawa R (2023): "Multi-Layer Scene Representation from Composed Focal Stacks". Journal Article, IEEE Transactions on Visualization and Computer Graphics, pp. 4719-4729. DOI: 10.1109/tvcg.2023.3320248
- Mori S (2023): "Exemplar-Based Inpainting for 6DOF Virtual Reality Photos". Journal Article, IEEE Transactions on Visualization and Computer Graphics, pp. 4644-4654. DOI: 10.1109/tvcg.2023.3320220
- Mori S (2023): "Good Keyframes to Inpaint". Journal Article, IEEE Transactions on Visualization and Computer Graphics, pp. 3989-4000. DOI: 10.1109/tvcg.2022.3176958
Datasets & models
- "Data related to 'Exemplar-Based Inpainting for 6DOF Virtual Reality Photos'" (2026). Database/Collection of data, public access. DOI: 10.5281/zenodo.18315105
- "Data related to 'Good Keyframes to Inpaint'" (2026). Database/Collection of data, public access. DOI: 10.5281/zenodo.18315264
Scientific Awards
- Special Interest Group on Mixed Reality Award (2025). Research prize, national level.
- IEEE ISMAR 2023 Best Journal Paper Award Nominee (2023). Research prize, continental/international level.
- IEEE ISMAR 2023 Best Journal Paper Award (2023). Research prize, continental/international level.