Learning multi-modal features for dense matching-based confidence estimation

dc.identifier.uri http://dx.doi.org/10.15488/14346
dc.identifier.uri https://www.repo.uni-hannover.de/handle/123456789/14463
dc.contributor.author Heinrich, K.
dc.contributor.author Mehltretter, M.
dc.contributor.editor Paparoditis, N.
dc.contributor.editor Mallet, C.
dc.contributor.editor Lafarge, F.
dc.contributor.editor Yang, M.Y.
dc.contributor.editor Yilmaz, A.
dc.contributor.editor Wegner, J.D.
dc.contributor.editor Remondino, F.
dc.contributor.editor Fuse, T.
dc.contributor.editor Toschi, I.
dc.date.accessioned 2023-07-28T06:35:44Z
dc.date.available 2023-07-28T06:35:44Z
dc.date.issued 2021
dc.identifier.citation Heinrich, K.; Mehltretter, M.: Learning multi-modal features for dense matching-based confidence estimation. In: Paparoditis, N.; Mallet, C.; Lafarge, F.; Yang, M.Y.; Yilmaz, A. et al. (Eds.): XXIV ISPRS Congress "Imaging today, foreseeing tomorrow", Commission II. Katlenburg-Lindau : Copernicus Publications, 2021 (The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences ; XLIII-B2-2021), S. 91-99. DOI: https://doi.org/10.5194/isprs-archives-xliii-b2-2021-91-2021
dc.description.abstract In recent years, the ability to assess the uncertainty of depth estimates in the context of dense stereo matching has received increased attention due to its potential to detect erroneous estimates. In particular, the introduction of deep learning approaches has greatly improved general performance, with feature extraction from multiple modalities proving to be highly advantageous due to the unique and different characteristics of each modality. However, most work in the literature relies on mono-, bi- or, rarely, tri-modal input and does not consider the potential of combining more than three modalities. To further advance the idea of combining different types of features for confidence estimation, a CNN-based approach is proposed in this work that exploits uncertainty cues from up to four modalities. For this purpose, a state-of-the-art local-global approach is used as the baseline and extended accordingly. Additionally, a novel disparity-based modality named warped difference is presented to support uncertainty estimation in common failure cases of dense stereo matching. The general validity and improved performance of the proposed approach are demonstrated and compared against the bi-modal baseline in an evaluation on three datasets using two common dense stereo matching techniques. eng
dc.language.iso eng
dc.publisher Katlenburg-Lindau : Copernicus Publications
dc.relation.ispartof XXIV ISPRS Congress "Imaging today, foreseeing tomorrow", Commission II
dc.relation.ispartofseries The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences ; XLIII-B2-2021
dc.rights CC BY 4.0 International
dc.rights.uri https://creativecommons.org/licenses/by/4.0
dc.subject CNN eng
dc.subject Cost Volume eng
dc.subject Fusion Network eng
dc.subject Local-Global Approach eng
dc.subject Uncertainty eng
dc.subject.classification Konferenzschrift ger
dc.subject.ddc 550 | Geowissenschaften
dc.title Learning multi-modal features for dense matching-based confidence estimation eng
dc.type BookPart
dc.type Text
dc.relation.essn 2194-9034
dc.relation.doi https://doi.org/10.5194/isprs-archives-xliii-b2-2021-91-2021
dc.bibliographicCitation.volume XLIII-B2-2021
dc.bibliographicCitation.firstPage 91
dc.bibliographicCitation.lastPage 99
dc.description.version publishedVersion
tib.accessRights frei zugänglich
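
The abstract mentions a disparity-based "warped difference" modality but does not spell out its computation. The sketch below is a minimal, hypothetical illustration of one plausible formulation, assuming a rectified stereo pair and a left-view disparity map: the right image is warped into the left view using the disparity and compared photometrically to the left image. The function name warped_difference, the nearest-neighbour sampling and the test data are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (assumption, not the paper's exact formulation):
# per-pixel photometric residual after disparity-based warping. Large
# residuals tend to coincide with occlusions and mismatches, i.e. common
# failure cases of dense stereo matching.
import numpy as np


def warped_difference(left, right, disparity):
    """Absolute residual between the left image and the warped right image.

    left, right: (H, W) grayscale images as float arrays.
    disparity:   (H, W) disparity map estimated for the left view.
    """
    h, w = left.shape
    rows = np.arange(h)[:, None].repeat(w, axis=1)
    cols = np.arange(w)[None, :].repeat(h, axis=0).astype(np.float32)
    # Sample the right image at x - d for every left pixel (nearest neighbour
    # for simplicity; a real pipeline would interpolate, e.g. bilinearly).
    src_cols = np.clip(np.round(cols - disparity).astype(int), 0, w - 1)
    warped_right = right[rows, src_cols]
    return np.abs(left - warped_right)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.random((4, 8)).astype(np.float32)
    # A copy of the left image shifted by a constant disparity of 2 should
    # yield a near-zero residual in the valid (non-border) region.
    right = np.roll(left, -2, axis=1)
    disp = np.full((4, 8), 2.0, dtype=np.float32)
    print(warped_difference(left, right, disp))
```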

