
Few-camera Dynamic Scene Variational Novel-view Synthesis

Abstract:

[Figure 1 pipeline overview. Stage labels: few-camera videos; structure from motion (efficient pose estimation robust to dynamic objects, relaxed point reconstruction without temporal consistency); sparse noisy point clouds and camera poses; variational optimization with temporal consistency along a virtual camera path; novel depth and RGB.]

Figure 1: Given a set of video sequences from a few cameras only, our method computes camera poses and sparse points, then optimizes those points into a novel video sequence following a user-defined camera path. Our space-time SfM relaxes temporal consistency for points on dynamic objects and instead robustly adds consistency during the novel view synthesis via our variational formulation.
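The caption describes a two-stage pipeline: a space-time structure-from-motion step that yields camera poses and sparse points, followed by a variational optimization that adds temporal consistency while rendering the novel view along a user-defined path. A minimal Python sketch of that structure is given below; the function names, data shapes, and stub values are assumptions made for illustration only and do not reflect the authors' implementation.

    # Hypothetical sketch of the pipeline described in Figure 1.
    # All names and shapes below are illustrative assumptions, not the authors' code.
    import numpy as np

    def space_time_sfm(videos):
        """Estimate camera poses and a sparse point cloud from a few videos.

        Temporal consistency is deliberately relaxed for points on dynamic
        objects, keeping pose estimation robust at the cost of noisy points.
        """
        # Stub: real code would run feature matching and bundle adjustment.
        poses = [np.eye(4) for _ in videos]   # one identity pose per video (stub)
        sparse_points = np.zeros((0, 3))      # empty Nx3 point cloud (stub)
        return poses, sparse_points

    def variational_synthesis(poses, sparse_points, virtual_path):
        """Optimize depth and RGB with a temporal-consistency term and render
        one frame per pose of the user-defined virtual camera path."""
        # Stub: real code would minimize a variational energy over the video.
        return [np.zeros((480, 640, 3)) for _ in virtual_path]

    if __name__ == "__main__":
        videos = ["cam0.mp4", "cam1.mp4"]              # a few input videos (stub)
        virtual_path = [np.eye(4) for _ in range(10)]  # user-defined camera path (stub)
        poses, points = space_time_sfm(videos)
        frames = variational_synthesis(poses, points, virtual_path)
        print(f"Rendered {len(frames)} novel-view frames")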
Document type: Conference papers

https://hal.archives-ouvertes.fr/hal-03372486
Contributor: Beatrix-Emőke Fülöp-Balogh
Submitted on: Sunday, October 10, 2021 - 6:38:29 PM
Last modification on: Wednesday, November 3, 2021 - 7:54:38 AM

File

JFIG2020___Consistent_Depth_Es...
Files produced by the author(s)

Identifiers

  • HAL Id: hal-03372486, version 1

Citation

Beatrix-Emőke Fülöp-Balogh, Eleanor Tursman, Nicolas Bonneel, James Tompkin, Julie Digne. Few-camera Dynamic Scene Variational Novel-view Synthesis. JOURNÉES FRANÇAISES D'INFORMATIQUE GRAPHIQUE, Nov 2020, Nancy, France. ⟨hal-03372486⟩

Metrics

Record views: 23
File downloads: 20