
Few-camera Dynamic Scene Variational Novel-view Synthesis

Abstract: [Figure 1 — pipeline overview: few-camera videos → structure from motion (camera poses, sparse noisy point clouds) → variational optimization with temporal consistency → novel depth and RGB along a virtual camera path.] Given a set of video sequences from only a few cameras, our method computes camera poses and sparse points, then optimizes those points into a novel video sequence following a user-defined camera path. Our space-time SfM relaxes temporal consistency for points on dynamic objects, and instead robustly enforces consistency during novel-view synthesis via our variational formulation.
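The abstract's "variational optimization with temporal consistency" can be illustrated with a toy example. The sketch below is not the authors' method: it smooths a single noisy per-frame depth value by minimizing a standard data term plus a temporal-smoothness term, sum_t (x_t - z_t)^2 + lam * sum_t (x_{t+1} - x_t)^2, solved in closed form via the normal equations. The function name and the weight `lam` are invented for illustration.

```python
import numpy as np

def temporally_consistent_depths(noisy, lam=5.0):
    """Toy variational smoothing over time.

    Minimizes  sum_t (x_t - z_t)^2 + lam * sum_t (x_{t+1} - x_t)^2
    for the depth sequence x, given noisy per-frame depths z.
    The minimizer solves (I + lam * L) x = z, where L is the
    1D graph Laplacian over consecutive frames.
    """
    n = len(noisy)
    L = np.zeros((n, n))
    for t in range(n - 1):
        # Each temporal edge (t, t+1) contributes to the Laplacian.
        L[t, t] += 1.0
        L[t + 1, t + 1] += 1.0
        L[t, t + 1] -= 1.0
        L[t + 1, t] -= 1.0
    return np.linalg.solve(np.eye(n) + lam * L, np.asarray(noisy, dtype=float))

# A roughly constant true depth corrupted by per-frame noise is
# pulled back toward temporal constancy.
noisy = [2.0, 2.4, 1.7, 2.2, 1.9, 2.1]
smooth = temporally_consistent_depths(noisy)
```

Because the Laplacian's rows sum to zero, the smoothed sequence keeps the same total (and mean) as the input while reducing frame-to-frame jitter; increasing `lam` trades data fidelity for smoothness.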
Document type: Conference papers
Contributor: Beatrix-Emoke Fulop-Balogh
Submitted on: Sunday, October 10, 2021
Last modified on: Wednesday, November 3, 2021




  • HAL Id: hal-03372486, version 1


Beatrix-Emőke Fülöp-Balogh, Eleanor Tursman, Nicolas Bonneel, James Tompkin, Julie Digne. Few-camera Dynamic Scene Variational Novel-view Synthesis. JOURNÉES FRANÇAISES D'INFORMATIQUE GRAPHIQUE, Nov 2020, Nancy, France. ⟨hal-03372486⟩


