3DTV: A Feedforward Interpolation Network for Real-Time View Synthesis
About
Real-time free-viewpoint rendering requires balancing multi-camera redundancy against the latency constraints of interactive applications. We address this challenge by combining lightweight geometry with learning and propose 3DTV, a feedforward network for real-time sparse-view interpolation. A Delaunay-based triplet selection ensures angular coverage for each target view. Building on this, we introduce a pose-aware depth module that estimates a coarse-to-fine depth pyramid, enabling efficient feature reprojection and occlusion-aware blending. Unlike methods that require scene-specific optimization, 3DTV runs in a single feedforward pass without retraining, making it practical for AR/VR, telepresence, and interactive applications. Our experiments on challenging multi-view video datasets demonstrate that 3DTV consistently achieves a strong balance of quality and efficiency, outperforming recent real-time novel-view baselines. Crucially, 3DTV avoids explicit proxies, enabling robust rendering across diverse scenes. This makes it a practical solution for low-latency multi-view streaming and interactive rendering. Project Page: https://stefanmschulz.github.io/3DTV_webpage/
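The Delaunay-based triplet selection described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes cameras are parameterized by 2-D coordinates (e.g. azimuth/elevation on the capture rig), triangulates them, and picks the triangle enclosing the target view so the three source cameras surround it angularly. The function name and the nearest-camera fallback are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import Delaunay

def select_triplet(cam_xy, target_xy):
    """Pick the three source cameras whose Delaunay triangle contains the target view.

    cam_xy:    (N, 2) camera coordinates in a 2-D parameterization
               (e.g. azimuth/elevation of the capture rig).
    target_xy: (2,) target view in the same parameterization.
    Returns an array of three camera indices.
    """
    tri = Delaunay(cam_xy)
    # Locate the triangle (simplex) that encloses the target view.
    simplex = tri.find_simplex(np.asarray(target_xy)[None, :])[0]
    if simplex == -1:
        # Target lies outside the camera hull: fall back to the
        # three nearest cameras (a hypothetical fallback, not from the paper).
        dists = np.linalg.norm(cam_xy - target_xy, axis=1)
        return np.argsort(dists)[:3]
    return tri.simplices[simplex]
```

Enclosing the target inside a triangle (rather than simply taking nearest neighbors) is what guarantees angular coverage: the target direction is a convex combination of the three source directions, so every part of the target view is seen from at least one side.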
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Novel View Synthesis | LLFF | PSNR | 10.3 | 130 |
| Novel View Synthesis | ZJU-MoCap | PSNR | 24.1 | 31 |
| Novel View Synthesis | DNA Rendering dataset (test) | Memory (GB) | 2.2 | 18 |
| Novel View Synthesis | THuman 2.1 | PSNR | 26.7 | 8 |
| Novel View Synthesis | RIFTCast | PSNR | 25.7 | 8 |
| Novel View Synthesis | MVHuman | PSNR | 25.4 | 8 |
| Novel View Synthesis | DNA-Rendering | PSNR | 25.9 | 8 |