
OmniField: Conditioned Neural Fields for Robust Multimodal Spatiotemporal Learning

About

Multimodal spatiotemporal learning on real-world experimental data is constrained by two challenges: (i) within-modality measurements are sparse, irregular, and noisy (QA/QC artifacts) yet cross-modally correlated; and (ii) the set of available modalities varies across space and time, shrinking the usable record unless models can adapt to arbitrary modality subsets at train and test time. We propose OmniField, a continuity-aware framework that learns a continuous neural field conditioned on the available modalities and iteratively fuses cross-modal context. A multimodal crosstalk block paired with iterative cross-modal refinement aligns signals before the decoder, enabling unified reconstruction, interpolation, forecasting, and cross-modal prediction without gridding or surrogate preprocessing. Extensive evaluations show that OmniField consistently outperforms eight strong multimodal spatiotemporal baselines. Under heavy simulated sensor noise, performance remains close to clean-input levels, highlighting robustness to corrupted measurements.
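To make the core idea concrete, here is a minimal NumPy sketch of a neural field conditioned on a variable subset of modalities. All dimensions, weight shapes, and the masked-mean fusion are illustrative assumptions, not the paper's actual architecture (which uses crosstalk blocks and iterative refinement): the point is only that the field takes a space-time coordinate plus a conditioning code pooled over whichever modalities happen to be observed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 3 modalities,
# 32-d conditioning code, 64-d hidden layer.
M, D_COND, D_HID = 3, 32, 64

# Per-modality linear encoders and a small MLP field, random weights
# (a real model would train these).
W_enc = rng.standard_normal((M, 1, D_COND)) * 0.1
W1 = rng.standard_normal((3 + D_COND, D_HID)) * 0.1
W2 = rng.standard_normal((D_HID, 1)) * 0.1

def field(coords, obs, mask):
    """coords: (B, 3) space-time query points; obs: (B, M, 1) per-modality
    values; mask: (B, M) with 1 where a modality is observed, 0 if missing."""
    # Encode each modality independently -> (B, M, D_COND)
    feats = np.stack([obs[:, i] @ W_enc[i] for i in range(M)], axis=1)
    # Masked mean over the observed modalities only, so the model
    # accepts any subset at train or test time.
    m = mask[..., None]
    cond = (feats * m).sum(axis=1) / np.clip(m.sum(axis=1), 1, None)
    # Continuous field: query coordinate concatenated with the code.
    h = np.tanh(np.concatenate([coords, cond], axis=-1) @ W1)
    return h @ W2  # (B, 1) predicted field value

coords = rng.random((8, 3))
obs = rng.random((8, M, 1))
mask = np.tile(np.array([1.0, 0.0, 1.0]), (8, 1))  # modality 2 unavailable
print(field(coords, obs, mask).shape)  # (8, 1)
```

Because the conditioning code is a pool over observed modalities, dropping a sensor only changes the pooled code rather than breaking the input shape, which is the property the abstract's "arbitrary subsets" claim relies on.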

Kevin Valencia, Thilina Balasooriya, Xihaier Luo, Shinjae Yoo, David Keetae Park • 2025

Related benchmarks

Task                                  Dataset                    Result        Rank
Spatiotemporal Field Reconstruction   Navier-Stokes (Full)       CRPS 0.4765   30
Spatiotemporal Field Reconstruction   Navier-Stokes 10% Subset   CRPS 1.0651   30
Spatiotemporal forecasting            AirDelhi AD-B              CRPS 29.244   10
