
Beyond RGB: Very High Resolution Urban Remote Sensing With Multimodal Deep Networks

About

In this work, we investigate various methods to deal with semantic labeling of very high resolution multi-modal remote sensing data. In particular, we study how deep fully convolutional networks can be adapted to deal with multi-modal and multi-scale remote sensing data for semantic labeling. Our contributions are threefold: a) we present an efficient multi-scale approach to leverage both a large spatial context and the high resolution data, b) we investigate early and late fusion of Lidar and multispectral data, c) we validate our methods on two public datasets with state-of-the-art results. Our results indicate that late fusion makes it possible to recover errors stemming from ambiguous data, while early fusion allows for better joint-feature learning, but at the cost of higher sensitivity to missing data.
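The two fusion strategies compared in the abstract can be sketched in a few lines. The following NumPy toy example is illustrative only: the `toy_net` function stands in for a fully convolutional network, and all shapes, seeds, and names are assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs: a 3-channel multispectral patch and a 1-channel Lidar (height) patch.
H = W = 8
ms = rng.random((3, H, W))     # multispectral channels
lidar = rng.random((1, H, W))  # Lidar-derived channel
n_classes = 5

def toy_net(x, n_out, seed):
    """Stand-in for a network: a fixed random 1x1 'convolution' over channels."""
    w = np.random.default_rng(seed).standard_normal((n_out, x.shape[0]))
    return np.einsum('oc,chw->ohw', w, x)

# Early fusion: stack modalities along the channel axis and feed one network,
# so features are learned jointly from the first layer onward.
early_in = np.concatenate([ms, lidar], axis=0)       # (4, H, W)
early_scores = toy_net(early_in, n_classes, seed=1)  # (n_classes, H, W)

# Late fusion: run one network per modality and average the class scores,
# letting one stream compensate for the other's ambiguous predictions.
late_scores = 0.5 * (toy_net(ms, n_classes, seed=2)
                     + toy_net(lidar, n_classes, seed=3))

early_map = early_scores.argmax(axis=0)  # per-pixel class labels, shape (H, W)
late_map = late_scores.argmax(axis=0)
```

Note the trade-off the abstract describes: early fusion exposes the network to all channels jointly, but if one modality is missing or corrupted, the shared features degrade; late fusion keeps the streams independent until the final averaging.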

Nicolas Audebert, Bertrand Le Saux, Sébastien Lefèvre • 2017

Related benchmarks

Task                     | Dataset                  | Result    | Rank
-------------------------|--------------------------|-----------|-----
Semantic segmentation    | Potsdam (test)           | --        | 104
Semantic segmentation    | Vaihingen (test)         | OA 89.8   | 43
Map extraction           | Shanghai regions (test)  | IoU 62.6  | 18
Map extraction           | Singapore regions (test) | IoU 60.5  | 18
Map extraction           | Porto regions (test)     | IoU 68.8  | 18
Post-flood water mapping | C2S-MS Floods (test)     | IoU 83.33 | 14
