Self-supervised audiovisual representation learning for remote sensing data
Abstract
Many deep learning approaches make extensive use of backbone networks that are pretrained on large datasets like ImageNet and then fine-tuned for the task at hand. In remote sensing, the lack of comparably large annotated datasets and the diversity of sensing platforms impede similar developments. To contribute towards the availability of pretrained backbone networks in remote sensing, we devise a self-supervised approach for pretraining deep neural networks. By exploiting the correspondence between co-located imagery and audio recordings, this pretraining is done entirely label-free, without the need for manual annotation. For this purpose, we introduce the SoundingEarth dataset, which consists of co-located aerial imagery and crowd-sourced audio samples from all around the world. Using this dataset, we pretrain ResNet models to map samples from both modalities into a common embedding space, encouraging the models to understand key properties of a scene that influence both its visual and its auditory appearance. To validate the usefulness of the proposed approach, we compare the transfer learning performance of the resulting pretrained weights against weights obtained through other means. Fine-tuning the models on a number of commonly used remote sensing datasets shows that our approach outperforms existing pretraining strategies for remote sensing imagery. The dataset, code, and pretrained model weights are available at https://github.com/khdlr/SoundingEarth.
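One common way to realize the cross-modal alignment described above is a symmetric InfoNCE-style contrastive objective: embeddings of co-located image/audio pairs are pulled together, while mismatched pairs within a batch are pushed apart. The following is a minimal NumPy sketch of such a loss, not the paper's actual implementation; the function names and the temperature value are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x):
    # Scale each row to unit length so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def log_softmax(z, axis):
    # Numerically stable log-softmax.
    z = z - z.max(axis=axis, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=axis, keepdims=True))

def symmetric_contrastive_loss(img_emb, audio_emb, temperature=0.07):
    """InfoNCE-style loss for a batch of co-located image/audio embeddings.

    Row i of both matrices is assumed to describe the same location, so the
    diagonal of the cross-modal similarity matrix holds the positive pairs.
    Illustrative sketch; the paper's exact objective may differ.
    """
    sims = l2_normalize(img_emb) @ l2_normalize(audio_emb).T / temperature
    idx = np.arange(len(sims))
    loss_i2a = -log_softmax(sims, axis=1)[idx, idx].mean()  # image -> audio
    loss_a2i = -log_softmax(sims, axis=0)[idx, idx].mean()  # audio -> image
    return (loss_i2a + loss_a2i) / 2
```

In a training loop, `img_emb` and `audio_emb` would come from the two ResNet encoders; minimizing this loss drives matched pairs toward high similarity, so the loss for correctly paired batches is lower than for shuffled ones.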