Publication
IGARSS 2024
Conference paper
TOWARDS EFFICIENT SATELLITE DATA REPRESENTATION LEARNING WITH CONSISTENCY LOSS
Abstract
Foundation models are often pretrained on large datasets and have been valuable in improving the efficiency of model training for language and visual processing tasks. An increasing amount of research has focused on foundation models for satellite data, which would benefit a wide range of applications such as climate impact modelling. As large quantities of unlabeled satellite data are collected daily, self-supervised learning methods such as image inpainting are key to pretraining these models. Reducing the amount of data required for pretraining by improving learning efficiency could make the implementation of such models more feasible. This research proposes the use of an augmentation-based consistency loss to improve pretraining efficiency while enhancing downstream performance. Two variations of the proposed approach are evaluated by finetuning pretrained models on flood segmentation and multilabel land cover downstream tasks. Findings show that incorporating consistency loss can enhance downstream performance, although the degree of improvement depends on the downstream task. It is further demonstrated that the downstream improvements can be achieved even with reduced pretraining data.
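The abstract does not specify the form of the consistency loss, but a common construction penalizes the distance between a model's embeddings of two augmented views of the same input. The sketch below illustrates that general idea in numpy; the random-projection encoder, the noise augmentation, and the mean-squared-error penalty are all placeholder assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "encoder": a fixed random linear projection standing in
# for a pretrained backbone (assumption, not the paper's architecture).
W = rng.normal(size=(16, 8))

def encode(x):
    return x @ W

def augment(x, noise_scale=0.05):
    # Additive-noise augmentation as a stand-in for the image
    # augmentations (e.g. flips/crops) typically used on satellite tiles.
    return x + rng.normal(scale=noise_scale, size=x.shape)

def consistency_loss(emb_a, emb_b):
    # Penalize disagreement between embeddings of two views of the
    # same inputs; minimizing this encourages augmentation-invariant
    # representations.
    return float(np.mean((emb_a - emb_b) ** 2))

x = rng.normal(size=(4, 16))          # batch of 4 flattened patches
loss = consistency_loss(encode(augment(x)), encode(augment(x)))
print(loss >= 0.0)
```

In a full pretraining setup this term would typically be added, with a weighting coefficient, to the primary self-supervised objective (here, the inpainting loss).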