Publication
AGU 2024
Poster
Exploring Different Types of Foundation Models on Flood Segmentation Datasets
Abstract
Semantic segmentation AI models play a critical role in tasks requiring pixel-level classification, such as coverage detection or disaster evaluation. However, these models often require large amounts of training data and extensive model training. The latest foundation models can potentially improve data efficiency, achieving better accuracy with less training data. This paper explores various foundation models and training schemes for flood segmentation datasets in scenarios where limited training data is available, including prompt tuning on visual-language models (VLMs) and fine-tuning models specialized for geographical data. We benchmark the performance of different models under various tuning schemes. Our experimental results show that foundation models can improve accuracy on flood segmentation tasks, especially when data availability is restricted. Notably, prompt tuning on VLMs, with a number of learnable parameters similar to linear probing on conventional segmentation models, outperforms conventional models in all tested data-availability scenarios.
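To make the parameter-budget comparison concrete, the sketch below contrasts the trainable parameters of prompt tuning (a small set of learnable prompt tokens plus a lightweight head on a frozen backbone) with linear probing (only a linear head on frozen features). All names and sizes here (EMBED_DIM, NUM_PROMPTS, the 1x1-convolution heads) are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical settings; actual values depend on the chosen foundation model.
EMBED_DIM = 768    # token embedding width of the frozen backbone (assumed)
NUM_PROMPTS = 32   # number of learnable prompt tokens (assumed)
NUM_CLASSES = 2    # flood / non-flood

# Prompt tuning: only the prompt tokens and a small prediction head are trained;
# the foundation-model backbone stays frozen.
prompt_tokens = nn.Parameter(torch.empty(NUM_PROMPTS, EMBED_DIM).normal_(std=0.02))
prompt_head = nn.Conv2d(EMBED_DIM, NUM_CLASSES, kernel_size=1)

# Linear probing: only a linear head on top of frozen backbone features is trained.
linear_head = nn.Conv2d(EMBED_DIM, NUM_CLASSES, kernel_size=1)

def count_trainable(*items) -> int:
    """Count trainable parameters across tensors and modules."""
    total = 0
    for item in items:
        if isinstance(item, nn.Module):
            total += sum(p.numel() for p in item.parameters() if p.requires_grad)
        else:
            total += item.numel()
    return total

print("prompt tuning  :", count_trainable(prompt_tokens, prompt_head))
print("linear probing :", count_trainable(linear_head))
```

With these assumed sizes, the two budgets are of the same order of magnitude (tens of thousands of parameters), which is the sense in which the abstract compares prompt tuning against linear probing.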