IBM and NASA open source the largest geospatial AI foundation model on Hugging Face
The move aims to widen access to NASA satellite data and accelerate climate-related discoveries.
Extreme heat has gripped many parts of the world this summer, increasing the risk of wildfires and drought. Mapping the aftermath of these events can help communities predict which areas are most at risk in the future and plan where to focus their adaptation efforts.
Climate change poses numerous risks, and the need to understand quickly and clearly how Earth’s landscape is changing is one reason IBM set out six months ago, in collaboration with NASA, to build an AI model that could speed up the analysis of satellite images and boost scientific discovery. Another motivator was the desire to make nearly 250,000 terabytes of NASA mission data accessible to more people.
To further both goals, IBM is now making its foundation model public through the open-source AI platform Hugging Face. It is the largest geospatial model hosted on Hugging Face to date, and the first open-source AI foundation model built in collaboration with NASA. By IBM’s estimates, it can analyze geospatial data up to four times faster than state-of-the-art deep-learning models, using half as much labeled data.
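For readers who want to try the model, the sketch below shows one way to pull released weights from the Hugging Face Hub. The repository ID and filename are placeholders, not the model’s confirmed names; the model card on Hugging Face lists the actual paths.

```python
# A minimal sketch of downloading a released checkpoint from the Hugging
# Face Hub. Both identifiers below are hypothetical placeholders; check the
# model card for the real repository ID and file names.
from huggingface_hub import hf_hub_download

checkpoint_path = hf_hub_download(
    repo_id="ibm-nasa-geospatial/hls-foundation-model",  # placeholder repo ID
    filename="checkpoint.pt",                            # placeholder filename
)
print(f"Checkpoint saved to {checkpoint_path}")
```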
A commercial version of the model, part of IBM’s AI and data platform watsonx, will be available through the IBM Environmental Intelligence Suite (EIS) later this year.
“AI remains a science-driven field, and science can only progress through information sharing and collaboration,” said Jeff Boudier, head of product and growth at Hugging Face. “This is why open-source AI and the open release of models and datasets are so fundamental to the continued progress of AI, and making sure the technology will benefit as many people as possible.”
IBM fine-tuned the model to allow users to map the extent of past U.S. floods and wildfires, information that can be used to predict areas of future risk. But with additional fine-tuning, the model could be redeployed for tasks like tracking deforestation, predicting crop yields, or detecting and monitoring greenhouse gases.
Foundation models are highly versatile, and by open sourcing this one, Hugging Face, NASA, and IBM hope that researchers worldwide will be motivated to improve on it and build other geospatial models and applications. IBM and NASA researchers are currently working with Clark University to adapt the model for other applications, including time-series segmentation and similarity search.
"AI foundation models for Earth observations present enormous potential to address intricate scientific problems and expedite the broader deployment of AI across diverse applications,” says Rahul Ramachandran, IMPACT Manager and a senior research scientist at Marshall. “We call on the Earth science and applications communities to evaluate this initial HLS foundation model for a variety of uses and share feedback.”
Underpinning all foundation models is the transformer, an AI architecture that can turn heaps of raw data — text, audio, or in this case, satellite images — into a compressed representation that captures the data’s basic structure. From this scaffold of knowledge, a foundation model can be tailored to a wide variety of tasks with some extra labeled data and tuning.
Traditionally, analyzing satellite data has been highly tedious because of the time required for human experts to annotate features like crops and trees in each satellite image. Foundation models cut out a lot of this manual effort by extracting the structure of raw, natural images so that fewer labeled examples are needed.
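To make that pretrain-then-tune pattern concrete, here is a toy PyTorch sketch of the workflow the two paragraphs above describe: a pretrained encoder compresses raw inputs into an embedding, and only a small task head needs to be trained on labeled examples. All dimensions and module names here are invented for illustration, not taken from the IBM/NASA model.

```python
# Toy illustration (not the actual IBM/NASA model) of tuning a small task
# head on top of a frozen, pretrained encoder, so far fewer labels are needed.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in for the pretrained transformer backbone."""
    def __init__(self, in_dim=768, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.GELU(), nn.Linear(256, emb_dim)
        )

    def forward(self, x):
        return self.net(x)

encoder = TinyEncoder()            # in practice: load pretrained weights here
head = nn.Linear(128, 2)           # e.g. flooded vs. not flooded, per patch

for p in encoder.parameters():     # freeze the backbone; tune only the head
    p.requires_grad = False

x = torch.randn(4, 768)            # stand-in for tokenized satellite patches
logits = head(encoder(x))          # compressed representation -> prediction
```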
In January, under a NASA Space Act Agreement, IBM began training a foundation model on a sliver of NASA’s Harmonized Landsat Sentinel-2 (HLS) dataset, which provides a full view of Earth every two to three days. At a resolution of 30 meters per pixel, HLS images are detailed enough to detect changes in land use but not quite detailed enough to identify individual trees.
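As a quick sense of scale, the arithmetic below shows what 30-meter pixels mean on the ground. The 224-pixel window size is an assumption borrowed from common vision-transformer setups, not a stated spec of this model.

```python
# Back-of-the-envelope scale check for 30 m/pixel HLS imagery.
pixel_size_m = 30    # HLS ground resolution, from the article
window_px = 224      # assumed ViT input window, for illustration only
side_km = pixel_size_m * window_px / 1000
print(f"A {window_px}x{window_px} window spans ~{side_km:.1f} km per side")  # ~6.7 km
```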
Built on a vision transformer and a masked autoencoder architecture, the model has been adapted to process satellite images by expanding its spatial attention mechanism to include time. IBM trained the model on its AI supercomputer, Vela, and leveraged PyTorch and ecosystem libraries for training and tuning on labeled images of floods and burn scars from wildfires. In tests, researchers saw a 15% accuracy boost over state-of-the-art deep-learning models for mapping floods and fires.
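The sketch below illustrates the general idea of extending a vision transformer’s spatial patching to a time dimension, so that attention can relate patches across acquisition dates. Every dimension and layer choice here is an illustrative assumption; the model’s actual configuration is not specified in this article.

```python
# Hedged sketch: embed (time, bands, H, W) stacks as spatio-temporal tokens
# instead of single-image patches, then run a standard transformer over them.
import torch
import torch.nn as nn

B, T, C, H, W = 2, 3, 6, 224, 224   # batch, timesteps, spectral bands, size
patch, emb_dim = 16, 768            # illustrative values, not the model's

# A 3D convolution cuts the stack into one token per date per spatial patch.
to_tokens = nn.Conv3d(C, emb_dim, kernel_size=(1, patch, patch),
                      stride=(1, patch, patch))

x = torch.randn(B, C, T, H, W)                      # channels-first for Conv3d
tokens = to_tokens(x).flatten(2).transpose(1, 2)    # (B, T*(H/16)*(W/16), emb)
print(tokens.shape)                                 # torch.Size([2, 588, 768])

# Attention now spans both space and time across all 588 tokens.
layer = nn.TransformerEncoderLayer(d_model=emb_dim, nhead=8, batch_first=True)
out = nn.TransformerEncoder(layer, num_layers=2)(tokens)
```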
The project coincides with NASA’s Year of Open Science, a series of events to promote data and AI model sharing. It’s also part of NASA’s decade-long Open-Source Science Initiative to build a more accessible, inclusive, and collaborative scientific community.
IBM’s decision to open source the model reflects its long-standing commitment to making AI accessible to everyone, from its support of Red Hat OpenShift to enable portable cloud computing, to its work with the Ray and PyTorch communities to coordinate and streamline AI workflows.