
IBM Granite tops Stanford’s list as the world’s most transparent model

The Stanford University Foundation Model Transparency Index has ranked IBM Granite number one this year — with the highest score in the history of the index.

Foundation models are everywhere these days, powering AI tools as diverse as chatbots, code assistants, and geospatial models. But as foundation models transform more areas of business and our everyday lives, it’s worth asking: Do we really know how they’re built?

This is the question that guides Stanford University’s Center for Research on Foundation Models, which published its third annual Foundation Model Transparency Index (FMTI) report today. IBM open-sourced its Granite models in 2024, but openness alone doesn’t always equal transparency for developers and end users. The FMTI scores the transparency of popular foundation models against 100 indicators, including data sources, risk evaluations, open weights, external reproducibility, incident reporting protocols, and data usage policies. The Stanford team scored models from the major AI companies against its rubric, giving each company the opportunity to respond to the ratings before the FMTI was published.
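Mechanically, the arithmetic behind an index like this is simple: each indicator is, in effect, a pass/fail check on a company’s disclosures, so a model’s score is the share of indicators it satisfies, and the same tally can be broken out by domain. The snippet below sketches that kind of aggregation with a handful of made-up indicator results; it illustrates the arithmetic only, not the FMTI’s actual rubric or data.

```python
# Illustrative sketch of an FMTI-style tally: each indicator is a pass/fail
# check grouped under a domain, and a score is the percentage of checks
# satisfied. The indicators and results below are made up for illustration.
from collections import defaultdict

# (domain, indicator, satisfied) -- a tiny fictional subset, not real FMTI data
results = [
    ("Upstream", "data sources disclosed", True),
    ("Upstream", "compute reported", True),
    ("Model", "open weights", True),
    ("Model", "external reproducibility", False),
    ("Downstream", "incident reporting protocol", True),
    ("Downstream", "data usage policy", False),
]

# Group the pass/fail results by domain
by_domain = defaultdict(list)
for domain, _, satisfied in results:
    by_domain[domain].append(satisfied)

# Per-domain score: percentage of that domain's indicators satisfied
for domain, checks in by_domain.items():
    print(f"{domain}: {100 * sum(checks) / len(checks):.0f}%")

# Overall score: percentage of all indicators satisfied
overall = 100 * sum(s for _, _, s in results) / len(results)
print(f"Overall: {overall:.0f}%")
```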

Stanford reviewed models from 13 companies this year, and when scores across all the domains were tallied up, IBM Granite 3.3 emerged as the clear winner, scoring 95% on the FMTI — 23 percentage points ahead of the first runner-up, and 54 points ahead of the mean score of 41%.

Image: Scoreboard for the Stanford Foundation Model Transparency Index, showing IBM at the top with a score of 95. The FMTI scored models in three broad domains: Upstream, Model, and Downstream. IBM Granite led the field in each domain individually, as well as in overall score.

On 10 of the 15 major dimensions of transparency, IBM Granite scored a perfect 100. These dimensions included data acquisition, compute, and downstream mitigations.

For IBM, building transparency into the core of its models isn’t just an ethical decision; it’s a sensible business choice. As with any other supply chain decision, companies want to know they can trust what’s in the products they’re purchasing, and IBM’s models are transparent by default. The FMTI’s results reflected this dynamic, showing that B2B models tended to be more transparent.

Even as IBM pulled well ahead of the pack, the average score fell 17 points this year, indicating reduced transparency among the other top AI companies. “I think it is quite telling that we leaned further into transparency this year when others backed off,” said IBM Fellow Kush Varshney, who leads IBM Research’s AI safety efforts. The numbers on this front are stark: Whereas IBM scored a perfect 100 on Data Properties, eight of the other companies scored zero, and the average score for this category was only 14.

Beyond the overall drop in scores, the FMTI report reveals another trend: Only half as many companies submitted their own transparency reports in 2025 as in 2024. The Center for Research on Foundation Models sourced the remaining data on its own.

Image: Foundation Model Transparency Index scores broken down by the major dimensions of transparency. IBM Granite scored a perfect 100 on 10 out of 14 of the major dimensions in the FMTI, and in all 14 dimensions it surpassed the average score of the evaluated models.

The FMTI was updated this year to reflect changes in the field, adding criteria on AI agents’ information retrieval capabilities and including models from Chinese companies for the first time. The FMTI also captured more granular aspects of openness, covering not just how much access companies provide to their models, but the nature of that access. “For example, subsidized access enables third-party research into model risks and agent protocols enable interoperability across agents,” the team behind the Index wrote.

IBM was one of the companies that worked with Stanford to provide information on its models. In the time since we submitted data on Granite 3.3, IBM has released the Granite 4.0 family of models, the next generation of IBM language models. These new models, open sourced under a permissive Apache 2.0 license, are the world’s first open models to receive ISO 42001 certification, attesting that they adhere to internationally recognized best practices for security, governance, and transparency. The models are also cryptographically signed, so users can verify that the weights they download are the ones IBM released.

Another recent study from a different group at Stanford, the Hazy Research lab, showed that the open Granite 4.0 Tiny, Micro, and Small models are extremely adept at handling many AI tasks when running on consumer-grade hardware. To the Hazy Research team, models like the IBM Granite 4.0 family point to where the AI industry is heading.
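For readers who want to try one of the smaller Granite models on their own machine, a minimal sketch using the Hugging Face transformers library might look like the following. The model ID shown is an assumption for illustration; check the ibm-granite organization on Hugging Face for the exact Granite 4.0 repository names.

```python
# Minimal sketch of running a small Granite model locally with Hugging Face
# transformers. The model ID below is an assumed placeholder; look up the
# exact Granite 4.0 repository names under the ibm-granite organization.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-4.0-micro"  # assumed ID for illustration

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Build a chat-formatted prompt and generate a short response
messages = [{"role": "user", "content": "Summarize what model transparency means."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```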

“Just like most of us are unwilling to eat or drink things when we don't know their ingredients, enterprises across industries and sectors should be insisting on transparency of their LLMs,” said Varshney. “IBM Granite is a farm-to-table concept.”
