We define 100 indicators that comprehensively characterize transparency for foundation model developers. We divide our indicators into three broad domains: upstream, model, and downstream.
In addition to the top-level domains (upstream, model, and downstream), we group indicators into subdomains, which enable a more granular and incisive analysis, as shown in the figure below. Each subdomain in the figure comprises three or more indicators.
One of the most contentious policy debates in AI today is whether AI models should be open or closed. While release strategies for AI models are not binary, for the analysis below we label models whose weights are broadly downloadable as open. Open models lead the way: we find that two of the three open models (Meta's Llama 2 and Hugging Face's BLOOMZ) score at least as high as the best-scoring closed model (as shown in the figure on the left), with Stability AI's Stable Diffusion 2 just behind OpenAI's GPT-4. Much of this disparity is driven by closed developers' lack of transparency on upstream issues such as the data, labor, and compute used to build their models (as shown in the figure on the right).
The 2023 Foundation Model Transparency Index was created by a group of eight AI researchers from Stanford University's Center for Research on Foundation Models (CRFM) and Institute on Human-Centered Artificial Intelligence (HAI), the MIT Media Lab, and Princeton University's Center for Information Technology Policy. The shared interest that brought the group together is improving the transparency of foundation models.
See author websites below.
Acknowledgments. We thank Alex Engler, Anna Lee Nabors, Anna-Sophie Harling, Arvind Narayanan, Ashwin
Ramaswami, Aspen Hopkins, Aviv Ovadya, Benedict Dellot, Connor Dunlop, Conor Griffin, Dan Ho, Dan Jurafsky,
Deb Raji, Dilara Soylu, Divyansh Kaushik, Gerard de Graaf, Iason Gabriel, Irene Solaiman, John Hewitt,
Joslyn Barnhart, Judy Shen, Madhu Srikumar, Marietje Schaake, Markus Anderljung, Mehran Sahami, Peter Cihon,
Peter Henderson, Rebecca Finlay, Rob Reich, Rohan Taori, Rumman Chowdhury, Russell Wald, Seliem El-Sayed,
Seth Lazar, Stella Biderman, Steven Cao, Toby Shevlane, Vanessa Parli, Yann Dubois, Yo Shavit, and Zak
Rogoff for discussions on foundation models, transparency, and indexes that informed the Foundation Model Transparency Index.
We especially thank Loredana Fattorini for her extensive work on the visuals for this project, as well as
Shana Lynch for her work in publicizing this effort.