Light in a Black Box

Generated by DALL·E 3: Bringing transparency into the world of foundation models and LLMs

The Center for Research on Foundation Models (CRFM) at Stanford University has released its first Foundation Model Transparency Index (FMTI). The index aims to bring more light into the foundation model space by examining leading foundation models and their developers. You can find their public website here, which also includes links to the research paper (highly recommended) and the underlying data (more on that later).

Foundation models (FMs) are a transformative class of AI that has sparked immense commercial investment and public awareness. However, their development and deployment have been shrouded in opacity, raising concerns about accountability, innovation, and societal impact. To address these concerns, the paper introduces the Foundation Model Transparency Index (FMTI), a composite index that measures and ranks the transparency of FMs and their developers. The FMTI covers three domains: upstream transparency (data, labor, and compute resources), model-level transparency (capabilities, risks, and evaluations), and downstream transparency (distribution channels, usage policies, and affected geographies). By evaluating FMs and their developers across these domains, the FMTI aims to promote responsible business practices, enhance public accountability, and facilitate informed decision-making.

The following 10 FMs/developers were examined for this first iteration of the index:

  • AI21 Labs (Jurassic-2)

  • Amazon (Titan Text)

  • Anthropic (Claude 2)

  • Cohere (Command)

  • Google (PaLM 2)

  • Hugging Face (BLOOMZ)

  • Inflection (Inflection-1)

  • Meta (Llama 2)

  • OpenAI (GPT-4)

  • Stability AI (Stable Diffusion 2)

The index contains 100 indicators that are scored on a binary scale. These indicators are grouped into 23 subdomains, which in turn fall under 3 domains (see the scoring sketch after the list). The three domains are:

  • Upstream (What is needed to build the model)

  • Model (What can the model do and what are its limits & associated risks)

  • Downstream (How is the model distributed and what impact does it have)
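To make the scoring concrete, here is a minimal sketch in Python of how a composite index like this can be aggregated from binary indicators. The indicator names and groupings below are made up for illustration; only the structure (binary indicators → subdomains → domains) follows the paper.

```python
# Minimal sketch of FMTI-style aggregation: binary indicators,
# grouped into subdomains, grouped into the three domains.
# The indicators and groupings here are illustrative, not the real FMTI set.

indicators = {
    # (domain, subdomain): {indicator name: 0 or 1}
    ("upstream", "data"): {"data sources disclosed": 1, "data size disclosed": 0},
    ("model", "capabilities"): {"capabilities described": 1},
    ("downstream", "usage policy"): {"permitted uses listed": 1},
}

def overall_score(indicators):
    """Overall transparency score: fraction of all indicators satisfied."""
    scores = [v for group in indicators.values() for v in group.values()]
    return sum(scores) / len(scores)

def domain_scores(indicators):
    """Average indicator score per domain (upstream / model / downstream)."""
    totals, counts = {}, {}
    for (domain, _), group in indicators.items():
        totals[domain] = totals.get(domain, 0) + sum(group.values())
        counts[domain] = counts.get(domain, 0) + len(group)
    return {d: totals[d] / counts[d] for d in totals}

print(f"overall: {overall_score(indicators):.0%}")
print(domain_scores(indicators))
```

Scoring each indicator as 0 or 1 keeps the index easy to audit: every point a developer earns can be traced back to a specific, verifiable disclosure.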

To me the most interesting domain is the upstream one, as it has a lot of implications for how biased a model is, how much copyright was potentially violated, what its focus domains are, and so on. While the data subdomain isn't the lowest-scoring one, it is very much in the bottom third of the list. I understand that most model developers do not want to disclose what data they used, but as a consumer of these models, that has vital implications for my trust in them and for how comfortable I feel using them.

It will be interesting to see how this index evolves over time, as all developers have been contacted about this research and most of them even responded. I believe this is a solid step toward gaining more insight into foundation models, and it will help inspire more trust in their usage. I also think that this type of research is vital in the current policy discussions, as it can help move from opinion and speculation to a more data-driven approach.

If you want to dig deeper into the data itself, I created a dashboard that you can browse on your own here. The script to get the data can be found here, and because I discovered that there were some inconsistencies between the indicator values in the three data sets, I created a pull request to help fix that; please add your support there if you want.
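If you want to run a similar consistency check yourself, here is a hedged sketch of the idea using pandas. The file and column names are assumptions for illustration, not the actual names in the FMTI downloads; adjust them to whatever the data files actually use.

```python
# Sketch: cross-check indicator values between two of the published
# FMTI data files. File and column names below are assumptions;
# adjust them to match the actual downloads.

import pandas as pd

a = pd.read_csv("fmti_scores_main.csv")      # hypothetical file name
b = pd.read_csv("fmti_scores_appendix.csv")  # hypothetical file name

# Align the two files on developer + indicator and compare the scores.
merged = a.merge(b, on=["developer", "indicator"], suffixes=("_a", "_b"))
mismatches = merged[merged["score_a"] != merged["score_b"]]

print(f"{len(mismatches)} inconsistent indicator values")
print(mismatches[["developer", "indicator", "score_a", "score_b"]])
```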
