The EU AI Act enters into force today: the Act is a watershed moment in the history of AI and technology policy, constituting the first comprehensive legislation on artificial intelligence. The Act is the product of prescience and commitment by the European Union. The European Commission published its initial White Paper on Artificial Intelligence in February 2020 and introduced the first draft of the Act in April 2021, and the EU reached political agreement on the legislation in December 2023. The EU AI Act sets the precedent for subsequent AI policy: its efficacy will shape the future of artificial intelligence and society far beyond Europe.
The EU AI Act is a key policy priority for Stanford’s Center for Research on Foundation Models. The European Parliament introduced provisions on foundation models into the Act and reached agreement on its negotiated position in June 2023. A day later, we released an initial assessment of how major foundation model providers fared against the Parliament’s position, and in the subsequent months we engaged extensively with EU legislators on the Act. As the trilogue negotiations came to a close at the end of 2023, we made public our thinking on how to design proportionate regulation by tiering foundation models and, more concretely, our overall proposal to achieve compromise on the Act. Moving forward, we will continue to engage with the EU on the AI Act’s implementation through the AI Office and Scientific Panel.
This piece explores how the EU AI Act addresses foundation models. The Act’s requirements for foundation models and general-purpose AI take effect one year from today, with the AI Office currently conducting a stakeholder consultation and designing codes of practice. To understand the Act’s obligations, we furnish a rigorous coding of its requirements related to general-purpose AI. Our coding organizes the obligations based on whether they require information disclosures or substantive actions, whether they target all developers or specific subsets, and, for disclosures, who receives the disclosed information (e.g. the public vs. the EU AI Office). We count 25 requirements that apply to 4 classes of general-purpose AI models. Most requirements are information disclosures in which developers provide information to either the government or downstream firms. The few substantive requirements that compel a developer to take a particular action hinge on whether a model is designated as posing systemic risk. As of today, estimates indicate that 8 models meet the default criterion of 10^25 training FLOPs.
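To make the default criterion concrete, here is a minimal sketch in Python of how a model’s cumulative training compute maps to the Act’s presumption of systemic risk. The function name and structure are ours for illustration; only the 10^25 FLOP threshold comes from the Act, and the Commission can also designate models as posing systemic risk on other grounds.

```python
# Sketch of the AI Act's default compute criterion for systemic risk.
# Only the 1e25 FLOP threshold is from the Act; the names and structure
# here are illustrative, not official tooling.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, in FLOPs

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if a model meets the Act's default compute criterion.

    The Commission may also designate models as posing systemic risk
    on other grounds, so this check is a presumption, not the full rule.
    """
    return training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD

# Example: a model trained with 2e25 FLOPs meets the default criterion.
print(presumed_systemic_risk(2e25))  # True
```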
Using our coding scheme, we compare the Parliament proposal of June 2023, the Stanford proposal of December 2023, and the AI Act of July 2024. Each of these texts primarily centers on information disclosure: the Parliament position and the Stanford proposal emphasize public-facing disclosure, whereas the AI Act includes just one public-facing disclosure requirement that is specific to general-purpose AI. The Stanford proposal and the AI Act share a two-tiered approach and consider many similar criteria for setting the threshold, yet the AI Act’s default criterion of compute diverges from the Stanford proposal’s focus on demonstrated market impact. Critically, while the AI Act includes many of the elements of the Stanford proposal, it lacks provisions for third-party researcher access to models (in contrast to the EU Digital Services Act) or adverse event reporting (in contrast to the G7 Code of Conduct and the US National AI Advisory Committee recommendation). Third-party researcher access and adverse event reporting are vital information-gathering mechanisms for building the evidence base of demonstrated harms from foundation models.
The legal text for the EU AI Act is an expansive 100+ page document with 113 articles and 12 annexes that builds on a broad array of EU regulations on digital technology (e.g. GDPR on data privacy, DMA on market power, DSA on online platforms). The Act defines a general-purpose AI model as “an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities”.
The obligations for general-purpose AI are organized under Chapter V: General-Purpose AI Models (Articles 51–56) along with three supporting annexes (Annexes XI–XIII). Obligations for general-purpose AI vary significantly based on whether a model is classified as posing systemic risk (Articles 51–52). While the obligations for general-purpose AI take effect on August 2, 2025, the EU AI Office is tasked (Article 56) with preparing codes of practice by May 2, 2025 to facilitate compliance.
We code the AI Act’s requirements on general-purpose AI to organize them and clarify their structure. To start, we collated the requirements and gave each a short-form descriptor. We classified each requirement as either a disclosure requirement, meaning the provider of a general-purpose AI model must disclose information to some entity, or a substantive requirement, meaning the provider must implement some practice. For each disclosure requirement, we further coded the recipient to whom the information must be disclosed: (i) the EU and national governments, (ii) downstream providers that integrate the model into their AI system, or (iii) the public. Finally, we coded every requirement for the scope of providers to which it applies: (i) all providers, (ii) providers of open models without systemic risk, (iii) providers of non-open models without systemic risk, or (iv) providers of models with systemic risk.
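To make the scheme concrete, the sketch below expresses it as a small data structure in Python. The enum values mirror the categories above; the example entry is a paraphrase for illustration, not legal text.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Kind(Enum):
    DISCLOSURE = "disclosure"    # information must be disclosed
    SUBSTANTIVE = "substantive"  # a practice must be implemented

class Recipient(Enum):
    GOVERNMENT = "EU and national governments"
    DOWNSTREAM = "downstream providers"
    PUBLIC = "the public"

class Scope(Enum):
    ALL = "all providers"
    OPEN_NO_SYSTEMIC = "open models without systemic risk"
    NONOPEN_NO_SYSTEMIC = "non-open models without systemic risk"
    SYSTEMIC = "models with systemic risk"

@dataclass
class Requirement:
    descriptor: str                        # short-form description
    kind: Kind                             # disclosure vs. substantive
    scope: Scope                           # which providers it targets
    recipient: Optional[Recipient] = None  # set only for disclosures

# Illustrative entry (paraphrased, not a quotation of the Act):
example = Requirement(
    descriptor="Provide technical documentation to the AI Office",
    kind=Kind.DISCLOSURE,
    scope=Scope.ALL,
    recipient=Recipient.GOVERNMENT,
)
```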
Takeaways.
To understand the evolution of the AI Act, we compare it to prior proposals made during the legislative process. Given our focus on foundation models and general-purpose AI, we consider the European Parliament’s negotiated position from June 2023, the first formal EU position to address foundation models, as well as our own proposal from December 2023. To compare both proposals to the AI Act, we code them using the same scheme described above.
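Coding all three texts with one scheme reduces the comparison to simple tallies over the coded requirements. Below is a minimal sketch that reuses the Requirement and Recipient types from the earlier sketch; the lists it operates on would hold our codings of each text.

```python
from collections import Counter

def tally(requirements: list) -> Counter:
    """Count coded requirements by (kind, recipient) for one text."""
    return Counter((r.kind, r.recipient) for r in requirements)

# For example, comparing public-facing disclosure across the texts
# (the lists would be populated with our codings of each document):
#
# for name, reqs in [("Parliament 2023", parliament_reqs),
#                    ("Stanford 2023", stanford_reqs),
#                    ("AI Act 2024", ai_act_reqs)]:
#     public = sum(1 for r in reqs if r.recipient is Recipient.PUBLIC)
#     print(name, public)
```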
Takeaways.
We thank Luca Bertuzzi and Kai Zenner for their extensive coverage of the AI Act, which proved to be an invaluable service for increasing public transparency and understanding.
```bibtex
@misc{bommasani2024euaiact,
  author       = {Rishi Bommasani and Alice Hau and Kevin Klyman and Percy Liang},
  title        = {Foundation Models under the EU AI Act},
  howpublished = {Stanford Center for Research on Foundation Models},
  month        = aug,
  year         = 2024,
  url          = {https://crfm.stanford.edu/2024/08/01/eu-ai-act.html}
}
```