Why We Need EU Regulation of Foundation Models
The AI Act Is Our Golden Opportunity To Create Powerful, Safe AI
Summary: the exponential progress of AI is bringing great benefits, but also a range of risks. In a world that will contain millions of applications, but at most dozens of advanced foundation models, only regulation of foundation models can address the most serious risks. Fortunately, such regulation is entirely consistent with innovation and competition.
Two years of work on the EU AI Act are now coming to a head. For the occasion, I’m embarking on a series of blog posts on the topic of regulation of foundation models, writing from the perspective of a tech industry veteran.
Some quick background on me. I’m a software engineer, and co-founder of seven startups, most notably Writely (now known as Google Docs) and Scalyr (acquired by SentinelOne and now acting as the data platform for their cybersecurity suite). Most recently, I’ve been studying and writing about the capabilities, trajectory, and implications of generative AI.
In this post, I’m going to briefly explain why EU regulation of advanced foundation models is our best hope for setting AI on a path that is beneficial to society. As a repeat entrepreneur, I do not generally tilt toward regulation of bleeding-edge technology, but AI is an exceptional technology and requires an exceptional response.
Over the next few days, I’ll cover some key points in greater detail:
Why a broad range of AI researchers and members of the tech community agree that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.
The longer we wait to enact meaningful regulation, the harder the task will become.
Well-designed regulation can actually speed the path of progress, by averting disasters that – like Chernobyl, the loss of the Space Shuttle Challenger, or early missteps in gene therapy – can set an industry back by decades.
Some paths to AI are safer than others. Setting off down an unsafe path will require painful rework later, just as we are now facing the need to replace vast amounts of fossil-fuel-based infrastructure.
Sensible safety regulation is no barrier to competition.
Open source projects are not immune to safety concerns.
Innate Risks of Advanced AI
The rapidly increasing power of AI models opens the door to both great benefits and potential harms. Here, I am going to focus on two categories of risk:
AI misuse, such as using AI to create a chemical or biological weapon, carry out cyberattacks, or aid repressive governments in manipulating and monitoring citizens.
AI misbehavior, including the frightening possibility of a Terminator-style scenario, in which an advanced AI develops unintended goals and fights to preserve them.
Disagreement over AI regulation often stems from a failure to engage with these risks: lumping them in with more prosaic problems such as biased training data, or lazily dismissing them as science fiction. I – and a rapidly increasing share of the academic and technical community – believe that we need to take these risks very seriously. I’ll go into more detail in a subsequent post.
We Must Regulate Foundation Models Directly
AI is typically used in the form of an application, such as a tool for taking meeting notes, built on a foundation model, such as GPT-4 or Llama 2. Creating an advanced foundation model costs tens of millions of dollars or more, whereas some applications can be slapped together in a weekend by a single developer.
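To make that asymmetry concrete, here is a minimal sketch of what a weekend meeting-notes application can amount to: a thin wrapper around someone else's foundation model. (This is an illustrative example only; it assumes the OpenAI Python client and a hypothetical transcript file, and real products add plenty of polish on top.)

```python
# Illustrative sketch of a minimal "meeting notes" application.
# All of the heavy lifting happens inside the foundation model;
# the application itself is just a thin wrapper around an API call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_meeting(transcript: str) -> str:
    """Ask a foundation model to summarize a transcript and extract action items."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarize this meeting and list the action items."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # "meeting_transcript.txt" is a hypothetical input file for the example.
    with open("meeting_transcript.txt") as f:
        print(summarize_meeting(f.read()))
```

The point is not the specific API but the division of labor: the capabilities, and therefore the serious risks, live in the model rather than in the wrapper.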
Because applications are so easy to build, misuse and misbehavior risks can only be addressed in the foundation model. If a scrappy startup builds a tool for screening resumes, and it makes biased decisions, we can require the startup to correct it. If a modern-day Unabomber uses GPT-6 to engineer a pandemic, no threat of regulatory penalty will deter him, and nothing can repair the damage afterward.
Millions of developers build on foundation models. Distributing a dangerous model and relying on safety requirements imposed at the application level would be like allowing unrestricted manufacturing and distribution of plutonium, accompanied by a strict requirement that it only be used to power space probes. Any expectation that this would deter terrorists from building an atomic bomb would look foolish in hindsight.
For the sorts of catastrophic risks that require airtight safety standards, the process of creating an advanced foundation model must be regulated directly.
Sensible Regulation Will Promote Innovation And Competition
Overly broad regulation could make it difficult for new competitors to arise to challenge the big AI labs. Fortunately, the dangers we are discussing only apply to advanced models, which are expensive to train in any case. OpenAI spent over $100 million to create GPT-4, and is preparing to spend billions on future models. Costs will come down, but even so, an organization operating at this level can address compliance as part of its safety effort. (If it doesn't have a strong safety team, it should not be building advanced AI!)
Another question is whether regulation might stifle innovation. I will begin by noting, as someone whose entire career has been built on innovation, that it is no bargain to rapidly innovate into a catastrophe. While it has become a cliche to point it out, “move fast and break things” becomes less appealing when the thing we risk breaking is human civilization.
Less obviously, well-designed regulation can actually speed the path of progress, by averting a Chernobyl-style disaster that might trigger a backlash. Regulation can also create the incentive to adopt inherently safer architectures, which will lay the foundation for truly advanced AIs in the years to come.
The EU AI Act Is Our Chance To Set AI On A Sustainable Path
EU policymakers have spent two years drafting regulations for foundation models (previously referred to as “general-purpose AI”). In the US, by contrast, the prospects for meaningful legislation seem dim. I needn’t explain the current dysfunction in the US Congress, and the upcoming election cycle won’t help.
The EU’s head start makes the upcoming AI Act vote our only realistic hope for meaningful action in the near term. As I’ll explain in an upcoming post, the flood of commercial development coming in the next few years will make it much harder to act later. Conversely, if the EU does act now, it can establish a global standard and inspire catch-up legislation in the US, UK, and elsewhere.
Both proponents and opponents of regulation agree that AI is climbing an exponential curve of progress. The explosion of Covid into an unprepared world in March 2020 shows us how critical it is for government action to stay ahead of exponential curves. To avoid accidents that could cause catastrophic harm and set back the pace of AI development, we need binding regulation on the training of advanced foundation models, and we need it today.
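To put rough numbers on what "staying ahead of an exponential" means, here is a back-of-the-envelope sketch. The six-month doubling time is an assumed figure chosen purely for illustration, not a forecast:

```python
# Back-of-the-envelope illustration of why delay is expensive on an
# exponential curve. The six-month doubling time is an assumption made
# for illustration, not a measured or forecast value.
DOUBLING_TIME_MONTHS = 6

def growth_factor(delay_months: float) -> float:
    """How much an exponentially growing quantity multiplies during a delay."""
    return 2 ** (delay_months / DOUBLING_TIME_MONTHS)

for delay in (6, 12, 24, 36):
    print(f"{delay}-month delay -> roughly {growth_factor(delay):.0f}x growth")
```

Under that assumption, a two-year delay corresponds to a sixteen-fold increase in whatever quantity is doubling.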
Thanks to Cate Hall, James Gealy, Nicolas Moës, and William Gunn for suggestions and feedback.