There is much debate regarding regulation of AI, and in particular of advanced “foundation models”1. As I discussed in the previous installment, the potential for “science fiction” risks from increasingly capable models is quite real. Meaningful regulation would need to have sufficient teeth to prevent the training of a foundation model that could enable creation of an engineered pandemic, catastrophic cyberattack, or other disaster scenario. There are three options for enacting such regulation:
Now.
Later.
Never.
“Never” is not a realistic option. As capabilities increase, it would just take one poor choice at one AI lab, shortchanging safety in the face of competitive pressure, to open the door to disaster. So the choices for advanced foundation models are “regulate now” or “regulate later”.
It may seem that safety regulation will come at the expense of progress. However, for advanced foundation models, the tradeoff is more balanced than intuition might suggest:
No-holds-barred development, in the face of intense competition and massive profit motives, could lead to a disaster that triggers a backlash.
Lack of regulation creates a race-to-the-bottom incentive, where a developer might believe they could seize a temporary advantage by taking shortcuts on safety, even if that were to the ultimate detriment of the industry. Regulation allows developers to plan for the long term, confident that their rivals will not undercut them.
Prioritizing safety now will steer AI R&D toward inherently safer architectures, thus smoothing the path for further progress.
Regardless of regulation, only very well-resourced organizations can afford to develop advanced foundation models. These organizations can easily afford to comply with reasonable regulations.
The longer we wait, the more difficult it will be to enact sensible regulation: interests will have become entrenched, and any new rules will create winners and losers.
The sensible choice is to begin meaningful regulation of advanced foundation models today.
Safety Is Good For Progress
Prompt regulation will help avert a scenario where one irresponsible developer triggers an incident that tarnishes the entire AI industry. A prominent disaster can lead to a public or regulatory backlash, sometimes delaying progress for years. Three Mile Island, Chernobyl, and Fukushima poisoned the public appetite for nuclear power. The Space Shuttle Challenger disaster had downstream impacts at NASA, such as shifting DoD funding away from the Shuttle toward expendable rockets. In 1999, the field of human gene therapy suffered a severe setback after an early trial resulted in the death of an 18-year-old patient, Jesse Gelsinger.
It’s worth noting that in most of these incidents, the problem stemmed from a “proceed by default” approach, where safety measures were subservient to the primary goal of forward progress. For instance, Nobelist Richard Feynman’s observations on the ill-fated Challenger mission describe a steady erosion of safety criteria in order to maintain flight schedules: absent any visible disasters, there is a constant temptation to fit safety into the schedule, rather than adjusting the schedule to accommodate the needs of safety. This proceed-by-default approach also seems characteristic of many AI labs at the moment.
Air travel, by contrast, has benefited from robust and enlightened regulation, assisted by well-regarded organizations such as the U.S. NTSB. The result is that air travel is both trusted and safe, even as competition continues to drive new generations of ever-more-efficient aircraft.
Meanwhile, core AI capabilities are progressing faster than our ability to assimilate them. Even tech companies are scrambling to figure out how best to use the models that are already available. The AI industry can afford to proceed at a safe pace, but it can't afford a prominent disaster. We want AI to progress like air travel, not nuclear power.
Safety May Be Hard To Retrofit
There is more than one way to approach a given technology, and some approaches are inherently safer than others.
Consider nuclear power. Most plants in operation today are “light water reactors”, which overheat if not continually cooled. The cost and complexity of modern reactors are partly driven by the need to ensure that the supply of cooling water is absolutely never interrupted. (The Fukushima disaster occurred because the tsunami wiped out all of the redundant power sources for the cooling pumps.) Other proposed designs don’t depend on a constant flow of cooling water, allowing them to be both simpler and safer. However, the light water reactor design, originally developed for nuclear submarines, has such a head start that the industry has never managed to move away from it2.
Today, we risk making similarly unfortunate choices in the development of AI. Most commercial development is currently centered on a single approach – large language models based on the “transformer” architecture. Researchers have proposed alternative designs which may be both safer and more efficient, but due to short-term competitive pressures, all of the major commercial labs are sprinting to train ever-larger transformer LLMs. Competitive pressures, absent a safety mandate, are incentivizing short-term thinking and pushing the industry into a monoculture that could turn out to be an inherently unsafe dead end.
If You Can Afford Advanced AI, You Can Afford Compliance
Regulatory compliance costs money, but any company plausibly competing to develop an advanced foundation model will be able to afford it. It’s worth noting that Anthropic’s strong commitment to stringent safety measures has not deterred investors from pouring billions of dollars into the company.
Developing these models is inherently expensive. There is a reason that OpenAI has already raised over ten billion dollars and competitors are doing their best to follow suit. Organizations with this level of resources can afford to comply with safety regulations, just as Airbus and Boeing can afford to comply with the strict regulations that keep air travel so safe. Ask Uber’s autonomous vehicle team how much time and money they saved by cutting corners on safety... except, of course, that you can’t: the project was terminated after an experimental Uber self-driving car killed a pedestrian in Arizona.
An organization that can’t afford a strong safety team cannot realistically afford to develop an advanced foundation model.
If We Wait Too Long, There’s No Going Back
The longer we wait to institute safety regulations, the greater the risk. If a dangerous foundation model is developed, it will be impossible to put the genie back in the bottle. Limiting proliferation is far more difficult for AI models than for nuclear weapons.
Consider the Soviet Union’s program to build an atomic bomb. Despite having access to detailed information obtained by spies inside the Manhattan Project, the Soviets still needed vast resources and years of effort. Even today, building a nuclear weapon requires massive infrastructure for refining uranium, as well as specialized triggering devices and other components. By contrast, the complete “weights” for an advanced AI can easily be copied over the Internet, and a handful of chips is sufficient to operate it. Once a country, business, or even an individual obtains a copy of an advanced AI, it will be impossible to verify that they’ve deleted it. AI proliferation runs in only one direction.
Commercial incentives for the proliferation of AI compound the risk. Over the better part of a century, only a handful of countries have undertaken serious nuclear weapons programs. By contrast, the number of well-capitalized companies aggressively developing AI foundation models seems to grow by the week.
Waiting Won’t Make Anything Better
For decades, attempts to enact meaningful policies for climate change mitigation have been resisted by strenuous lobbying efforts, financed by entrenched interests across the industrialized economy.
For AI, there is currently a public-spirited openness to regulation across much of the tech community, and comparatively few entrenched interests. This provides a window in which to act; however, this window may be rapidly closing. If we regulate now, there will be time for regulations to evolve and grow with the industry. If we wait, then powerful special interests will be in place to distort or forestall any proposed bill.
In summary, the longer we wait to regulate advanced foundation models:
The greater the risk to public safety.
The greater the risk of setbacks.
The more the resulting regulation will be distorted by entrenched interests.
The greater the risk of winding up on a dead-end development path.
If AI were like fusion power, a fledgling industry struggling for commercial viability, there might be reason to hesitate on regulation. But the big AI labs are moving from strength to strength, and the world is already struggling to assimilate current foundation models. Sacrificing safety for a little extra short-term progress is a poor bargain.
The window for constructive regulation may not remain open for long. Let’s take advantage of the opportunity.
Thanks to Cate Hall, James Gealy, Nicolas Moës, and William Gunn for suggestions and feedback.
A foundation model, such as GPT-4, is a general-purpose AI that can be used for a broad range of applications. “Advanced” generally means something along the lines of “more capable than GPT-4”.
Some companies are trying to move forward with alternative reactor designs now, but it’s been tough going.