“Arms race” is the wrong mental model for AI. Here’s a better one.
You’ve probably heard AI progress described as a classic “arms race.” The basic logic is that if you don’t race forward on making advanced AI, someone else will — probably someone more reckless and less safety-conscious. So better that you build the superintelligent machine yourself than let the other guy cross the finish line first! (In American discussions, the other guy is usually China.)
But as I’ve written before, this isn’t an accurate portrayal of the AI situation. There’s no one “finish line,” because AI is not just one thing with one purpose, like the atomic bomb; it’s a more general-purpose technology, like electricity. Plus, if your lab takes the time to iron out some AI safety issues, other labs may take those improvements on board, which would benefit everyone.
And as AI Impacts lead researcher Katja Grace noted in Time, “In the classic arms race, a party could always theoretically get ahead and win. But with AI, the winner may be advanced AI itself [if it’s unaligned with our goals and harms us]. This can make rushing the losing move.”
I think it’s more accurate to view the AI situation as a “tragedy of the commons.” That’s what ecologists and economists call a situation where lots of actors have access to a finite valuable resource and overuse it so much that they destroy it for everyone.
A perfect example of a commons: the capacity of Earth’s atmosphere to absorb greenhouse gas emissions without tipping into climate disaster. Any individual company can argue that it’s pointless for it to emit less — someone else will just use up that capacity instead — and yet when every actor pursues its rational self-interest, the result ruins the planet for everyone.
AI is like that. The commons here is society’s capacity to absorb the impacts of AI without tipping into disaster. Any one company can argue that it would be pointless to limit how much or how fast they deploy increasingly advanced AI — if OpenAI doesn’t do it, it’ll just be Google or Baidu, the argument goes — but if every company acts like that, the societal result could be tragedy.
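To see the shape of this incentive problem, here’s a minimal sketch in Python (all numbers are made-up assumptions for illustration, not estimates about any real AI lab). Each actor’s private gain scales with its own use of the commons, while the damage from total use is shared by everyone:

```python
# A toy "overgrazing" model of the commons dynamic described above.
# Every number is an illustrative assumption; only the incentive shape matters.

N = 5             # identical actors (AI labs, in the analogy)
A, B = 10.0, 1.0  # per-unit benefit starts at A and degrades as total use rises

def payoff(my_use: float, total_use: float) -> float:
    # Private gain scales with *my* use; the degradation caused by *total*
    # use is shared by everyone. That asymmetry is the whole problem.
    return my_use * (A - B * total_use)

# If each actor maximizes its own payoff given everyone else's choices
# (the Nash equilibrium of this game), each picks A / (B * (N + 1)).
selfish_each = A / (B * (N + 1))
selfish_total = N * selfish_each

# A planner maximizing the group's combined payoff caps total use at A / (2 * B).
optimal_total = A / (2 * B)
cooperative_each = optimal_total / N

print(f"total use, everyone selfish:   {selfish_total:.2f}")  # ~8.33
print(f"total use, best for the group: {optimal_total:.2f}")  # 5.00
print(f"payoff each, selfish world:    {payoff(selfish_each, selfish_total):.2f}")      # ~2.78
print(f"payoff each, cooperative:      {payoff(cooperative_each, optimal_total):.2f}")  # 5.00

# Worse, cooperation is fragile: if the other four hold to the cooperative
# level, a lone defector's best move is to overuse and nearly double its payoff.
others = (N - 1) * cooperative_each
defector = (A - B * others) / (2 * B)
print(f"best defection vs. cooperators: use {defector:.1f}, "
      f"payoff {payoff(defector, defector + others):.2f}")    # use 3.0, payoff 9.00
```

Everyone ends up worse off in the everyone-races equilibrium than under collective restraint, and restraint is unstable on its own: that’s exactly the “if OpenAI doesn’t do it, someone else will” logic in miniature.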
“Tragedy” sounds bad, but framing AI as a tragedy of the commons should actually make you feel optimistic, because researchers have already found solutions to this type of problem. In fact, political scientist Elinor Ostrom won a Nobel Prize in Economics in 2009 for doing exactly that. So let’s dig into her work and see how it can help us think about AI in a more solutions-focused way.
Elinor Ostrom’s solution to the tragedy of the commons
In a 1968 essay in Science, the ecologist Garrett Hardin popularized the idea of the “tragedy of the commons.” He argued that humans compete so hard for resources that they ultimately destroy them; the only ways to avoid that are total government control or total privatization. “Ruin is the destination toward which all men rush,” he wrote, “each pursuing his own best interest.”
Ostrom didn’t buy it. Studying communities from Switzerland to the Philippines, she found example after example of people coming together to successfully manage a shared resource, like a pasture. Ostrom discovered that communities can and do avert the tragedy of the commons, especially when they embrace eight core design principles:
1) Clearly define the community managing the resource.
2) Ensure that the rules strike a reasonable balance between using the resource and maintaining it.
3) Involve everyone who’s affected by the rules in the process of writing the rules.
4) Establish mechanisms to monitor resource use and behavior.
5) Create an escalating series of sanctions for rule-breakers. (Principles 4 and 5 are illustrated in a sketch after this list.)
6) Establish a procedure for resolving any conflicts that arise.
7) Make sure the authorities recognize the community’s right to organize and set rules.
8) Encourage the formation of multiple governance structures at different scales to allow for different levels of decision-making.
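To see why principles #4 and #5 do so much of the work, here’s a hypothetical extension of the toy model above (again, every number is a made-up assumption, not a proposal for actual fine amounts). Once overuse beyond an agreed quota is likely to be detected and is fined on an escalating schedule, defection stops paying:

```python
# A hypothetical extension of the toy commons model, illustrating Ostrom's
# principles #4 (monitoring) and #5 (graduated sanctions).
# All numbers are made-up assumptions for illustration.

N = 5
A, B = 10.0, 1.0
QUOTA = (A / (2 * B)) / N   # each actor's share of the group-optimal total: 1.0
DETECTION_RATE = 0.8        # principle #4: monitoring catches most overuse
FINES = [2.0, 4.0, 8.0]     # principle #5: per-unit fines escalate with repeat offenses

def payoff(my_use: float, total_use: float) -> float:
    return my_use * (A - B * total_use)

def expected_payoff(my_use: float, others_use: float, prior_offenses: int) -> float:
    overuse = max(0.0, my_use - QUOTA)
    fine = FINES[min(prior_offenses, len(FINES) - 1)]
    return payoff(my_use, my_use + others_use) - DETECTION_RATE * fine * overuse

def best_response(others_use: float, prior_offenses: int) -> float:
    grid = [x / 100 for x in range(0, 1001)]  # candidate use levels, 0.00 to 10.00
    return max(grid, key=lambda m: expected_payoff(m, others_use, prior_offenses))

# Everyone else sticks to the quota. Is defecting still the rational move?
others = (N - 1) * QUOTA
for offenses in range(3):
    m = best_response(others, offenses)
    print(f"after {offenses} prior offense(s): best use = {m:.2f} (quota = {QUOTA:.2f})")
# Output: 2.20, then 1.40, then 1.00 -- as the sanctions escalate, the
# individually rational use level falls back to the quota, so the same
# self-interest that caused the tragedy now sustains cooperation.
```

The sanctions don’t need to be draconian from day one; what matters, on Ostrom’s account, is that monitoring makes getting caught likely and that penalties ramp up for repeat offenders.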
Applying Ostrom’s design principles to AI
So how can we use these principles to figure out what AI governance should look like?
Actually, people are already pushing for some of these principles in relation to AI — they just may not realize that they slot into Ostrom’s framework.
Many have argued that AI governance should start with tracking the chips used to train frontier AI models. Writing in Asterisk magazine, Avital Balwit outlined a potential governance regime: “The basic elements involve tracking the location of advanced AI chips, and then requiring anyone using large numbers of them to prove that the models they train meet certain standards for safety and security.” Chip tracking corresponds to Ostrom’s principle #4: establishing mechanisms to monitor resource use and behavior.
Others point out that AI companies need to face legal liability when a system they release into the world causes harm. As tech critics Tristan Harris and Aza Raskin have argued, liability is one of the few threats these companies actually pay attention to. This is Ostrom’s principle #5: escalating sanctions for rule-breakers.
And despite the chorus of tech execs claiming they need to rush ahead with AI lest they lose to China, you’ll also find nuanced thinkers arguing that we need international coordination, much like what we ultimately achieved with nuclear nonproliferation. That’s Ostrom’s principle #8: nested governance structures at multiple scales, with international coordination as the outermost layer.
If people are already applying some of Ostrom’s thinking, perhaps without realizing it, why is it important to make the connection explicit? Two reasons. One is that we’re not yet applying all of her principles.
The other is this: Stories matter. Myths matter. AI companies love the narrative of AI as an arms race — it justifies their rush to market. But it leaves us all in a pessimistic stance. There’s power in telling ourselves a different story: that AI is a potential tragedy of the commons, but that tragedy is only potential, and we have the power to avert it.
----------------------------------------
By: Sigal Samuel
Title: AI is a “tragedy of the commons.” We’ve got solutions for that.
Source: www.vox.com/future-perfect/2023/7/7/23787011/ai-arms-race-tragedy-commons-risk-safety
Published: July 7, 2023