How to understand AI: it’s a tragedy of the commons, not an arms race

You’ve probably heard the development of AI described as a classic “arms race.” The basic logic is that if you don’t race ahead to build advanced AI, someone else will – probably someone more reckless and less safety-conscious. Better to build a superintelligent machine yourself, the thinking goes, than to let the other guy cross the finish line first. (In American discussions, the other guy is usually Chinese.)

But as I’ve written before, this is not an accurate description of the AI situation. There is no “finish line,” because AI is not a single-purpose technology like the atomic bomb; it is a general-purpose technology, like electricity. What’s more, if your lab takes the time to iron out AI safety issues, other labs can adopt those improvements, which benefits everyone.

And as AI Impacts lead researcher Katja Grace told Time, “In the classic arms race, one party can always come out on top and win. But with AI, the winner may be advanced AI itself [if it’s unaligned with our goals and harms us]. Racing ahead only speeds up our own disempowerment.”

I think it’s more accurate to look at the AI situation as a “tragedy of the commons.” That is what ecologists and economists call a situation in which many actors have access to a limited, valuable resource, and by overusing it they destroy it for everyone.

A perfect example of a commons: the capacity of Earth’s atmosphere to absorb greenhouse gas emissions without tipping into climate disaster. Any individual company can argue that it makes no sense for it to use less of that capacity – someone else will just use it instead – and yet when every actor follows its rational self-interest, the result wrecks the planet for everyone.

The same logic applies to AI. The commons here is society’s capacity to absorb the effects of AI without falling into disaster. Any company can argue that there’s no point in limiting how much, or how fast, it deploys ever more advanced AI – if OpenAI doesn’t do it, the argument goes, Google or Baidu will – but if every company acts that way, the result for society will be tragic.

“Tragedy” doesn’t sound good, but framing AI as a tragedy of the commons should make you optimistic, because researchers have already found solutions to this type of problem. In fact, political scientist Elinor Ostrom won the Nobel Prize in Economics in 2009 for doing just that. So let’s explore her work and see how it can help us think about AI in a way that is more focused on solutions.

Elinor Ostrom’s solution to the tragedy of the commons

In a 1968 essay in Science, ecologist Garrett Hardin popularized the idea of the “tragedy of the commons.” He argued that humans compete so fiercely over shared resources that they eventually destroy them; the only way to avoid that fate, he claimed, is total government control or total privatization. “Ruin is the destination toward which all men rush,” he wrote, “each pursuing his own best interest.”

Ostrom didn’t buy it. Studying communities from Switzerland to the Philippines, she found example after example of people coming together to successfully manage a shared resource, such as a pasture. Ostrom discovered that communities can govern a commons and avoid tragedy, especially if they adopt eight design principles:

1) Clearly define the community that manages the resource.

2) Ensure that the rules are a reasonable balance between the use of the resource and its sustainability.

3) Involve everyone affected by the rules in the rule-making process.

4) Establish mechanisms to monitor resource use and behavior.

5) Establish an escalating series of penalties for rule breakers.

6) Develop a procedure to resolve any conflicts that arise.

7) Make sure the authorities recognize the community’s right to organize and make rules.

8) Encourage the formation of multiple governance structures at different scales to allow for different levels of decision making.

Applying Ostrom’s design principles to AI

So how can we use these principles to define what AI management looks like?

In fact, people are already pushing for some of these principles in relation to AI – perhaps without realizing they’re converging on Ostrom’s framework.

Many argue that AI governance should start with tracking the chips used to train frontier AI models. Writing in Asterisk magazine, Avital Balwit outlined a potential governance regime: “The basic elements include tracking the location of advanced AI chips, and then requiring anyone using large numbers of them to verify that the models they train meet certain standards for safety and security.” Chip tracking corresponds to Ostrom’s principle #4: establish mechanisms to monitor resource use and behavior.

Others have noted that AI companies should face legal liability if they release a system into the world that causes harm. As tech critics Tristan Harris and Aza Raskin have argued, liability is one of the few threats that companies actually take seriously. This is Ostrom’s principle #5: an escalating series of penalties for rule breakers.

And despite the chorus of tech execs who say they must rush ahead on AI so they don’t lose out to China, you’ll also find more nuanced thinkers who argue that we need international coordination, like what we have achieved with nuclear nonproliferation. That’s Ostrom’s principle #8.

If people are already applying some of Ostrom’s thinking, perhaps without realizing it, why does it matter to make the Ostrom connection explicit? Two reasons. One is that we still haven’t applied all of her principles.

The other is this: stories matter. Narratives matter. AI companies love the narrative of AI as an arms race – it justifies their rush to market. But that framing leaves us all in a pessimistic position. There is power in telling ourselves a different story: that AI is a potential tragedy of the commons, but the tragedy is only potential, and we have the power to avert it.
