Over 350 AI executives, researchers, and industry leaders signed a one-sentence warning released Tuesday, saying that we should try to stop their technology from destroying the world.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement, released by the Center for AI Safety. The signatories include Sam Altman, CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; Dario Amodei, CEO of Anthropic; and Geoffrey Hinton, the so-called “Godfather of AI,” who recently quit Google over fears about his life’s work.
As the public conversation about AI shifted from awestruck to dystopian over the last year, a growing number of advocates, lawmakers, and even AI executives united around a single message: AI could destroy the world and we should do something about it. What that something should be, specifically, is entirely unsettled, and there’s little consensus about the nature or likelihood of these existential risks.
There’s no question that AI is poised to flood the world with misinformation, and a large number of jobs will likely be automated into oblivion. The question is just how far these problems will go, and when or if they will dismantle the order of our society.
Usually, tech executives tell you not to worry about the threats of their work, but the AI business is taking the opposite tack. OpenAI’s Sam Altman testified before the Senate Judiciary Committee this month, calling on Congress to establish an AI regulatory agency. The company published a blog post arguing that companies should need a license if they want to work on AI “super intelligence.” Altman and the heads of Anthropic and Google DeepMind recently met with President Biden at the White House for a chat about AI regulation.
Things break down when it comes to specifics though, which explains the brevity of Tuesday’s statement. Dan Hendrycks, executive director of the Center for AI Safety, told the New York Times they kept it short because experts don’t agree on the details of the risks, or what, exactly, should be done to address them. “We didn’t want to push for a very large menu of 30 potential interventions,” Hendrycks said. “When that happens, it dilutes the message.”
It may seem strange that AI companies would call on the government to regulate them, which would ostensibly get in their way. It’s possible that unlike the leaders of other tech businesses, AI executives really care about society. There are plenty of reasons to think this is all a bit more cynical than it seems, however. In many respects, light-touch rules would be good for business. This isn’t new: some of the biggest advocates for a national privacy law, for example, include Google, Meta, and Microsoft.
For one, regulation gives businesses an excuse when critics start making a fuss. That’s something we see in the oil and gas industry, where companies essentially throw up their hands and say “Well, we’re complying with the law. What more do you want?” Suddenly the problem is incompetent regulators, not the poor corporations.
Regulation also makes it far more expensive to operate, which can benefit established companies by hampering smaller upstarts that could otherwise be competitive. That’s especially relevant in the AI business, where it’s still anybody’s game and smaller developers could pose a threat to the big boys. With the right kind of regulation, companies like OpenAI and Google could essentially pull up the ladder behind them. On top of all that, weak nationwide laws get in the way of pesky state lawmakers, who often push harder on the tech business.
And let’s not forget that the regulation AI businessmen are calling for is about hypothetical problems that might happen later, not real problems that are happening now. Tools like ChatGPT make up lies, they have baked-in racism, and they’re already helping companies eliminate jobs. In OpenAI’s calls to regulate super intelligence — a technology that does not exist — the company makes a single, hand-waving reference to the actual issues we’re already facing: “We must mitigate the risks of today’s AI technology too.”
So far though, OpenAI doesn’t actually seem to like it when people try to mitigate those risks. The European Union took steps to do something about these problems, proposing special rules for AI systems in “high-risk” areas like elections and healthcare, and Altman threatened to pull his company out of EU operations altogether. He later walked the statement back and said OpenAI has no plans to leave Europe, at least not yet.