Tesla CEO Elon Musk has been issuing warnings about artificial intelligence for a decade, suggesting the technology poses the greatest danger humanity has ever faced.
In 2014, Musk told a gathering at MIT: “With artificial intelligence, we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and he’s like… Yeah, he’s sure he can control the demon? Doesn’t work out.”
Old joke about agnostic technologists building artificial super intelligence to find out if there’s a God.

They finally finish & ask the question.

AI replies: “There is now, mfs!!”

— Elon Musk (@elonmusk) March 30, 2023
Now, with the release of ChatGPT, Bard, and other game-changing AI tools to the public, Musk is among those calling for a pause in the sector, adding his influential signature to an open letter asking AI labs to “pause giant AI experiments.”
The letter contends that AI developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.” The signatories call for a pause of “at least” six months.
Ideally, the pause the letter contemplates would allow AI proponents and developers to create “shared safety protocols” for AI systems.
The future Musk and company want aligns with this wish: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
In a startling sentence, given Mr. Musk’s recent anti-government rhetoric and strong libertarianism, the letter proposes that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
Musk’s uncharacteristic acquiescence to, and even recommendation of, draconian government regulation (“a moratorium”) on technology development is the clearest signal yet that he assesses the level of danger as off the charts.
One problem with the pause, as identified by South Carolina Congresswoman Nancy Mace, is that widespread adherence to it requires trusted buy-in from all AI developers, a scenario that in Mace’s estimation would be nearly impossible to enforce, especially with suspect actors like China and Russia.
Don’t disagree about the inherent risks here, but do we really think China (or Russia or Iran or any other global adversary) will agree to pause for six months? https://t.co/G5ZkumyE6V
— Rep. Nancy Mace (@RepNancyMace) March 30, 2023