(C) Common Dreams
This story was originally published by Common Dreams and is unaltered.
. . . . . . . . . .
Everyone Wants to Regulate AI. No One Can Agree How [1]
By Steven Levy, Amanda Hoover, Boone Ashworth, Adrienne So, Morgan Meaker, Kate Knibbs, Todd Feathers, Chris Stokel-Walker (Condé Nast)
Date: 2023-05-26 13:00:00+00:00
As the artificial intelligence frenzy builds, a sudden consensus has formed. We should regulate it!
While there’s a very real question of whether this is like closing the barn door after the robotic horses have fled, not only government types but also people who build AI systems are suggesting that some new laws might be helpful in stopping the technology from going bad. The idea is to keep the algorithms in the loyal-partner-to-humanity lane, with no access to the I-am-your-overlord lane.
Though many in the technology world have suggested since the dawn of ChatGPT that legal guardrails might be a good idea, the most emphatic plea came from AI’s most influential avatar of the moment, OpenAI CEO Sam Altman. “I think if this technology goes wrong, it can go quite wrong,” he said in a much-anticipated appearance before a US Senate Judiciary subcommittee earlier this month. “We want to work with the government to prevent that from happening.”
That is certainly welcome news to the government, which has been pressing the idea for a while. Only days before his testimony, Altman was among a group of tech leaders summoned to the White House to hear Vice President Kamala Harris warn of AI’s dangers and urge the industry to help find solutions.
Choosing and implementing those solutions won’t be easy. It’s a giant challenge to strike the right balance between industry innovation and protecting rights and citizens. Clamping limits on such a nascent technology, even one whose baby steps are shaking the earth, courts the danger of hobbling great advances before they’re developed. Plus, even if the US, Europe, and India embrace those limits, will China respect them?
The White House has been unusually active in trying to outline what AI regulation might look like. In October 2022—just a month before the seismic release of ChatGPT—the administration issued a paper called the Blueprint for an AI Bill of Rights. It was the result of a year of preparation, public comments, and all the wisdom that technocrats could muster. In case readers mistake the word blueprint for mandate, the paper is explicit on its limits: “The Blueprint for an AI Bill of Rights is non-binding,” it reads, “and does not constitute US government policy.” This AI bill of rights is less controversial or binding than the one in the US Constitution, with all that thorny stuff about guns, free speech, and due process. Instead it’s kind of a fantasy wish list designed to blunt one edge of the double-sided sword of progress. So easy to do when you don’t provide the details! Since the Blueprint nicely summarizes the goals of possible legislation, let me present the key points here.
[END]
---
[1] URL: https://www.wired.com/story/plaintext-everyone-wants-to-regulate-ai/
Published and (C) by Common Dreams
Content appears here under this condition or license: Creative Commons CC BY-NC-ND 3.0.
via Magical.Fish Gopher News Feeds:
gopher://magical.fish/1/feeds/news/commondreams/