(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.



When Artificial Intelligence runs into Real Stupidity inside Pete's head, a "Doocey" flies out. [1]


Date: 2023-06-01

Hey kidz! Today we’re minting a neologism, a “Doocey!” I’ll start with an example from that place Republicans consider a hellscape and the rest of us call “reality.” The source? The words coming straight outta Pete Doocey’s mouth. The topic is allegedly “Artificial Intelligence,” but given it’s a Faux News talking point, we’re really looking at Alarmist Idiocy.

Here’s the quote:

“One thousand of the world’s smartest people are saying….”

Ok, let’s pause right there. Flag on the play! That’s a Doocey, right there. One thousand, that’s a lot of people to get to agree on something. I mean, you ever try to get 7 people to agree on a restaurant? So a thousand… whoa.

But this ain’t any rando thousand, nope. This is a thousand of the (checks the tape to be sure)

world’s smartest people.

That is amazing. I mean, it sounds like he’s saying they found the smartest people in the world and got a thousand of them together. Or maybe he wants it to sound like that, because it’s a verbal hedge: really they’re just asserting they have a thousand people who are among the smartest people in the world. Well, that’s a horse of a different color. All you gotta be is smarter than average and you are officially a member of “the world’s smartest people,” right? All of a sudden that is a lot less impressive. If this were a Tinder profile, you would be very disappointed when you hooked up. That’s a Doocey, my friend.

But let’s continue with the rest of the statement, because it is a veritable Turducken of Dooceys. Let’s dig in…

“One thousand of the world’s smartest people are saying… artificial intelligence can impose profound risk to society and humanity.”

Ok… another flag on the play. You said you had the world’s smartest people, and they got together and came up with a statement that is redundant AND repetitive? (shhh… don’t tell him, let’s see if he notices) I mean, seriously, how can you have something that affects society but not humanity? Or vice versa? That’s like being asked to write a 500-word essay on Hamlet about Rosencrantz without mentioning Guildenstern. Or vice versa.

At this point, it is worth doing something Faux News never does. Let’s go to the source and see what they actually wrote. I don’t want to be pedantic, but the statement was written; contrary to Doocey’s assertion, nobody actually said anything. In the real world, facts matter. So let’s try to keep them straight.

Here’s the Statement on AI Risk, from the Center for AI Safety:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

That sounds pretty scary. You might say it is “alarming.” And you’d be right. If you can’t wrap it in “Alarming,” it ain’t a fully cooked Doocey. If you can’t stir the pot, why take the shot? Given that this statement represents the beginning, middle, and end of what they published, you might think it wouldn’t be too much to ask that he read the statement they wrote and ask for the administration’s response. It’s not like it’s a 99-page piece of legislation.

In terms of policy directives, there’s not a lot of meat there. It’s certainly clear, but there are an awful lot of unspoken assumptions, assertions, predictions, and… I hate to give Rumsfeld any credibility… but a lot of known unknowns hanging out there.

While we’re at it, why limit the comparison to pandemics and nuclear war? Why not include meteor impacts? Or climate change? You know, stuff we actually have plans for. We might find solutions we can recycle, refine, or reduce to practice without reinventing the wheel. Given that ChatGPT reached 1 million users in 5 days, compared to Twitter, which took two and a half years, or Facebook, which took 10 months, you might want to put some pep in your step and get that risk management strategy finished pronto. Otherwise, you might wind up like Wile E. Coyote after the Road Runner came tearing through and blew your spot up. I mean, we are talking about “risk of extinction,” so time is of the essence, right?

If only there were a tool we could use to at least get a quick first draft, a framework, a list of options to consider. Something that would point us in the right direction. You know, like an AI prompt that might generate a short list of public policy recommendations to best address the concerns articulated and implied in that statement.

Hey! You ever wonder what would happen if we asked ChatGPT to help us? Don’t bother wondering. I did exactly that before I started writing this essay (and for anyone who wants to play along at home, there’s a sketch of the plumbing right after the list).

Prompt: Provide a bullet list of public policy recommendations that would best address the concerns in the following statement: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Response:

- Establish an international regulatory framework for AI: Collaborate with governments, industry leaders, and experts to develop global regulations that address the risks associated with AI development and deployment. This framework should ensure transparency, accountability, and ethical considerations in AI systems.

- Promote AI safety research and development: Allocate substantial funding to support research on AI safety, focusing on areas such as robustness, interpretability, and ethical decision-making. Encourage collaboration among researchers and institutions worldwide to share knowledge and best practices.

- Implement AI risk assessment protocols: Develop standardized procedures to assess the potential risks of AI technologies before their widespread deployment. This should involve comprehensive testing and evaluation of AI systems to identify and mitigate potential threats to human safety, privacy, and security.

- Foster international cooperation and information sharing: Establish platforms for countries, organizations, and researchers to exchange information and insights on AI-related risks. Encourage the sharing of AI-related data, methodologies, and tools to enable collective understanding and preparedness.

- Encourage responsible AI development and deployment: Encourage companies and organizations involved in AI research and development to adopt ethical guidelines and best practices. Promote the development of AI systems that prioritize human well-being, fairness, and societal benefit.

- Prioritize AI education and awareness: Invest in educational programs that enhance public understanding of AI technologies, their potential risks, and their societal impacts. Foster interdisciplinary education and research initiatives to train professionals in AI ethics, safety, and governance.

- Establish global AI governance bodies: Create international bodies that are dedicated to overseeing AI development and ensuring its responsible and safe use. These bodies should include representatives from governments, academia, industry, and civil society to facilitate informed decision-making and policy development.

- Support international cooperation on AI standards: Encourage collaboration among nations to establish harmonized technical standards for AI systems. These standards should cover safety, security, privacy, and ethical considerations to ensure interoperability and global adoption of responsible AI practices.

- Foster AI-enabled global cooperation: Promote the use of AI technologies to address global challenges, such as climate change, poverty, and inequality. Encourage the development of AI systems that facilitate cross-border collaboration, knowledge sharing, and resource allocation to tackle complex societal issues.
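And here’s the plumbing I promised, for the nerds playing along at home. This is a minimal sketch of the query above, written against OpenAI’s Python package as it existed in mid-2023 (the openai library, v0.27.x, whose interface has since changed); the model choice, the temperature, and the OPENAI_API_KEY environment variable are my assumptions, not anything from the transcript above.

    # Minimal sketch: sending the prompt quoted above through the openai
    # Python package, v0.27.x era. Assumes OPENAI_API_KEY is set in the
    # environment; model and temperature are assumptions.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    statement = (
        "Mitigating the risk of extinction from AI should be a global priority "
        "alongside other societal-scale risks such as pandemics and nuclear war."
    )
    prompt = (
        "Provide a bullet list of public policy recommendations that would best "
        f'address the concerns in the following statement: "{statement}"'
    )

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumption; any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    print(response["choices"][0]["message"]["content"])

Run it and you’ll get some variation of the list above. The first draft, it turns out, is the cheap part of policy.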

Clearly, there is a lot of work to be done. But this is not the first time we have been confronted with a potential existential risk posed by the emergence of a new and powerful technology. When they lit the fuse on the first atomic bomb at the Trinity test, Enrico Fermi was taking bets on whether it would set the atmosphere on fire or not. We saw the same kind of concern when recombinant DNA technology was reduced to practice; that’s why Paul Berg organized the Asilomar Conference. Cognizant of history, the policy wonks in the administration’s Office of Science and Technology Policy had the perspicacity to release the Blueprint for an AI Bill of Rights last October, before ChatGPT was even out and about.

So he asks if President Biden is worried about this horrible existential threat, as a gotcha to push the “sleepy Joe” narrative about an old Boomer out of touch with the modern world, too old to do his job and too feeble to handle the call. And then, again using the words that come out of Doocey’s mouth, the cherry on top: “That sounds crazy, doesn’t it?”

Yeah, that sure does sound crazy, because you’d have to be crazy to think no one is looking at this. That brings me to the final point, for the folks who don’t read their news but get it read to them. Here’s the litmus test to be sure you are dealing with a bona fide Doocey in the wild: if you gotta ask that last question as a follow-up to your first question, it’s a Doocey!

[END]
---
[1] Url: https://www.dailykos.com/stories/2023/6/1/2172686/-Artifical-intelligence-vs-Whatever-the-hell-is-going-on-inside-Petey-Doocey-s-fevered-brain

Published and (C) by Daily Kos
Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.
