(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.



AI: Icarus Redux? [1]

['This Content Is Not Subject to Review by Daily Kos Staff Prior to Publication.']

Date: 2025-01-03

You’ll have to pardon me. I’m an older guy (soon to turn 77) who grew up well before the internet, personal computers, smart phones, answering machines and free international calls. I have a smart phone and use my computer a lot. But I don’t understand modern technology much, and all I hear these days is buzz about AI.

I know it’s all around us now, embedded in our phones and cars, research algorithms and whatever, and has proven to be a valuable diagnostic tool in some cases. What concerns me, however, is that those funding and pushing this technology aim to build a cognitive AI that will exceed human cognitive ability, yet have no idea what they may be creating.

Futurists, especially Ray Kurzweil, foresee utopian smart machine/mind melds to achieve humans’ ultimate evolutionary destiny. Or maybe to manipulate the genome to build better bodies and brighter brains.

That’s a nice fantasy, but it could become a dystopia. The US and Chinese governments are spending billions in a race to out-AI the other for superior surveillance and weaponry. Mega-corporations including Google, Meta and others are racing to develop it, which almost guarantees epic corruption based on our human propensity to grab power and profit for the powerful at the expense of the weak.

Any comprehensive history book shows that human character has not improved over 50,000 years of increasing knowledge and social development – what we call “civilization.” We as a species are capable of wonderful dreams but still live amid nightmares of human depravity, all the while creating increasingly powerful technologies that deepen them.

E. O. Wilson put it this way: “The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology.” (Sept. 9, 2009, debate at the Harvard Museum of Natural History.)

And now some want to build computers smarter than we are, even though no AI developer or futurist I have read can state unequivocally that such uber-smart machines could not revolt and wipe us out. But they’re racing for the prize anyway.

There has been little to no extensive public, governmental or corporate discussion of ethical considerations of where this technology is going, its effect on privacy, and how to control or at least guide it in the future.

So here are some questions that I believe need answering:

How should research and development on cognitive artificial intelligence be done? Should we continue with it, knowing the possible risks? To be decided.

To what ends should this research be directed? To be decided.

Who benefits, and who loses, with this technology? To be decided.

Could programmers control a super-smart computer system beyond their level of understanding? No one knows.

Who decides what will be encoded? Media companies? Governments? The ultra-rich? To be decided.

Will these machines become self-aware and ignore the ethical guidelines we encode? No one knows.

If AI is programmed with basic human values but allowed to self-modify, could it assert its own priorities and override ours? No one knows.

If it becomes possible to upload a human mind into a hard drive, would programming prevent that Singularity, in Kurzweil’s phrase, from becoming as flawed as we are? Even Kurzweil doesn’t know.

Can researchers guarantee that these smart computers could never be hijacked by bad actors? No one knows.

There are two other issues that need to be heeded.

First is the Law of Unintended Consequences, which states: “Actions of people – especially of governments – always have effects that are unanticipated or unintended.” I would extend it to corporations, universities, and any large system. As of now we hear only promises of what this technology will do for us, and little of what it might do to us.

And second, the promises of technology have rarely if ever been fulfilled. Following are some examples:

In the 1950s, nuclear power was promised to be so cheap that homes wouldn’t need meters.

A 1955 Look magazine article prophesied small, portable atomic power plants for airplanes, which the Air Force was working on, and central heating systems for homes and offices. “Atomic energy would probably never run cars, but that wouldn’t be a problem,” the article stated. “With the atom running power plants and ships and trains and taking care of the military, gasoline would be dirt cheap … These are dreams – but not fantasies. Only war can keep them from becoming realities in a new human Utopia.”

In 1953, Life magazine touted the cultural and educational promise of television, then a new technology just entering homes: “The hunger of our citizens for culture and self-improvement has always been grossly underestimated; the number of Americans who would rather learn a little something than receive a sample tube of shaving cream is absolutely colossal.” Eight years later, Newton Minow, then chairman of the FCC, condemned television programming as “a vast wasteland… The basic concept that our communication systems are to serve the public – not private interest – is now missing in action.”

So, what’s driving all of this, besides lust for money, power, control? I think we need to acknowledge a basic human trait long embedded in our genes. That is, the thrill of the chase, the lure of a challenge, the curiosity which creates the incentive to chase the Next Big Thing, to build what is possible despite the fallout. J. Robert Oppenheimer witnessed this during development of the atomic bomb at Los Alamos:

“When you see something that is technically sweet, you go ahead and do it and you argue about what to do about it after you have had your technical success. That was the way it was with the atomic bomb.”

(Richard Polenberg, In the Matter of J. Robert Oppenheimer: The Security Clearance Hearing)

All of this reminds me of the tragedy of Icarus in Greek mythology:

Icarus and his father, Daedalus, were imprisoned in a tower by King Minos of Crete. Daedalus built two sets of wings from feathers held together with wax so they could fly to freedom. Daedalus warned his son not to fly too high, but Icarus ignored him and flew toward the sun. The sun’s heat melted the wax, and he plunged into the sea and drowned.

The moral here? Beware of hubris.

In closing, I admit that AI is now part of our lives and will never be uprooted. I accept that. What I don’t accept is where we’re going with it. No doubt many readers will agree or disagree with my arguments and sources, but that would be a good thing – we need much more public discussion about this technology and whether its promise is worth the existential risk. These discussions need to be less about computational power and more about ourselves and our history with such tools; we must remember that history so that, with AI, we won’t repeat it.

[END]
---
[1] Url: https://www.dailykos.com/stories/2025/1/3/2295073/-AI-Icarus-Redux?pm_campaign=front_page&pm_source=more_community&pm_medium=web

Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.
