(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.
Vibing Our Way to Stupidity. Or Language is Not Intelligence. Or Turing Was Wrong [1]
['This content is not subject to review by Daily Kos staff prior to publication.']
Date: 2025-07-29
Yes, this is a post about imitative AI, but it’s more just a reflection on how human beings fool themselves into believing that chatbots are intelligent or are a step on the way to intelligence.
What I am about to discuss is old-ish news, but since it seems to happen fairly regularly, it got me thinking about what humans take as a marker of intelligence. The founder of Uber has said that he feels working with a chatbot has brought him close to making new discoveries in theoretical physics. Musk thinks that Grok producing claims about materials science that he has never seen in books (or, to be more generous, claims that Musk simply doesn’t know how to look up in real books) is an indicator that Grok is thinking. This is nonsense, but I think it might be revealing nonsense.
First, this is absolutely nonsense. Large Language Models, the systems at the heart of imitative AI programs like Grok and ChatGPT, are not reasoning about anything. They literally cannot. All they are doing is calculating what should come next based on their training data. They are no more capable of coming up with a new idea than I am of being the starting center for the Blackhawks. They can only string together what is in their training data — and there are obviously no new ideas in material that already exists. Some of this is just desperation and stupidity — they want these things to be replacement humans, so they see what they want to see in them. But I think another factor at play may be the emphasis humans place on communication.
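The "calculating what should come next" mechanism can be illustrated with a deliberately tiny sketch. This is a hypothetical toy, not how any real LLM is built: real systems use neural networks over enormous contexts and corpora, where this uses word-pair counts over one sentence. But the principle is the same, and the point survives the simplification: nothing in the loop understands anything.

```python
import random
from collections import defaultdict

# Toy next-token predictor: count which word follows which in a tiny
# "training corpus", then generate text by sampling from those counts.
# Every word it can ever emit already appears in the training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Bigram transition counts: word -> {next_word: count}
transitions = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = transitions.get(out[-1])
        if not options:  # dead end: this word was never followed by anything
            break
        words = list(options)
        weights = [options[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 6))
```

The generator will happily produce fluent-looking fragments like "the cat sat on the mat", but it can only recombine what it was trained on; it cannot originate.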
Terry Pratchett called humanity the storytelling ape. And it fits — one of the things that separates us from the other animals is our ability to pass information down across time, to tell stories, whether they be morality lessons or lessons on how best not to be eaten by the local tigers. We naturally, then, equate communication, and the language that carries it, with intelligence. We trust people who speak convincingly, even if their arguments are poor. Alan Turing, probably an actual genius and one of the founders of the study of artificial intelligence, set as his benchmark for intelligence a communication test. And I think he was wrong.
One of the lessons of LLMs is that faking coherence is not that difficult. Even when LLMs hallucinate, they sound convincing while doing so. In fact, that is part of the problem — they lie so easily because, to them, a lie is the same as the truth. Lies and truths alike are merely probabilistically generated strings of tokens the model has no basis for understanding. But the string of tokens looks like real language generated by real thought. And so it reaches into our little monkey brains and pulls the lever that says "smart!"
I am likely not saying anything new, but this is surely why so many people treat these word calculators as friends and therapists. They sound, especially to people who have little experience with either, like real friends or real therapists. It is easy to fall under their sway, since all of human history and evolution has pointed toward "good with words" meaning intelligent. This is obviously an incredibly dangerous flaw in the system.
We aren’t, as a species, equipped to deal well with false coherence. It trips too many of our biases and lulls us into a sense of comfort and familiarity when we are in fact dealing with something disturbing and unique. These systems fake intelligence in a way that we are unusually susceptible to, and as a result, their mistakes and amorality can do serious harm to people. As I said, I doubt I am the first person to realize this, but I do think we need to focus quite a bit more on it. We allow these systems to run free, all the while knowing that they have hacked their way into convincing us to take them seriously as a source of information rather than as a fancy Clippy.
That flaw means we should be much more serious both about regulating these systems and about rethinking what intelligence really means. Intelligence cannot be predicated on coherence and communication. It must expand, I think, to include the ability to model the world, to tell reality from fiction, and to put real intention behind the communication. The Turing test has failed us. It is time we thought of another.
---
[1] Url:
https://www.dailykos.com/stories/2025/7/29/2335610/-Vibing-Our-Way-to-Stupidity-Or-Language-is-Not-Intelligence-Or-Turing-Was-Wrong?pm_campaign=front_page&pm_source=trending&pm_medium=web
Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.