(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.



Imitative AI Cannot Reason, Demonstrating the Futile Nature of Their Business [1]

[This content is not subject to review by Daily Kos staff prior to publication.]

Date: 2025-06-16

ChatGPT lost a chess game to an Atari 2600.

Okay, ha ha. We can all laugh, and we should, but this really demonstrates just how poor a business imitative AI really is.

ChatGPT lost because it cannot reason its way out of a wet paper bag. The Atari program was specifically designed to play chess, and so it beat the tar out of the word calculator. Now, some people may say “Aha! ChatGPT was not specifically designed to play chess, so that doesn’t prove anything!” Well, yeah, it kind of does. It proves that ChatGPT is terrible at problems that require generalizing from the specific, including situations where it lacks similar training data or has to extrapolate from incomplete training data. And that is a problem, because the entire imitative AI business is premised on two things: imitative AI eventually turning into general intelligence, intelligence that can do the kinds of reasoning that humans can do; and imitative AI replacing enough workers in enough industries that these firms finally become profitable.

The idea that simply making bigger and bigger imitative AI systems by training them on more and more data will lead to general intelligence is fairly silly. Not only do these things not reason well, as the chess example shows (read Gary Marcus, by the way, for a great look at a study showing imitative AI systems falling down at medium-level complexity), they are getting worse as they get bigger. General intelligence has probably, at least for the leadership, been little more than marketing hype: a way to convince people to keep giving them money until the promised unicorn farts appeared.

And they need those unicorn farts, or at least the money, because they do not make money and likely cannot at their current usage rates. They lose money on every use of their systems, sometimes quite a bit. The only way they can make money is by replacing entire industries; that is the only price point at which people can be convinced to pay them what they need to break even. And two-minute ads that multiple human beings had to supervise are not going to cut it. A government bailout would help, but even ideological allies in the government will not be enough to bring imitative AI to profitability. These firms need to consume entire industries, and they likely cannot do that unless imitative AI can reason.

And it cannot reason. It cannot handle tricky situations; it cannot create, only imitate; it cannot even promise to get things right. Everything it produces must be checked and double-checked by an expert. That is not the profile of a system that can replace people, much less replace people on the scale these firms need to be profitable. There will be disruptions, of course, but when you hear someone claim that half of all white-collar entry-level jobs are going to disappear, you need to stop and ask how. Because right now, imitative AI is playing checkers, at best, in a chessboard world.

[END]
---
[1] Url: https://www.dailykos.com/stories/2025/6/16/2328191/-Imitative-AI-Cannot-Reason-Demonstrating-Futile-Nature-of-Their-Business?pm_campaign=front_page&pm_source=more_community&pm_medium=web

Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.
