(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.
. . . . . . . . . .
Imitative AI Is Not a Viable Business or Capitalism Is Not Rational [1]
[This content is not subject to review by Daily Kos staff prior to publication.]
Date: 2025-01-13
Making predictions is usually a mug’s game (why are coffee mugs perceived as bad at games? Dunno, but it feels vaguely discriminatory. I am sure shot glasses have just as much trouble with backgammon and go as your average coffee mug.), but the headline feels pretty accurate. There is just too much in the way these imitative AI systems are built that is untested, unproven, and not actually coming true. The question to me, then, is why are so many companies falling for the hype?
Supporters, and even some worried about the potential of imitative AI, often say something like “this is the worst imitative AI will ever be.” The implication, of course, is that imitative AI will improve significantly in the future. The so-called scaling laws of AI, which state that the more data you add to the model the better it will be, are not laws in any meaningful sense. Gravity is a law — try running off the edge of a cliff and see what happens. But the scaling laws are assumptions that may not be valid.
The gap between ChatGPT 3 and ChatGPT 4 was shorter than the time that has now elapsed since ChatGPT 4's release, and ChatGPT 5 still does not exist. Sora, despite being worked on for a year, is still incapable of maintaining consistency from scene to scene. Given how these systems work, none of this should be surprising.
Imitative AIs have no model of the world. Therefore, they have no way of knowing what is reasonable and what is not, what is life-like and what is not. All they can do is calculate what they think should come next based on what came next in their training data. That is not an insignificant technical achievement, but it is also not intelligence in any meaningful sense. It also implies a ceiling for these systems as, eventually, the calculation accuracy levels off. It makes as much sense to see diminishing returns from data as exponential ones. There are only so many ways to make a valid sentence in each language and at some point, you are going to have a system that can calculate them very well. More data, at that point, is not going to help. Or, worse, you need an increasing amount of data and compute power to eke out improvements.
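The diminishing-returns point can be made concrete with the power-law curve that scaling-law papers typically fit, in which loss falls as a power of training-set size. The sketch below is illustrative only; the constants are invented for the example, not fitted to any real model:

```python
# Illustrative power-law scaling curve: loss = irreducible + coeff / tokens**alpha.
# All constants here are made up for demonstration purposes.

def scaling_loss(tokens, irreducible=1.7, coeff=400.0, alpha=0.34):
    """Hypothetical loss as a function of training tokens."""
    return irreducible + coeff / tokens ** alpha

# Each doubling of the data budget buys a smaller improvement than the last:
prev = scaling_loss(1e9)
for tokens in [2e9, 4e9, 8e9, 16e9]:
    cur = scaling_loss(tokens)
    print(f"{tokens:.0e} tokens: loss {cur:.3f} (gain {prev - cur:.3f})")
    prev = cur
```

Under any curve of this shape, the gain from each doubling shrinks while the cost of acquiring and processing that data grows, which is the asymmetry the paragraph above describes.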
And, of course, the hallucination or bullshit problem is inherent in these systems. Things that cannot tell truth from fiction will not know that they made up a legal citation, because they have no understanding of the actual state of legal knowledge. Hallucinations can be mitigated but never eliminated. That is why so many chat bots, from tax assistants to government legal aids, tell lies. Even in an area of repetitive work with a large body of freely available knowledge to draw from — programming — these limitations undermine the usefulness of imitative AI systems. Use of these systems has been shown to degrade code quality and to introduce security bugs — hardly the productivity improvements that they promise. All of which highlights why these are such bad businesses.
OpenAI is not making money — it is losing billions. Sam Altman recently admitted that even at two hundred dollars a month per seat, OpenAI still cannot make money. There is no reason to believe that will change anytime soon, as there are no efficiencies in model training — you need to shovel more and more data and more and more compute power at the models to improve them. And that is before we even get to the massive externalities, like the environmental costs, that these systems produce. Businesses are already turning away from imitative AI because the productivity gains are simply not present at a significant enough level to justify the cost. A cost, mind you, that is already too low for the providers to be profitable.
So why are so many companies pushing this so hard? Because capitalism is about vibes.
The companies producing these systems need something to convince Wall Street that the abnormal levels of growth they have sustained since the last dot-com bust can continue. They likely cannot — there are few if any virgin markets left from a technology perspective. That explains, in part, why we have cycled through so much bullshit from Silicon Valley. Crypto, the metaverse, NFTs — all the most recent Silicon Valley Next Big Things have been, to be charitable, not big. They are trying to find something that will be a significant growth driver, a market they have yet to enter. Imitative AI is meant to replace human work with computer work, but so far it simply cannot do that in a significant enough fashion to justify the money being spent.
So why do non-tech companies want to participate in this illusion? Vibes, I suspect. It is the reverse of the "no one was ever fired for buying IBM!" trope. Wall Street really wants companies to grow, so it pressures them to try anything that might limit employee costs. The promise of replacing people with machines is the dream of all investors, and the pressure to do so is immense. In addition, executives are people. They are subject to the same desire to go along and the same fear of missing out. If other CEOs are using this, then maybe they should too, because what if the pushers of imitative AI are right? What happens then? Could I lose my job?
Essentially, businesses are not rational. They are driven by human emotions, not by logic or evidence, not really. The pushers of imitative AI need their promises to come true, and so they constantly double down. The users of imitative AI are bombarded with hype whose truth they cannot really parse, and so they react in fear. And so we prop up a business that likely can never be profitable, not out of any reasonable belief that it will be, but because we are afraid of missing out on the possibilities. Vibes over logic, and good money after bad.
[END]
---
[1] Url:
https://www.dailykos.com/stories/2025/1/13/2296532/-Imitative-AI-Is-Not-a-Viable-Business-or-Capitalism-Is-Not-Rational?pm_campaign=front_page&pm_source=more_community&pm_medium=web