(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.
Imitative AI Cannot Follow the News, and Why This Indicates Its End [1]
This content is not subject to review by Daily Kos staff prior to publication.
Date: 2025-03-17
Before we begin, it is important not to generalize this study to all imitative AI search results. I have seen a few people in a couple of different places make that claim, and it is not accurate. The study we are about to discuss applies only to imitative AI’s ability to cite news sources, and is not generalizable to other results.
But they do suck terribly at citing news sources.
A study has demonstrated that, when retrieving and citing news content:
We found that…
Chatbots were generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead.
Premium chatbots provided more confidently incorrect answers than their free counterparts.
Multiple chatbots seemed to bypass Robot Exclusion Protocol preferences.
Generative search tools fabricated links and cited syndicated and copied versions of articles.
Content licensing deals with news sources provided no guarantee of accurate citation in chatbot responses.
Musk’s anti-woke imitative AI system Grok, you will be unsurprised to learn, did the worst.
There are two issues here, I believe. The first is that the premium systems, the ones people pay the most for, were the most likely to give you confidently wrong answers. That is to say, the more you paid, to a certain extent, the more likely you were to be handed bullshit. That is terrible for these systems as a business. Companies are already leery of the output of these systems — finding out that you don’t get better results when you pay extra for, well, better results is not confidence inspiring. It is, however, a great way to lose customers.
Again, these results may not apply to general queries or queries in specific domains, but they are suggestive. At a minimum, in addition to not trusting output you have not personally verified, already a brake on the productivity possibilities of these systems, you are probably justified in seeing whether you can get the same results or better out of a free or lower-priced system. Not, I assume, music to the sales teams’ ears.
And this ties into our second problem. Fixing this specific issue should be relatively simple. You run the citations in the chatbot’s output through a LexisNexis query (LexisNexis is a database of news articles) and validate that the cited articles do, in fact, exist (the study did something similar, but manually). The problem is that such checking would be costly. Even if you found a less expensive but similarly reliable method, you would still be adding costs to your results. And adding costs is the one thing that these firms simply cannot do.
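The validation step described above is not exotic. A minimal sketch, assuming a hypothetical local index standing in for a licensed archive such as LexisNexis (the real service’s API is not shown here), might look like this: extract the URLs a chatbot cites and flag any that the archive cannot confirm.

```python
import re

# Hypothetical stand-in for a news-article archive lookup.
# In practice this would query a licensed database such as LexisNexis.
KNOWN_ARTICLES = {
    "https://example.com/news/chatbot-citation-study": "Chatbots misattribute news",
}

# Rough pattern for URLs appearing in free text.
URL_PATTERN = re.compile(r"https?://[^\s)\]]+")

def validate_citations(chatbot_output: str, archive: dict) -> dict:
    """Map each URL cited in the output to whether the archive confirms it exists."""
    return {url: url in archive for url in URL_PATTERN.findall(chatbot_output)}
```

Run against a response that cites one real and one fabricated article, the fabricated link comes back flagged as unconfirmed — exactly the check the study performed by hand. The expensive part is not the code; it is paying for every archive lookup at scale.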
Imitative AI is not a profitable business. OpenAI, by far the leader, lost about 5 billion dollars last year. These models are expensive to train and run. The fact that OpenAI is openly floating an ill-defined “PhD like” agent for 20 grand a month — a cost far higher than hiring an actual PhD — gives you some sense of the economic difficulties these firms are under. Add in the fact that performance gains are not really happening, or, at best, happening after a huge expenditure of time and money for incremental improvements, and you have a business that simply cannot take the steps necessary to be reliable. Because doing so means they cannot be profitable.
So while this study does not apply to every kind of search, it does highlight that imitative AI companies are not taking simple steps to make their products accurate and reliable. In fact, the more you pay, the worse your results in many cases. None of that suggests a sustainable business. Right now, it really does appear that the only thing keeping these companies alive are venture capital funds driven by the ideology that “workers must be punished”.
And while that may be a coherent view of the world, it is not a coherent way to run a business.
[END]
---
[1] Url:
https://www.dailykos.com/stories/2025/3/17/2310608/-Imitative-AI-Cannot-Follow-the-News-And-Why-This-Indicates-its-End?pm_campaign=front_page&pm_source=more_community&pm_medium=web
Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.