Vegetative Electron Microscopy, Imitative AI, And Trust [1]
Date: 2025-04-23
Yeah, back to tech stupidity as opposed to political stupidity. Figure we can all use the break.
Vegetative electron microscopy does not mean making electron microscopes out of vegetables, as cool as that would be. Nor does it mean looking at vegetables under electron microscopes. It doesn’t even mean electron microscopes degrading into a vegetative state. It means precisely nothing. It is a digital scanning error that imitative AI has picked up and is perpetuating. And that says a lot about trusting imitative AI in a lot of areas.
First, it is at least highly suggestive evidence that these things plagiarize. This phrase has absolutely no meaning in any discipline. It literally exists only because a scanning system misunderstood the columns on a page and fused a word from one column with a phrase from the next. But it keeps popping up in papers. That, to me, seems a pretty clear indication that imitative AIs do in fact copy their source material. You know, like all the other examples.
But it also highlights that these machines are simply, utterly unreliable. The phrase is nonsense, as anyone with an ounce of common sense could tell. Yes, you might think there is some specialist terminology involved, but a touch of research would disabuse you of that notion. Imitative AI, though, cannot do that research by itself. It has no common sense, no model of the world to verify its answers against. It has no real way to go “what the hell is vegetative electron microscopy? Those words do not go together” and even start the hunt for an answer. Its calculations said that vegetative electron microscopy was a phrase, so here, have some vegetative electron microscopy. Goes great paired with a red wine.
Systems designed to double-check their answers still have a similar problem: they need to know what to double-check, or they must regenerate the answer, compare the versions to see whether anything differs, and then figure out whether any discrepancies actually matter. Even RAG (retrieval-augmented generation) depends on knowing about, and having, the right external data to pull from. It is hardly foolproof and is not likely to catch issues like this. The result still depends on the word calculator and on how well the alternate systems can work out what the hell the word calculator is saying. And none of this is getting better.
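For the code-minded, here is a deliberately toy sketch of why retrieval doesn’t save you. The corpus, the keyword matcher, and every name in it are invented for illustration; no real RAG stack is this crude. The point is that retrieval can only surface what the index already contains, so papers that already carry the scanning error end up “confirming” the nonsense instead of flagging it:

    # Toy illustration, not any real RAG library: retrieval only surfaces
    # what the index already contains.
    corpus = {
        "paper_A": "We performed vegetative electron microscopy on the samples.",
        "paper_B": "Scanning electron microscopy of leaf tissue showed granules.",
    }

    def retrieve(query):
        """Naive keyword overlap standing in for a vector search."""
        terms = set(query.lower().split())
        return [doc for doc, text in corpus.items()
                if terms & set(text.lower().split())]

    # The bogus phrase matches the paper that already contains the bogus
    # phrase, so a "double check" against the retrieved text cheerfully passes.
    print(retrieve("vegetative electron microscopy"))  # ['paper_A', 'paper_B']

If the garbage is already in the reference pile, checking against the reference pile just tells you the garbage is fine.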
Models didn’t use to contain this error. Adding training data that carried the error spoiled them; now they all have this problem. More training data, in this case, led to worse models. And OpenAI even admits that its models are getting worse: it acknowledges that its latest systems produce more wrong answers than before. OpenAI claims that is okay because they also produce more right answers, but how the hell are you supposed to tell whether the answer you got is right or wrong?
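To make the “more data, worse model” point concrete, here is another toy sketch, with made-up counts and nobody’s actual training pipeline. A purely statistical text model is, at bottom, tallying which words follow which; once the scanning error enters the training set, the tallies say the phrase is real:

    # Toy illustration with invented numbers: once the bad phrase is in the
    # training data, a frequency-based model has every reason to emit it.
    from collections import Counter

    clean = ["scanning electron microscopy of vegetative tissue"] * 1000
    tainted = ["vegetative electron microscopy of the samples"] * 50  # the scan error

    def trigram_counts(docs):
        counts = Counter()
        for doc in docs:
            words = doc.split()
            for i in range(len(words) - 2):
                counts[" ".join(words[i:i + 3])] += 1
        return counts

    before = trigram_counts(clean)
    after = trigram_counts(clean + tainted)
    phrase = "vegetative electron microscopy"
    print(before[phrase], after[phrase])  # 0 before the bad data, 50 after

Real models are vastly more complicated than a trigram counter, but the underlying incentive is the same: the phrase is in the data, so the phrase comes out.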
And this is not a theoretical problem. An AI programming assistant company recently had an AI bot lie about the cause of a problem. The bot claimed the error was the result of a new policy, and the company lost users over it. The bot confidently made up some bullshit (the problem was a run-of-the-mill technical issue) and harmed the firm’s bottom line and reputation. No one should trust these things with anything important.
I know that people want imitative AI to be the next big thing. Some people because they want to fire all humans and do things without pesky people. But most regular people just like neat things. And it is kinda neat to have a machine generate a funny meme for you. But that is all these things can reliably be — toys. Even if you put aside the moral questions around their energy use and the theft of other people’s work (which you should not do), they are just bullshit all the way down. And you shouldn’t trust your project, education, or company to bullshit.
---
[1] https://www.dailykos.com/stories/2025/4/23/2318058/-Vegetative-Electron-Microscopy-Imitative-AI-And-Trust?pm_campaign=front_page&pm_source=more_community&pm_medium=web