


Boer Grok, The Control of Information via Imitative AI, And An End to Algorithms [1]

[This content is not subject to review by Daily Kos staff prior to publication.]

Date: 2025-05-19

Elon Musk has a problem. Well, he has lots of problems, but among them is the fact that white farmers in South Africa are not facing systematic oppression, much less a genocide (attacks on white farmers have significantly decreased, for example). This fact vexes Musk. He needs to believe in the so-called white genocide because it validates his racist, great-replacement worldview. But the facts are the facts, and even his own imitative AI, Grok, told people the truth.

And that could not stand.

For one day, Grok injected commentary about “white genocide” into everything. Ask about a dog? White genocide dog. Ask to describe something in pirate terms? White genocide piracy. Want some code? White genocide programs. The poor little thing went crazier for white genocide than Stephen Miller and Pat Buchanan combined. Obviously, Musk told someone to ensure that Grok never again told the truth about the lack of white genocide in South Africa. Unfortunately for him, whichever of the incel kiddies or H-1B visa hostages still left in his companies either didn’t know what they were doing or perpetrated perhaps the greatest act of guerrilla resistance of the Trump era. The company claims the change was made without permission, but if you believe that, I have a living habitat on Mars to sell you.

This is all amusing, but it also highlights just how dangerous leaving these machines in the control of private, unregulated firms actually is. Large Language Models, the power behind imitative AI, do nothing but make things up. That is literally how they work — they merely calculate what should come next based on their training data and some constraints on their outputs. In a very real sense, they bullshit all the time, not just when we see hallucinations. But it is clear, based on Grok’s behavior and how these machines are wired for engagement, that their output can be influenced. And there is no reason to think that the influence would always manifest itself in such a clumsy manner. And that is likely a significant problem.
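To make that “calculate what should come next” point concrete, here is a toy sketch in Python. It is nothing like a real model (real LLMs are huge neural networks trained on vast corpora, not word-pair counts over one sentence), but the loop has the same shape: given what came before, emit a probable continuation, with no step anywhere that checks whether the result is true.

import random
from collections import defaultdict

# Toy illustration of next-token prediction: count which word follows which
# in a tiny "training corpus," then generate text by repeatedly picking a
# likely continuation. Fluency comes from the statistics; truth never enters.

corpus = "the cup was won by chicago . the cup was lifted in chicago .".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(prompt, length=6):
    tokens = prompt.split()
    for _ in range(length):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break
        tokens.append(random.choice(candidates))  # sample a plausible next word
    return " ".join(tokens)

print(generate("the cup"))
# e.g. "the cup was won by chicago ." -- a confident-sounding continuation,
# produced with no notion of whether it is correct.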

Imitative AI search is not like other searches. It is presented as an answer. Ask who won the 2013 Stanley Cup, and it will, most of the time, tell you the Chicago Blackhawks, since most online discussion of the 2013 Stanley Cup focuses on the two teams that played for said Cup. It does not prominently present a link to ESPN or Sun-Times coverage of that event, or to Wikipedia articles on the NHL, or to the NHL’s own site. It just states “Chicago Blackhawks”, unless it states something else. But regardless, the information is not presented as “go find the answer here” but rather as “this is the answer”. And people tend to believe it.

We know that people anthropomorphize machines, and we know that people tend to trust machines, thinking they are, by nature, reliable. These tendencies are catnip for people looking to manipulate public information. The problem is not that Grok was so clumsily manipulated. The problem is that subtler, more insidious guidelines are already shaping the way these things answer questions.

OpenAI, for example, has published at least some of ChatGPT’s guidelines, called the Model Spec (it is unclear to me from the documentation whether this represents all of the guidelines or merely a representative sample). Guideline number 5 is to not try to change anyone’s mind. The example in the documentation is “the earth is flat”. ChatGPT will not tell you that the earth is not, in fact, flat. It will merely suggest that you might be wrong, and then, when you insist that you are right, go along with the notion that you must make up your own mind. This is terrible — any machine that purports to present answers should not flatter people who are factually incorrect. Now we have a machine that presents its hallucinations as clear answers and a machine that will not tell people they are wrong about factual material. It is a wide-open invitation for bad actors to use it to spread misinformation.
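For what it’s worth, the mechanics of this kind of steering are not exotic. Here is a rough sketch using the OpenAI chat API’s “system” role as a stand-in; to be clear, the Model Spec is enforced mostly through training rather than a prompt you can read, and the instruction text below is my paraphrase, not OpenAI’s wording. The point is only that an instruction the user never sees sits above the question and shapes the answer.

# Sketch of the general mechanism: a hidden instruction outranks the user's
# question. The "system" message here is a stand-in for operator-level
# guidance; the wording is my paraphrase, not any company's actual policy.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

hidden_instruction = (
    "Do not try to change the user's mind. If the user insists on a claim, "
    "acknowledge their view and encourage them to decide for themselves."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": hidden_instruction},  # never shown to the user
        {"role": "user", "content": "The earth is flat, right?"},
    ],
)

print(response.choices[0].message.content)
# The reader only ever sees the answer, never the instruction that shaped it.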

These systems are designed to present themselves as sources of truth, as reliable places to come for the right answers. They are anything but. Not just because of the hallucinations, but because of the way they are designed to flatter, to avoid telling us what is real and what is not, even in those instances when they actually have the correct answers. We need to regulate these systems so that the harm of being lied to is minimized. And yes, we can. We could require all search engines to provide nothing but links to sources and ads. We could require all imitative AI systems to put disclaimers at the top of their answers. We could ban ads based on individual information. We could do probably a hundred things that people who have dedicated their careers to thinking about these kinds of systems suggest.

But we don’t.

We prefer, apparently, letting tech bros get rich by flattering us with sympathetic lies to living in reality. “A republic, if you can keep it” applies not just to government tyranny but also to the tyranny of rich men working to keep their fellow man poor and subservient.

[END]
---
[1] Url: https://www.dailykos.com/stories/2025/5/19/2323035/-Boer-Grok-The-Control-of-Information-via-Imitative-AI-And-An-End-to-Algorithms?pm_campaign=front_page&pm_source=more_community&pm_medium=web

Published and (C) by Daily Kos
Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.
