(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.



Imitative AI is About Punishing Workers. [1]

['This content is not subject to review by Daily Kos staff prior to publication.']

Date: 2025-03-12

OpenAI, the money-losing company behind ChatGPT and other assorted imitative AI programs, has suggested that it could offer a model that does what a PhD does for only $20,000 a month. There is a lot wrong with this idea — what does "PhD level" even mean to them, for example, and how much indemnity for wrong answers or hallucinations will OpenAI provide? But what really sticks out to me is the price. It is significantly higher than what you would pay an actual PhD to come work for you in most fields. If OpenAI thinks there is a market for this, it must think that the market exists only to punish employees.

And they may be correct.

Now, it is important to note that this may just be a trial balloon from a desperate company. The information is not official, and OpenAI desperately needs to find some way to make money. Right now, it loses money on essentially every query, paid or free, because of the enormous costs of developing and running its models. This might just be an attempt to see how much it can really get out of its customers. But I am afraid it might have a ready market for this nonsense, at least for a while.

COVID broke the brains of a large portion of tech business leadership. It was the first time in a long time that anyone had told them no, especially the government. No, you cannot go out in public without a mask. No, you cannot have your employees jammed into offices during an airborne pandemic. No, you cannot go certain places or do certain things without a vaccination. No, you cannot think of just yourself and your bottom line during a society-wide emergency.

That was bad enough. But when their employees started to demand that the company use their efforts for good, or at least not use their efforts to be evil, that was really a step too far. Since then, they have made it abundantly clear that workers need to learn their place again. And that hatred is what I think OpenAI is counting on.

There is no universe in which $20,000 a month for an imitative AI program is a good use of money. A PhD-level assistant implies that you will be using it to solve PhD-level problems (God help me, though, you just know that there will be some idiots who buy this and ask it for a business plan for “Uber, but for buses”). Your first problem is that you can hire PhDs for substantially less than this tool costs. Your second problem is that these tools lie. I’m sorry — these tools hallucinate. But whatever your term of art for the bullshit problem is, the problem is real and pervasive. A recent study showed that 51% of AI searches had “significant issues of some form” and 13% of them contained completely made-up nonsense. If your PhD made up bullshit 13% of the time, I am pretty sure you wouldn’t keep paying them 20 grand a month.

The above means, of course, that you need to hire people who understand the output at a sufficient level to validate the agent’s work. And if you are truly solving PhD-level problems, that almost certainly means you need PhD-level employees. Maybe you think you can get away with having people with less expertise in the area validate the work, but if they can validate the work, why can’t they help you produce it in the first place, cheaper than OpenAI can? It is actually harder to validate work that someone else produced than to produce good work from the start, so any efficiencies are likely to be small at best, illusory at worst. And please do not tell me about having it available 24/7. I promise you, the people who need help with PhD-level problems are not burning the midnight oil. No, for almost every business, this kind of program is a money loser.

The only reason, then, that you would pay these prices is either that you have passed through functional stupidity and gone so far out the other side that you cannot be trusted even with a short piece of string lest you hurt yourself, or that you are completely ideological. You would rather hurt your business and your shareholders than pay a human being to do important work. Because if the work is important to you, and only certain people can do that work, then those people have the power to resist being treated badly and having their work used to make the world worse. And that simply cannot be tolerated.

This kind of extravagantly unjustifiable imitative AI is not about efficiency or accuracy or business savvy. It exists entirely to signal to other bosses and to employees that the days of being treated like a human being are over, no matter how much it hurts the businesses that buy these hallucination machines.

After all, what’s a little lost capital in the face of putting people in their places?

[END]
---
[1] Url: https://www.dailykos.com/stories/2025/3/12/2309421/-Imitative-AI-is-About-Punishing-Workers?pm_campaign=front_page&pm_source=more_community&pm_medium=web

Published and (C) by Daily Kos
Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.

via Magical.Fish Gopher News Feeds:
gopher://magical.fish/1/feeds/news/dailykos/