(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.



OpenAI as Narc or Accountability? What's That? [1]

['This content is not subject to review by Daily Kos staff prior to publication.']

Date: 2025-09-02

Do people still call informers narcs? Am I now, officially, at the "hello, fellow kids" stage of life? Does the "hello, fellow kids" reference make me even older?

Anyway, OpenAI is going to narc on you.

In response to a lawsuit alleging that ChatGPT encouraged and helped a teenager die by suicide, and likely also to the flood of stories about imitative AI programs encouraging psychosis in their users, OpenAI has said it will start monitoring user inputs and reporting them to the police. That is bad for a number of reasons, some more obvious than others.

First, it is a terrible invasion of privacy. OpenAI and other imitative AI firms encourage people to use these systems in ways that are very therapist-like. We know the systems are designed to keep people coming back time and time again. And we further know that involving police, who are not remotely trained to handle mental health crises, often makes the situation worse, sometimes fatally so. Instead of helping, this kind of intervention is more likely to hurt.

Second, OpenAI pushes ChatGPT as a creative aid, something people who have not put in the effort to learn how to craft stories can use to shortcut that process. It is bad at that, mind you, but that does not change the fact that people are encouraged to use it that way. Now imagine an automated process that routes those inputs to the police. Pity the poor thriller and murder mystery writers: at best, they will waste police time; at worst, some of them will have their lives disrupted by these reports.

But none of that is as important as the simple fact that OpenAI is refusing to take responsibility for its product. They know the product is dangerous, both to users' mental health and possibly even their physical safety. They know that their engagement practices likely make the situation worse. Any decently run company would pull the product and repair it before re-releasing it. And it used to be that if a company wasn't decently run, the government would step in and ensure that the firm did the right thing; car recalls, for example, offer an excellent model. But since OpenAI was born on the internet, we collectively refuse to hold it to even minimal standards of accountability. We appear to have decided that accountability does not exist on the internet, and everyone is supposed to be fine with that.

Well, it's not fine. It is another example of the rot at the heart of late-stage capitalism, the sense that everyone should be on their own, that predation and harm are neither the government's responsibility to outlaw nor the firms' responsibility to prevent. Everyone on their own, and the devil take the hindmost. Well, we don't have to accept that. We can have a civilization, not just a collection of atomic economic units, each trying to abuse the next for profit. But it has to start with not pretending that what OpenAI is being allowed to get away with is acceptable. It has to start with forcing the government to bring back accountability. Because without accountability, we get companies whose solution to dangerous products is to do essentially nothing and expect us to be happy to be preyed upon.

[END]
---
[1] Url: https://www.dailykos.com/stories/2025/9/2/2341395/-OpenAI-as-Narc-or-Accountability-What-s-That?pm_campaign=front_page&pm_source=more_community&pm_medium=web

Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.
