(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.
The story behind the story is AI generated news [1]
[This content is not subject to review by Daily Kos staff prior to publication.]
Date: 2025-04-13
I clicked a Google link to a news story in which the Chinese admitted that they have been conducting cyberattacks against U.S. infrastructure because of our continued support of Taiwan.
It was on Investing.com. At the end was a disclaimer that said it had been "generated with the support of AI."
Just what does "with the support" mean? And what does "reviewed by an editor" mean?
The Washington Post has a both-sidesing AI bot at work. It also has one that goes through comments, writes a summary of them and categorizes them. What if the Investing.com editor is also AI?
I assume that T&C means their Terms and Conditions, but they are nowhere to be found on the site. The best I could find was an ABOUT US page and an investment risk disclaimer.
The article references a Wall Street Journal report, and indeed there was one. So what these folks are doing is letting AI scrape websites for them and write the article, then having an editor review it. I assume they have a subscription to the WSJ. But does a WSJ subscription allow you to do this? I took a look. There is nothing to prevent them from doing it.
There are some state and federal AI laws, but they have nothing to do with content, data scraping or review. There are proposed laws, but nothing yet finalized. All you can do is sue for copyright infringement.
We have seen instances where AI has just plain made stuff up. Questions can be answered with lies.
Then I found apps available that all generate news stories.
There were so many I had to stop listing them. To have that many AI news generators just made me cringe and worry what will happen to trusted news sources. The Washington Post already left the building.
There are ones that not only generate the news, but also a realistic newscaster to read it on the screen.
There are channels that make no bones about the newscaster being artificial. There are ones in South Korea and India, one called Channel 1 where everything is AI, and even Fox News 26 in Houston, which uses AI Studios.
The Guardian did a story on Il Foglio, a newspaper in Italy with a section that is completely AI generated. The article noted that the stories contained no quotes from people. The last page was AI-generated letters to the editor, which seems absolutely ridiculous: AI writing letters to itself.
Poynter noted some things The Guardian missed, like misspellings and misinformation. Reuters was misspelled as "Redutrs." An article about Donald Trump's lies had inaccuracies and lacked attribution for anything. The paper credits ChatGPT and Grok under some images, but doesn't say what AI was used to write the articles. Also missing is an AI ethics policy. Just like Investing.com.
The Washington Post has an AI policy, tucked in at the very end after every other news-gathering topic and method you can think of: "We are transparent about how and when we use AI."
They are using it to sift through lots of text, without saying where that text comes from, and to go through thousands of images. I have to ask what the purpose of that is. They say they will not use AI to generate pictures or videos without disclosing its use; I'm trying to figure out what a valid reason to ever use it would be. We've had a discussion here about the use of AI on images and videos. Every AI has to be trained on something, and whatever it's trained on, a human being had to create. It's like the old saying about a thousand monkeys with typewriters coming up with Shakespeare: it doesn't happen. AI only simulates that it is happening. Then at the end, The Washington Post almost claims innocence.
"We believe in the value of our intellectual property and will protect the Integrity of our work wherever it is used, in new settings and old."
You will defend your own human-generated material, but you're using other people's by using AI. Okay for me, but not for thee.
Here's one you probably missed: Google dropped its policy of not using AI for weapons or surveillance. They do a really good song and dance for public relations purposes, but that doesn't change the fact that they did drop the policy. There are a lot of articles about it, but the topic here is news and the use of AI in it.
Some of the best thoughts come from the U.S. Copyright Office, in a paper titled Copyright and Artificial Intelligence. Part One (a 72-page PDF), put out last July, covers "Digital Replicas." Part Two, called Copyrightability (a 52-page PDF), came out at the end of January.
With all the AI news generators, and even ones that are proud to have artificial avatars speaking the news, I see a danger to accuracy. I wonder what happens when there is no attribution of sources. It's like a social media influencer giving an opinion: meaningless entertainment, not journalism.
There are so many problems with the whole process of material theft to create the AI model in the first place. It's akin to hacking into computers to get what you want. There's a right and wrong to AI. Doing it with news just seems like a bad idea on the face of it.
---
[1] URL: https://www.dailykos.com/stories/2025/4/13/2316080/-The-story-behind-the-story-is-AI-generated-news?pm_campaign=front_page&pm_source=more_community&pm_medium=web
Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.