(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.



The wave of 'large language model' AI isn't just a threat to jobs, it's the end of objective facts [1]

By Daily Kos Staff

Date: 2023-03-22

The explosion of AI over the last year has been centered on something called “large language models.” Rather than neural networks laboriously taught a series of rules for dealing with a situation, these are programs that are fed large data sets from which they derive the rules themselves in the form of a statistical model. They learn how to respond to the idea of a “cute kitten” or a “nuclear threat” in the same way they learn the rules of grammar—by looking at thousands, or millions, or thousands of millions, of examples.

One of the things these models have consistently learned is how to tell a convincing lie. For example, look at this article from Scholarly Kitchen, in which ChatGPT was asked about the connection between tweets and scientific citations.

Human: Do tweets increase citations to scientific articles?

ChatGPT: There is evidence to suggest that tweets can increase citations to scientific articles. Studies have shown that social media, specifically Twitter, can be an effective way to disseminate scientific research and increase its visibility among researchers and the general public. A study by the American Association for the Advancement of Science (AAAS) found that articles that were tweeted about had an average of 9% more citations than articles that were not tweeted about.

In this case, the person asking the question happened to be an expert who had done research in exactly this area. He not only knew the answer was incorrect, he was able to use his contacts to establish that there was no such study. But how many people would have known this? How many would have accepted this at face value, given that ChatGPT didn’t just make a claim but backed it with what it said was a study from one of the most reputable scientific organizations?

Examples like this have been repeated on topic after topic. Trained on the internet, these models have learned to make information appear authoritative, even when it’s pulled from a digital arse.

Now, add this article from The Verge, and you get a glimpse at how bad this is going to get.

Right now, if you ask Microsoft’s Bing chatbot if Google’s Bard chatbot has been shut down, it says yes, citing as evidence a news article that discusses a tweet in which a user asked Bard when it would be shut down and Bard said it already had, itself citing a comment from Hacker News in which someone joked about this happening, and someone else used ChatGPT to write fake news coverage about the event.

This is an AI citing a lie created by another AI, which was citing an article from another AI, based on something that the last AI was instructed to write as a joke. If you went onto the internet right now, asked a question, and got an answer that included a citation, that included a citation, that included a citation … how far would you really try to unravel things to determine if you were being told the truth?

Finally, consider that Google, Microsoft, and seemingly every other company on the planet is, at this moment, rushing to add these large-language-model-based AIs to everything. Every search engine. Every word processor. Every email app. Every slide deck, spreadsheet, and note taker. Every one of them is going to offer to put things together, provide a summary, add some context, write a few paragraphs … and at every level, they will wrap another layer of obfuscation, false authority, and just plain-vanilla lies.

The surge of these AIs isn’t just a threat to anyone who works with images, words, or numbers—which is, essentially, everyone—it’s a loaded bomb ready to blow away our last concept of a shared reality in which there is an agreed-upon set of authoritative facts.

And our grip on that reality was already slipping.

[END]
---
[1] Url: https://www.dailykos.com/stories/2023/3/22/2159676/-The-wave-of-large-language-model-AI-isn-t-just-a-threat-to-jobs-it-s-the-end-of-objective-facts

Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.
