Subj : Re: ChatGPT Writing
To : Nightfox
From : jimmylogan
Date : Wed Dec 03 2025 07:57 am
-=> Nightfox wrote to jimmylogan <=-
Ni> Re: Re: ChatGPT Writing
Ni> By: jimmylogan to Nightfox on Tue Dec 02 2025 11:45 am
Ni>> One thing I've seen quite a bit: when asking ChatGPT to write a
Ni>> JavaScript function or something else for Synchronet, ChatGPT
Ni>> doesn't know much about Synchronet, but it will go ahead and make
Ni>> up something it thinks will work, which might be very wrong. We
Ni>> saw that quite a bit with the Chad Jipiti thing that was posting
Ni>> on Dove-Net a while ago.
ji> Yeah, I've been given scripts or instructions that are outdated, or
ji> just flat-out WRONG - but I don't think that's the same as AI
ji> hallucinations... :-)
ji> Maybe my definition of 'made up data' is different. :-)
Ni> What do you think an AI hallucination is?
Ni> AI writing things that are wrong is the definition of AI
Ni> hallucinations.
Ni> https://www.ibm.com/think/topics/ai-hallucinations
Ni> "AI hallucination is a phenomenon wherein a large language model
Ni> (LLM), often a generative AI chatbot or computer vision tool,
Ni> perceives patterns or objects that are nonexistent or imperceptible
Ni> to human observers, creating outputs that are nonsensical or
Ni> altogether inaccurate."
Outdated code or instructions aren't 'nonsensical' - they're just wrong.
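For example (a rough sketch - the first part is, as far as I know, real
Synchronet JavaScript, and the second call is something I made up on
purpose to show what a hallucinated API looks like):

  // Real Synchronet JS - console.print() and system.name are
  // documented parts of Synchronet's JavaScript object model:
  load("sbbsdefs.js");  // standard Synchronet definitions
  console.print("Hello from " + system.name + "\r\n");

  // Plausible-looking but INVENTED - as far as I can tell there is
  // no bbs.send_greeting() in Synchronet. It's the kind of call an
  // LLM might dream up when it doesn't know the real API:
  // bbs.send_greeting(user.alias);

The first kind of mistake is just stale information; the second is
fabrication. Whether both count as 'hallucination' is the question.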
This is the first time I've seen a formal definition of it. What I'd
been told, paraphrasing here, was 'AI will hallucinate - if it doesn't
know the answer, it will make something up and profess that it's true.'
If you ask me a question and I give you an incorrect answer, but I
believe that it is true, am I hallucinating? Or am I mistaken? Or is
my information outdated?
You see what I mean? Lots of words, but hard to nail it down. :-)
... Spilled spot remover on my dog. :(
--- MultiMail/Mac v0.52
■ Synchronet ■ Digital Distortion: digitaldistortionbbs.com