Day 343 – Humans Hallucinate Too

I was talking to someone today about AI, specifically large language models (LLMs). They commented, “I cannot use that because AI hallucinates too much; I cannot trust it.” I reflected on this for a while. I have heard the objection before, but it got me thinking: if AI is hallucinating, then humans are hallucinating too and, in fact, are probably the cause of the hallucination. Although what we are doing with these large models is incredible, their primary function is not that complicated to understand. Effectively, the model’s parameters have been trained on a vast amount of existing human content. A generative model then produces output by predicting the next sequence of text based on probabilities. If I were to input “The quick brown fox” with no other explanation, it would almost certainly predict “jumps over the lazy dog” as the words that follow.
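To make that idea concrete, here is a minimal, toy sketch of next-word prediction as a probability lookup. The tiny probability table and the `predict_next` helper are invented purely for illustration; a real LLM learns billions of parameters rather than an explicit table like this, but the “pick the most probable continuation” step is the same spirit.

```python
# Toy illustration of next-word prediction as a probability lookup.
# The table below is invented for illustration only; real models learn
# these probabilities implicitly across billions of parameters.

next_word_probs = {
    ("the", "quick", "brown"): {"fox": 0.97, "bear": 0.02, "dog": 0.01},
    ("quick", "brown", "fox"): {"jumps": 0.95, "runs": 0.04, "sleeps": 0.01},
    ("brown", "fox", "jumps"): {"over": 0.99, "up": 0.01},
}

def predict_next(context: tuple[str, ...]) -> str:
    """Return the most probable next word for a given three-word context."""
    candidates = next_word_probs.get(context, {})
    if not candidates:
        return "<unknown>"
    return max(candidates, key=candidates.get)

words = ["the", "quick", "brown"]
for _ in range(3):
    words.append(predict_next(tuple(words[-3:])))

print(" ".join(words))  # -> "the quick brown fox jumps over"
```

Notice that nothing in this sketch knows whether a continuation is *true*; it only knows which continuation is most likely given the text it has seen. That gap is exactly where hallucination lives.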

An AI hallucination occurs when an artificial intelligence model produces output that is not grounded in reality or in the data it was trained on. In other words, the AI generates responses or facts that are incorrect, fabricated, or irrelevant, even though it presents them confidently as true. Hallucinations happen because these models generate text based on patterns and probabilities learned from large datasets but don’t have a true understanding of facts or logic. This can lead to issues like providing false information, creating imaginary references, or misinterpreting questions.

I have firsthand experience with this. I was writing an article and decided to take a shortcut: I asked an AI engine about a reference I had in mind, a quote from a particular explorer. I was lazy and did not want to look it up myself, so I asked an LLM. The response sounded factual. It used proper names I recognized, places that seemed to fit, and even the title of the book the quote supposedly came from. I used this background material in my article when discussing the quote. Unfortunately for me, a historian chanced upon the article and gave me a severe dressing-down. The “facts” were completely wrong and not even remotely true. They sounded good and were related to the topic, but they were not based on reality. That is a hallucination.

Later, I asked the model what confidence it had in the response. It came back and said it had very little confidence and, in fact, that the answer was completely wrong. Then it proceeded to give me the correct information: the correct book, the correct time period, and the correct explorer behind the quotation. So I got burned and learned a hard lesson, one I will not repeat. I will still use AI; it is a massive productivity tool, but I always check facts now and compare them against sources. If I can confirm a claim with two or three reputable sources, then I have some degree of confidence. I have even found some tools that help with this process.
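As a rough sketch of the habit I follow now: the claim, the source list, and the recorded results below are hypothetical placeholders, not a real fact-checking API; in practice this step is manual reading, and the only real rule is the two-or-three-source threshold from the paragraph above.

```python
# Rough sketch of the "confirm against two or three reputable sources" habit.
# The claim and the source checks are hypothetical; in practice these are
# notes from manually reading each source, not an automated lookup.

claim = "Explorer X wrote this quote in Book Y."

# True = the source supports the claim; False = it contradicts or omits it.
source_checks = {
    "encyclopedia entry": True,
    "published biography": True,
    "museum archive": False,
}

confirmations = sum(source_checks.values())

if confirmations >= 2:
    print(f"{confirmations} reputable sources agree: reasonable confidence.")
else:
    print("Fewer than two confirmations: keep researching before publishing.")
```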

So, back to my story. The person who told me this was indeed correct; hallucinations happen. However, it led me to an interesting question: do humans do the same thing? I certainly do. I always try to predict and preempt the next thought. I often hallucinate understanding and treat something that came to mind as fact, even though I have no verifiable way to know it is true. Heck, I will tell myself a story so often that I begin to believe it happened. I will exaggerate a story, fabricate a story, and modify a story to fit my narrative. Do not judge; this is a human characteristic shared by all of us. Yet when an AI engine does the same thing, we freak out. Why? Is it because we expect machines to be perfect, or is it frightening that the model reflects back to us how humanity behaves?

Come to think of it, most of these large models are trained on Internet data or on collections of human writings and images. If an engine trained on our writing predicted what we would normally write and told us something that is not true, would that be a hallucination or just a highly accurate reflection of humanity? This just dawned on me: if we do not want our AI concoctions to hallucinate, then maybe we should start telling the truth more often. Perhaps we should make meaningful contributions to the “Internet” rather than posting sarcastic memes of our pet cats. So should we blame the AI that was trained on our data, or blame the source: ourselves?

In conclusion, while it’s easy to point fingers at AI for its hallucinations, we must recognize that these models are merely mirroring the data we provide. AI doesn’t invent from thin air; it predicts based on patterns derived from human content. If it “hallucinates,” perhaps it is because we, as humans, often do the same. Our tendency to embellish, exaggerate, or even misremember influences the very data AI learns from. This reflection should inspire us not just to improve AI but to be more truthful and intentional in the content we produce. After all, if we desire accuracy from AI, we must first model it ourselves.

Of course, I still like cat memes. Keep those coming.
