https://www.psypost.org/scholars-ai-isnt-hallucinating-its-bullshitting/
"Large language models, such as OpenAI’s ChatGPT, have revolutionized the way
artificial intelligence interacts with humans, producing text that often seems
indistinguishable from human writing. Despite their impressive capabilities,
these models are known for generating persistent inaccuracies, often referred
to as “AI hallucinations.” However, in a paper published in Ethics and
Information Technology, scholars Michael Townsen Hicks, James Humphries, and
Joe Slater from the University of Glasgow argue that these inaccuracies are
better understood as “bullshit.”
Large language models (LLMs) are sophisticated computer programs designed to
generate human-like text. They achieve this by analyzing vast amounts of
written material and using statistical techniques to predict the likelihood of
a particular word appearing next in a sequence. This process enables them to
produce coherent and contextually appropriate responses to a wide range of
prompts.
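As a rough illustration of that next-word prediction, here is a toy
"bigram" model in Python. Real LLMs use transformer networks trained
on vast corpora rather than simple word-pair counts, but the
underlying principle is the same: pick the next word in proportion to
how often it followed the previous word in the training text. (The
code and corpus are illustrative, not from the paper.)

import random
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for the web-scale text an
# LLM is trained on.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count, for each word, how often each other word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed
    # `prev` in the corpus -- the same predict-the-likely-next-word
    # idea, at toy scale.
    words, weights = zip(*follows[prev].items())
    return random.choices(words, weights=weights)[0]

word = "the"
output = [word]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))

Nothing in this procedure ever checks whether the generated sentence
is true; it only asks whether each word is statistically typical of
the training text.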
Unlike humans, who pursue a variety of goals and exhibit a range of
behaviors, LLMs have a singular objective: to generate text that
closely resembles human language.
This means their primary function is to replicate the patterns and structures
of human speech and writing, not to understand or convey factual information.
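Concretely, that singular objective is a training loss: during
training the model is scored only on the probability it assigned to
the word that actually came next in the text, a quantity known as the
cross-entropy. A minimal sketch, with a made-up vocabulary and
probabilities chosen purely for illustration:

import math

# One toy training step: given some context, the next word in the
# training text happened to be "mat". These predicted probabilities
# are invented for illustration.
predicted = {"the": 0.1, "cat": 0.2, "mat": 0.6, "truth": 0.1}
actual_next = "mat"

# Cross-entropy loss: the negative log of the probability given to
# the word that actually appeared. Lower means better imitation of
# the text; factual accuracy never enters the objective.
loss = -math.log(predicted[actual_next])
print(f"loss = {loss:.3f}")

Whether the predicted word happens to be true plays no role anywhere
in this score, which is precisely the indifference to truth that the
authors go on to describe.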
The term “AI hallucination” is used to describe instances when an LLM like
ChatGPT produces inaccurate or entirely fabricated information. This term
suggests that the AI is experiencing a perceptual error, akin to a human seeing
something that isn’t there. However, this metaphor is misleading, according to
Hicks and his colleagues, because it implies that the AI has a perspective or
an intent to perceive and convey truth, which it does not.
To better understand why these inaccuracies might be better described as
bullshit, it is helpful to look at the concept of bullshit as defined by
philosopher Harry Frankfurt. In his seminal essay On Bullshit, Frankfurt
distinguishes
bullshit from lying. A liar, according to Frankfurt, knows the truth but
deliberately chooses to say something false. In contrast, a bullshitter is
indifferent to the truth. The bullshitter’s primary concern is not whether what
they are saying is true or false but whether it serves their purpose, often to
impress or persuade.
Frankfurt’s concept highlights that bullshit is characterized by a disregard
for the truth. The bullshitter does not care about the accuracy of their
statements, only that they appear convincing or fit a particular narrative.
The scholars argue that the output of LLMs like ChatGPT fits Frankfurt’s
definition of bullshit better than it fits the notion of hallucination.
These models
do not have an understanding of truth or falsity; they generate text based on
patterns in the data they have been trained on, without any intrinsic concern
for accuracy. This makes them akin to bullshitters — they produce statements
that can sound plausible without any grounding in factual reality."
Via Wayne Radinsky.
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics