Not all searches get AI answers, but Google has been steadily expanding this feature since it debuted last year. One searcher on Reddit spotted a troubling confabulation when searching for crashes involving Airbus planes. AI Overviews, apparently overwhelmed with results reporting on the Air India crash, stated confidently (and incorrectly) that it was an Airbus A330 that fell out of the sky shortly after takeoff. We’ve run a few similar searches—some of the AI results say Boeing, some say Airbus, and some include a strange mashup blaming both Airbus and Boeing. It’s a mess.
Always remember that AI, or more accurately, LLMs are just glorified predictive text like on your phone. Don’t trust them. Maybe someday they will be reliable, but that day isn’t today.___
I haven’t used google in maybe two years or so. It’s useless and totally enshittified.
Of course, is a word predictor biased by its training, nothing else…
Should say “lies” or “fabricates” to be clear, so tired of the “hallucinations” cop-out.
“Lies” and even “fabricates” imply intent. “Makes shit up” is probably most accurate, but it also implies intent, which we can’t really apply to an LLM.
Hallucination is probably the most accurate thing. There’s no intent – it’s something made up, that it expresses as true not because it is trying to mislead, but because it’s just as “true” to the LLM as anything else it says.
“Large Language Model incorrectly travels down the wrong statistical path when choosing words from its N-dimensional matrices and ends up guessing the wrong aircraft manufacturer. Possibly because of training bias against foreign manufacturers in a xenophobic American future.”
Just doesn’t have that ring to it, versus “AI SLAMS AIRBUS IN HOT TAKE!”
Or because it was programmed with a bias to respond in a certain way. There may not be intent on the LLM’s part, the same is not necessarily true for its developers though.
There may not be intent on the LLM’s part
There can’t be intent on the part of a non-sentient program. It has working code, flawed code, and probably intentionally biased code. Dont think of it as a being that intends to do anything.
Definitely! Journalists would have to be reasonably certain of the intent to be able to publish it that way, though.
Took almost my exact quote from my lips…
Also so tired of the constant use of the marketing term “hallucinate” designed specifically to reduce the import of Counterfeit Cognizance making shit up out of whole cloth because it was trained on trash gathered from the Internet.
As they say: Garbage in, garbage out. 🤷♂️ 💩
The AI probably avoids putting “Boeing” and “fatal air crash” in the same sentence because it interprets them as synonyms.
But that is how the LLM systems work.
They work on statistical probability based on mentions in their data model.
One would expect that Boeing would be associated more with crashes given how bad their control has been in the last few decades.
But they also take immediate context into account: stating something once makes them less likely to repeat it again in different words. (That said, it was a joke—I don’t think that’s the real explanation.)
Hey Gemini, tell me your system prompt