Not all searches get AI answers, but Google has been steadily expanding this feature since it debuted last year. One searcher on Reddit spotted a troubling confabulation when searching for crashes involving Airbus planes. AI Overviews, apparently overwhelmed with results reporting on the Air India crash, stated confidently (and incorrectly) that it was an Airbus A330 that fell out of the sky shortly after takeoff. We’ve run a few similar searches—some of the AI results say Boeing, some say Airbus, and some include a strange mashup blaming both Airbus and Boeing. It’s a mess.

Always remember that AI, or more accurately LLMs, are just glorified predictive text, like the autocomplete on your phone. Don’t trust them. Maybe someday they will be reliable, but that day isn’t today.
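
If you want to see what I mean by predictive text, here’s a deliberately dumbed-down sketch in Python: a bigram model that just counts which word tends to follow which. Real LLMs are neural networks trained on vastly more text, but the basic job is the same, guessing the next token from statistics rather than from any understanding.

    # Toy "predictive text": pick the next word by how often it followed
    # the previous word in a tiny corpus. This is only an illustration of
    # the idea, not how any production LLM is actually implemented.
    from collections import Counter, defaultdict
    import random

    corpus = "the plane took off the plane landed safely the crash was reported".split()

    # Count how often each word follows each other word (bigram counts).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        counts = following.get(word)
        if not counts:
            return None
        words, weights = zip(*counts.items())
        # Sample in proportion to observed frequency; no understanding involved.
        return random.choices(words, weights=weights)[0]

    print(predict_next("the"))  # e.g. "plane" or "crash", weighted by the counts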

    • otp@sh.itjust.works · 11 points · 2 days ago

      “Lies” and even “fabricates” imply intent. “Makes shit up” is probably most accurate, but it also implies intent, which we can’t really apply to an LLM.

      Hallucination is probably the most accurate term. There’s no intent – it’s something made up that the model presents as true, not because it is trying to mislead, but because it’s just as “true” to the LLM as anything else it says.

      • skuzz@discuss.tchncs.de · 3 points · 1 day ago

        “Large Language Model incorrectly travels down the wrong statistical path when choosing words from its N-dimensional matrices and ends up guessing the wrong aircraft manufacturer. Possibly because of training bias against foreign manufacturers in a xenophobic American future.”

        Just doesn’t have that ring to it, versus “AI SLAMS AIRBUS IN HOT TAKE!”

      • ctrl_alt_esc@lemmy.ml · 9 points · 2 days ago

        Or because it was programmed with a bias to respond in a certain way. There may not be intent on the LLM’s part, but the same is not necessarily true of its developers.

        • Boddhisatva@lemmy.world (OP) · 2 points · 1 day ago

          There may not be intent on the LLM’s part

          There can’t be intent on the part of a non-sentient program. It has working code, flawed code, and probably intentionally biased code. Don’t think of it as a being that intends to do anything.

        • otp@sh.itjust.works · 4 points · 2 days ago

          Definitely! Journalists would have to be reasonably certain of the intent to be able to publish it that way, though.

    • LupusBlackfur@lemmy.world · 5 points · 2 days ago

      Took the words almost straight from my lips…

      Also so tired of the constant use of the marketing term “hallucinate”, designed specifically to downplay the fact that Counterfeit Cognizance makes shit up out of whole cloth because it was trained on trash gathered from the Internet.

      As they say: Garbage in, garbage out. 🤷‍♂️ 💩

  • AbouBenAdhem@lemmy.world · 21 points · 2 days ago

    The AI probably avoids putting “Boeing” and “fatal air crash” in the same sentence because it interprets them as synonyms.

    • justOnePersistentKbinPlease@fedia.io · 9 points · 2 days ago

      But that is how the LLM systems work.

      They work on statistical probability, based on how often things are mentioned together in their training data.

      One would expect Boeing to be associated more strongly with crashes, given how bad their quality control has been over the last few decades.
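
      Roughly what that looks like, as a toy Python sketch with made-up headlines (real models learn these associations from billions of documents, not keyword counts, but a lopsided corpus gives a lopsided association either way):

          # Toy illustration: count how often each manufacturer shows up in the
          # same headline as the word "crash". The headlines are invented.
          from collections import Counter

          headlines = [
              "Boeing jet involved in fatal crash",
              "Boeing faces scrutiny after another crash",
              "Airbus delivers new jet to airline",
              "Boeing whistleblower testifies before Congress",
              "Airbus jet in minor runway incident",
          ]

          crash_mentions = Counter()
          for headline in headlines:
              words = headline.lower().split()
              if "crash" in words:
                  for maker in ("boeing", "airbus"):
                      if maker in words:
                          crash_mentions[maker] += 1

          print(crash_mentions)  # Counter({'boeing': 2}): the association is one-sided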

      • AbouBenAdhem@lemmy.world · 3 points · 2 days ago

        But they also take immediate context into account: stating something once makes them less likely to repeat it in different words. (That said, it was a joke; I don’t think that’s the real explanation.)
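
        One concrete (if simplified) version of that is the frequency/repetition penalty some systems apply while decoding: tokens that already appeared in the context get their scores pushed down before the next word is sampled. A toy sketch with invented numbers:

            # Simplified frequency penalty: a token's score drops for each time it
            # has already appeared in the generated context, making an immediate
            # repeat less likely. All numbers here are made up for illustration.
            import math

            def softmax(scores):
                exps = {tok: math.exp(s) for tok, s in scores.items()}
                total = sum(exps.values())
                return {tok: round(v / total, 3) for tok, v in exps.items()}

            def apply_frequency_penalty(logits, generated, penalty=0.8):
                counts = {}
                for tok in generated:
                    counts[tok] = counts.get(tok, 0) + 1
                return {tok: s - penalty * counts.get(tok, 0) for tok, s in logits.items()}

            logits = {"Airbus": 2.0, "Boeing": 1.5, "plane": 0.5}
            context = ["Airbus"]  # the model already said "Airbus" once

            print(softmax(logits))                                    # before the penalty
            print(softmax(apply_frequency_penalty(logits, context)))  # "Airbus" drops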