

Also, Air India goes by the unfortunate initialism “AI”, which will really gum up the headlines here.
Can you explain the difference between understanding the question and generating the words that might logically follow?
I mean, it’s pretty obvious. Take someone like Rowan Atkinson, whose death has been misreported multiple times. If you ask a computer system “Is Rowan Atkinson dead?” you want it to understand the question and give you a yes/no response based on actual facts in its database. A well-designed program would know to prioritize recent reports as more authoritative than older ones. It would know which sources to trust, and which not to trust.
An LLM will just generate text that is statistically likely to follow the question. Because there have been many hoaxes about his death, it might use that as a basis and generate a response indicating he’s dead. But, because those hoaxes have also been debunked many times, it might use that as a basis instead and generate a response indicating that he’s alive.
So, if he really did just die and it was reported by reliable, fact-checked news sources, the LLM might still say “No, Rowan Atkinson is alive. His death was reported via a viral video, but that video was a hoax.”
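To make that contrast concrete, here’s a toy sketch in Python. Everything in it is invented for illustration (the `FACTS` table, the probabilities, the canned continuations); a real LLM obviously isn’t a two-entry list, but the split between “look up a dated fact” and “sample a likely continuation” is the point being made above:

```python
import random

# Hypothetical fact lookup: the answer comes from a curated, dated record.
FACTS = {"rowan atkinson": {"alive": True, "last_verified": "2024-01-01"}}

def answer_from_facts(name: str) -> str:
    record = FACTS[name.lower()]
    return "alive" if record["alive"] else "dead"

# Toy "LLM": no facts at all, just a distribution over likely continuations,
# learned from text that contains both hoaxes and debunkings of those hoaxes.
CONTINUATIONS = [
    ("No, that was a hoax; he is alive.", 0.7),            # debunkings dominate the corpus
    ("Yes, he died, as reported in a viral video.", 0.3),  # but the hoax text is in there too
]

def answer_from_statistics() -> str:
    texts, weights = zip(*CONTINUATIONS)
    return random.choices(texts, weights=weights, k=1)[0]

print(answer_from_facts("Rowan Atkinson"))  # consults the record
print(answer_from_statistics())             # samples whatever usually follows the question
```

The second function can sound equally confident either way; which answer you get depends on which phrasing was more common in the text it absorbed, not on what actually happened yesterday.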
But why should we assume that shows some lack of understanding?
Because we know what “understanding” is, and that it isn’t simply finding words that are likely to appear following the chain of words up to that point.
Oh yeah, I forgot about how they add a “v” sound to it.
You can even drop the “a” and “g”. There isn’t even “intelligence” here. It’s not thinking, it’s just spicy autocomplete.
How do you pronounce “Mrs” so that there’s an “r” sound in it?
And people are trusting these things to do jobs / parts of jobs that humans used to do.
Imagine asking a librarian “What was happening in Los Angeles in the Summer of 1989?” and that person fetching you … That’s modern LLMs in a nutshell.
I agree, but I think you’re still being too generous to LLMs. A librarian who fetched all those things would at least understand the question. An LLM is just trying to generate words that might logically follow the words you used.
IMO, one of the key ideas with the Chinese Room is the assumption that the computer / book in the experiment has effectively infinite capacity: no matter what symbols are passed in, it can come up with an appropriate response. But obviously, while LLMs are incredibly huge, they can never be infinite. As a result, they can often be “fooled” when they’re given input that is semantically similar to a meme, joke, or logic puzzle. The vast majority of the training data that matches the input is the meme, joke, or puzzle itself. LLMs can’t reason, so they can’t distinguish between “this is just a rephrasing of that meme” and “this is similar to that meme but distinct in an important way”.
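A toy illustration of that failure mode, in Python. The “matching” here is just bag-of-words similarity against two memorized riddles, which is nothing like what a real model does internally; it’s only meant to show what pattern-matching to the nearest training example looks like when the input differs in a way that matters:

```python
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Crude bag-of-words cosine similarity between two strings."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[w] * wb[w] for w in wa)
    norm = (sum(v * v for v in wa.values()) ** 0.5) * (sum(v * v for v in wb.values()) ** 0.5)
    return dot / norm if norm else 0.0

# "Training data": canonical puzzles paired with their canonical answers.
TEMPLATES = {
    "a surgeon says i can't operate on this boy he is my son who is the surgeon":
        "The surgeon is the boy's mother.",
    "which weighs more a pound of feathers or a pound of bricks":
        "They weigh the same.",
}

def answer(question: str) -> str:
    # Answer with whichever memorized template the question most resembles.
    best = max(TEMPLATES, key=lambda t: similarity(question, t))
    return TEMPLATES[best]

# The twist below ("who is the boy's father") removes the riddle entirely,
# but the wording still resembles the meme, so the template answer wins.
print(answer("The surgeon, who is the boy's father, says: I can't operate on this boy, he is my son. Who is the surgeon?"))
```

The modified question explicitly says the surgeon is the father, so there’s nothing left to puzzle over, but the matcher only sees how much it looks like the meme and returns the memorized answer anyway.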
then continue to shill it for use cases it wasn’t made for either
The only thing it was made for is “spicy autocomplete”.
If you’ve ever heard Germans try to pronounce “squirrel”, it’s hilarious. I’ve known many Germans who are otherwise thoroughly bilingual but couldn’t pronounce it at all. It came out sounding roughly like “squall”, or they’d over-pronounce the “r” and it would come out “squi-rall”.
Sure, one economic unit with massive tariffs between the three parts of that one economic unit.
The US had a period of “greatness” shortly after WWII.
Why was the US “great” in the 1950s and 1960s?
So yeah, people’s grandpas were able to buy a house and support a family working a menial job for a brief period after WWII. But, that’s not because of some fundamental characteristic about the US that makes it better. It’s mostly because the US was fortunate enough to be on the opposite side of the planet from one of the most destructive wars in history.
The “less lethal” means “less lethal than a bullet”. They are less lethal than a bullet, but there’s a reason that they don’t use the term “non-lethal”.
You weren’t there supporting Trump? Did you have a Bernie sign or something? That must have been popular with the Trump supporters.
Why were you there on Jan 6th? Did you really believe the election had been stolen?
So could tulip bulbs, for a while.