polite leftists make more leftists

☞ 🇨🇦 (it’s a bit of a fixer-upper eh) ☜

more leftists make revolution

  • 0 Posts
  • 59 Comments
Joined 1 year ago
Cake day: March 2nd, 2024

  • In what context? LLMs are extremely good at bridging from natural language to API calls. I dare say it’s one of the few use cases that have decisively landed on “yes, this is something LLMs are actually good at.” Maybe not five nines of reliability, but language itself doesn’t have five nines of reliability.
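    For the skeptical, here’s roughly what that bridging looks like in practice – a minimal sketch using the OpenAI Python SDK’s tool-calling interface. The get_weather function and its schema are invented for illustration:

```python
# Minimal sketch of natural-language -> API-call bridging via tool calling.
# get_weather and its schema are hypothetical, made up for illustration.
import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Fetch the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Is it raining in Ottawa right now?"}],
    tools=tools,
)

# The model turns the plain-English question into a structured call like
# get_weather(city="Ottawa"), which your own code then executes for real.
call = response.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
```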


  • The claim is not that all LLMs are agents, but rather that agents (which incorporate an LLM as one of their key components) are more powerful than an LLM on its own.

    We don’t know how far away we are from recursive self-improvement. We might already be there, to be honest; how much of the job of an LLM researcher can already be automated? It’s unclear whether there’s some ceiling to what a recursively improved GPT-4.x (or whatever) can do, though; maybe there’s a key hypothesis it will never formulate on the quest for self-improvement.




  • I suppose you could, if you’re going to be postmodernist about it, but that’s beyond my ability to understand. The only complete solution I know to the Ship of Theseus is “the universe is agnostic as to which ship is the original; identity of a composite thing is not part of the laws of physics.” Not sure why you put scare quotes around it.



  • Hallucinations aren’t relevant to my point here. I’m not claiming that AIs are a good source of information, and I agree that hallucinations are dangerous (either that, or misusing LLMs is dangerous). I also admit that for language learning, artifacts caused by tokenization could be very detrimental to the user.

    The point I am making is that LLMs struggling with this kind of tokenization artifact is poor evidence for drawing any conclusions about their behaviour on other tasks.


  • Because LLMs operate at the token level, I think a fairer comparison with humans would be to ask why humans can’t produce the IPA spelling of words they can say, /nɔr kæn ðeɪ ˈizəli rid θɪŋz ˈrɪtən ˈpjʊrli ɪn aɪ pi ˈeɪ/, despite the fact that it should be simple to do – they understand the sounds, after all. I’d be impressed if somebody could do this too! But the fact that most people can’t shouldn’t move you to conclude that humans must be fundamentally stupid because of this one curious artifact. Maybe they are fundamentally stupid for other reasons, but this one thing is quite unrelated.
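    To make the token-level point concrete, here’s a minimal sketch, assuming the tiktoken library (which exposes the tokenizers used by OpenAI models). A word arrives at the model as a handful of opaque IDs, not as letters:

```python
# Sketch of why letter-level tasks are awkward for LLMs: the model never
# sees characters, only token IDs. Assumes the tiktoken library.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for word in ["strawberry", "unbelievably"]:
    ids = enc.encode(word)
    pieces = [enc.decode([i]) for i in ids]
    # Prints multi-character chunks, e.g. pieces like "str"/"aw"/"berry"
    # (exact splits depend on the tokenizer) rather than individual letters.
    print(word, "->", ids, pieces)
```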



  • Congrats, you’ve discovered reductionism. The human brain also doesn’t know things, as it’s composed of electrical synapses made of molecules that obey the laws of physics and direct one’s mouth to make words in response to signals that come from the ears.

    Not saying LLMs don’t know things, but your argument as to why they don’t know things has no merit.





  • The Rowan Atkinson thing isn’t misunderstanding; it’s understanding, but having been misled. I’ve literally done this exact thing myself: said something was a hoax (because in the past it was), only to find there was newer info I didn’t know about. I’m not convinced LLMs as they exist today don’t prioritize sources – if trained naively, sure, but these days they can, for instance, integrate search results and update on new information (see the sketch at the end of this comment). If the LLM can answer correctly only after checking a web search, and I can do the same only after checking a web search, that’s a score of 1–1.

    because we know what “understanding” is

    Really? Who claims to know what understanding is? Do you think it’s possible there can ever be an AI (even if different from an LLM) which is capable of “understanding?” How can you tell?
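    As promised above, here’s a rough sketch of what “integrate search results” means mechanically. The web_search helper is a stand-in stub for whatever real search API you have, and the prompt wording is my own:

```python
# Rough sketch of search integration: fetch fresh sources, put them in the
# prompt, and ask the model to prefer them over stale training data.
from openai import OpenAI

client = OpenAI()

def web_search(query: str, top_k: int = 3) -> list[str]:
    # Placeholder stub: a real implementation would call a search API here.
    return ["(snippet 1 about the query)", "(snippet 2)", "(snippet 3)"][:top_k]

def answer_with_search(question: str) -> str:
    context = "\n".join(web_search(question))
    prompt = (
        "Answer using the sources below, preferring them over anything "
        f"you remember from training.\n\nSources:\n{context}\n\nQ: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```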




  • These are good points. Fair enough – I would amend my statement: she is perhaps the most famous rather than the most influential. Fame of course has its own influence, though, so it’s still a big problem. A win against JK Rowling could well be better than a win against Matt Walsh.

    I disagree that it’s misogynistic to have such an opinion of JK Rowling. In fact, I think it is misogynistic to suggest that because she’s a woman, we shouldn’t take her at her word for fear that our hatred of her might be motivated by misogyny instead of rationality.


  • The LLM isn’t aware of its own limitations in this regard. The specific problem of getting an LLM to know what characters a token comprises has not been a focus of training. It’s a totally different kind of error from other hallucinations – almost entirely orthogonal – but other hallucinations are much more important to solve, whereas counting the number of letters in a word or adding numbers together is not very important, since, as you point out, there are already programs that can do that.

    For the moment, you can compare this to the “Paris in the the Spring” illusion. Why don’t people know to double-check the number of “the”s in a sentence? They could just use their fingers to block out adjacent words and read each word in isolation. They must be idiots, and we shouldn’t trust humans in any domain.



  • Can you explain the difference between understanding the question and generating the words that might logically follow? I’m aware that it’s essentially a more powerful version of how auto-correct works, but why should we assume that implies a lack of understanding at some deep level?
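    To be concrete about what “generating the words that might logically follow” means at its crudest, here’s a toy bigram predictor – the trivial ancestor of what an LLM does with vastly more context and parameters. The corpus is obviously made up:

```python
# Toy illustration of next-word prediction: a bigram model built from
# counts. An LLM does something far more powerful, but in the same spirit.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word: str) -> str:
    # Pick the most likely next word given only the previous one.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat" ("the" is followed by "cat" twice above)
```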