• Pennomi@lemmy.world

    Spatial reasoning has always been a weakness of LLMs. Other symptoms include an inability to count and a lack of object permanence.

    • givesomefucks@lemmy.world

      Yeah, but it’s chess…

      The LLM doesn’t have to imagine a board, if you feed it the rules of chess and the dimensions of the board it should be able to “play in its head”.

      For a human to have that kind of working memory would take a genius-level intellect and years of practice at the game.

      But human working memory is shit compared to virtually every other animal’s. This and processing speed are supposed to be AI’s main draws.

      • Jerkface (any/all)@lemmy.ca

        It doesn’t have a head like that. It places things in a conceptual space, not a numerical space. To it, a number is just an adjective, like a colour. It is learning to play chess by looking for language-like patterns in the game’s transcript. It is never attempting to model the contents of the board in its “mind”.

      • Rhaedas@fedia.io

        LLMs can be good at openings. Not because they are thinking through the rules or planning strategies, but because opening moves are well represented in most general training data from various sources. The model is copying the most probable reaction to your move, based on lots of documentation. This of course breaks down when you stray from a typical play style: it has fewer probable options to choose from, and after only a few moves there won’t be any left, since the number of possible positions explodes.

        I.e., there are no calculations involved. When you play an LLM at chess, you’re playing against a list of common moves from history.

        An even simpler example: tell the LLM that its last move was illegal. Even with the rules you just gave it, it will agree and take the move back. This comes from being trained to give satisfying replies to a human prompt.
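
        To make the “list of common moves” point concrete, here’s a minimal toy sketch (my own illustration with made-up data, not how a transformer is actually implemented): it just returns whichever continuation most often followed the current move sequence in a tiny corpus, with no board model and no legality check.

        ```python
        from collections import Counter

        # Toy stand-in for the game transcripts an LLM would have absorbed
        # during training (moves in algebraic notation). Purely hypothetical data.
        transcripts = [
            ["e4", "e5", "Nf3", "Nc6", "Bb5"],
            ["e4", "e5", "Nf3", "Nc6", "Bc4"],
            ["e4", "c5", "Nf3", "d6", "d4"],
            ["d4", "d5", "c4", "e6", "Nc3"],
        ]

        def most_probable_move(history):
            """Return the move that most often followed `history` in the corpus.
            No board state, no rules -- just pattern frequency."""
            continuations = Counter(
                game[len(history)]
                for game in transcripts
                if game[: len(history)] == history and len(game) > len(history)
            )
            return continuations.most_common(1)[0][0] if continuations else None

        print(most_probable_move(["e4", "e5"]))        # "Nf3" -- deep in opening theory
        print(most_probable_move(["e4", "h6", "a4"]))  # None -- off-book, nothing to copy
        ```

        Off-book there’s simply nothing to copy, which is roughly where the wheels come off.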

      • PlzGivHugs@sh.itjust.works

        The LLM doesn’t have to imagine a board, if you feed it the rules of chess and the dimensions of the board it should be able to “play in its head”.

        That assumes it knows how to play chess. It doesn’t. It knows how to have a passable conversation. Asking it to play chess is like putting bread into a blender and being confused when it doesn’t toast.

        But human working memory is shit compared to virtually every other animal’s. This and processing speed are supposed to be AI’s main draws.

        Processing speed and memory in the context of writing. Give it a bunch of chess boards or chess notation and it has no idea which parts it needs to remember, let alone where or how to move. If you want an AI to play chess, you train it on chess gameplay, not books and Reddit comments. AI isn’t a general-use tool.

        • givesomefucks@lemmy.world

          if you feed it the rules of chess and the dimensions of the board it should be able to “play in its head”.

          You’d save a lot of time typing, if you spent a little more reading…

          • PlzGivHugs@sh.itjust.works

            You seem to be missing what I’m saying. Maybe a biological comparison would help:

            An octopus is extremely smart, more so than even most mammals. It can solve basic logic puzzles, learn and navigate complex spaces, and plan and execute different and adaptive strategies to hunt prey. In spite of this, it can’t talk or write. No matter what you do, training it, trying to teach it, or even trying to develop an octopus-specific language, it will not be able to understand language. This isn’t because the octopus isn’t smart, it’s because it evolved for the purpose of hunting food and hiding from predators. Its brain has developed to understand how physics works and how to recognize patterns, but it just doesn’t have the ability to understand how to socialize, and nothing can change that short of rewiring its brain. Hand it a letter and it’ll try to catch fish with it rather than even considering trying to read it.

            AI is almost the reverse of this. An LLM has “evolved” (been trained) to write stuff that sounds good, but with little emphasis on understanding what it writes. The “understanding” is more about patterns in writing than underlying logic. This means that if the LLM encounters something that isn’t standard language, it will “flail” and start trying to apply what it knows, regardless of how well it applies. In the chess example, this might mean just responding with the most common move, regardless of whether it can be played. Ultimately, no matter what you input into it, an LLM is trying to find and replicate patterns in language, not underlying logic.