

Although the chatbot had been given a “baseline board” to learn the game and identify pieces, it kept mixing up rooks and bishops, misread moves, and “repeatedly lost track” of where its pieces were. To make matters worse, as Caruso explained, ChatGPT blamed its struggles on Atari’s icons, calling them “too abstract to recognize.” But when he switched the game over to standard chess notation, it didn’t perform any better.
For an hour and a half, ChatGPT “made enough blunders to get laughed out of a 3rd grade chess club” while insisting over and over again that it would win “if we just started over,” Caruso noted. (And yes, it’s kind of creepy that the chatbot apparently referred to itself and its human opponent as “we.”)
It’s fucking insane it couldn’t keep track of a board…
And it’s concerning how confident it is that it’ll win, because the idiots asking it stuff will believe it. It’ll keep failing and keep saying next time will work, because it’s built to maximize engagement.
Yeah, but it’s chess…
The LLM doesn’t have to imagine a board: if you feed it the rules of chess and the dimensions of the board, it should be able to “play in its head”.
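To make that concrete, here’s a minimal sketch of what “feeding it the board” could look like. This is not Caruso’s setup; it assumes the python-chess library and a hypothetical ask_llm() helper standing in for whatever model API you’d call. The point is that the program, not the model, owns the game state, and the model is handed a fresh position every turn, so it has nothing to remember between moves.

```python
# Minimal sketch: external state-tracking for an LLM chess opponent.
# Assumes python-chess; ask_llm() is a hypothetical stand-in for a model API.
import chess

def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; expected to return one move in UCI notation."""
    raise NotImplementedError  # wire up your API of choice here

board = chess.Board()  # the program, not the model, owns the game state
while not board.is_game_over():
    legal = [m.uci() for m in board.legal_moves]
    prompt = (
        f"Current position (FEN): {board.fen()}\n"
        f"Legal moves: {', '.join(legal)}\n"
        "Reply with exactly one move from the list, in UCI notation."
    )
    move = ask_llm(prompt).strip()
    if move not in legal:
        break  # model picked an illegal move: the failure mode in the article
    board.push_uci(move)

print(board.result())  # e.g. "1-0", "0-1", or "*" if the game was aborted
```

Whether the model can pick good moves is a separate question, but set up this way it at least can’t lose track of where its rooks are.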
For a human to have that kind of working memory would require a genius-level intellect and years of practice at the game.
But human working memory is shit compared to virtually every other animal’s. That, and processing speed, is supposed to be AI’s main draw.