


ChatGPT seems to understand chess [1]


Date: 2025-01-16

Would you like to spend $200 a month on a ChatGPT subscription? Yeah, me neither. If the free version isn’t the solution to all of your problems, how do you know the premium version is? How do you know you won’t waste a lot of time refining prompts on the premium version?

The efficacy of ChatGPT, at least the free version, tends to be inversely proportional to how important the assignment is. If a random thought pops into your head and you want to run it by ChatGPT, it might give you phenomenal results. Or it might give you garbage, but you’re not emotionally invested in it being good.

But what about something so important that you would actually consider paying $200 a month? That price is certainly less than what it would cost to hire a professional copywriter, a lawyer, or a consultant. But ChatGPT is not held to the same standards of professional conduct that you would rightfully expect from a doctor or a lawyer, nor should it be.

ChatGPT might be good for studying a hobby. Like, in my case, chess. I’m not a professional chess player at any level. I don’t need to hire a professional chess coach to prepare me for some important upcoming match. And even if I were, I wouldn’t rely on ChatGPT to substitute for a human coach who has actually experienced what it’s like to be in a chess match with money and reputation on the line.

On the other hand, for helping me out with a casual study of chess, ChatGPT might just be the ticket. Probably more for the study of openings and middlegame than for endgames.

ChatGPT is considered by some to be the foremost exponent of “artificial intelligence” to date. That's a broad term to begin with, and it gets misused even more broadly.

Artificial intelligence “does not exist but it will ruin everything anyway” is the title of one of astrophysicist Angela Collier’s YouTube videos (in that video she quotes the price of ChatGPT Pro as $196 a month).

Much more pointedly, one of stand-up comedian Adam Conover’s YouTube videos is titled “A.I. is B.S.” In that video, Conover explains that we’ve reached the point where absolutely any computer program that takes user input and gives some output based on that input is called “A.I.”

Take for example a little program that takes in an integer and tells you if it’s prime or not. You can tell a number like 38500 is not prime just by looking at it. But the example program would probably start by checking whether or not 2 divides the number evenly.

Now consider a number like 38525. The example program would probably try 2 and 3 as divisors; hopefully it would at least be optimized to skip over 4 and go straight to 5, which in this example is the least prime divisor.

That’s not artificial intelligence. That’s just going through a very simple algorithm with some basic optimizations.
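
A minimal sketch of such a program, in Java, might look like the following. The class and method names are mine, not taken from any particular program; the point is just that the whole “intelligence” is a loop over candidate divisors.

    import java.util.Scanner;

    /** Trial-division primality check of the kind described above:
     *  test 2 first, then only odd candidates up to the square root. */
    public final class PrimeCheck {

        static boolean isPrime(long n) {
            if (n < 2) return false;               // 0, 1 and negatives are not prime
            if (n % 2 == 0) return n == 2;         // even numbers: only 2 is prime
            for (long d = 3; d * d <= n; d += 2) { // skip every even divisor
                if (n % d == 0) return false;      // found a divisor, e.g. 5 for 38525
            }
            return true;
        }

        public static void main(String[] args) {
            long n = new Scanner(System.in).nextLong();
            System.out.println(n + (isPrime(n) ? " is prime" : " is not prime"));
        }
    }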

On GitHub I saw a Tic-Tac-Toe program from a few years ago, written when the author, call him “Thomas,” was a student. And he called it an “A.I.” If I were Thomas, I would not have called it that. To decide what move to make against a human player, the program checks whether it can win on its next move; otherwise it checks whether the human player could win on his next move, and if so, the computer thwarts him. That’s not artificial intelligence either.
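
A rule-based move chooser along those lines might look something like this Java sketch. To be clear, this is my reconstruction of the idea, not Thomas’s actual code, and the fallback of taking the first free square is my own simplification.

    /** Win if possible, otherwise block the opponent's win, otherwise take any free square.
     *  The board is a 9-character array of 'X', 'O' or ' ', indexed 0 through 8. */
    public final class TicTacToeBot {

        private static final int[][] LINES = {
            {0, 1, 2}, {3, 4, 5}, {6, 7, 8},   // rows
            {0, 3, 6}, {1, 4, 7}, {2, 5, 8},   // columns
            {0, 4, 8}, {2, 4, 6}               // diagonals
        };

        static int chooseMove(char[] board, char computer, char human) {
            int winning = completingSquare(board, computer);
            if (winning >= 0) return winning;          // take the win
            int blocking = completingSquare(board, human);
            if (blocking >= 0) return blocking;        // thwart the human
            for (int i = 0; i < 9; i++) {
                if (board[i] == ' ') return i;         // otherwise, any free square
            }
            return -1;                                 // board is full
        }

        /** Returns a square that would complete three in a row for mark, or -1 if none exists. */
        private static int completingSquare(char[] board, char mark) {
            for (int[] line : LINES) {
                int count = 0, empty = -1;
                for (int cell : line) {
                    if (board[cell] == mark) count++;
                    else if (board[cell] == ' ') empty = cell;
                }
                if (count == 2 && empty >= 0) return empty;
            }
            return -1;
        }
    }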

Tic-Tac-Toe is a game that has been completely studied and understood. There are almost 27,000 possible games, or slightly more than 255,000 if you count rotations and reflections as distinct games.

Chess is definitely a more complex game than Tic-Tac-Toe. The number of possible games is finite, but so large it might as well be infinite. Even so, there are positions for which almost any human player who understands the rules of the game can determine his or her next move just by looking at the board. Consider for example this Black to play scenario:

Black to play. Should Black play Ne2+? Or might there be a more effective move? FEN 6k1/p4pp1/P1Brp3/1PN3Q1/2b2n2/1N6/5PPP/5RK1 b - - 0 1

The obvious move is ... Bxf1, right? Black hopes White will take the bait and play Kxf1, so that Black can then play … Rd1 for the win. It is much less obvious what White should play instead of Kxf1, at least to me. Still, it should be obvious that Kxf1 is a bad move, especially because the king has no “Luft” (an escape square created by moving the f2 or h2 pawn off its starting square).

White to play. Should White capture that impertinent Black bishop? FEN: 6k1/p4pp1/P1Brp3/1PN3Q1/5n2/1N6/5PPP/5bK1 w - - 0 1
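
The FEN strings quoted under the diagrams are just a compact text encoding of the position. Here is a small Java sketch of how the piece-placement field can be expanded into an 8-by-8 board (the class name is mine):

    /** Expands the first field of a FEN string into an 8x8 character board.
     *  Uppercase letters are White pieces, lowercase are Black, '.' is an empty square. */
    public final class FenBoard {

        static char[][] parse(String fen) {
            char[][] board = new char[8][8];
            String placement = fen.split(" ")[0];    // ignore side to move, castling rights, etc.
            String[] ranks = placement.split("/");   // rank 8 comes first, rank 1 last
            for (int r = 0; r < 8; r++) {
                int file = 0;
                for (char c : ranks[r].toCharArray()) {
                    if (Character.isDigit(c)) {
                        for (int k = 0; k < c - '0'; k++) board[r][file++] = '.';  // run of empty squares
                    } else {
                        board[r][file++] = c;        // a piece letter such as 'k', 'Q' or 'n'
                    }
                }
            }
            return board;
        }

        public static void main(String[] args) {
            char[][] b = parse("6k1/p4pp1/P1Brp3/1PN3Q1/2b2n2/1N6/5PPP/5RK1 b - - 0 1");
            for (char[] rank : b) System.out.println(new String(rank));
        }
    }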

Human players still play by intuition, but computers don’t need “artificial intelligence” to beat human players. It is my theory that a computer being able to think through all possible moves in the next two or three turns is sufficient to beat any human player.

In the Black-to-play scenario above, a human playing Black would probably not waste time thinking about silly moves like … Rd7 or … Rd8. But a computer probably would consider such moves, deciding that they’re not worthwhile because White would likely respond with Bxd7 or Nxd7 to the former or Qxd8+ to the latter. Down a rook, it would be much harder for Black to win (notice Black doesn’t have a queen at this point, and the pawns are all far away from promotion).

In short, the computer would make its move not because it has any hunch that it might be the best move, but because it has considered every alternative that is possible under the rules of the game.
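
In code, “considering every alternative” a couple of moves deep might look something like the following sketch. The Position interface here is hypothetical, standing in for whatever move generation and evaluation a real chess program would supply; it is not taken from any particular engine.

    import java.util.List;

    /** Exhaustive fixed-depth search (negamax): try every legal move, recurse,
     *  and keep the best score found. No hunches, just enumeration. */
    public final class BruteForceSearch {

        interface Position {
            List<String> legalMoves();   // every move the rules allow in this position
            Position play(String move);  // the position after making that move
            int evaluate();              // a score from the point of view of the side to move
        }

        static int search(Position pos, int depth) {
            List<String> moves = pos.legalMoves();
            if (depth == 0 || moves.isEmpty()) {
                return pos.evaluate();                           // stop at the depth limit or at game end
            }
            int best = Integer.MIN_VALUE;
            for (String move : moves) {
                int score = -search(pos.play(move), depth - 1);  // the opponent's best reply is our worst case
                if (score > best) best = score;
            }
            return best;
        }
    }

Calling search(position, 4) would examine every legal continuation four half-moves deep, roughly the kind of lookahead described above.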

Endgame puzzles are much easier to solve than middlegame puzzles, and certainly easier than opening puzzles. In an endgame puzzle, the goal is almost always to achieve checkmate. When I get an endgame puzzle wrong, I usually wind up seeing that my move leads to checkmate in three or two moves where the correct move mates in two or one.

But in an opening puzzle? When I get an opening puzzle wrong, I look at it and don’t understand what the point of it was.

As a way to procrastinate on my own chess program for the Java Virtual Machine, I’ve been making a systematic study of openings and defenses, examining all twenty possible first moves for White in games against the Chess.com bots and against Stockfish on Lichess.

When I play 1. c4, the computer often answers 1. … e5, sometimes 1. … e6. Either way, I’m inclined to play 2. Nf3. And then the computer might respond with 2. ... a6?

White to play. FEN: rnbqkbnr/1ppp1ppp/p3p3/8/2P5/5N2/PP1PPPPP/RNBQKB1R w KQkq - 0 3

This is one manifestation of the Agincourt defense. It actually has its own page on Chess.com’s Openings Database, along with the more common continuations of the Agincourt defense. According to that page, the top players of this line include Ulf Andersson, Dmitry Andreikin, Levon Aronian, Viktor Korchnoi and Vladimir Kramnik.

But I’m wondering if those top players are really ever choosing … a6 for their second move as Black. It also seems an unlikely move for beginners, as they generally want to make bolder, more dramatic moves, however ill-advised such moves might be.

Maybe Black is worried White might play Nxc7 or Nc7 and the queenside rook would have no way to elude capture. But how long would it take White to move either knight close enough to make that move? Is Black worried about taking the queen out too early, so that she’s not available to play … Qxc7? And if the Black king is not castled kingside by that point, or moved off his initial square in any other way, Black might as well consider the queenside rook lost.

Point is, I couldn’t figure out a good reason for Black to play 2. … a6. So I asked ChatGPT. It came up with five reasons:

1. To prepare a queenside fianchetto
2. To prevent White from putting a knight or a bishop on b5
3. Flexibility
4. Psychological factors
5. A waiting move

Follow the link to read ChatGPT’s elaborations.

For the first reason, Black’s plan might be something like 3. … b5 4. … Bb7. Plausible, I suppose.

The second reason sounds like a premature worry to me. White’s queenside knight could get to b5 with 3. Nc3 (or Na3) followed by 4. Nb5, but before trying to play Nxc7 or Nc7 to fork Black’s king and rook, I would want to have White’s queenside bishop on f4.

Also, if White wants to put a bishop on b5, the pawn on c4 needs to move out of the way first, to say nothing of the pawn on e2. So, worrying about White putting any piece on b5 is extremely premature for the second move of the game.

The third and fourth reasons seem much more plausible to me. And the fifth reason seems the most plausible. A “waiting” move allows “Black to observe White's plans before committing pawns or pieces to specific squares.”

Moving a more central Black pawn, or either of the Black knights, toward the center of the board would give White a much clearer indication of what Black is planning.

If ChatGPT does understand chess, that’s only because it has absorbed thousands of books and articles about chess. It is impressive that it has done that, and it’s also impressive that it can give plausible answers to questions about chess based on the content it has absorbed.

But we should not mistake that for actual intelligence about chess, nor should we assume that ChatGPT can have any original thoughts about chess. And we should certainly not mistake ChatGPT’s proficiency at gathering and regurgitating data for actual intelligence in general.

---
[1] Url: https://www.dailykos.com/stories/2025/1/16/2288457/-ChatGPT-seems-to-understand-chess?pm_campaign=front_page&pm_source=more_community&pm_medium=web
