Top chess players, until recently, held their own against even the most powerful chess-playing computers. These machines could calculate far deeper than their human opponents, and yet the humans claimed an advantage: intuition. A computer searches a huge number of positions and picks the best one it finds. For an experienced human chess player, the good moves “suggest themselves.” How that is possible is presumably an important mystery in its own right, but I wonder how one could demonstrate that the thought process is qualitatively different.
Having been somewhat obsessed recently with Scrabble, I thought of the following experiment. Suppose we write a computer program that tries to form words from Scrabble tiles by simple brute force. The computer has a database of words. It randomly combines letters, checks whether each result is in its database, and outputs the most valuable word it can identify in a fixed length of time. Now consider a contest between two computers programmed in the same way which differ only in the size of their database, the first knowing a subset of the words known by the second. The task is to come up with the best word from a fixed number of tiles. Clearly the second would do better, but I am interested in how the advantage varies with the number of tiles. Presumably, the more tiles, the greater the advantage.
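To make the thought experiment concrete, here is a minimal sketch of the brute-force player in Python. Everything in it is a placeholder: the toy dictionary, the letter values, and the time limit are my own assumptions, not a real Scrabble implementation.

```python
import random
import time

# Hypothetical inputs: a toy word list and made-up letter values.
DICTIONARY = {"cat", "act", "cart", "trace", "crate", "react"}
SCORES = {c: 1 for c in "abcdefghijklmnopqrstuvwxyz"}
SCORES.update({"c": 3, "k": 5, "q": 10, "z": 10})

def best_word(tiles, dictionary, time_limit=1.0):
    """Brute force: draw random arrangements of letters from the rack
    and keep the highest-scoring one found in the dictionary."""
    best, best_score = None, 0
    deadline = time.monotonic() + time_limit
    while time.monotonic() < deadline:
        k = random.randint(2, len(tiles))             # random word length
        candidate = "".join(random.sample(tiles, k))  # random ordered draw
        if candidate in dictionary:
            score = sum(SCORES[c] for c in candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score

# The contest: same rack, same program, but the second player's
# dictionary is a strict subset of the first's.
rack = list("tracek")
print(best_word(rack, DICTIONARY))
print(best_word(rack, {"cat", "act"}))
```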
I want to compare this with an analogous contest between a human and a computer to measure how much faster a superior human’s advantage grows in the number of tiles. Take a human Scrabble player with a large vocabulary and have him play the same game against a fast computer with a small vocabulary. My guess is that the human’s advantage (which could be negative for a small number of tiles) will increase in the number of tiles, and faster than the stronger computer’s advantage increases in the computer-vs-computer scenario.
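The computer-vs-computer half of this comparison is straightforward to script. Here is a sketch of the measurement, reusing best_word from above; the uniform letter distribution and the trial count are arbitrary assumptions on my part.

```python
import random
import string

def advantage(n_tiles, big_dict, small_dict, trials=50):
    """Average score gap between the large- and small-dictionary
    players when both face the same random racks of n_tiles letters."""
    gap = 0.0
    for _ in range(trials):
        rack = random.choices(string.ascii_lowercase, k=n_tiles)
        _, s_big = best_word(rack, big_dict)
        _, s_small = best_word(rack, small_dict)
        gap += s_big - s_small
    return gap / trials

# Tabulate advantage(n, ...) for n = 5, 7, 9, ... to see whether
# the gap grows with the rack size, and how fast.
```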
Now there may be many reasons for this, but what I am trying to get at is this. With many tiles, brute-force search quickly plateaus in effectiveness because the additional tiles act as noise, making it harder for the computer to find a word in its database. But when humans construct words, the words “suggest themselves,” and increasing the number of tiles facilitates this (or at least hinders it more slowly than it hinders brute-force search).
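A back-of-the-envelope calculation suggests why the plateau should appear. Treating the n tiles as distinct (which overcounts racks with repeated letters), the number of ordered strings of length 2 through n that a rack can produce is the sum over k of n!/(n−k)!, and this explodes far faster than any fixed-time search budget:

```python
import math

def candidate_strings(n):
    """Ordered strings of length 2..n drawable from n distinct tiles."""
    return sum(math.perm(n, k) for k in range(2, n + 1))

for n in (5, 7, 9, 12):
    print(n, candidate_strings(n))
# 5 -> 320, 7 -> 13692, 9 -> 986400, 12 -> 1302061332
```

So a fixed number of random draws covers a vanishing fraction of the space as the rack grows, which is exactly the noise effect described above.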

4 comments
June 29, 2009 at 6:46 pm
Scott
If you haven’t read “Word Freak” by Stefan Fatsis, then you might want to skim through it. He describes a player who makes a living by testing electronic versions of Scrabble to make sure the computer isn’t cheating by drawing better tiles. He might be one of the people you’d want to talk to.
Also, there’s an electronic version of Scrabble provided through isc.ro, and a number of top-rated players are on there. The program also has bot opponents, and perhaps you could find a way to talk with whoever programmed it.
June 29, 2009 at 8:41 pm
Ben
I think the problem might be compounded by the fact that the strategic component of Scrabble/chess would also be subject to potential scale issues. As the number of tiles or pieces grows, it is possible that expert knowledge or “intuition” would gain an advantage.
I’m not convinced either of these is true, though. It depends on how much knowledge the expert with good intuition has as the game size increases. It is likely that the percentage of known words of length X decreases with X, implying that a computer might gain an advantage, conditional on search capacity, because it knows words of length X just as well as it knows words of any other length.
Poker is another very interesting game to check out here; as the game of poker you’re playing becomes more complicated in terms of the strategic space, do humans or machines gain an advantage? It could probably be studied empirically using the adaptive poker program developed at the University of Alberta.
June 30, 2009 at 7:00 pm
mike
Brute force suggests a broad rather than deep style of searching. It seems to me that humans tend to think more structurally; we build higher concepts out of lower ones, and then reuse those higher concepts instead of always reasoning from first principles. There is direction to human thought. We feel that certain moves naturally follow others, which gives us the ability to reason teleologically. It’s not so much that humans look more steps ahead as that they take larger steps.
Perhaps then, one way of analyzing games might be to compare the breadth of the choice space at any given point with the combinatorial depth/potential.
For example, in chess, the first player starts out with 20 different possible moves (16 pawn moves + 4 knight moves), which then increases as the game opens up. The depth varies, but at the start of the game it’s probably around 25-30 moves for each player, 50-60 total. On the other hand, the endgame can have a very large number of choices per move, but there are only a few moves left to reason about.
Another example would be Go. You start with 361 (19×19) choices, but as the game progresses the space of choices per turn steadily shrinks. Compared to chess, most of the complexity in Go is weighted, at least on this superficial analysis, toward the start of the game, where the depth is greatest.
I’m not as familiar with Scrabble, but it seems similar to chess in that most of the complexity exists near the endgame.
In general, though, I think that the more you can trace the outcome of a game back to decisions made near the start, the more human-friendly the game should be.
June 30, 2009 at 7:04 pm
mike
Oh, it just now occurs to me, based on that analysis, that perhaps in chess the best strategy for a human might be to open up the game as early as possible, to maximize that advantage.