One of Google's engineers presented his data at Berkeley (a friend of mine attended; I watched via a telecast at Merced) comparing optimized and unoptimized algorithms sorting a dataset for accuracy. He found that as the size of the dataset increased (assuming corresponding increases in machine power), the unoptimized algorithms outperformed the optimized ones and handled different test cases (such as different languages) much more easily. In fact, the optimized algorithm's accuracy rose at a linear rate, while the unoptimized algorithm's rose at an exponential rate.
So I wonder if that's a problem with the chess competition: the dataset might be too small to future-proof the algorithm. Hmm....
Here is the link.