How Computers "Think" in Chess

Over time, a number of people have raised interesting questions about computer (artificial) intelligence and chess, what chess engines really do and how, and how far that technology could reach in comparison to human intelligence. 

I'd like to offer some thoughts and pointers for those of you who are interested in exploring this topic more deeply, or who simply want to know how engines "think".  I have graduate degrees in computer science, and I've explored AI (artificial intelligence) in some depth, hence my familiarity with the topic. 

So sit back and relax for a quick crash course on chess engines and how far artificial intelligence reaches today:

- Chess engines recognize a number of patterns (typical chess positions); the more patterns and the more refined those patterns are, the higher the quality of the engine.  Some of those are endgame patterns (e.g., those encoded in the Nalimov tablebases), while others are middlegame patterns (e.g., good moves to play against isolated pawns, etc.).  They also have vast opening databases, so they "know" what has been played by strong players before, and how those games have continued and ended.

- Chess engines evaluate a position partly on the material balance (each piece has a preset value, which may be adjusted for how developed and active that piece is in the specific game -- itself a positional characteristic) and partly on positional characteristics (which come from the patterns I talked about).  The base material unit is 1 pawn, equal to 1.00, so if an engine gives an evaluation of +1.00, it deems White's position equivalent to one where White has 1 extra pawn (even though all of that advantage may come from positional characteristics rather than material ones).  Typically, an advantage of +1.00 or more is sufficient to win the game with correct play, while an advantage below that threshold suggests a likely draw.  (If Black has the advantage, the evaluation is expressed as a negative number of the same magnitude, e.g., -0.58.)  Naturally, there is no hard line between the two, but that's a reasonable rule of thumb to keep in mind.
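As a minimal illustration of the material side of that evaluation (the piece values and position encoding below are simplified assumptions for the example, not any particular engine's internals):

```python
# Conventional approximate piece values in pawn units; real engines refine
# these and add positional terms on top of the material balance.
PIECE_VALUES = {"P": 1.0, "N": 3.0, "B": 3.0, "R": 5.0, "Q": 9.0, "K": 0.0}

def material_eval(pieces):
    """pieces: list of (color, piece_type) tuples, e.g. ("w", "P").
    Returns the balance in pawn units; positive means White is ahead."""
    score = 0.0
    for color, kind in pieces:
        value = PIECE_VALUES[kind]
        score += value if color == "w" else -value
    return score

# White is a knight up, Black has two extra pawns: net +1.00 for White.
position = [("w", "K"), ("w", "N"), ("b", "K"), ("b", "P"), ("b", "P")]
print(material_eval(position))  # 1.0
```

A full evaluation function would then add or subtract fractions of a pawn for positional features (piece activity, pawn structure, king safety, etc.) on top of this material count.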

- Each move has two parts -- White's move and Black's move.  Each of those two is called a half-move.  In slightly more technical terms, this "half-move" notion is useful in chess engines (as you will find out if you study artificial intelligence) when the engine performs what is called "alpha-beta search and pruning".  The latter is a fancy term for building a tree of possibilities and then evaluating it one step at a time while traversing that tree from bottom (leaves) to top (root).  (In computer science, trees grow upside down: the root is at the top.)  So each half-move corresponds to one level of depth in the tree.  At a given level, the engine attempts to maximize the evaluation function (which corresponds to picking a move that is as strong as possible for you, increasing the evaluation the most), while at the next level it attempts to minimize that function (which corresponds to picking the opponent's best move -- from your perspective the worst you can possibly expect to confront, which is why the function is minimized there).

- Modern engines are not primitive, so they cut off some variations early and do not waste time (or memory) exploring them in depth.  (This is the "alpha-beta pruning" of the tree of possible moves.)  Sometimes this pruning may result in a perfectly valid sacrifice being ignored (which is why engines tend not to be very good at sac'ing lines), but most of the time it saves a huge amount of time, which can be better spent on evaluating other potentially useful moves.  The memory issue is in fact less severe, since once the engine discards a given possibility, it clears and reuses that memory, retaining only an encoding of the potentially fruitful continuations.
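Here is a toy sketch of the alpha-beta search just described, run over a hand-made tree rather than real chess positions; the tree shape and the leaf values are invented purely for illustration:

```python
# Toy alpha-beta search over an explicit tree.  Inner lists are positions
# with moves still to consider; bare numbers are leaf evaluations in pawn
# units (from White's perspective).

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):  # leaf: return its static evaluation
        return node
    if maximizing:                      # our move: push the score up
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:           # opponent already has a better line
                break                   # elsewhere: prune the rest
        return best
    else:                               # opponent's move: push the score down
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Two half-moves deep: our three candidate moves, each with two replies.
tree = [[0.3, -0.5], [1.2, 0.8], [-2.0, 4.0]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 0.8
```

Note the pruning in the third branch: the reply scoring -2.0 is enough to discard that branch without ever examining the 4.0 leaf, because the opponent would steer the game into -2.0, which is already worse than the 0.8 secured in the second branch.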

- Engines use heuristics (in human parlance these are rules of thumb -- principles that may or may not be true, but most of the time are true), and the more of these heuristics they incorporate (e.g., the difference between rook pawns and other pawns, between passed pawns and backward pawns, various strategic and positional considerations, etc.), the stronger the engine can be.  This also includes knowledge of typical patterns, which is why computers are good at solving typical chess puzzles -- because puzzles are a distilled form of patterns.

- On the topic of whether engines can get better over time, the answer is "Absolutely yes" -- both in how deeply they are able to think and in how truly intelligent they can become (i.e., their ability to see and recognize more patterns beyond raw calculation).  If you read books on the functioning of the human brain, you'll find that people in that branch of science believe that even we, humans, learn and act based purely on pattern matching: recognizing patterns in our environment and acting on them, based on subconscious and conscious responses that are programmed in our psyche.
The depth of exploration of a chess engine (unless limited by a human using a specific parameter) depends on the processing speed of your computer (a combination of clock speed, memory size, bus transfer speed, disk seek speed, etc.): a more powerful computer is able to compute more in a given chunk of time.  I used to be able to beat some engines at level 5 years ago; now I struggle at level 2 -- that's how far they've come (I assume I am no worse than I used to be).
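As a sketch of how such limits are typically imposed: engines commonly run an iterative-deepening loop, searching to depth 1, then 2, and so on, until either a depth cap or a time budget runs out.  The function names below (pick_move, search_to_depth) are illustrative placeholders, not any engine's actual API:

```python
import time

def pick_move(position, max_depth=4, time_budget=0.05):
    """Search progressively deeper until the depth or time cap is hit."""
    deadline = time.monotonic() + time_budget
    best = None
    for depth in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break              # out of time: keep the last completed result
        best = search_to_depth(position, depth)
    return best

def search_to_depth(position, depth):
    # Placeholder for a real search (e.g., alpha-beta to the given depth).
    return (position, depth)

move = pick_move("start", max_depth=3, time_budget=1.0)
```

Lowering max_depth or time_budget is exactly the kind of knob used to make an engine play at a weaker "level": a faster computer simply completes more (and deeper) iterations within the same budget.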

- As far as whether chess engines will ever be able to express in human terms why they do one thing and not another, that's one of the Holy Grails of artificial intelligence.  While expert systems offer some traction in the area, the difficulty comes from the need to reverse engineer raw calculation into something resembling a thought process (in the end, engines do raw calculation more than anything else and are exceedingly good at it).  For the time being, humans can only partly offset the strength of chess engines through better intuition and knowledge of patterns (what to look for, what not to look for, etc.) -- but that has its limits, which aren't advancing nearly as fast as machines' calculating power.
Those who are interested in exploring more can look up the "Turing test" -- a litmus test of intelligence: in layman's terms, it asks whether a computer could fool a human into being unable to distinguish between human-generated and computer-generated answers to a set of questions that the human poses.  Clearly, we're not quite there yet, but it's getting closer with time.

- As to whether it's good or not to use chess engines:
GMs get a strong buddy for little or no money when they use an engine.  It's not easy to find someone at your level when you're a very strong player and want to train, say, in the middle of the night, or to prepare a novelty for tomorrow's tournament game even while you're asleep!  Engines can do that sort of thing quite well if you give them enough time.  Also, they can be very good at hinting at possible hidden resources, which to a master-level player is sufficient (given only a slight hint, they can figure out the meaning behind a move thanks to their vast experience, with no need for an in-depth explanation). 
With casual players, however, this is often not the case, so a strong engine is much less helpful to someone who is still learning the basics of the game, because they need a real coach instead, someone who can train them to see and phrase the ideas and strategies in human-understandable language.


- To understand the similarities and differences between human chess players and modern computer engines, one can assume (which is a fair assumption, close to reality) that a modern chess computer/engine resembles a human chess player who has all of the following characteristics:

  • highly disciplined (i.e., reliably consistent in the decisions it makes);
  • completely unemotional (i.e., a loss and a win are simply two states, neither of which affects the machine's ability to play further games equally strongly and consistently; also, while a computer's strength of play depends on various parameters, none of them has anything to do with who the opponent is -- no sense of hierarchy or reverence);
  • extremely knowledgeable (i.e., has full and unclouded knowledge of openings, endgame positions, middle game strategic heuristics, and other patterns that have been programmed into it);
  • very strong (i.e., capable of beating even GMs, for the most advanced engines);
  • makes almost no calculation errors, except those due to the limited window of how many moves ahead it's programmed (or allowed by practical time constraints) to look;
  • reliant solely on logical inferences based on pre-existing heuristics and memory (of positions, openings, patterns), but has no substitute for human intuition ("gut feeling");
  • (generally) unable to learn from its own mistakes by itself, or to derive new knowledge without explicit new programming by humans.

These last couple of aspects are humans' greatest enduring advantage over computer chess programs.  They are the fundamental reasons the strongest GMs can still beat computers occasionally (though humans are increasingly wary of trying, for fear of what might happen to their image if they failed).


Note:  A lot of additional content has been added in the Q&A section below, as people kept asking me specific questions, to which I have provided answers there.  Be sure to look through that section for more interesting aspects of the ins and outs of computer engines.


I hope this article has been useful.  Please ask further questions in the comments and I'll be glad to offer my perspective, based on what I know on the topic.


  • 3 years ago


    Another aspect of chess that engines are weak in is psychology. Indeed, current engines have no concept of psychology and do not use it in deciding which of several lines to play. A human, by contrast, can use an understanding of the other player's style, preferences, phobias and psychological stressors to gain an advantage. For example, if you know that a particular player avoids accepting sacrifices whose lines he does not fully understand, then you should make your move with confidence and vigor if you want the sacrifice declined; conversely, you should move tentatively and appear underconfident if you want it accepted. I had a friend who would always go into a deep think (lasting at least 5 minutes) if I spent some time looking like I was deep in thought and then stood up before making my move. That would convince him that I had a deep and devious plan up my sleeve.

  • 4 years ago


    Yet another friend recently wrote to me privately with feedback about this article -- thank you all for the interest and insightful comments all along -- and noted something worth mentioning about this topic.

    The fellow's comments essentially had to do with the issue of which areas chess engines are weak at versus which areas they excel in.

    There are three main areas of general weakness of chess engines, even to this day -- these are areas where an engine's positional evaluation can often be imprecise or even outright misleading.
    These areas of (relative) chess engine weakness -- and the main reasons for them -- are as follows:

    • endgames (in general) -- while endgames involve a lot of calculation (which engines are strong at), they also branch out significantly because there is so much open space on the board, and this makes reaching the bottom of each sub-variation (the point at which it can be reliably evaluated) very costly.  Because engines, for pragmatic reasons, often have a preset depth limit at which they stop exploring, or a time limit within which they must produce their recommendations, they can't reach that bottom and hence make unreliable evaluations.  Humans approach the subject quite differently: they look for patterns (e.g., king goes over to the other side, bishop presses on these squares, knight blocks opponent's pieces, etc.) and reason mainly from that perspective, with some calculation added but not predominantly.  There is no technical limitation preventing engines from learning this pattern-based thinking; chess programmers just haven't pursued it deeply enough yet.  I offered a plausible explanation for this in one of my previous posts on the topic: there just isn't much incentive to dig that deep, since engines are already too strong for humans (their customers) to match.
    • positional, long-range maneuvering -- while engines understand concepts like open files, weak squares, etc., the set of ideas that define the exquisite sense of a strong GM elude them (as they elude most players but those at the very top).  So engines can't do what they haven't been taught precisely, and Carlsen or Kramnik aren't exactly in the business of trying to teach them those subtleties anytime soon.
    • positional sacrifices -- these tend to be difficult for engines, because a pawn (or a piece) has a precise evaluation associated with it (including with reference to where on the board it sits and how actively engaged it can be in the ongoing play), whereas a positional exchange sacrifice (one of the more subtle aspects of GM play, much like prophylactic moves) usually has a long-range goal of trading something material for something that can't be pinpointed exactly but has an intuitive feel to it (human GMs often disagree about whether a particular positional sacrifice is or isn't worth the cost in material -- until they've seen enough evidence that the pattern in play compensates for it).  
      There's also the issue of psychology, which can come into play with sacrifices.  Engines (and computers in general) don't have a psyche, so they are unimpressed whether you play dull slow moves or sharp attacking chess, but humans aren't like that.  What works against humans often works not because it's objectively perfect (e.g., see Tal's games) but because the opponent is placed under a level of psychological pressure that adversely affects his decision making and clarity.  So a positional sacrifice (or any sacrifice, really) may work more effectively against a human than against a cool-"headed" engine.
    I hope this additional bit has been valuable to those of you who strive to understand how chess engines work, and how they don't work in principle.
  • 5 years ago


    Great read!

  • 6 years ago


    Here's another question and my response, following up on my previous post about the economics of the evolution of chess engines.

    Q: So if it's a money issue, I still don't get the money making idea behind 6 pieces left on the board in the Nalimov tablebases. I mean, who's going to fork out money for that? 

    A: The money issue comes, I believe, for a different reason. 

    To compete with other chess engines, a product needs to show areas of meaningful distinction and superiority, so that users who pay money for an engine would know why engine X is better than engine Y.  The question then becomes whether the producers of engine X can find profit in investing into the development of a particular feature, because if the feature is nice but only a small number of people can appreciate and use it, it's not going to be economically feasible to invest in it.  I believe this is why, at least partly, features like the engine: 

    (a) being able to cover endgames thoroughly deeper than 6 pieces (the current limit shown in the Nalimov tablebases); 

    (b) being able to plan better in the endgames (rather than merely calculate),

    are not the highest priority on the chess engine developers' list.

    After all, strong engines are already rated 3000 Elo and above, so the return on investment in raising their rating by another 50-100 points is not great: they will still be stronger than the strongest human player ever.  Human users will therefore choose one engine over another not necessarily by its sheer strength (it's too strong anyway), but by other features, such as its usability and usefulness in areas that the human users find meaningful for themselves.

    This may mean that there's a chance engines will focus their future development on being able to explain their moves to humans, rather than merely show long lines that are often cryptic to all but the most advanced players.  Let's hope that this is the case.  Certainly, there are technical and research challenges to be overcome in that respect, more so than economic ones, but these challenges don't seem infeasible to tackle to a reasonable degree.

  • 6 years ago


    I was recently asked why engines were believed to be relatively weaker in endgame play, and here is the answer I produced:

    The thing that computers don't do well is think in patterns (which humans, incidentally, can do quite well), e.g., "king goes over to the queen side, rook defends second rank, pawns are in such-and-such formation".  Instead, a chess engine knows how to calculate moves well by using its sheer processing power.  Extensions allow engines to also use existing databases for both openings and endings (so Nalimov's tablebases are part of some modern engines). 

    But those exhaustively computed endgames rely on the fact that there are very few pieces left on the board.  With each additional piece placed on the board, the number of options and combinations grows exponentially, so it quickly becomes impossible for the machine to evaluate every single line in a reasonable amount of time, and it is left to guess the future from however deep it has managed to calculate.  This is where people apply common sense and pattern recognition, but computers are not programmed to do these things.  They could be, but it would be expensive and non-trivial, and from an economic standpoint the return on investment would be questionable, I believe -- which is why it hasn't happened yet.
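    The exponential blow-up mentioned above is easy to make concrete.  Assuming an average branching factor of about 35 legal moves per position (a commonly cited ballpark for chess, used here purely for illustration), the number of positions a brute-force search must consider grows like this:

```python
# Illustration of the exponential growth of the search tree.
# 35 is an assumed average branching factor, not an exact figure.
BRANCHING_FACTOR = 35

for depth in (2, 4, 6, 8):
    nodes = BRANCHING_FACTOR ** depth
    print(f"depth {depth} half-moves: ~{nodes:,} positions")
```

    Four half-moves already mean about 1.5 million positions, and eight half-moves over two trillion -- which is why pruning, depth limits, and precomputed tablebases are unavoidable.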

    It is interesting that in the end it comes down to pure economics (and some advanced technological challenges, but by no means are they insurmountable) as to why certain aspects are not implemented in modern chess engines.  That may change over time, as economics has often been observed to shift incentives depending on the overall societal and business environment.

  • 6 years ago


    I invite people to read the comments in addition to the article, as they illuminate some additional issues that are not addressed directly in the main text, but may be of interest nonetheless.
  • 6 years ago


    hicentunc:  I don't think most (or all) of today's engines are capable of what you're describing.  (We discussed this same shortcoming in the pair of messages between me and Elroch earlier in this thread.) 

    However, fundamentally there is no obstacle to implementing what you're describing.  It's just more work to make it accurate and on par with the engine's other strengths, and it also would take much more involvement from actual strong chess players, rather than purely from computer programmers who know about chess.  I therefore expect the chess engines of the (not so distant) future to acquire this capability, gradually perhaps.

  • 6 years ago


    Another difference between engine and human players is that the former doesn't use schematic thinking as a human could do to form a plan in an ending or assess a position with fortress-like features.

    For example, can today's top engines correctly assess a transition into an opposite-coloured bishop endgame, provided there are enough pawns left that they can't use their endgame tablebases?

  • 6 years ago


    I've now added two more paragraphs toward the end of the article, reflecting my understanding of the fundamental similarities and differences between human chess players and modern computer engines.

  • 6 years ago


    It was very interesting to read your article, Valentin. I'm a software engineer but I was not familiar with chess algorithms. As to AI, I think we are very far away from achieving it. 

  • 7 years ago


    Very good point, Elroch.

    There's nothing that fundamentally prevents chess engines from spending some time thinking "how can I get from here to there" -- assuming that they have "there" as some desirable pattern that they want to pursue as advantageous.  Most chess programs were not designed with this mode, originally, and I assume that many still don't have it to this day. 

    That would be a different type of thought/evaluation process -- it's not a simple search algorithm (which is very mechanistic in nature), but it's a planning algorithm.  Incidentally, both search and planning are main branches of AI (the field of artificial intelligence).  Search was an earlier discovery and the focus of much work in the field in the earlier years of AI, whereas planning is a more modern approach with strong application in robotic tasks.  Advanced robots today do use planning, they don't search as much -- so computer chess engines can do that too, with the right design.

  • 7 years ago


    There is a general difference between computer and human views of a position that computers think only "where can I get to from here", and humans are sometimes able to think "how can I get to there", which is advantageous. Unfortunately it is not usually enough to swing the balance in favour of humans, against current highly efficient programs on fast hardware!

  • 7 years ago



    ~ ~ thanks for the input. much appreciated

  • 7 years ago


    Answering a couple of the questions people have raised below:

    - Although I don't know the details of Rybka, I know that engines generally differ from each other based on how well they do heuristics and how many patterns they recognize.  Since Rybka was developed with the instrumental help of strong human chessmasters, it's likely that they were able to put in more useful heuristics and more patterns that Rybka can recognize.  This is likely more important than raw computing speed and optimizations related to that.

    - To emulate weaker play, you can do a number of things: (a) restrict the time that the engine is allowed to "think" for each move; (b) restrict the depth up to which the engine is allowed to explore any position; (c) restrict the use of opening databases or endgame tablebases or advanced heuristics; etc.  Some of these are usually configurable parameters that a human player can set, while others are internally controlled.

    - To efficiently search for position matches (in order to avoid re-analyzing something that has already been analyzed), the mechanism is indeed based on hash tables.  When a new position is reached, its representation is hashed (the exact representation differs between engines, but shorter representations are generally preferable, since they are faster to hash), and the resulting hash value is looked up in the existing hash table: if it's already there, there is no need to analyze the position or dig deeper; if not, it is added to the table.

    Note that the hash value needs to be relatively short, since the hash table must fit in memory for lookups to be fast.  This typically constrains the index into the table to roughly 16-32 bits (engines usually compute a longer key, e.g., 64 bits, and store part of it in each entry to detect collisions).
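    For the curious, here is a minimal sketch of such a position cache, assuming the Zobrist hashing scheme commonly used in real engines (the tiny board and piece encoding are simplified for illustration):

```python
import random

random.seed(42)  # fixed seed so the example keys are reproducible

# One random 64-bit key per (square, piece-type) pair.  A real engine uses
# 64 squares and 12 piece types; this toy version uses 8 squares and 2 types.
ZOBRIST = [[random.getrandbits(64) for _ in range(2)] for _ in range(8)]

def zobrist_hash(position):
    """position: dict mapping square index -> piece-type index."""
    h = 0
    for square, piece in position.items():
        h ^= ZOBRIST[square][piece]  # XOR makes the hash order-independent
    return h

transposition_table = {}  # hash value -> cached evaluation

def lookup_or_store(position, evaluate):
    """Evaluate a position only if it hasn't been seen before."""
    key = zobrist_hash(position)
    if key not in transposition_table:
        transposition_table[key] = evaluate(position)
    return transposition_table[key]

pos = {0: 0, 3: 1}
first = lookup_or_store(pos, lambda p: len(p))  # computed and cached
second = lookup_or_store(pos, lambda p: 999)    # cache hit: never re-evaluated
```

    The XOR construction also makes incremental updates cheap: when a piece moves, the engine XORs out the key for its old square and XORs in the key for the new one, rather than rehashing the whole board.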

  • 7 years ago


    VERY interesting! Thanks for the info!

  • 7 years ago


    Do you have any idea why Rybka has been such superior software for so many years?

    And do you know what programming tricks are used to emulate weaker play in an engine?  (They're obviously not working very well, so I'm curious what the programmers have been trying to do.)

  • 7 years ago


    Valentin, great post, very interesting -- this could produce quite a debate within Team TPOA; maybe worth posting it in a forum ;-)

    great mate.
