Alpha-beta pruning
From Wikipedia, the free encyclopedia
Alpha-beta pruning is a search algorithm which seeks to reduce the number of nodes that are evaluated in the search tree by the minimax algorithm. It is an adversarial search algorithm used commonly for machine playing of two-player games (Tic-tac-toe, Chess, Go, etc.). It stops completely evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. Alpha-beta pruning is a sound optimization in that it does not change the result of the algorithm it optimizes.
History
Allen Newell and Herbert A. Simon, who used what John McCarthy calls an "approximation"^{[1]} in 1958, wrote that alpha-beta "appears to have been reinvented a number of times".^{[2]} Arthur Samuel had an early version, and Richards, Hart, Levine and/or Edwards found alpha-beta independently in the United States.^{[3]} McCarthy proposed similar ideas during the Dartmouth Conference in 1956 and suggested them to a group of his students, including Alan Kotok at MIT in 1961.^{[4]} Alexander Brudno independently discovered the alpha-beta algorithm, publishing his results in 1963.^{[5]} Donald Knuth and Ronald W. Moore refined the algorithm in 1975,^{[6]}^{[7]} and it has continued to be advanced.
Improvements over naive minimax
The benefit of alpha-beta pruning lies in the fact that branches of the search tree can be eliminated. The search time can in this way be limited to the 'more promising' subtree, and a deeper search can be performed in the same time. Like its predecessor, it belongs to the branch and bound class of algorithms. If the nodes are evaluated in an optimal or near-optimal order (best choice for the side on move ordered first at each node), a search to a given depth costs only about as much as a simple minimax search to slightly more than half that depth.
With an (average or constant) branching factor of b, and a search depth of d ply, the maximum number of leaf node positions evaluated (when the move ordering is pessimal) is O(b*b*...*b) = O(b^{d}) – the same as a simple minimax search. If the move ordering for the search is optimal (meaning the best moves are always searched first), the number of leaf node positions evaluated is about O(b*1*b*1*...*b) for odd depth and O(b*1*b*1*...*1) for even depth, or O(b^{d/2}). In the latter case, the effective branching factor is reduced to its square root, or, equivalently, the search can go twice as deep with the same amount of computation.^{[8]} The explanation of b*1*b*1*... is that all the first player's moves must be studied to find the best one, but for each, only the best second player's move is needed to refute all but the first (and best) first player move – alpha-beta ensures no other second player moves need be considered. If b = 40 (as in chess), and the search depth is 12 ply, the ratio between optimal and pessimal sorting is a factor of nearly 40^{6}, or about 4 billion times.
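The chess figures above can be checked with a few lines of arithmetic. This is only an order-of-growth sketch: the exact optimal-ordering leaf count given by Knuth and Moore is b^{⌈d/2⌉} + b^{⌊d/2⌋} − 1, but b^{d/2} captures the growth rate used in the ratio.

```python
b, d = 40, 12               # chess-like branching factor, 12-ply search
pessimal = b ** d           # worst-case move ordering: O(b^d) leaves
optimal = b ** (d // 2)     # near-best-case ordering: about O(b^(d/2)) leaves
print(pessimal // optimal)  # 40^6 = 4096000000, i.e. about 4 billion
```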
Normally during alpha-beta, the subtrees are temporarily dominated by either a first player advantage (when many first player moves are good, and at each search depth the first move checked by the first player is adequate, but all second player responses are required to try to find a refutation), or vice versa. This advantage can switch sides many times during the search if the move ordering is incorrect, each time leading to inefficiency. As the number of positions searched decreases exponentially each move nearer the current position, it is worth spending considerable effort on sorting early moves. An improved sort at any depth will exponentially reduce the total number of positions searched, and sorting all positions at depths near the root node is relatively cheap as there are so few of them. In practice, the move ordering is often determined by the results of earlier, smaller searches, such as through iterative deepening.
The algorithm maintains two values, alpha and beta, which represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of, respectively. Initially alpha is negative infinity and beta is positive infinity. As the recursion progresses the "window" becomes smaller. When beta becomes less than or equal to alpha, it means that the current position cannot be the result of best play by both players and hence need not be explored further.
Additionally, this algorithm can be trivially modified to return an entire principal variation in addition to the score. Some more aggressive algorithms, such as MTD(f), do not easily permit such a modification.
Pseudocode
function alphabeta(node, depth, α, β)
    (* β is the previous player's best choice; that player doesn't want α to worsen it *)
    if node is a terminal node or depth = 0
        return the heuristic value of node
    foreach child of node
        α := max(α, -alphabeta(child, depth-1, -β, -α))
        (* use symmetry, -β becomes the subsequently pruned α *)
        if β ≤ α
            break (* Beta cut-off *)
    return α

(* Initial call *)
alphabeta(origin, depth, -infinity, +infinity)
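The pseudocode above translates into a short runnable sketch in Python. The nested-list game tree, the search depth, and the convention that leaf numbers score the position for the player to move at that node are illustrative assumptions, not part of the original.

```python
import math

def alphabeta(node, depth, alpha, beta):
    # Leaves are plain numbers, scored for the player to move at that node.
    if isinstance(node, (int, float)) or depth == 0:
        return node
    for child in node:
        # Negamax symmetry: negate the child's score, and negate and swap
        # the (alpha, beta) window for the opponent's point of view.
        alpha = max(alpha, -alphabeta(child, depth - 1, -beta, -alpha))
        if beta <= alpha:
            break  # beta cut-off: the opponent will never allow this line
    return alpha

# Depth-2 toy tree: the root player picks a branch, the opponent picks a reply.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, 2, -math.inf, math.inf))  # prints 3
```

On this tree the cut-off fires in the second branch: once the reply 2 refutes it (2 is worse for the root player than the 3 already guaranteed), the leaves 4 and 6 are never evaluated.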
Heuristic improvements
Further improvement can be achieved without sacrificing accuracy by using ordering heuristics to search parts of the tree that are likely to force alpha-beta cut-offs early. For example, in chess, moves that take pieces may be examined before moves that do not, and moves that have scored highly in earlier passes through the game-tree analysis may be evaluated before others. Another common, and very cheap, heuristic is the killer heuristic, where the last move that caused a beta cut-off at the same level in the tree search is always examined first. This idea can be generalized into a set of refutation tables.
Alpha-beta search can be made even faster by considering only a narrow search window (generally determined by guesswork based on experience). This is known as aspiration search. In the extreme case, the search is performed with alpha and beta equal, a technique known as zero-window search, null-window search, or scout search. This is particularly useful for win/loss searches near the end of a game, where the extra depth gained from the narrow window and a simple win/loss evaluation function may lead to a conclusive result. If an aspiration search fails, it is straightforward to detect whether it failed high (the high edge of the window was too low) or low (the low edge of the window was too high). This gives information about what window values might be useful in a re-search of the position.
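The fail-high/fail-low re-search logic can be sketched as follows. This is a minimal self-contained illustration: the searcher, the toy tree, and the particular guess and window size are assumptions made for the example, not part of the original text.

```python
import math

def alphabeta(node, depth, alpha, beta):
    # Minimal negamax alpha-beta over nested lists; leaves are numbers.
    if isinstance(node, (int, float)) or depth == 0:
        return node
    for child in node:
        alpha = max(alpha, -alphabeta(child, depth - 1, -beta, -alpha))
        if beta <= alpha:
            break
    return alpha

def aspiration_search(node, depth, guess, window):
    # Search a narrow window around a guessed score; if the result hits
    # either edge, the search failed and must be redone with a wider window.
    alpha, beta = guess - window, guess + window
    score = alphabeta(node, depth, alpha, beta)
    if score <= alpha:    # failed low: true score may lie below the window
        score = alphabeta(node, depth, -math.inf, beta)
    elif score >= beta:   # failed high: true score may lie above the window
        score = alphabeta(node, depth, alpha, math.inf)
    return score

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(aspiration_search(tree, 2, guess=4, window=2))  # prints 3
```

When the guess is good, the narrow window prunes more than a full-width search would; a bad guess costs one re-search but still returns the exact value.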
Other algorithms
More advanced algorithms that are even faster while still being able to compute the exact minimax value are known, such as Negascout and MTD(f).
Since the minimax algorithm and its variants are inherently depth-first, a strategy such as iterative deepening is usually used in conjunction with alpha-beta so that a reasonably good move can be returned even if the algorithm is interrupted before it has finished execution. Another advantage of using iterative deepening is that searches at shallower depths give move-ordering hints that can help produce cutoffs for higher-depth searches much earlier than would otherwise be possible.
Algorithms like SSS*, on the other hand, use the best-first strategy. This can potentially make them more time-efficient, but typically at a heavy cost in space efficiency.^{[citation needed]}
See also
 Pruning (algorithm)
 Branch and bound
 Minimax
 Combinatorial optimization
 Negamax
 Transposition table
 MTD(f)
 Negascout
 Killer heuristic
References
 ^ McCarthy, John (27 November 2006). "Human Level AI Is Harder Than It Seemed in 1955". http://www-formal.stanford.edu/jmc/slides/wrong/wrong-sli/wrong-sli.html. Retrieved on 2006-12-20.
 ^ Newell, Allen; Simon, Herbert A. (March 1976). "Computer Science as Empirical Inquiry: Symbols and Search" (PDF). Communications of the ACM, Vol. 19, No. 3. http://archive.computerhistory.org/projects/chess/related_materials/text/23.Computer_science_as_empirical_inquiry/23.Computer_science_as_empirical_inquiry.newell_simon.1975.ACM.062303007.pdf. Retrieved on 2006-12-21.
 ^ Richards, D.J.; Hart, T.P. (4 December 1961 to 28 October 1963). "The Alpha-Beta Heuristic (AIM-030)". Massachusetts Institute of Technology. http://hdl.handle.net/1721.1/6098. Retrieved on 2006-12-21.
 ^ Kotok, Alan (3 December 2004). "MIT Artificial Intelligence Memo 41". http://www.kotok.org/AI_Memo_41.html. Retrieved on 2006-07-01.
 ^ Marsland, T.A. (May 1987). "Computer Chess Methods" (PDF), from Encyclopedia of Artificial Intelligence, S. Shapiro (editor). J. Wiley & Sons. pp. 159–171. http://www.cs.ualberta.ca/~tony/OldPapers/encyc.mac.pdf. Retrieved on 2006-12-21.
 ^ Knuth, D. E.; Moore, R. W. (1975). "An Analysis of Alpha-Beta Pruning". Artificial Intelligence Vol. 6, No. 4: 293–326.

 Reprinted as Chapter 9 in Knuth, Donald E. (2000). Selected Papers on Analysis of Algorithms. Stanford, California: Center for the Study of Language and Information. CSLI Lecture Notes, no. 102. ISBN 1-57586-212-3. OCLC 222512366. http://www-cs-faculty.stanford.edu/~knuth/aa.html.

 ^ Abramson, Bruce (June 1989). "Control Strategies for Two-Player Games". ACM Computing Surveys, Vol. 21, No. 2. http://www.theinformationist.com/pdf/constrat.pdf. Retrieved on 2008-08-20.
 ^ Russell, Stuart J.; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach (2nd ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 0-13-790395-2. http://aima.cs.berkeley.edu/
External links
 http://www.emunix.emich.edu/~evett/AI/AlphaBeta_movie/sld001.htm
 http://sern.ucalgary.ca/courses/CPSC/533/W99/presentations/L1_5B_McCullough_Melnyk/
 http://sern.ucalgary.ca/courses/CPSC/533/W99/presentations/L2_5B_Lima_Neitz/search.html
 http://www.maths.nott.ac.uk/personal/anw/G13GAM/alphabet.html
 http://chess.verhelst.org/search.html
 http://www.frayn.net/beowulf/index.html
 http://hal.inria.fr/docs/00/12/15/16/PDF/RR6062.pdf
 Minimax (with or without alpha-beta pruning) algorithm visualization – game tree solving (Java Applet)