Backward chaining

Backward chaining (or backward reasoning) is an inference method used in artificial intelligence. It is one of the two main methods of reasoning with inference rules – the other is forward chaining, which applies rules in the forward (modus ponens) direction, from antecedents to consequents. Backward chaining is implemented in logic programming by SLD resolution.

Backward chaining starts with a list of goals (or a hypothesis) and works backwards from consequents to antecedents to see whether any available data supports those consequents. An inference engine using backward chaining searches the inference rules until it finds one whose consequent (Then clause) matches a desired goal. If the antecedent (If clause) of that rule is not known to be true, it is added to the list of goals; confirming the original goal then also requires data that satisfies this new subgoal.

For example, suppose that the goal is to conclude the color of my pet Fritz, given that he croaks and eats flies, and that the rule base contains the following four rules:

  1. If X croaks and eats flies – Then X is a frog
  2. If X chirps and sings – Then X is a canary
  3. If X is a frog – Then X is green
  4. If X is a canary – Then X is yellow

This rule base would be searched, and the third and fourth rules would be selected, because their consequents (Then Fritz is green, Then Fritz is yellow) match the goal (to determine Fritz's color). It is not yet known that Fritz is a frog, so both antecedents (If Fritz is a frog, If Fritz is a canary) are added to the goal list. The rule base is searched again, and this time the first two rules are selected, because their consequents (Then X is a frog, Then X is a canary) match the new goals that were just added to the list. The antecedent of the first rule (If Fritz croaks and eats flies) is known to be true, so it can be concluded that Fritz is a frog and not a canary. The goal of determining Fritz's color is now achieved: Fritz is green if he is a frog and yellow if he is a canary, but since he croaks and eats flies, he is a frog, and therefore he is green.
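
The procedure just described can be sketched in a few lines of code. The following Python snippet is not from the original article; the rule encoding and the backward_chain function are illustrative assumptions, and the chainer is deliberately simplified to ground facts (no variables or unification):

  # Rules are (antecedents, consequent) pairs; names mirror the Fritz example.
  RULES = [
      ({"croaks", "eats flies"}, "is a frog"),   # rule 1
      ({"chirps", "sings"}, "is a canary"),      # rule 2
      ({"is a frog"}, "is green"),               # rule 3
      ({"is a canary"}, "is yellow"),            # rule 4
  ]

  FACTS = {"croaks", "eats flies"}               # what is known about Fritz

  def backward_chain(goal, facts, rules):
      """Return True if `goal` can be proved from `facts` using `rules`."""
      if goal in facts:                          # the goal is already a known fact
          return True
      for antecedents, consequent in rules:
          if consequent == goal:                 # Then clause matches the goal
              # Work backwards: every If clause becomes a new subgoal.
              if all(backward_chain(a, facts, rules) for a in antecedents):
                  return True
      return False

  for colour in ("is green", "is yellow"):       # try each candidate colour as a goal
      if backward_chain(colour, FACTS, RULES):
          print("Fritz", colour)                 # prints: Fritz is green

A full logic programming system such as Prolog would additionally unify variables (the X in the rules above) and guard against cyclic rule bases; both are omitted here for brevity.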

Because the list of goals determines which rules are selected and used, this method is called goal-driven, in contrast to data-driven forward-chaining inference. The backward chaining approach is often employed by expert systems.
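
For comparison, a data-driven sketch (again an illustrative assumption, not part of the article) fires every rule whose If clause is satisfied by the known facts, deriving all of their consequences rather than working back from a chosen goal:

  def forward_chain(facts, rules):
      """Repeatedly fire rules whose antecedents hold until no new facts appear."""
      facts = set(facts)
      changed = True
      while changed:
          changed = False
          for antecedents, consequent in rules:
              if antecedents <= facts and consequent not in facts:
                  facts.add(consequent)          # derive a new fact from the data
                  changed = True
      return facts

  # With the RULES from the previous sketch:
  # forward_chain({"croaks", "eats flies"}, RULES)
  # -> {"croaks", "eats flies", "is a frog", "is green"}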

Programming languages such as Prolog, Knowledge Machine and ECLiPSe support backward chaining within their inference engines.

In popular culture

This process is a staple of crime fiction, where the investigator is faced with an effect (the crime) for which there are a number of possible causes (suspects). Sir Arthur Conan Doyle's creation, Sherlock Holmes, arguably fiction's most famous detective, put it this way:

When one has eliminated the impossible, whatever is left, however improbable, must be the truth.
