Model checking

From Wikipedia, the free encyclopedia

In the field of logic in computer science, model checking refers to the following problem: Given a simplified model of a system, test automatically whether this model meets a given specification. Typically, the systems one has in mind are hardware or software systems, and the specification contains safety requirements such as the absence of deadlocks and similar critical states that can cause the system to crash.

In order to solve such a problem algorithmically, both the model of the system and the specification are formulated in some precise mathematical language. The problem then becomes a task in logic: to check whether a given structure satisfies a given logical formula. The concept is general and applies to all kinds of logics and suitable structures. A simple model-checking problem is verifying whether a given formula of propositional logic is satisfied by a given structure (here, a truth assignment).
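As a minimal sketch of that simplest case (an illustration, not taken from the article; the tuple encoding of formulas is an assumption made for the example), checking satisfaction for propositional logic amounts to evaluating the formula under a truth assignment:

    # Minimal sketch: propositional model checking.
    # Formulas are nested tuples: ("var", name), ("not", f), ("and", f, g), ("or", f, g).
    def holds(formula, assignment):
        """Return True if the formula is satisfied by the truth assignment."""
        op = formula[0]
        if op == "var":
            return assignment[formula[1]]
        if op == "not":
            return not holds(formula[1], assignment)
        if op == "and":
            return holds(formula[1], assignment) and holds(formula[2], assignment)
        if op == "or":
            return holds(formula[1], assignment) or holds(formula[2], assignment)
        raise ValueError("unknown operator: " + op)

    # Example: (x and not y) is satisfied by the assignment {x: True, y: False}.
    print(holds(("and", ("var", "x"), ("not", ("var", "y"))),
                {"x": True, "y": False}))  # True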

Overview

An important class of model checking methods have been developed for checking models of hardware and software designs where the specification is given by a temporal logic formula. Pioneering work in the model checking of temporal logic formulae was done by E. M. Clarke and E. A. Emerson[1][2][3] and by J. P. Queille and J. Sifakis[4]. Clarke, Emerson, and Sifakis shared the 2007 Turing Award for their work on model checking.[5][6]

Model checking is most often applied to hardware designs. For software, because of undecidability (see computability theory), the approach cannot be fully algorithmic: it may fail to prove or disprove a given property.

The structure is usually given as a source code description in an industrial hardware description language or a special-purpose language. Such a program corresponds to a finite state machine (FSM), i.e., a directed graph consisting of nodes (or vertices) and edges. A set of atomic propositions is associated with each node, typically stating which memory elements hold the value 1. The nodes represent states of the system, the edges represent possible transitions which may alter the state, and the atomic propositions represent the basic properties that hold at a point of execution.

Formally, the problem can be stated as follows: given a desired property, expressed as a temporal logic formula p, and a structure M with initial state s, decide whether M, s ⊨ p. If M is finite, as it is in hardware, model checking reduces to a graph search.
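For the finite case, the graph search can be made concrete with a small sketch (not from the article; the toy transition system and the names used are assumptions): checking an invariant such as "no reachable state is a crash state" reduces to a breadth-first search from the initial state, returning a counter-example trace when the property fails.

    # Minimal sketch: explicit-state checking of an invariant by graph search.
    from collections import deque

    def check_invariant(initial, transitions, is_bad):
        """Return None if no reachable state is bad, or a counter-example path
        from the initial state to a bad state otherwise."""
        parent = {initial: None}
        queue = deque([initial])
        while queue:
            state = queue.popleft()
            if is_bad(state):
                path = []                      # reconstruct the trace as a counter-example
                while state is not None:
                    path.append(state)
                    state = parent[state]
                return list(reversed(path))
            for nxt in transitions.get(state, ()):
                if nxt not in parent:          # visit each state at most once
                    parent[nxt] = state
                    queue.append(nxt)
        return None                            # invariant holds on all reachable states

    # Hypothetical toy system with a reachable "crash" state.
    transitions = {"idle": ["busy"], "busy": ["idle", "crash"]}
    print(check_invariant("idle", transitions, lambda s: s == "crash"))
    # ['idle', 'busy', 'crash']  -- counter-example trace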

Model checking tools

Model checking tools face a combinatorial blow-up of the state space, commonly known as the state explosion problem, that must be addressed to solve most real-world problems. There are several approaches to combat this problem.

  1. Symbolic algorithms avoid ever building the graph for the FSM; instead, they represent the graph implicitly using a formula in propositional logic. The use of binary decision diagrams (BDDs) was made popular by the work of Ken McMillan.[7]
  2. Bounded model checking algorithms unroll the FSM for a fixed number of steps k and check whether a property violation can occur in k or fewer steps. This typically involves encoding the restricted model as an instance of SAT. The process can be repeated with larger and larger values of k until all possible violations have been ruled out (cf. iterative deepening depth-first search); a small sketch of this bounded search appears after the list.
  3. Partial order reduction can be used (on explicitly represented graphs) to reduce the number of independent interleavings of concurrent processes that need to be considered. The basic idea is that if it does not matter, for the kind of things one intends to prove, whether A or B is executed first, then it is a waste of time to consider both the AB and the BA interleavings.
  4. Abstraction attempts to prove properties of a system by first simplifying it. The simplified system usually does not satisfy exactly the same properties as the original one, so that a process of refinement may be necessary. Generally, one requires the abstraction to be sound (the properties proved on the abstraction are true of the original system); however, most often, the abstraction is not complete (not all true properties of the original system are true of the abstraction). An example of abstraction is to ignore the values of non-Boolean variables in a program and to consider only Boolean variables and the control flow of the program; such an abstraction, though it may appear coarse, may in fact be sufficient to prove e.g. properties of mutual exclusion.
  5. Counter-example guided abstraction refinement (CEGAR) begins checking with a coarse (imprecise) abstraction and iteratively refines it. When a violation (counter-example) is found, the tool analyzes it for feasibility (i.e., is the violation genuine or the result of an incomplete abstraction?). If the violation is feasible, it is reported to the user; if it is not, the proof of infeasibility is used to refine the abstraction and checking begins again.[8]
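The bounded-checking idea from item 2 can be illustrated with a short sketch (an example built on assumptions, not from the article: real bounded model checkers encode the k-step unrolling of the transition relation as a SAT instance, whereas this sketch simply enumerates executions of length at most k and iteratively deepens the bound; the helper names and toy system are hypothetical):

    # Minimal sketch of bounded checking by iterative deepening over executions.
    def violation_within(initial, successors, is_bad, k):
        """Return a counter-example path of length <= k, or None if none exists."""
        def dfs(path, depth):
            state = path[-1]
            if is_bad(state):
                return list(path)
            if depth == k:
                return None                    # bound reached, stop exploring this path
            for nxt in successors(state):
                found = dfs(path + [nxt], depth + 1)
                if found:
                    return found
            return None
        return dfs([initial], 0)

    def bounded_check(initial, successors, is_bad, max_k):
        """Increase the bound k until a violation is found or max_k is reached."""
        for k in range(max_k + 1):
            trace = violation_within(initial, successors, is_bad, k)
            if trace is not None:
                return k, trace
        return None                            # no violation within max_k steps

    # Hypothetical counter that is "bad" once it reaches 3.
    successors = lambda n: [n + 1] if n < 5 else []
    print(bounded_check(0, successors, lambda n: n == 3, max_k=10))
    # (3, [0, 1, 2, 3])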

Model checking tools were initially developed to reason about the logical correctness of discrete state systems, but have since been extended to deal with real-time and limited forms of hybrid systems.


References

  1. ^ Allen Emerson, E.; Clarke, Edmund M. (1980), "Characterizing correctness properties of parallel programs using fixpoints", Automata, Languages and Programming, doi:10.1007/3-540-10003-2_69 
  2. ^ Edmund M. Clarke, E. Allen Emerson: "Design and Synthesis of Synchronization Skeletons Using Branching-Time Temporal Logic". Logic of Programs 1981: 52-71.
  3. ^ Clarke, E. M.; Emerson, E. A.; Sistla, A. P. (1986), "Automatic verification of finite-state concurrent systems using temporal logic specifications", ACM Transactions on Programming Languages and Systems 8: 244, doi:10.1145/5397.5399 
  4. ^ Queille, J. P.; Sifakis, J. (1982), "Specification and verification of concurrent systems in CESAR", International Symposium on Programming, doi:10.1007/3-540-11494-7_22 
  5. ^ Press Release: ACM Turing Award Honors Founders of Automatic Verification Technology
  6. ^ USACM: 2007 Turing Award Winners Announced
  7. ^ Symbolic Model Checking, Kenneth L. McMillan, Kluwer, ISBN 0-7923-9380-5, also online.
  8. ^ Clarke, Edmund; Grumberg, Orna; Jha, Somesh; Lu, Yuan; Veith, Helmut (2000), "Counterexample-Guided Abstraction Refinement", Computer Aided Verification 1855: 154, doi:10.1007/10722167_15 


This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.
