Concurrency control
In computer science, especially in the fields of computer programming (see also concurrent programming, parallel programming), operating systems (see also parallel computing), multiprocessors, and databases, concurrency control ensures that correct results for concurrent operations are generated, while getting those results as quickly as possible.
Concurrency control in databases
Concurrency control in database management systems (DBMS) ensures that database transactions are performed concurrently without violating the data integrity of the database. Executed transactions should follow the ACID rules, described below. The DBMS must guarantee that only serializable, recoverable schedules are generated (unless serializability is intentionally relaxed), that no effect of committed transactions is lost, and that no effect of aborted (rolled back) transactions remains in the database.
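The classic anomaly that concurrency control prevents is the lost update, in which two concurrent transactions read the same value and each writes back a result computed from its stale read. A minimal sketch, assuming a toy in-memory "database" held in a Python dict, with transactions simulated by threads and no concurrency control applied (the names are illustrative, not part of any DBMS API):

```python
import threading
import time

# Toy in-memory "database" with no concurrency control at all.
db = {"balance": 100}

def deposit(amount):
    # Read, compute, write -- with nothing isolating the three steps.
    current = db["balance"]
    time.sleep(0.01)        # artificial delay to make an unlucky interleaving likely
    db["balance"] = current + amount

t1 = threading.Thread(target=deposit, args=(50,))
t2 = threading.Thread(target=deposit, args=(25,))
t1.start(); t2.start()
t1.join(); t2.join()

# A serial execution yields 175; the interleaved run loses one update (150 or 125).
print(db["balance"])
```

A serializable execution would behave as if the two deposits had run one after the other; the mechanisms described below exist to enforce exactly that.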
Transaction ACID rules
- Atomicity - When a transaction completes, either the effects of all of its operations remain or none of them do; in other words, to the outside world the transaction appears indivisible, atomic (see the transfer sketch after this list).
- Consistency - Every transaction must leave the database in a consistent state.
- Isolation - Transactions cannot interfere with each other. Providing isolation is the main goal of concurrency control.
- Durability - The effects of successfully committed transactions must persist through crashes.
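As an illustration of atomicity and durability, SQL engines expose explicit commit and rollback. A minimal sketch using Python's built-in sqlite3 module; the accounts table and the transfer amounts are invented for the example:

```python
import sqlite3

conn = sqlite3.connect("bank.db")
conn.execute("CREATE TABLE IF NOT EXISTS accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT OR IGNORE INTO accounts VALUES ('alice', 100), ('bob', 100)")
conn.commit()

try:
    # Both updates belong to one transaction: either both persist or neither does.
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
    conn.commit()      # durability: the committed transfer survives a crash
except sqlite3.Error:
    conn.rollback()    # atomicity: a failure undoes the partial transfer
finally:
    conn.close()
```

Isolation, by contrast, is the property that the concurrency control mechanisms of the next section provide.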
Concurrency control mechanisms
The main categories of concurrency control mechanisms are:
- Optimistic - Delay the synchronization checks for a transaction until it ends, without blocking its (read, write) operations, and then abort transactions that would violate the desired synchronization rules.
- Pessimistic - Block any operation of a transaction that would cause a violation of the synchronization rules. Both approaches are sketched below.
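The contrast between the two categories can be sketched with a versioned record: a pessimistic writer takes a lock before touching the record, while an optimistic writer works from a snapshot and validates the version at commit time, retrying (the analogue of abort and restart) if another writer got there first. A minimal sketch in Python; the Record class, its fields, and the function names are invented for the illustration:

```python
import threading

class Record:
    """Toy versioned record; the fields are invented for the illustration."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()

def pessimistic_update(record, compute):
    # Pessimistic: block other writers for the whole read-compute-write section.
    with record.lock:
        record.value = compute(record.value)
        record.version += 1

def optimistic_update(record, compute):
    # Optimistic: work without blocking, validate at commit time, retry on conflict.
    while True:
        snapshot_version = record.version
        new_value = compute(record.value)      # no lock held while computing
        with record.lock:                      # brief critical section to validate and write
            if record.version == snapshot_version:
                record.value = new_value
                record.version += 1
                return
        # Validation failed: another transaction committed first -- abort and restart.

acct = Record(100)
optimistic_update(acct, lambda v: v + 50)
pessimistic_update(acct, lambda v: v - 30)
print(acct.value, acct.version)   # 120 2
```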
Many methods for concurrency control exist. Major methods, each of which has many variants, include:
- Two-phase locking (see the sketch after this list)
- Conflict (serializability, precedence) graph checking
- Timestamp ordering
- Commitment ordering
- Multiversion concurrency control
- Index concurrency control (for synchronizing indexes)
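To give a feel for the first of these, in two-phase locking a transaction acquires locks in a growing phase and releases them in a shrinking phase; the strict variant holds every lock until commit. A minimal sketch of strict two-phase locking in Python, with a toy lock manager whose names are invented for the example:

```python
import threading

# Toy lock manager: one lock per data item, shared by all transactions.
item_locks = {"x": threading.Lock(), "y": threading.Lock()}
data = {"x": 100, "y": 0}

class Transaction:
    def __init__(self):
        self.held = []                  # locks acquired during the growing phase

    def lock(self, item):
        lk = item_locks[item]
        lk.acquire()                    # growing phase: locks are only acquired
        self.held.append(lk)

    def commit(self):
        for lk in self.held:            # shrinking phase: in strict 2PL, all locks
            lk.release()                # are released only at commit
        self.held.clear()

def transfer(amount):
    # Every transaction locks items in the same fixed order (x before y),
    # which is a common way to avoid deadlock between 2PL transactions.
    t = Transaction()
    t.lock("x")
    t.lock("y")
    data["x"] -= amount
    data["y"] += amount
    t.commit()

threads = [threading.Thread(target=transfer, args=(10,)) for _ in range(5)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(data)                             # {'x': 50, 'y': 50}
```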
Almost all implemented concurrency control mechanisms are quite efficient and guarantee schedules that are conflict serializable, unless relaxed forms of serializability are allowed (depending on the application) for better performance.
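Conflict serializability can be tested directly with the precedence (conflict) graph mentioned above: nodes are transactions, and an edge runs from Ti to Tj when an operation of Ti conflicts with (touches the same item as, with at least one write) a later operation of Tj; the schedule is conflict serializable exactly when this graph is acyclic. A minimal checker, assuming a schedule represented as a list of (transaction, operation, item) tuples, a representation invented for the example:

```python
def conflict_serializable(schedule):
    """schedule: list of (txn, op, item) with op in {'r', 'w'}, in execution order."""
    # Build the precedence graph: edge ti -> tj when an operation of ti
    # conflicts with a later operation of tj (same item, at least one write).
    edges = set()
    for i, (ti, op_i, item_i) in enumerate(schedule):
        for tj, op_j, item_j in schedule[i + 1:]:
            if ti != tj and item_i == item_j and 'w' in (op_i, op_j):
                edges.add((ti, tj))

    # The schedule is conflict serializable iff the graph has no cycle.
    nodes = {t for t, _, _ in schedule}
    graph = {n: [b for a, b in edges if a == n] for n in nodes}
    visiting, done = set(), set()

    def has_cycle(n):
        if n in done:
            return False
        if n in visiting:
            return True
        visiting.add(n)
        cyclic = any(has_cycle(m) for m in graph[n])
        visiting.discard(n)
        done.add(n)
        return cyclic

    return not any(has_cycle(n) for n in nodes)

# r1(x) w2(x) w1(x): edges T1 -> T2 and T2 -> T1 form a cycle, so not conflict serializable.
print(conflict_serializable([("T1", "r", "x"), ("T2", "w", "x"), ("T1", "w", "x")]))  # False
```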
Concurrency control in operating systems
Operating systems, especially real-time operating systems, need to maintain the illusion that many tasks are all running at the same time. Such multitasking is fairly simple when all tasks are independent of each other. However, when several tasks try to use the same resource, or when tasks try to share information, the result can be confusion and inconsistency; concurrent computing aims to solve that problem. Some solutions involve "locks" similar to the locks used in databases, but these risk causing problems of their own, such as deadlock. Other solutions rely on non-blocking techniques, such as lock-free and wait-free algorithms.
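A typical operating-system instance of the problem is several tasks incrementing a shared counter: without mutual exclusion some increments can be lost, while a lock serializes the read-modify-write. A minimal sketch with Python threads; the counter and names are illustrative, and a kernel would use spinlocks, mutexes, or non-blocking primitives for the same purpose:

```python
import threading

counter = 0
counter_lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        # The lock makes the read-modify-write atomic with respect to other workers.
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000, which is not guaranteed without the lock
```

When a task must hold several locks at once, acquiring them in one fixed global order is a common way to avoid deadlock.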
See also
- Mutual exclusion
- Isolation (computer science)
- Serializability
- Schedule
- Multiversion concurrency control
- Global concurrency control
- Concurrent programming