Concurrent computing

From Wikipedia, the free encyclopedia

Concurrent computing is a form of computing in which programs are designed as collections of interacting computational processes that may be executed in parallel.[1] Concurrent programs can be executed sequentially on a single processor by interleaving the execution steps of each computational process, or executed in parallel by assigning each computational process to one of a set of processors that may be in close proximity or distributed across a network. The main challenges in designing concurrent programs are ensuring the correct sequencing of the interactions or communications between different computational processes, and coordinating access to resources that are shared between processes.[1] A number of different methods can be used to implement concurrent programs, such as implementing each computational process as an operating system process, or implementing the computational processes as a set of threads within a single operating system process.

Pioneers in the field of concurrent computing include Edsger Dijkstra, Per Brinch Hansen, and C. A. R. Hoare.

Concurrent interaction and communication

In some concurrent computing systems communication between the concurrent components is hidden from the programmer (e.g., by using futures), while in others it must be handled explicitly. Explicit communication can be divided into two classes:

Shared memory communication 
Concurrent components communicate by altering the contents of shared memory locations (exemplified by Java and C#). This style of concurrent programming usually requires the application of some form of locking (e.g., mutexes (mutual-exclusion locks), semaphores, or monitors) to coordinate between threads.
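A minimal Java sketch (not from the article) of shared-memory communication: two threads increment the same counter, with a `ReentrantLock` serving as the mutex that makes each read-modify-write step atomic.

```java
import java.util.concurrent.locks.ReentrantLock;

// Two threads communicate through the shared field `counter`;
// the lock ensures their updates do not interleave mid-operation.
public class SharedCounter {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();          // acquire the mutex
                try {
                    counter++;        // critical section: shared-memory update
                } finally {
                    lock.unlock();    // always release, even on exception
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter);  // 200000 with locking; unpredictable without
    }
}
```

Removing the lock makes the final count unpredictable, because `counter++` is a read, an add, and a write that the two threads can interleave.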
Message passing communication 
Concurrent components communicate by exchanging messages (exemplified by Erlang and occam). The exchange of messages may be carried out asynchronously (sometimes referred to as "send and pray", although it is standard practice to resend messages that are not acknowledged as received), or may use a rendezvous style in which the sender blocks until the message is received. Message-passing concurrency tends to be far easier to reason about than shared-memory concurrency, and is typically considered a more robust, although slower, form of concurrent programming. A wide variety of mathematical theories for understanding and analyzing message-passing systems are available, including the Actor model, and various process calculi. Message passing can be efficiently implemented on symmetric multiprocessors using shared coherent memory.

Coordinating access to resources

One of the major issues in concurrent computing is preventing concurrent processes from interfering with each other. For example, consider the following algorithm for making withdrawals from a checking account represented by the shared resource balance:

1  bool withdraw(int withdrawal)
2  {
3     if( balance >= withdrawal )
4     {
5         balance -= withdrawal;
6         return true;
7     } 
8     return false;
9  }

Suppose balance=500, and two concurrent processes make the calls withdraw(300) and withdraw(350). If line 3 in both operations executes before line 5 in either, both operations will find that balance >= withdrawal evaluates to true, and execution will proceed to subtracting the withdrawal amount. However, since both processes perform their withdrawals, the total amount withdrawn will end up being more than the original balance. These sorts of problems with shared resources require the use of concurrency control, or non-blocking algorithms.
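One way to repair the race, sketched here in Java (an assumption, since the article's snippet is language-neutral): making the check and the update execute as a single atomic step with the `synchronized` keyword.

```java
// synchronized serializes withdraw() on the account, so the balance
// check and the subtraction can no longer interleave between threads.
public class Account {
    private int balance;

    Account(int initial) { balance = initial; }

    synchronized boolean withdraw(int withdrawal) {
        if (balance >= withdrawal) {  // the check ...
            balance -= withdrawal;    // ... and the update are now atomic
            return true;
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        Account account = new Account(500);
        boolean[] ok = new boolean[2];
        Thread t1 = new Thread(() -> ok[0] = account.withdraw(300));
        Thread t2 = new Thread(() -> ok[1] = account.withdraw(350));
        t1.start(); t2.start();
        t1.join(); t2.join();
        // 300 + 350 > 500, so exactly one of the two withdrawals can succeed.
        System.out.println((ok[0] ? 1 : 0) + (ok[1] ? 1 : 0));  // 1
    }
}
```

Which of the two withdrawals succeeds still depends on scheduling, but the balance can never go negative.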

Because concurrent systems rely on the use of shared resources (including communication media), concurrent computing in general requires some form of arbiter somewhere in the implementation to mediate access to these resources.

Unfortunately, while many solutions exist to the problem of a conflict over one resource, many of those "solutions" have their own concurrency problems such as deadlock when more than one resource is involved.
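The classic multi-resource failure is circular wait: each of two threads holds one lock and blocks forever waiting for the other's. One standard remedy, sketched below in Java (my example, not the article's), is to acquire locks in a single globally agreed order, which makes the circular wait impossible.

```java
import java.util.concurrent.locks.ReentrantLock;

// Deadlock arises when two threads take locks a and b in opposite orders.
// Here both threads agree on one order (a before b), so no cycle can form
// and both updates complete.
public class LockOrdering {
    private static final ReentrantLock a = new ReentrantLock();
    private static final ReentrantLock b = new ReentrantLock();
    private static int shared = 0;

    static void update(int delta) {
        a.lock();              // every thread takes the locks in the same order
        b.lock();
        try {
            shared += delta;   // work requiring both resources
        } finally {
            b.unlock();        // release in reverse order of acquisition
            a.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> update(1));
        Thread t2 = new Thread(() -> update(2));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(shared);  // 3
    }
}
```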

Advantages

  • Increased application throughput - parallel execution of a concurrent program allows the number of tasks completed in a given time period to increase.
  • High responsiveness for input/output - input/output-intensive applications mostly wait for input or output operations to complete. Concurrent programming allows the time that would be spent waiting to be used for another task.
  • More appropriate program structure - some problems and problem domains are well-suited to representation as concurrent tasks or processes.

Concurrent programming languages

Concurrent programming languages are programming languages that use language constructs for concurrency. These constructs may involve multi-threading, support for distributed computing, message passing, shared resources (including shared memory) or futures (known also as promises).
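Futures (promises), mentioned above, let a caller start a computation and retrieve its result later. A small Java sketch using `CompletableFuture` (my choice of API, not one the article names):

```java
import java.util.concurrent.CompletableFuture;

// A future represents a value still being computed on another thread;
// the caller can do other work and block only when the result is needed.
public class FutureDemo {
    public static void main(String[] args) {
        CompletableFuture<Integer> future =
            CompletableFuture.supplyAsync(() -> {
                int sum = 0;
                for (int i = 1; i <= 100; i++) sum += i;  // simulated work
                return sum;
            });

        // ... the caller could proceed with unrelated tasks here ...

        System.out.println(future.join());  // 5050: blocks until the result is ready
    }
}
```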

Today, the most commonly used programming languages that have specific constructs for concurrency are Java and C#. Both of these languages fundamentally use a shared-memory concurrency model, with locking provided by monitors (although message-passing models can be, and have been, implemented on top of the underlying shared-memory model). Of the languages that use a message-passing concurrency model, Erlang is probably the most widely used in industry at present.

Many concurrent programming languages have been developed more as research languages (e.g. Pict) rather than as languages for production use. However, languages such as Erlang, Limbo, and occam have seen industrial use at various times in the last 20 years. Languages in which concurrency plays an important role include:

Many other languages provide support for concurrency in the form of libraries, at a level roughly comparable with the list above.

Models of concurrency

There are several models of concurrent computing, which can be used to understand and analyze concurrent systems. These models include:

References

  1. ^ a b Ben-Ari, Mordechai (2006). Principles of Concurrent and Distributed Programming (2nd ed.). Addison-Wesley. ISBN 978-0-321-31283-9. 
  2. ^ Hewitt, Carl. ActorScript™: Implementing massive local and nonlocal concurrency. http://knol.google.com/k/carl-hewitt-httpcarlhewittinfo/actorscripttm/pcxtp4rx7g1t/18#. 
