Transactional memory

From Wikipedia, the free encyclopedia

Transactional memory attempts to simplify parallel programming by allowing a group of load and store instructions to execute in an atomic way. It is a concurrency control mechanism analogous to database transactions for controlling access to shared memory in concurrent computing.


Hardware vs. software transactional implementations

Hardware transactional memory systems typically modify the processor, cache, and bus protocol to support transactions.[1][2][3][4][5]

Software transactional memory provides transactional memory semantics in a software runtime library and requires minimal hardware support (typically an atomic compare and swap operation, or equivalent).

Load-link/store-conditional (LL/SC), offered by many RISC processors, can be viewed as the most basic transactional memory support. However, LL/SC usually operates only on data the size of a native machine word, so it directly supports only single-word transactions.
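To illustrate what such a single-word primitive provides, here is a sketch of the classic compare-and-swap retry loop. This is a simulation, not real hardware support: the `AtomicCell` class and its internal lock merely stand in for an atomic CAS instruction, and all names are invented for this example.

```python
import threading

class AtomicCell:
    """Simulates a single-word cell with compare-and-swap.

    Real hardware provides CAS (or LL/SC) as an atomic instruction;
    here a lock stands in for that atomicity.
    """
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()  # stands in for hardware atomicity

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        # Atomically: if the value equals `expected`, replace it with
        # `new` and report success; otherwise leave it and report failure.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def atomic_increment(cell):
    # Classic CAS retry loop: re-read and retry until no other
    # thread raced us between the load and the swap.
    while True:
        old = cell.load()
        if cell.compare_and_swap(old, old + 1):
            return
```

Software transactional memory runtimes build multi-word transactions out of exactly this kind of retry loop over a single-word primitive.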

Motivation

The motivation for transactional memory lies in the programming interface of parallel programs. The goal of a transactional memory system is to transparently support defining regions of code as transactions, that is, regions with atomicity, consistency and isolation requirements. Transactional memory allows writing code like the following example:

def transfer_money(from_account, to_account, amount):
    with transaction:
        from_account.balance -= amount
        to_account.balance += amount

In this code, the block introduced by "transaction" carries the atomicity, consistency and isolation guarantees, and the underlying transactional memory implementation must provide those guarantees transparently.
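As a sketch of how a software runtime might provide those guarantees, the following minimal software transactional memory keeps a version number per variable, buffers writes, validates all reads at commit time, and retries on conflict. All names here (`TVar`, `Transaction`, `atomically`) are invented for this illustration, and the single global commit lock is a deliberate simplification; production STM systems use fine-grained, per-location versioned locks instead.

```python
import threading

_commit_lock = threading.Lock()  # serializes commits (a simplification)

class Conflict(Exception):
    """Raised when a transaction's reads were invalidated by another commit."""

class TVar:
    """A transactional variable: a value paired with a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

class Transaction:
    def __init__(self):
        self.reads = {}   # TVar -> version observed at read time
        self.writes = {}  # TVar -> tentative new value (buffered)

    def read(self, tvar):
        if tvar in self.writes:           # read-your-own-writes
            return self.writes[tvar]
        self.reads[tvar] = tvar.version   # record version, then read value
        return tvar.value

    def write(self, tvar, value):
        self.writes[tvar] = value         # not visible until commit

    def commit(self):
        with _commit_lock:
            # Validation: every variable we read must be unchanged.
            for tvar, version in self.reads.items():
                if tvar.version != version:
                    raise Conflict
            # Publish all buffered writes while holding the commit lock.
            for tvar, value in self.writes.items():
                tvar.value = value
                tvar.version += 1

def atomically(body):
    """Run body(txn) as a transaction, retrying until it commits."""
    while True:
        txn = Transaction()
        try:
            body(txn)
            txn.commit()
            return
        except Conflict:
            continue  # another transaction interfered; retry from scratch

# Usage: the money transfer from the example above, written against
# this explicit read/write API (a real system would instead intercept
# the loads and stores inside the transaction block).
def transfer_money(from_account, to_account, amount):
    def body(txn):
        txn.write(from_account, txn.read(from_account) - amount)
        txn.write(to_account, txn.read(to_account) + amount)
    atomically(body)
```

Note the division of labor: reads run without locks and may observe stale data, but commit-time validation detects any interference and the retry loop re-executes the transaction, which is what makes the guarantees transparent to the caller.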

Implementations

References

  • Larus, J.R. and Rajwar, R. Transactional Memory, Morgan & Claypool, 2006.
