InfiniBand

[Image: The panel of an InfiniBand switch]

InfiniBand is a switched fabric communications link primarily used in high-performance computing. Its features include quality of service and failover, and it is designed to be scalable. The InfiniBand architecture specification defines a connection between processor nodes and high performance I/O nodes such as storage devices. It is a superset of the Virtual Interface Architecture.


Description

Effective theoretical throughput in different configurations

         Single (SDR)   Double (DDR)   Quad (QDR)
1X       2 Gbit/s       4 Gbit/s       8 Gbit/s
4X       8 Gbit/s       16 Gbit/s      32 Gbit/s
12X      24 Gbit/s      48 Gbit/s      96 Gbit/s

Like Fibre Channel, PCI Express, Serial ATA, and many other modern interconnects, InfiniBand is a point-to-point bidirectional serial link intended for the connection of processors with high speed peripherals such as disks. It supports several signalling rates and, as with PCI Express, links can be bonded together for additional bandwidth.

Signalling rate

The serial connection's signalling rate is 2.5 gigabits per second (Gbit/s) in each direction per connection. InfiniBand also supports double (DDR) and quad (QDR) data rates, for 5 Gbit/s or 10 Gbit/s respectively, at the same data-clock rate.

Links use 8B/10B encoding, so every 10 bits sent carry 8 bits of data, and the useful data transmission rate is four-fifths of the raw rate. Thus the single, double, and quad data rates carry 2, 4, or 8 Gbit/s of useful data per 1X link respectively.

Links can be aggregated in units of 4 or 12, called 4X or 12X. A quad-rate 12X link therefore carries 120 Gbit/s raw, or 96 Gbit/s of useful data. Most systems today use either a 4X 2.5 Gbit/s (SDR) or 5 Gbit/s (DDR) connection. InfiniBand QDR was demonstrated during 2007, with production systems expected during 2008. Larger systems with 12X links are typically used for cluster and supercomputer interconnects and for inter-switch connections.
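
As a cross-check on the table above, the following sketch (plain C, not part of any InfiniBand software; the helper name effective_gbps is invented for illustration) recomputes the useful data rate as the 2.5 Gbit/s lane rate multiplied by the lane count, the data-rate multiplier, and the 8/10 coding efficiency.

    /* Illustrative only: reproduces the throughput arithmetic above --
     * raw lane rate x lane count x data-rate multiplier x 8/10 (8B/10B). */
    #include <stdio.h>

    static double effective_gbps(int lanes, int rate_multiplier)
    {
        const double raw_lane_gbps = 2.5;             /* SDR signalling rate per lane */
        const double coding_efficiency = 8.0 / 10.0;  /* 8 data bits per 10 line bits */
        return raw_lane_gbps * lanes * rate_multiplier * coding_efficiency;
    }

    int main(void)
    {
        int lanes[] = {1, 4, 12};                     /* 1X, 4X, 12X link widths */
        int rates[] = {1, 2, 4};                      /* SDR, DDR, QDR multipliers */
        const char *rate_names[] = {"SDR", "DDR", "QDR"};

        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                printf("%2dX %s: %5.1f Gbit/s useful data\n",
                       lanes[i], rate_names[j],
                       effective_gbps(lanes[i], rates[j]));
        return 0;                                     /* e.g. 12X QDR -> 96.0 Gbit/s */
    }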

Latency

Single data rate (SDR) switch chips have a latency of 200 nanoseconds, and DDR switch chips have a latency of 140 nanoseconds. End-to-end MPI latencies range from 1.07 microseconds (Mellanox ConnectX HCAs) to 1.29 microseconds (QLogic InfiniPath HTX HCAs) to 2.6 microseconds (Mellanox InfiniHost III HCAs).[citation needed] Various InfiniBand host channel adapters (HCAs) exist on the market today, each with different latency and bandwidth characteristics. InfiniBand also provides RDMA capabilities for low CPU overhead; the latency for RDMA operations is less than 1 microsecond (Mellanox ConnectX HCAs).

Topology

InfiniBand uses a switched fabric topology, as opposed to a hierarchical switched network like Ethernet.

As in the channel model used in most mainframe computers, all transmissions begin or end at a channel adapter. Each processor contains a host channel adapter (HCA) and each peripheral has a target channel adapter (TCA). These adapters can also exchange information for security or quality of service.

Messages

Data is transmitted in packets of up to 4 kB that are taken together to form a message. A message can be a remote direct memory access (RDMA) read from or write to a remote node, a channel send or receive, a multicast transmission, or an atomic operation.
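
As a rough illustration of the packet/message relationship (the constant IB_MAX_PAYLOAD and the helper packets_for_message below are invented for this sketch and are not part of the InfiniBand specification), a message larger than 4 kB is simply carried in several packets and reassembled in order at the receiver:

    /* Hypothetical illustration: how many packets a message of a given size
     * needs if each packet carries at most 4 kB of payload. */
    #include <stddef.h>
    #include <stdio.h>

    #define IB_MAX_PAYLOAD 4096   /* assumed 4 kB maximum payload per packet */

    static size_t packets_for_message(size_t message_bytes)
    {
        /* Round up: a 10,000-byte message needs 3 packets (4096 + 4096 + 1808). */
        return (message_bytes + IB_MAX_PAYLOAD - 1) / IB_MAX_PAYLOAD;
    }

    int main(void)
    {
        printf("10000-byte message -> %zu packets\n", packets_for_message(10000));
        return 0;
    }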

Programming

One caveat is that InfiniBand has no standard programming interface. The standard only lists a set of "verbs", functions that must exist, and leaves their syntax to the vendors. The most common syntax to date has been the one developed by the OpenFabrics Alliance, which has been adopted by most InfiniBand vendors for both Linux and Windows.
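
As an illustration of what these verbs look like in practice, the following is a minimal sketch against the OpenFabrics verbs library for Linux (libibverbs). It assumes an HCA and driver are installed, abbreviates error handling, and only performs local setup (device open, protection domain, memory registration, completion queue) rather than actual communication; a real application would go on to create queue pairs and exchange connection information out of band.

    /* Minimal local-setup sketch using the OpenFabrics verbs API (libibverbs).
     * Compile with: cc example.c -libverbs */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devices = ibv_get_device_list(&num_devices);
        if (!devices || num_devices == 0) {
            fprintf(stderr, "no InfiniBand devices found\n");
            return 1;
        }
        printf("using device %s\n", ibv_get_device_name(devices[0]));

        struct ibv_context *ctx = ibv_open_device(devices[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);           /* protection domain */

        /* Register a buffer so the HCA can DMA into it without CPU involvement. */
        char *buf = malloc(4096);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);

        /* Completion queue: finished work requests are reaped here. */
        struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

        printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
               mr->length, (unsigned) mr->lkey, (unsigned) mr->rkey);

        ibv_destroy_cq(cq);
        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devices);
        free(buf);
        return 0;
    }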

History

InfiniBand is the result of the 1999 merger of two competing designs: Future I/O, developed by Compaq, IBM, and Hewlett-Packard, and Next Generation I/O (NGIO), developed by Intel, Microsoft, and Sun. Compaq's contribution had its roots in Tandem's ServerNet. For a short time before the group settled on a new name, InfiniBand was called System I/O.[1]

InfiniBand was originally envisioned as a comprehensive "system area network" that would connect CPUs and provide all high-speed I/O for "back-office" applications. In this role it would potentially replace just about every datacenter I/O standard, including PCI, Fibre Channel, and various networks such as Ethernet. Instead, all of the CPUs and peripherals would be connected into a single pan-datacenter switched InfiniBand fabric. This vision offered a number of advantages in addition to greater speed, not the least of which was that the I/O workload would be largely lifted from the computers and storage devices. In theory, this should make the construction of clusters much easier, and potentially less expensive, because more devices could be shared and easily moved around as workloads shifted. Proponents of a less comprehensive vision saw InfiniBand as a pervasive, low-latency, high-bandwidth, low-overhead interconnect for commercial datacenters, albeit one that might perhaps only connect servers and storage to each other, while leaving more local connections to other protocols and standards such as PCI.[citation needed]

So far, InfiniBand has become the de facto interconnect of choice for high-performance computing, and its adoption, as seen in the TOP500 supercomputers list, is growing faster than that of Ethernet.[citation needed] Note, however, that TOP500 uses Linpack for benchmarking, which, as a neatly parallel computing task, tends to be fairly easy on the interconnect; InfiniBand should not be confused with the custom-built interconnects of vector supercomputers. For example, the NEC SX-9 provides 128 GB/s of low-latency interconnect bandwidth between each computing node, compared to the 96 Gbit/s of an InfiniBand 12X quad data rate link.[original research?] Enterprise datacenters have seen more limited adoption. Today InfiniBand is used mostly for performance-focused computer cluster applications, and there are some efforts to adapt it as a "standard" interconnect between low-cost machines as well. A number of the TOP500 supercomputers have used InfiniBand, including the reigning fastest supercomputer, the IBM Roadrunner. In another example of InfiniBand use within high-performance computing, the Cray XD1 uses built-in Mellanox InfiniBand switches to create a fabric between HyperTransport-connected Opteron-based compute nodes.[citation needed]

SGI, among others, has also released storage products using LSI hardware with InfiniBand "target adapters". These products essentially compete with architectures such as Fibre Channel, iSCSI, and other traditional storage area networks. Such target adapter-based disks become part of the fabric of a given network, in a fashion similar to DEC VMS clustering. The advantage of this configuration is lower latency and higher availability to nodes on the network, because of the fabric nature of the network.

The CX4 cabling used by InfiniBand is also commonly used to connect Serial Attached SCSI (SAS) HBAs to external SAS disk arrays.[citation needed]
