HyperTransport
From Wikipedia, the free encyclopedia
HyperTransport (HT), formerly known as Lightning Data Transport (LDT), is a bidirectional serial/parallel high-bandwidth, low-latency point-to-point link that was introduced on April 2, 2001.[1] The HyperTransport Consortium is in charge of promoting and developing HyperTransport technology. The technology is used by AMD and Transmeta in x86 processors; PMC-Sierra, Broadcom, and Raza Microelectronics in MIPS microprocessors; AMD, NVIDIA, VIA, and SiS in PC chipsets; HP, Sun Microsystems, IBM, and Flextronics in servers; Cray, Newisys, QLogic, and XtremeData, Inc. in high-performance computing; and Cisco Systems in routers.
Overview
Multi-Link High-Speed interconnect
HyperTransport comes in four speed versions — 1.x, 2.0, 3.0, and 3.1 — which run from 200 MHz to 3.2 GHz. It is also a DDR or "Double Data Rate" connection, meaning it sends data on both the rising and falling edges of the clock signal. This allows for a maximum data rate of 6400 MT/s when running at 3.2 GHz. The operating frequency is auto-negotiated.
HyperTransport supports an auto-negotiated link width, ranging from 2 to 32 bits per direction. The full-width, full-speed, 32-bit link has a transfer rate of 25.6 GB/s (3.2 GHz * 2 transfers per clock cycle * 32 bits * 1 byte / 8 bits) per direction, or 51.2 GB/s aggregate bandwidth per link, making it faster than any existing bus standard for PC workstations and servers (such as the Intel-sponsored PCI Express) as well as faster than most bus standards for high-performance computing and networking. Links of various widths can be mixed in a single system (for example, one 16-bit link to another CPU and one 8-bit link to a peripheral device), which allows a wider interconnect between CPUs and a lower-bandwidth interconnect to peripherals as appropriate. HyperTransport also supports link splitting, where a single 16-bit link can be divided into two 8-bit links. The technology typically has lower latency than other solutions due to its lower overhead.
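As a worked illustration of the bandwidth arithmetic above, the following minimal Python sketch (the function name and structure are illustrative, not part of any HyperTransport API) computes per-direction and aggregate bandwidth from the clock rate and link width:

```python
def ht_bandwidth_gb_s(clock_ghz, width_bits):
    """Compute HyperTransport bandwidth in GB/s.

    DDR signaling transfers data on both clock edges, so the transfer
    rate is twice the clock; each transfer moves width_bits bits, and
    8 bits make one byte.
    """
    per_direction = clock_ghz * 2 * width_bits / 8  # GB/s, one direction
    aggregate = 2 * per_direction                   # both directions
    return per_direction, aggregate

# Full-width, full-speed HT 3.1 link: 3.2 GHz clock, 32-bit width
per_dir, agg = ht_bandwidth_gb_s(3.2, 32)
print(per_dir, agg)  # 25.6 GB/s per direction, 51.2 GB/s aggregate
```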
Electrically, HyperTransport is similar to Low Voltage Differential Signaling (LVDS) operating at 1.2 V.[2] HyperTransport 2.0 added post-cursor transmitter deemphasis. HyperTransport 3.0 added scrambling and receiver phase alignment as well as optional transmitter pre-cursor deemphasis.
Packet Oriented
HyperTransport is packet-based, where each packet consists of a set of 32-bit words, regardless of the physical width of the link. The first word in a packet always contains a command field. Many packets contain a 40-bit address. An additional 32-bit control packet is prepended when 64-bit addressing is required. The data payload is sent after the control packet. Transfers are always padded to a multiple of 32 bits, regardless of their actual length.
HyperTransport packets enter the interconnect in segments known as bit times. The number of bit times required depends on the link width. HyperTransport also supports system management messaging, signaling interrupts, issuing probes to adjacent devices or processors, I/O transactions, and general data transactions. Two kinds of write commands are supported: posted and non-posted. Posted writes do not require a response from the target; they are usually used for high-bandwidth traffic such as uniform memory access or direct memory access transfers. Non-posted writes require a response from the receiver in the form of a "target done" message. Reads also cause the receiver to generate a read response. HyperTransport supports the PCI consumer/producer ordering model.
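To make the bit-time arithmetic concrete, here is a small sketch under the rules stated above (packets are built from 32-bit words and padded to a 32-bit multiple); the helper name and the example packet size are illustrative only:

```python
def bit_times(packet_bits, link_width_bits):
    """Bit times needed to move a packet across a link of a given width.

    Packets are padded to a multiple of 32 bits; each bit time carries
    link_width_bits bits.
    """
    padded = -(-packet_bits // 32) * 32  # round up to a multiple of 32
    return padded // link_width_bits

# A two-word (64-bit) request packet, e.g. command field plus 40-bit address:
for width in (2, 8, 16, 32):
    print(width, bit_times(64, width))  # 32, 8, 4, and 2 bit times
```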
Power Managed
HyperTransport also facilitates power management as it is compliant with the Advanced Configuration and Power Interface specification. This means that changes in processor sleep states (C states) can signal changes in device states (D states), e.g. powering off disks when the CPU goes to sleep. HyperTransport 3.0 added further capabilities to allow a centralized power management controller to implement power management policies.
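As a purely hypothetical sketch of the kind of policy described (the mapping below is invented for illustration and is not taken from the ACPI or HyperTransport specifications), a power-management controller might tie device power states to processor sleep states like this:

```python
# Hypothetical policy table: deeper CPU sleep states (C-states) trigger
# deeper device power states (D-states). D0 = fully on, D3 = off.
C_STATE_TO_D_STATE = {
    "C0": "D0",  # CPU executing: devices fully powered
    "C1": "D0",  # light halt: keep devices available
    "C2": "D1",  # deeper sleep: devices enter a light low-power state
    "C3": "D3",  # deep sleep: e.g. spin down and power off disks
}

def device_state_for(c_state):
    # Default to fully-on if the sleep state is unknown.
    return C_STATE_TO_D_STATE.get(c_state, "D0")

print(device_state_for("C3"))  # D3
```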
Applications for HyperTransport
Front-Side Bus Replacement
The primary use for HyperTransport is to replace the front-side bus, which is currently different for every type of machine. For instance, a Pentium cannot be plugged into a PCI Express bus. In order to expand the system, the proprietary front-side bus must connect through adapters for the various standard buses, like AGP or PCI Express. These are typically included in the respective controller functions, namely the northbridge and southbridge.
In contrast, HyperTransport is an open specification, published by a multi-company consortium. A single HyperTransport adapter chip will work with a wide spectrum of HyperTransport-enabled microprocessors. For example, the Broadcom HT-1000 and HT-2000 server controller devices work with many different HyperTransport-enabled microprocessors.
AMD uses HyperTransport in place of a front-side bus in their Opteron, Athlon 64, Sempron 64, Turion 64, and Phenom families of microprocessors.
Multiprocessor interconnect
Another use for HyperTransport is as an interconnect for NUMA multiprocessor computers. AMD uses HyperTransport with a proprietary cache coherency extension as part of their Direct Connect Architecture in their Opteron and Athlon 64 FX (Dual Socket Direct Connect (DSDC) Architecture) line of processors. The HORUS interconnect from Newisys extends this concept to larger clusters. The Aqua device from 3Leaf Systems virtualizes and interconnects CPUs, memory, and I/O.
Router or Switch Bus Replacement
HyperTransport can also be used as a bus in routers and switches. Routers and switches have multiple network interfaces, and data must be forwarded between these ports as quickly as possible; for example, a four-port 100 Mbit/s Ethernet router needs a maximum of 800 Mbit/s of internal bandwidth (100 Mbit/s * 4 ports * 2 directions). HyperTransport greatly exceeds the bandwidth needed for this application.
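The internal-bandwidth figure above follows directly from the port count, the per-port line rate, and the two traffic directions, as this minimal sketch (with names chosen here for illustration) shows:

```python
def internal_bandwidth_mbit_s(ports, port_speed_mbit_s):
    """Worst-case internal bandwidth a router's fabric must sustain:
    every port receiving and transmitting at line rate simultaneously."""
    return ports * port_speed_mbit_s * 2  # x2 for both directions

print(internal_bandwidth_mbit_s(4, 100))  # 800 Mbit/s
```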
Co-processor interconnect
The issue of latency and bandwidth between CPUs and co-processors has usually been the major stumbling block to their practical implementation. Recently, co-processors such as FPGAs have appeared which can access the HyperTransport bus and become first-class citizens on the motherboard. Current-generation FPGAs from both of the main manufacturers (Altera and Xilinx) directly support the HyperTransport interface and have IP cores available. Companies such as XtremeData, Inc. and DRC take these FPGAs (Xilinx in DRC's case) and create modules that allow FPGAs to be plugged directly into the Opteron socket.
AMD started an initiative named Torrenza on September 21, 2006 to further promote the use of HyperTransport for plug-in cards and coprocessors. This initiative opened their "Socket F" to plug-in boards such as those from XtremeData and DRC.
Add-on card connector (HTX and HTX3)
A connector specification that allows a slot-based peripheral to connect directly to a microprocessor using a HyperTransport interface was released by the HyperTransport Consortium. It is known as HyperTransport eXpansion (HTX). Using a rotated instance of the same mechanical connector as a 16-lane PCI Express slot (plus an x1 connector for power pins), HTX allows plug-in cards to be developed which support direct access to a CPU and DMA access to the system RAM. The initial card for this slot was the QLogic InfiniPath InfiniBand HCA. IBM and HP, among others, have released HTX-compliant systems.
The original HTX standard is limited to 16 bits and 800 MHz.[3]
In August 2008, the HyperTransport Consortium released HTX3, which extends the clock rate of HTX to 2.6 GHz (5.2 GT/s) and retains backwards compatibility.[4]
Testing
The "DUT" test connector[5] is defined to enable standized functional test system interconnection.
Implementations
- AMD AMD64 and Direct Connect Architecture-based CPUs
- SiByte MIPS CPUs from Broadcom
- PMC-Sierra RM9000X2 MIPS CPU
- ht_tunnel from the OpenCores project (MPL license)
- ATI Radeon Xpress 200 for AMD Processor
- NVIDIA nForce chipsets
  - nForce Professional MCPs (Media and Communication Processor)
  - nForce 4 series
  - nForce 500 series
  - nForce 600 series
  - nForce 700 series
- ServerWorks (now Broadcom) HT-2000 HyperTransport System I/O Controller
- The IBM CPC925 and CPC945 PowerPC 970 northbridges
- Raza Thread Processors
- Several open source cores from the HyperTransport Center of Excellence
HyperTransport frequency specifications
HyperTransport Version | Year | Max. HT Frequency | Max. Link Width | Max. Aggregate Bandwidth (bi-directional) | Max. Bandwidth at 16-Bit (unidirectional) | Max. Bandwidth at 32-Bit (unidirectional)* |
---|---|---|---|---|---|---|
1.0 | 2001 | 800 MHz | 32 Bit | 12.8 GB/s | 3.2 GB/s | 6.4 GB/s |
1.1 | 2002 | 800 MHz | 32 Bit | 12.8 GB/s | 3.2 GB/s | 6.4 GB/s |
2.0 | 2004 | 1.4 GHz | 32 Bit | 22.4 GB/s | 5.6 GB/s | 11.2 GB/s |
3.0 | 2006 | 2.6 GHz | 32 Bit | 41.6 GB/s | 10.4 GB/s | 20.8 GB/s |
3.1 | 2008 | 3.2 GHz | 32 Bit | 51.2 GB/s | 12.8 GB/s | 25.6 GB/s |
* AMD Opteron, Athlon 64, and later processors use 16-bit links. Common speeds for these processor interconnects are 800 MHz to 1 GHz (older and multiprocessor systems) and 2 to 2.6 GHz (newer uniprocessors on AM2+ links). While HyperTransport itself is capable of 32-bit links, that width is not currently used by any AMD processor.
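The bandwidth columns in the table above all follow from the same DDR formula used in the overview; the sketch below (illustrative names, clock rates taken from the table) reproduces them:

```python
# (version, year, max clock in GHz); the max link width is 32 bits
# for every version listed above.
HT_VERSIONS = [
    ("1.0", 2001, 0.8),
    ("1.1", 2002, 0.8),
    ("2.0", 2004, 1.4),
    ("3.0", 2006, 2.6),
    ("3.1", 2008, 3.2),
]

for version, year, clock_ghz in HT_VERSIONS:
    uni16 = clock_ghz * 2 * 16 / 8  # GB/s, 16-bit link, one direction
    uni32 = 2 * uni16               # GB/s, 32-bit link, one direction
    aggregate = 2 * uni32           # GB/s, 32-bit link, both directions
    print(f"HT {version} ({year}): {aggregate:.1f} GB/s aggregate, "
          f"{uni16:.1f} GB/s at 16-bit, {uni32:.1f} GB/s at 32-bit")
```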
Name
There has been some marketing confusion between the use of HT referring to HyperTransport and the later use of HT referring to Intel's Hyper-Threading feature on some Pentium 4-based and newer Intel Core i7 microprocessors. Hyper-Threading is officially known as Hyper-Threading Technology (HTT) or HT Technology. Because of this potential for confusion, the HyperTransport Consortium always uses the written-out form: "HyperTransport".
See also
- Elastic interface bus
- Fibre Channel
- Front side bus
- Intel QuickPath Interconnect
- List of device bandwidths
- PCI Express
- RapidIO
References
- ^ HyperTransport Consortium (2001-04-02). "API NetWorks Accelerates Use of HyperTransport Technology With Launch of Industry's First HyperTransport Technology-to-PCI Bridge Chip". Press release. http://www.hypertransport.org/consortium/cons_pressrelease.cfm?RecordID=62.
- ^ http://www.hypertransport.org/docs/wp/HT_Overview.pdf
- ^ Emberson, David; Holden, Brian (2007-12-12). HTX specification (PDF). p. 4. http://www.hypertransport.org/docucontrol/HTC2004105-0040-0011.pdf. Retrieved 2008-01-30.
- ^ Emberson, David (2008-06-25). HTX3 specification (PDF). p. 4. http://www.hypertransport.org/docucontrol/HTC20080701-00030-0001.pdf. Retrieved 2008-08-17.
- ^ Holden, Brian; Meschke, Mike; Abu-Lebdeh, Ziad; D’Orfani, Renato. DUT Connector and Test Environment for HyperTransport (PDF). http://www.hypertransport.org/docs/spec/HTC20021219-0017-0001.pdf.