Blade server
From Wikipedia, the free encyclopedia
Blade servers are self-contained computer servers with a design optimized to minimize the use of physical space. Whereas a standard rack-mount server can function with (at least) a power cord and network cable, blade servers have many components removed for space, power and other considerations while still having all the functional components to be considered a computer. A blade enclosure, which can hold multiple blade servers, provides services such as power, cooling, networking, various interconnects and management, though different blade providers have differing principles around what to include in the blade itself (and sometimes in the enclosure altogether). Together, blades and the blade enclosure form the blade system.
In a standard server-rack configuration, 1RU (one rack unit, 19" wide and 1.75" tall) defines the minimum possible size of any equipment. The principal benefit and justification of blade computing is that it lifts this restriction on minimum size. The most common computer rack form-factor is 42U high, which limits the number of discrete computer devices directly mountable in a rack to 42. Blades do not have this limitation; as of 2009, densities of up to 128 discrete servers per rack are achievable with the current generation of blade systems.
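To illustrate the arithmetic behind these density figures, here is a minimal sketch in Python. The enclosure height and blades-per-enclosure values are illustrative assumptions, not figures from any particular vendor.

```python
# Rough rack-density comparison: discrete 1U servers vs. blades.
# Enclosure height and blades-per-enclosure are illustrative assumptions.

RACK_UNITS = 42                  # height of a common full-size rack

# Conventional rack-mount servers: at best one server per 1U slot.
servers_per_rack = RACK_UNITS // 1                             # 42

# Blades: assume a 10U enclosure holding 16 blades (hypothetical chassis).
ENCLOSURE_HEIGHT_U = 10
BLADES_PER_ENCLOSURE = 16

enclosures_per_rack = RACK_UNITS // ENCLOSURE_HEIGHT_U         # 4
blades_per_rack = enclosures_per_rack * BLADES_PER_ENCLOSURE   # 64

print(f"1U servers per rack: {servers_per_rack}")
print(f"Blades per rack:     {blades_per_rack}")
# Denser chassis designs (roughly 3 blades per rack unit overall) reach
# the ~128-servers-per-rack figure cited above.
```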
Server blade
In the purest definition of computing (a Turing machine, simplified here), a computer requires only:
- memory to read input commands and data
- a processor to perform commands manipulating that data
- backing storage to store the results
As of 2009 (contrast with the first general-purpose computer), these are implemented as electrical components requiring (DC) power, which produces heat. Other components such as hard drives, power supplies, storage and network connections, basic IO (such as keyboard, video, mouse and serial ports) etc. only support the basic computing function, yet add bulk, heat and complexity, not to mention moving parts that are more prone to failure than solid-state components.
In practice, systems require all these components if a computer is to perform real-world work. In the blade paradigm, most of these functions are removed from the blade computer, being either provided by the blade enclosure (e.g. DC power supply), virtualized (e.g. iSCSI storage, remote console over IP) or discarded entirely (e.g. serial ports). The blade itself becomes vastly simpler, hence smaller and cheaper to manufacture (in theory).
Blade enclosure
The enclosure (or chassis) performs many of the non-core computing services found in most computers. Non-blade systems require bulky, hot and space-inefficient components, and may duplicate these across many computers that may or may not perform at capacity. By locating these services in one place and sharing them between the blade computers, the overall utilization becomes more efficient. The specifics of which services are provided and how vary by vendor.
Power
Computers operate over a range of DC voltages, but utilities deliver power as AC, and at higher voltages than required within computers. Converting this current requires one or more power supply units (or PSUs). To ensure that the failure of one power source does not affect the operation of the computer, even entry-level servers have redundant power supplies, again adding to the bulk and heat output of the design.
The blade enclosure's power supply provides a single power source for all blades within the enclosure. This single power source may come as a power supply in the enclosure or as a dedicated separate PSU supplying DC to multiple enclosures.[1][2] This setup reduces the number of PSUs required to provide a resilient power supply.
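As a rough illustration of how consolidation reduces PSU count, the sketch below compares discrete servers with per-box redundant supplies against blades sharing an enclosure's PSU pool. The server, blade and PSU counts are assumptions chosen for illustration, not vendor data.

```python
# Compare power-supply counts needed for redundancy:
# discrete servers with dual PSUs each vs. blades sharing a PSU pool.
# All counts below are illustrative assumptions.

servers = 16
psus_per_server = 2                              # typical redundant pair per box
discrete_psu_count = servers * psus_per_server   # 32 PSUs in total

blades = 16                                      # same workload hosted on blades
enclosure_psus = 6                               # e.g. a shared N+1 or N+N pool

print(f"Discrete servers: {discrete_psu_count} PSUs")
print(f"Blade enclosure:  {enclosure_psus} shared PSUs for {blades} blades")
```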
The popularity of blade servers, and their own appetite for power, has led to an increase in the number of rack-mountable UPS units, including units targeted specifically towards blade servers (such as the BladeUPS).
Cooling
During operation, electrical and mechanical components produce heat, which a system must displace to ensure the proper functioning of its components. Most blade enclosures, like most computing systems, remove heat by using fans.
A frequently underestimated problem when designing high-performance computer systems involves the conflict between the amount of heat a system generates and the ability of its fans to remove that heat. The blade's shared power and cooling means that it does not generate as much heat as traditional servers. Newer blade-enclosure designs feature high-speed, adjustable fans and control logic that tune the cooling to the system's requirements, or even liquid-cooling systems.[3][4]
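The "control logic that tunes the cooling to the system's requirements" can be pictured as a simple feedback loop. The sketch below is a generic proportional controller with made-up temperature thresholds and fan-speed limits; it is not any vendor's firmware.

```python
# Minimal sketch of temperature-driven fan control for a cooling zone.
# Thresholds and speed limits are illustrative assumptions.

MIN_SPEED = 0.30     # idle fan speed, as a fraction of maximum RPM
MAX_SPEED = 1.00
TARGET_C = 25.0      # desired zone/inlet temperature
CRITICAL_C = 45.0    # temperature at which fans run at full speed

def fan_speed(zone_temp_c: float) -> float:
    """Return the fan speed (fraction of max RPM) for the measured temperature."""
    if zone_temp_c <= TARGET_C:
        return MIN_SPEED
    if zone_temp_c >= CRITICAL_C:
        return MAX_SPEED
    # Scale linearly between the two thresholds (proportional control).
    span = (zone_temp_c - TARGET_C) / (CRITICAL_C - TARGET_C)
    return MIN_SPEED + span * (MAX_SPEED - MIN_SPEED)

for temp in (22.0, 30.0, 40.0, 50.0):
    print(f"{temp:>5.1f} C -> {fan_speed(temp):.0%} fan speed")
```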
At the same time, the increased density of blade-server configurations can still result in higher overall demands for cooling when racks are populated to more than 50% of capacity. This is especially true with early-generation blades. In absolute terms, a fully populated rack of blade servers is likely to require more cooling capacity than a fully populated rack of standard 1U servers, because one can fit up to 128 blade servers in the same rack that will hold only 42 1U rack-mount servers.[5]
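For a rough sense of the cooling implication, the sketch below estimates total rack heat load under assumed per-server power draws. The wattages are illustrative assumptions, not measured figures; only the 42 and 128 server counts come from the text above.

```python
# Estimate rack heat load: 42 1U servers vs. 128 blades.
# Per-server wattages are illustrative assumptions.

WATTS_PER_1U_SERVER = 350
WATTS_PER_BLADE = 250          # blades often draw less each, but pack far denser

rack_1u_watts = 42 * WATTS_PER_1U_SERVER       # 14,700 W
rack_blade_watts = 128 * WATTS_PER_BLADE       # 32,000 W

BTU_PER_HOUR_PER_WATT = 3.412                  # conversion used for HVAC sizing
print(f"1U rack:    {rack_1u_watts / 1000:.1f} kW "
      f"(~{rack_1u_watts * BTU_PER_HOUR_PER_WATT:,.0f} BTU/h)")
print(f"Blade rack: {rack_blade_watts / 1000:.1f} kW "
      f"(~{rack_blade_watts * BTU_PER_HOUR_PER_WATT:,.0f} BTU/h)")
```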
Networking
Manufacturers of computers increasingly ship their products with high-speed, integrated network interfaces, and most are expandable to allow for the addition of connections that are faster, more resilient and run over different media (copper and fiber). These may require extra engineering effort in the design and manufacture of the blade, consume space both when installed and as capacity reserved for future installation (empty expansion slots), and hence result in more complexity. High-speed network topologies require expensive, high-speed integrated circuits and media, while most computers do not utilize all the bandwidth available.
The blade enclosure provides one or more network buses to which the blade will connect, and either presents these ports individually in a single location (versus one in each computer chassis), or aggregates them into fewer ports, reducing the cost of connecting the individual devices. Available ports may be present in the chassis itself, or in networking blades.[6][7]
Functionally, a blade chassis can have two types of networking modules: switching or pass-through.
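A quick way to see the difference between the two module types is to count external cables. The port and module counts in the sketch below are assumptions chosen for illustration.

```python
# Compare external cabling for a blade chassis using pass-through
# modules vs. switching modules. All counts are illustrative assumptions.

blades = 16
nics_per_blade = 2             # e.g. two Ethernet ports per blade

# Pass-through: every blade port is presented as its own external port.
pass_through_cables = blades * nics_per_blade             # 32 cables

# Switching: blade ports terminate on internal switch modules, which
# expose only a handful of aggregated uplinks to the wider network.
switch_modules = 2             # one per NIC, for redundancy
uplinks_per_switch = 4
switched_cables = switch_modules * uplinks_per_switch     # 8 uplinks

print(f"Pass-through module cables: {pass_through_cables}")
print(f"Switch module uplinks:      {switched_cables}")
```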
Storage
While computers typically use hard disks to store operating systems, applications and data, these are not necessarily required locally. Many storage connection methods (e.g. FireWire, SATA, eSATA, SCSI, DAS, Fibre Channel and iSCSI) are readily moved outside the server, though not all are used in enterprise-level installations. Implementing these connection interfaces within the computer presents similar challenges to the networking interfaces (indeed iSCSI runs over the network interface), and similarly these can be removed from the blade and presented individually or aggregated either on the chassis or through other blades.
The ability to boot the blade from a storage area network (SAN) allows for an entirely disk-free blade. This allows more board space to be devoted to extra memory or additional CPUs.
Other blades
Since blade enclosures provide a standard method for delivering basic services to computer devices, other types of devices can also utilize blade enclosures. Blades providing switching, routing, storage, SAN and fibre-channel access can slot into the enclosure to provide these services to all members of the enclosure.
Systems administrators can use storage blades where a requirement exists for additional local storage.[8][9]
Uses
Blade servers function well for specific purposes such as web hosting and cluster computing. Individual blades are typically hot-swappable. As users add more processing power, memory and I/O bandwidth to blade servers, they deal with larger and more diverse workloads.
Although blade server technology in theory allows for open, cross-vendor solutions, at the technology's stage of development as of 2009, users encounter fewer problems when using blades, racks and blade management tools all from the same vendor.
Eventual standardization of the technology might result in more choices for consumers; as of 2009, increasing numbers of third-party software vendors have started to enter this growing field.
Blade servers do not, however, provide the answer to every computing problem. One can view them as a form of productized server farm that borrows from mainframe packaging, cooling, and power-supply technology. Very large computing tasks may still require server farms of blade servers, and these farms, because of the blade servers' high power density, can suffer even more acutely from the HVAC problems that affect large conventional server farms.
History
In the 1970s, soon after the introduction of 8-bit microprocessors, developers placed complete microcomputers on cards and packaged them in standard 19-inch racks. This architecture was used in the industrial process-control industry as an alternative to minicomputer control systems. Early models stored programs in EPROM and were limited to a single function with a small real-time executive.
The VMEbus architecture (ca. 1981) defined a computer interface that included implementation of a board-level computer installed in a chassis backplane with multiple slots for pluggable boards to provide I/O, memory, or additional computing. The PCI Industrial Computer Manufacturers Group (PICMG) developed a chassis/blade structure, called CompactPCI, for the then-emerging Peripheral Component Interconnect (PCI) bus. Common among these chassis-based computers was the fact that the entire chassis was a single system. While a chassis might include multiple computing elements to provide the desired level of performance and redundancy, there was always one board in charge, one master board coordinating the operation of the entire system.
PICMG expanded the CompactPCI specification with the use of standard Ethernet connectivity between boards across the backplane. The PICMG 2.16 CompactPCI Packet Switching Backplane specification was adopted in September 2001 (PICMG specifications). This provided the first open architecture for a multi-server chassis. PICMG followed with the larger and more feature-rich AdvancedTCA specification, targeting the telecom industry's need for a high-availability, dense computing platform with extended product life (10+ years). While AdvancedTCA systems and boards typically sell for higher prices than blade servers, AdvancedTCA suppliers claim that lower operating expenses and total cost of ownership can make AdvancedTCA-based solutions a cost-effective alternative for many building blocks of the next-generation telecom network.
The name blade server appeared when a card included the processor, memory, I/O and non-volatile program storage (flash memory or small hard disk(s)). This allowed manufacturers to package a complete server, with its operating system and applications, on a single card/board/blade. These blades could then operate independently within a common chassis, doing the work of multiple separate server boxes more efficiently. In addition to the most obvious benefit of this packaging (less space consumption), additional efficiency benefits have become clear in power, cooling, management, and networking due to the pooling or sharing of common infrastructure to support the entire chassis, rather than providing each of these on a per-server-box basis.
Houston-based RLX Technologies, which consisted mostly of former Compaq Computer Corp. employees, shipped the first modern blade server in 2001.[10] (Hewlett-Packard (HP) acquired RLX in 2005.)
The research firm IDC identified the major players in the blade market as HP and IBM. Other companies selling blade servers include Sun, Egenera, Supermicro, Hitachi, Fujitsu-Siemens, Rackable (Hybrid Blade), Verari Systems, Dell[11] and Intel (by way of reselling the IBM Blade chassis).