Blue Gene

A Blue Gene/P supercomputer

Blue Gene is a computer architecture project designed to produce several supercomputers with operating speeds in the PFLOPS (petaFLOPS) range; current systems reach sustained speeds of nearly 500 TFLOPS (teraFLOPS). It is a cooperative project among IBM (particularly IBM Rochester, MN, and the Thomas J. Watson Research Center), the Lawrence Livermore National Laboratory, the United States Department of Energy (which partially funds the project), and academia. There are four Blue Gene projects in development: Blue Gene/L, Blue Gene/C, Blue Gene/P, and Blue Gene/Q.


Blue Gene/L

The first computer in the Blue Gene series, Blue Gene/L, developed through a partnership with Lawrence Livermore National Laboratory (LLNL), originally had a theoretical peak performance of 360 TFLOPS, and scored over 280 TFLOPS sustained on the Linpack benchmark. During an upgrade in 2007 the performance increased to 478 TFLOPS sustained and 596 TFLOPS peak.

The term Blue Gene/L sometimes refers to the computer installed at LLNL, and sometimes refers to the architecture of that computer. As of November 2006, there were 27 computers on the Top500 list using the Blue Gene/L architecture. All of these computers are listed as having an architecture of eServer Blue Gene Solution.

The block scheme of the Blue Gene/L ASIC including dual PowerPC 440 cores.

In December 1999, IBM announced a $100 million research initiative for a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding. The project has two main goals: to advance our understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. This project should enable biomolecular simulations that are orders of magnitude larger than current technology permits. Major areas of investigation include: how to use this novel platform to effectively meet its scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets at a reasonable cost, through novel machine architectures. The design is built largely around the previous QCDSP and QCDOC supercomputers.

In November 2001, Lawrence Livermore National Laboratory joined IBM as a research partner for Blue Gene.

On September 29, 2004, IBM announced that a Blue Gene/L prototype at IBM Rochester (Minnesota) had overtaken NEC's Earth Simulator as the fastest computer in the world, with a speed of 36.01 TFLOPS on the Linpack benchmark, beating Earth Simulator's 35.86 TFLOPS. This was achieved with an 8-cabinet system, with each cabinet holding 1,024 compute nodes. Upon doubling this configuration to 16 cabinets, the machine reached a speed of 70.72 TFLOPS by November 2004, taking first place in the Top500 list.

On March 24, 2005, the US Department of Energy announced that the Blue Gene/L installation at LLNL had broken its speed record, reaching 135.5 TFLOPS. This was achieved by doubling the number of cabinets to 32.

On the Top500 list,[1] Blue Gene/L installations across several sites worldwide took 3 of the top 10 positions and 13 of the top 64. Three racks of Blue Gene/L are housed at the San Diego Supercomputer Center and are available for academic research.

On October 27, 2005, LLNL and IBM announced that Blue Gene/L had once again broken its speed record, reaching 280.6 TFLOPS on Linpack upon reaching its final configuration of 65,536 compute nodes (i.e., 2^16 nodes) and an additional 1,024 I/O nodes in 64 air-cooled cabinets. The LLNL Blue Gene/L uses Lustre and GPFS to access a 900 TB filesystem.

Blue Gene/L was also the first supercomputer ever to run over 100 TFLOPS sustained on a real-world application, namely a three-dimensional molecular dynamics code (ddcMD) simulating solidification (nucleation and growth processes) of molten metal under high pressure and temperature conditions. This achievement won the 2005 Gordon Bell Prize.

On June 22, 2006, NNSA and IBM announced that Blue Gene/L had achieved 207.3 TFLOPS on a quantum chemistry application (Qbox).[2] On November 14, 2006, at Supercomputing 2006,[3] Blue Gene/L won the top prize in all classes of the HPC Challenge awards.[4] On April 27, 2007, a team from the IBM Almaden Research Lab and the University of Nevada ran an artificial neural network almost half as complex as the brain of a mouse for the equivalent of ten seconds.[5]

In November 2007, the LLNL Blue Gene/L remained at the number one spot as the world's fastest supercomputer. It had been upgraded since the last measurement, and was now almost three times as fast as the second fastest, a Blue Gene/P system.

On June 18, 2008, the new Top500 list marked the first time since Blue Gene had assumed the top position that a Blue Gene system was not the leader: it was displaced by IBM's Cell-based Roadrunner system, then the only system to surpass the one-petaflop mark. The Top500 list also announced that Blue Gene/P was the fastest supercomputer in the world for open science, and the third fastest overall.[6]

Major features

The Blue Gene/L supercomputer is unique in the following aspects:

  • Trading the speed of processors for lower power consumption.
  • Two processors per node with two working modes: co-processor mode (one user process per node, with computation and communication work shared between the two processors) and virtual node mode (two user processes per node)
  • System-on-a-chip design
  • A large number of nodes (scalable in increments of 1024 up to at least 65,536)
  • Three-dimensional torus interconnect with auxiliary networks for global communications, I/O, and management
  • Lightweight OS per node for minimum system overhead (computational noise)[7]

Architecture

One Blue Gene/L node card
A schematic overview of a Blue Gene/L supercomputer

Each Compute or I/O node is a single ASIC with associated DRAM memory chips. The ASIC integrates two 700 MHz PowerPC 440 embedded processors, each with a double-pipeline-double-precision Floating Point Unit (FPU), a cache sub-system with built-in DRAM controller and the logic to support multiple communication sub-systems. The dual FPUs give each Blue Gene/L node a theoretical peak performance of 5.6 GFLOPS (gigaFLOPS). Node CPUs are not cache coherent with one another.
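
If one assumes a fused multiply-add in each of the two FPU pipelines, each core can retire 4 floating-point operations per cycle, which accounts for the quoted figure: 2 cores × 4 FLOPs/cycle × 700 MHz = 5.6 GFLOPS.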

Compute nodes are packaged two per compute card, with 16 compute cards plus up to 2 I/O nodes per node board. There are 32 node boards per cabinet/rack.[8] Because all essential sub-systems are integrated on a single chip, each Compute or I/O node dissipates little power (about 17 watts, including DRAMs). This allows very aggressive packaging of up to 1,024 compute nodes plus additional I/O nodes in a standard 19" cabinet, within reasonable limits of electrical power supply and air cooling. The resulting performance metrics, in FLOPS per watt, FLOPS per m² of floor space, and FLOPS per unit cost, allow scaling up to very high performance.
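
These packaging figures are mutually consistent: 2 compute nodes per card × 16 cards per board × 32 boards per rack gives the 1,024 compute nodes per cabinet quoted above, and at roughly 17 watts per node a fully populated cabinet dissipates on the order of 17 kW for its compute nodes alone.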

Each Blue Gene/L node is attached to three parallel communications networks: a 3D toroidal network for peer-to-peer communication between compute nodes, a collective network for collective communication, and a global interrupt network for fast barriers. The I/O nodes, which run the Linux operating system, provide communication with the world via an Ethernet network. The I/O nodes also handle the filesystem operations on behalf of the compute nodes. Finally, a separate and private Ethernet network provides access to any node for configuration, booting and diagnostics.

Blue Gene/L compute nodes use a minimal operating system supporting a single user program. Only a subset of POSIX calls are supported, and only one process may be run at a time. Programmers need to implement green threads in order to simulate local concurrency.

Application development is usually performed in C, C++, or Fortran using MPI for communication. However, some scripting languages such as Ruby have been ported to the compute nodes.[9]
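
As an illustration of this development model, the sketch below is a minimal MPI program in C. It is generic MPI code, not taken from the Blue Gene software stack; MPI_Allreduce is used only because such collective operations map naturally onto the collective network described above.

    /* Minimal MPI sketch: every rank contributes a partial value and a
       collective reduction combines them across the whole job. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        double local, total;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        local = (double)rank;                      /* this node's partial result */
        MPI_Allreduce(&local, &total, 1, MPI_DOUBLE,
                      MPI_SUM, MPI_COMM_WORLD);    /* collective across all ranks */

        if (rank == 0)
            printf("sum over %d ranks = %f\n", size, total);

        MPI_Finalize();
        return 0;
    }

When such a program is launched on a Blue Gene/L partition (see below), the same binary runs on every compute node of the partition.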

To allow multiple programs to run concurrently, a Blue Gene/L system can be partitioned into electronically isolated sets of nodes. The number of nodes in a partition must be a positive integer power of 2, and must contain at least 2^5 = 32 nodes. The maximum partition is all nodes in the computer. To run a program on Blue Gene/L, a partition of the computer must first be reserved. The program is then run on all the nodes within the partition, and no other program may access nodes within the partition while it is in use. Upon completion, the partition nodes are released for future programs to use.
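
The partition-size rule is simple to check mechanically. The helper below is a hypothetical illustration only, not part of any Blue Gene system software: it accepts a requested node count when it is a power of two, at least 2^5 = 32, and no larger than the machine.

    /* Hypothetical check of the Blue Gene/L partition-size rules
       described above; not actual Blue Gene system software. */
    #include <stdbool.h>

    bool is_valid_partition(unsigned nodes, unsigned machine_size)
    {
        if (nodes < 32 || nodes > machine_size)
            return false;
        /* a power of two has exactly one bit set */
        return (nodes & (nodes - 1)) == 0;
    }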

With so many nodes, component failures are inevitable. The system is able to electrically isolate faulty hardware to allow the machine to continue to run.

Plan 9 support

A team composed of members from Bell Labs, IBM Research, Sandia National Laboratories, and Vita Nuova has completed a port of Plan 9 to Blue Gene/L. Plan 9 kernels run on both the compute nodes and the I/O nodes. The Ethernet, Torus, Collective Network, Barrier Network, and Management networks are all supported.[10][11]

Cyclops64 (Blue Gene/C)

Blue Gene/C (now renamed Cyclops64) is a sister project to Blue Gene/L. It is a massively parallel, supercomputer-on-a-chip cellular architecture. It was slated for release in early 2007 but has been delayed.

Blue Gene/P

A Blue Gene/P node card
A schematic overview of a Blue Gene/P supercomputer

On June 26, 2007, IBM unveiled Blue Gene/P, the second generation of the Blue Gene supercomputer. Designed to run continuously at 1 PFLOPS (petaFLOPS), it can be configured to reach speeds in excess of 3 PFLOPS. Furthermore, it is at least seven times more energy efficient than any other supercomputer, an efficiency achieved by using many small, low-power chips connected through five specialized networks. Four 850 MHz PowerPC 450 processors are integrated on each Blue Gene/P chip. The 1-PFLOPS Blue Gene/P configuration is a 294,912-processor, 72-rack system harnessed to a high-speed optical network. Blue Gene/P can be scaled to an 884,736-processor, 216-rack cluster to achieve 3-PFLOPS performance. A standard Blue Gene/P configuration houses 4,096 processors per rack.[12]
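
The per-rack figure follows from these totals: 294,912 processors across 72 racks is 4,096 processors (1,024 four-core chips) per rack, and the 3-PFLOPS configuration keeps the same ratio, since 884,736 / 216 = 4,096.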

On November 12, 2007, the first system, JUGENE, began operating at the Jülich Research Centre in Germany with 65,536 processors and a performance of 167 TFLOPS.[13] At launch it was the fastest supercomputer in Europe and the sixth fastest in the world. The first laboratory in the United States to receive a Blue Gene/P was Argonne National Laboratory; the first racks shipped in fall 2007. The first installment was a 111-TFLOPS system with approximately 32,000 processors, which became operational for the US research community in spring 2008.[14] The full Intrepid system is ranked #3 on the June 2008 Top500 list.[15]

In February 2009 it was announced that JUGENE will be upgraded to reach petaflops performance in June 2009, making it the first petascale supercomputer in Europe. The new configuration will include 294,912 processor cores, 144 TB of memory, and 6 PB of storage in 72 racks, and will incorporate a new water cooling system that will reduce cooling costs substantially.[16][17][18]

Web-scale platform

A team from IBM Research has ported Linux to the compute nodes and demonstrated generic Web 2.0 workloads running at scale on Blue Gene/P. Their paper, published in the ACM Operating Systems Review, describes a kernel driver that tunnels Ethernet over the tree network, providing all-to-all TCP/IP connectivity.[19] Running standard Linux software such as MySQL, their SPECjbb results rank among the highest on record.

Blue Gene/Q

The last known supercomputer design in the Blue Gene series, Blue Gene/Q, is aimed at reaching 20 PFLOPS in the 2011 time frame. It will continue to expand and enhance the Blue Gene/L and /P architectures, with higher clock frequencies and much improved performance per watt. Blue Gene/Q will have a similar number of nodes but many more cores per node.[20] Exactly how many cores per chip BG/Q will have is currently unclear, but 8 or even 16 is possible, with 1 GB of memory per core.

The archetypal Blue Gene/Q system, called Sequoia, will be installed at Lawrence Livermore National Laboratory in 2011 as part of the Advanced Simulation and Computing Program, running nuclear simulations and advanced scientific research. It will consist of 98,304 compute nodes comprising 1.6 million processor cores and 1.6 PB of memory in 96 racks covering an area of about 3,000 square feet, drawing 6 megawatts of power.[21]
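
These figures are consistent with the 16-core estimate above: 98,304 nodes × 16 cores per node = 1,572,864, i.e. the quoted 1.6 million cores, and at 1 GB per core the memory likewise comes to about 1.6 PB.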
