SIMD
|  | Single Instruction | Multiple Instruction |
|---|---|---|
| Single Data | SISD | MISD |
| Multiple Data | SIMD | MIMD |
In computing, SIMD (Single Instruction, Multiple Data; colloquially, "vector instructions") is a technique employed to achieve data level parallelism.
History
Supercomputers popular in the 1980s, such as the CRAY X-MP, were called "vector processors". The CRAY X-MP had up to four vector processors which could function independently or work together using a programming model called "autotasking", similar to OpenMP. These machines had very fast scalar processors as well as vector processors for long vector computations, for example adding two vectors of 100 numbers each. The CRAY X-MP vector processors were pipelined and had multiple functional units. Pipelining allowed a single instruction to move a long array of numbers sequentially into a vector register. Multiple registers, compute units, pipelining, and chaining allowed vector computers to compute Z = X*Y + V/W rapidly by streaming data into registers to hide memory latency, overlapping computations, and producing an element of the result (Z) at each clock cycle.[1]
The first era of SIMD machines was characterized by massively parallel supercomputers such as the Thinking Machines CM-1 and CM-2. These machines had many limited-functionality processors that worked in parallel. For example, each of the 64,000 processors in a Thinking Machines CM-2 would execute the same instruction at the same time, allowing, for instance, 64,000 multiplications on 64,000 pairs of numbers at once.
Supercomputing moved away from the SIMD approach when inexpensive scalar MIMD approaches based on commodity processors such as the Intel i860 XP[2] became more powerful, and interest in SIMD waned. Later, personal computers became common and powerful enough to support real-time gaming. This created a mass demand for a particular type of computing power, and microprocessor vendors turned to SIMD to meet the demand. The first widely deployed SIMD for gaming was Intel's MMX extensions to the x86 architecture. IBM and Motorola then added AltiVec to the POWER architecture, and there have been several extensions to the SIMD instruction sets for both architectures. All of these developments have been oriented toward support for real-time graphics, and are therefore oriented toward vectors of two, three, or four dimensions. When newer SIMD architectures need to be distinguished from older ones, they are described as "short-vector" architectures, since they operate on vectors far shorter than those of classic vector supercomputers. A modern supercomputer is almost always a cluster of MIMD machines, each of which implements (short-vector) SIMD instructions. A modern desktop computer is often a multiprocessor MIMD machine where each processor can execute short-vector SIMD instructions.
DSPs
A separate class of processors exists for this sort of task, commonly referred to as digital signal processors, or DSPs. The main difference between DSPs and other SIMD-capable CPUs is that DSPs are self-contained processors with their own (often difficult to use[citation needed]) instruction set, while SIMD extensions rely on the general-purpose portions of the CPU to handle the program details, with the SIMD instructions handling only the data manipulation. DSPs also tend to include instructions to handle specific types of data, such as sound or video, while SIMD systems are considerably more general-purpose. DSPs generally operate out of scratchpad RAM driven by DMA transfers initiated from the host system and are unable to access external memory.
Some DSPs include SIMD instruction sets. The inclusion of SIMD units in general-purpose processors has supplanted the use of DSP chips in computer systems, though DSPs continue to be used in embedded applications. A sliding scale exists: the Cell's SPUs and the Ageia Physics Processing Unit could be considered halfway between CPUs and DSPs, in that they are optimized for numeric tasks and operate in local store, but they can autonomously control their own transfers and are thus in effect true CPUs.
Advantages
An application that may take advantage of SIMD is one where the same value is being added to (or subtracted from) a large number of data points, a common operation in many multimedia applications. One example would be changing the brightness of an image. Each pixel of an image consists of three values for the brightness of the red, green and blue portions of the color. To change the brightness, the R, G and B values are read from memory, a value is added to (or subtracted from) them, and the resulting values are written back out to memory.
With a SIMD processor there are two improvements to this process. For one, the data is understood to be in blocks, and a number of values can be loaded all at once. Instead of a series of instructions saying "get this pixel, now get the next pixel", a SIMD processor will have a single instruction that effectively says "get lots of pixels" ("lots" being a number that varies from design to design). For a variety of reasons, this can take much less time than "getting" each pixel individually, as with a traditional CPU design.
Another advantage is that SIMD systems typically include only those instructions that can be applied to all of the data in one operation. In other words, if the SIMD system works by loading up eight data points at once, the add operation being applied to the data will happen to all eight values at the same time. Although the same is true for any superscalar processor design, the level of parallelism in a SIMD system is typically much higher.
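As a concrete illustration of the brightness example above, the following sketch uses Intel's SSE2 intrinsics in C (assuming a compiler that provides emmintrin.h); the function name brighten, the interleaved-RGB byte layout, and the requirement that the pixel count be a multiple of 16 are simplifying assumptions made for this example, not part of any particular API.

```c
/* Minimal sketch: raise the brightness of 8-bit pixel data with SSE2.
   _mm_adds_epu8 performs a saturating add on 16 unsigned bytes at once,
   so one instruction processes 16 colour components per iteration. */
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stddef.h>

/* 'pixels' holds interleaved R,G,B bytes; 'count' is assumed to be a
   multiple of 16 to keep the example short. */
void brighten(unsigned char *pixels, size_t count, unsigned char amount)
{
    __m128i delta = _mm_set1_epi8((char)amount);       /* 16 copies of 'amount' */
    for (size_t i = 0; i < count; i += 16) {
        __m128i p = _mm_loadu_si128((__m128i *)(pixels + i)); /* "get lots of pixels" */
        p = _mm_adds_epu8(p, delta);                   /* add to all 16 bytes at once */
        _mm_storeu_si128((__m128i *)(pixels + i), p);  /* write the block back */
    }
}
```

A scalar version would adjust one colour component per instruction; here each iteration adjusts 16 at once, which is exactly the "get lots of pixels" behaviour described above.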
Disadvantages
- Not all algorithms can be vectorized. For example, a flow-control-heavy task like code parsing wouldn't benefit from SIMD.
- Currently, implementing an algorithm with SIMD instructions usually requires human labor; most compilers don't generate SIMD instructions from a typical C program, for instance. Vectorization in compilers is an active area of computer science research. (Compare vector processing.)
- Programming with particular SIMD instruction sets can involve numerous low-level challenges.
- SSE2 has restrictions on data alignment; programmers used to the x86 architecture may not expect this (a sketch of this pitfall follows this list).
- Gathering data into SIMD registers and scattering it to the correct destination locations is tricky and can be inefficient.
- Some SIMD instruction sets lack specific instructions, such as rotations or three-operand addition.
- Instruction sets are architecture-specific: old processors and non-x86 processors lack SSE2 entirely, for instance, so programmers must provide non-vectorized implementations (or different vectorized implementations) for them. Similarly, the next-generation instruction sets from Intel and AMD will be incompatible with each other (see SSE5 and AVX).
- The early MMX instruction set shared a register file with the floating-point stack, which caused inefficiencies when mixing floating-point and MMX code. SSE2 corrects this.
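To make the alignment point above concrete, here is a minimal sketch in C using SSE2 intrinsics (buffer names and sizes are illustrative): the aligned load _mm_load_pd requires its address to sit on a 16-byte boundary, while _mm_loadu_pd tolerates any address at some cost in speed.

```c
/* Sketch of the SSE2 alignment pitfall: aligned loads/stores fault on
   addresses that are not 16-byte aligned. */
#include <emmintrin.h>

double unaligned_buf[5];  /* no guarantee beyond the alignment of double */
__attribute__((aligned(16))) double aligned_buf[4];  /* 16-byte aligned (GCC/Clang syntax) */

void demo(void)
{
    __m128d a = _mm_load_pd(aligned_buf);            /* fine: 16-byte-aligned address */
    /* __m128d b = _mm_load_pd(unaligned_buf + 1);      may fault: address is likely
                                                         not 16-byte aligned          */
    __m128d c = _mm_loadu_pd(unaligned_buf + 1);     /* unaligned load: works, slower */
    _mm_store_pd(aligned_buf, _mm_add_pd(a, c));     /* aligned store back */
}
```

This also illustrates the "human labor" point: the programmer, not the compiler, chooses the load form and arranges the data layout.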
Chronology
The first use of SIMD instructions was in vector supercomputers of the early 1970s such as the CDC Star-100 and the Texas Instruments ASC. Vector processing was especially popularized by Cray in the 1970s and 1980s.
Later machines used a much larger number of relatively simple processors in a massively parallel processing-style configuration. Some examples of this type of machine included:
- ILLIAC IV, circa 1974
- ICL Distributed Array Processor (DAP), circa 1974
- Burroughs Scientific Processor, circa 1976
- Geometric-Arithmetic Parallel Processor, from Martin Marietta, starting in 1981, continued at Lockheed Martin, then at Teranex and Silicon Optix
- Massively Parallel Processor (MPP), from NASA/Goddard Space Flight Center, circa 1983-1991
- Connection Machine, models 1 and 2 (CM-1 and CM-2), from Thinking Machines Corporation, circa 1985
- MasPar MP-1 and MP-2, circa 1987-1996
- Zephyr DTC computer from Wavetracer, circa 1991
- Xplor, from Pyxsys, Inc., circa 2001
There were many others from that era too.
Hardware
Small-scale (64 or 128 bits) SIMD has become popular on general-purpose CPUs, starting in 1989 with the introduction of the Digital Equipment Corporation VAX Vector instructions in the Rigel chip set[1], and continuing through 1997 and later with Motion Video Instructions (MVI) for Alpha. SIMD instructions can be found, to one degree or another, on most CPUs, including IBM's AltiVec and SPE for PowerPC, HP's PA-RISC Multimedia Acceleration eXtensions (MAX), Intel's MMX and iwMMXt, SSE, SSE2, SSE3 and SSSE3, AMD's 3DNow!, ARC's ARC Video subsystem, SPARC's VIS, Sun's MAJC, ARM's NEON technology, MIPS' MDMX (MaDMaX) and MIPS-3D. The SPU instruction set of the Cell processor, co-developed by IBM, Sony and Toshiba, is heavily SIMD-based.
Modern Graphics Processing Units are often wide SIMD implementations, capable of branches, loads, and stores on 128 or 256 bits at a time.
Future processors promise greater SIMD capability: Intel's AVX instructions will process 256 bits of data at once, and Intel's Larrabee GPU promises 512-bit SIMD registers on each of its cores (a wide vector processing unit, or VPU).
Software
SIMD instructions are widely used to process 3D graphics, although modern graphics cards with embedded SIMD have largely taken over this task from the CPU. Some systems also include permute functions that re-pack elements inside vectors, making them particularly useful for data processing and compression. They are also used in cryptography.[2][3][4] The trend of general-purpose computing on GPUs (GPGPU) may lead to wider use of SIMD in the future.
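As an illustration of such a permute, the sketch below uses SSE2's _mm_shuffle_epi32 in C (AltiVec's vperm and SSSE3's pshufb are more general examples of the same idea); the function name reverse_lanes is made up for this example.

```c
/* Sketch of a vector permute: re-pack the four 32-bit lanes of a register
   according to a compile-time pattern, here simply reversing their order. */
#include <emmintrin.h>

__m128i reverse_lanes(__m128i v)
{
    /* _MM_SHUFFLE(0,1,2,3) places lane 3 in position 0, lane 2 in position 1,
       and so on, reversing the element order in a single instruction. */
    return _mm_shuffle_epi32(v, _MM_SHUFFLE(0, 1, 2, 3));
}
```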
Adoption of SIMD systems in personal computer software was at first slow, due to a number of problems. One was that many of the early SIMD instruction sets tended to slow overall performance of the system due to the re-use of existing floating-point registers. Other systems, like MMX and 3DNow!, offered support for data types that were not interesting to a wide audience and had expensive instructions for switching context between the FPU and MMX registers. Compilers also often lacked support, requiring programmers to resort to assembly language coding.
SIMD on x86 had a slow start. The introduction of 3DNow! by AMD and SSE by Intel confused matters somewhat, but today the situation seems to have settled down (after AMD adopted SSE), and newer compilers should result in more SIMD-enabled software. Intel and AMD now both provide optimized math libraries that use SIMD instructions, and open-source alternatives like libSIMD and SIMDx86 have started to appear.
Apple Computer had somewhat more success, even though it entered the SIMD market later than the rest. AltiVec offered a rich system and can be programmed using increasingly sophisticated compilers from Motorola, IBM and GNU, so assembly language programming is rarely needed. Additionally, many of the systems that would benefit from SIMD were supplied by Apple itself, for example iTunes and QuickTime. However, in 2006, Apple computers moved to Intel x86 processors. Apple's APIs and development tools (Xcode) were rewritten to use SSE2 and SSE3 instead of AltiVec. Apple was the dominant purchaser of PowerPC chips from IBM and Freescale Semiconductor, and even though it abandoned the platform, further development of AltiVec has continued in several Power Architecture designs from Freescale, IBM and P.A. Semi.
SIMD within a register, or SWAR, is a range of techniques and tricks for performing SIMD operations in general-purpose registers on hardware that provides no direct support for SIMD instructions. This can be used to exploit parallelism in certain algorithms even on such hardware.
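A minimal SWAR sketch in C, assuming 32-bit unsigned integers: the four 8-bit lanes of two words are added using only ordinary integer instructions, with masking used to stop carries from crossing lane boundaries. The function name add_packed_bytes is illustrative, not taken from any library.

```c
/* SWAR: byte-wise addition inside a general-purpose 32-bit register. */
#include <stdint.h>

uint32_t add_packed_bytes(uint32_t a, uint32_t b)
{
    /* Add the low 7 bits of every byte first, so a carry out of one byte
       can never reach its neighbour, then fold the top bits back in with
       XOR (addition of the most significant bit modulo 256 is an XOR). */
    uint32_t low  = (a & 0x7F7F7F7Fu) + (b & 0x7F7F7F7Fu);
    uint32_t high = (a ^ b) & 0x80808080u;
    return low ^ high;
}
```

Each byte of the result equals the modular sum of the corresponding bytes of the inputs, giving four 8-bit additions per ordinary 32-bit add.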
Commercial applications
Though it has generally proven difficult to find sustainable commercial applications for SIMD-only processors, one that has had some measure of success is the GAPP, which was developed by Lockheed Martin and taken to the commercial sector by their spin-off Teranex. The GAPP's recent incarnations have become a powerful tool in real-time video processing applications like conversion between various video standards and frame rates (NTSC to/from PAL, NTSC to/from HDTV formats, etc.), deinterlacing, image noise reduction, adaptive video compression, and image enhancement.
A more ubiquitous application for SIMD is found in video games: nearly every modern video game console since 1998 has incorporated a SIMD processor somewhere in its architecture. The Sony PlayStation 2 was unusual in that its vector-float units could function as autonomous DSPs executing their own instruction streams, or as coprocessors driven by ordinary CPU instructions. 3D graphics applications tend to lend themselves well to SIMD processing as they rely heavily on operations with 4-dimensional vectors. Microsoft's Direct3D 9.0 now chooses at runtime processor-specific implementations of its own math operations, including the use of SIMD-capable instructions.
One of the very recent processors to use vector processing is the Cell Processor developed by IBM in cooperation with Toshiba and Sony. It uses a number of SIMD processors (each with independent RAM and controlled by a general purpose CPU) and is geared towards the huge datasets required by 3D and video processing applications.
Larger scale commercial SIMD processors are available from ClearSpeed Technology, Ltd. and Stream Processors, Inc. ClearSpeed's CSX600 (2004) has 96 cores each with 2 double-precision floating point units while the CSX700 (2008) has 192. Stream Processors is headed by computer architect Bill Dally. Their Storm-1 processor (2007) contains 80 SIMD cores controlled by a MIPS CPU.
References
- ^ DEC Rigel
- ^ RE: SSE2 speed, showing how SSE2 is used to implement SHA hash algorithms
- ^ Salsa20 speed; Salsa20 software, showing a stream cipher implemented using SSE2
- ^ Subject: up to 1.4x RSA throughput using SSE2, showing RSA implemented with SSE2
External links
- SIMD architectures (2000)
- Cracking Open The Pentium 3 (1999)
- Short Vector Extensions in Commercial Microprocessor
- Article about Optimizing the Rendering Pipeline of Animated Models Using the Intel Streaming SIMD Extensions
- SIMD history and performance comparison