Shader
From Wikipedia, the free encyclopedia
In the field of computer graphics, a shader is a set of software instructions used primarily to calculate rendering effects on graphics hardware with a high degree of flexibility. Shaders are used to program the programmable rendering pipeline of the graphics processing unit (GPU), which has largely superseded the fixed-function pipeline that allowed only common geometry transformation and pixel-shading functions; with shaders, customized effects can be used.
Introduction
Initially, shaders were used to perform pixel shading only (see pixel shaders below). However, the term stuck and is now used for other graphics pipeline stages as well.
As graphics processing units evolved, major graphics software libraries such as OpenGL and Direct3D gained the ability to program these new GPUs by defining special shading functions in their APIs.
Types of shaders
The Direct3D and OpenGL graphics libraries use three types of shaders:
- Vertex shaders are run once for each vertex given to the graphics processor. Their purpose is to transform each vertex's 3D position in virtual space into the 2D coordinate at which it appears on the screen (as well as a depth value for the Z-buffer). Vertex shaders can manipulate properties such as position, color, and texture coordinates, but cannot create new vertices. The output of the vertex shader goes to the next stage in the pipeline: a geometry shader if one is present, or the rasterizer otherwise.
- Geometry shaders can add and remove vertices from a mesh. Geometry shaders can be used to generate geometry procedurally or to add volumetric detail to existing meshes that would be too costly to process on the CPU. If geometry shaders are being used, the output is then sent to the rasterizer.
- Pixel shaders, also known as fragment shaders, calculate the color of individual pixels. The input to this stage comes from the rasterizer, which fills in the polygons being sent through the graphics pipeline. Pixel shaders are typically used for scene lighting and related effects such as bump mapping and color toning. (Direct3D uses the term "pixel shader," while OpenGL uses the term "fragment shader." The latter is arguably more correct, as there is not a one-to-one relationship between calls to the pixel shader and pixels on the screen. The most common reason for this is that the pixel shader is often invoked several times per pixel, once for every object that covers that pixel, even if it is occluded; the Z-buffer sorts this out later.) A minimal GLSL vertex/fragment pair is sketched after this list.
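As an illustration, below is a minimal sketch of such a vertex/fragment pair, written in GLSL (the OpenGL shading language introduced later in this article) using the older built-in variables; the vertex shader maps each vertex into clip space, and the fragment shader simply writes the color interpolated by the rasterizer.

    // Vertex shader: run once per vertex
    void main(void)
    {
        // Transform the vertex from object space to clip space
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        // Forward the vertex color to the rasterizer for interpolation
        gl_FrontColor = gl_Color;
    }

    // Fragment (pixel) shader: run once per fragment produced by the rasterizer
    void main(void)
    {
        // Write the interpolated color for this fragment
        gl_FragColor = gl_Color;
    }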
The unified shader model unifies the three aforementioned shader types in OpenGL and Direct3D 10.
The following outlines how these shader types are embedded in the GPU's processing pipeline:
Simplified graphics processing unit pipeline
- The CPU sends instructions (compiled shading language programs) and geometry data to the graphics processing unit, located on the graphics card.
- Within the vertex shader, the geometry is transformed and lighting calculations are performed.
- If a geometry shader is present in the graphics processing unit, it applies changes to the geometry in the scene (a pass-through sketch in GLSL follows this list).
- The calculated geometry is triangulated (subdivided into triangles).
- Triangles are transformed into pixel quads (one pixel quad is a 2 × 2 pixel primitive).
- The pixel shader then computes a color for each covered pixel, and after depth testing against the Z-buffer the result is written to the framebuffer.
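As a sketch of the geometry shader stage above, the following GLSL 1.50 geometry shader passes each incoming triangle through unchanged; a real shader would add, move, or drop vertices at this point.

    #version 150
    layout(triangles) in;                         // receive one triangle at a time
    layout(triangle_strip, max_vertices = 3) out; // emit at most one triangle

    void main(void)
    {
        // Re-emit the incoming triangle unchanged (a pass-through shader)
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }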
Parallel processing
Shaders are written to apply transformations to a large set of elements at a time, for example, to each pixel in an area of the screen, or to every vertex of a model. This is well suited to parallel processing, and most modern GPUs have a multi-core design to facilitate this, vastly improving the efficiency of processing.
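For example, the fragment shader sketched below converts each pixel to grayscale (the image sampler name is hypothetical, and the texture coordinate is assumed to be forwarded by the vertex stage); the same few instructions run independently for every pixel on screen, which is exactly the kind of workload a multi-core GPU executes in parallel.

    uniform sampler2D image;   // hypothetical input texture bound by the application

    void main(void)
    {
        vec4 c = texture2D(image, gl_TexCoord[0].st);
        // Standard luma weights; this same computation runs in parallel
        // for every fragment on the screen
        float grey = dot(c.rgb, vec3(0.299, 0.587, 0.114));
        gl_FragColor = vec4(vec3(grey), c.a);
    }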
Programming shaders
OpenGL (as an extension to version 1.5, and as part of the core since version 2.0) provides a C-like shading language called the OpenGL Shading Language, or GLSL. There are also interfaces for the Cg shading language, developed by Nvidia, which is syntactically somewhat similar to GLSL.
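As a small, hypothetical example of that C-like syntax, the GLSL fragment shader below uses a user-defined function and a uniform parameter (the names brightness and scale are illustrative, set by the application, and not part of any API):

    uniform float brightness;  // hypothetical parameter set by the application

    // GLSL supports C-style user-defined functions and vector types
    vec3 scale(vec3 color, float factor)
    {
        return color * factor;
    }

    void main(void)
    {
        gl_FragColor = vec4(scale(gl_Color.rgb, brightness), gl_Color.a);
    }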
In the Microsoft Direct3D API (Direct3D 9 and newer), shaders are programmed with High Level Shader Language, or HLSL.
See also
- List of common shading algorithms
- GPGPU allows general-purpose computation on the GPU, for example through Nvidia's CUDA platform.
Further reading
- Steve Upstill: The RenderMan Companion: A Programmer's Guide to Realistic Computer Graphics, Addison-Wesley, ISBN 0-201-50868-0
- David S. Ebert, F. Kenton Musgrave, Darwyn Peachey, Ken Perlin, Steven Worley: Texturing and Modeling: A Procedural Approach, AP Professional, ISBN 0-12-228730-4. Ken Perlin is the author of Perlin noise, an important procedural texturing primitive.
- Randima Fernando, Mark Kilgard: The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics, Addison-Wesley Professional, ISBN 0-321-19496-9
- Randi J. Rost: OpenGL Shading Language, Addison-Wesley Professional, ISBN 0-321-19789-5
External links
- OpenGL Shading Language @ Lighthouse 3D: a GLSL tutorial
- Riemer's DirectX & HLSL Tutorial: an HLSL tutorial using DirectX, with lots of sample code
- GPGPU: general-purpose computation on GPUs
- MSDN: Pipeline Stages (Direct3D 10)