
Introduction

The world of 3D graphics, including games, is filled with terms, and those terms do not always have a single agreed-upon definition. Sometimes the same things are called differently, and conversely, the same effect may appear in game settings as "HDR" in one title, "Bloom" in another, "Glow" in a third, or "Postprocessing" in a fourth. From most developers' boasts about what they have built into their graphics engines, it is not clear what is really meant.

This article is intended to help you understand what some of these words, most often used in such contexts, mean. We will not cover all 3D graphics terms here, only those that have recently become widespread as distinguishing features and technologies of game graphics engines and as names for the graphics settings of modern games. To get started, I strongly recommend that you familiarize yourself with the earlier introductory articles on 3D graphics.

If something in this article or in Alexander's articles is unclear to you, it makes sense to start with the earliest introductory material. Those articles are somewhat outdated, of course, but they contain the basic, most essential information. Here we will talk about more "high-level" terms. You should already know the basic concepts of real-time 3D graphics and the design of the graphics pipeline. On the other hand, do not expect mathematical formulas or academic rigor - that is not what this article is for; the short code sketches included below are only rough illustrations.

Terms

List of terms described in the article:

Shader

A shader, in a broad sense, is a program that determines the visual appearance of a surface. It can describe lighting, texturing, post-processing, and so on. Shaders grew out of the work of Cook (shade trees) and Perlin (the pixel stream language). The best-known shading language is the RenderMan Shading Language. Programmable shaders were first introduced in Pixar's RenderMan, which defines several types of shaders: light source shaders, surface shaders, displacement shaders, volume shaders and imager shaders. These shaders are most often executed in software on general-purpose processors and have no full hardware implementation. Later, many researchers described languages similar to RenderMan but designed for hardware acceleration: the PixelFlow system (Olano and Lastra), Quake Shader Language (used by id Software in the Quake III graphics engine to describe multi-pass rendering), and others. Peercy et al. developed a technique for running programs with loops and conditionals on traditional hardware architectures using multiple rendering passes: RenderMan shaders were broken into several passes that were combined in the framebuffer. Later still, languages appeared that are hardware-accelerated in DirectX and OpenGL. This is how shaders were adapted for real-time graphics applications.

Early video chips were not programmable and performed only pre-programmed, fixed-function operations; for example, the lighting algorithm was hard-wired into the hardware and nothing could be changed. Video chip makers then gradually introduced elements of programmability into their chips. At first these were very weak capabilities (NV10, known as the NVIDIA GeForce 256, could already run some primitive programs) that received no software support in the Microsoft DirectX API, but over time the possibilities kept expanding. The next step came with NV20 (GeForce 3) and NV2A (the video chip used in the Microsoft Xbox game console), which became the first chips with hardware support for DirectX API shaders. Shader Model 1.0/1.1, which appeared in DirectX 8, was very limited: each shader (especially pixel shaders) could be only relatively short and combined a very limited set of instructions. Later, Shader Model 1 (SM1 for short) was extended with Pixel Shaders 1.4 (ATI R200), which offered more flexibility but was still very limited. Shaders of that time were written in a so-called assembly shader language, close to assembler for general-purpose CPUs. Its low level makes the code hard to read and to program, especially when the program grows large, because it is far from the elegance and structure of modern programming languages.

Shader Model 2.0 (SM2), which appeared in DirectX 9 (supported by the ATI R300, the first GPU with shader model 2.0 support), greatly expanded the capabilities of real-time shaders, offering longer and more complex shaders and a noticeably expanded instruction set. Floating point calculations in pixel shaders were added, which was also a major improvement. Along with SM2, DirectX 9 introduced a high-level shader language (HLSL), which is very similar to C, and an efficient compiler that translates HLSL programs into low-level code "understood" by the hardware. Several compilation profiles are available, designed for different hardware architectures. A developer can therefore write one HLSL shader and have DirectX compile it into an optimal program for whichever video chip the user has installed. After that came NVIDIA's NV30 and NV40 chips, which advanced hardware shader capabilities another step, adding even longer shaders, dynamic branching in vertex and pixel shaders, texture fetching from vertex shaders, and so on. There have been no qualitative changes since then; they are expected towards the end of 2006 with DirectX 10...

All in all, shaders have added many new possibilities to the graphics pipeline for transforming and lighting vertices and for processing pixels in whatever way the developers of each particular application want. And yet the capabilities of hardware shaders have not been fully exploited in applications, and as they grow with each new hardware generation, we will soon see the level of those same RenderMan shaders that once seemed unattainable for gaming video accelerators. So far, the real-time shader models supported by hardware video accelerators define only two types of shaders: vertex shaders and pixel shaders (in the DirectX 9 API definition). In the future, DirectX 10 promises to add geometry shaders to them.

Vertex Shader

Vertex shaders are programs executed by the video chip that perform mathematical operations on vertices (the points that make up 3D objects in games); in other words, they provide the ability to run programmable algorithms that change vertex parameters and their lighting (T&L - Transform & Lighting). Each vertex is defined by several variables; for example, its position in 3D space is defined by the coordinates x, y and z. Vertices can also be described by color, texture coordinates, and so on. Depending on the algorithm, a vertex shader changes this data in the course of its work, for example by computing and writing new coordinates and/or a new color. That is, the input to the vertex shader is the data of the one vertex of the geometry currently being processed: usually its coordinates in space, normal, color components and texture coordinates. The output of the executed program serves as input to the next part of the pipeline: the rasterizer linearly interpolates the data across the surface of the triangle and executes the corresponding pixel shader for each pixel. A very simple and rough (but illustrative, I hope) example: a vertex shader lets you take a 3D sphere object and turn it into a green cube :).
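To make the idea concrete, here is a minimal CPU-side Python sketch of what a vertex shader conceptually does (this is not real GPU shader code): for one input vertex it takes position, normal and color, applies a transformation matrix and outputs new attributes for the rasterizer. The matrix and vertex values are made up for illustration.

```python
import numpy as np

def vertex_shader(position, normal, color, mvp):
    """Toy 'vertex shader': transforms one vertex by a model-view-projection
    matrix and passes the other attributes through (possibly modified)."""
    p = np.append(position, 1.0)          # homogeneous coordinates (x, y, z, 1)
    clip_pos = mvp @ p                    # transform into clip space
    out_color = color * 0.5 + 0.5         # an arbitrary per-vertex color tweak
    return clip_pos, normal, out_color

# Hypothetical input data: one vertex and an identity "MVP" matrix.
mvp = np.eye(4)
pos, nrm, col = vertex_shader(np.array([1.0, 2.0, 3.0]),
                              np.array([0.0, 1.0, 0.0]),
                              np.array([0.2, 0.4, 0.6]),
                              mvp)
print(pos, nrm, col)
```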

Before the advent of the NV20 video chip, developers had two options: either use their own programs and algorithms to change vertex parameters, with all calculations done by the CPU (software T&L), or rely on fixed algorithms in video chips with hardware transform and lighting support (hardware T&L). The very first DirectX shader model was a big step forward from fixed transform and lighting functions to fully programmable algorithms. It became possible, for example, to run the skinning algorithm entirely on the video chip, whereas previously the only option was to run it on the general-purpose CPU. Now, with capabilities greatly improved since that NVIDIA chip, you can already do a great deal with vertices using vertex shaders (except create them, perhaps)...

Examples of how and where vertex shaders are applied:

Pixel Shader

Pixel shaders are programs executed by the video chip during rasterization for each pixel of the image; they perform texture sampling and/or mathematical operations on the color and depth (Z-buffer) values of pixels. All pixel shader instructions are executed per pixel, after the geometry transformation and lighting operations are complete. As the result of its work, the pixel shader produces the final pixel color value and Z-value for the next stage of the graphics pipeline, blending. The simplest example of a pixel shader is plain multitexturing: just mixing two textures (diffuse and lightmap, for example) and applying the result to the pixel.
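A rough Python sketch of that multitexturing example: for one pixel, two textures (a diffuse map and a lightmap, represented here as small arrays) are sampled at the same texture coordinates and multiplied together. The texture contents and the nearest-neighbour sampling are simplified assumptions, not GPU code.

```python
import numpy as np

def sample(texture, u, v):
    """Nearest-neighbour texture fetch; u and v are in [0, 1]."""
    h, w, _ = texture.shape
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y, x]

def pixel_shader(u, v, diffuse_map, light_map):
    """Toy 'pixel shader': classic multitexturing, diffuse * lightmap."""
    diffuse = sample(diffuse_map, u, v)
    light = sample(light_map, u, v)
    return diffuse * light                # final pixel color

# Hypothetical 2x2 textures.
diffuse_map = np.full((2, 2, 3), 0.8)
light_map = np.array([[[1.0] * 3, [0.5] * 3],
                      [[0.25] * 3, [0.1] * 3]])
print(pixel_shader(0.1, 0.9, diffuse_map, light_map))
```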

Before the advent of video chips with hardware pixel shader support, developers had only ordinary multitexturing and alpha blending at their disposal, which significantly limited many visual effects and made much of what is now possible unavailable. If something could still be done programmatically with geometry, with pixels it could not. Early versions of DirectX (up to and including 7.0) always performed all calculations per vertex and offered extremely limited per-pixel lighting functionality (remember EMBM - environment bump mapping - and DOT3) in the later of those versions. Pixel shaders made it possible to light any surface pixel by pixel, using materials programmed by the developers. The version 1.1 pixel shaders that appeared with NV20 (in the DirectX sense) could do much more than multitexturing, although most games using SM1 simply applied traditional multitexturing to most surfaces, running more complex pixel shaders only on some of them to create various special effects (water is still the best-known example of pixel shaders in games). Now, after the advent of SM3 and the video chips supporting it, pixel shader capabilities have grown to the point where even ray tracing can be done with them, albeit with some limitations for now.

Examples of using pixel shaders:

Procedural Textures

Procedural textures are textures described by mathematical formulas. Such textures take up no space in video memory; they are created by the pixel shader "on the fly", each of their elements (texels) obtained as a result of executing the corresponding shader instructions. The most common procedural textures are different kinds of noise (for example, fractal noise), wood, water, lava, smoke, marble, fire, and so on - anything that can be described mathematically in a relatively simple way. Procedural textures also allow animated textures with only a slight modification of the formulas. For example, clouds made this way look quite decent both in motion and as stills.
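A tiny Python sketch of the idea: each texel is computed from a formula instead of being read from memory. Here a simple "wood rings" pattern is generated from the texel coordinates; the formula and its parameters are just an illustrative assumption. Adding a time parameter to the formula would animate the texture, as the article describes.

```python
import math

def wood_texel(u, v, ring_frequency=12.0):
    """Procedural 'wood rings' texel: the value depends only on (u, v) and a formula."""
    r = math.sqrt((u - 0.5) ** 2 + (v - 0.5) ** 2)      # distance from texture center
    rings = 0.5 + 0.5 * math.sin(r * ring_frequency * 2.0 * math.pi)
    return (0.6 * rings + 0.2, 0.4 * rings + 0.1, 0.1)  # brownish RGB

size = 64
texture = [[wood_texel(x / size, y / size) for x in range(size)]
           for y in range(size)]          # generated "on the fly", nothing stored on disk
print(texture[0][0])
```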

The advantages of procedural textures also include an unlimited level of detail for each texture: there is simply no pixelation, since the texture is effectively always generated at the size needed to display it. Animated procedural textures are of particular interest; with their help you can make waves on water without pre-calculated animated textures. Another plus of such textures is that the more of them are used in a product, the less work there is for the artists (though more for the programmers) on creating regular textures.

Unfortunately, procedural textures have not yet found proper use in games. In real applications it is still often easier to load a regular texture: video memory is growing by leaps and bounds, the most modern accelerators already carry 512 megabytes of dedicated video memory, and it has to be filled with something. Moreover, the opposite is often done: to speed up the math in pixel shaders, lookup tables (LUTs) are used - special textures containing pre-calculated results. Instead of executing several math instructions for each pixel, pre-computed values are simply read from a texture. But the further we go, the more the emphasis should shift towards mathematical calculations; take the new generation of ATI video chips, RV530 and R580, which have 12 and 48 pixel processors for every 4 and 16 texture units, respectively. This is especially true for 3D textures, because while 2D textures can be placed in the accelerator's local memory without any problems, 3D textures require much more memory.

Examples of procedural textures:

Bump Mapping/Specular Bump Mapping

Bump mapping is a technique for simulating bumps (or microrelief, if you prefer) on a flat surface without large computational costs or changes to the geometry. For each pixel of the surface, a lighting calculation is performed based on the values in a special height texture called a bump map. This is usually an 8-bit grayscale texture, and its values are not mapped like the colors of ordinary textures but are used to describe the roughness of the surface. The value of each texel determines the height of the corresponding point of the relief: larger values mean a greater height above the original surface, smaller values a lower one. Or vice versa.

The degree of illumination of a point depends on the angle of incidence of the light rays. The smaller the angle between the normal and the light ray, the greater the illumination of the surface point. On a flat surface the normals at every point are the same, so the illumination is also the same. On an uneven surface (as almost all real surfaces are), the normals differ from point to point, and so does the illumination: at one point it is greater, at another smaller. Hence the principle of bump mapping: to simulate irregularities, normals are specified for different points of the polygon and taken into account in the per-pixel lighting calculation. The result is a more natural image of the surface; bump mapping gives the surface extra detail, such as bumps on brick or pores on skin, without increasing the geometric complexity of the model, since the calculations are carried out at the pixel level. Moreover, when the position of the light source changes, the lighting of these irregularities changes correctly.
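A minimal Python sketch of this principle: per-pixel normals are derived from a height map by finite differences and then used in a simple diffuse (Lambert) lighting calculation. The height map, light direction and bump strength here are made-up illustration values.

```python
import numpy as np

def bump_lighting(height_map, light_dir, strength=1.0):
    """Per-pixel diffuse lighting from a height map (very simplified bump mapping)."""
    light_dir = light_dir / np.linalg.norm(light_dir)
    h = height_map.astype(float)
    dx = np.gradient(h, axis=1) * strength     # slope along x
    dy = np.gradient(h, axis=0) * strength     # slope along y
    # Perturbed normal for each pixel: (-dh/dx, -dh/dy, 1), normalized.
    n = np.dstack((-dx, -dy, np.ones_like(h)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return np.clip(n @ light_dir, 0.0, 1.0)    # N.L for every pixel

height_map = np.random.rand(8, 8)              # hypothetical 8x8 bump map
lit = bump_lighting(height_map, np.array([0.5, 0.5, 1.0]))
print(lit.shape, lit.min(), lit.max())
```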

Of course, per-vertex lighting is much simpler computationally, but it looks too unrealistic, especially with relatively low-poly geometry: color interpolation across each pixel cannot reproduce values greater than those calculated at the vertices. That is, pixels in the middle of a triangle cannot be brighter than the fragments near a vertex. Consequently, areas with sharp lighting changes, such as highlights and lights very close to the surface, will not be displayed physically correctly, and this is especially noticeable in motion. The problem can be partially solved by increasing the geometric complexity of the model, splitting it into more vertices and triangles, but per-pixel lighting is the better option.

To continue, it is necessary to recall the components of lighting. The color of a surface point is calculated as the sum of ambient, diffuse and specular components from all lights in the scene (ideally from all, often many are neglected). The contribution to this value from each light source depends on the distance between the light source and a point on the surface.

Lighting components:

And now let's add bumpmapping to this:

The uniform (ambient) lighting component is an approximation, the "base" illumination for every point of the scene, in which all points are lit equally and the illumination does not depend on any other factors.
The diffuse component of the illumination depends on the position of the light source and on the surface normal. This component is different for each vertex of an object, which gives it volume; the light no longer fills the surface with a uniform shade.
The specular component of illumination shows up as highlights, the reflections of light rays off the surface. To calculate it, in addition to the light source position vector and the normal, two more vectors are used: the view direction vector and the reflection vector. The specular lighting model was first proposed by Phong (Bui Tuong Phong). These highlights greatly increase the realism of the image, because few real surfaces reflect no light at all, so the specular component is very important. Especially in motion, because a highlight immediately reveals a change in the position of the camera or of the object itself. Later, researchers came up with other, more complex ways to calculate this component (Blinn, Cook-Torrance, Ward), taking into account the distribution of light energy, its absorption by materials and scattering in the form of a diffuse component. (A short sketch combining these components in code follows this list.)
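Here is that sketch: a minimal Python version of the classic Phong model, with ambient, diffuse and specular terms. The vectors and material constants are made up for illustration.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(normal, light_dir, view_dir, base_color,
          ambient=0.1, diffuse_k=0.7, specular_k=0.5, shininess=32):
    """Ambient + diffuse + specular (Phong) lighting for one surface point."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    ndotl = max(np.dot(n, l), 0.0)
    r = 2.0 * ndotl * n - l                     # reflection of the light direction
    spec = max(np.dot(r, v), 0.0) ** shininess if ndotl > 0 else 0.0
    return base_color * (ambient + diffuse_k * ndotl) + specular_k * spec

color = phong(normal=np.array([0.0, 0.0, 1.0]),
              light_dir=np.array([0.3, 0.3, 1.0]),
              view_dir=np.array([0.0, 0.0, 1.0]),
              base_color=np.array([1.0, 0.2, 0.2]))
print(color)
```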

So, Specular Bump Mapping is obtained in this way:

And here is the same thing shown using the game Call of Duty 2 as an example:


The first fragment of the picture shows rendering without bump mapping at all, the second (top right) is bump mapping without a specular component, the third has a specular component of normal strength, as used in the game, and the last one, bottom right, has the maximum possible specular component.

As for the first hardware use, some types of bump mapping (Emboss Bump Mapping) were applied as far back as video cards based on NVIDIA Riva TNT chips, but the techniques of that time were extremely primitive and not widely used. The next well-known type was Environment Mapped Bump Mapping (EMBM), but at the time only Matrox video cards had hardware support for it in DirectX, and again its use was very limited. Then came Dot3 Bump Mapping, and the video chips of that era (GeForce 256 and GeForce 2) required three passes to fully execute such a mathematical algorithm, since they were limited to two simultaneously used textures. Starting with NV20 (GeForce 3), it became possible to do the same in one pass using pixel shaders. Later still, more effective techniques came into use, such as normal mapping.

Examples of using bumpmapping in games:


Displacement Mapping

Displacement mapping is a method of adding detail to 3D objects. Unlike bump mapping and other per-pixel methods, where only the lighting of a point is modeled correctly from height maps while its position in space does not change (giving only the illusion of increased surface complexity), displacement maps let you build genuinely complex 3D objects from vertices and polygons, without the restrictions inherent in per-pixel methods. This method changes the positions of the triangles' vertices, shifting them along the normal by an amount based on the values in the displacement map. A displacement map is usually a grayscale texture whose values are used to determine the height of each point of the object's surface (the values can be stored as 8-bit or 16-bit numbers), similar to a bump map. Displacement maps are often used (in which case they are also called height maps) to create terrain with hills and valleys. Since the terrain is described by a 2D displacement map, it is relatively easy to deform when needed: it only requires modifying the displacement map and rendering the surface from it in the next frame.

Visually, the creation of a landscape by applying displacement maps is shown in the picture. Initially there were 4 vertices and 2 polygons; the end result is a full-fledged piece of terrain.

The big advantage of applying displacement maps is not just the ability to add detail to a surface but the ability to create an object almost entirely. A low-poly object is taken and split (tessellated) into a larger number of vertices and polygons. The vertices produced by tessellation are then displaced along the normal by an amount based on the value read from the displacement map. The result is a complex 3D object obtained from a simple one using the appropriate displacement map:


The number of triangles created by tessellation must be large enough to capture all the detail given by the displacement map. Sometimes additional triangles are created automatically using N-patches or other methods. Displacement maps are best used together with bump mapping, with bump mapping supplying the fine details for which per-pixel lighting is sufficient.
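A minimal Python sketch of the core step (the tessellation itself is omitted): each vertex of an already tessellated mesh is shifted along its normal by the height read from a displacement map. The mesh, the map and the scale value are made up for illustration.

```python
import numpy as np

def displace(vertices, normals, uvs, displacement_map, scale=0.2):
    """Shift each vertex along its normal by the height sampled from the map."""
    h, w = displacement_map.shape
    out = vertices.copy()
    for i, (u, v) in enumerate(uvs):
        x = min(int(u * w), w - 1)
        y = min(int(v * h), h - 1)
        height = displacement_map[y, x]          # 0..1 value from the height map
        out[i] = vertices[i] + normals[i] * height * scale
    return out

# Hypothetical flat 2x2 patch facing +Z.
vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
normals = np.tile(np.array([0.0, 0.0, 1.0]), (4, 1))
uvs = [(0, 0), (1, 0), (0, 1), (1, 1)]
disp_map = np.array([[0.0, 0.5], [0.25, 1.0]])
print(displace(vertices, normals, uvs, disp_map))
```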

Displacement mapping first received support in DirectX 9.0, the first version of the API to support the technique. DX9 supports two kinds of displacement mapping: filtered and presampled. The first method was supported by the now-forgotten MATROX Parhelia video chip, the second by the ATI RADEON 9700. The filtered method differs in that it allows mip levels to be used for displacement maps and trilinear filtering to be applied to them. In this method, the mip level of the displacement map is selected for each vertex based on the distance from the vertex to the camera, that is, the level of detail is chosen automatically. This achieves an almost uniform subdivision of the scene, with triangles of approximately equal size.

Displacement mapping can essentially be thought of as a method of geometry compression; using displacement maps reduces the amount of memory required for a given level of detail in a 3D model. Bulky geometric data is replaced by simple 2D displacement textures, usually 8-bit or 16-bit. This reduces the memory and bandwidth requirements for delivering geometric data to the video chip, and these limits are among the main ones for today's systems. Alternatively, for equal bandwidth and memory requirements, displacement mapping allows much more geometrically complex 3D models. Using models of far lower complexity, with thousands of triangles instead of tens or hundreds of thousands, also makes it possible to speed up their animation, or to improve it by applying more complex algorithms and techniques such as cloth simulation.

Another benefit is that using displacement maps turns complex 3D polygonal meshes into several 2D textures, which are easier to work with. For example, regular mipmapping can be used to organize levels of detail for displacement maps. Also, instead of relatively complex 3D mesh compression algorithms, ordinary texture compression methods can be used, even JPEG-like ones. And for the procedural creation of 3D objects, the usual algorithms for two-dimensional textures can be applied.

But displacement maps also have some limitations; they cannot be applied in all situations. For example, smooth objects that do not contain a lot of fine detail are better represented as standard polygon meshes or other higher-level surfaces such as Bezier surfaces. On the other hand, very complex models such as trees or plants are also not easily represented by displacement maps. There are also usability issues: it almost always requires specialized utilities, because it is very difficult to create displacement maps directly (unless we are talking about simple objects such as terrain). Many of the problems and limitations of displacement maps are the same as those of the related mapping techniques, since the two methods are essentially two different representations of a similar idea.

As an example from real games, I'll cite one that uses texture fetching from the vertex shader, a feature that appeared in NVIDIA NV40 video chips and Shader Model 3.0. Vertex texturing can be used for a simple form of displacement mapping performed entirely by the video chip, without tessellation (splitting into more triangles). The use of such an algorithm is limited: it only makes sense if the maps are dynamic, that is, if they change during rendering. An example is rendering large water surfaces, which is what Pacific Fighters does:


Normal Mapping

Normal mapping is an improved version of the bump mapping technique described earlier, an extended variant of it. Bump mapping was developed by Blinn back in 1978; in that relief mapping method the surface normals are modified based on information from height (bump) maps. While bump mapping only perturbs the existing normal at surface points, normal mapping completely replaces the normals by fetching their values from a specially prepared normal map. These maps are usually textures with pre-computed normal values stored in them, represented as RGB color components (there are also special formats for normal maps, including compressed ones), in contrast to the 8-bit grayscale height maps used in bump mapping.

In general, like bump mapping, this is a "cheap" method of adding detail to models of relatively low geometric complexity without using more real geometry, only a more advanced one. One of the most interesting applications of the technique is significantly increasing the detail of low-poly models using normal maps obtained by processing the same model at high geometric complexity. Ideas for extracting detail from highly detailed objects were voiced back in the mid-1990s, although at that time in a different context. Later, in 1998, ideas were presented for transferring detail from high-poly models to low-poly models in the form of normal maps. Normal maps contain a more detailed description of the surface than bump maps and allow more complex shapes to be represented.

Normal maps provide a more efficient way to store detailed surface data than simply using a large number of polygons. Their only major limitation is that they are not well suited for large-scale detail, because normal mapping does not actually add polygons or change the shape of the object; it only creates the appearance of doing so. It is just a simulation of detail based on per-pixel lighting calculations. This is very noticeable at the object's silhouette edges and at steep viewing angles. Therefore, the most reasonable way to apply normal mapping is to make the low-poly model detailed enough to retain the object's basic shape and use normal maps to add the finer details.

Normal maps are usually created from two versions of a model, a low-poly and a high-poly one. The low-poly model consists of a minimum of geometry, the basic shape of the object, while the high-poly model contains everything needed for maximum detail. Then, with the help of special utilities, the two are compared, the difference is calculated and stored in a texture called a normal map. When creating it, a bump map can additionally be used for very small details that cannot be modeled even in the high-poly model (skin pores and other small depressions).

Normal maps were originally represented as regular RGB textures, where the R, G and B color components (from 0 to 1) are interpreted as X, Y and Z coordinates; each texel of a normal map stores the normal of a surface point. Normal maps come in two kinds: with coordinates in model space (the common coordinate system of the object) or in tangent space (the local coordinate system of a triangle). The second option is used more often. When normal maps are stored in model space, they must have three components, since any direction can occur; when stored in tangent space, you can get by with two components and reconstruct the third in the pixel shader.
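A small Python sketch of what is described above: decoding a normal stored as color components in [0, 1] back into a vector in [-1, 1], reconstructing the Z component from the two stored ones, and using the result for diffuse lighting. The texel values and light vector are illustrative assumptions, and the tangent-space transform is left out for brevity.

```python
import math

def decode_normal_two_channel(r, g):
    """Rebuild a unit normal from two stored components (tangent-space style)."""
    x = r * 2.0 - 1.0
    y = g * 2.0 - 1.0
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))   # third component reconstructed
    return (x, y, z)

def diffuse(normal, light_dir):
    lx, ly, lz = light_dir
    length = math.sqrt(lx * lx + ly * ly + lz * lz)
    nx, ny, nz = normal
    return max(0.0, (nx * lx + ny * ly + nz * lz) / length)

n = decode_normal_two_channel(0.6, 0.5)            # hypothetical normal map texel
print(n, diffuse(n, (0.0, 0.0, 1.0)))
```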

Modern real-time applications still lag far behind pre-rendered animation in image quality; this primarily concerns the quality of lighting and the geometric complexity of scenes. The number of vertices and triangles that can be computed in real time is limited, so methods that reduce the amount of geometry are very important. Several such methods were developed before normal mapping, but low-poly models, even with bump mapping, look noticeably worse than more complex ones. Although normal mapping has a few disadvantages (the most obvious: since the model remains low-poly, its angular silhouette gives it away), the final rendering quality improves noticeably while the geometric complexity of the models stays low. Recently the popularity of this technique, and its use in all the popular game engines, has clearly grown. The reason is the combination of excellent resulting quality with reduced requirements for the geometric complexity of models. The normal mapping technique is now used almost everywhere, and all new games use it as widely as possible. Here is just a short list of well-known PC games that use normal mapping: Far Cry, Doom 3, Half-Life 2, Call of Duty 2, F.E.A.R., Quake 4. All of them look much better than games of the past, in part thanks to normal maps.

There is only one negative consequence of using this technique: an increase in texture volume. After all, a normal map strongly affects how an object looks and must have a sufficiently high resolution, so the requirements for video memory and its bandwidth double (in the case of uncompressed normal maps). But video cards with 512 MB of local memory are already being produced, memory bandwidth is constantly growing, and compression methods have been developed specifically for normal maps, so these limitations are in fact not very important. The effect that normal mapping gives is much greater, allowing relatively low-poly models, reducing the memory required to store geometric data, improving performance and producing a very decent visual result.

Parallax Mapping/Offset Mapping

Normal mapping, developed back in 1984, was followed by Relief Texture Mapping, introduced by Oliveira and Bishop in 1999, a method of texture mapping based on depth information. The method did not find application in games, but its idea contributed to the continuation of work on parallax mapping and its improvement. Kaneko introduced parallax mapping in 2001; it was the first efficient per-pixel method for displaying the parallax effect. In 2004, Welsh demonstrated the use of parallax mapping on programmable video chips.

Of all the techniques described here, this one probably has the most names. I will list the ones I have encountered: Parallax Mapping, Offset Mapping, Virtual Displacement Mapping, Per-Pixel Displacement Mapping. For brevity, the first name is used in this article.

Parallax mapping is another alternative to the bump mapping and normal mapping techniques that gives even more surface detail and a more natural-looking representation of 3D surfaces, also without too much of a performance penalty. The technique is a cross between displacement mapping and normal mapping; it, too, is designed to display more surface detail than the original geometric model contains. It is similar to normal mapping, but the difference is that it distorts the texture mapping by changing the texture coordinates so that, when you look at the surface from different angles, it appears convex, even though the surface is in fact flat and does not change. In other words, parallax mapping is a technique that approximates the shift of visible surface points depending on the change of viewpoint.

The technique shifts the texture coordinates (which is why it is sometimes called offset mapping) so that the surface looks more voluminous. The idea of the method is to return the texture coordinates of the point where the view vector intersects the surface. This would require ray tracing against the height map, but if the height values do not change too sharply (the surface is "smooth"), an approximation is enough. The method therefore works well for surfaces with smoothly changing heights, without intersection calculations and large offset values. Such a simple algorithm differs from normal mapping by only three extra pixel shader instructions: two math instructions and one additional texture fetch. Once the new texture coordinate is calculated, it is used for reading the other texture layers: the base texture, the normal map, and so on. This kind of parallax mapping on modern video chips is almost as efficient as conventional texture mapping, and its result is a more realistic surface than simple normal mapping gives.
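A Python sketch of the simple (offset-limited) variant: the height sampled at the original texture coordinate shifts that coordinate along the view direction in tangent space, and all further texture reads use the shifted coordinate. The scale and bias values, and the specific convention for the view vector, are assumptions for illustration.

```python
def parallax_offset(u, v, height, view_tangent, scale=0.04, bias=-0.02):
    """Offset-limited parallax mapping: shift the texture coordinate by the
    sampled height along the (tangent-space) view direction."""
    vx, vy, vz = view_tangent                 # view vector in tangent space, z points away from the surface
    h = height * scale + bias                 # scaled and biased height value
    return u + h * vx, v + h * vy             # new texture coordinate

# Hypothetical values: height read from the height map at (u, v),
# viewer looking at the surface at an angle.
new_u, new_v = parallax_offset(0.50, 0.50, height=0.8,
                               view_tangent=(0.5, 0.0, 0.87))
print(new_u, new_v)   # the base texture, normal map, etc. would now be sampled here
```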

But the use of plain parallax mapping is limited to height maps with small differences in values. "Steep" bumps are handled incorrectly by the algorithm: various artifacts appear, textures "float", and so on. Several modified methods have been developed to improve the parallax mapping technique. Several researchers (Yerex, Donnelly, Tatarchuk, Policarpo) have described new methods that improve the initial algorithm. Almost all of the ideas are based on ray tracing in the pixel shader to determine where surface details occlude one another. The modified methods have received several different names: Parallax Mapping with Occlusion, Parallax Mapping with Distance Functions, Parallax Occlusion Mapping. For brevity, we will call them all Parallax Occlusion Mapping.

Parallax occlusion mapping methods also include ray tracing to determine heights and take texel visibility into account. When the surface is viewed at an angle, texels occlude one another, and taking this into account adds more depth to the parallax effect. The resulting image becomes more realistic, and such improved methods can be used for deeper relief; they are great for brick and stone walls, pavement, and so on. It should be especially noted that the main difference between parallax mapping and displacement mapping is that in parallax mapping all calculations are per-pixel, not per-vertex, and the geometry itself never moves. That is why the method goes by names such as Virtual Displacement Mapping and Per-Pixel Displacement Mapping. Look at the picture: it is hard to believe, but the paving stones here are just a per-pixel effect:

The method allows detailed surfaces to be displayed efficiently without the millions of vertices and triangles that would be required if the same relief were implemented with geometry. At the same time, high detail is preserved (except at silhouettes and edges) and animation calculations are greatly simplified. Such a technique is cheaper than using real geometry and uses significantly fewer polygons, especially in cases with very fine detail. There are many applications for the algorithm, and it is best suited for stones, bricks and the like.

An added benefit is that height maps can change dynamically (a water surface with waves, bullet holes in walls, and so on). The disadvantage of the method is the absence of geometrically correct silhouettes (object edges), since the algorithm is per-pixel and is not real displacement mapping. But it saves performance by reducing the load from transformation, lighting and geometry animation, and saves the video memory that would be needed to store large amounts of geometry data. The technique is also relatively simple to integrate into existing applications, using the familiar utilities already employed for normal mapping.

The technique is already used in recent real games, which so far make do with simple parallax mapping based on static height maps, without ray tracing and intersection calculations. Here are examples of parallax mapping in games:

Postprocessing

In a broad sense, post-processing is everything that happens after the main work of building an image. In other words, post-processing is any change made to an image after it has been rendered. Post-processing is a set of techniques for creating special visual effects, and they are applied after the main work of rendering the scene has been completed, that is, post-processing effects operate on the finished bitmap.

A simple example from photography: you have photographed a beautiful lake with greenery in clear weather. The sky comes out very bright and the trees too dark. You load the photo into a graphics editor and start changing the brightness, contrast and other parameters for areas of the image or for the entire picture. You no longer have the opportunity to change the camera settings; you are processing the finished image. This is post-processing. Another example: isolating the background in a portrait photo and applying a blur filter to that area to simulate a shallower depth of field. That is, when you change or correct a frame in a graphics editor, you are doing post-processing. The same can be done in a game, in real time.

There are many different possibilities for processing an image after it has been rendered. Everyone has probably seen the numerous so-called graphic filters in image editors. This is exactly what post-filters are: blur, edge detection, sharpen, noise, smooth, emboss, and so on. As applied to real-time 3D rendering it works as follows: the entire scene is rendered into a special area, a render target, and after the main rendering this image is further processed with pixel shaders and only then output to the screen. Of the post-processing effects in games, bloom, motion blur and depth of field are used most often. There are many other post-effects: noise, flare, distortion, sepia, etc.
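A tiny Python sketch of the general scheme: the "scene" has already been rendered into an array (standing in for the render target), and a post-filter is applied to it as a whole before display. The sepia-style filter and the image contents are just assumptions.

```python
import numpy as np

def sepia_post_filter(frame):
    """Post-process a finished frame (H x W x RGB, values 0..1) into sepia tones."""
    gray = frame @ np.array([0.299, 0.587, 0.114])     # luminance of each pixel
    tint = np.array([1.0, 0.85, 0.6])                  # warm sepia tint
    return np.clip(gray[..., None] * tint, 0.0, 1.0)

render_target = np.random.rand(4, 4, 3)                # pretend this is the rendered scene
final_image = sepia_post_filter(render_target)         # the post-processing step
print(final_image.shape)
```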

Here are a couple of vivid examples of post-processing in game applications:

High Dynamic Range (HDR)

High Dynamic Range (HDR), as applied to 3D graphics, is rendering in a wide dynamic range. The essence of HDR is describing intensity and color with real physical quantities. The usual image description model is RGB, where all colors are represented as a sum of the primary colors red, green and blue with different intensities, stored as integer values from 0 to 255 and encoded with eight bits per channel. The ratio of the maximum intensity to the minimum that a particular model or device can display is called its dynamic range. The dynamic range of the RGB model is thus 256:1, or about 100:1 cd/m² (two orders of magnitude). This model of describing color and intensity is commonly referred to as Low Dynamic Range (LDR).

The possible LDR values are clearly not enough for all cases; a person is able to see a much larger range, especially at low light intensities, and the RGB model is too limited in such cases (and at high intensities too). The dynamic range of human vision is from 10⁻⁶ to 10⁸ cd/m², i.e. 100,000,000,000,000:1 (14 orders of magnitude). We cannot see the entire range at once: the range visible to the eye at any given moment is approximately 10,000:1 (four orders of magnitude). Vision adjusts to values from other parts of the light range gradually, through so-called adaptation, which is easily described by the situation of turning off the lights in a room at night: at first the eyes see very little, but over time they adapt to the changed lighting conditions and see much more. The same thing happens when moving back from a dark environment to a bright one.

So the dynamic range of the RGB description model is not enough to represent the images a person can see in reality; the model significantly narrows the possible light intensity values at the upper and lower ends of the range. The most common example given in HDR material is an image of a darkened room with a window onto a bright street on a sunny day. With the RGB model you can get either a correct rendering of what is outside the window or only of what is inside the room. Values above about 100 cd/m² are simply clipped in LDR, which is why it is difficult to render bright light sources aimed straight at the camera in 3D rendering.

The display devices themselves cannot yet be seriously improved, but it makes sense to abandon LDR in the calculations: you can use real physical values of intensity and color (or values linearly proportional to them) and output to the monitor the maximum it is capable of. The essence of the HDR representation is to use intensity and color values in real physical quantities (or linearly proportional ones) and to store them not as integers but as floating point numbers of high precision (for example, 16 or 32 bits). This removes the limitations of the RGB model and dramatically increases the dynamic range of the image. Any HDR image can then be shown on any display device (the same RGB monitor) at the highest quality it can achieve, using special algorithms.

HDR rendering allows the exposure to be changed after the image has been rendered. It makes it possible to simulate the adaptation of human vision (moving from bright open spaces into dark rooms and vice versa), allows physically correct lighting, and provides a unified solution for post-processing effects (glare, flares, bloom, motion blur). Image processing algorithms - color correction, gamma correction, motion blur, bloom and other post-processing methods - are all performed better on an HDR representation.
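A minimal Python sketch of the difference: the same scene values stored as floats (HDR) can still be re-exposed after rendering, while an 8-bit LDR buffer has already clipped everything above its maximum. All the numbers are illustrative.

```python
import numpy as np

# Physical-ish luminance values for four pixels: a dark corner, a wall,
# a window and the sun glare (far above what 0..255 can represent).
hdr_buffer = np.array([0.02, 0.6, 5.0, 80.0], dtype=np.float32)

# LDR storage: scale to 0..255 and clamp; the bright values are lost for good.
ldr_buffer = np.clip(hdr_buffer * 255, 0, 255).astype(np.uint8)

def expose(hdr, exposure):
    """Change exposure *after* rendering - only possible on the float HDR data."""
    return np.clip(hdr * exposure, 0.0, 1.0)

print(ldr_buffer)                 # window and sun are both clipped to 255
print(expose(hdr_buffer, 0.01))   # darker exposure: the clipped detail reappears
```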

Real-time 3D rendering applications (mostly games) started using HDR rendering not long ago, because it requires calculations and render target support in floating point formats, which first became available only on video chips with DirectX 9 support. The usual HDR rendering path in games is: render the scene into a floating point buffer, post-process the image in the extended color range (change contrast and brightness, color balance, glare and motion blur effects, lens flare and the like), then apply tone mapping to output the final HDR image to the LDR display device. Sometimes environment maps in HDR formats are used for static reflections on objects; the use of HDR for simulating dynamic refractions and reflections is very interesting, and dynamic maps in floating point formats can be used for this as well. To this you can add light maps, pre-calculated and stored in HDR format. Much of the above is done, for example, in Half-Life 2: Lost Coast.

HDR rendering is very useful for complex, higher-quality post-processing than conventional methods allow. The same bloom will look more realistic when calculated in the HDR representation. For example, Crytek's Far Cry uses standard HDR rendering techniques: the bloom filters introduced by Kawase and Reinhard's tone mapping operator.

Unfortunately, in some cases game developers hide behind the name HDR nothing more than a bloom filter calculated in the usual LDR range. And while most of what games do with HDR rendering right now is indeed a better-quality bloom, the benefits of HDR rendering are not limited to this one post-effect; it is simply the easiest one to do.

Other examples of using HDR rendering in real-time applications:


Tone Mapping

Tone mapping is the process of converting an HDR brightness range to the LDR range displayed by an output device such as a monitor or printer, since outputting HDR images to them requires converting the dynamic range and gamut of the HDR model to the corresponding LDR range, most commonly RGB. The brightness range represented in HDR is very wide: several orders of magnitude of absolute dynamic range at once, in a single scene. The range that can be reproduced on familiar output devices (monitors, TVs) is only about two orders of magnitude.

The conversion from HDR to LDR is called tone mapping; it is lossy and mimics the properties of human vision. The algorithms that perform it are usually called tone mapping operators. Operators divide the image brightness values into three groups: dark, medium and bright. Based on the estimated midtone brightness, the overall illumination is corrected and the brightness values of the scene pixels are redistributed to fit into the output range: dark pixels are brightened and bright pixels darkened. Then the brightest pixels of the image are scaled to the range of the output device or output view model. The following picture shows the simplest conversion of an HDR image to the LDR range, a linear transformation, while a more complex tone mapping operator, working as described above, has been applied to the fragment in the center:

It can be seen that only with non-linear tone mapping can you get the maximum detail in the image; if HDR is brought to LDR linearly, many small things are simply lost. There is no single correct tone mapping algorithm; there are several operators that give good results in different situations. Here is a visual example of two different tone mapping operators:

Together with HDR rendering, tone mapping has recently come into use in games. It became possible to optionally imitate properties of human vision: loss of sharpness in dark scenes, adaptation to new lighting conditions when moving from very bright areas to dark ones and vice versa, sensitivity to changes in contrast and color... This is what the imitation of vision's ability to adapt looks like in Far Cry. The first screenshot shows the image the player sees just after turning from a dark room towards a brightly lit open space, and the second shows the same view a couple of seconds later, after adaptation.
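As a concrete illustration of what a tone mapping operator does, here is a Python sketch comparing a plain linear scale with the simple Reinhard operator L / (1 + L), which was mentioned above in connection with Far Cry. The input luminances are made up.

```python
import numpy as np

hdr_luminance = np.array([0.01, 0.5, 2.0, 20.0, 500.0])   # hypothetical scene values

def linear_tonemap(l):
    """Naive linear mapping: divide by the maximum; most detail ends up near zero."""
    return l / l.max()

def reinhard_tonemap(l):
    """Reinhard operator: L / (1 + L), compresses highlights, keeps midtone detail."""
    return l / (1.0 + l)

print(linear_tonemap(hdr_luminance))    # tiny values for everything but the brightest pixel
print(reinhard_tonemap(hdr_luminance))  # dark and mid values remain distinguishable
```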

Bloom

Bloom is one of the cinematic post-processing effects; it makes the brightest parts of the image even brighter. It reproduces the effect of very bright light, which manifests itself as a glow around bright surfaces: after applying the bloom filter, such surfaces not only receive additional brightness, but the light from them (a halo) also partially affects the darker areas adjacent to the bright surfaces in the frame. The easiest way to show this is with an example:

In 3D graphics, the bloom filter is implemented with additional post-processing: a frame blurred with a blur filter (either the entire frame or its individual bright areas; the filter is usually applied several times) is blended with the original frame. One of the bloom post-filter algorithms most commonly used in games and other real-time applications works as follows (a rough sketch of this pipeline follows the list):

  • The scene is rendered into the framebuffer, the intensity of the glow (glow) of objects is recorded in the alpha channel of the buffer.
  • The framebuffer is copied to a special texture for processing.
  • The texture resolution is reduced, for example, by a factor of 4.
  • Smoothing filters (blur) are applied to the image several times, based on the intensity data recorded in the alpha channel.
  • The resulting image is mixed with the original frame in the framebuffer and the result is displayed on the screen.
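Here is a rough Python sketch of that pipeline, working on a single luminance image instead of a full framebuffer with an alpha channel: bright areas are extracted (a stand-in for the glow intensity stored in alpha), downsampled, blurred a few times, upsampled and added back to the original. The threshold and filter sizes are arbitrary illustration values.

```python
import numpy as np

def box_blur(img):
    """Cheap blur: average each pixel with its shifted copies."""
    acc = img.copy()
    for shift in (-1, 1):
        acc += np.roll(img, shift, axis=0) + np.roll(img, shift, axis=1)
    return acc / 5.0

def bloom(frame, threshold=0.8, blur_passes=3):
    bright = np.where(frame > threshold, frame, 0.0)   # stand-in for the glow mask
    small = bright[::4, ::4]                           # downsample by a factor of 4
    for _ in range(blur_passes):
        small = box_blur(small)                        # several smoothing passes
    halo = np.repeat(np.repeat(small, 4, axis=0), 4, axis=1)  # upsample back
    return np.clip(frame + halo, 0.0, 1.0)             # blend with the original frame

frame = np.zeros((16, 16))
frame[8, 8] = 1.0                                      # one very bright pixel
result = bloom(frame)
print(result.max(), (result > 0).sum())                # the glow spreads around it
```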

Like other types of post-processing, bloom works best together with rendering in high dynamic range (HDR). Here are additional examples of frames processed with a bloom filter, from real-time 3D applications:

Motion Blur

Motion blur occurs in photography and film because objects move in the frame during the exposure time, while the shutter is open. The frame captured by the camera (photo or film) does not show a snapshot taken instantly, with zero duration. Due to technological limitations, the frame represents a certain interval of time; during that interval the objects in the frame can move some distance, and if they do, all positions of a moving object during the time the shutter is open are represented in the frame as a blurred image along the motion vector. This happens if the object moves relative to the camera or the camera relative to the object, and the amount of blur gives us an idea of how fast the object is moving.

In 3D animation, at any particular moment of time (frame) objects are located at specific coordinates in 3D space, as if shot by a virtual camera with an infinitely fast shutter. As a result, there is no blur like that produced by a camera or by the human eye looking at fast-moving objects, and the motion looks unnatural and unrealistic. Consider a simple example: several spheres rotating around an axis. Here is what this movement looks like with and without blur:

From the image without blur you cannot even tell whether the spheres are moving, whereas motion blur gives a clear idea of the speed and direction of their movement. Incidentally, the lack of motion blur is also the reason why motion in games at 25-30 frames per second seems jerky, even though movies and video at the same frame rates look fine. To compensate for the lack of motion blur, either a high frame rate (60 frames per second or more) or additional image processing that emulates the motion blur effect is desirable. This is used both to improve the smoothness of animation and to add photo and film realism at the same time.

The simplest motion blur algorithm for real-time applications is to blend the current frame with data rendered in previous frames of the animation. There are also more efficient and modern motion blur methods that do not use previous frames but are instead based on the motion vectors of the objects in the frame, again adding just one more post-processing step to the rendering process. The blur effect can be either full-screen (usually done as post-processing) or applied to individual, fastest-moving objects.
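A Python sketch of the simplest variant mentioned above: the displayed image is a blend of the current frame and an accumulation of previous frames. The blend factor and the frame contents are arbitrary.

```python
import numpy as np

def motion_blur_step(accumulator, current_frame, blend=0.3):
    """Blend the new frame into the accumulated image: a higher 'blend' means less blur."""
    return accumulator * (1.0 - blend) + current_frame * blend

# A bright 'object' moving one pixel to the right each frame.
accumulator = np.zeros((1, 8))
for frame_index in range(4):
    frame = np.zeros((1, 8))
    frame[0, frame_index] = 1.0
    accumulator = motion_blur_step(accumulator, frame)

print(np.round(accumulator, 3))   # a fading trail along the motion path
```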

Possible applications of the motion blur effect in games: all racing games (to create the effect of very high speed of movement and for use when watching TV-like replays), sports games (the same replays, but in the game itself, blur can be applied to very fast moving objects, like a ball or puck), fighting games (fast movements of melee weapons, arms and legs), many other games (during in-game 3D cutscenes on the engine). Here are some examples of applying motion blur post-effect from games:

Depth Of Field (DOF)

Depth of field, in short, is the blurring of objects depending on their position relative to the camera's focus. In real life, in photographs and in movies we do not see all objects equally sharply; this is due to the structure of the eye and to the design of the optics of photo and movie cameras. Camera optics have a certain focus distance: objects located at that distance from the camera are in focus and look sharp in the picture, while objects farther from or closer to the camera look blurred, with sharpness decreasing gradually as the distance increases or decreases.

As you may have guessed, this is a photograph, not a rendering. In computer graphics, by contrast, every object of the rendered image is perfectly sharp, since lenses and optics are not simulated in the calculations. Therefore, to achieve photo and film realism, special algorithms have to be applied that do something similar for computer graphics. These techniques simulate the effect of different focus for objects at different distances.

One common real-time technique is to blend the original frame with a blurred version of it (several passes of a blur filter) based on the depth data of the pixels in the image. In games the DOF effect has several uses, such as cutscenes rendered on the game engine and replays in sports and racing games. Examples of real-time depth of field:
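A minimal Python sketch of that blend: for each pixel, the sharp frame and a pre-blurred copy are mixed according to how far the pixel's depth is from the focus distance. The depth values, focus distance and the "blurred" frame itself are illustrative assumptions.

```python
import numpy as np

def depth_of_field(sharp, blurred, depth, focus_distance=10.0, focus_range=5.0):
    """Blend sharp and blurred frames: pixels far from the focus get more blur."""
    blur_amount = np.clip(np.abs(depth - focus_distance) / focus_range, 0.0, 1.0)
    return sharp * (1.0 - blur_amount) + blurred * blur_amount

sharp = np.random.rand(4, 4)
blurred = np.full((4, 4), sharp.mean())        # stand-in for a blur-filtered frame
depth = np.array([[2.0, 8.0, 10.0, 11.0],
                  [3.0, 9.0, 10.0, 12.0],
                  [4.0, 10.0, 10.0, 20.0],
                  [5.0, 10.0, 11.0, 30.0]])    # per-pixel depth buffer values
print(depth_of_field(sharp, blurred, depth))
```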

Level Of Detail (LOD)

Level of detail in 3D applications is a method of reducing the complexity of rendering a frame, reducing the total number of polygons, textures and other resources in a scene, an overall reduction of its complexity. A simple example: the main character's model consists of 10,000 polygons. In cases where it is close to the camera in the scene being processed, it is important that all the polygons are used; but at a very large distance from the camera it will occupy only a few pixels in the final image, and there is no point in processing all 10,000 polygons. Perhaps hundreds of polygons would be enough in this case, or even a couple, together with a specially prepared texture, for roughly the same appearance of the model. Accordingly, at medium distances it makes sense to use a model with more triangles than the simplest one and fewer than the most complex one.

The LOD method is commonly used in modeling and rendering 3D scenes, employing several levels of complexity (geometric or otherwise) for objects in proportion to their distance from the camera. It is often used by game developers to reduce the number of polygons in a scene and improve performance. Close to the camera, models with maximum detail (triangle count, texture size, texturing complexity) are used for the highest possible image quality; conversely, as models recede from the camera, models with fewer triangles are used to increase rendering speed. Changing the complexity, in particular the number of triangles in a model, can happen automatically from one maximally complex 3D model, or it can use several pre-prepared models with different levels of detail. By using less detailed models at greater distances, the rendering cost is reduced with almost no degradation of the overall image detail.
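A small Python sketch of a distance-based switch between pre-prepared models, as described above. The distance thresholds and model names are hypothetical.

```python
def select_lod(distance_to_camera, lod_levels):
    """Pick the first LOD whose maximum distance covers the object's distance."""
    for max_distance, model_name in lod_levels:
        if distance_to_camera <= max_distance:
            return model_name
    return lod_levels[-1][1]                 # farther than everything: cheapest model

# Hypothetical LOD chain for a character: (maximum distance, model).
lod_levels = [(10.0, "hero_10000_polys"),
              (40.0, "hero_2500_polys"),
              (120.0, "hero_400_polys"),
              (float("inf"), "hero_billboard")]

for d in (5.0, 25.0, 90.0, 500.0):
    print(d, "->", select_lod(d, lod_levels))
```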

The method is especially effective when there are many objects in the scene located at different distances from the camera. Take a sports game, such as a hockey or football simulator: low-poly character models are used when they are far from the camera, and as they get closer the models are replaced by others with more polygons. This example is very simple, and it shows the essence of the method with two levels of detail per model, but nothing prevents creating several levels of detail so that switching between them is not too noticeable and the details gradually "grow" as the object approaches.

In addition to the distance from the camera, other factors can matter for LOD: the total number of objects on screen (when there are one or two characters in the frame, complex models are used; when there are 10-20, simpler ones), or the frame rate (FPS thresholds are set at which the level of detail changes; for example, below 30 FPS we reduce the complexity of the models on screen, and at 60 FPS we increase it). Other possible factors affecting the level of detail are the speed of the object's movement (you are unlikely to get a good look at a rocket in flight, but you can easily examine a snail) and the importance of the character from the gameplay point of view (in football, for example, the player model you control can use more complex geometry and textures, since you see it closest and most often). It all depends on the desires and capabilities of the particular developer. The main thing is not to overdo it: frequent and noticeable changes in the level of detail are annoying.

Let me remind you that level of detail does not have to apply only to geometry; the method can also be used to save other resources: in texturing (although video chips already use mipmapping, it sometimes makes sense to swap textures on the fly for others with different detail), in lighting techniques (nearby objects are lit with a complex algorithm, distant ones with a simple one), in texturing techniques (complex parallax mapping on near surfaces, normal mapping on distant ones), and so on.

It is not so easy to show an example from a game: on the one hand, LOD is used to some extent in almost every game; on the other hand, it is not always possible to show it clearly, otherwise there would be little point in LOD itself.

But in this example, it is still clear that the closest car model has the maximum detail, the next two or three cars are also very close to this level, and all the distant ones have visible simplifications, here are just the most significant ones: there are no rear-view mirrors, license plates, windshield wipers and additional lighting. And from the farthest model there is not even a shadow on the road. This is the level of detail algorithm in action.

Global Illumination

Realistic lighting of a scene is difficult to simulate: in reality every ray of light is reflected and refracted many times, and the number of these reflections is unlimited. In 3D rendering, the number of reflections depends heavily on the available computing power; any scene calculation is a simplified physical model, and the resulting image only approaches realism.

Lighting algorithms can be divided into two models: direct (local) illumination and global illumination. The local lighting model only computes direct illumination - light from light sources up to its first intersection with an opaque surface - and does not take into account the interaction of objects with each other. The model tries to compensate for this by adding background (ambient) lighting, but that is the crudest approximation: a heavily simplified stand-in for all indirect light sources that sets the color and intensity of objects not lit directly.

Classic ray tracing, too, computes the illumination of surfaces only from direct rays of the light sources, so any surface must be directly lit to be visible. This is not enough for photorealistic results: besides direct illumination, secondary illumination from rays reflected off other surfaces must be taken into account. In the real world, light rays bounce off surfaces multiple times until they die out completely. Sunlight passing through a window illuminates the entire room, although its rays cannot directly reach all surfaces. The brighter the light source, the more bounces its light survives before fading out. The color of a reflective surface also affects the color of the reflected light; for example, a red wall casts a reddish tint onto an adjacent white object. Here is the clear difference between a calculation that ignores secondary lighting and one that takes it into account:

In the global illumination model, lighting is calculated taking into account the influence of objects on each other: multiple reflections and refractions of light rays off object surfaces, caustics and subsurface scattering are all accounted for. This model produces a more realistic picture but complicates the process and requires significantly more resources. There are several global illumination algorithms; we will briefly look at radiosity (the calculation of indirect illumination) and photon mapping (global illumination based on photon maps precomputed with ray tracing). There are also simplified methods for simulating indirect lighting, such as changing the overall brightness of a scene depending on the number and brightness of lights in it, or placing a large number of point lights around the scene to simulate reflected light, but these are still far from a real GI algorithm.

The radiosity algorithm computes the secondary reflections of light from one surface to another, as well as from the environment onto objects. Rays from the light sources are traced until their energy drops below a certain level or they reach a certain number of bounces. This is a common GI technique; the calculations are usually done before rendering, and their results can then be used for real-time rendering. The basic ideas of radiosity come from the physics of heat transfer. The surfaces of objects are divided into small areas called patches, and it is assumed that reflected light is scattered evenly in all directions. Instead of tracing every ray from the lights, an averaging technique is used: the energy emitted by the lights is distributed proportionally among the surface patches.

Another method for calculating global illumination, proposed by Henrik Wann Jensen, is photon mapping. It is another ray-tracing-based global illumination algorithm used to simulate the interaction of light rays with scene objects. The algorithm computes secondary reflections of rays, refraction of light through transparent surfaces, and diffuse reflections. The illumination of surface points is calculated in two passes. The first performs direct tracing of light rays, including their secondary reflections; this is a preliminary pass performed before the main rendering. It computes the energy of photons travelling from the light source to the objects in the scene. When a photon reaches a surface, the intersection point, direction and energy of the photon are stored in a cache called a photon map. Photon maps can be saved to disk for later use, without having to recompute them every frame. Photon bounces are traced until a certain number of reflections is reached or the photon's energy falls below a threshold. In the second pass, the illumination of the scene's pixels by direct rays is calculated, and the data stored in the photon maps is added to the direct illumination energy.

Global illumination calculations that use a large number of secondary reflections take much longer than direct illumination calculations. There are techniques for computing radiosity in real time on the latest generations of programmable video chips, but for now the scenes for which real-time global illumination is calculated have to be fairly simple, and many simplifications are made in the algorithms.

What has been used for a long time, though, is statically precomputed global illumination, which is acceptable for scenes whose light sources and large, lighting-defining objects do not move. Since the global illumination calculation does not depend on the position of the observer, precomputed illumination values can be reused as long as the positions of such objects and the parameters of the light sources do not change. Many games do this by storing the GI calculation results in the form of lightmaps.

There are also acceptable algorithms for simulating global illumination dynamically. For example, there is a simple real-time method for computing the indirect illumination of an object in a scene: render all objects with reduced detail (except the one whose lighting is being computed) into a low-resolution cube map (which can also be used for dynamic reflections on the object's surface), then filter that texture (several passes of a blur filter), and use the data from the resulting texture to light the object in addition to direct lighting. In cases where even this dynamic calculation is too expensive, one can make do with static radiosity maps. An example from the game MotoGP 2 clearly shows the benefit of even such a simple imitation of GI.
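
To illustrate the last step of that scheme, here is a minimal GLSL fragment-shader sketch (all names are hypothetical, and it assumes the low-resolution cube map has already been rendered and blurred as described) that adds the blurred surroundings as an indirect term on top of direct lighting:

    #version 330 core

    in vec2 vTexCoord;
    in vec3 vNormalWS;       // world-space normal of the shaded object
    in vec3 vDirectLight;    // direct lighting computed earlier (e.g. per vertex)

    uniform sampler2D uAlbedoMap;
    uniform samplerCube uBlurredEnvMap;  // low-res, blurred cube map of the surroundings
    uniform float uIndirectStrength;     // how strongly the fake GI contributes

    out vec4 fragColor;

    void main()
    {
        vec3 albedo = texture(uAlbedoMap, vTexCoord).rgb;

        // Sample the blurred surroundings along the surface normal: a crude
        // approximation of light bounced onto the object from nearby geometry.
        vec3 indirect = texture(uBlurredEnvMap, normalize(vNormalWS)).rgb;

        vec3 color = albedo * (vDirectLight + indirect * uIndirectStrength);
        fragColor = vec4(color, 1.0);
    }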



This tutorial will help you install shaders in Minecraft and thereby improve the game world by adding dynamic shadows, grass swaying in the wind, realistic water and much more.

It should be noted right away that shaders put quite a heavy load on the system, so if you have a weak or integrated video card, we recommend that you refrain from installing this mod.

Installation consists of two stages: first you need to install the shader mod, and then add shader packs to it.

STEP #1 - Installing the shader mod

  1. Download and install Java
  2. Install OptiFine HD
    or ShaderMod;
  3. Unpack the resulting archive to any place;
  4. We launch the jar file, because he is an installer;
  5. The program will show you the path to the game, if everything is correct, click Yes, Ok, Ok;
  6. Go to .minecraft and create a folder there shaderpacks;
  7. We go into the launcher and see in the line a new profile with the name "ShadersMod", if not, then select it manually.
  8. Next you need to download shaderpacks

STEP #2 - Installing a shader pack

  1. Download the shaderpack you are interested in (list at the end of the article)
  2. Press Win+R and enter %appdata% to open the folder that contains .minecraft;
  3. Go to .minecraft/shaderpacks. If there is no such folder, create it;
  4. Move or extract the shader archive into .minecraft/shaderpacks. The path should look like this: .minecraft/shaderpacks/SHADER_FOLDER_NAME/shaders/[.fsh and .vsh files inside];
  5. Launch Minecraft and go to Settings > Shaders. You will see a list of available shaders; choose the one you want;
  6. In the shader settings, enable "tweakBlockDamage" and disable "CloudShadow" and "OldLighting".

Sonic Ether's Unbelievable Shaders
Sildur's shaders
Chocapic13's Shaders
sensi277's yShaders
MrMeep_x3's Shaders
Naelego's Cel Shaders
RRe36's Shaders
DeDelner's CUDA Shaders
bruceatsr44's Acid Shaders
Beed28's Shaders
Ziipzaap's Shader Pack
robobo1221's Shaders
dvv16's Shaders
Stazza85 super Shaders
hoo00's Shaders pack B
Regi24's Waving Plants
MrButternuss ShaderPack
DethRaid's Awesome Graphics On Nitro Shaders
Edi's Shader ForALLPc's
CrankerMan's TME Shaders
Kadir Nck Shader (for skate702)
Werrus's Shaders
Knewtonwako's Life Nexus Shaders
CYBOX shader pack
CrapDeShoes CloudShade Alpha
AirLoocke42 Shader
CaptTatsu's BSL Shaders
Triliton's shaders
ShadersMcOfficial's Bloominx Shaders (Chocapic13's Shaders)
dotModded's Continuum Shaders
Qwqx71's Lunar Shaders (Chocapic13's shader)

– Igor (Administrator)

In this article, I will explain in simple terms what shaders are and why they are needed.

Requirements for the quality of computer graphics are growing day by day. Previously, 2D graphics was considered quite sufficient and managed to capture the imagination of millions of people. Nowadays, much more attention is paid to visualization.

However, as today's 3D graphics took shape, many developers ran into the problem that the built-in filters and features of video cards (GPUs) were simply not enough. For example, they often needed effects of their own. As a result, a lot had to be done manually, with the calculations carried out on the main processor (CPU), which inevitably hurt performance (while the video card, as they say, sat idle).

So, over time, technologies such as shaders appeared, allowing the power of the GPU to be used for these specific needs.

What are shaders and why are they needed?

A shader is a computer program (code) that can be run on the processors of a video card without needlessly wasting the power of the central processor. Moreover, pipelines can be built from shaders (applying them sequentially). That is, the same shader can be applied to different kinds of graphic objects, which greatly simplifies the process of creating animation.

Initially, video cards provided three types of shaders: vertex (for effects on individual vertices - for example, to create waves or render grass), geometry (for whole primitives - for example, to create silhouettes) and pixel (for filtering areas of the image - for example, fog). Accordingly, there were three types of specialized processors on the board. Later this division was abandoned and all video card processors became universal (supporting all three types).

Reducing the overall load on the CPU is not the only reason to be able to create your own shaders. It is worth understanding that many games and videos reuse the same functionality. For example, why write water effects from scratch in dozens of similar programs if you can use ready-made tools such as OpenGL or DirectX? They provide many already implemented shaders and a more convenient way to write your own (there is no need to write low-level commands for the GPU).

That is, whereas earlier creating even the simplest animation or game required significant knowledge, today it is a feasible task for many.

What is the benefit of the shader approach?

There is some confusion around shaders, since there are different language standards for different libraries (GLSL for OpenGL, HLSL for DirectX, and so on), not to mention that video card manufacturers themselves may support different features. However, the advantage of using them is easy to appreciate by looking at the picture above comparing DirectX 9 and DirectX 10 rendering.

Thus, if you use shaders from a well-known library, it is often enough for the next version to be released for quality to improve by itself. Of course, there are nuances here, such as compatibility, support for newly introduced specialized instructions, and so on, but still.

In addition to graphics, the shader approach is useful for ordinary users in the following ways:

1. The speed and performance of the computer increases (since the central processor no longer has to calculate graphics instead of the GPU).

This is a very common question among inquisitive gamers and novice game creators.

A shader (from the English "shader", a shading program) is a program for the video card used in 3D graphics to determine the final characteristics of an object or image. It may include a description of light absorption and scattering, texture mapping, reflection and refraction, shading, surface displacement and many other characteristics.

Shaders are, so to speak, small "scripts for the video card" that make it quite easy to implement a wide variety of special effects.

They come in pixel shaders (which work with images - either the whole screen or textures) and vertex shaders (which work with 3D objects). For example, pixel shaders are used for effects such as bump mapping, parallax mapping, sun rays (sunshafts) a la Crysis, depth-of-field blur, plain motion blur, animated textures (water, lava, ...), HDR, anti-aliasing, shadows (using shadow maps) and so on. Vertex shaders animate grass, characters and trees, make waves on the water (the large ones), and so on. The more complex (higher quality, more modern) the effect, the more instructions it needs in the shader code. Shaders of different versions (1.1 - 5.0) support different numbers of instructions: the higher the version, the more instructions can be used. Because of this, some techniques simply cannot be implemented on the lowest shader versions. For example, Dead Space 2 requires shader version 3 (both pixel and vertex) precisely because its lighting model can only be implemented on shader version 3 and higher.

Shader types

Depending on the stage of the pipeline, shaders are divided into several types: vertex, fragment (pixel) and geometry. And in the newest pipelines there are also tessellation shaders. We will not discuss the graphics pipeline in detail here; I am still deciding whether to write a separate article about it for those who want to study shaders and graphics programming. Write in the comments if you are interested, and I will know whether it is worth spending the time.

Vertex shader:
Vertex shaders animate characters, grass and trees, create waves on the water and do almost everything else of that kind. In a vertex shader, the programmer has access to the data associated with vertices, for example: the coordinates of the vertex in space, its texture coordinates, its color and its normal vector.

Geometry shader:
Geometry shaders can create new geometry and can be used for particle creation, adjusting model detail on the fly, shaping silhouettes, and more. Unlike vertex shaders, they can process not just one vertex but a whole primitive. A primitive can be a segment (2 vertices) or a triangle (3 vertices), and if information about adjacent vertices (adjacency) is available, up to 6 vertices can be processed for a triangular primitive.

Pixel shader:
Pixel shaders perform texture mapping, lighting, and various texture effects such as reflection, refraction, fog, bump mapping, etc. Pixel shaders are also used for post-effects. A pixel shader works with fragments of the raster image and with textures: it processes data associated with pixels (for example color, depth, texture coordinates). The pixel shader is used at the final stage of the graphics pipeline to form a fragment of the image.

Bottom line: a shader is essentially an effect applied to a picture, much like the filters or patterns you apply to photos on your phone.

"itemprop="image">

"What are shaders?" is a very common question of curious players and novice game developers. In this article, I will tell you intelligibly and clearly about these terrible shaders.

I consider computer games to be the engine of progress towards photorealistic images in computer graphics, so let's talk about what "shaders" are in the context of video games.

Before the first graphics accelerators, all the work of rendering video game frames was done by the poor CPU.

Drawing a frame is actually quite a chore: you take the "geometry" - the polygonal models (the world, characters, weapons, etc.) - and rasterize it. What does rasterize mean? The 3D model consists of tiny triangles, which the rasterizer turns into pixels (that is, "rasterize" means "turn into pixels"). After rasterization, texture data, lighting parameters, fog and so on are used to compute each resulting pixel of the game frame that will be shown to the player.

So, the central processing unit (CPU - Central Processing Unit) is too smart a guy to be forced into such a routine. Instead, it is logical to dedicate a separate hardware module to offload the CPU so it can do more important intellectual work.

That hardware module became the graphics accelerator, or video card (GPU - Graphics Processing Unit). Now the CPU prepares the data and loads a colleague with the routine work. And considering that the GPU is not just one colleague but a whole crowd of minion cores, it copes with this work easily.

But we have not yet received an answer to the main question: What are shaders? Wait, I'm getting to this.

Good, interesting graphics close to photorealism required video card developers to implement many algorithms at the hardware level: shadows, lights, highlights and so on. This approach - algorithms hardwired into the hardware - is called the fixed-function pipeline, and where high-quality graphics are required it is no longer found. Its place has been taken by the programmable pipeline.

Players' demands - "Come on, give us some good graphics! Surprise us!" - pushed game developers (and, accordingly, video card manufacturers) towards ever more complex algorithms, until at some point the hardwired hardware algorithms were simply no longer enough for them.

It was time for video cards to become more intelligent. The decision was made to let developers program the GPU's pipeline stages with their own arbitrary algorithms. That is, game developers and graphics programmers could now write programs for video cards.

And now, finally, we have reached the answer to our main question.

"What are shaders?"

A shader (from the English "shader", a shading program) is a program for the video card used in three-dimensional graphics to determine the final parameters of an object or image. It may include a description of light absorption and scattering, texture mapping, reflection and refraction, shading, surface displacement and many other parameters.

What can shaders do? Here is an example of an effect you can get: a water shader applied to a sphere.

Graphics pipeline

The advantage of the programmable pipeline over its predecessor is that programmers can now create their own algorithms instead of using a hardwired set of options.

At first, video cards were equipped with several specialized processors supporting different sets of instructions, and shaders were divided into three types depending on which processor would execute them. Then video cards began to be equipped with universal processors supporting the instruction sets of all three types of shaders. The division of shaders into types has been kept to describe a shader's purpose.

Besides graphics tasks, such intelligent video cards made it possible to perform general-purpose (non-graphics) computations on the GPU.

Full support for shaders first appeared in video cards of the GeForce 3 series, but the rudiments were implemented as far back as the GeForce 256 (in the form of register combiners).

Types of shaders

Depending on the stage of the pipeline, shaders are divided into several types: vertex, fragment (pixel) and geometry. And in the latest pipelines there are also tessellation shaders. We will not discuss the graphics pipeline in detail; I am still thinking about writing a separate article about it, for those who decide to study shaders and graphics programming. Write in the comments if you are interested, and I will know whether it is worth spending the time.

Vertex shader

Vertex shaders animate characters, grass and trees, create waves on the water, and do many other things. In a vertex shader, the programmer has access to the data associated with vertices, for example: the coordinates of the vertex in space, its texture coordinates, its color and its normal vector.
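
As a toy illustration, here is a minimal GLSL vertex shader (the uniform names are invented for the example) that displaces the vertices of a water mesh with a simple sine wave - the kind of effect mentioned above:

    #version 330 core

    layout(location = 0) in vec3 aPosition;
    layout(location = 1) in vec2 aTexCoord;

    uniform mat4 uModelViewProjection;
    uniform float uTime;        // seconds since the start of the animation
    uniform float uWaveHeight;  // wave amplitude

    out vec2 vTexCoord;

    void main()
    {
        vec3 pos = aPosition;

        // Simple animated waves: shift each vertex vertically
        // depending on its position and the current time.
        pos.y += sin(pos.x * 2.0 + uTime) * cos(pos.z * 2.0 + uTime) * uWaveHeight;

        vTexCoord = aTexCoord;
        gl_Position = uModelViewProjection * vec4(pos, 1.0);
    }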

Geometric shader

Geometry shaders are capable of creating new geometry and can be used to create particles, modify model detail on the fly, create silhouettes, and more. Unlike vertex shaders, they can process not just a single vertex but a whole primitive. A primitive can be a segment (two vertices) or a triangle (three vertices), and if information about adjacent vertices (adjacency) is available, up to six vertices can be processed for a triangular primitive.
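
For a concrete, if simplified, example, here is a GLSL geometry shader sketch that expands each incoming point into a camera-facing quad - the classic particle-sprite trick. It assumes the vertex shader has already transformed the points into view space, and all names are made up for the example:

    #version 330 core

    // Each incoming point becomes a small screen-facing quad (a particle sprite).
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    uniform mat4 uProjection;  // projection applied after the view transform
    uniform float uHalfSize;   // half the particle size in view space

    out vec2 vTexCoord;

    void main()
    {
        // The vertex shader is assumed to output view-space positions here.
        vec4 center = gl_in[0].gl_Position;

        vec2 offsets[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                                 vec2(-1.0,  1.0), vec2(1.0,  1.0));

        for (int i = 0; i < 4; ++i) {
            vec4 corner = center + vec4(offsets[i] * uHalfSize, 0.0, 0.0);
            gl_Position = uProjection * corner;
            vTexCoord   = offsets[i] * 0.5 + 0.5;
            EmitVertex();
        }
        EndPrimitive();
    }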

Pixel shader

Pixel shaders perform texture mapping, lighting, and various texture effects such as reflection, refraction, fog, bump mapping, etc. Pixel shaders are also used for post-effects.

The pixel shader works with fragments of the raster image and with textures: it processes data associated with pixels (e.g. color, depth, texture coordinates). The pixel shader is used at the last stage of the graphics pipeline to form a fragment of the image.
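
A minimal GLSL fragment shader sketch along these lines (names invented for the example): it samples a texture, applies a simple diffuse term and mixes in distance-based fog:

    #version 330 core

    in vec2 vTexCoord;
    in vec3 vNormalWS;
    in float vViewDistance;  // distance from the camera, used for fog

    uniform sampler2D uDiffuseMap;
    uniform vec3 uLightDirWS;   // normalized direction towards the light
    uniform vec3 uFogColor;
    uniform float uFogDensity;

    out vec4 fragColor;

    void main()
    {
        // Texture mapping plus a very simple diffuse (Lambert) term.
        vec3 albedo = texture(uDiffuseMap, vTexCoord).rgb;
        float diffuse = max(dot(normalize(vNormalWS), uLightDirWS), 0.0);
        vec3 lit = albedo * (0.2 + 0.8 * diffuse);  // small constant ambient

        // Exponential fog based on the distance from the camera.
        float fogFactor = clamp(exp(-uFogDensity * vViewDistance), 0.0, 1.0);
        fragColor = vec4(mix(uFogColor, lit, fogFactor), 1.0);
    }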

What are shaders written in?

Initially, shaders were written in an assembly-like language, but later high-level C-like shader languages appeared, such as Cg, GLSL and HLSL.

Such languages are much simpler than C, because the tasks they solve are much simpler. The type system in these languages reflects the needs of graphics programmers, so they provide special data types: matrices, samplers, vectors, and so on.
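
For example, here are a few typical GLSL declarations that show these graphics-oriented types in a tiny fragment shader (the names are arbitrary):

    #version 330 core

    uniform mat4 uModelViewProjection;  // 4x4 matrix
    uniform vec3 uLightColor;           // three-component vector
    uniform sampler2D uDiffuseMap;      // texture sampler

    in vec2 vTexCoord;
    out vec4 fragColor;

    void main()
    {
        // Vector, matrix and texture operations are built into the language.
        vec4 texel = texture(uDiffuseMap, vTexCoord);
        fragColor  = vec4(texel.rgb * uLightColor, texel.a);
    }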

RenderMan

Everything we discussed above applies to real-time graphics. But there is also non-real-time graphics. What is the difference? Real time means here and now: delivering 60 frames per second in a game is a real-time process. Spending several minutes rendering one complex frame of a cutting-edge animated film is not. The point is time.

For example, graphics of the quality seen in the latest Pixar animated films cannot yet be obtained in real time. Huge render farms compute light simulations using completely different, very expensive algorithms that produce almost photorealistic images.

Super-realistic graphics in Piper, Pixar's short film about a sandpiper

For example, look at this cute cartoon: the grains of sand, the bird's feathers, the waves - everything looks incredibly real.

*The video may be blocked on YouTube. If it doesn't open, search for "pixar sandpiper" - a short cartoon about a brave sandpiper, very cute and fluffy. It is touching and shows how good computer graphics can be.

This is RenderMan from Pixar at work. Its shading language became the first shader programming language, and the RenderMan API is the de facto standard for professional rendering, used throughout Pixar and beyond.

Helpful information

Now you know what shaders are, but besides shaders there are other topics in game development and computer graphics that will surely interest you:

  • … - a technique for creating stunning effects in modern video games; an overview article and video lessons on creating effects in Unity3d.
  • … - if you are thinking about developing video games as a professional career or as a hobby, this article contains a great set of recommendations on where to start, what books to read, and more.

If there are any questions

As usual, if you have any questions, ask them in the comments - I will always answer. I will be very grateful for any kind words or corrections of errors.
