
A Beginner's Guide to Coding Graphics Shaders

This post is part of a series called A Beginner's Guide to Coding Graphics Shaders.

Learning to write graphics shaders is learning to leverage the power of the GPU, with its thousands of cores all running in parallel. It's a kind of programming that requires a different mindset, but unlocking its potential is worth the initial trouble. 

Virtually every modern graphics simulation you see is powered in some way by code written for the GPU, from the realistic lighting effects in cutting edge AAA games to 2D post-processing effects and fluid simulations. 

A scene in Minecraft, before and after applying a few shaders.

The Aim of This Guide

Shader programming sometimes comes off as an enigmatic black magic and is often misunderstood. There are lots of code samples out there that show you how to create incredible effects, but offer little or no explanation. This guide aims to bridge that gap. I'll focus more on the basics of writing and understanding shader code, so you can easily tweak, combine, or write your own from scratch!

This is a general guide, so what you learn here will apply to anything that can run shaders.

So What is a Shader?

A shader is simply a program that runs in the graphics pipeline and tells the computer how to render each pixel. These programs are called shaders because they're often used to control lighting and shading effects, but there's no reason they can't handle other special effects. 

Shaders are written in a special shading language. Don't worry, you don't have to go out and learn a completely new language; we will be using GLSL (OpenGL Shading Language) which is a C-like language. (There are a bunch of shading languages out there for different platforms, but since they're all adapted to run on the GPU, they're all very similar.)

Note: This article is exclusively about fragment shaders. If you're curious about what else is out there, you can read about the various stages in the graphics pipeline on the OpenGL Wiki.

Let's Jump In!

We'll use ShaderToy for this tutorial. This lets you start programming shaders right in your browser, without the hassle of setting anything up! (It uses WebGL for rendering, so you'll need a browser that can support that.) Creating an account is optional, but handy for saving your code.

Note: ShaderToy is in beta at the time of writing this article. Some small UI/syntax details may be slightly different. 

Upon clicking New Shader, you should see something like this:

Your interface might look slightly different if you're not logged in.

The little black arrow at the bottom is what you click to compile your code.

What's Happening?

I'm about to explain how shaders work in one sentence. Are you ready? Here goes!

A shader's sole purpose is to return four numbers: r, g, b, and a.

That's all it ever does or can do. The function you see in front of you runs for every single pixel on screen. It returns those four color values, and that becomes the color of the pixel. This is what's called a Pixel Shader (sometimes referred to as a Fragment Shader). 

With that in mind, let's try turning our screen a solid red. The rgba (red, green, blue, and "alpha", which defines the transparency) values each go from 0 to 1, so all we need to do is return r, g, b, a = 1, 0, 0, 1. ShaderToy expects the final pixel color to be stored in fragColor.

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    fragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```

Congratulations! This is your very first working shader!

Challenge: Can you change it to a solid grey color?

vec4 is just a data type, so we could have declared our color as a variable, like so:

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec4 solidRed = vec4(1.0, 0.0, 0.0, 1.0);
    fragColor = solidRed;
}
```

This isn't very exciting, though. We have the power to run code on hundreds of thousands of pixels in parallel and we're setting them all to the same color. 

Let's try to render a gradient across the screen. Well, we can't do much without knowing a few things about the pixel we're affecting, such as its location on screen...

Shader Inputs

The pixel shader receives a few variables for you to use. The most useful one to us is fragCoord, which holds the current pixel's x and y coordinates. Let's try turning all the pixels on the left half of the screen black, and all those on the right half red:

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 xy = fragCoord.xy; // We obtain our coordinates for the current pixel
    vec4 solidRed = vec4(0.0, 0.0, 0.0, 1.0); // This is actually black right now
    if (xy.x > 300.0) { // Arbitrary number, since we don't know how big our screen is!
        solidRed.r = 1.0; // Set its red component to 1.0
    }
    fragColor = solidRed;
}
```

Note: For any vec4, you can access its components via obj.x, obj.y, obj.z and obj.w or via obj.r, obj.g, obj.b, obj.a. They're equivalent; it's just a convenient way of naming them to make your code more readable, so that when others see obj.r, they understand that obj represents a color.
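As a quick illustration (the variable names here are arbitrary), both naming sets work interchangeably, and you can "swizzle" several components at once in any order:

```glsl
vec4 color = vec4(0.2, 0.4, 0.6, 1.0);
float red    = color.r;   // same value as color.x
vec3 rgb     = color.rgb; // same as color.xyz: (0.2, 0.4, 0.6)
vec2 swapped = color.gr;  // swizzle in any order: (0.4, 0.2)
```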

Do you see a problem with the code above? Try clicking on the go fullscreen button in the bottom right of your preview window.

The proportion of the screen that is red will differ depending on the size of the screen. To ensure that exactly half of the screen is red, we need to know how big our screen is. Screen size is not a built-in variable like the pixel location, because it's usually up to you, the programmer building the app, to set it. In this case, the ShaderToy developers set the screen size.

If something is not a built-in variable, you can send that information from the CPU (your main program) to the GPU (your shader). ShaderToy handles that for us. You can see all the variables being passed to the shader in the Shader Inputs tab. Variables passed in this way from CPU to GPU are called uniforms in GLSL.
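Outside ShaderToy, you would declare these uniforms yourself at the top of the shader and fill them in from your CPU-side code. A minimal sketch of what that looks like (the names mirror ShaderToy's conventions, but they're yours to choose):

```glsl
// Declared in the shader; the values are supplied by the CPU each frame.
uniform vec3  iResolution; // viewport width and height in pixels (ShaderToy uses a vec3)
uniform float iGlobalTime; // seconds since the program started

// CPU side (e.g. WebGL/JavaScript), you'd set it with something like:
// gl.uniform3f(gl.getUniformLocation(program, "iResolution"), width, height, 1.0);
```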

Let's tweak our code above to correctly obtain the center of the screen. We'll need to use the shader input iResolution:

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 xy = fragCoord.xy; // We obtain our coordinates for the current pixel
    xy.x = xy.x / iResolution.x; // We divide the coordinates by the screen size
    xy.y = xy.y / iResolution.y;
    // Now x is 0 for the leftmost pixel, and 1 for the rightmost pixel
    vec4 solidRed = vec4(0.0, 0.0, 0.0, 1.0); // This is actually black right now
    if (xy.x > 0.5) {
        solidRed.r = 1.0; // Set its red component to 1.0
    }
    fragColor = solidRed;
}
```

If you try enlarging the preview window this time, the colors should still perfectly split the screen in half.

From a Split to a Gradient

Turning this into a gradient should be pretty easy. Our color values go from 0 to 1, and our coordinates now go from 0 to 1 as well.

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 xy = fragCoord.xy; // We obtain our coordinates for the current pixel
    xy.x = xy.x / iResolution.x; // We divide the coordinates by the screen size
    xy.y = xy.y / iResolution.y;
    // Now x is 0 for the leftmost pixel, and 1 for the rightmost pixel
    vec4 solidRed = vec4(0.0, 0.0, 0.0, 1.0); // This is actually black right now
    solidRed.r = xy.x; // Set its red component to the normalized x value
    fragColor = solidRed;
}
```

And voila! 

Challenge: Can you turn this into a vertical gradient? What about diagonal? What about a gradient with more than one color?

If you play around with this enough, you can tell that the top left corner has coordinates (0,1), not (0,0). This is important to keep in mind. 

Drawing Images

Playing around with colors is fun, but if we want to do something impressive, our shader has to be able to take input from an image and alter it. This way we can make a shader that affects our entire game screen (like an underwater-fluid effect or color correction) or affect only certain objects in certain ways based on the inputs (like a realistic lighting system).

If we were programming on a normal platform, we would need to send our image (or texture) to the GPU as a uniform, the same way you would have sent the screen resolution. ShaderToy takes care of that for us. There are four input channels at the bottom:

ShaderToy's four input channels.

Click on iChannel0 and select any texture (image) you like.

Once that's done, you now have an image that's being passed to your shader. There's one problem, however: there's no DrawImage() function. Remember, the only thing the pixel shader can ever do is change the color of each pixel. 

So if we can only return a color, how do we draw our texture on screen? We need to somehow map the current pixel our shader is on, to the corresponding pixel on the texture:

Depending on where the (0,0) is on the screen, you might need to flip the y axis to map your texture correctly. At the time of writing, ShaderToy has been updated to have its origin at the top left, so there's no need to flip anything.
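If you ever do need the flip (for example, when porting to a platform whose origin is at the bottom left), it's a one-liner on the normalized coordinates:

```glsl
vec2 uv = fragCoord.xy / iResolution.xy;
uv.y = 1.0 - uv.y; // mirror vertically: 0 becomes 1 and vice versa
```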

We can do this by using the function texture(textureData, coordinates), which takes texture data and a normalized (x, y) coordinate pair as inputs, and returns the color of the texture at those coordinates as a vec4.

You can match the coordinates to the screen in any way you like. You could draw the entire texture on a quarter of the screen (by skipping pixels, effectively scaling it down) or just draw a portion of the texture. 
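For example, multiplying the normalized coordinates changes the sampling region. This sketch squeezes the full texture into the bottom-left quarter of the screen; elsewhere the coordinates exceed 1.0, so the sampler wraps or clamps depending on the channel's settings:

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 xy = fragCoord.xy / iResolution.xy;
    // Doubling the coordinates means xy * 2.0 spans 0..1 over only the
    // bottom-left quarter of the screen, so the whole texture fits there.
    vec4 texColor = texture(iChannel0, xy * 2.0);
    fragColor = texColor;
}
```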

For our purposes, we just want to see the image, so we'll match the pixels 1:1:

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 xy = fragCoord.xy / iResolution.xy; // Condensing this into one line
    vec4 texColor = texture(iChannel0, xy);  // Get the pixel at xy from iChannel0
    fragColor = texColor;                    // Set the screen pixel to that color
}
```

With that, we've got our first image!

Now that you're correctly pulling data from a texture, you can manipulate it however you like! You can stretch it and scale it, or play around with its colors.

Let's try modifying this with a gradient, similar to what we did above:

```glsl
texColor.b = xy.x;
```

Congratulations, you've just made your first post-processing effect!

Challenge: Can you write a shader that will turn an image black and white?

Note that even though it's a static image, what you're seeing in front of you is happening in real time. You can see this for yourself by replacing the static image with a video: click on the iChannel0 input again and select one of the videos. 

Adding Some Movement

So far all of our effects have been static. We can do a lot more interesting things by making use of the inputs that ShaderToy gives us. iGlobalTime is a constantly increasing variable; we can use it as a seed to make periodic effects. Let's try playing around with colors a bit:

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 xy = fragCoord.xy / iResolution.xy; // Condensing this into one line
    vec4 texColor = texture(iChannel0, xy);  // Get the pixel at xy from iChannel0
    texColor.r *= abs(sin(iGlobalTime));
    texColor.g *= abs(cos(iGlobalTime));
    texColor.b *= abs(sin(iGlobalTime) * cos(iGlobalTime));
    fragColor = texColor; // Set the screen pixel to that color
}
```

There are sine and cosine functions built into GLSL, as well as a lot of other useful functions, like getting the length of a vector or the distance between two vectors. Colors aren't supposed to be negative, so we make sure we get the absolute value by using the abs function.
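As a quick illustration of distance, this sketch darkens each pixel in proportion to how far it sits from the center of the screen, producing a simple vignette:

```glsl
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 xy = fragCoord.xy / iResolution.xy;
    vec4 texColor = texture(iChannel0, xy);
    // distance() measures how far this pixel is from the screen's center;
    // the corners are about 0.707 away, so the image fades toward them.
    float d = distance(xy, vec2(0.5, 0.5));
    texColor.rgb *= 1.0 - d;
    fragColor = texColor;
}
```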

Challenge: Can you make a shader that changes an image back and forth from black and white to full color?

A Note on Debugging Shaders

While you might be used to stepping through your code and printing out the values of everything to see what's going on, that's not really possible when writing shaders. You might find some debugging tools specific to your platform, but in general your best bet is to set the value you're testing to something graphical you can see instead.
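For example, to check whether a value stays in the range you expect, you can paint it straight onto the screen as a grey level. A debugging sketch (here `value` stands in for whatever quantity you're inspecting):

```glsl
// Inside mainImage: black means the value is 0.0 (or below),
// white means 1.0 (or above), so out-of-range values are easy to spot.
float value = distance(fragCoord.xy / iResolution.xy, vec2(0.5)); // hypothetical value under test
fragColor = vec4(vec3(value), 1.0);
```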

Conclusion

These are only the basics of working with shaders, but getting comfortable with these fundamentals will allow you to do so much more. Browse through the effects on ShaderToy and see if you can understand or replicate some of them!

One thing I didn't mention in this tutorial is Vertex Shaders. They're written in the same language, except they run on each vertex instead of each pixel, and they return a position as well as a color. Vertex shaders are usually responsible for projecting a 3D scene onto the screen (something that's built into most graphics pipelines). Pixel shaders are responsible for many of the advanced effects we see, which is why they're our focus.
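For a taste, a minimal vertex shader might look like this sketch (the attribute and uniform names here are conventional choices, not fixed by the language):

```glsl
// Runs once per vertex instead of once per pixel.
attribute vec3 position;          // the vertex's position in model space
uniform mat4 modelViewProjection; // combined transform supplied by the CPU

void main()
{
    // gl_Position is the built-in output: where this vertex lands on screen.
    gl_Position = modelViewProjection * vec4(position, 1.0);
}
```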

Final challenge: Can you write a shader that removes the green screen in the videos on ShaderToy and adds another video as a background to the first?

That's all for this guide! I'd greatly appreciate your feedback and questions. If there's anything specific you want to learn more about, please leave a comment. Future guides could include topics like the basics of lighting systems, or how to make a fluid simulation, or setting up shaders for a specific platform.
