

Shaders Overview - Part 1

Posted by Mike Dailly on 7 June 2013

With the imminent arrival of Studio V1.2, it’s about time we started to peel back the cover on what shaders are, what they can do, and how you’ll be using them inside Studio. However, as this is quite a big subject, I'll be breaking it down into a few articles, the first being a simple overview to give you an idea of how you'll be using them.

One thing to note is that, as Studio is aimed at cross-platform development, we’ve chosen a shader language that allows us to do this seamlessly: GLSL ES. It can be converted automatically to all the other shader languages we need, which in turn means you only need to write a shader once and it’ll work everywhere shaders are available.

We should also note, however, that GLSL ES does have some limitations, but we’ll get to those later. So to start, let's briefly discuss exactly what shaders are and how they work.

Part 1: Shader Overview

Firstly, what exactly IS a shader? Well, when we talk about shaders, we're actually talking about 2 different types of program that run directly on the graphics card: a vertex shader and a fragment (or pixel) shader. These two tiny programs work together to manipulate what the graphics card draws to the screen, allowing you to change, in real time, the position and colour (RGBA) values that are actually rendered into the display buffer.

So what kind of effects are shaders able to produce? Well, in short - almost anything. Let's take a couple of examples of things developers would love to do, but simply can't right now in Studio.

Colour Saturation. Let's say a player shoots a bullet at a bad guy, and you want to "flash" the baddie to let the player know they've hit the target. Currently, all we can do is darken, not lighten. That is to say, if we set the colour being drawn to c_black, the sprite will become like a shadow, but we can't do it the other way around - at least, not very easily. Some developers have done tricks with fogging, but it's not a great solution. With shaders, however, it becomes much simpler: as the colour information is being written out, you can simply change the values to white (or any other colour) while keeping the alpha value (which keeps the shape intact). This gives you the flash you've all been looking for, and it only takes a tiny shader to do it.

You can obviously do a lot more, like proper depth of field, glass or water refraction, HSV colour space shifting, or even using multiple textures to do things like normal maps and so on. If you can imagine it, you can probably do it.

In the past I've also used them to do things that have nothing to do with rendering. I assigned each object a unique colour value, then rendered them all. Once the whole display was rendered, I was able to "pick" a colour from the screen and translate it back into the object that had been clicked on. With tens of thousands of objects to select from, this was very handy - and simple.

This is all obviously incredibly powerful. So, what does Studio do when rendering? Well, when we render a QUAD (quadrilateral), it is in reality 2 triangles, each with its own set of vertex data - a vertex being the position, colour and texture information that the graphics card uses to render with.

So what does this data look like? Each point contains an X, Y and Z position (with Z being unimportant for 2D work), a colour, and 0.0 to 1.0 texture coordinates for U and V. This makes a single corner look something like this:

X=10
Y=10
Z=1
Colour=$FFFFFFFF
U=0.0
V=1.0
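
This per-corner data is exactly what a vertex shader receives as its inputs. As a rough sketch (the attribute and matrix names below are the defaults Studio supplies to its own passthrough shader, so treat them as illustrative until Part 2), a minimal vertex shader that simply passes the data through looks something like this:

attribute vec3 in_Position;      // the X, Y, Z of the corner
attribute vec4 in_Colour;        // the vertex colour
attribute vec2 in_TextureCoord;  // the 0.0 to 1.0 U, V pair

varying vec2 v_vTexcoord;        // values handed on to the fragment shader
varying vec4 v_vColour;

void main()
{
    // Transform the corner by the combined world/view/projection matrix
    // that Studio sets up for us.
    gl_Position = gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] * vec4( in_Position, 1.0 );

    v_vColour   = in_Colour;
    v_vTexcoord = in_TextureCoord;
}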

Before jumping headlong into writing shaders, let’s look at how we create and use them inside Studio.

Shader resource

Shaders have been added as a standard resource, and you edit them inside the standard script editor, complete with full intellisense and syntax-coloured highlighting. We have, however, tweaked the script editor a little to allow for even easier creation.

Editing a Shader

As you can see, there are now 2 tabs inside the script editor, named Vertex and Fragment, and inside each tab you write the appropriate script. Each shader resource is named, and just like scripts it’s these names that you use to select them. All this makes the process of writing and integrating shaders a natural part of your workflow.

As you can see from the image above, Studio will also pass in certain things for you, such as matrices, lights and various other settings, freeing you from the need to set them up.

So, now that we can create a shader, how do we use it inside Studio? There are 2 aspects to shaders: the first is the shader itself, and the second is its shader constants (both vertex and fragment). First off, setting a shader is simplicity itself:


shader_set( MaskShader );

and then to stop using it, simply:

shader_reset();
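
In a Draw event you simply wrap whatever you want affected between those two calls. A minimal sketch, using the MaskShader example from above, might be:

shader_set( MaskShader );
draw_self();        // anything drawn here goes through MaskShader
shader_reset();     // everything after this uses Studio's normal rendering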

Now, while you can actually do some pretty cool things just with this, what makes shaders invaluable is being able to set constants within the shader itself. To declare a constant that is visible to both the shader and GML, you use the GLSL ES uniform keyword. You would then use these to pass in instance-specific details such as, say, a colour tint, a glow value, or even a refraction index.

The first thing you need to do, however, is get the handle of each constant. There are 2 constant types: uniforms and samplers. Uniforms are standard constants - ints, bools, floats, vectors, matrices and so on - while samplers are textures. Unlike normal GML usage, you can use up to a maximum of 8 textures at once - or however many the device supports, which may be less.
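
Samplers are looked up in much the same way as uniforms. As a rough sketch (the shader, uniform and sprite names here are purely illustrative), binding a second texture might look something like this:

// Once, e.g. in a Create event
global.noise_sampler = shader_get_sampler_index( MyShader, "u_NoiseTex" );

// Then, when drawing
shader_set( MyShader );
texture_set_stage( global.noise_sampler, sprite_get_texture( spr_Noise, 0 ) );
draw_self();
shader_reset();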

In the mask example above we need to access a u_vMaskCol uniform. This is a 4-dimensional vector (a vec4) we use in the fragment shader to set the colour mask applied when a sprite is hit, so we can flash (or saturate) the sprite as an effect.
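
To make the GML below concrete, a rough sketch of what the MaskShader fragment program might look like (assuming the usual v_vColour and v_vTexcoord varyings from the vertex shader) is:

varying vec2 v_vTexcoord;
varying vec4 v_vColour;

uniform vec4 u_vMaskCol;   // the mask/flash colour set from GML

void main()
{
    // Sample the sprite as normal...
    vec4 base = v_vColour * texture2D( gm_BaseTexture, v_vTexcoord );
    // ...then blend the RGB towards the mask colour by the mask's alpha,
    // keeping the sprite's own alpha so the shape stays intact.
    gl_FragColor = vec4( mix( base.rgb, u_vMaskCol.rgb, u_vMaskCol.a ), base.a );
}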


global.maskshader_col = shader_get_uniform( MaskShader, "u_vMaskCol" );
 

This will look up the variable and, if found, assign its handle to the global variable maskshader_col. You can then use that variable whenever you set this constant, which you do like this:
 


shader_set( MaskShader );

shader_set_uniform_f( global.maskshader_col, Red, Green, Blue, AlphaMask );
 

First you set the shader, which then takes over the rendering from Studio and also tells the system which shader constants are about to be set. The values being passed in are simple floating-point variables; what they mean is specific to your shader.

And that’s it! Studio will fill in all the view/projection matrix stuff, set the base texture (named gm_BaseTexture) and then call your shader when it renders sprites/tiles/backgrounds and so on.

All a fragment shader requires is for you to set the gl_FragColor on exit, and it’ll output that vec4 to the screen.
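
In fact, the fragment program that simply reproduces Studio's normal rendering is only a few lines - something along these lines (again a sketch, using the default varying names):

varying vec2 v_vTexcoord;
varying vec4 v_vColour;

void main()
{
    // The vertex colour multiplied by the texture sample - exactly what
    // Studio would draw with no shader set at all.
    gl_FragColor = v_vColour * texture2D( gm_BaseTexture, v_vTexcoord );
}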

For normal use, you can still use commands like draw_self(), draw_sprite_ext() and so on, and Studio will create the primitives and set up the matrices and textures, leaving you the simple job of actually drawing the pixels.
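
Putting the pieces together, a complete example in an object might look something like this (a sketch only - the flash variable and the way the events are split up are just one way of arranging it):

// Create event: look the constant up once
global.maskshader_col = shader_get_uniform( MaskShader, "u_vMaskCol" );
flash = 0;

// Draw event: flash the sprite white while "flash" is non-zero
if (flash > 0)
{
    shader_set( MaskShader );
    shader_set_uniform_f( global.maskshader_col, 1, 1, 1, flash );
    draw_self();
    shader_reset();
}
else
{
    draw_self();
}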

So, that's it for Part I. Part 2 will look at the "nitty gritty" of actually making a shader, and how you'll use it. For now, here's a little *ahem* light reading - the OpenGL ES Shading Language specification.

 

 

