Light shafts
Light shafts, light scattering, god rays – many names, but they all describe the same awesome effect that can be noticed on a foggy day or in a dusty room.
In games they make levels more moody and atmospheric, but how are they made?
The trick is to calculate the illumination that comes from the light source, travels past the object that occludes it and finally reaches the pixel whose color is currently being calculated. Let's say it is a very crude, cheated kind of light casting, reduced to 2D.
To do this, an occlusion image of the scene is needed:
The black area is the light-blocking object. It's black because it completely blocks the light. The white circle behind it is the light source. The color of the background is the ambient color of the light emitted from the source (it is not white, because that would be too bright).
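Producing this occlusion image is just an extra render-to-texture pass: clear the target to the ambient color, then draw the light source in plain white and every blocker in plain black. A rough sketch of how such a pass could be set up follows; occlusionFBO, flatColorShader, its color uniform and the two Draw* calls are hypothetical placeholders for the project's own resources:

// Render the occlusion scene into an off-screen texture (depth testing assumed enabled).
glBindFramebuffer(GL_FRAMEBUFFER, occlusionFBO);
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);   // dim ambient background
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glUseProgram(flatColorShader);
// Light source geometry in plain white.
glUniform4f(glGetUniformLocation(flatColorShader, "color"), 1.0f, 1.0f, 1.0f, 1.0f);
DrawLightSourceQuad();
// Everything that blocks the light in plain black.
glUniform4f(glGetUniformLocation(flatColorShader, "color"), 0.0f, 0.0f, 0.0f, 1.0f);
DrawOccluders();

glBindFramebuffer(GL_FRAMEBUFFER, 0);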
Every pixel of this occlusion image is sampled and accumulated along the vector that runs from the light source to each pixel of the final image. It might be easier to understand when the occlusion scene is sampled only 10 times:
As you can see, the pixels of the occlusion image (not only the teapot, but the whole image) are sampled and placed on the lines that go from the light source to the pixels of the final image. It's like stacking samples of the whole occlusion image inside the pyramid that goes from the light source to the screen, but flattened to 2D.
When we use more samples, the whole scene starts to look like proper scattering.
To get that nice fading, the summation of the samples is controlled by two attenuation coefficients: weight and decay. The whole equation looks like this:
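\[
\text{outColor} = \text{exposure} \cdot \sum_{i=0}^{n-1} \text{decay}^{\,i} \cdot \text{weight} \cdot \text{occlusion}\big(p - (i+1)\,\Delta\big),
\qquad
\Delta = \frac{\text{density}}{n}\,\big(p - \text{lightScreenPos}\big)
\]

where p is the texture coordinate of the current pixel, n is the number of samples and occlusion(·) is a lookup into the occlusion image – this is simply the summation performed by the shader below, written out.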
This might look scary, but in the fragment shader code it looks simpler:
/** Texture coords of occlusion image */
in vec2 inoutTexCoord;
/** Output color of the pixel */
out vec4 outColor;
/** Position of the light source on screen */
uniform vec2 lightScreenPos;
/** Array of textures: 0 - occlusion scene, 1 - normal scene */
uniform sampler2DArray tex;
/**
 * This is a uniform block storing all the parameters the light shafts need.
 * Using this layout, the application will update the whole block at once.
 */
layout( shared ) uniform ShaftsParams
{
    int samples;
    float exposure;
    float decay;
    float density;
    float weight;
};
void main(void)
{
    // Set the basic color.
    outColor = vec4(0);
    // Get current texture coordinates.
    vec2 textCoo = inoutTexCoord.xy;
    // Calculate the step vector along the ray from the current pixel
    // towards the light source.
    vec2 deltaTextCoord = textCoo - lightScreenPos;
    deltaTextCoord *= 1.0 / float(samples) * density;
    // Set up illumination decay factor.
    float illuminationDecay = 1.0;
    // Evaluate the summation of samples from the occlusion image.
    for (int i = 0; i < samples; i++)
    {
        // Step sample location along the ray.
        textCoo -= deltaTextCoord;
        // Retrieve the sample at the new location from the occlusion layer (layer 0 of the array).
        vec4 colorSample = texture(tex, vec3(clamp(textCoo, 0.0, 1.0), 0));
        // Apply sample attenuation scale/decay factors.
        colorSample *= illuminationDecay * weight;
        // Accumulate the combined color.
        outColor += colorSample;
        // Update the exponential decay factor.
        illuminationDecay *= decay;
    }
    // Output the final color with a further scale control factor.
    outColor *= exposure;
[...]
And that’s it! The scattering from the occlusion image is done. The only thing left is to apply it to the normal scene. I’ve done this in the same fragment shader, at the end:
[...]
    // Average the calculated light scattering with the normal scene (layer 1 of the array).
    outColor += texture( tex, vec3( inoutTexCoord, 1 ) );
    outColor *= 0.5;
}
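On the application side, the ShaftsParams block still has to be filled with values. Because the block is declared with the shared layout, the member offsets are driver-specific and should be queried rather than assumed. Below is a rough sketch of how that could be done with plain OpenGL calls; the program handle and the parameter values are only illustrative, not necessarily the ones used in the project:

// Assumes a GL loader (e.g. GLEW) plus <vector> and <cstring> are already included.
// 'program' is the handle of the light shafts shader program.
GLuint blockIndex = glGetUniformBlockIndex(program, "ShaftsParams");

GLint blockSize = 0;
glGetActiveUniformBlockiv(program, blockIndex, GL_UNIFORM_BLOCK_DATA_SIZE, &blockSize);

// With the 'shared' layout the member offsets must be queried from the driver.
const char* names[] = { "samples", "exposure", "decay", "density", "weight" };
GLuint indices[5];
GLint  offsets[5];
glGetUniformIndices(program, 5, names, indices);
glGetActiveUniformsiv(program, 5, indices, GL_UNIFORM_OFFSET, offsets);

// Example parameter values only.
int   samples  = 100;
float exposure = 1.0f, decay = 0.97f, density = 0.9f, weight = 0.5f;

std::vector<GLubyte> block(blockSize);
std::memcpy(block.data() + offsets[0], &samples,  sizeof(samples));
std::memcpy(block.data() + offsets[1], &exposure, sizeof(exposure));
std::memcpy(block.data() + offsets[2], &decay,    sizeof(decay));
std::memcpy(block.data() + offsets[3], &density,  sizeof(density));
std::memcpy(block.data() + offsets[4], &weight,   sizeof(weight));

// Upload the data and attach both the buffer and the shader block to binding point 0.
GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, blockSize, block.data(), GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
glUniformBlockBinding(program, blockIndex, 0);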
The last tricky part is to get the light source position on the screen. It can be obtained by transforming its world position to clip coordinates using the view-projection matrix, dividing by w and remapping the result from NDC to the [0, 1] range of texture coordinates:
glm::vec4 lightNDCPosition = camera->GetViewProjectionMatrix() * glm::vec4(light->position, 1);
lightNDCPosition /= lightNDCPosition.w;
glm::vec2 lightScreenPosition = glm::vec2(
    (lightNDCPosition.x + 1) * 0.5,
    (lightNDCPosition.y + 1) * 0.5
);
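The resulting lightScreenPosition can then be uploaded to the lightScreenPos uniform of the shader shown above; something along these lines, where program again stands for whatever handle the post-process shader has:

glUseProgram(program);
glUniform2f(glGetUniformLocation(program, "lightScreenPos"),
            lightScreenPosition.x, lightScreenPosition.y);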
As you may have noticed, I used a 2D texture array in that fragment shader. It stores the occlusion image and the image of the normal scene used for the final rendering. This means that a scene with light shafts needs three render passes.
You can download the source code for this project from GitHub, or just get a working .exe application from here.