Shader Studio is a collection of GLSL global illumination effects that I've been messing around with in my spare time. It includes baked ambient occlusion, reflection/refraction, image-based lighting, and spherical harmonics. I am using several Stanford PLY models, which I convert into my own custom MDL format containing extra per-vertex information. I plan on using this as a framework for future shader-related research.
Realtime Shadows using Percentage Closer Filtering
Shadows are naturally a crucial element of image synthesis, but achieving quality real-time shadowing is quite challenging. Even after understanding all the principles and mathematics involved, there are still many factors that need to be considered for everything to work correctly. When it comes to shadow algorithms, especially shadow mapping ones, the devil really is in the details. I decided to add shadow mapping (PCF) using OpenGL to Shader Studio because it is one of the most common shadowing techniques and I really wanted to "relearn" it for myself and document some of the issues that I've run into along the way. Hopefully others will find this information useful. Here's the summary:
For my particular scenes, I found 2048x2048 to be an adequate shadow map resolution. You'll also want to set the GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER of your shadow buffer to GL_LINEAR to minimize the jaggies around the shadow edges. Lastly, for reasons I'll explain later, I ended up not using any built-in shader texture2DProj(Offset) calls to access the shadow map, so I just set the texture compare mode to GL_NONE.
Make sure to set the texture wrap mode for your shadow map to GL_CLAMP_TO_EDGE, to avoid some noticeable artifacts when rendering shadows. You may also experience "shadow leakage" (see screenshot) due to sampling outside your shadow texture. One way to fix that is to modify your shader to only sample when your shadow coordinates are within the [0.0, 1.0] range. For my specific example, I just modified my light frustum to fully encompass the viewable geometry. It is also very beneficial to have a debug view of your shadow map texture so you can easily diagnose those kinds of issues.
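Putting the above together, the shadow-map texture state might be set up roughly like this (a sketch only: it assumes an existing GL context and a shadowTex texture id, which are not part of the demo's actual code):

```c
/* Sketch of the shadow-map texture state described above.
   Assumes an existing GL context and a GLuint shadowTex id. */
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
             2048, 2048, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
/* GL_LINEAR filtering softens the jaggies around shadow edges. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
/* GL_CLAMP_TO_EDGE avoids artifacts from sampling past the borders. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
/* Plain texture2D lookups instead of textureProj-style compares. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
```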
When it came to sampling the shadow texture in the shader, I began with a simple lookup using the built-in texture2D call, comparing the stored depth value with the pixel's depth in light view space. If the distance from the light is less than the current pixel depth, the pixel must be in shadow, so the shadowing value is 0.0; otherwise it is 1.0. That, of course, produced some pretty ugly jagged shadows with self-shadowing stippling artifacts. One way to alleviate that is to scale and bias the depth difference a bit: delta = clamp (delta * 250.0 + 1.0, 0.0, 1.0). This method tends to decrease the number of pixels that are shadowed, but greatly helps in eliminating ugly self-shadowing artifacts. The shadow edges were still rather jagged, so I started looking into the GLSL built-in textureProj and textureProjOffset Percentage Closer Filtering calls. I must admit, the results were quite disappointing. Perhaps it was the driver implementation for my video card (ATI Radeon HD 7750), but even using multiple taps of textureProj did not improve the results enough to justify using those calls. I've taken comparison screenshots using single-tap, 4-tap, and 8 random-tap PCF lookups, and they all ended up looking worse than my original implementation. In the end, I decided to just go with multiple random texture2D lookup calls to smooth the shadow edges further.
Variance Soft Shadow Maps
I continued my quest to find the "perfect" shadowing algorithm by examining Variance Shadow Mapping. There is a good article on the subject by Andrew Lauritzen in GPU Gems 3. I am not sure how widely this technique is used in commercial game engines, but I was curious to see how challenging it would be to produce realistic soft shadows in real-time. After all, most shadows that we see in real life have very soft edges with drastically variable penumbra sizes. The basic implementation of VSMs is actually straightforward and is well described in the article. Compared to PCF, VSMs present a number of advantages and disadvantages, which I summarize as follows:
The depth values in VSMs can be pre-filtered just like regular color textures, in constant time, using hardware filtering. With constant filter widths, VSMs outperform PCF filtering techniques and offer higher-quality shadows.
Even the standard VSM implementation generates shadows with natural soft edges and eliminates most shadow acne artifacts.
Using either summed-area tables or just mipmapping, VSMs can be extended to support a variable-size penumbra based on the distance from the occluder to the shadow receiver.
VSMs suffer from numerical issues when the variance value is too large or too small. The variance has to be clamped to a minimum value proportional to the size of the light frustum, and it can be tricky to find a value that fixes shadow acne artifacts while also maintaining the correct self-shadowing size.
Light-bleeding artifacts (where light appears in areas that should be fully in shadow) can certainly be minimized, but never fully eliminated in all cases.
As the article suggested, I attempted to implement a dynamically sizable shadow penumbra, based on an estimation of the "blocker depth", using Summed-Area Tables. I used the recursive doubling technique described by Hensley. The biggest hurdle that I encountered with that was the huge loss of precision that SATs suffer from. Even after trying all the suggested workarounds, my variance shadows had far more artifacts than I would have liked. Another big issue with SATs is the performance cost of generating them. Since it takes log2(width) + log2(height) render passes to generate the summed-area table, I wouldn't recommend using a shadow map larger than 512x512. For those reasons, I decided to abandon SATs for a much simpler mipmapping approach. For now, I am pretty happy with the results. (Note: the demo has a slider for controlling the shadow softness amount.)
Advanced Depth of Field
I updated my framework with render-to-texture support and thought it would be nice to showcase the all-too-popular Depth of Field (a.k.a. Depth of Focus). In a nutshell, the technique is based on the principles of photography, in which a lens can precisely focus at only one distance at a time; the area within the depth of field appears sharp, while the areas in front of and beyond it appear blurry. It is widely used in many modern video games, although I feel it is also often overused. In my opinion, depth of field is more appropriate to use in video games during cutscenes, rather than having it enabled the whole time. Blurring objects in the distance tends to decrease quality and introduce aliasing artifacts. Nevertheless, the effect can be very convincing and can greatly emphasize key parts of a 3D scene. I followed the implementation of ATI's Advanced Depth of Field from the ShaderX3 book and got pretty good results. The demo has a depth of field slider so you can adjust the focus distance (a smaller value means the lens is focused closer to the camera).