I know I promised to do something with ping pong buffers, but I remembered an effect I did a few months ago that simulates caustics using a light cookie. You might have seen tutorials or assets on the store that do something like this. Usually these techniques involve a bunch of pre-baked images that get cycled through, changing the image each frame. The issues with this technique are that the animation rate of your caustics will fluctuate with the frame rate of your game, and of course that you need a bunch of images taking up memory. You also need a program to generate the pre-baked images in the first place.
If this were Unreal, you could set up a material as a light function and project your caustics shader with the light. That isn't a bad way to go, but you end up computing the shader for every pixel the light touches, times however many lights are projecting it. Your source textures might be really low resolution (512x512), but you may end up running the shader on the entire screen and then some.
This leads me to a solution that I think is pretty good: pre-compute one frame of the caustics animation each frame using a shader, and project that image with however many lights you want.
Web Player
Asset Package
In the package there is a shader, material, and render texture, along with 3 images that the material uses to generate the caustics. There is a script that you put on a light (or anything in your scene) that stores a specified render texture, material, and image. In the same script file there is a static class that does the actual work. The script sends the information it's holding to the static class, and the static class remembers which render textures it has already blitted to. If it is told to blit to a render texture that has already been blitted to that frame, it skips it. This is good for when you goof and copy a light with the script on it a bunch of times: the static class keeps the duplicate scripts from blitting over each other's render textures. Now you can use the render texture as a light cookie! The cookie changes every frame and only gets calculated once per frame instead of once per light.
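The package has the real implementation, but here is a minimal C# sketch of the idea. All the names (CausticsGenerator, CausticsBlitter, RequestBlit) are made up for illustration; the per-frame dedup logic is the point:

using UnityEngine;
using System.Collections.Generic;

// Goes on a light (or anything); just holds references and asks for a blit.
public class CausticsGenerator : MonoBehaviour
{
    public Texture sourceTexture;     // input image the caustics shader reads
    public Material causticsMaterial; // material using the caustics shader
    public RenderTexture causticsRT;  // target that becomes the light cookie

    void Update()
    {
        CausticsBlitter.RequestBlit(sourceTexture, causticsRT, causticsMaterial);
    }
}

// Static class that does the work and skips duplicate blits within a frame.
public static class CausticsBlitter
{
    static readonly HashSet<RenderTexture> blittedThisFrame = new HashSet<RenderTexture>();
    static int lastFrame = -1;

    public static void RequestBlit(Texture src, RenderTexture dest, Material mat)
    {
        if (Time.frameCount != lastFrame)
        {
            lastFrame = Time.frameCount;
            blittedThisFrame.Clear(); // new frame, forget last frame's targets
        }
        if (!blittedThisFrame.Add(dest))
            return; // this render texture was already updated this frame
        Graphics.Blit(src, dest, mat);
    }
}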
Some optimizations would be to disable caustic generators when you can't see them (see the sketch below), or to have a caustics manager in the scene instead of a static class attached to the generator script. The command buffer post on Unity's blog has a package that shows how to use a manager for stuff like this, so check it out!
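For the first optimization, a minimal sketch: assuming the generator script sits on an object that has a Renderer (Unity only sends these messages to objects with one), you could add something like this to the hypothetical CausticsGenerator above:

void OnBecameInvisible() { enabled = false; } // stop requesting blits while off screen
void OnBecameVisible()  { enabled = true;  } // resume when any camera sees us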
Thursday, March 19, 2015
Graphics Blitting in Unity Part 1
Gas Giant Planet Shader Using Ping Pong Buffers
The first example is a separable blur shader and how to use it to put a glow around a character icon. This idea came from someone on the Unity3D subreddit who was looking for a way to automate glows around their character icons. Get the package here!
So, separable blur: "separable" means that the blurring is separated into 2 passes, horizontal and vertical. It's 2 passes, but we can use the same shader for both by telling it which direction to blur the image in.
First let's have a look at the shader.
Shader "Hidden/SeperableBlur" {
Properties {
_MainTex ("Base (RGB)", 2D) = "black" {}
}
CGINCLUDE
#include "UnityCG.cginc"
#pragma glsl
Starts out pretty simple, only one exposed property. But wait, what's that CGINCLUDE? And no Subshader or Pass? CGINCLUDE is used instead of CGPROGRAM when you want a shader that can do lots of things. You make the include section first, then put your subshader section below with multiple passes that reference the vertex and fragment programs you wrote in the include section.
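The general layout looks like this (a bare skeleton with made-up names, just to show the structure):

Shader "Example/MultiPassSkeleton" {
    CGINCLUDE
    // shared structs go here, plus the programs the passes refer to:
    // vert, fragA, and fragB would all be defined in this block
    ENDCG
    Subshader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment fragA
            ENDCG
        }
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment fragB
            ENDCG
        }
    }
}

Now, back to the real shader: the vertex-to-fragment struct.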
struct v2f {
float4 pos : POSITION;
float2 uv : TEXCOORD0;
};
We don't need to pass much information to the fragment shader. Position and UV are all we need.
//Common Vertex Shader
v2f vert( appdata_img v )
{
v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
o.uv = v.texcoord.xy;
return o;
}
appdata_img is defined in UnityCG.cginc as the standard information you need for blitting stuff. The rest is straightforward: just pass the UV to the fragment.
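For reference, appdata_img in UnityCG.cginc is roughly this (from memory, so double check your version):

struct appdata_img {
    float4 vertex : POSITION;
    half2 texcoord : TEXCOORD0;
};

On to the fragment program: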
half4 frag(v2f IN) : COLOR
{
half2 ScreenUV = IN.uv;
float2 blurDir = _BlurDir.xy;
float2 pixelSize = float2( 1.0 / _SizeX, 1.0 / _SizeY );
I know it says half4, but the textures we are using only store 1 channel. Save the UVs to a variable for convenience. Same with blurDir (blur direction); I'll talk about this more later, but this variable is passed in from script. And pixelSize is the normalized size of a pixel in UV space; the size of the image is passed to the shader from script as well.
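These excerpts skip over the uniform declarations; somewhere in the CGINCLUDE block you'd need something like this (types inferred from how the script sets them):

sampler2D _MainTex;
float4 _BlurDir;
float _SizeX;
float _SizeY;
float _BlurSpread;
float4 _ChannelWeight;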
float4 Scene = tex2D( _MainTex, ScreenUV ) * 0.1438749;
Scene += tex2D( _MainTex, ScreenUV + ( blurDir * pixelSize * _BlurSpread ) ) * 0.1367508;
Scene += tex2D( _MainTex, ScreenUV + ( blurDir * pixelSize * 2.0 * _BlurSpread ) ) * 0.1167897;
Scene += tex2D( _MainTex, ScreenUV + ( blurDir * pixelSize * 3.0 * _BlurSpread ) ) * 0.08794503;
Scene += tex2D( _MainTex, ScreenUV + ( blurDir * pixelSize * 4.0 * _BlurSpread ) ) * 0.05592986;
Scene += tex2D( _MainTex, ScreenUV + ( blurDir * pixelSize * 5.0 * _BlurSpread ) ) * 0.02708518;
Scene += tex2D( _MainTex, ScreenUV + ( blurDir * pixelSize * 6.0 * _BlurSpread ) ) * 0.007124048;
Scene += tex2D( _MainTex, ScreenUV - ( blurDir * pixelSize * _BlurSpread ) ) * 0.1367508;
Scene += tex2D( _MainTex, ScreenUV - ( blurDir * pixelSize * 2.0 * _BlurSpread ) ) * 0.1167897;
Scene += tex2D( _MainTex, ScreenUV - ( blurDir * pixelSize * 3.0 * _BlurSpread ) ) * 0.08794503;
Scene += tex2D( _MainTex, ScreenUV - ( blurDir * pixelSize * 4.0 * _BlurSpread ) ) * 0.05592986;
Scene += tex2D( _MainTex, ScreenUV - ( blurDir * pixelSize * 5.0 * _BlurSpread ) ) * 0.02708518;
Scene += tex2D( _MainTex, ScreenUV - ( blurDir * pixelSize * 6.0 * _BlurSpread ) ) * 0.007124048;
Ohhhhh Jesus, look at all that stuff. This is a 13-tap blur, which means we will sample the source image 13 times, weight each sample (that long number on the end of each line), and add the results of all the samples together. Let's deconstruct one of these lines:
Scene += tex2D( _MainTex, ScreenUV + ( blurDir * pixelSize * 4.0 * _BlurSpread ) ) * 0.05592986;
Sample the _MainTex using the UVs, but add the blur direction (either (1,0) for horizontal or (0,1) for vertical), multiplied by the size of one pixel, multiplied by how many pixels over we are (in this case it's the 4th tap over), multiplied by an overarching blur spread variable that changes the tightness of the blur. Then multiply the sampled texture by a Gaussian distribution weight (in this case 0.05592986). Think of a Gaussian distribution like a bell curve, with more weight given to values closer to the center. If you add up all the weights they come out to 1.007124136, or pretty darn close to 1. You will notice that half of the samples add the blur direction and half subtract it. This is because we are sampling on both sides of the center pixel.
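If you're wondering where weights like these come from, they're typically generated from the Gaussian function and normalized. Here's a sketch in C# (BlurWeights is a made-up helper; taps and sigma are free choices, and the exact numbers above were presumably produced by something similar):

using UnityEngine;

public static class BlurWeights
{
    // Returns 'taps' Gaussian weights, normalized so they sum to 1.
    public static float[] Compute(int taps, float sigma)
    {
        int half = taps / 2;
        float[] w = new float[taps];
        float sum = 0f;
        for (int i = -half; i <= half; i++)
        {
            w[i + half] = Mathf.Exp(-(i * i) / (2f * sigma * sigma));
            sum += w[i + half];
        }
        for (int i = 0; i < taps; i++)
            w[i] /= sum; // normalize
        return w;
    }
}

Back to the shader: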
Scene *= _ChannelWeight;
float final = Scene.x + Scene.y + Scene.z + Scene.w;
return float4( final,0,0,0 );
Now we multiply the result by the _ChannelWeight variable, which is passed in from script to isolate the channel we want, then add the channels together; combined, this is effectively a dot product between the sample and the channel weight. Return the result in the first channel of a float4. The rest of the channels don't matter because the render target will only have one channel.
Subshader {
ZTest Off
Cull Off
ZWrite Off
Fog { Mode off }
//Pass 0 Blur
Pass
{
Name "Blur"
CGPROGRAM
#pragma fragmentoption ARB_precision_hint_fastest
#pragma vertex vert
#pragma fragment frag
ENDCG
}
}
After the include portion go the subshader and passes. In each pass you just need to tell it which vertex and fragment programs to use. #pragma fragmentoption ARB_precision_hint_fastest is used to automatically optimize the shader if possible... or something like that.
Now lets check out the IconGlow.cs script.
public Texture icon;
public RenderTexture iconGlowPing;
public RenderTexture iconGlowPong;
private Material blitMaterial;
We are going to need a texture for the icon and 2 render textures: one to hold the vertical blur result and one to hold the final result after the horizontal pass. I've made the 2 render textures public just so they can be viewed from the inspector. blitMaterial is the material we create that uses the blur shader.
Next check out the start function. This is where everything happens.
blitMaterial = new Material (Shader.Find ("Hidden/SeperableBlur"));
This makes a new material that uses the shader named "Hidden/SeperableBlur". Make sure you don't have another shader with the same name; it can cause some headaches.
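If you'd rather fail loudly when the shader is missing, here's a more defensive sketch (note that Shader.Find only works in a build if the shader is actually included, e.g. via a Resources folder or the Always Included Shaders list in Graphics Settings):

Shader blurShader = Shader.Find("Hidden/SeperableBlur");
if (blurShader == null)
{
    Debug.LogError("Could not find Hidden/SeperableBlur!");
    return;
}
blitMaterial = new Material(blurShader);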
int width = icon.width / 2;
int height = icon.height / 2;
The resolution of the blurred image doesn't need to be as high as the original icon, so I'm going to knock it down by a factor of 2. This makes the blurred images take up 1/4 the amount of memory, which is important because render textures aren't compressed like imported textures.
iconGlowPing = new RenderTexture( width, height, 0 );
iconGlowPing.format = RenderTextureFormat.R8;
iconGlowPing.wrapMode = TextureWrapMode.Clamp;
iconGlowPong = new RenderTexture( width, height, 0 );
iconGlowPong.format = RenderTextureFormat.R8;
iconGlowPong.wrapMode = TextureWrapMode.Clamp;
Now we create the render textures. The arguments are the width, the height, and the number of bits to use for the depth buffer; the textures don't need a depth buffer, hence the 0. Setting the format to R8 means the texture will be a single-channel (R/red), 8-bit, 256-value grayscale image. The memory footprint of these images clocks in at double the footprint of a DXT compressed full color image of the same size, so it's important to consider the size of the image when working with render textures. Setting the wrap mode to clamp ensures that pixels from the left side of the image don't bleed into the right and vice versa, same with top to bottom.
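To put rough numbers on that (assuming a 512x512 icon): halved to 256x256, an R8 render texture is 256 x 256 x 1 byte = 64 KB, while a DXT1 compressed color texture of the same size is 4 bits per pixel, or 32 KB. Hence the note above about render textures costing double.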
blitMaterial.SetFloat ("_SizeX", width);
blitMaterial.SetFloat ("_SizeY", height);
blitMaterial.SetFloat ("_BlurSpread", 1.0f);
Now we start setting material values. These are variables we will have access to in the shader. _SizeX and _SizeY should be self-explanatory: the shader needs to know how big the image is to precisely sample the next pixel over. _BlurSpread scales how far the image is blurred; setting it smaller yields a tighter blur, and setting it larger blurs the image more but will introduce artifacts at too high a value.
blitMaterial.SetVector ("_ChannelWeight", new Vector4 (0,0,0,1));
blitMaterial.SetVector ("_BlurDir", new Vector4 (0,1,0,0));
Graphics.Blit (icon, iconGlowPing, blitMaterial, 0 );
The next 2 variables being set are specific to the first pass. _ChannelWeight is like selecting which channel you want to output. Since we are using the same shader for both passes, we need a way to specify which channel to return in the end. I'm setting it to the alpha channel for the first pass because we want to blur the character icon's alpha. _BlurDir is where the "separable" part comes in. Think of it like the channel weight but for directions; here the direction is weighted for vertical blur because the first value is 0 (X) and the second value is 1 (Y). The last 2 numbers aren't used.
Finally it's time to blit an image. Since icon is the first variable passed in, it will automatically be mapped to the _MainTex variable in the shader. iconGlowPing is the texture where we want to store the result, blitMaterial is the material with the shader we are using to do the work, and 0 is the pass to use in said shader (0 is the first pass). This shader only has one pass, but blit shaders can have many passes to break up large amounts of work or to pre-process data for other passes.
blitMaterial.SetVector ("_ChannelWeight", new Vector4 (1,0,0,0));
blitMaterial.SetVector ("_BlurDir", new Vector4(1,0,0,0));
Graphics.Blit (iconGlowPing, iconGlowPong, blitMaterial, 0 );
Now the vertical blur is done and saved! We now need to change the _ChannelWeight to use the first/red channel. The render textures we are using only store the red channel so the alpha from the icon image is now in the red channel of iconGlowPing. We also need to change the _BlurDir variable to horizontal; 1 (X) and 0 (Y). Now we take the vertically blurred image and blur it horizontally, saving it to iconGlowPong.
Material thisMaterial = this.GetComponent<Renderer>().sharedMaterial;
thisMaterial.SetTexture ("_GlowTex", iconGlowPong);
Now we just get the material that is rendering the icon and tell it about the blurred image we just made.
Finally let's look at the shader the icon actually uses.
half4 frag ( v2f IN ) : COLOR {
half4 icon = tex2D (_MainTex, IN.uv) * _Color;
half glow = tex2D (_GlowTex, IN.uv).x;
glow = saturate( glow * _GlowAlpha );
icon.xyz = lerp( _GlowColor.xyz, icon.xyz, icon.w );
icon.w = saturate( icon.w + glow * _GlowColor.w );
return icon;
}
This is a pretty simple shader, and I have gone over other shaders before, so I am skipping right to the meat of it. Look up the main texture and tint it with a color if you want. Look up the red (only) channel of the glow texture. Multiply the glow by a parameter to expand it out, and saturate it so no values go out of the 0-1 range. Lerp the color of the icon with the color of the glow (set by a parameter) using the icon's alpha as a mask; this puts the glow color "behind" the icon. Add the glow (multiplied by the alpha of the _GlowColor parameter) to the alpha of the icon and saturate the result. Outputting alpha values outside of the 0-1 range will have weird effects when using HDR. And that's it! There is a nice glow with the intensity and color of your choice around the icon. To actually use this shader in a GUI menu you should probably copy one of the built-in GUI shaders and extend that.
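For reference, the fragment above assumes declarations along these lines (names taken from the code; the exact types are my guess):

sampler2D _MainTex;
sampler2D _GlowTex;
half4 _Color;
half4 _GlowColor;
half _GlowAlpha;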
Now that this simple primer is out of the way the next post will be about ping pong buffers!
Sunday, March 1, 2015
2.5D Sun In Depth Part 4: Heat Haze
Last part! The heat haze:
The heat haze is the green model in the image above. It just sits on top of everything and ties the whole thing together. It only uses 2 textures, one for distortion and one to mask out the glow and distortion. Both of these textures are small and uncompressed.
The Shader
This is a 3-pass shader. The first pass draws an orange glow over everything based on a mask. The second takes a snapshot of the frame as a whole and saves it to a texture for later. The third distorts that texture with the distortion texture and draws it back onto the screen.
Since this isn't a surface shader we need to do some extra things that would normally be taken care of.
CGINCLUDE
Instead of CGPROGRAM, like in the surface shaders, we use CGINCLUDE, which treats all of the programs (vertex and pixel) like includes that can be referenced from the passes later on. Those passes will have their own CGPROGRAM blocks.
#include "UnityCG.cginc"
This is a general library of functions that will be used later on. This particular library exists in the Unity program files directory, but you can write your own and put them in the same folder as your shaders.
Next is defining the vertex to pixel shader data structure.
struct v2f {
float4 pos : POSITION;
float2 uv : TEXCOORD0;
float4 screenPos : TEXCOORD1;
};
This is similar to the Input struct from the last shader, with the exception that you need to tell it which coordinates your variables are bound to. Unity supports 10 coordinates (I think): POSITION, which is a float4; COLOR, which is an 8-bit int (0-255) mapped to 0-1; and TEXCOORD0-7, which are all float4. This limits the amount of data you can pre-compute and send to the pixel shader, but in this shader we won't get anywhere near those limits. You can get 2 more interpolators with "#pragma target 3.0", but this will limit the shader to video cards supporting shader model 3 and higher.
sampler2D _GrabTexture;// : register(s0);
You won't find this texture sampler up with the properties because it can't be defined by hand. This is the texture that will hold the screen after the grab pass. The " : register(s0);" is apparently legacy, but I'll leave it in there commented out just in case.
v2f vertMain( appdata_full v ) {
v2f o;
o.pos = mul( UNITY_MATRIX_MVP, v.vertex );
o.screenPos = ComputeScreenPos(o.pos);
o.uv = v.texcoord.xy;
return o;
}
This is the vertex program for both of our passes. Unlike surface shaders, the entire struct needs to be filled out, and the vertex program has to return the v2f (vertex to fragment) struct instead of void. This one is pretty simple. The first line declares a new v2f variable. The second computes the clip space position of the vertex. The third computes and saves the interpolated screen position using a function from the UnityCG file. The fourth passes through the UV coordinates. And the fifth returns the data to be used in the pixel shader.
half4 fragMain ( v2f IN ) : COLOR {
half4 mainTex = tex2D (_MainTex, IN.uv);
mainTex *= _Color;
mainTex *= _Factor;
return mainTex;
}
This is going to be the first pass of the shader. It returns a half4 to the COLOR buffer which is just the screen. This is the orange glow shader and it just uses the mask and applies a _Color and _Factor parameter.
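As before, these excerpts assume the uniforms are declared in the CGINCLUDE block; something like the following, with types inferred from usage:

sampler2D _MainTex;
sampler2D _DistortTex;
half4 _Color;
half _Factor;
float4 _Tiling1;
float4 _Tiling2;
float4 _Tiling3;
half _Distortion;

Next, the distortion program: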
half4 fragDistort ( v2f IN ) : COLOR {
float2 screenPos = IN.screenPos.xy / IN.screenPos.w;
// FIXES UPSIDE DOWN
#if SHADER_API_D3D9
screenPos.y = 1 - screenPos.y;
#endif
float distFalloff = 1.0 / ( IN.screenPos.z * 0.1 );
half4 mainTex = tex2D( _MainTex, IN.uv );
half4 distort1 = tex2D( _DistortTex, IN.uv * _Tiling1.xy + _Time.y * _Tiling1.zw );
half4 distort2 = tex2D( _DistortTex, IN.uv * _Tiling2.xy + _Time.y * _Tiling2.zw );
half4 distort3 = tex2D( _DistortTex, IN.uv * _Tiling3.xy + _Time.y * _Tiling3.zw );
half2 distort = ( ( distort1.xy + distort2.yz + distort3.zx ) - 1.5 ) * 0.01 * _Distortion * distFalloff * mainTex.x;
screenPos += distort;
half4 final = tex2D( _GrabTexture, screenPos );
final.w = mainTex.w;
return final;
}
This is the distort shader; it also returns a half4 and applies it to the color buffer.
float2 screenPos = IN.screenPos.xy / IN.screenPos.w;
In this shader we use the screen position that was sent out of the vertex shader earlier. This line makes a new float2 variable, screenPos, by dividing xy by w (the perspective divide) to get the texture coordinate of the screen.
// FIXES UPSIDE DOWN
#if SHADER_API_D3D9
screenPos.y = 1 - screenPos.y;
#endif
On older hardware (shader model 2) the screen can be upside down. This block checks for that case and flips screenPos. It should work in theory; I don't have old hardware to test it on though, and it doesn't seem to work with Graphics Emulation. :/
float distFalloff = 1.0 / ( IN.screenPos.z * 0.1 );
Since distortion is a screen space effect it needs to be scaled down based on how far away it is from the camera. This algorithm worked for the current situation but it is dependent on things like camera fov so you may have to change it to something else for your purposes.
half4 mainTex = tex2D( _MainTex, IN.uv );
We are using the main texture again as a mask.
half4 distort1 = tex2D( _DistortTex, IN.uv * _Tiling1.xy + _Time.y * _Tiling1.zw );
half4 distort2 = tex2D( _DistortTex, IN.uv * _Tiling2.xy + _Time.y * _Tiling2.zw );
half4 distort3 = tex2D( _DistortTex, IN.uv * _Tiling3.xy + _Time.y * _Tiling3.zw );
Sample the distortion texture 3 times with different tiling and scrolling parameters to ensure that we don't see any repeating patterns in our distortion.
half2 distort = ( ( distort1.xy + distort2.yz + distort3.zx ) - 1.5 ) * 0.01 * _Distortion * distFalloff * mainTex.x;
Add the three distortion samples together, swizzling the channels to further avoid repeating patterns. Subtract 1.5 to center the distortion around zero (each sample's channels are centered around 0.5, so three of them sum to around 1.5). Then multiply by a super low number unless you want crazy distortion, by the _Distortion parameter, by the distFalloff variable, and by the mask.
screenPos += distort;
Now just add the distortion to the screenPos.
half4 final = tex2D( _GrabTexture, screenPos );
Sample the _GrabTexture, which is the screen texture, with the screenPos as texture coordinates.
final.w = mainTex.w;
Set the alpha to the mask's alpha.
return final;
And done! Throw an ENDCG in there and the CGINCLUDE is finished, ready to be used in the passes.
Subshader {
Tags {"Queue" = "Transparent" }
Now we are going to start using these programs in passes. First set up the subshader and set the queue to transparent.
Pass {
ZTest LEqual
ZWrite Off
Blend SrcAlpha One
CGPROGRAM
#pragma fragmentoption ARB_precision_hint_fastest
#pragma vertex vertMain
#pragma fragment fragMain
ENDCG
}
The first pass is the orange glow. Most of this stuff has been covered. The new stuff is after CGPROGRAM.
#pragma fragmentoption ARB_precision_hint_fastest
Makes the shader faster but less precise. Good for a full screen effect like distortion and this simple glow.
#pragma vertex vertMain
Specifies the vertex program to use for this pass.
#pragma fragment fragMain
Specifies the pixel shader to use for this pass... And that's it! All the vertex shaders, pixel shaders, and other stuff were defined in the CGINCLUDE above.
GrabPass {
Name "BASE"
Tags { "LightMode" = "Always" }
}
The second pass grabs the screen and saves it to that _GrabTexture parameter. It must be all implicit because that's all there is to it.
Pass {
ZTest LEqual
ZWrite Off
Fog { Mode off }
Blend SrcAlpha OneMinusSrcAlpha
CGPROGRAM
#pragma fragmentoption ARB_precision_hint_fastest
#pragma vertex vertMain
#pragma fragment fragDistort
ENDCG
}
The last pass is the distortion. The only new tag is Fog { Mode off } which turns off fog for this pass. Everything will already have been fogged appropriately and having fog on your distortion pass would essentially double the amount of fog on everything behind it.
#pragma vertex vertMain
The vertex program vertMain is shared between this pass and the first pass.
#pragma fragment fragDistort
Set this pass to use the distortion program as its pixel shader. Throw in some closing brackets and we're done.
Final sun without and with the heat haze shader
And so we have reached the end of the Sun Shader Epic. I hope you enjoyed it and maybe learned something, but if not that's ok too.