Monday, July 16, 2018

Lucky Swooshes

While working on Super Lucky's Tale, one thing I thought was solved in a pretty cool way was the swooshes. Swooshes are used for collecting coins, spawning certain enemies, and Lucky's tail swipe effect. For this type of effect you want a fire-and-forget solution, and you also want it to be super predictable: it should do the same thing every time and keep a consistent look over its lifetime. An obvious approach would be to attach a trail to an object that moves toward a target. There are a few issues that can arise with this approach, though. If the target moves, the trail can end up with a funny shape, and it can also be difficult to figure out exactly when the swoosh will arrive at the target, which doesn't help with timing when to spawn things.
The best solution turned out to be a mesh that was a strip of polygons with its beginning and end at 0,0,0, with a trail texture that scrolled across it. A script on the swoosh object told the shader where the target was, and the vertex shader moved the end of the swoosh over to the target's position. The script could also orient the mesh to face the target. This allowed for lots of variation in the shape of the swooshes and also guaranteed that the swoosh would reach the target exactly when it was supposed to and always have the intended shape.
A swoosh model could be made with all kinds of twisting ribbons and then deformed along a path to make all kinds of fun shapes.

The important part of the vertex shader that moves the end of the swoosh is below:
// For screen swooshes smoosh the mesh flat on the Y axis
v.vertex.y *= 1.0 - _ScreenSquish;

// Add some random offset on the X and Z axis for screen swooshes
float2 divergence = _Divergence.xy * saturate( sin ( v.uv.x * UNITY_PI ) );
v.vertex.xz += divergence * _ScreenSquish;

// Find the start position of the swoosh
float3 worldOrigin = mul( unity_ObjectToWorld, float4(0,0,0,1) ).xyz;

// Now figure out the end position relative to the start position
float3 endOffset = _TargetPos - worldOrigin;

// Get the world position of the vertex
float3 worldPos = mul( unity_ObjectToWorld, v.vertex ).xyz;

// Add the end offset to the vertex world position masked by the uv coordinates
worldPos += endOffset * v.uv.x;

// Transform the world position to screen position
o.vertex = mul(UNITY_MATRIX_VP, float4(worldPos,1));

// Smoosh the swoosh against the screen if it is a screen swoosh
o.vertex.z = lerp( o.vertex.z, o.vertex.w, _ScreenSquish * 0.99 );
_ScreenSquish is a float from 0-1 that is passed in from script, telling the swoosh if it should be pressed against the screen, like the coin collect swooshes. This keeps it from being occluded by any opaque geometry while still being attached to a point in the world: it still follows a world space position, but that position is pinned to the screen.
_Divergence is a float2 passed in from script that adds some offset to the middle of the swoosh so screen swooshes don't overlap and each follows a bit of a random path.

The pixel shader is pretty simple; here's the basic swoosh texture lookup with a bit of fade out on either end:
// texture coords for swoosh texture
float2 swooshUV = saturate( IN.uv * _Tiling.xy + float2( lerp( _Tiling.z, _Tiling.w, _Ramp ), 0 ) );
half4 col = tex2D(_MainTex, swooshUV ) * _Color;

// fade at the start and end of the swoosh
half edgeFade = saturate( ( 1.0 - abs( IN.uv.x * 2 - 1 ) ) * (1.0 / _FadeInOut ) );
edgeFade = smoothstep(0,1,edgeFade);

// multiply together
col *= edgeFade;
_Ramp is passed in from script to control the swoosh travel progression.
_Tiling is set in the material and allows for control of the length of the swoosh and how _Ramp affects the swoosh travel.
_FadeInOut lets you set how much of the ends of the swoosh to fade out.

A material property block can be used to send the information right to the swoosh renderer without messing with the materials at all like so:
MaterialPropertyBlock MPB = new MaterialPropertyBlock ();
Renderer thisRenderer = GetComponent<Renderer> ();

if (targetScreen) {
 MPB.SetFloat ("_ScreenSquish", 1.0f);
 MPB.SetVector ("_Divergence", new Vector2 (Random.Range (-screenDivergence, screenDivergence), Random.Range (-screenDivergence, screenDivergence)));
}

MPB.SetFloat("_Ramp", ramp);

thisRenderer.SetPropertyBlock (MPB);
Now a swoosh can be spawned anywhere. Its script drives _Ramp over a specified time, so you know exactly when it will reach the end and what it will look like along the way.
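Here's a minimal sketch of what such a driver script could look like. This is illustrative rather than the exact production script; the field names, the linear ramp, and the one-shot destroy at the end are all assumptions:

using UnityEngine;

// Minimal swoosh driver sketch: drives _Ramp from 0 to 1 over a fixed
// duration and keeps _TargetPos current so the swoosh always lands on target.
public class SwooshDriver : MonoBehaviour {

 public Transform target;        // where the swoosh should land
 public float swooshTime = 0.5f; // total travel time in seconds

 private Renderer thisRenderer;
 private MaterialPropertyBlock MPB;
 private float ramp = 0.0f;

 void Start () {
  thisRenderer = GetComponent<Renderer> ();
  MPB = new MaterialPropertyBlock ();
  // face the target so the authored mesh shape stays predictable
  transform.rotation = Quaternion.LookRotation (target.position - transform.position);
 }

 void Update () {
  ramp += Time.deltaTime / swooshTime;
  MPB.SetVector ("_TargetPos", target.position);
  MPB.SetFloat ("_Ramp", Mathf.Clamp01 (ramp));
  thisRenderer.SetPropertyBlock (MPB);
  if (ramp >= 1.0f) { Destroy (gameObject); } // fire and forget
 }
}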
Even when the camera moves, a screen swoosh will always start at its world position and end at the screen position; the shape stays smooth and the movement fluid.
Lucky's tail swipe uses the same shader with an extra texture overlay to make it look more wispy. The tail swoosh is spawned at Lucky's position and rotation and then the end position is just updated to be Lucky's current position.
If Lucky is jumping, the tail swoosh will follow him in the air while maintaining its smooth shape. It's a subtle effect but helps tie it to Lucky.
But we're not done yet. You can get really fancy with a geometry shader by turning each triangle into a little particle. Here, one of the swoosh ribbons has the swoosh particle shader on it. This shader turns each triangle into its own quad and gives it some movement over its lifetime. Because this is just a shader, you can play the whole effect backwards. And like the regular swooshes, the particle swooshes update their positions when the target moves.
How exactly that all works may be a post for another day though.

Monday, July 9, 2018

Dark and Stormy


This repo is available on GitHub: github.com/SquirrelyJones/DarkAndStormy
In this post I'll break down some of what's going on in this funky skybox shader for Unity. Some of the techniques used are Flow Mapping, Steep Parallax Mapping, and Front To Back Alpha Blending. There are plenty of resources that go over these techniques in detail, and this post is more about using those techniques to make something cool than it is about thoroughly explaining each one. It should also be noted that this shader is not optimized and is structured for easier readability.

The Textures

This is the main cloud layer.
This is the flow map generated from the main cloud layer. The red and green channels are similar to a blurry normal map (OpenGL style) generated from the main cloud layer's height. The smaller clouds will flow outward from the thicker parts of the main clouds. The blue channel is a mask for places where the flow may pinch by pushing a lot of the cloud texture into a small area. This doesn't look great on clouds, so we want to mask out places where this will occur.
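If you'd rather derive a flow map like this in code than paint or bake it in an external tool, a rough sketch is below. This is my own approximation, not the exact process used for this texture; in particular the pinch mask and the sign conventions depend on how the shader reads the map:

using UnityEngine;

// Rough flow-map derivation sketch (an approximation): red/green pack the
// blurred height gradient like an OpenGL-style normal map, blue masks out
// likely pinch points where the gradient is strong.
public static class FlowMapBaker {
 public static Color[] DeriveFlowMap (float[,] height, int w, int h) {
  Color[] flow = new Color[w * h];
  for (int y = 0; y < h; y++) {
   for (int x = 0; x < w; x++) {
    // central differences on the (pre-blurred) height field, wrapping at edges
    float dx = height[(x + 1) % w, y] - height[(x - 1 + w) % w, y];
    float dy = height[x, (y + 1) % h] - height[x, (y - 1 + h) % h];
    float r = dx * 0.5f + 0.5f;
    float g = dy * 0.5f + 0.5f;
    // fade blue toward 0 where strong flow would pinch the texture
    float b = 1.0f - Mathf.Clamp01 (Mathf.Sqrt (dx * dx + dy * dy) * 4.0f);
    flow[y * w + x] = new Color (r, g, b, 1.0f);
   }
  }
  return flow;
 }
}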
This is the second cloud layer, it will add detail to the large clouds and be distorted by the large clouds flow map.
This is the wave distortion map. This distorts all the clouds and gives an ocean wave feel to the motion.
The wave distortion map was generated in Substance Designer using a cellular pattern with heavy anisotropic blur applied. The blur direction should be perpendicular to the direction the texture will scroll to give it a proper wavy look.
The last texture is the Upper color that will show through the clouds.

The Shader

Shader "Skybox/Clouds"
{
 Properties
 {
  [NoScaleOffset] _CloudTex1 ("Clouds 1", 2D) = "white" {}
  [NoScaleOffset] _FlowTex1 ("Flow Tex 1", 2D) = "grey" {}
  _Tiling1("Tiling 1", Vector) = (1,1,0,0)

  [NoScaleOffset] _CloudTex2 ("Clouds 2", 2D) = "white" {}
  [NoScaleOffset] _Tiling2("Tiling 2", Vector) = (1,1,0,0)
  _Cloud2Amount ("Cloud 2 Amount", float) = 0.5
  _FlowSpeed ("Flow Speed", float) = 1
  _FlowAmount ("Flow Amount", float) = 1

  [NoScaleOffset] _WaveTex ("Wave", 2D) = "white" {}
  _TilingWave("Tiling Wave", Vector) = (1,1,0,0)
  _WaveAmount ("Wave Amount", float) = 0.5
  _WaveDistort ("Wave Distort", float) = 0.05

  _CloudScale ("Clouds Scale", float) = 1.0
  _CloudBias ("Clouds Bias", float) = 0.0

  [NoScaleOffset] _ColorTex ("Color Tex", 2D) = "white" {}
  _TilingColor("Tiling Color", Vector) = (1,1,0,0)
  _ColPow ("Color Power", float) = 1
  _ColFactor ("Color Factor", float) = 1

  _Color ("Color", Color) = (1.0,1.0,1.0,1)
  _Color2 ("Color2", Color) = (1.0,1.0,1.0,1)

  _CloudDensity ("Cloud Density", float) = 5.0

  _BumpOffset ("BumpOffset", float) = 0.1
  _Steps ("Steps", float) = 10

  _CloudHeight ("Cloud Height", float) = 100
  _Scale ("Scale", float) = 10

  _Speed ("Speed", float) = 1

  _LightSpread ("Light Spread PFPF", Vector) = (2.0,1.0,50.0,3.0)
 }
All the properties that can be played with.
 SubShader
 {
  Tags { "RenderType"="Opaque" }
  LOD 100

  Pass
  {
   CGPROGRAM
   #pragma vertex vert
   #pragma fragment frag
   
   #include "UnityCG.cginc"
   #define SKYBOX
   #include "FogInclude.cginc"
There is a custom include file that has a poor man's height fog and integrates directional light color. The terrain shader also uses the same fog to keep things cohesive.
   sampler2D _CloudTex1;
   sampler2D _FlowTex1;
   sampler2D _CloudTex2;
   sampler2D _WaveTex;

   float4 _Tiling1;
   float4 _Tiling2;
   float4 _TilingWave;

   float _CloudScale;
   float _CloudBias;

   float _Cloud2Amount;
   float _WaveAmount;
   float _WaveDistort;
   float _FlowSpeed;
   float _FlowAmount;

   sampler2D _ColorTex;
   float4 _TilingColor;

   float4 _Color;
   float4 _Color2;

   float _CloudDensity;

   float _BumpOffset;
   float _Steps;

   float _CloudHeight;
   float _Scale;
   float _Speed;

   float4 _LightSpread;

   float _ColPow;
   float _ColFactor;
Just declaring all the property variables to be used.
   struct v2f
   {
    float4 vertex : SV_POSITION;
    float3 worldPos : TEXCOORD0; 
   };

   
   v2f vert (appdata_full v)
   {
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.worldPos = mul( unity_ObjectToWorld, v.vertex ).xyz;
    return o;
   }
The vertex shader is pretty lightweight; we just need the world position for the pixel shader.
   float rand3( float3 co ){
       return frac( sin( dot( co.xyz ,float3(17.2486,32.76149, 368.71564) ) ) * 32168.47512);
   }
We'll need a random number for some noise. This will generate a random number based on a float3.
   half4 SampleClouds ( float3 uv, half3 sunTrans, half densityAdd ){

    // wave distortion
    float3 coordsWave = float3( uv.xy *_TilingWave.xy + ( _TilingWave.zw * _Speed * _Time.y ), 0.0 );
    half3 wave = tex2Dlod( _WaveTex, float4(coordsWave.xy,0,0) ).xyz;
The wave texture is sampled first; it will distort the rest of the coordinates like a Gerstner wave. In all the _Tiling parameters, .xy is the tiling scale and .zw is the scrolling speed. All scrolling is multiplied by the global _Speed variable so the overall speed of the skybox is easy to adjust.
    // first cloud layer
    float2 coords1 = uv.xy * _Tiling1.xy + ( _Tiling1.zw * _Speed * _Time.y ) + ( wave.xy - 0.5 ) * _WaveDistort;
    half4 clouds = tex2Dlod( _CloudTex1, float4(coords1.xy,0,0) );
    half3 cloudsFlow = tex2Dlod( _FlowTex1, float4(coords1.xy,0,0) ).xyz;
Using the red and green channels of the wave texture (xy), distort the uv coordinates for the first cloud layer. Also sample the cloud flow texture with the same coordinates.
    // set up time for second clouds layer
    float speed = _FlowSpeed * _Speed * 10;
    float timeFrac1 = frac( _Time.y * speed );
    float timeFrac2 = frac( _Time.y * speed + 0.5 );
    float timeLerp  = abs( timeFrac1 * 2.0 - 1.0 );
    timeFrac1 = ( timeFrac1 - 0.5 ) * _FlowAmount;
    timeFrac2 = ( timeFrac2 - 0.5 ) * _FlowAmount;
This is a standard setup for flow mapping: two time phases offset by half a cycle, each recentered and scaled by _FlowAmount, and a lerp value that crossfades between them so the distortion can reset without a visible pop.

    // second cloud layer uses flow map
    float2 coords2 = coords1 * _Tiling2.xy + ( _Tiling2.zw * _Speed * _Time.y );
    half4 clouds2 = tex2Dlod( _CloudTex2, float4(coords2.xy + ( cloudsFlow.xy - 0.5 ) * timeFrac1,0,0)  );
    half4 clouds2b = tex2Dlod( _CloudTex2, float4(coords2.xy + ( cloudsFlow.xy - 0.5 ) * timeFrac2 + 0.5,0,0)  );
    clouds2 = lerp( clouds2, clouds2b, timeLerp);
    clouds += ( clouds2 - 0.5 ) * _Cloud2Amount * cloudsFlow.z;
The second cloud layer's coordinates start from the first cloud layer's coordinates so the second layer stays relative to the first. Sample the second cloud layer twice using the flow map to distort the coordinates, blend the two samples, and add the result to the base cloud layer, masked by the flow map's blue channel.
    // add wave to cloud height
    clouds.w += ( wave.z - 0.5 ) * _WaveAmount;
Add the wave texture's blue channel to the cloud height.
    // scale and bias clouds because we are adding lots of stuff together
     // and the values could go outside the 0-1 range
    clouds.w = clouds.w * _CloudScale + _CloudBias;
Since everything is just getting added together there is the possibility that the values could go outside of the 0-1 range. If things look weird we can manually scale and bias the final value back into a more reasonable range.
    // overhead light color
    float3 coords4 = float3( uv.xy * _TilingColor.xy + ( _TilingColor.zw * _Speed * _Time.y ), 0.0 );
    half4 cloudColor = tex2Dlod( _ColorTex, float4(coords4.xy,0,0)  );
Sample the overhead light color texture.
    // cloud color based on density
    half cloudHightMask = 1.0 - saturate( clouds.w );
    cloudHightMask = pow( cloudHightMask, _ColPow );
    clouds.xyz *= lerp( _Color2.xyz, _Color.xyz * cloudColor.xyz * _ColFactor, cloudHightMask );
Using the cloud height (the alpha channel of the clouds), lerp between the two colors and multiply with the overall cloud color. The power function is used to adjust the tightness of the "cracks" in the clouds that let light through.
    // subtract alpha based on height
    half cloudSub = 1.0 - uv.z;
    clouds.w = clouds.w - cloudSub * cloudSub;
Subtract the inverted uv z position from the cloud height. This gives us the cloud density at the current height.
    // multiply density
    clouds.w = saturate( clouds.w * _CloudDensity );
Multiply the density by the _CloudDensity variable to control the softness of the clouds.
    // add extra density
    clouds.w = saturate( clouds.w + densityAdd );
Add any extra density if needed. This variable is passed in and is 0, except for the final sample, where it is 1.
    // add Sunlight
    clouds.xyz += sunTrans * cloudHightMask;
Add in the sun gradients masked by the cloud height mask.
    // pre-multiply alpha
    clouds.xyz *= clouds.w;
The front to back alpha blending function needs the alpha to be pre-multiplied.
    return clouds;
   }
This is the main function for sampling the clouds. The pixel shader will loop over this function.
   fixed4 frag (v2f IN) : SV_Target
   {
     // generate a view direction from the world position of the skybox mesh
    float3 viewDir = normalize( IN.worldPos - _WorldSpaceCameraPos );

    // get the falloff to the horizon
    float viewFalloff = 1.0 - saturate( dot( viewDir, float3(0,1,0) ) );

    // Add some up vector to the horizon to pull the clouds down
    float3 traceDir = normalize( viewDir + float3(0,viewFalloff * 0.1,0) );
We can get the view direction by subtracting the camera position from the world position and normalizing the result. "traceDir" is the direction that will be used to generate the cloud uvs. It is just the view direction with a little bit of "up" added at the horizon. This adds a little bit of bend to the clouds, like they are curving around the planet, and keeps them from sprawling off into infinity at the horizon and causing all kinds of artifacts.
    // Generate uvs from the world position of the sky
    float3 worldPos = _WorldSpaceCameraPos + traceDir * ( ( _CloudHeight - _WorldSpaceCameraPos.y ) / max( traceDir.y, 0.00001) );
    float3 uv = float3( worldPos.xz * 0.01 * _Scale, 0 );
Use the camera position plus the trace direction to get a world position for the cloud layer. This way the clouds will react to the camera moving; just make sure not to move the camera up through the clouds, or things get weird. Then make the uvs for the clouds from the world position, multiplying by the global scale variable for easy adjusting.
    // Make a spot for the sun, make it brighter at the horizon
    float lightDot = saturate( dot( _WorldSpaceLightPos0, viewDir ) * 0.5 + 0.5 );
    half3 lightTrans = _LightColor0.xyz * ( pow( lightDot,_LightSpread.x ) * _LightSpread.y + pow( lightDot,_LightSpread.z ) * _LightSpread.w );
    half3 lightTransTotal = lightTrans * pow(viewFalloff, 5 ) * 5.0 + 1.0;
Using the dot product of the first directional light's direction and the view direction, get a gradient in the direction of the sun. Then use power to tighten up the gradient to your liking. This is the light from the sun that will shine through the back of the clouds. The _LightSpread parameter has the power and factor for the two sun gradients that get added together for better control.
    // Figure out how for to move through the uvs for each step of the parallax offset
    half3 uvStep = half3( traceDir.xz * _BumpOffset * ( 1.0 / traceDir.y ), 1.0 ) * ( 1.0 / _Steps );
    uv += uvStep * rand3( IN.worldPos + _SinTime.w );
Standard steep parallax uv step amount. This is how far through the uvs and the cloud height we move with each sample. Then the starting uv is jittered a bit with a random value per pixel to keep it from looking like flat layers.
    // initialize the accumulated color with fog
    half4 accColor = FogColorDensitySky(viewDir);
    half4 clouds = 0;
    [loop]for( int j = 0; j < _Steps; j++ ){
     // if we filled the alpha then break out of the loop
     if( accColor.w >= 1.0 ) { break; }

     // add the step offset to the uv
     uv += uvStep;

     // sample the clouds at the current position
     clouds = SampleClouds(uv, lightTransTotal, 0.0 );

     // add the current cloud color with front to back blending
     accColor += clouds * ( 1.0 - accColor.w );
    }
Start by getting the fog at the starting point. This creates an early-out opportunity for the loop since we don't need to sample clouds once the accumulated color is fully opaque. Then iterate over the clouds, moving the uv with each iteration and adding the clouds to the accumulated color using front-to-back alpha blending.
    // one last sample to fill gaps
    uv += uvStep;
    clouds = SampleClouds(uv, lightTransTotal, 1.0 );
    accColor += clouds * ( 1.0 - accColor.w );
Once we have iterated over the entire cloud volume do one last sample without testing against the cloud height to fill in any holes from cloud values that didn't fit inside the volume.
    // return the color!
    return accColor;
   }
   ENDCG
  }
 }
}
Then return the color and we're done!

Thursday, June 7, 2018

Opaque Active Camouflage Part 1

In this post I will show how to implement an active camouflage technique using the previous frame buffer similar to the effect used in Ghost Recon: Future Soldier.  This example will be done in Unity but the principles are the same for implementing it in any engine.

What is great about this technique is that you can apply it to any object in the scene and it will just sort of blend in with what's around it. You don't actually see through the object, so it also obscures whatever is behind it, making it great for "invisibility cloaks" and whatnot.

The full repo with example assets can be found HERE

There are 4 main steps to this effect:
1: A command buffer for grabbing the frame buffer at the right time
2: A command buffer for drawing the active camo version of the objects over top of the original objects.
3: A script that tells objects they should be drawn with active camo.
4: The active camo shader itself.

1. Grabbing the frame Buffer


To grab the frame buffer we will use a command buffer on the main camera. I'm making a new script called FrameGrabCommandBuffer.
using UnityEngine;
using UnityEngine.Rendering;

[ExecuteInEditMode]
[RequireComponent (typeof(Camera))]
public class FrameGrabCommandBuffer : MonoBehaviour {

 private CommandBuffer rbFrame;
 [SerializeField]
 private CameraEvent rbFrameQueue = CameraEvent.AfterForwardAlpha;
Start by declaring a new Command Buffer and a new CameraEvent; I've serialized it so you can see the results of different events. AfterForwardAlpha will cause this command buffer to execute after all the transparent stuff has drawn but before any canvas UI is drawn. Special in-world UI may need to go into its own command buffer.
 public RenderTexture lastFrame;
 public RenderTexture lastFrameTemp;
 private RenderTargetIdentifier lastFrameRTI;
We need a render texture for the last frame, and a temporary texture in case the screen is resized and we need to make a new lastFrame texture.  Command buffers use RenderTargetIdentifier instead of the RenderTexture so we will need one for the last frame texture.
 private int screenX = 0;
 private int screenY = 0;
 private Camera thisCamera;
screenX and screenY store the current size of the camera and thisCamera is the camera the script is attached to.
 void OnEnable() {

  thisCamera = GetComponent<Camera> ();

  rbFrame = new CommandBuffer();
  rbFrame.name = "FrameCapture";
  thisCamera.AddCommandBuffer(rbFrameQueue, rbFrame);

  RebuildCBFrame ();

  Shader.SetGlobalFloat( "_GlobalActiveCamo", 1.0f );
 }
When this script is enabled we want to set thisCamera, initialize the command buffer, and apply it to the camera.
RebuildCBFrame () is the function that will actually build the command buffer but an empty buffer can be applied to the camera and updated later.
Set a global shader value to let all the shaders know that the active camo is ready to go!
 void OnDisable() {

  if (rbFrame != null) {
   thisCamera.RemoveCommandBuffer(rbFrameQueue, rbFrame);
   rbFrame = null;
  }

  if (lastFrame != null) {
   lastFrame.Release();
   lastFrame = null;
  }

  Shader.SetGlobalFloat( "_GlobalActiveCamo", 0.0f );
 }
When the script is disabled we want to remove the command buffer from the camera and clean up the last frame texture to avoid memory leaks.  Also inform the shaders that there is no more active camo.  Next build the actual command buffer.
 void RebuildCBFrame() {

  rbFrame.Clear ();
First clear it in case there are any instructions in it since this function may be called from time to time.
  if (lastFrame != null) {
   lastFrameTemp = RenderTexture.GetTemporary(lastFrame.width, lastFrame.height, 0, RenderTextureFormat.DefaultHDR);
   Graphics.Blit (lastFrame, lastFrameTemp);
   lastFrame.Release();
   lastFrame = null;
  }
If the last frame texture already exists that means the screen size has changed so we need to store the existing last frame in a temp texture to copy later.  Then release the last frame texture to free up its memory.
  screenX = thisCamera.pixelWidth;
  screenY = thisCamera.pixelHeight;
Store the current width and height of the camera; this is used later to check if the camera has been resized.
  lastFrame = new RenderTexture(screenX/2, screenY/2, 0, RenderTextureFormat.DefaultHDR);
  lastFrame.wrapMode = TextureWrapMode.Clamp;
  lastFrame.Create ();
  lastFrameRTI = new RenderTargetIdentifier(lastFrame);
Make a new render texture for the last frame.  Half of the screen resolution is enough for this effect. The wrap mode should be clamp so that when the texture is being distorted by the shader it won't pull in things from the other side of the screen.  Lastly create the render texture and get the render target identifier for it.
  if (lastFrameTemp != null) {
   Graphics.Blit (lastFrameTemp, lastFrame);
   RenderTexture.ReleaseTemporary (lastFrameTemp);
   lastFrameTemp = null;
  }
If the temp last frame texture exists that means we need to copy it to the new last frame texture we just made.  A standard Graphics.Blit will do.  Then release the temp texture and null it.
  Shader.SetGlobalTexture ("_LastFrame", lastFrame);
Inform all the shaders what texture they will be using for their active camo.
  RenderTargetIdentifier cameraTargetID = new RenderTargetIdentifier(BuiltinRenderTextureType.CameraTarget);
  rbFrame.Blit(cameraTargetID, lastFrameRTI);
 }
These are the actual command buffer instructions. Get the render target identifier of the camera and blit it to the last frame render target identifier. Pretty simple.
 void OnPreRender(){

  if (screenX != thisCamera.pixelWidth || screenY != thisCamera.pixelHeight) {
   RebuildCBFrame ();
  }
 }
}
Last but not least, before the camera renders, check to see if the screen size has changed.  If it has, rebuild the command buffer.

2. Drawing the Active Camo Objects


Now we want to set up the command buffer that will render the active camo objects. Make a new script called ActiveCamoCommandBuffer.
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

public class ActiveCamoObject {
 public Renderer renderer;
 public Material material;
}
The first thing we need is a class that will hold the renderer that we want to draw and the material we want to use to draw it.
[ExecuteInEditMode]
[RequireComponent (typeof(Camera))]
public class ActiveCamoCommandBuffer : MonoBehaviour {

 public static ActiveCamoCommandBuffer instance;

 private CommandBuffer rbDrawAC;
 [SerializeField]
 private CameraEvent rbDrawACQueue = CameraEvent.AfterForwardOpaque;

 private HashSet<ActiveCamoObject> acObjects = new HashSet<ActiveCamoObject>();
 private Camera thisCamera;
 private bool updateActiveCamoCB = false;
There needs to be a static instance of this script so that later on each active camo object can tell this script that it needs to be drawn. Also make a new command buffer and a new camera event. AfterForwardOpaque will happen after all of the opaque things have been drawn and before the transparent things get drawn. The hash set of active camo objects will be iterated over to draw each object. thisCamera is just the camera the script is attached to. updateActiveCamoCB is the variable we will use to see if the command buffer needs to be rebuilt.
 void Awake(){
  ActiveCamoCommandBuffer.instance = this;
 }
The first thing that needs to happen is the instance needs to be set. Awake() is the first thing that gets called so it is an ideal place for setting instances.
 void OnEnable() {
  thisCamera = GetComponent<Camera> ();

  rbDrawAC = new CommandBuffer();
  rbDrawAC.name = "DrawActiveCamo";
  thisCamera.AddCommandBuffer(rbDrawACQueue, rbDrawAC);
  updateActiveCamoCB = true;
 }
When the script is enabled it should set the camera variable, create the command buffer, add it to the camera, and set the variable letting us know that the command buffer should be updated. The reason we don't rebuild the command buffer immediately is because something else may change that also requires a command buffer rebuild.
 void OnDisable() {
  if (rbDrawAC != null) {
   thisCamera.RemoveCommandBuffer(rbDrawACQueue, rbDrawAC);
   rbDrawAC = null;
  }
 }
When the script is disabled it should remove the command buffer from the camera.
 public void AddRenderer( ActiveCamoObject newObject ) {
  acObjects.Add (newObject);
  updateActiveCamoCB = true;
 }

 public void RemoveRenderer( ActiveCamoObject newObject ) {
  acObjects.Remove (newObject);
  updateActiveCamoCB = true;
 }
These two public functions will add/remove the ActiveCamoObject passed to them to/from the hash set of active camo objects. Whenever there is a change to the hash set the command buffer needs to be rebuilt.
 void RebuildCBActiveCamo(){
  rbDrawAC.Clear ();
  foreach( ActiveCamoObject acObject in acObjects ){
   rbDrawAC.DrawRenderer(acObject.renderer, acObject.material);
  }
  updateActiveCamoCB = false;
 }
This function actually rebuilds the command buffer. First clear the buffer, then draw each renderer with its material from the acObjects hash set. Finally, set updateActiveCamoCB to false.
 void OnPreRender(){
  if (updateActiveCamoCB) {
   RebuildCBActiveCamo ();
  }
 }
}
The last step is to check if the command buffer needs to be rebuilt and if so rebuild it.

3. Per Object Active Camo Script


Make a new script called ActiveCamoRenderer that will be applied to any game object with a Renderer component that should get active camo.
using UnityEngine;

public class ActiveCamoRenderer : MonoBehaviour {

 private Renderer thisRenderer;
 [SerializeField]
 private Material ActiveCamoMaterial;
 private MaterialPropertyBlock MPB;
 private ActiveCamoObject acObject;
 [HideInInspector]
 public float ActiveCamoRamp = 0.0f;
We need a variable for the renderer, an exposed variable for the active camo material, a material property block that will control the active camo material, the ActiveCamoObject that will get sent to the command buffer script, and a public float variable that will be used to control the active camo material from another controller script.
 void Start(){
  MPB = new MaterialPropertyBlock ();
  thisRenderer = GetComponent<Renderer> ();
  acObject = new ActiveCamoObject();
  acObject.renderer = thisRenderer;
  acObject.material = ActiveCamoMaterial;
 }
When the script starts it needs to initialize the property block, get the renderer, create the ActiveCamoObject and assign the renderer and material to it.
 void OnBecameVisible(){
  ActiveCamoCommandBuffer.instance.AddRenderer (acObject);
 }

 void OnBecameInvisible() {
  ActiveCamoCommandBuffer.instance.RemoveRenderer (acObject);
 }
OnBecameVisible and OnBecameInvisible are functions that get called by Unity when the object first becomes visible on screen and when it first stops being visible on screen, respectively. When the object becomes visible we want to add the ActiveCamoObject to the ActiveCamoCommandBuffer, and remove it when it becomes invisible.
 void Update () {
  MPB.SetFloat ("_ActiveCamoRamp", ActiveCamoRamp);
  thisRenderer.SetPropertyBlock (MPB);
 }
}
Each frame, set the _ActiveCamoRamp shader variable on the material property block and then apply the block to the renderer. The MaterialPropertyBlock lets us use the same material for multiple objects but still control the material on a per-renderer basis.
The last thing we need is a script that we can drop onto a character and control all the active camo objects on that character at once. So make a new script called ActiveCamoController.
using UnityEngine;

public class ActiveCamoController : MonoBehaviour {

 [SerializeField]
 private ActiveCamoRenderer[] activeCamoRenderers;

 [SerializeField]
 [Range (0f,1f)]
 private float ActiveCamoRamp = 0.0f;
 
 // Update is called once per frame
 void Update () {
  for (int i = 0; i < activeCamoRenderers.Length; i++) {
   activeCamoRenderers [i].ActiveCamoRamp = ActiveCamoRamp;
  }
 }
}
This script just loops over all ActiveCamoRenderers and changes their ActiveCamoRamp variable.

4. The Active Camo Shader


This shader will use a texture to add some random flow to the active camo. This distortion texture is just 256x256 Photoshop clouds. Import settings should have compression set to none (because it is a small effects texture it is important that it be free of compression artifacts) and sRGB sampling unchecked (because it will be used for distorting texture coordinates).
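If you want to enforce those import settings from code instead of clicking through the inspector, a small editor-only sketch might look like this (the asset path is a placeholder):

#if UNITY_EDITOR
using UnityEditor;

// Editor-only sketch that applies the import settings described above.
public static class DistortTexImportFix {
 [MenuItem ("Tools/Fix Distort Texture Import")]
 static void Fix () {
  string path = "Assets/Textures/DistortClouds.png"; // placeholder path
  TextureImporter importer = (TextureImporter)AssetImporter.GetAtPath (path);
  importer.textureCompression = TextureImporterCompression.Uncompressed; // no compression artifacts
  importer.sRGBTexture = false; // sample as linear data, not color
  importer.SaveAndReimport ();
 }
}
#endif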

Start by creating a new shader and calling it ActiveCamoUnlitSimple. This shader will be a lot simpler than the one on display but I will make another post about the full shader for part 2.
Shader "Unlit/ActiveCamoUnlitSimple"
{
 Properties
 {
  _DistortTex ("Distortion", 2D) = "grey" {}
  _DistortTexTiling ("Distortion Tiling", Vector) = (1,1,0,0)
  _DistortAmount ("Distortion Amount", Range(0,1)) = 0.1
  _VertDistortAmount ("Vert Distortion Amount", Range(0,1)) = 0.1
 }
These are the properties that will be used. We'll need a distortion texture, a Vector parameter for tiling x and y and scrolling x and y, a float parameter to control the amount of distortion, and a float parameter to control the amount of pull the shader does on the surrounding environment.
 SubShader
 {
  Tags { "RenderType"="Transparent" "Queue"="Transparent" }
  LOD 100
The RenderType should be Transparent. The Queue is not so important since a command buffer will be drawing this at a specific point in the rendering pipeline. The LOD is unchanged from its initialization.
  Pass
  {
   Offset -1,-1
   Blend One OneMinusSrcAlpha
Offset -1,-1 will make sure that there will be no z-fighting with the objects it is supposed to be drawing on top of. Blend One OneMinusSrcAlpha is pre-multiplied alpha blending and will provide similar results between HDR and non-HDR rendering.
   CGPROGRAM
   #pragma vertex vert
   #pragma fragment frag

   #include "UnityCG.cginc"
Just telling the shader what the vertex program and the fragment program will be, and including some base Unity shader functions.
   sampler2D _DistortTex;
   float4 _DistortTexTiling;
   float _DistortAmount;
   float _VertDistortAmount;

   // per instance variables
   float _ActiveCamoRamp;

   // global variables
   sampler2D _LastFrame;
   float _GlobalActiveCamo;
Here are all the variables the shader will use. The first 4 are controlled by the properties in the material. _ActiveCamoRamp is passed in with the MaterialPropertyBlock from the ActiveCamoRenderer script. _LastFrame and _GlobalActiveCamo are defined globally by the FrameGrabCommandBuffer script.
   struct v2f
   {
    float4 vertex : SV_POSITION;
    float2 uv : TEXCOORD0;
    float4 screenPos: TEXCOORD1;
    float2 screenNormal : TEXCOORD2;
   };
v2f is the data structure that will get passed from the vertex shader to the pixel (fragment) shader. We need the position, the uv coords, the screen position, and the x and y values of the screen normal.
   v2f vert (appdata_full v)
   {
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = v.texcoord;
    o.screenPos = ComputeScreenPos(o.vertex);
    fixed3 worldNormal = UnityObjectToWorldNormal(v.normal);
    o.screenNormal = mul( (float3x3)UNITY_MATRIX_V, worldNormal ).xy;

    return o;
   }
This is the vertex shader. o.vertex, o.uv, o.screenPos, and worldNormal are pretty common in many vertex shaders so I won't go into them. To get the screen normal we have to multiply the world normal by the camera view matrix.
   fixed4 frag (v2f IN) : SV_Target
   {

     // get the distortion for the previous frame coords
    half2 distortion = tex2D (_DistortTex, IN.uv.xy * _DistortTexTiling.xy + _Time.yy * _DistortTexTiling.zw ).xy;
    distortion -= tex2D (_DistortTex, IN.uv.xy * _DistortTexTiling.xy + _Time.yy * _DistortTexTiling.wz ).yz;
The start of the pixel shader. Get the red and green channels of the distortion texture. Then subtract the green and blue channels of a second sample with swizzled scrolling values so it scrolls in a different direction. This produces a distortion value that ranges from -1 to 1.
    // get the last frame to use as camo
    float2 screenUV = IN.screenPos.xy / IN.screenPos.w;
    screenUV += distortion * _DistortAmount * 0.1;
    screenUV += IN.screenNormal * _VertDistortAmount * 0.1;
    half3 lastFrame = tex2D (_LastFrame, screenUV).xyz;
Get the screen uv for the last frame texture and add the distortion texture multiplied by the distortion amount at 1/10th; a little goes a long way when distorting texture coordinates. Then add the screen normal multiplied by the vert distort amount at 1/10th. This is what will pull the surroundings onto the camo shader. Finally, sample the last frame texture with the screen uv.
     // the final amount of active camo to apply
    half activeCamo = _ActiveCamoRamp * _GlobalActiveCamo;

    // premultiplied alpha camo
    half4 final = half4( lastFrame * activeCamo, activeCamo);
    final.w = saturate( final.w);

    return final;
   }
   ENDCG
  }
 }
}
Multiply the per instance _ActiveCamoRamp variable and the _GlobalActiveCamo variable together to form the alpha value for the shader. To pre-multiply the alpha just multiply the final color by the alpha before returning it.

Setting it all up


Make a material for each object you want to apply camo to. You don't need unique materials, but this allows you to have different tiling and distortion values for each.
Apply the ActiveCamoRenderer script to each of the objects you want to have active camo on and assign the active camo material you made to them.
Now add the ActiveCamoController script to the main object and drag all the active camo objects into the Active Camo Renderers array.
The last thing to do is add the ActiveCamoCommandBuffer script and the FrameGrabCommandBuffer script to the main camera.
Now press play and drag the slider on the control script to make things vanish!

Saturday, April 15, 2017

Splatoon in Unity

Infinite Splatoon Style Splatting In Unity
Executable: Splatoonity.zip
Unity Package: Splatoonity.unitypackage

I thought I might extend this example and put it up on the asset store but I'll probably never get around to it. :/  So I might as well just post it up here because people keep asking about it.  This example is made in Unity 5.6 but started off in 5.3 so it can be made to work in older versions if needed.

The basic idea
This works a little bit like deferred decals meets light maps. With deferred decals, the world position is figured out from the depth buffer and then transformed into decal space. Then the decal can be sampled and applied. But this won't work with areas that are not on the screen, and it also doesn't save the decals. You would have to draw every single decal, every frame, and that would start to slow your frame rate after a while. Plus you would have a hard time figuring out how much area each color was taking up, because decals can go on top of other decals and you can't really check how much actual space a decal is covering.

What we need to do is figure out a way to consolidate all the splats and then draw them all at once on the world.  Kinda like how a light map works, but for decals.


Get the world Position
First we need to have the world position, since you can't draw decals without knowing where to draw them. To do that we draw the model to an ARGBFloat render texture, outputting its world position in the pixel shader. But drawing the model as-is won't do; we need to draw it as if its second uv channel were its position.

When a model gets rendered, the vertex shader takes the vertex positions and maps them to the screen based on the camera with this little bit of code:

o.pos = UnityObjectToClipPos(v.vertex);
But you don't have to use the vertex position of the model; you can use whatever you want. In this case we take the second uv channel and use that as the position.

float3 uvWorldPos = float3( v.texcoord1.xy * 2.0 - 1.0, 0.5 );
o.pos = mul( UNITY_MATRIX_VP, float4( uvWorldPos, 1.0 ) );
o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
It looks a little bit different but uvWorldPos is basically unwrapping the model and putting it in front of an orthographic camera.  The camera that draws this will need to be at the center of the world and pointing in the correct direction in order to see the model.  The actual world position is passed down and is written out in the pixel shader.
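Setting up that bake could look roughly like the following one-off sketch. The shader name, resolution, and clip planes are placeholders; they just need to frame the 2x2 unwrap quad the vertex shader above builds at z = 0.5:

// One-off bake sketch: render the second-uv unwrap into a float texture.
// Shader name and resolution are placeholders.
RenderTexture worldPosTex = new RenderTexture (1024, 1024, 0, RenderTextureFormat.ARGBFloat);
worldPosTex.Create ();

Camera bakeCam = new GameObject ("WorldPosBakeCam").AddComponent<Camera> ();
bakeCam.transform.position = Vector3.zero;        // at the center of the world
bakeCam.transform.rotation = Quaternion.identity; // looking down +Z at the unwrap
bakeCam.orthographic = true;
bakeCam.orthographicSize = 1.0f;                  // uv * 2 - 1 spans a 2x2 quad
bakeCam.nearClipPlane = 0.01f;
bakeCam.farClipPlane = 1.0f;                      // the unwrap sits at z = 0.5
bakeCam.clearFlags = CameraClearFlags.SolidColor;
bakeCam.backgroundColor = Color.clear;
bakeCam.targetTexture = worldPosTex;

// draw everything with the unwrap shader (hypothetical name)
bakeCam.RenderWithShader (Shader.Find ("Splatoonity/WorldPosUnwrap"), "");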

World position texture

Why render out a world position texture?  Well because once it's rendered we don't need to worry about the model anymore.  There could be lots of models, who knows.  And more models = more draw calls.  As long as it doesn't move we don't need to bother with it.  A tangent map and binormal map are also generated in the same way and stored for later use when generating a normal map for the edges of the splats. Use ddx and ddy of the world position to generate the tangents and binormals for the second uv set.  This is optional though as there are other ways of getting world normals without pre-computed tangents.  I'll talk about that later.

float3 worldTangent = normalize( ddx( i.worldPos ) ) * 0.5 + 0.5;
float3 worldBinormal = normalize( ddy( i.worldPos ) ) * 0.5 + 0.5;

Assemble the decals
Just as if you were drawing deferred decals, you need to collect them all in one decal manager and then tell each one to render. We use a static instance splat manager, and whatever wants to draw a splat adds its splat to the splat manager. The biggest difference from a decal manager is that we only need to add the splat once, not every frame.

Once all the splats are assembled they can be blit to a splat texture, referencing the world texture just like deferred decals reference the depth buffer.
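Here's a stripped-down sketch of what that manager could look like. The names, the LateUpdate timing, and the atlas handling are assumptions based on the blit shader below, not the exact package code; the ping pong swap it performs is described next:

using System.Collections.Generic;
using UnityEngine;

// Splat manager sketch: collects splats (each submitted only once) and
// blits them into ping pong buffers using the blit shader below.
public class SplatManager : MonoBehaviour {

 public static SplatManager instance;

 public Material splatBlitMaterial; // uses the splat blit shader below
 public Texture splatAtlas;         // distance field decal atlas (_MainTex in the blit)
 public Texture worldPosTex;        // baked world position texture
 public RenderTexture splatTexA;    // ping
 public RenderTexture splatTexB;    // pong

 private List<Matrix4x4> splatMatrices = new List<Matrix4x4> ();
 private List<Vector4> splatScaleBias = new List<Vector4> ();
 private List<Vector4> splatChannelMask = new List<Vector4> ();
 private bool pingIsSource = true;

 void Awake () { instance = this; }

 // Anything that wants to splat calls this once per splat, not every frame.
 public void AddSplat (Matrix4x4 worldToSplat, Vector4 scaleBias, Vector4 channelMask) {
  splatMatrices.Add (worldToSplat);
  splatScaleBias.Add (scaleBias);
  splatChannelMask.Add (channelMask);
 }

 void LateUpdate () {
  if (splatMatrices.Count == 0) { return; }

  RenderTexture src = pingIsSource ? splatTexA : splatTexB;
  RenderTexture dst = pingIsSource ? splatTexB : splatTexA;

  // note: shader-side arrays have a fixed maximum size
  splatBlitMaterial.SetInt ("_TotalSplats", splatMatrices.Count);
  splatBlitMaterial.SetMatrixArray ("_SplatMatrix", splatMatrices);
  splatBlitMaterial.SetVectorArray ("_SplatScaleBias", splatScaleBias);
  splatBlitMaterial.SetVectorArray ("_SplatChannelMask", splatChannelMask);
  splatBlitMaterial.SetTexture ("_WorldPosTex", worldPosTex);
  splatBlitMaterial.SetTexture ("_LastSplatTex", src);

  // the atlas becomes _MainTex inside the blit shader
  Graphics.Blit (splatAtlas, dst, splatBlitMaterial);
  Shader.SetGlobalTexture ("_SplatTex", dst); // what the surface shader samples

  pingIsSource = !pingIsSource;
  splatMatrices.Clear ();
  splatScaleBias.Clear ();
  splatChannelMask.Clear ();
 }
}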

The splats get drawn to alternating textures (ping pong buffers) so that new splats can be custom blended with old splats. The world position is sampled from the baked texture and is multiplied by each splat transform matrix. Each splat color needs to remove any previous splat colors to keep the score accurate; otherwise everything would eventually be covered with every color.

float4 currentSplat = tex2D(_LastSplatTex, i.uv);
float4 wpos = tex2D(_WorldPosTex, i.uv);

for( int i = 0; i < _TotalSplats; i++ ){
 float3 opos = mul(_SplatMatrix[i], float4(wpos.xyz,1)).xyz;

 // skip if outside of projection volume
 if( opos.x > -0.5 && opos.x < 0.5 && opos.y > -0.5 && opos.y < 0.5 && opos.z > -0.5 && opos.z < 0.5 ){
  // generate splat uvs
  float2 uv = saturate( opos.xz + 0.5 );
  uv *= _SplatScaleBias[i].xy;
  uv += _SplatScaleBias[i].zw;
    
  // sample the texture
  float newSplatTex = tex2D( _MainTex, uv ).x;
  newSplatTex = saturate( newSplatTex - abs( opos.y ) * abs( opos.y ) );
  currentSplat = min( currentSplat, 1.0 - newSplatTex * ( 1.0 - _SplatChannelMask[i] ) );
  currentSplat = max( currentSplat, newSplatTex * _SplatChannelMask[i]);
 }

}

// mask based on world coverage
// needed for accurate score calculation
return currentSplat * wpos.w;
Just like light maps, this splat map is pretty low resolution and not detailed enough to look good on its own. Thankfully we just need smooth edges and not per-pixel details, something that distance field textures are good at. Below is the atlas of distance field textures for the splat decals and the multi-channel distance field for the final splat map.
Splat distance field decal textures

Splat decals applied to splat map

Updating the score
To update the score we downsample the splat map, first to a 256x256 texture with generated mip maps using a shader that steps the distance field at 0.5 to ensure that the score will mimic what is seen in game, and then again to a 4x4 texture. Then we sample the colors, average them together, and set the score based on the brightness of each channel.

This is done in a co-routine that spreads out the work over multiple frames, since we don't need it to be super responsive. The co-routine runs continually, updating the score once every second.
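A sketch of that co-routine might look something like this. splatTex, scoreTex256 (which should have mip maps enabled), scoreTex4x4, and stepSplatMaterial are assumed fields, and the names are illustrative:

// Score co-routine sketch (needs using System.Collections for IEnumerator).
IEnumerator UpdateScore () {
 Texture2D score4x4 = new Texture2D (4, 4, TextureFormat.RGBAFloat, false);
 while (true) {
  // step the distance field at 0.5 so the score matches what is seen in game
  Graphics.Blit (splatTex, scoreTex256, stepSplatMaterial);
  yield return null; // spread the work over frames
  Graphics.Blit (scoreTex256, scoreTex4x4);
  yield return null;

  RenderTexture.active = scoreTex4x4;
  score4x4.ReadPixels (new Rect (0, 0, 4, 4), 0, 0);
  RenderTexture.active = null;

  Vector4 score = Vector4.zero;
  Color[] pixels = score4x4.GetPixels ();
  for (int i = 0; i < pixels.Length; i++) {
   score += new Vector4 (pixels[i].r, pixels[i].g, pixels[i].b, pixels[i].a);
  }
  score /= pixels.Length; // each channel's brightness = that color's coverage

  yield return new WaitForSeconds (1.0f); // update once every second
 }
}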


Drawing the splats (surface shader)
Now that we have a distance field splat map we can easily sample the splats in the material shader.  We sample the splat texture using the second uv set which we can get by putting uv2 in front of the splat texture sample name in the input struct:

struct Input {
 float2 uv_MainTex;
 float2 uv2_SplatTex;
 float3 worldNormal;
 float3 worldTangent;
 float3 worldBinormal;
 float3 worldPos;
 INTERNAL_DATA
};
We sample the splat texture, and also one texel up and one texel over, to create a normal offset map.

// Sample splat map texture with offsets
float4 splatSDF = tex2D (_SplatTex, IN.uv2_SplatTex);
float4 splatSDFx = tex2D (_SplatTex, IN.uv2_SplatTex + float2(_SplatTex_TexelSize.x,0) );
float4 splatSDFy = tex2D (_SplatTex, IN.uv2_SplatTex + float2(0,_SplatTex_TexelSize.y) );
Because the distance field edge is created in the shader, when viewed at harsh angles or from a far distance the edges can become smaller than one pixel, which aliases and doesn't look very good. This code tries to create an edge width that will not alias. This is similar to signed distance field text rendering. It's not perfect and doesn't help with specular aliasing.

// Use ddx ddy to figure out a max clip amount to keep edge aliasing at bay when viewing from extreme angles or distances
half splatDDX = length( ddx(IN.uv2_SplatTex * _SplatTex_TexelSize.zw) );
half splatDDY = length( ddy(IN.uv2_SplatTex * _SplatTex_TexelSize.zw) );
half clipDist = sqrt( splatDDX * splatDDX + splatDDY * splatDDY );
half clipDistHard = max( clipDist * 0.01, 0.01 );
half clipDistSoft = 0.01 * _SplatEdgeBumpWidth;
We smoothstep the splat distance field to create a crisp but soft edge for each channel.  Each channel must bleed over itself just a little bit to ensure there are no holes when splats of different colors meet.  A second smooth step is done to create a mask for the splat edges.

// Smoothstep to make a soft mask for the splats
float4 splatMask = smoothstep( ( _Clip - 0.01 ) - clipDistHard, ( _Clip - 0.01 ) + clipDistHard, splatSDF );
float splatMaskTotal = max( max( splatMask.x, splatMask.y ), max( splatMask.z, splatMask.w ) );

// Smoothstep to make the edge bump mask for the splats
float4 splatMaskInside = smoothstep( _Clip - clipDistSoft, _Clip + clipDistSoft, splatSDF );
splatMaskInside = max( max( splatMaskInside.x, splatMaskInside.y ), max( splatMaskInside.z, splatMaskInside.w ) );
Now we create a normal offset for each channel of the splat map and combine them all into a single normal offset.  Also we can sample a tiling normal map to give the splatted areas some texture.  Note that the _SplatTileNormalTex is uncompressed just because I think it looks better with glossy surfaces.  This normal offset is in the tangent space of the second uv channel and we need to get it into the tangent space of the first uv channel to combine with the regular material's bump map.

// Create normal offset for each splat channel
float4 offsetSplatX = splatSDF - splatSDFx;
float4 offsetSplatY = splatSDF - splatSDFy;

// Combine all normal offsets into single offset
float2 offsetSplat = lerp( float2(offsetSplatX.x,offsetSplatY.x), float2(offsetSplatX.y,offsetSplatY.y), splatMask.y );
offsetSplat = lerp( offsetSplat, float2(offsetSplatX.z,offsetSplatY.z), splatMask.z );
offsetSplat = lerp( offsetSplat, float2(offsetSplatX.w,offsetSplatY.w), splatMask.w );
offsetSplat = normalize( float3( offsetSplat, 0.0001) ).xy; // Normalize to ensure parity between texture sizes
offsetSplat = offsetSplat * ( 1.0 - splatMaskInside ) * _SplatEdgeBump;

// Add some extra bump over the splat areas
float2 splatTileNormalTex = tex2D( _SplatTileNormalTex, IN.uv2_SplatTex * 10.0 ).xy;
offsetSplat += ( splatTileNormalTex.xy - 0.5 ) * _SplatTileBump  * 0.2;
First we need to get the splat edge normal into world space. There are two ways of going about this. The first is to generate and store the tangents elsewhere, which is what I originally did and is included in the package. The second is to compute the world normal without tangents, which is also included in the package but is commented out. Depending on what your bottlenecks are (memory vs instructions) you can pick which technique to use. They both produce similar results.

The tangent-less normals were implemented from the example in this blog post:

// Create the world normal of the splats
#if 0
 // Use tangentless technique to get world normals
 float3 worldNormal = WorldNormalVector (IN, float3(0,0,1) );
 float3 offsetSplatLocal2 = normalize( float3( offsetSplat, sqrt( 1.0 - saturate( dot( offsetSplat, offsetSplat ) ) ) ) );
 float3 offsetSplatWorld = perturb_normal( offsetSplatLocal2, worldNormal, normalize( IN.worldPos - _WorldSpaceCameraPos ), IN.uv2_SplatTex );
#else
 // Sample the world tangent and binormal textures for texcoord1 (the second uv channel)
 // you could skip the binormal texture and cross the vertex normal with the tangent texture to get the bitangent
 float3 worldTangentTex = tex2D ( _WorldTangentTex, IN.uv2_SplatTex ).xyz * 2.0 - 1.0;
 float3 worldBinormalTex = tex2D ( _WorldBinormalTex, IN.uv2_SplatTex ).xyz * 2.0 - 1.0;

 // Create the world normal of the splats
 float3 offsetSplatWorld = offsetSplat.x * worldTangentTex + offsetSplat.y * worldBinormalTex;
#endif
Now that the splat edge normal is in world space we need to get it into the original tangent space.

// Get the tangent and binormal for the texcoord0 (this is just the actual tangent and binormal that comes in from the vertex shader)
float3 worldTangent = WorldNormalVector (IN, float3(1,0,0) );
float3 worldBinormal = WorldNormalVector (IN, float3(0,1,0) );

// Convert the splat world normal to tangent normal for texcood0
float3 offsetSplatLocal = 0.0001;
offsetSplatLocal.x = dot( worldTangent, offsetSplatWorld );
offsetSplatLocal.y = dot( worldBinormal, offsetSplatWorld );
offsetSplatLocal = normalize( offsetSplatLocal );
Talk about a roundabout solution. Now we can sample the main material normal and combine it with the splat normal.

// sample the normal map for the main material
float4 normalMap = tex2D( _BumpTex, IN.uv_MainTex );
normalMap.xyz = UnpackNormal( normalMap );
float3 tanNormal = normalMap.xyz;

// Add the splat normal to the tangent normal
tanNormal.xy += offsetSplatLocal * splatMaskTotal;
tanNormal = normalize( tanNormal );
Sample the albedo texture and lerp it with the 4 splat colors using the splat mask.

// Albedo comes from a texture tinted by color
float4 MainTex = tex2D (_MainTex, IN.uv_MainTex );
fixed4 c = MainTex * _Color;

// Lerp the color with the splat colors based on the splat mask channels
c.xyz = lerp( c.xyz, float3(1.0,0.5,0.0), splatMask.x );
c.xyz = lerp( c.xyz, float3(1.0,0.0,0.0), splatMask.y );
c.xyz = lerp( c.xyz, float3(0.0,1.0,0.0), splatMask.z );
c.xyz = lerp( c.xyz, float3(0.0,0.0,1.0), splatMask.w );
All that's left is to output the surface values.

o.Albedo = c.rgb;
o.Normal = tanNormal;
o.Metallic = _Metallic;
o.Smoothness = lerp( _Glossiness, 0.7, splatMaskTotal );
o.Alpha = c.a;
Final result
And that's all there is to it!

Saturday, March 28, 2015

Graphics Blitting in Unity Part 2: Caustics

I know I promised to do something with ping pong buffers but I remembered an effect I did a few months ago to simulate caustics using a light cookie.  You might have seen some tutorials or assets on the store that do something like this.  Usually these techniques involve having a bunch of pre-baked images and cycling through them, changing the image each frame.  The issues with this technique are that the animation rate of your caustics is going to fluctuate with the frame rate of your game, and of course that you need a bunch of images taking up memory.  You also need a program to generate your pre-baked images.

If this were Unreal you could set up a material as a light function and project your caustics shader with the light. This isn't a bad way to go, but you end up computing the shader for every pixel the light touches times however many lights you have projecting it. Your source textures might be really low resolution (512x512) but you may be running a shader on the entire screen and then some.

This leads me to a solution that I think is pretty good.  Pre-compute one frame of a caustics animation using a shader and project that image with however many lights you want.


Web Player

Asset Package

In the package there is a shader, material, and render texture, along with 3 images that the material uses to generate the caustics. There is a script that you put on a light (or anything in your scene) that stores a specified render texture, material, and image. In the same script file there is a static class that is what actually does the work. The script sends the information it's holding to the static class, and the static class remembers what render textures it has blitted to. If it is told to blit to a render texture that has already been blitted to that frame, it will skip over it. This is good for if you goof and copy a light that has the script on it a bunch of times: the static class will keep the duplicate scripts from blitting over each other's render textures. Now you can use the render texture as a light cookie! The cookie changes every frame and only gets calculated once per frame instead of once per light.
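The core of that script and static class boils down to something like this sketch (names are illustrative, and the real package differs in the details):

using System.Collections.Generic;
using UnityEngine;

// Generator script plus the static blitter it reports to (sketch).
public class CausticsGenerator : MonoBehaviour {

 public RenderTexture causticsRT;  // used as the light cookie
 public Material causticsMaterial; // draws one frame of caustics
 public Texture sourceTex;         // image input for the material

 void Update () {
  CausticsBlitter.Blit (causticsRT, causticsMaterial, sourceTex);
 }
}

public static class CausticsBlitter {

 // remember which render textures were already blitted this frame so that
 // duplicated generator scripts don't blit over each other's textures
 private static HashSet<RenderTexture> blittedThisFrame = new HashSet<RenderTexture> ();
 private static int lastFrame = -1;

 public static void Blit (RenderTexture rt, Material mat, Texture src) {
  if (Time.frameCount != lastFrame) {
   blittedThisFrame.Clear ();
   lastFrame = Time.frameCount;
  }
  if (blittedThisFrame.Contains (rt)) { return; } // already done this frame
  blittedThisFrame.Add (rt);
  Graphics.Blit (src, rt, mat); // compute this frame's caustics once
 }
}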

Some optimizations would be to remove caustic generators when you are not able to see them, or have a caustics manager in the scene instead of a static class attached to the generator script.  The command buffer post on Unity's blog has a package that shows how to use sort of a manager for stuff like this so check it out!

Thursday, March 19, 2015

Graphics Blitting in Unity Part 1

Gas Giant Planet Shader Using Ping Pong Buffers

A very powerful feature in Unity is the ability to blit, or render a new texture from an existing set of textures using a custom shader. This is how many post process effects are done, such as bloom, screen space ambient occlusion, and god rays. There are many more uses for blitting than just post process effects, and I'm going to go over a few of them in the next few posts, starting with uses for standard separable blur and moving on to ping pong buffers to create cool effects like the gas giant planet shown above.

First example is a separable blur shader and how to use it to put a glow around a character icon. This idea came from someone in the Unity3D subreddit who was looking for a way of automating glows around their character icons. Get the package here!



So, separable blur: separable means that the blurring is separated into 2 passes, horizontal and vertical. It's 2 passes, but we can use the same shader for both by telling it what direction to blur the image in.

First lets have a look at the shader.

Shader "Hidden/SeperableBlur" {
Properties {
_MainTex ("Base (RGB)", 2D) = "black" {}
}

CGINCLUDE

#include "UnityCG.cginc"
#pragma glsl

Starts out pretty simple, only one exposed property. But wait, what's that CGINCLUDE? And no Subshader or Pass? CGINCLUDE is used instead of CGPROGRAM when you want to have a shader that can do lots of things. You make the include section first and then put your subshader section below with multiple passes that reference the vertex and fragment programs you write in the include section.

struct v2f {
float4 pos : POSITION;
float2 uv : TEXCOORD0;
};

We don't need to pass much information to the fragment shader. Position and uv are all we need.

//Common Vertex Shader
v2f vert( appdata_img v )
{
v2f o;
o.pos = mul (UNITY_MATRIX_MVP, v.vertex);
o.uv = v.texcoord.xy;
return o;
}

appdata_img is defined in UnityCG.cginc as the standard information that you need for blitting stuff. The rest is straightforward: just pass the uv to the fragment.

half4 frag(v2f IN) : COLOR
{
half2 ScreenUV = IN.uv;

float2 blurDir = _BlurDir.xy;
float2 pixelSize = float2( 1.0 / _SizeX, 1.0 / _SizeY );

I know it says half4, but the textures we are using only store 1 channel. Save the UVs to a variable for convenience. Same with blurDir (the blur direction); I'll talk about this more later, but this variable is passed in from script. And pixelSize is the normalized size of a pixel in uv space. The size of the image is passed to the shader from script as well.

float4 Scene = tex2D( _MainTex, ScreenUV ) * 0.1438749;

Scene += tex2D( _MainTex, ScreenUV + ( blurDir * pixelSize * _BlurSpread ) ) * 0.1367508;
Scene += tex2D( _MainTex, ScreenUV + ( blurDir * pixelSize * 2.0 * _BlurSpread ) ) * 0.1167897;
Scene += tex2D( _MainTex, ScreenUV + ( blurDir * pixelSize * 3.0 * _BlurSpread ) ) * 0.08794503;
Scene += tex2D( _MainTex, ScreenUV + ( blurDir * pixelSize * 4.0 * _BlurSpread ) ) * 0.05592986;
Scene += tex2D( _MainTex, ScreenUV + ( blurDir * pixelSize * 5.0 * _BlurSpread ) ) * 0.02708518;
Scene += tex2D( _MainTex, ScreenUV + ( blurDir * pixelSize * 6.0 * _BlurSpread ) ) * 0.007124048;

Scene += tex2D( _MainTex, ScreenUV - ( blurDir * pixelSize * _BlurSpread ) ) * 0.1367508;
Scene += tex2D( _MainTex, ScreenUV - ( blurDir * pixelSize * 2.0 * _BlurSpread ) ) * 0.1167897;
Scene += tex2D( _MainTex, ScreenUV - ( blurDir * pixelSize * 3.0 * _BlurSpread ) ) * 0.08794503;
Scene += tex2D( _MainTex, ScreenUV - ( blurDir * pixelSize * 4.0 * _BlurSpread ) ) * 0.05592986;
Scene += tex2D( _MainTex, ScreenUV - ( blurDir * pixelSize * 5.0 * _BlurSpread ) ) * 0.02708518;
Scene += tex2D( _MainTex, ScreenUV - ( blurDir * pixelSize * 6.0 * _BlurSpread ) ) * 0.007124048;

Ohhhhh Jesus, look at all that stuff. This is a 13-tap blur; that means we will sample the source image 13 times, weight each sample (that long number on the end of each line), and add the results of all the samples together. Let's just deconstruct one of these lines:

Scene += tex2D( _MainTex, ScreenUV + ( blurDir * pixelSize * 4.0 * _BlurSpread ) ) * 0.05592986;

Sample the _MainTex using the uvs, but then add the blur direction (either (1,0) for horizontal or (0,1) for vertical), multiplied by the size of one of the pixels, multiplied by how many pixels over we are (in this case it's the 4th tap over), multiplied by an overarching blur spread variable to change the tightness of the blur. Then multiply the sampled texture by a gaussian distribution weight (in this case 0.05592986). Think of gaussian distribution like a bell curve, with more weight being given to values closer to the center. If you add up all the numbers on the end they will come out to 1.007124136, or pretty darn close to 1. You will notice that half of the samples add the blur direction and half subtract the blur direction. This is because we are sampling left AND right of the center pixel.

    Scene *= _ChannelWeight;
    float final = Scene.x + Scene.y + Scene.z + Scene.w;

    return float4( final, 0, 0, 0 );
}

Now we multiply the result by the _ChannelWeight variable, which is passed in from script to isolate the channel we want.  Add the channels together and return the result in the first channel of a float4; the rest of the channels don't matter because the render target only has one channel.
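Side note: multiplying by a weight vector and summing the channels is exactly what a dot product does, so those two lines could also be written as:

float final = dot( Scene, _ChannelWeight );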

Subshader {

    ZTest Off
    Cull Off
    ZWrite Off
    Fog { Mode off }

    //Pass 0 Blur
    Pass
    {
        Name "Blur"

        CGPROGRAM
        #pragma fragmentoption ARB_precision_hint_fastest
        #pragma vertex vert
        #pragma fragment frag
        ENDCG
    }
}

After the include portion go the subshader and passes.  In each pass you just need to tell it which vertex and fragment program to use.  #pragma fragmentoption ARB_precision_hint_fastest is a hint that lets the compiler trade precision for speed in the fragment shader where it can get away with it.

Now lets check out the IconGlow.cs script.

public Texture icon;
public RenderTexture iconGlowPing;
public RenderTexture iconGlowPong;

private Material blitMaterial;

We are going to need a texture for the icon, and 2 render textures: one to hold the vertical blur result and one to hold the final horizontal blur result.  I've made the 2 render textures public just so they can be viewed from the inspector.  blitMaterial is going to be the material we create that uses the blur shader.

Next check out the start function.  This is where everything happens.

blitMaterial = new Material (Shader.Find ("Hidden/SeperableBlur"));

This makes a new material that uses the shader named "Hidden/SeperableBlur".  Make sure you don't have another shader with the same name; it can cause some headaches.
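One gotcha worth knowing: Shader.Find only works in a build if the shader actually gets included (referenced by a material, or added to the Always Included Shaders list in the Graphics settings).  A safer setup, sketched here as just one way you might wire it up, is to expose the shader as a field and assign it in the inspector:

public Shader blurShader; // assign Hidden/SeperableBlur in the inspector

// then in Start, instead of Shader.Find:
blitMaterial = new Material (blurShader);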

int width = icon.width / 2;
int height = icon.height / 2;

The resolution of the blurred image doesn't need to be as high as the original icon's.  I'm going to knock it down by a factor of 2 in each dimension, which makes the blurred images take up 1/4 the memory.  That's important because render textures aren't compressed like imported textures.

iconGlowPing = new RenderTexture( width, height, 0 );
iconGlowPing.format = RenderTextureFormat.R8;
iconGlowPing.wrapMode = TextureWrapMode.Clamp;

iconGlowPong = new RenderTexture( width, height, 0 );
iconGlowPong.format = RenderTextureFormat.R8;
iconGlowPong.wrapMode = TextureWrapMode.Clamp;

Now we create the render textures.  Width, height, and 0 for the number of bits to use for the depth buffer; these textures don't need a depth buffer, hence the 0.  Setting the format to R8 means the texture will be a single channel (R/red), 8 bit, 256-value grayscale image.  Even so, the memory footprint of these images clocks in at double that of a DXT1 compressed full color image of the same size, so it's important to consider the size of the image when working with render textures.  Setting the wrap mode to clamp ensures that pixels from the left side of the image don't bleed into the right and vice versa, same with top to bottom.
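Since these render textures and the material are created from script, it's also worth cleaning them up when the component goes away so the GPU memory gets freed.  A minimal sketch, assuming it lives in this same script:

void OnDestroy ()
{
    // free the GPU memory backing the render textures
    if (iconGlowPing != null) iconGlowPing.Release ();
    if (iconGlowPong != null) iconGlowPong.Release ();
    // the material was created in code, so destroy it too
    if (blitMaterial != null) Destroy (blitMaterial);
}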

blitMaterial.SetFloat ("_SizeX", width);
blitMaterial.SetFloat ("_SizeY", height);
blitMaterial.SetFloat ("_BlurSpread", 1.0f);

Now we start setting material values.  These are variables we will have access to in the shader.  _SizeX and _SizeY should be self-explanatory: the shader needs to know how big the image is to precisely sample the next pixel over.  _BlurSpread scales how far the image is blurred; setting it smaller yields a tighter blur, and setting it larger blurs the image more but will introduce artifacts at too high a value.

blitMaterial.SetVector ("_ChannelWeight", new Vector4 (0,0,0,1));
blitMaterial.SetVector ("_BlurDir", new Vector4 (0,1,0,0));
Graphics.Blit (icon, iconGlowPing, blitMaterial, 0 );

The next 2 variables being set are specific to the first pass.  _ChannelWeight is like selecting which channel you want to output.  Since we are using the same shader for both passes, we need a way to specify which channel we want to return in the end.  I'm setting it to the alpha channel for the first pass because we want to blur the character icon's alpha.  _BlurDir is where the "separable" part comes in.  Think of it like the channel weight but for directions: here the direction is weighted for a vertical blur because the first value (X) is 0 and the second value (Y) is 1.  The last 2 numbers aren't used.

Finally it's time to blit an image.  icon, being the first argument passed in, is automatically mapped to the _MainTex variable in the shader.  iconGlowPing is the texture where we want to store the result, blitMaterial is the material with the shader we are using to do the work, and 0 is the pass to use in said shader (0 being the first pass).  This shader only has one pass, but blit shaders can have many passes to break up large amounts of work or to pre-process data for other passes.

blitMaterial.SetVector ("_ChannelWeight", new Vector4 (1,0,0,0));
blitMaterial.SetVector ("_BlurDir", new Vector4(1,0,0,0));
Graphics.Blit (iconGlowPing, iconGlowPong, blitMaterial, 0 );

Now the vertical blur is done and saved!  We now need to change the _ChannelWeight to use the first/red channel.  The render textures we are using only store the red channel so the alpha from the icon image is now in the red channel of iconGlowPing.  We also need to change the _BlurDir variable to horizontal; 1 (X) and 0 (Y).  Now we take the vertically blurred image and blur it horizontally, saving it to iconGlowPong.
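As a side note, if one round of blurring isn't soft enough, the same pair of blits can simply be repeated, bouncing the image back and forth between the two textures.  A quick sketch (the loop count is arbitrary, and _ChannelWeight stays on red since the data now lives in the red channel):

for (int i = 0; i < 3; i++)
{
    // vertical blur: pong back into ping
    blitMaterial.SetVector ("_BlurDir", new Vector4 (0,1,0,0));
    Graphics.Blit (iconGlowPong, iconGlowPing, blitMaterial, 0);
    // horizontal blur: ping back into pong
    blitMaterial.SetVector ("_BlurDir", new Vector4 (1,0,0,0));
    Graphics.Blit (iconGlowPing, iconGlowPong, blitMaterial, 0);
}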

Material thisMaterial = this.GetComponent<Renderer>().sharedMaterial;
thisMaterial.SetTexture ("_GlowTex", iconGlowPong);

Now we just get the material that is rendering the icon and tell it about the blurred image we just made.

Finally let's look at the shader the icon actually uses.

half4 frag ( v2f IN ) : COLOR {

    half4 icon = tex2D (_MainTex, IN.uv) * _Color;
    half glow = tex2D (_GlowTex, IN.uv).x;
    glow = saturate( glow * _GlowAlpha );

    icon.xyz = lerp( _GlowColor.xyz, icon.xyz, icon.w );
    icon.w = saturate( icon.w + glow * _GlowColor.w );

    return icon;
}

This is a pretty simple shader, and I have gone over some other shaders before, so I'm skipping right to the meat of it.  Look up the main texture and tint it with a color if you want.  Look up the red (only) channel of the glow texture.  Multiply the glow by a parameter to expand it out and saturate it so no values go outside the 0-1 range.  Lerp from the glow color (set by a parameter) to the icon color using the icon's alpha as a mask; this puts the glow color "behind" the icon.  Add the glow (multiplied by the alpha of the _GlowColor parameter) to the alpha of the icon and saturate the result; outputting alpha values outside the 0-1 range can have weird effects when using HDR.  And that's it!  There is a nice glow with the intensity and color of your choice around the icon.  To actually use this shader in a GUI menu you should probably copy one of the built-in GUI shaders and extend that.
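For completeness, the fragment program above assumes declarations roughly like these (and since _GlowTex is set from the script via SetTexture, it doesn't strictly need to appear in the Properties block):

sampler2D _MainTex;   // the icon texture
sampler2D _GlowTex;   // the blurred render texture from the script
half4 _Color;         // icon tint
half4 _GlowColor;     // glow color, alpha scales glow strength
half _GlowAlpha;      // expands the glow intensity before saturation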

Now that this simple primer is out of the way, the next post will be about ping pong buffers!