Monday, July 23, 2018

Geometry Shader Adventures, Mesh Triangle to Particle

Geometry shaders are pretty cool because they let you turn a triangle into just about anything, so long as the output doesn't exceed 1 kilobyte (don't quote me on that). Here is a simple geometry shader that turns all triangles into screen-facing quads and gives them some particle-like motion that can be driven with a few parameters. If you want to fire up the above example in Unity you can download the asset package below. Exported with 2017.4.3f1, but it should work for other versions too since it's just an unlit shader.
Shader "Unlit/MeshToParticle"
{
 Properties
 {
  _MainTex ("Texture", 2D) = "white" {}
  _Color ("Color", Color) = (1,1,1,1)
  _Factor ("Factor", Float) = 2.0

  _Ramp ("Ramp", Range(0,1)) = 0
  _Size ("Size", Float) = 1.0
  _Spread ("Random Spread", Float) = 1.0
  _Frequency ("Noise Frequency", Float) = 1.0
  _Motion ("Motion Distance", Float) = 1.0

  _InvFade ("Soft Particles Factor", Range(0.01,3.0)) = 1.0
 }

These are just the parameters that will control the particles.

_MainTex is the particle texture.
_Color is the color of the particle; this gets multiplied by vertex color.
_Factor is how bright the particles should be (for boosting values over 1).
_Ramp drives the lifetime of the particles; sliding it back and forth will "play" the particles.
_Size is the size of the particles in world space.
_Spread is how far apart the particles will move in a random direction.
_Frequency is the frequency of the curl-like noise that is added to the particles over their lifetime.
_Motion is how far the particles will travel.
_InvFade is for soft-particle depth blending against opaque objects.

 SubShader
 {
  Tags { "Queue"="Transparent" "RenderType"="Transparent"}
  Blend One OneMinusSrcAlpha
  ColorMask RGB
  Cull Off Lighting Off ZWrite Off
  LOD 100

  Pass
  {
   CGPROGRAM
   #pragma vertex vert
   #pragma geometry geom
   #pragma fragment frag
   #pragma target 4.0
   #pragma multi_compile_particles

Defining the various shaders. The geometry shader runs in between the vertex and pixel shaders, like so: Vertex Shader -> Geometry Shader -> Pixel Shader

   #include "UnityCG.cginc"

   sampler2D _MainTex;
   float4 _MainTex_ST;
   float4 _Color;
   float _Factor;

   float _Ramp;
   float _Size;
   float _Frequency;
   float _Spread;
   float _Motion;

   sampler2D_float _CameraDepthTexture;
   float _InvFade;

Just defining the variables to use.

   // data coming from unity
   struct appdata
   {
    float4 vertex : POSITION;
    float4 texcoord : TEXCOORD0;
    fixed4 color : COLOR;
   };

This is the data that Unity will feed to the vertex shader. The vertex shader isn't going to do much work, so we will use the same struct to send information to the geometry shader.

   // vertex shader mostly just passes information to the geometry shader
   appdata vert (appdata v)
   {
    appdata o;

    // change the position to world space
    float3 worldPos = mul( unity_ObjectToWorld, v.vertex ).xyz;
    o.vertex = float4(worldPos,1);

    // pass these through unchanged
    o.texcoord = v.texcoord;
    o.color = v.color;

    return o;
   }

The vertex shader just transforms the vertex position to world space.
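The object-to-world transform is just a 4x4 matrix multiply against the homogeneous position (x, y, z, 1). Here's a quick Python sketch of that math; the function and matrix names are mine for illustration, not Unity's:

```python
def transform_point(m, p):
    """Multiply a row-major 4x4 matrix by the position (x, y, z, 1),
    returning the transformed xyz (what mul(unity_ObjectToWorld, v.vertex) does)."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))

# Object placed at world position (5, 0, 2), no rotation or scale:
object_to_world = [
    [1, 0, 0, 5],
    [0, 1, 0, 0],
    [0, 0, 1, 2],
    [0, 0, 0, 1],
]
print(transform_point(object_to_world, (1.0, 2.0, 3.0)))  # (6.0, 2.0, 5.0)
```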

   // information that will be sent to the pixel shader
   struct v2f {
    float4 vertex : SV_POSITION;
    fixed4 color : COLOR;
    float2 texcoord : TEXCOORD0;
    #ifdef SOFTPARTICLES_ON
     float4 projPos : TEXCOORD1;
    #endif
   };

This is the data that the geometry shader will send to the pixel shader. This looks like something the vertex shader might normally do.

   // geometry vertex function
   // this will all get called in geometry shader
   // its nice to keep this stuff in its own function
   v2f geomVert (appdata v)
   {
    v2f o;
    o.vertex = UnityWorldToClipPos(v.vertex.xyz);
    o.color = v.color;
    o.texcoord = TRANSFORM_TEX(v.texcoord, _MainTex);
    #ifdef SOFTPARTICLES_ON
     o.projPos = ComputeScreenPos (o.vertex);
     // since the vertex is already in world space we need to 
      // skip some of the stuff in the COMPUTE_EYEDEPTH function
     // COMPUTE_EYEDEPTH(o.projPos.z);
     o.projPos.z = -mul(UNITY_MATRIX_V, v.vertex).z;
    #endif

    return o;
   }

This function is called in the geometry shader for each vertex it generates. It does most of the work that a vertex shader would normally do, so I like to think of it as a vertex shader for the geometry shader.

   // geometry shader
   [maxvertexcount(4)]
   void geom(triangle appdata input[3], inout TriangleStream<v2f> stream )
   {
    // get the values for the centers of the triangle
    float3 pointPosWorld = (input[0].vertex.xyz + input[1].vertex.xyz + input[2].vertex.xyz ) * 0.3333333;
    float4 pointColor = (input[0].color + input[1].color + input[2].color ) * 0.3333333;
    float4 uv = (input[0].texcoord + input[1].texcoord + input[2].texcoord ) * 0.3333333;

This is the actual geometry shader, the real meat and potatoes of the whole effect. The geometry shader gets sent a triangle (an array of 3 appdata structs), so to start we get values for the center of the triangle by averaging its 3 points.
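The averaging above is just a centroid: sum the three corners and divide by three (the shader multiplies by 0.3333333 instead, which is the same thing). A tiny Python sketch, with an illustrative function name of my choosing:

```python
def centroid(p0, p1, p2):
    """Average three points component-wise, like the shader does for
    position, color, and uv of the input triangle."""
    return tuple((a + b + c) / 3.0 for a, b, c in zip(p0, p1, p2))

print(centroid((0, 0, 0), (3, 0, 0), (0, 3, 0)))  # (1.0, 1.0, 0.0)
```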

    // lifetime based on tiling and ramp parameters
    half lifeTime = saturate( uv.x + lerp( -1.0, 1.0, _Ramp ) );

    // fade particle on and off based on lifetime
    float fade = smoothstep( 0.0, 0.1, lifeTime);
    fade *= 1.0 - smoothstep( 0.1, 1.0, lifeTime);

    // don't draw invisible particles
    if( fade == 0.0 ){
     return;
    }

    // multiply color alpha by fade value
    pointColor.w *= fade;

The particle lifetime is based on the uv.x value and is biased by the ramp value. This makes the lifetime go from 0-1 across the texture coords. A fade value is generated from the lifetime, and if that value is 0.0 (before or after the particle's lifetime) we return without appending any vertices, skipping the pixel shader and the rest of the work for this particle.
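The fade window is two smoothsteps multiplied together: ramp up over the first 10% of the lifetime, then ease back down over the rest. Here's the same math sketched in Python (smoothstep follows the standard HLSL definition):

```python
def smoothstep(e0, e1, x):
    """Standard smoothstep: clamp to [0,1] then apply 3t^2 - 2t^3."""
    t = min(max((x - e0) / (e1 - e0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def fade(life):
    """Fade in over life 0.0-0.1, fade back out over life 0.1-1.0."""
    return smoothstep(0.0, 0.1, life) * (1.0 - smoothstep(0.1, 1.0, life))

print(fade(0.0), fade(0.1), fade(1.0))  # 0.0 1.0 0.0
```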

    // random number seed from uv coords
    float3 seed = float3( uv.x + 0.3 + uv.y * 2.3, uv.x + 0.6 + uv.y * 3.1, uv.x + 0.9 + uv.y * 9.7 );
    // random number per particle based on seed
    float3 random3 = frac( sin( dot( seed * float3(138.215, 547.756, 318.269), float3(167.214, 531.148, 671.248) ) * float3(158.321,456.298,725.681) ) * float3(158.321,456.298,725.681) );
    // random direction from random number
    float3 randomDir = normalize( random3 - 0.5 );

We generate a random value for each particle. This can be used to add variability to all kinds of things like size, color, and rotation, but here we are just using it to get a random direction.
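The frac(sin(dot(...)) * big) trick above is a classic cheap pseudo-random hash. Here's a Python mirror of the shader's math, using the same constants, just to show that it produces values in [0, 1) that we can recenter and normalize into a direction:

```python
import math

K1 = (138.215, 547.756, 318.269)
K2 = (167.214, 531.148, 671.248)
K3 = (158.321, 456.298, 725.681)

def frac(x):
    return x - math.floor(x)

def hash3(seed):
    """Mirror of the shader: frac(sin(dot(seed * K1, K2) * K3) * K3)."""
    d = sum(s * a * b for s, a, b in zip(seed, K1, K2))
    return tuple(frac(math.sin(d * k) * k) for k in K3)

def random_dir(seed):
    """Recenter the hash around zero and normalize to a unit direction."""
    v = tuple(c - 0.5 for c in hash3(seed))
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)
```

Note this assumes the hash never lands exactly on (0.5, 0.5, 0.5); in practice (and in the shader) that degenerate case is vanishingly unlikely.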

    // curl-ish noise for making the particles move in an interesting way
    float3 noise3x = float3( uv.x, uv.x + 2.3, uv.x + 5.7 ) * _Frequency;
    float3 noise3y = float3( uv.y + 7.3, uv.y + 9.7, uv.y + 12.3 ) * _Frequency;
    float3 noiseDir = sin(noise3x.yzx * 5.731 ) * sin( noise3x.zxy * 3.756 ) * sin( noise3x.xyz * 2.786 );
    noiseDir += sin(noise3y.yzx * 7.731 ) * sin( noise3y.zxy * 5.756 ) * sin( noise3y.xyz * 3.786 );

We also generate some noise with sine functions, seeded from the uvs. This creates some wispy, curl-like motion for the particles.

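The swizzled sine products are easier to see unrolled. Here's a Python sketch of the same computation (the .yzx / .zxy swizzles become index rotations), just to show the result is a bounded 3-component offset:

```python
import math

def noise_dir(u, v, freq):
    """Mirror of the shader's sine noise: two triple products of sines,
    one seeded from uv.x and one from uv.y, summed per component."""
    nx = tuple(c * freq for c in (u, u + 2.3, u + 5.7))
    ny = tuple(c * freq for c in (v + 7.3, v + 9.7, v + 12.3))

    def term(n, a, b, c):
        # component i of n.yzx is n[(i+1)%3]; of n.zxy is n[(i+2)%3]
        return tuple(math.sin(n[(i + 1) % 3] * a)
                     * math.sin(n[(i + 2) % 3] * b)
                     * math.sin(n[i] * c) for i in range(3))

    tx = term(nx, 5.731, 3.756, 2.786)
    ty = term(ny, 7.731, 5.756, 3.786)
    return tuple(x + y for x, y in zip(tx, ty))
```

Each component is a sum of two products of sines, so it always stays in [-2, 2].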
    // add the random direction and the curl direction to the world position
    pointPosWorld += randomDir * lifeTime * _Motion * _Spread;
    pointPosWorld += noiseDir * lifeTime * _Motion;

Then add the random and noise motion to the particle world position

    // the up and left camera direction for making the camera facing particle quad
    float3 camUp = UNITY_MATRIX_V[1].xyz * _Size * 0.5;
    float3 camLeft = UNITY_MATRIX_V[0].xyz * _Size * 0.5;

    // v1-----v2
    // |     / |
    // |    /  |
    // |   C   |
    // |  /    |
    // | /     |
    // v3-----v4

    float3 v1 = pointPosWorld + camUp + camLeft;
    float3 v2 = pointPosWorld + camUp - camLeft;
    float3 v3 = pointPosWorld - camUp + camLeft;
    float3 v4 = pointPosWorld - camUp - camLeft;

The camera's up and left directions are hidden in the view matrix, and we can use them to generate the positions of the 4 vertices.
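Expanding the quad is just center ± half-size along those two axes. Here's a Python sketch of the corner math, assuming (as the shader does) that the view matrix rows give usable camera axes in world space; function names are mine:

```python
def corner(center, up, left, s_up, s_left):
    """One billboard corner: center + s_up*up + s_left*left."""
    return tuple(c + s_up * u + s_left * l
                 for c, u, l in zip(center, up, left))

def billboard(center, cam_up, cam_left, size):
    """Build the 4 camera-facing quad corners, like v1..v4 in the shader."""
    up = tuple(c * size * 0.5 for c in cam_up)
    left = tuple(c * size * 0.5 for c in cam_left)
    v1 = corner(center, up, left, +1.0, +1.0)
    v2 = corner(center, up, left, +1.0, -1.0)
    v3 = corner(center, up, left, -1.0, +1.0)
    v4 = corner(center, up, left, -1.0, -1.0)
    return v1, v2, v3, v4
```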

    // send information for each vertex to the geomVert function

    appdata vertIN;
    vertIN.color = pointColor;

    vertIN.vertex = float4(v1,1);
    vertIN.texcoord.xy = float2(0,1);
    stream.Append( geomVert(vertIN) );

    vertIN.vertex = float4(v2,1);
    vertIN.texcoord.xy  = float2(1,1);
    stream.Append( geomVert(vertIN) );

    vertIN.vertex = float4(v3,1);
    vertIN.texcoord.xy  = float2(0,0);
    stream.Append( geomVert(vertIN) );

    vertIN.vertex = float4(v4,1);
    vertIN.texcoord.xy  = float2(1,0);
    stream.Append( geomVert(vertIN) );

   }
Now we can send some updated appdata to the geomVert function and append the result. The color will be the same for all the verts in the quad, but the position and texture coordinates need to be updated before each call to geomVert.

stream.Append() adds a vertex to a triangle strip. The first 3 appends create the first triangle; adding a fourth append creates the second triangle using that vertex and the previous 2 vertices. This is known as a triangle strip (super old school term), and you can continue appending vertices; each one will form a new triangle with the previous 2 verts. You can make hair or blades of grass this way.
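To make the strip expansion concrete, here's a small Python sketch of how a list of appended vertices turns into triangles. The GPU flips the winding of every other triangle so all faces point the same way; the function name is mine:

```python
def strip_to_triangles(verts):
    """Expand a triangle strip: triangle i uses verts i, i+1, i+2,
    with winding flipped on odd i so all faces are consistent."""
    tris = []
    for i in range(len(verts) - 2):
        a, b, c = verts[i], verts[i + 1], verts[i + 2]
        tris.append((a, c, b) if i % 2 else (a, b, c))
    return tris

# The 4 appends in the geometry shader above produce 2 triangles:
print(strip_to_triangles(["v1", "v2", "v3", "v4"]))
```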

   // simple particle like pixel shader
   fixed4 frag (v2f IN) : SV_Target
   {
    #ifdef SOFTPARTICLES_ON
     float sceneZ = LinearEyeDepth (SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(IN.projPos)));
     float partZ = IN.projPos.z;
     IN.color.w *= saturate (_InvFade * (sceneZ-partZ));
    #endif

    // sample the texture
    fixed4 col = tex2D(_MainTex, IN.texcoord);
    col *= _Color;
    col *= IN.color;
    col.xyz *= _Factor;

    // premultiplied alpha
    col.xyz *= col.w;

    return col;
   }
   ENDCG
  }
 }
}
The pixel shader looks like a simple particle shader because it pretty much is. There's lots more you can do with geometry shaders. You could add a tessellation shader and turn each mesh triangle into 100+ particles or other crazy stuff. I hope this gives you some insight into geometry shaders and helps get you started making some crazy stuff.
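One detail worth calling out: because the pixel shader multiplies col.xyz by col.w, the Blend One OneMinusSrcAlpha state gives premultiplied-alpha compositing. A quick Python sketch of that blend equation (function name mine):

```python
def blend_premultiplied(src_rgb, src_a, dst_rgb):
    """Blend One OneMinusSrcAlpha with premultiplied source:
    out = src.rgb + dst.rgb * (1 - src.a)."""
    return tuple(s + d * (1.0 - src_a) for s, d in zip(src_rgb, dst_rgb))

# A half-transparent red particle (already premultiplied) over white:
print(blend_premultiplied((0.5, 0.0, 0.0), 0.5, (1.0, 1.0, 1.0)))
```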