Monday, July 23, 2018

Geometry Shader Adventures, Mesh Triangle to Particle

Geometry shaders are pretty cool because they let you turn a triangle into just about anything, so long as the output doesn't exceed 1 kilobyte (don't quote me on that). Here is a simple geometry shader that turns all triangles into screen-facing quads and gives them some particle-like motion that can be driven with a few parameters. If you want to fire up the above example in Unity you can download the asset package below. Exported with 2017.4.3f1, but it should work in other versions too since it's just an unlit shader.
Shader "Unlit/MeshToParticle"
{
 Properties
 {
  _MainTex ("Texture", 2D) = "white" {}
  _Color ("Color", Color) = (1,1,1,1)
  _Factor ("Factor", Float) = 2.0

  _Ramp ("Ramp", Range(0,1)) = 0
  _Size ("Size", Float) = 1.0
  _Spread ("Random Spread", Float) = 1.0
  _Frequency ("Noise Frequency", Float) = 1.0
  _Motion ("Motion Distance", Float) = 1.0

  _InvFade ("Soft Particles Factor", Range(0.01,3.0)) = 1.0
 }

These are just the parameters that will control the particles.

_MainTex is the particle texture.
_Color is the color of the particle, this gets multiplied by vertex color.
_Factor is how bright the particles should be (for boosting values over 1).
_Ramp drives the lifetime of the particles, and sliding it back and forth will "play" the particles.
_Size is the size of the particles in world space.
_Spread is how far apart the particles will move in a random direction.
_Frequency is the frequency of curl like noise that will be added to the particles over their lifetime.
_Motion is how far the particles will travel.
_InvFade is for depth bias blending with opaque objects.

 SubShader
 {
  Tags { "Queue"="Transparent" "RenderType"="Transparent"}
  Blend One OneMinusSrcAlpha
  ColorMask RGB
  Cull Off Lighting Off ZWrite Off
  LOD 100

  Pass
  {
   CGPROGRAM
   #pragma vertex vert
   #pragma geometry geom
   #pragma fragment frag
   #pragma target 4.0
   #pragma multi_compile_particles

Defining the various shaders. The geometry shader runs in between the vertex and pixel shaders, like so: Vertex Shader -> Geometry Shader -> Pixel Shader

   #include "UnityCG.cginc"

   sampler2D _MainTex;
   float4 _MainTex_ST;
   float4 _Color;
   float _Factor;

   float _Ramp;
   float _Size;
   float _Frequency;
   float _Spread;
   float _Motion;

   sampler2D_float _CameraDepthTexture;
   float _InvFade;

Just defining the variables to use.

   // data coming from unity
   struct appdata
   {
    float4 vertex : POSITION;
    float4 texcoord : TEXCOORD0;
    fixed4 color : COLOR;
   };

This is the data that Unity will feed to the vertex shader. The vertex shader isn't going to do much work, so we will use the same struct to send information to the geometry shader.

   // vertex shader mostly just passes information to the geometry shader
   appdata vert (appdata v)
   {
    appdata o;

    // change the position to world space
    float3 worldPos = mul( unity_ObjectToWorld, v.vertex ).xyz;
    o.vertex = float4(worldPos,1);

    // pass these through unchanged
    o.texcoord = v.texcoord;
    o.color = v.color;

    return o;
   }

The vertex shader just transforms the vertex position to world space.

   // information that will be sent to the pixel shader
   struct v2f {
    float4 vertex : SV_POSITION;
    fixed4 color : COLOR;
    float2 texcoord : TEXCOORD0;
    #ifdef SOFTPARTICLES_ON
     float4 projPos : TEXCOORD1;
    #endif
   };

This is the data that the geometry shader will send to the pixel shader. This looks like something the vertex shader might normally output.

   // geometry vertex function
   // this will all get called in geometry shader
   // its nice to keep this stuff in its own function
   v2f geomVert (appdata v)
   {
    v2f o;
    o.vertex = UnityWorldToClipPos(v.vertex.xyz);
    o.color = v.color;
    o.texcoord = TRANSFORM_TEX(v.texcoord, _MainTex);
    #ifdef SOFTPARTICLES_ON
     o.projPos = ComputeScreenPos (o.vertex);
     // since the vertex is already in world space we need to 
      // skip some of the stuff in the COMPUTE_EYEDEPTH function
     // COMPUTE_EYEDEPTH(o.projPos.z);
     o.projPos.z = -mul(UNITY_MATRIX_V, v.vertex).z;
    #endif

    return o;
   }

This function is called in the geometry shader for each vertex it generates. It does most of the work that a vertex shader would normally do, so I like to think of it as a vertex shader for the geometry shader.

   // geometry shader
   [maxvertexcount(4)]
    void geom(triangle appdata input[3], inout TriangleStream<v2f> stream )
   {
    // get the values for the centers of the triangle
    float3 pointPosWorld = (input[0].vertex.xyz + input[1].vertex.xyz + input[2].vertex.xyz ) * 0.3333333;
    float4 pointColor = (input[0].color + input[1].color + input[2].color ) * 0.3333333;
    float4 uv = (input[0].texcoord + input[1].texcoord + input[2].texcoord ) * 0.3333333;

This is the actual geometry shader, the real meat and potatoes of the whole effect. The geometry shader gets sent a triangle (an array of 3 appdata structs), so to start we get values for the center of the triangle by averaging its 3 points.
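If it helps to see the averaging outside of shader code, here's the same centroid step sketched in plain Python (the helper name is mine, not part of the shader):

```python
def triangle_center(p0, p1, p2):
    """Average three points component-wise, just like the geometry shader
    does for the triangle's positions, colors, and texcoords."""
    return tuple((a + b + c) / 3.0 for a, b, c in zip(p0, p1, p2))
```

The shader multiplies by 0.3333333 instead of dividing by 3, which is the same thing (multiplies are traditionally cheaper than divides on GPUs).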

    // lifetime based on tiling and ramp parameters
    half lifeTime = saturate( uv.x + lerp( -1.0, 1.0, _Ramp ) );

    // fade particle on and off based on lifetime
    float fade = smoothstep( 0.0, 0.1, lifeTime);
    fade *= 1.0 - smoothstep( 0.1, 1.0, lifeTime);

    // don't draw invisible particles
    if( fade == 0.0 ){
     return;
    }

    // multiply color alpha by fade value
    pointColor.w *= fade;

The particle lifetime is based on the uv.x value and is biased by the ramp value. This makes the lifetime go from 0-1 across the texture coordinates. A fade value is generated from the lifetime, and if it's 0.0 (before or after the particle's lifetime) we return nothing, skipping the pixel shader and the rest of the work for this particle.
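The fade curve is just two smoothsteps multiplied together: a quick fade in over the first 10% of life and a slow fade out over the rest. A Python sketch of the same curve (names are mine):

```python
def smoothstep(e0, e1, x):
    """Hermite interpolation clamped to [0, 1], same as HLSL smoothstep."""
    t = min(max((x - e0) / (e1 - e0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def particle_fade(life):
    fade = smoothstep(0.0, 0.1, life)          # fade in over the first 10%
    fade *= 1.0 - smoothstep(0.1, 1.0, life)   # fade out over the remaining 90%
    return fade
```

The curve is 0 at both ends of the lifetime, which is what lets the early-out `return` skip dead particles entirely.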

    // random number seed from uv coords
    float3 seed = float3( uv.x + 0.3 + uv.y * 2.3, uv.x + 0.6 + uv.y * 3.1, uv.x + 0.9 + uv.y * 9.7 );
    // random number per particle based on seed
    float3 random3 = frac( sin( dot( seed * float3(138.215, 547.756, 318.269), float3(167.214, 531.148, 671.248) ) * float3(158.321,456.298,725.681) ) * float3(158.321,456.298,725.681) );
    // random direction from random number
    float3 randomDir = normalize( random3 - 0.5 );

We generate a random value for each particle. This can be used to add variability to all kinds of things like size, color, and rotation, but here we are just using it to get a random direction.
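The random direction comes from the classic frac(sin(dot(...)) * bigNumber) hash trick: hash the seed into [0,1) per component, re-center around zero, and normalize. A simplified Python sketch of the idea (constants and names are mine, not the shader's exact ones):

```python
import math

def frac(x):
    return x - math.floor(x)

def hash31(seed):
    """Cheap deterministic pseudo-random value in [0, 1) from a scalar seed,
    in the same frac(sin(x * k) * K) family the shader uses."""
    return frac(math.sin(seed * 167.214) * 158.321)

def random_dir(seed):
    """Unit-length random direction: hash three offset seeds, center on 0, normalize."""
    v = [hash31(seed + o) - 0.5 for o in (0.0, 7.31, 13.7)]
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]
```

Because the seed comes from the uvs, each triangle gets the same "random" direction every frame, which is what makes the motion stable and re-playable.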

    // curl-ish noise for making the particles move in an interesting way
    float3 noise3x = float3( uv.x, uv.x + 2.3, uv.x + 5.7 ) * _Frequency;
    float3 noise3y = float3( uv.y + 7.3, uv.y + 9.7, uv.y + 12.3 ) * _Frequency;
    float3 noiseDir = sin(noise3x.yzx * 5.731 ) * sin( noise3x.zxy * 3.756 ) * sin( noise3x.xyz * 2.786 );
    noiseDir += sin(noise3y.yzx * 7.731 ) * sin( noise3y.zxy * 5.756 ) * sin( noise3y.xyz * 3.786 );

We also generate some noise with sine functions seeded by the uvs. This creates some wispy, curl-like motion for the particles.

    // add the random direction and the curl direction to the world position
    pointPosWorld += randomDir * lifeTime * _Motion * _Spread;
    pointPosWorld += noiseDir * lifeTime * _Motion;

Then add the random and noise motion to the particle world position

    // the up and left camera direction for making the camera facing particle quad
    float3 camUp = UNITY_MATRIX_V[1].xyz * _Size * 0.5;
    float3 camLeft = UNITY_MATRIX_V[0].xyz * _Size * 0.5;

    // v1-----v2
    // |     / |
    // |    /  |
    // |   C   |
    // |  /    |
    // | /     |
    // v3-----v4

    float3 v1 = pointPosWorld + camUp + camLeft;
    float3 v2 = pointPosWorld + camUp - camLeft;
    float3 v3 = pointPosWorld - camUp + camLeft;
    float3 v4 = pointPosWorld - camUp - camLeft;

The camera up and left direction are hidden in the view matrix and we can use them to generate the positions for the 4 vertices.
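The construction of the four corners is just center ± halfUp ± halfLeft. A Python sketch of the same billboard math (hypothetical helper, with the camera axes passed in as plain vectors):

```python
def billboard_corners(center, cam_up, cam_left, size):
    """Four camera-facing quad corners around a particle center.
    cam_up / cam_left correspond to rows 1 and 0 of the view matrix."""
    up = [u * size * 0.5 for u in cam_up]
    left = [l * size * 0.5 for l in cam_left]
    def corner(su, sl):
        return [c + su * u + sl * l for c, u, l in zip(center, up, left)]
    # v1..v4 match the strip order in the shader:
    # up+left, up-left, -up+left, -up-left
    return corner(1, 1), corner(1, -1), corner(-1, 1), corner(-1, -1)
```

Because the axes come straight from the view matrix, the quad always faces the camera no matter how the camera rotates.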

    // send information for each vertex to the geomVert function

    appdata vertIN;
    vertIN.color = pointColor;

    vertIN.vertex = float4(v1,1);
    vertIN.texcoord.xy = float2(0,1);
    stream.Append( geomVert(vertIN) );

    vertIN.vertex = float4(v2,1);
    vertIN.texcoord.xy  = float2(1,1);
    stream.Append( geomVert(vertIN) );

    vertIN.vertex = float4(v3,1);
    vertIN.texcoord.xy  = float2(0,0);
    stream.Append( geomVert(vertIN) );

    vertIN.vertex = float4(v4,1);
    vertIN.texcoord.xy  = float2(1,0);
    stream.Append( geomVert(vertIN) );

   }
Now we can send some updated appdata to the geomVert function and append the result. The color will be the same for all the verts in the quad, but the position and texture coordinates need to be updated for each vertex before it is sent to geomVert.

stream.Append() adds a vertex to a triangle strip. The first 3 appends create the first triangle; a fourth append creates a second triangle from that vertex and the previous 2 vertices. This is known as a triangle strip (a super old school term), and you can continue appending vertices, with each one forming a new triangle with the previous 2 verts. You can make hair or blades of grass this way.
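In other words, N appended vertices produce N - 2 triangles. A tiny Python sketch of the strip rule (names are mine):

```python
def strip_triangles(verts):
    """Triangles produced by a triangle strip: each vertex after the
    first two forms a triangle with the previous two vertices."""
    return [(verts[i], verts[i + 1], verts[i + 2]) for i in range(len(verts) - 2)]
```

So the four Append calls above yield exactly two triangles, i.e. one quad.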

   // simple particle like pixel shader
   fixed4 frag (v2f IN) : SV_Target
   {
    #ifdef SOFTPARTICLES_ON
     float sceneZ = LinearEyeDepth (SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(IN.projPos)));
     float partZ = IN.projPos.z;
     IN.color.w *= saturate (_InvFade * (sceneZ-partZ));
    #endif

    // sample the texture
    fixed4 col = tex2D(_MainTex, IN.texcoord);
    col *= _Color;
    col *= IN.color;
    col.xyz *= _Factor;

    // premultiplied alpha
    col.xyz *= col.w;

    return col;
   }
   ENDCG
  }
 }
}
The pixel shader looks like a simple particle shader because it pretty much is. There's lots more you can do with geometry shaders. You could add a tessellation shader and turn each mesh triangle into 100+ particles, or other crazy stuff. I hope this gives you some insight into geometry shaders and helps get you started.

Monday, July 16, 2018

Lucky Swooshes

While working on Super Lucky's Tale, something I thought was solved in a pretty cool way was the swooshes. Swooshes are used for collecting coins, spawning certain enemies, and Lucky's tail swipe effect. For this type of effect you want a fire-and-forget solution, and you also want it to be super predictable: it should do the same thing every time and keep a consistent look over its lifetime. An obvious approach might be to use a trail attached to an object that moves toward a target. A few issues can arise with this approach, though. If the target moves, the trail could end up with a funny shape, and it can also be difficult to figure out exactly when the swoosh will arrive at the target, which doesn't help with timing when to spawn things.
The best solution turned out to be a mesh that was a strip of polygons with the beginning and end at 0,0,0, with a trail texture that scrolled across it. A script on the swoosh object told the shader where the target was, and the vertex shader moved the end of the swoosh over to the target's position. The script could also orient the mesh to face the target. This allowed for lots of variation in the shape of the swooshes and also guaranteed that the swoosh would reach the target exactly when it was supposed to and always have the intended shape.
A swoosh model could be made with all kinds of twisting ribbons and then deformed along a path to make all kinds of fun shapes.

The important part of the vertex shader that moves the end of the swoosh is below:
// For screen swooshes, smoosh the mesh flat on the Y axis
v.vertex.y *= 1.0 - _ScreenSquish;

// Add some random offset on the X and Z axes for screen swooshes
float2 divergence = _Divergence.xy * saturate( sin ( v.uv.x * UNITY_PI ) );
v.vertex.xz += divergence * _ScreenSquish;

// Find the start position of the swoosh
float3 worldOrigin = mul( unity_ObjectToWorld, float4(0,0,0,1) ).xyz;

// Now figure out the end position relative to the start position
float3 endOffset = _TargetPos - worldOrigin;

// Get the world position of the vertex
float3 worldPos = mul( unity_ObjectToWorld, v.vertex ).xyz;

// Add the end offset to the vertex world position, masked by the uv coordinates
worldPos += endOffset * v.uv.x;

// Transform the world position to screen position
o.vertex = mul(UNITY_MATRIX_VP, float4(worldPos,1));

// Smoosh the swoosh against the screen if it is a screen swoosh
o.vertex.z = lerp( o.vertex.z, o.vertex.w, _ScreenSquish * 0.99 );
_ScreenSquish is a float from 0-1 that is passed in from script, telling the swoosh whether it should be pressed against the screen, like the coin collect swooshes. This keeps it from being occluded by any opaque geometry while still being attached to a point in the world. It still follows a world space position, but that position is attached to the screen.
_Divergence is a float2 passed in from script that adds some offset to the middle of the swoosh so screen swooshes don't overlap and follow a bit of a random path.
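The core of the deformation is small enough to sketch outside of HLSL. Here's the same end-offset idea in Python (hypothetical names; the real shader works on the object-to-world transformed vertex as shown above):

```python
def deform_swoosh_vertex(world_pos, world_origin, target_pos, u):
    """Move a swoosh vertex toward the target, masked by uv.x:
    u = 0 at the start of the swoosh, u = 1 at the end."""
    end_offset = [t - o for t, o in zip(target_pos, world_origin)]
    return [p + e * u for p, e in zip(world_pos, end_offset)]
```

Since the mesh's end vertices sit at the origin with u = 1, they land exactly on the target, while the start (u = 0) never moves; everything in between stretches smoothly.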

The pixel shader is pretty simple, here's the basic swoosh texture lookup with a bit of fade out on either end:
// texture coords for swoosh texture
float2 swooshUV = saturate( IN.uv * _Tiling.xy + float2( lerp( _Tiling.z, _Tiling.w, _Ramp ), 0 ) );
half4 col = tex2D(_MainTex, swooshUV ) * _Color;

// start and end fade in
half edgeFade = saturate( ( 1.0 - abs( IN.uv.x * 2 - 1 ) ) * (1.0 / _FadeInOut ) );
edgeFade = smoothstep(0,1,edgeFade);

// multiply together
col *= edgeFade;
_Ramp is passed in from script to control the swoosh travel progression.
_Tiling is set in the material and allows control of the length of the swoosh and how _Ramp affects the swoosh travel.
_FadeInOut lets you set how much of the ends of the swoosh to fade out.
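The edge fade folds uv.x into a tent shape (1 in the middle, 0 at both ends), scales it by 1/_FadeInOut, clamps, and smoothsteps. A Python sketch of the same curve (names are mine):

```python
def edge_fade(u, fade_in_out):
    """Fade the swoosh out toward u = 0 and u = 1; fade_in_out sets how
    much of each end is affected (smaller = sharper ends)."""
    f = min(max((1.0 - abs(u * 2.0 - 1.0)) / fade_in_out, 0.0), 1.0)
    return f * f * (3.0 - 2.0 * f)  # smoothstep(0, 1, f)
```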

A material property block can be used to send the information right to the swoosh renderer without messing with the materials at all like so:
MaterialPropertyBlock MPB = new MaterialPropertyBlock ();
Renderer thisRenderer = this.GetComponent<Renderer> ();

if (targetScreen) {
 MPB.SetFloat ("_ScreenSquish", 1.0f);
 MPB.SetVector ("_Divergence", new Vector2 (Random.Range (-screenDivergence, screenDivergence), Random.Range (-screenDivergence, screenDivergence)));
}

MPB.SetFloat("_Ramp", ramp);

thisRenderer.SetPropertyBlock (MPB);
Now a swoosh can be spawned anywhere; its script will drive _Ramp over a specified time, and you know exactly when it will reach the end and what it will look like along the way.
Even when moving the camera a screen swoosh will always start at its world position and end at the screen position, the shape is always smooth and movement always fluid.
Lucky's tail swipe uses the same shader with an extra texture overlay to make it look more wispy. The tail swoosh is spawned at Lucky's position and rotation and then the end position is just updated to be Lucky's current position.
If Lucky is jumping, the tail swoosh will follow him in the air while maintaining its smooth shape. It's a subtle effect, but it helps tie the swoosh to Lucky.
But we're not done yet. You can get really fancy with a geometry shader by turning each triangle into a little particle. Here one of the swoosh ribbons has the swoosh particle shader on it. This shader turns each triangle into its own quad and gives it some movement over its lifetime. Because this is just a shader, you can play the whole effect backwards. And like the regular swooshes, the particle swooshes update their positions when the target moves.
How exactly that all works may be a post for another day though.

Monday, July 9, 2018

Dark and Stormy


This repo is available on github: github.com/SquirrelyJones/DarkAndStormy
In this post I'll break down some of what's going on in this funky skybox shader for Unity. Some of the techniques used are Flow Mapping, Steep Parallax Mapping, and Front To Back Alpha Blending. There are plenty of resources that go over these techniques in detail, and this post is more about using them to make something cool than it is about thoroughly explaining each one. It should also be noted that this shader is not optimized and is structured for easier readability.

The Textures

This is the main cloud layer.
This is the flow map generated from the main cloud layer. The red and green channels are similar to a blurry normal map (OpenGL style) generated from the main cloud layer's height. The smaller clouds will flow outward from the thicker parts of the main clouds. The blue channel is a mask for places where there may be pinches due to the flow pushing a lot of the cloud texture into a small area. This doesn't look great on clouds, so we want to mask out the places where it will occur.
This is the second cloud layer, it will add detail to the large clouds and be distorted by the large clouds flow map.
This is the wave distortion map. This distorts all the clouds and gives an ocean wave feel to the motion.
The wave distortion map was generated in Substance Designer using a cellular pattern with a heavy anisotropic blur applied. The blur direction should be perpendicular to the direction it will scroll to give it a proper wavy look.
The last texture is the Upper color that will show through the clouds.

The Shader

Shader "Skybox/Clouds"
{
 Properties
 {
  [NoScaleOffset] _CloudTex1 ("Clouds 1", 2D) = "white" {}
  [NoScaleOffset] _FlowTex1 ("Flow Tex 1", 2D) = "grey" {}
  _Tiling1("Tiling 1", Vector) = (1,1,0,0)

  [NoScaleOffset] _CloudTex2 ("Clouds 2", 2D) = "white" {}
  [NoScaleOffset] _Tiling2("Tiling 2", Vector) = (1,1,0,0)
  _Cloud2Amount ("Cloud 2 Amount", float) = 0.5
  _FlowSpeed ("Flow Speed", float) = 1
  _FlowAmount ("Flow Amount", float) = 1

  [NoScaleOffset] _WaveTex ("Wave", 2D) = "white" {}
  _TilingWave("Tiling Wave", Vector) = (1,1,0,0)
  _WaveAmount ("Wave Amount", float) = 0.5
  _WaveDistort ("Wave Distort", float) = 0.05

  _CloudScale ("Clouds Scale", float) = 1.0
  _CloudBias ("Clouds Bias", float) = 0.0

  [NoScaleOffset] _ColorTex ("Color Tex", 2D) = "white" {}
  _TilingColor("Tiling Color", Vector) = (1,1,0,0)
  _ColPow ("Color Power", float) = 1
  _ColFactor ("Color Factor", float) = 1

  _Color ("Color", Color) = (1.0,1.0,1.0,1)
  _Color2 ("Color2", Color) = (1.0,1.0,1.0,1)

  _CloudDensity ("Cloud Density", float) = 5.0

  _BumpOffset ("BumpOffset", float) = 0.1
  _Steps ("Steps", float) = 10

  _CloudHeight ("Cloud Height", float) = 100
  _Scale ("Scale", float) = 10

  _Speed ("Speed", float) = 1

  _LightSpread ("Light Spread PFPF", Vector) = (2.0,1.0,50.0,3.0)
 }
All the properties that can be played with.
 SubShader
 {
  Tags { "RenderType"="Opaque" }
  LOD 100

  Pass
  {
   CGPROGRAM
   #pragma vertex vert
   #pragma fragment frag
   
   #include "UnityCG.cginc"
   #define SKYBOX
   #include "FogInclude.cginc"
There is a custom include file that has a poor man's height fog and integrates the directional light color. The terrain shader also uses the same fog to keep things cohesive.
   sampler2D _CloudTex1;
   sampler2D _FlowTex1;
   sampler2D _CloudTex2;
   sampler2D _WaveTex;

   float4 _Tiling1;
   float4 _Tiling2;
   float4 _TilingWave;

   float _CloudScale;
   float _CloudBias;

   float _Cloud2Amount;
   float _WaveAmount;
   float _WaveDistort;
   float _FlowSpeed;
   float _FlowAmount;

   sampler2D _ColorTex;
   float4 _TilingColor;

   float4 _Color;
   float4 _Color2;

   float _CloudDensity;

   float _BumpOffset;
   float _Steps;

   float _CloudHeight;
   float _Scale;
   float _Speed;

   float4 _LightSpread;

   float _ColPow;
   float _ColFactor;
Just declaring all the property variables to be used.
   struct v2f
   {
    float4 vertex : SV_POSITION;
    float3 worldPos : TEXCOORD0; 
   };

   
   v2f vert (appdata_full v)
   {
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.worldPos = mul( unity_ObjectToWorld, v.vertex ).xyz;
    return o;
   }
The vertex shader is pretty lightweight, just need the world position for the pixel shader.
   float rand3( float3 co ){
       return frac( sin( dot( co.xyz ,float3(17.2486,32.76149, 368.71564) ) ) * 32168.47512);
   }
We'll need a random number for some noise. This function generates a random number based on a float3.
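For reference, the same hash ported to Python (a sketch for intuition, not part of the shader):

```python
import math

def rand3(co):
    """Python port of the shader's rand3: hash a float3 into [0, 1)
    via frac(sin(dot(co, k)) * bigNumber)."""
    d = co[0] * 17.2486 + co[1] * 32.76149 + co[2] * 368.71564
    s = math.sin(d) * 32168.47512
    return s - math.floor(s)  # frac(), correct even for negative s
```

It's deterministic for a given input, which matters later: the per-pixel jitter it drives stays stable rather than crawling randomly every frame (the shader re-seeds it with _SinTime to animate it deliberately).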
   half4 SampleClouds ( float3 uv, half3 sunTrans, half densityAdd ){

    // wave distortion
    float3 coordsWave = float3( uv.xy *_TilingWave.xy + ( _TilingWave.zw * _Speed * _Time.y ), 0.0 );
    half3 wave = tex2Dlod( _WaveTex, float4(coordsWave.xy,0,0) ).xyz;
The wave texture needs to be sampled first; it will distort the rest of the coordinates like a Gerstner wave. In all the _Tiling parameters, .xy is the tiling scale and .zw is the scrolling speed. All scrolling is multiplied by the global _Speed variable for easily adjusting the overall speed of the skybox.
    // first cloud layer
    float2 coords1 = uv.xy * _Tiling1.xy + ( _Tiling1.zw * _Speed * _Time.y ) + ( wave.xy - 0.5 ) * _WaveDistort;
    half4 clouds = tex2Dlod( _CloudTex1, float4(coords1.xy,0,0) );
    half3 cloudsFlow = tex2Dlod( _FlowTex1, float4(coords1.xy,0,0) ).xyz;
Using the red and green channels of the wave texture (xy), distort the uv coordinates for the first cloud layer. Also sample the cloud flow texture with the same coordinates.
    // set up time for second clouds layer
    float speed = _FlowSpeed * _Speed * 10;
    float timeFrac1 = frac( _Time.y * speed );
    float timeFrac2 = frac( _Time.y * speed + 0.5 );
    float timeLerp  = abs( timeFrac1 * 2.0 - 1.0 );
    timeFrac1 = ( timeFrac1 - 0.5 ) * _FlowAmount;
    timeFrac2 = ( timeFrac2 - 0.5 ) * _FlowAmount;
This is a standard setup for flow mapping.
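The idea behind this flow-mapping setup: run two copies of the distortion on time phases offset by half a cycle, and crossfade so each phase is invisible at the moment it wraps. A Python sketch of the phase math (names are mine):

```python
def flow_phases(t, speed, flow_amount):
    """Two half-offset time phases plus a blend weight.
    Each phase's distortion offset passes through zero mid-cycle;
    the blend weight hides each phase exactly when it wraps."""
    f1 = (t * speed) % 1.0                 # frac(time * speed)
    f2 = (t * speed + 0.5) % 1.0           # same, offset half a cycle
    blend = abs(f1 * 2.0 - 1.0)            # 1 at phase-1 wrap, 0 mid-cycle
    return (f1 - 0.5) * flow_amount, (f2 - 0.5) * flow_amount, blend
```

When blend is 0, phase 1 is fully weighted and its offset is exactly 0 (undistorted); when blend is 1, phase 1 is wrapping but contributes nothing. That's why the scrolling distortion never visibly pops.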

    // second cloud layer uses flow map
    float2 coords2 = coords1 * _Tiling2.xy + ( _Tiling2.zw * _Speed * _Time.y );
    half4 clouds2 = tex2Dlod( _CloudTex2, float4(coords2.xy + ( cloudsFlow.xy - 0.5 ) * timeFrac1,0,0)  );
    half4 clouds2b = tex2Dlod( _CloudTex2, float4(coords2.xy + ( cloudsFlow.xy - 0.5 ) * timeFrac2 + 0.5,0,0)  );
    clouds2 = lerp( clouds2, clouds2b, timeLerp);
    clouds += ( clouds2 - 0.5 ) * _Cloud2Amount * cloudsFlow.z;
The second cloud layer coordinates start with the first cloud layer coordinates so the second cloud layer will stay relative to the first. Sample the second cloud layer using the flow map to distort the coordinates. Then add them to the base cloud layer, masking them by the flow map's blue channel.
    // add wave to cloud height
    clouds.w += ( wave.z - 0.5 ) * _WaveAmount;
Add the wave texture's blue channel to the cloud height.
    // scale and bias clouds because we are adding lots of stuff together
     // and the values could go outside 0-1 range
    clouds.w = clouds.w * _CloudScale + _CloudBias;
Since everything is just getting added together, there is the possibility that the values could go outside the 0-1 range. If things look weird, we can manually scale and bias the final value back into a more reasonable range.
    // overhead light color
    float3 coords4 = float3( uv.xy * _TilingColor.xy + ( _TilingColor.zw * _Speed * _Time.y ), 0.0 );
    half4 cloudColor = tex2Dlod( _ColorTex, float4(coords4.xy,0,0)  );
Sample the overhead light color texture.
    // cloud color based on density
    half cloudHightMask = 1.0 - saturate( clouds.w );
    cloudHightMask = pow( cloudHightMask, _ColPow );
    clouds.xyz *= lerp( _Color2.xyz, _Color.xyz * cloudColor.xyz * _ColFactor, cloudHightMask );
Using the cloud height (the alpha channel of the clouds), lerp between the 2 colors and multiply the cloud color by the result. The power function is used to adjust the tightness of the "cracks" in the clouds that let light through.
    // subtract alpha based on height
    half cloudSub = 1.0 - uv.z;
    clouds.w = clouds.w - cloudSub * cloudSub;
Subtract the uv height from the cloud height. This gives us the cloud density at the current sample height.
    // multiply density
    clouds.w = saturate( clouds.w * _CloudDensity );
Multiply the density by the _CloudDensity variable to control the softness of the clouds.
    // add extra density
    clouds.w = saturate( clouds.w + densityAdd );
Add any extra density if needed. This value is passed in and is 0 except for the final sample, where it is 1.
    // add Sunlight
    clouds.xyz += sunTrans * cloudHightMask;
Add in the sun gradients masked by the cloud height mask.
    // pre-multiply alpha
    clouds.xyz *= clouds.w;
The front to back alpha blending function needs the alpha to be pre-multiplied.
    return clouds;
   }
This is the main function for sampling the clouds. The pixel shader will loop over this function.
   fixed4 frag (v2f IN) : SV_Target
   {
     // generate a view direction from the world position of the skybox mesh
    float3 viewDir = normalize( IN.worldPos - _WorldSpaceCameraPos );

    // get the falloff to the horizon
    float viewFalloff = 1.0 - saturate( dot( viewDir, float3(0,1,0) ) );

    // Add some up vector to the horizon to pull the clouds down
    float3 traceDir = normalize( viewDir + float3(0,viewFalloff * 0.1,0) );
We get the view direction by subtracting the camera position from the world position and normalizing the result. "traceDir" is the direction that will be used to generate the cloud uvs. It is just the view direction with a little bit of "up" added at the horizon. This adds a little bend to the clouds, like they are curving around the planet, and keeps them from sprawling off into infinity at the horizon and causing all kinds of artifacts.
    // Generate uvs from the world position of the sky
    float3 worldPos = _WorldSpaceCameraPos + traceDir * ( ( _CloudHeight - _WorldSpaceCameraPos.y ) / max( traceDir.y, 0.00001) );
    float3 uv = float3( worldPos.xz * 0.01 * _Scale, 0 );
Use the camera position plus the trace direction to get a world position for the cloud layer. This way the clouds will react to the camera moving; just make sure not to move the camera up through the clouds, or things get weird. Then make the uvs for the clouds from the world position, multiplying by the global scale variable for easy adjusting.
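That world position line is a ray/plane intersection: march along traceDir until the ray reaches the cloud layer's height. A Python sketch of the same math (names are mine):

```python
def cloud_layer_uv(cam_pos, trace_dir, cloud_height, scale):
    """Intersect the view ray with a horizontal plane at cloud_height,
    then derive uvs from the hit point's xz, as the shader does."""
    # distance along the ray to the plane; max() avoids dividing by ~0
    # for rays pointing at or below the horizon
    t = (cloud_height - cam_pos[1]) / max(trace_dir[1], 1e-5)
    hit = [c + d * t for c, d in zip(cam_pos, trace_dir)]
    uv = (hit[0] * 0.01 * scale, hit[2] * 0.01 * scale)
    return uv, hit
```

Looking straight up, the hit point sits directly above the camera at exactly cloud_height; shallower angles push the hit point far out along xz, which is why the horizon clamp above matters.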
    // Make a spot for the sun, make it brighter at the horizon
    float lightDot = saturate( dot( _WorldSpaceLightPos0, viewDir ) * 0.5 + 0.5 );
    half3 lightTrans = _LightColor0.xyz * ( pow( lightDot,_LightSpread.x ) * _LightSpread.y + pow( lightDot,_LightSpread.z ) * _LightSpread.w );
    half3 lightTransTotal = lightTrans * pow(viewFalloff, 5 ) * 5.0 + 1.0;
Using the dot product of the first directional light's direction and the view direction, get a gradient in the direction of the sun. Then use a power function to tighten up the gradient to your liking. This is the light from the sun that will shine through the back of the clouds. The _LightSpread parameter holds the power and factor for the two sun gradients that get added together for better control.
    // Figure out how for to move through the uvs for each step of the parallax offset
    half3 uvStep = half3( traceDir.xz * _BumpOffset * ( 1.0 / traceDir.y ), 1.0 ) * ( 1.0 / _Steps );
    uv += uvStep * rand3( IN.worldPos + _SinTime.w );
This is the standard steep parallax uv step amount: how far through the uvs and the cloud height we move with each sample. Then the starting uv is jittered a bit with a random value per pixel to keep the result from looking like flat layers.
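A Python sketch of the per-step increment (names are mine): the xz movement per unit of height is scaled by 1/traceDir.y, so shallower view angles take longer horizontal strides, and the whole thing is divided by the step count so the march always covers the same total depth.

```python
def parallax_step(trace_dir, bump_offset, steps):
    """Per-step (uv.x, uv.y, height) increment for the steep-parallax march."""
    inv_y = 1.0 / trace_dir[1]
    return (trace_dir[0] * bump_offset * inv_y / steps,
            trace_dir[2] * bump_offset * inv_y / steps,
            1.0 / steps)
```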
    // initialize the accumulated color with fog
    half4 accColor = FogColorDensitySky(viewDir);
    half4 clouds = 0;
    [loop]for( int j = 0; j < _Steps; j++ ){
     // if we filled the alpha then break out of the loop
     if( accColor.w >= 1.0 ) { break; }

     // add the step offset to the uv
     uv += uvStep;

     // sample the clouds at the current position
     clouds = SampleClouds(uv, lightTransTotal, 0.0 );

     // add the current cloud color with front to back blending
     accColor += clouds * ( 1.0 - accColor.w );
    }
Start by getting the fog at the starting point. This creates an early-out opportunity for the loop, since we don't need to sample clouds once the accumulated color is fully opaque. Then iterate over the clouds, moving the uv with each iteration and adding the clouds to the accumulated color using front to back alpha blending.
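Front to back blending is worth seeing in isolation: each premultiplied sample is weighted by how much "room" is left in the accumulated alpha, and once alpha saturates, later samples contribute nothing, which is what justifies the early out. A Python sketch (names are mine):

```python
def front_to_back(samples):
    """Accumulate premultiplied-alpha (r, g, b, a) samples front to back,
    stopping early once the accumulated alpha is opaque."""
    acc = [0.0, 0.0, 0.0, 0.0]
    for rgba in samples:
        if acc[3] >= 1.0:
            break                      # nothing behind this can show through
        w = 1.0 - acc[3]               # remaining transmittance
        acc = [a + s * w for a, s in zip(acc, rgba)]
    return acc
```

This is the same `accColor += clouds * (1.0 - accColor.w)` line from the loop, just unrolled over a list.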
    // one last sample to fill gaps
    uv += uvStep;
    clouds = SampleClouds(uv, lightTransTotal, 1.0 );
    accColor += clouds * ( 1.0 - accColor.w );
Once we have iterated over the entire cloud volume, do one last sample without testing against the cloud height to fill in any holes from cloud values that didn't fit inside the volume.
    // return the color!
    return accColor;
   }
   ENDCG
  }
 }
}
Then return the color and we're done!