Blog

Car paint shader pt 1: Base and Clear Coat

While I was working at Jackson Dawson, one of my major projects was to develop a car paint shader. Our projects were built primarily in Unity with C#, and so was my shader. Car paint is traditionally composed of three layers: the clear coat layer, the base coat layer, and the metallic flake layer. The existing car paint shader that we were using, one of the top options from the Unity Asset Store, had a lot of problems: the clear coat layer was blowing out the lighting, and from a distance the car would look completely white regardless of its paint color.

The base layer is pretty simple, just a standard PBR shader; I followed Unity's documentation for this one. The real complexity started when I introduced the clear coat. The clear coat layer is a glossy layer of transparent material that coats the outside of the car, there to protect the other layers of paint. Right away this forced me to write the shader for forward rendering only, as the differing normals between the base + flake layers and the clear coat layer are incompatible with deferred rendering in Unity, and I couldn't find any way around that. At some point I want to tackle this problem again when Unity's new scriptable render pipeline system comes out, and build a deferred pipeline capable of handling multiple layered materials.

The challenge with the clear coat layer was figuring out how to blend the base and clear coat layers. This is where the previous shader fell down: it just added the two layers together, causing blowout wherever two bright spots overlapped. Physically, any light reflected off the clear coat layer should never reach the lower layer, but turning that into an algorithm that didn't kill performance was decidedly more difficult. The awkward part was working out how much light the clear coat reflected away before it ever reached the base layer, and therefore how much of the base layer's contribution should be filtered out of the final image.
I tried to find an analytical, physically grounded solution that wouldn't be too performance intensive, but I couldn't work one out in the time I had on this project. So I wound up experimenting a lot, trying things until I found a blend that looked right.
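For flavor, here's a minimal sketch of what an energy-conserving blend can look like (illustrative names and numbers, not the actual production shader): whatever fraction of light the clear coat reflects gets subtracted from the base layer's budget before the two are summed.

    // Sketch only: light reflected by the clear coat never reaches the base layer,
    // so the base contribution is attenuated before the two are added together.
    half3 BlendCoatOverBase (half3 baseLit, half3 coatSpecular, half3 viewDir, half3 coatNormal, half coatF0)
    {
        // Schlick's approximation: reflectance rises toward 1 at grazing angles
        half cosTheta = saturate(dot(coatNormal, viewDir));
        half fresnel  = coatF0 + (1.0h - coatF0) * pow(1.0h - cosTheta, 5.0h);

        // whatever the coat reflects is removed from the base layer's budget
        return baseLit * (1.0h - fresnel) + coatSpecular * fresnel;
    }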

Another thing I worked on while writing this shader was using shader features to optimize it, so that it never runs anything the artist doesn't want it to. As a result, the car paint shader doubles as a simpler clear coat shader that works perfectly for materials like carbon fiber, simply skipping all the flake calculations.
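In Unity this kind of stripping is done with the shader_feature pragma, which compiles one variant of the shader with the keyword defined and one without, something like this (the keyword and helper function names here are just placeholders):

    #pragma shader_feature _FLAKES_ON

    half4 frag (v2f i) : SV_Target
    {
        half4 col = ComputeBaseAndClearCoat(i);   // placeholder helper
    #if defined(_FLAKES_ON)
        // compiled out entirely when the material toggles flakes off
        col.rgb += ComputeFlakeLayer(i);          // placeholder helper
    #endif
        return col;
    }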

The next major challenge came with the flake layer. In concept, the layer is a bunch of tiny flakes that reflect light in different directions and cause a light sparkle when you look at car paint. Getting this to work in-game is a lot more difficult, and probably deserves a writeup all of its own, so expect that sometime soon.

Spark compute particle shader

This is made using Unity. The particles are all simulated on the GPU with a program I wrote in HLSL using compute shaders, then rendered with a shader I wrote in Cg.

I tried to write the system to be as versatile as possible, so you can edit a whole myriad of properties in-engine while it's running. My next step is to add full geometry collision.

I'm pretty proud of it so far. It has collision/physics with the ground plane, and the plane gets illuminated by the sparks when they collide, provided the sparks have enough energy. The sparks lose energy over time, which causes them to glow less and eventually turn grey, and the system looks pretty with the HDR glow and other effects. I'm working on giving it collision with arbitrary geometry as well; I reckon I should have that done within the month.
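The heart of the simulation is a kernel along these lines (a simplified sketch with made-up constants, not the exact code):

    #pragma kernel Simulate

    struct Spark { float3 pos; float3 vel; float energy; };

    StructuredBuffer<Spark>   _SparksRead;    // last frame's state
    RWStructuredBuffer<Spark> _SparksWrite;   // this frame's state
    float _DeltaTime;
    float _Bounciness;

    [numthreads(256, 1, 1)]
    void Simulate (uint3 id : SV_DispatchThreadID)
    {
        Spark s = _SparksRead[id.x];

        s.vel += float3(0, -9.81, 0) * _DeltaTime;   // gravity
        s.pos += s.vel * _DeltaTime;

        if (s.pos.y < 0.0)                           // ground-plane collision
        {
            s.pos.y   = 0.0;
            s.vel.y  *= -_Bounciness;                // bounce
            s.energy *= 0.8;                         // impacts bleed off energy
        }

        s.energy -= _DeltaTime * 0.5;                // cooling: less glow over time

        _SparksWrite[id.x] = s;
    }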


The performance is solid as well. For most of the video I'm getting a solid 150-300 fps with around 30-50k particles (which are semitransparent, no less). The occasional stutter or speed-up was my recording software failing to keep up with the bitrate that a million little particles required; without the recording software running, I got the same results, minus the stutter, with 100k+ particles.

A lot of the algorithms I used to handle spawning, simulation, and particle death were inspired by the methods laid out by Gareth Thomas in these slides:
https://www.slideshare.net/DevCentralAMD/holy-smoke-faster-particle-rendering-using-direct-compute-by-gareth-thomas

 

You can find the source for this project here: 
https://github.com/TarAlacrin/SparkParticleSystem

Heightmap Deformation System

This is a system I wrote in Unity that uses tessellation and compute shaders to simulate terrain deformation on a plane. There are two parts to this project: the compute shader that regulates the deformation of the plane, and the shader used to render it.

The compute shader is given a base heightmap that determines the overall shape of the terrain. Overlaid on top of that is a flat layer of uniform-height "soft terrain," described by a second heightmap that the compute shader maintains. Whenever the compute shader detects an overlap between a deformer and the terrain, it writes the difference into the soft terrain heightmap. Since this is a buffer the compute shader both reads from and writes to every frame, a script on the CPU has to swap the read and write versions of the soft terrain heightmap each frame. These heightmaps are then passed to the tessellation shader to be rendered.
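The deformation kernel boils down to something like this sketch (names are illustrative, not the project's actual code):

    #pragma kernel Deform

    Texture2D<float>   _SoftHeightRead;    // last frame's soft-terrain heightmap
    RWTexture2D<float> _SoftHeightWrite;   // this frame's (the CPU swaps the two)
    Texture2D<float>   _DeformerHeight;    // height of the deformers' undersides

    [numthreads(8, 8, 1)]
    void Deform (uint3 id : SV_DispatchThreadID)
    {
        float soft     = _SoftHeightRead[id.xy];
        float deformer = _DeformerHeight[id.xy];

        // wherever a deformer dips below the current soft surface,
        // press the soft terrain down to meet it
        _SoftHeightWrite[id.xy] = min(soft, deformer);
    }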

[Image: heightmap wireframe]

The model used in this demo is only a 10 quad by 10 quad plane, but with the magic of a tessellation shader it's possible to dynamically increase the level of detail on the fly by adding more polygons during the geometry function of the shader. Because the extra polygons are generated on the GPU at that late stage in the rendering process, their number can change rapidly without a performance hit (you can see this effect at 0:44 into the YouTube video). Right now the algorithm I'm using is fairly naive, tessellating uniformly based on distance from the camera (sketched below). I believe that with some restructuring of the algorithm and how it handles data, I could get it to tessellate based on the derivative of the slope instead, which would only add extra vertices where they're really needed, optimizing the process further. Unfortunately a few issues make that idea a lot more challenging to implement, but it's something I want to come back to in the future. The shader is also responsible for the change in color (brown where you've deformed the terrain a lot, green where it's unbroken), as well as some other minor visual tweaks to get it to a point where I thought it looked good.

An extreme example of the tessellation based on distance from the camera.

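The naive distance-based factor amounts to something like this (a sketch; parameter names are illustrative):

    // naive distance-based tessellation factor
    float DistanceTessFactor (float3 worldPos, float minDist, float maxDist, float maxTess)
    {
        float d = distance(worldPos, _WorldSpaceCameraPos);

        // 0 far away, 1 right at the camera...
        float t = saturate((maxDist - d) / (maxDist - minDist));

        // ...mapped to 1 subdivision in the distance, up to maxTess up close
        return lerp(1.0, maxTess, t);
    }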

The biggest issue with this process was working out collision. Since the extra polygons are added at a late stage in the rendering process, they don't actually "exist" outside of it, so it was impossible to use them as any basis for colliders. The heightmap data never leaves the GPU either, and pulling all of it back from the GPU every frame would cause a major performance bottleneck. Even if we could get the heightmap data cheaply, it changes so dynamically, on such a fine and detailed level, that any attempt to construct a collision mesh from it would cook most computers. The solution I came up with was to simply poll the heightmap every frame, performing a sort of simplified raycast to get the height of the terrain under the deforming object. It works perfectly well for what it is, but it isn't very friendly to work with and probably wouldn't hold up in an actual game scenario. The reason I went with this method over other, potentially simpler solutions is that it lets objects sink slowly into the terrain rather than snapping down instantly.
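One way to picture the poll: a tiny kernel that samples the combined height under the object into a one-element buffer the CPU reads back (a sketch, not the exact code):

    #pragma kernel QueryHeight

    Texture2D<float> _BaseHeight;
    Texture2D<float> _SoftHeight;
    RWStructuredBuffer<float> _Result;   // one element, read back on the CPU
    uint2 _QueryTexel;                   // texel under the deforming object

    [numthreads(1, 1, 1)]
    void QueryHeight (uint3 id : SV_DispatchThreadID)
    {
        // combined surface height at the query point; the CPU reads this back
        // and eases the object toward it, letting it sink rather than snap
        _Result[0] = _BaseHeight[_QueryTexel] + _SoftHeight[_QueryTexel];
    }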

Ultimately I don’t think this is too big of an issue though, as in an actual game, it’s unlikely that you would have deformable terrain that’s as large and deep as in my demo, so it would be a lot easier to get away with other more practical methods of collision which would have looked tacky in my demo.



You can find the source for this project here:
https://github.com/TarAlacrin/HeightmapOnTheGPU

GPU based compute particle system

This is a custom particle system I wrote in Unity that runs entirely on the GPU using compute shaders. It can handle up to around a million particles on my crappy laptop before it starts to lag.

Compute shaders can't read from and write to the same buffer at the same time, so in order to simulate anything, a function is needed on the CPU that swaps the read buffer and the write buffer every frame. This has to be done for every piece of persistent data that is maintained between frames, and a lot of the problems the program had to overcome arose from that constraint.
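On the shader side, the ping-ponging looks roughly like this (a simplified sketch):

    struct Particle { float3 pos; float3 vel; float age; };

    StructuredBuffer<Particle>   _ParticlesRead;    // bound as last frame's state
    RWStructuredBuffer<Particle> _ParticlesWrite;   // bound as this frame's state

    [numthreads(64, 1, 1)]
    void Simulate (uint3 id : SV_DispatchThreadID)
    {
        Particle p = _ParticlesRead[id.x];   // read old state
        // ... integrate forces, age the particle, etc. ...
        _ParticlesWrite[id.x] = p;           // write new state
        // a C# script swaps which ComputeBuffer is bound as Read and Write
        // before every Dispatch, so next frame reads what this frame wrote
    }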


The first iteration of this particle system was the result of my first foray into writing compute shaders. Back then it was pretty much just a single function, run on every particle, that caused them to orbit around a point in space. I eventually went back and expanded it a ton, adding gravity and ground collision, as well as turning it into a real particle system with emission and particle death.

In order to add emission and particle death to the system, I needed to restructure it almost entirely. Instead of running a single function on every particle in the system, I split the compute shader into two primary kernels: one to handle the emission of new particles, and another to handle the simulation of the particles that are already alive.
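Structurally, that split looks something like the following sketch (heavily simplified; a real system also tracks alive counts and guards against consuming from an empty dead list):

    #pragma kernel Emit
    #pragma kernel Simulate

    struct Particle { float3 pos; float3 vel; float life; };

    RWStructuredBuffer<Particle>  _Particles;
    // in practice these are the same buffer, bound as consume for Emit
    // and append for Simulate
    ConsumeStructuredBuffer<uint> _DeadListIn;    // indices of free slots
    AppendStructuredBuffer<uint>  _DeadListOut;   // slots freed this frame
    float3 _EmitterPos;
    float  _DeltaTime;

    // dispatched with just enough threads for this frame's spawn count
    [numthreads(64, 1, 1)]
    void Emit (uint3 id : SV_DispatchThreadID)
    {
        uint slot = _DeadListIn.Consume();        // recycle a dead particle's slot
        Particle p;
        p.pos  = _EmitterPos;
        p.vel  = float3(0, 5, 0);                 // placeholder initial velocity
        p.life = 2.0;
        _Particles[slot] = p;
    }

    [numthreads(64, 1, 1)]
    void Simulate (uint3 id : SV_DispatchThreadID)
    {
        Particle p = _Particles[id.x];
        if (p.life > 0.0)
        {
            p.life -= _DeltaTime;
            p.pos  += p.vel * _DeltaTime;
            if (p.life <= 0.0)
                _DeadListOut.Append(id.x);        // died this frame: free the slot
        }
        _Particles[id.x] = p;
    }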

I also wrote the shader that renders the particles; the version I show off here draws every particle as a billboarded triangle. The process of writing it was a bit different from writing a normal shader, however: before I got to the vertex-fragment functions, I needed a geometry function to construct each particle's geometry first. The shader is currently pretty simplistic and is something I plan to come back to and expand upon in the future.
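The shape of that pipeline is roughly this (a sketch with illustrative names):

    struct Particle { float3 pos; float3 vel; float life; };
    struct v2g { uint id : TEXCOORD0; };
    struct g2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

    StructuredBuffer<Particle> _Particles;
    float _Size;

    // the vertex stage only forwards a particle index
    v2g vert (uint vid : SV_VertexID)
    {
        v2g o;
        o.id = vid;
        return o;
    }

    // the geometry stage expands each index into a camera-facing triangle
    [maxvertexcount(3)]
    void geom (point v2g input[1], inout TriangleStream<g2f> stream)
    {
        float3 center = _Particles[input[0].id].pos;
        float3 right  = UNITY_MATRIX_V[0].xyz * _Size;   // camera basis vectors,
        float3 up     = UNITY_MATRIX_V[1].xyz * _Size;   // so the tri always faces us

        float3 corners[3] = { center - right - up, center + right - up, center + up };
        float2 uvs[3]     = { float2(0, 0), float2(1, 0), float2(0.5, 1) };

        for (int i = 0; i < 3; i++)
        {
            g2f o;
            o.pos = mul(UNITY_MATRIX_VP, float4(corners[i], 1.0));
            o.uv  = uvs[i];
            stream.Append(o);
        }
    }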


Sound Analyzer

This project was pretty simple, mostly an attempt to explore how game engines handle sound. The challenge was to pull the audio data out of the depths of Unity, parse it, and then display the information in an interesting way. I wound up having a bit of fun with the project, adding in some physics and other things to make it fun to interact with. I'll probably do a more proper write-up in the future.

 

You can find the source for this project here:
https://github.com/TarAlacrin/SoundAnalyzer
