I need to rethink how I manage my lights (once again), because right now the lights casting shadows are rendered as additive lights, which means the IBL contribution is applied several times.
Until now I didn't have a notion of "ambient" lighting.
While working on my cubemap generation pipeline, I was still puzzled as to why the IBL was so strong compared to the actual lights.
I decided to verify that my PBR wasn't broken by using red PBR balls this time and well...
Took me a day to figure out what was happening.
After checking my code a few times I isolated it out on being related to the DFG LUT.
Inverting its value (one minus) was somehow fixing the shading and brightness issue. This was very confusing.
Then I extracted the LUT from Filament and compared it with the one from Learn OpenGL and with mine.
Here is what they look like in Designer:
Notice what's wrong?
Filament's LUT uses swapped red and green channels.
My initial one minus trick was just a lucky fix. I'm glad I took the time to figure out what was happening.
Filament's documentation doesn't mention that swap: https://google.github.io/filament/Filament.md.html#table_texturedfg
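To make the failure mode concrete, here is a Python sketch of how the DFG terms get applied, assuming the common split-sum form `F0 * scale + bias` (the function name and the layout flag are mine, not actual Filament API):

```python
def apply_dfg(f0, dfg_r, dfg_g, filament_layout=False):
    """Apply the split-sum environment BRDF (DFG) terms to F0.

    Learn OpenGL's LUT stores the scale term in red and the bias in
    green; Filament's LUT ships with those two channels swapped, so
    sampling it with the Learn OpenGL convention gives wrong shading
    (which a 'one minus' hack can accidentally almost compensate).
    """
    scale, bias = (dfg_g, dfg_r) if filament_layout else (dfg_r, dfg_g)
    return f0 * scale + bias
```

Same two texels, very different specular response depending on which channel you treat as the scale.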
Anyway, once I figured this out, the fix was immediate and my shiny balls were now looking great:
I'm looking at ways to store the binary masks produced by my shadow volumes in the form of a bit mask.
The goal is to store something like 32 shadows in an RGBA8 texture and sample it later when rendering objects.
Doing so will let me render each lit object only once (while applying IBL + shadow-casting lights + other lights).
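The packing itself is straightforward; a Python sketch of the idea, assuming one bit per shadow-casting light spread over a 4-channel 8-bit texel (function names are mine):

```python
def pack_shadow_mask(shadowed_flags):
    """Pack up to 32 per-light shadow bits into 4 bytes (one RGBA8 texel).

    Bit i of the mask is 1 when light i is occluded at this pixel,
    so a single texture fetch recovers the shadow state of every
    shadow-casting light during the lighting pass.
    """
    assert len(shadowed_flags) <= 32
    mask = 0
    for i, occluded in enumerate(shadowed_flags):
        if occluded:
            mask |= 1 << i
    # Split the 32-bit mask across the four 8-bit channels (R, G, B, A).
    return [(mask >> (8 * c)) & 0xFF for c in range(4)]


def light_is_shadowed(rgba, light_index):
    """Recover one light's bit from a fetched RGBA8 texel."""
    channel = rgba[light_index // 8]
    return ((channel >> (light_index % 8)) & 1) == 1
```

The shader side would do the same shifts and masks on the fetched texel.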
I think my next step will be to look into map building and most importantly occlusion culling and scene cell/portal splitting.
For that I started playing with TrenchBroom to build some maps. I'm trying to see how I could build a pipeline around it to generate meshes and import them into my engine.
Been a while, so here are some news.
I made some decent progress with TrenchBroom: I figured out how to parse the map file format and output OBJs from it. I still have some details to iron out, but it's promising.
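I won't paste the whole converter, but the core of the parsing is just reading brush faces: the Quake-style .map format TrenchBroom writes describes each face as three points defining a plane, followed by the texture name and UV parameters. A minimal Python sketch (regex, names, and the texture string are illustrative, not my actual pipeline):

```python
import re

# One brush face: three plane points, then the texture name
# (offset/rotation/scale follow and are ignored here).
FACE_RE = re.compile(
    r"\(\s*([-\d.]+)\s+([-\d.]+)\s+([-\d.]+)\s*\)\s*"
    r"\(\s*([-\d.]+)\s+([-\d.]+)\s+([-\d.]+)\s*\)\s*"
    r"\(\s*([-\d.]+)\s+([-\d.]+)\s+([-\d.]+)\s*\)\s*"
    r"(\S+)"
)

def parse_face(line):
    """Parse one face line of a Quake-style .map brush.

    Returns (points, texture) or None when the line is not a face.
    The three points define the face's plane; intersecting a brush's
    planes against each other yields the vertices to emit in the OBJ.
    """
    m = FACE_RE.match(line.strip())
    if m is None:
        return None
    nums = [float(g) for g in m.groups()[:9]]
    points = [tuple(nums[i:i + 3]) for i in (0, 3, 6)]
    return points, m.group(10)
```

From there it's plane intersections and winding-order cleanup to get actual polygons.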
I also started testing custom textures and meshes:
Next I wanted to fix a bug I wasn't aware of until it was mentioned on the Graphics Programming Discord: did you know that when computing the bitangent in your vertex shader you have to multiply it by a handedness factor? I didn't.
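For the record, the fix itself is tiny; a Python sketch of the vertex-shader math (the handedness is the tangent's w component, e.g. what glTF exporters write out; the function name is mine):

```python
def compute_bitangent(normal, tangent, handedness):
    """Rebuild the bitangent from the normal and tangent.

    handedness is +1 or -1 (the tangent's w component). Without it,
    meshes with mirrored UVs end up with a flipped bitangent and
    broken normal mapping on half of the geometry.
    """
    cx = normal[1] * tangent[2] - normal[2] * tangent[1]
    cy = normal[2] * tangent[0] - normal[0] * tangent[2]
    cz = normal[0] * tangent[1] - normal[1] * tangent[0]
    return [cx * handedness, cy * handedness, cz * handedness]
```

In GLSL that's just `cross(normal, tangent.xyz) * tangent.w`.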
In order to fix this, I had to rework how some data is written in my mesh format. I took the time to split the regular geometry and the shadow volume geometry into separate files (in anticipation of the geometry coming from TrenchBroom).
I also did a quick test with a non-closed mesh to see how well it would work (or not). Notably, I was thinking about how to cast shadows with meshes like fences.
...And one-sided geometry actually works well!
I will have to think about a trick to make it work as two-sided, but that doesn't seem impossible.
During pre-processing I also discard some materials, which lets me get rid of hidden faces.
I have been trying out more complex stuff to see if it was working well.
Another update: I got auto-reloading of the scene and meshes working.
(It works by simply monitoring the scene source file on the engine side.)
This means I can edit my scene in TrenchBroom and get live updates in-engine on the side.
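The monitoring is nothing fancy; a Python sketch of the equivalent mtime polling (the engine does this check once per frame; the class name is illustrative):

```python
import os

class FileWatcher:
    """Detect scene-file edits by comparing mtimes between polls."""

    def __init__(self, path):
        self.path = path
        self.last_mtime = os.stat(path).st_mtime

    def changed(self):
        """Call once per frame; returns True when the file was rewritten."""
        mtime = os.stat(self.path).st_mtime
        if mtime != self.last_mtime:
            self.last_mtime = mtime
            return True
        return False
```

When `changed()` fires, the scene is re-parsed and the meshes re-uploaded.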
I also took the opportunity to implement a framerate limiter; this way I can save on performance while the engine is out of focus.
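The limiter is a basic sleep on whatever is left of the frame budget; a Python sketch under that assumption (the engine simply lowers the target FPS when the window loses focus):

```python
import time

def limit_framerate(frame_start, target_fps):
    """Sleep away the remainder of the frame budget.

    Returns the timestamp to use as the next frame's start time.
    Dropping target_fps while unfocused is where the savings come from.
    """
    budget = 1.0 / target_fps
    elapsed = time.perf_counter() - frame_start
    if elapsed < budget:
        time.sleep(budget - elapsed)
    return time.perf_counter()
```

A real engine loop would also clamp the sleep to account for OS timer granularity, but this is the gist.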
So two days ago I decided to look into the Open Dynamics Engine (ODE), mostly to evaluate how much work integrating it would represent.
I was wondering how much work compiling it would need.
Well... Compiling it was actually very straightforward on Linux. So I spent the saved time on the integration itself.
So now I have a bouncing ball in my engine. 😍
As for why I chose ODE and not something else: mostly because I wanted an easy-to-build and easy-to-use C API.
Bullet is starting to be a bit outdated and I haven't found a C wrapper for it. The Jolt wrappers aren't super up to date or complete so far.
ODE worked out of the box, so that should be good enough for now. Hopefully performance will follow for my use-case.
Well... Guess what? ODE is gone. ODE is fine, but it requires too much work to get good performance out of it. The price of its flexibility, I guess.
I switched to Jolt instead, and while the C API versions out there aren't perfect, they do the job. Getting great performance out of it with minimal tweaking.
I even got cubes working. :)
Decided to get back to cubemaps and finally tackle blending several of them.
I'm using a brush in TrenchBroom as the bounds of the parallax correction. The advantage is that I can share the same brush across several cubemap capture points.
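The correction itself is the classic box projection (à la Lagarde): intersect the reflection ray with the proxy AABB, then re-aim the lookup from the cubemap's capture point toward the hit. A Python sketch of the math (names are mine; the shader version is the same three-slab intersection):

```python
def parallax_corrected_dir(pos, refl, box_min, box_max, capture_pos):
    """Box-projected cubemap lookup (parallax correction).

    Intersect the reflection ray with the proxy AABB (the brush),
    then return the direction from the cubemap's capture point to
    the hit point. All arguments are 3-component tuples; refl is
    assumed non-degenerate (not parallel to all three slab pairs).
    """
    t_exit = float("inf")
    for a in range(3):
        if refl[a] > 0.0:
            t = (box_max[a] - pos[a]) / refl[a]
        elif refl[a] < 0.0:
            t = (box_min[a] - pos[a]) / refl[a]
        else:
            continue  # ray parallel to this slab pair
        t_exit = min(t_exit, t)
    hit = [pos[a] + refl[a] * t_exit for a in range(3)]
    return [hit[a] - capture_pos[a] for a in range(3)]
```

Because the bounds come from a shared brush, several capture points can reuse the exact same box.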
Okay, I got the volume bounds working and even added fading so that the reflection is not visible outside the volume.
Now I have to think about blending between cubemaps and reflection proxies.
(I also need to do something about the octahedral seams.)
Octahedral cubemaps can be very low quality, so I also tried jittering the reflection vector with blue noise to hide the artifacts a bit.
Not very happy with the results, however. You need a high level of jitter to hide the issues, and that may bias the reflections in unforeseen ways (but they are low quality anyway).
Also, the jitter is static in screen space; animating it makes the original artifacts visible again because of visual persistence. 😩
Did some tweaks to add padding to the octahedral textures; now the seams are (almost) all gone.
I can enjoy very shiny balls. :)
(I also added a quick reinhard tonemap to the debug preview of the cubemap since they are rendered too late in the pipeline for the editor stuff.)
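The padding trick boils down to baking a border of wrapped-around edge texels and insetting the sampling UVs so bilinear taps never cross the octahedron's seams. A Python sketch of the UV remap, with the standard octahedral encode for context (function names and sizes are illustrative):

```python
def oct_encode(d):
    """Map a unit direction to octahedral [0, 1]^2 coordinates."""
    x, y, z = d
    s = abs(x) + abs(y) + abs(z)
    x, y, z = x / s, y / s, z / s
    if z < 0.0:
        # Fold the lower hemisphere over the diagonals.
        x, y = ((1.0 - abs(y)) * (1.0 if x >= 0.0 else -1.0),
                (1.0 - abs(x)) * (1.0 if y >= 0.0 else -1.0))
    return (x * 0.5 + 0.5, y * 0.5 + 0.5)


def inset_uv(uv, tex_size, padding):
    """Remap octahedral UVs into the interior of the texture.

    The bake fills a `padding`-pixel border with the wrapped-around
    edge texels; insetting the sampling UVs by the same amount means
    bilinear filtering never mixes texels across the seams.
    """
    scale = (tex_size - 2.0 * padding) / tex_size
    offset = padding / tex_size
    return (uv[0] * scale + offset, uv[1] * scale + offset)
```

The "(almost)" in the result is the corners, where the octahedral wrap is trickier to pad exactly.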
So, a trick I thought about today while talking with colleagues about my recent octahedral stuff: adjust the blue noise intensity used during cubemap sampling based on the roughness value.
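A sketch of what I mean, in Python (assuming a per-pixel blue-noise 3-vector in [-1, 1]; names are mine): mirror-like surfaces get almost no jitter so they stay sharp and unbiased, while rough surfaces get the full amount, where the blur hides the octahedral artifacts anyway.

```python
import math

def jittered_reflection(refl, noise, strength, roughness):
    """Jitter the reflection vector by blue noise scaled with roughness.

    roughness ~ 0 leaves the vector untouched (sharp reflections stay
    sharp); roughness ~ 1 applies the full jitter strength, breaking
    up the low-quality octahedral sampling where it matters most.
    """
    amount = strength * roughness
    v = [refl[i] + noise[i] * amount for i in range(3)]
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]
```

The re-normalization keeps the lookup direction on the unit sphere after the offset.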
Also, I forgot I had FXAA running, which sort of cleans up the blue noise. 🤔
I noticed I haven't posted an update in a while. So where are we ?
I mostly spent September working on cubemap blending and physics.
Regarding physics, with the new wrapper I added, I got basic detection/trigger volumes working.
I also worked on converting my meshes into static collision in my levels. Then I added a refresh step to ensure that moving static objects would still interact well with simulated stuff (only on the editor side of things).
Quick demo below.
Currently in the middle of a refactor for the physics handling code. Now that I have an idea of what I can do and how, it's a good opportunity to do a cleanup.
Once this is settled (I want to experiment with a trigger volume + door opening), I think I will take a look at Steam Audio.
Demos like this make me want to try sound-related stuff: https://www.youtube.com/watch?v=hEqqzqDnuV8