Needed a visualisation to debug a weird depth bug on #dreamcast and ended up with this strange thing (and now I solved my bug, hooray)

Finally got a handle on bumpmapping for 1998 #dreamcast hardware. The basic idea is to simulate realtime light-responsive surface details on relatively simple geometry by providing a 'map' image that describes how light should interact with the surfaces, as if a lot more surface detail were being drawn.

Super detailed geometry would take up more of the limited RAM and need much more CPU time to calculate lighting and feed into the graphics pipeline every frame. This is just six faces with four vertices each; a non-bumpmapped version would need a mesh with thousands of verts.

#gamedev

(edit: spelt 'game' wrong, amazing)

@voxel Not to be confused with parallax mapping! Which is also super cool, but is difficult to make work on a Dreamcast.
@nicopap @voxel 2, 3 or 4 layers of the same texture with a height map in the alpha channel. Set alpha-test ref value to different values for each layer. Offset UVs according to dot(view,tangentU) and dot(view,tangentV). Works great.
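The layered alpha-test trick above can be sketched in software like this: each pass draws the same texture with a different alpha-test reference, with UVs shifted further along the tangent-space view direction for higher layers. (The 4x4 height map, `LAYERS`, and `scale` are all illustrative, not anyone's actual shipping values.)

```c
#define LAYERS 4

/* Illustrative 4x4 height map stored in the alpha channel, 0..255:
 * a raised square in the middle of a flat tile. */
static const unsigned char height_alpha[4][4] = {
    {  0,   0,   0,   0},
    {  0, 255, 255,   0},
    {  0, 255, 255,   0},
    {  0,   0,   0,   0},
};

/* Nearest-neighbour sample of the alpha (height) channel. */
static unsigned char sample_alpha(float u, float v) {
    int x = (int)(u * 4.0f), y = (int)(v * 4.0f);
    if (x < 0) x = 0; if (x > 3) x = 3;
    if (y < 0) y = 0; if (y > 3) y = 3;
    return height_alpha[y][x];
}

/* Which layer (0..LAYERS-1) is visible at (u,v)?
 * view_tu/view_tv are dot(view, tangentU) and dot(view, tangentV). */
int visible_layer(float u, float v, float view_tu, float view_tv) {
    const float scale = 0.05f;  /* illustrative parallax strength */
    int visible = -1;
    for (int layer = 0; layer < LAYERS; ++layer) {
        /* Higher layers are offset further along the view direction. */
        float ou = u + scale * (float)layer * view_tu;
        float ov = v + scale * (float)layer * view_tv;
        /* Alpha test: this layer only draws where the height map is
         * at least this layer's reference value. */
        unsigned char ref = (unsigned char)(layer * 255 / LAYERS);
        if (sample_alpha(ou, ov) >= ref)
            visible = layer;  /* later (higher) layers draw on top */
    }
    return visible;
}
```

Viewed straight on, the raised centre passes every layer's alpha test and the top layer wins; the flat border only passes the base layer, so the surface appears to have depth with no extra geometry.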
@TomF @nicopap @voxel How old is this technique? It sounds like it could work all the way back to OpenGL 1.x
@mirth @nicopap @voxel I first heard about it implemented in the very first Unreal Engine (might have been Corrinne Yu?), which was software-rendered, so... :-)
@TomF @mirth @voxel The original technic with a single step, so called "offset limiting" is very efficient and easy to implement, but has a few artifacts (not that the more fancy ray marching is that difficult ^^, but you gotta sample textures multiple times)
@nicopap @mirth @voxel The original context is the Dreamcast, so you have no pixel shaders - all you have is texture sampling and what you can squeeze into the alpha-blender.
@TomF @nicopap @voxel That's about the end of the era where I can imagine a reasonable way for the hardware to do what it does. The idea of doing arbitrary computed texture reads in a fragment shader feels impossibly expensive even though with the teraflops and terabytes in a mainstream PC we're obviously way beyond that.
@mirth @nicopap @voxel Yeah these days it's better to do a hundred maths ops in order to save a single texture fetch. And the same is true for both CPUs and GPUs.