
Image based lighting notes

These are some of my notes from implementing image based lighting, a.k.a. IBL.

I thought I understood it pretty well until I started implementing it. Now, after a lot of reading, discussing and trying things out, I'm finally getting back to the stage where I tentatively believe I understand it. Hopefully these notes will save future me, or anyone else reading them, from going through the same difficulty. I'll update them if I find out anything further.

The radiance integral

$$L_o(v) = L_e(v) + \int_\Omega L_i(l) \, f(l, v) \, (n \cdot l) \, dl$$
  • The amount of light reflected at some point on a surface is the sum, over all incoming directions, of the light from that direction multiplied by the BRDF for the surface material, multiplied by the cosine of the angle between the light and the relevant normal vector, plus any light emitted by the surface itself.
  • We usually calculate this separately for the diffuse and specular lighting at the point, then sum the results.
  • For diffuse lighting, the relevant normal vector is the surface normal.
  • For specular lighting, the relevant normal vector is the half-vector: the vector halfway between the light direction and the view direction (where both vectors are pointing away from the surface point rather than towards it).
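The integral above can be approximated numerically. Here's a minimal Python sketch using Monte Carlo integration over the hemisphere, assuming a Lambertian BRDF and constant incoming radiance (both illustrative choices, not something prescribed by these notes):

```python
import math, random

def sample_hemisphere():
    # Uniform direction on the upper hemisphere (z is "up", i.e. the normal).
    u1, u2 = random.random(), random.random()
    z = u1                      # cos(theta), uniform in [0, 1]
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z)

def estimate_outgoing(albedo, incoming_radiance, n_samples=100_000):
    # Monte Carlo estimate of the radiance integral for a Lambertian BRDF
    # under constant incoming radiance. pdf = 1 / (2*pi) for uniform
    # hemisphere sampling, so we divide each sample by that.
    brdf = albedo / math.pi
    inv_pdf = 2.0 * math.pi
    total = 0.0
    for _ in range(n_samples):
        l = sample_hemisphere()
        cos_theta = l[2]        # n = (0, 0, 1), so n.l is just l.z
        total += incoming_radiance * brdf * cos_theta * inv_pdf
    return total / n_samples

# For constant incoming radiance the closed form is albedo * radiance,
# so this converges to ~0.8.
print(estimate_outgoing(0.8, 1.0))
```

Real renderers use importance sampling rather than uniform sampling, but the structure (sample a direction, evaluate radiance × BRDF × cosine, divide by the pdf, average) is the same.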

BRDF

  • A BRDF is a function which, given a view vector and a light vector, returns the amount of light which will be reflected along the view vector.
  • The view vector is the vector from the point being shaded to the view position, not the other way around.
  • The light vector is the vector from the point being shaded to the light position, not the other way around.
  • An isotropic BRDF is invariant under rotation about the surface normal: it depends only on the angles the vectors make with the normal and with each other, not on their absolute orientation.
  • BRDFs usually have other parameters which are constant for any given material.

Diffuse BRDF

  • Diffuse contribution comes from light emerging from the surface after bouncing around inside it a bit.
  • This means the light direction can be fairly random.
  • So a fairly common diffuse BRDF is simply a constant value: the diffuse colour for the material divided by pi (this is the Lambertian model).
  • Rough diffuse materials will need a more complicated diffuse BRDF such as Oren-Nayar.
  • Oren-Nayar is based on the microfacet model described below.
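To make the contrast concrete, here's a sketch of both diffuse BRDFs in Python. The Oren-Nayar version uses the common A/B approximation; the parameter names (`sigma` for roughness, `theta_i`/`theta_r` for the incoming and outgoing polar angles, `dphi` for the azimuthal difference) are my own labels, not from these notes:

```python
import math

def lambert_brdf(albedo):
    # Lambertian diffuse BRDF: a constant, independent of directions.
    # Dividing by pi keeps the total reflected energy <= albedo.
    return albedo / math.pi

def oren_nayar_brdf(albedo, sigma, theta_i, theta_r, dphi):
    # Oren-Nayar diffuse BRDF (the usual A/B approximation).
    # sigma is the microfacet roughness in radians; sigma = 0 reduces
    # exactly to the Lambertian result above.
    s2 = sigma * sigma
    a = 1.0 - 0.5 * s2 / (s2 + 0.33)
    b = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    return (albedo / math.pi) * (a + b * max(0.0, math.cos(dphi))
                                 * math.sin(alpha) * math.tan(beta))
```

Note that Oren-Nayar takes full angular inputs where Lambert ignores them entirely; that extra dependence is what captures the flat, retroreflective look of rough surfaces like clay.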

Microfacet BRDF

$$f(l, v, h) = \frac{D(h) \, F(v, h) \, G(l, v, h)}{4 \, (n \cdot l) \, (n \cdot v)}$$
  • Usually used for specular BRDFs, but Oren-Nayar (diffuse) is also based on microfacets.
  • Assumes that a surface is composed of lots of tiny planar fragments whose normals vary according to some statistical distribution. This is the Cook-Torrance model.
  • Breaks the BRDF into 3 main components:
    • D(h), the normal distribution function.
    • F(v, h), the Fresnel function.
    • G(l, v, h), the geometric shadowing function.
  • D(h) tells us the fraction of microfacet normals which point in a given direction.
    • This describes how rough or smooth a material is.
    • Rougher materials will have a more even spread of normals, whereas smoother ones will tend to have them concentrated around a single direction.
    • Lots of options for this: Blinn-Phong, GGX, GTR, etc.
  • F(v, h) gives us the fraction of light that gets reflected in each channel.
    • This tells us the material's "colour".
    • Pretty much everyone uses Schlick's approximation for this.
  • G(l, v, h) describes how much of the reflected light is blocked by other microfacets.
    • Tells us how "dark" a material is.
    • Is (or should be) affected by how rough the material is.
    • There are lots of options for this too.
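Putting the three components together, here's a minimal Python sketch of one common combination: GGX for D, Schlick's approximation for F, and the Smith/Schlick-GGX form for G. This is just one of the many possible choices listed above, and the `k = alpha / 2` convention in G varies between implementations:

```python
import math

def d_ggx(n_dot_h, alpha):
    # GGX / Trowbridge-Reitz normal distribution function.
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def f_schlick(v_dot_h, f0):
    # Schlick's approximation to the Fresnel term (per colour channel).
    # f0 is the reflectance at normal incidence; at grazing angles
    # the result tends towards 1.
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def g_smith(n_dot_l, n_dot_v, alpha):
    # Smith geometric shadowing using the Schlick-GGX form.
    k = alpha / 2.0
    g1 = lambda x: x / (x * (1.0 - k) + k)
    return g1(n_dot_l) * g1(n_dot_v)

def microfacet_brdf(n_dot_l, n_dot_v, n_dot_h, v_dot_h, f0, roughness):
    # Cook-Torrance specular BRDF: D * F * G / (4 (n.l)(n.v)).
    # Following the common convention alpha = roughness^2.
    alpha = roughness * roughness
    d = d_ggx(n_dot_h, alpha)
    f = f_schlick(v_dot_h, f0)
    g = g_smith(n_dot_l, n_dot_v, alpha)
    return d * f * g / (4.0 * n_dot_l * n_dot_v)
```

In a real shader the dot products would be clamped to avoid division by zero at grazing angles; that's omitted here for brevity.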

IBL in general

  • The idea of IBL is to bake out components of the radiance integral into textures ahead of time, so that our shaders can rapidly approximate it given just a direction, a surface colour and a roughness value.
  • We can bake out the irradiance (incoming light) for any given direction:
    • The irradiance will be from all pixels within a cone around the given direction.
    • The solid angle of the cone is determined by the normal distribution function for your BRDF (which in turn is shaped by the material's roughness).
    • This will be stored in a texture where we can look up values using a direction vector, i.e. a cube map or a lat-long 2D texture.
    • You can use a mip-mapped texture to store the results for different roughness levels. mip-level 0 will be the smoothest level, higher levels will be rougher.
    • Mip-level N+1 should represent a cone covering twice as many pixels as Mip-level N, so that the sampling works out correctly.
    • Because the mip-level contents depend on the mapping from the roughness parameter to a cone width, the irradiance map will be specific to a particular normal distribution function.
  • You can precalculate the rest of the BRDF separately as well:
    • This is the split sum approximation described in the SIGGRAPH 2013 course on physically based rendering.
    • Generate a 2D texture where the u axis corresponds to dot(N, V) values >= 0.0 and the v axis corresponds to roughness values between 0.0 and 1.0.
    • The texture contains a scale and a bias to apply to the material's specular colour (F0).
    • Apply the scale and bias, then multiply the result by the value from the irradiance map to get the final color for the pixel.
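The final combination step above is tiny. Here's a sketch of it in Python, where `prefiltered_radiance` is the value sampled from the mip-mapped irradiance map and `(scale, bias)` comes from the 2D BRDF lookup texture indexed by (n·v, roughness); all the names are illustrative, not a real API:

```python
def ibl_specular(prefiltered_radiance, scale, bias, f0):
    # Split-sum approximation: the lighting term (from the prefiltered
    # environment map) multiplied by the precomputed BRDF term, which is
    # a scale and bias applied to the specular colour f0.
    return prefiltered_radiance * (f0 * scale + bias)
```

Everything expensive has already been folded into the two texture lookups; the per-pixel cost at runtime is just this multiply-add.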

IBL vs. Reflections

  • What's the difference between IBL and reflections?
    • Theoretically: nothing. They're the same thing.
    • I think there may be an accident of terminology here, where "IBL" is often used to mean diffuse IBL, and "reflections" to mean specular IBL.
  • It's common to provide a low-res HDR image with a matching high-res LDR image. In this case:
    • The low-res HDR image is for diffuse IBL.
    • The high-res LDR image is for reflections.
  • This implies that reflections are expected to sample from an LDR input whereas IBL (of any kind, specular or diffuse) is better off sampling from a HDR input.
