Volumetric Rendering and Shading

Discussion in 'Shaders' started by Major, Apr 30, 2017.

  1. Major

    Joined: Jul 26, 2012
    Posts: 69

    Hello.

    I'm getting into writing shaders, as it's something that has interested me for a while, particularly volume rendering. To get started, I wrote a simple ray marching implementation using fixed steps, and I got it working, rendering a sphere within a cube (see the sketch below). However, I've run into some difficulty when it comes to lighting.
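    Roughly what I have looks like this (a minimal sketch; _SphereCenter, _SphereRadius, and the worldPos interpolator are my own names/material properties, not Unity built-ins):

        // Fragment shader: march from the cube surface along the view ray
        // in fixed steps, testing each sample against the sphere.
        fixed4 frag (v2f i) : SV_Target
        {
            float3 rayDir = normalize(i.worldPos - _WorldSpaceCameraPos);
            float3 p = i.worldPos;              // start on the cube's surface

            for (int s = 0; s < 64; s++)        // fixed number of steps
            {
                if (distance(p, _SphereCenter.xyz) < _SphereRadius)
                    return fixed4(1, 1, 1, 1);  // hit: flat white for now
                p += rayDir * 0.01;             // fixed step size
            }
            return fixed4(0, 0, 0, 0);          // missed the sphere
        }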

    From what I understand of the Lambert shading model, the normal vector at a surface point is dotted with the direction vector toward a light source (e.g. a directional light), returning a value between -1 and 1 that is then clamped to the 0-1 range, as in the sketch below.
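    In code terms, I understand that to be something like this (worldNormal is a placeholder for the surface normal; _WorldSpaceLightPos0 and _LightColor0 are the Unity built-ins from UnityShaderVariables.cginc / UnityLightingCommon.cginc):

        // dot(N, L) is in [-1, 1]; saturate() clamps it to [0, 1].
        // For a directional light, _WorldSpaceLightPos0.xyz is the
        // direction pointing toward the light.
        float ndotl = saturate(dot(worldNormal, _WorldSpaceLightPos0.xyz));
        fixed3 diffuse = ndotl * _LightColor0.rgb;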

    The immediate problem with rendering volumes is that there are no vertex normals. From what I've found, normals for ray marched shapes are usually generated with an approximation technique that estimates the gradient of the distance field by sampling it at nearby points. But in the fragment function the world position of the pixel is available, and so is the object center. From these two values, the vector from the center of the sphere to the surface point can be calculated, and that vector should be the exact normal of the sphere at that point (see below).
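    That is, something like the following (the hit position and center names are placeholders, and SDF() stands for a hypothetical signed distance function):

        // Exact normal for a sphere: direction from the center out
        // to the surface point.
        float3 SphereNormal(float3 hitWorldPos, float3 centerWorld)
        {
            return normalize(hitWorldPos - centerWorld);
        }

        // The approximation I read about, for general distance fields:
        // central differences of the distance field around the point.
        float3 EstimateNormal(float3 p)
        {
            const float e = 0.001;
            return normalize(float3(
                SDF(p + float3(e, 0, 0)) - SDF(p - float3(e, 0, 0)),
                SDF(p + float3(0, e, 0)) - SDF(p - float3(0, e, 0)),
                SDF(p + float3(0, 0, e)) - SDF(p - float3(0, 0, e))));
        }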

    For visualization, I simply normalized this vector and used it as the output color (sketch below). I expected the color of each pixel to be static, since the world position and object center do not change, but instead the color of the whole sphere shifts as the camera moves around the object; exactly the effect I would expect if the vector went from the pixel's world position to the camera (in fact it looks practically identical).
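    The debug output is just this (assuming p is the ray marched hit position; the 0.5 remap is only there so negative components stay visible):

        // Visualize the (supposed) normal as a color,
        // remapped from [-1, 1] to the displayable [0, 1] range.
        float3 n = normalize(p - _SphereCenter.xyz);
        return fixed4(n * 0.5 + 0.5, 1);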

    I'm very confused as to why this does not work as expected. I assume I'm misunderstanding one or more parts of this process. I would appreciate some assistance with this problem, thanks!
     
  2. bgolus

    Joined: Dec 7, 2012
    Posts: 12,352

    On the whole, there aren't any assumptions here that are obviously wrong. However:
    What do you mean by the fragment function? Are you using the interpolated vertex world position, or the actual raytraced position on the sphere? Is the raytraced position still in world space, or did you transform it into a tangent / local space?
    How are you getting the object center? Presumably from the unity_ObjectToWorld transform matrix (see below), but this is unreliable if you have more than one "cube" in the scene using the same material, as dynamic / static batching will merge meshes together, causing the unity_ObjectToWorld transform to be an identity matrix.
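    i.e. something like this:

        // Object pivot in world space: transform the local-space origin.
        float3 objectCenter = mul(unity_ObjectToWorld, float4(0, 0, 0, 1)).xyz;
        // Equivalent: the translation column, unity_ObjectToWorld._m03_m13_m23.
        // Under static / dynamic batching the matrix can become identity,
        // so this "center" silently turns into the batch origin instead.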
     
  3. macdude2

    Joined: Sep 22, 2010
    Posts: 686

    Ok, yeah, those issues are probably in the code somewhere. Make sure that for all your matrix multiplications on positions you use the full 4x4 matrix; I accidentally used a 3x3 for the longest time and ran into similar bugs (example below). Basically, verify your math, but I agree, your understanding is largely sound.
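    For example, the classic version of that bug looks like this (v.vertex being the usual appdata position):

        // Bug: casting to float3x3 silently drops the translation column,
        // so the result is missing the object's world offset.
        float3 wrongWorldPos = mul((float3x3)unity_ObjectToWorld, v.vertex.xyz);

        // Correct: full 4x4 matrix with w = 1 so translation is applied.
        float3 worldPos = mul(unity_ObjectToWorld, float4(v.vertex.xyz, 1.0)).xyz;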