Questions on how to get and work with the depth/z-position of a fragment

Discussion in 'Shaders' started by Ewanuk, May 25, 2017.

  1. Ewanuk

     Joined: Jul 9, 2011
     Posts: 257
    I have a number of shaders that rely on the distance a given fragment is from the camera. It's often the case that I have a command buffer that collects a bunch of depth data, modifies it, and saves it to a texture. That processing generally prohibits using the camera depth buffer for everything, so I write the depth myself to render textures as needed. No issues there. I'm using the DirectX 11 platform.

    Trouble is, I don't fully understand the code I'm using and could use some advice. Even just a link to a page of shader documentation that I should read and absorb would be helpful. I'm not sure what core concepts I'm missing.


    Question 1) Looking at this vertex shader:

    Code (CSharp):
    v2f vert (appdata v)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.scrPos = ComputeScreenPos(o.vertex);
        o.eyePos = mul(UNITY_MATRIX_MV, v.vertex);
        return o;
    }
    then going to the fragment shader:
    Code (CSharp):
    float4 frag (v2f i) : COLOR
    {
        float depth_1 = i.scrPos.z;
        float depth_2 = i.eyePos.z;
        return float4(depth_1, depth_2, 0, 1);
    }
    What is the difference between depth_1 and depth_2? Should/Shouldn't they be the same?


    Questions 2 & 3)
    Code (CSharp):
    v2f vert (appdata v)
    {
    ...
        o.scrPos = ComputeScreenPos(o.vertex);
        o.eyePos = mul(UNITY_MATRIX_MV, v.vertex);
    ...
    }

    float4 frag (v2f i) : COLOR
    {
        ...
        // Do a manual depth test here with the _CameraDepthTexture
        float sceneDepthAtFrag = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.scrPos)).r);

        float fragDepth = i.eyePos.z * -1;
        ...
    }
    2) I'm guessing that "sceneDepthAtFrag" is the linear distance from the camera? So, if the camera was 3 units* from a surface represented in _CameraDepthTexture, then sceneDepthAtFrag should equal 3?
    *(in view space, along the z-axis)

    3) Why do I have to multiply by -1 to get the frag depth?


    Question 4) Say I get the correct linear depth (0 = at the camera, 1 = exactly 1 meter from the camera along the z-axis in view space), can I correctly generate a normalized (0 = at near plane, 1 = at far plane) depth using:

    Code (CSharp):
    float linearDepthToNormalizedDepth(float linearDepth)
    {
        return (linearDepth - _ProjectionParams.y) / (_ProjectionParams.z - _ProjectionParams.y);
    }

    float normalizedDepthToLinearDepth(float normalizedDepth)
    {
        return (_ProjectionParams.z - _ProjectionParams.y) * normalizedDepth + _ProjectionParams.y;
    }
     
  2. bgolus

     Joined: Dec 7, 2012
     Posts: 12,343
    No, they shouldn't be the same. ComputeScreenPos gives a 0.0 to 1.0 range for x and y (after the perspective divide by w), which is really just the clip space position (UNITY_MATRIX_MVP) rescaled. The z component is left alone, so it's the straight projection depth. The eyePos is in view space, which is basically world scale but oriented to the camera, not clip (aka projection) space. You should look at the code for ComputeScreenPos.

    Let's do this one before we get to the other question. The forward direction in UNITY_MATRIX_MV view space is actually negative Z. There's another function you'll see in a lot of shaders that deal with the depth buffer, I think it's something like "compute eye depth", which is just:
    -mul(UNITY_MATRIX_MV, v.vertex).z;
    That negative sign in the front isn't a mistake.
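
    The sign flip can be seen with a toy view transform. This is plain Python, not shader code, and the camera setup is an assumption for illustration:

    ```python
    # Unity's view matrix follows the OpenGL convention: the camera looks down
    # its local -Z axis. For a camera at the origin looking along world +Z,
    # the world-to-view transform is just a flip of the X and Z axes
    # (flipping both keeps the space right-handed).

    def world_to_view(p):
        x, y, z = p
        return (-x, y, -z)

    point = (0.0, 0.0, 3.0)        # a point 3 units in front of the camera
    eye = world_to_view(point)
    print(eye[2])                  # -3.0: view-space z is negative in front of the camera
    print(-eye[2])                 # 3.0: negating recovers a positive eye depth
    ```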

    It's the linear depth, yes, which is the distance along the z axis. An object 3 units from the camera in the editor will show up as a value of 3 in the shader, as you surmised. However, what the _CameraDepthTexture actually stores is the projection space depth, which isn't linear. It'll actually match the same values as "depth_1"! It's a 0.0 to 1.0 value which goes from the near clip plane to the far plane, but a point halfway between the two in linear world or view space won't be 0.5; it'll be some other value that depends on the near and far plane distances. The LinearEyeDepth function exists to convert that non-linear depth to linear depth.
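
    To make the non-linearity concrete, here's a small Python sketch (not shader code) of the relationship, assuming a conventional non-reversed 0-1 depth buffer. z_buf mimics what the depth texture stores for a point at eye depth d, and linear_eye_depth mirrors the LinearEyeDepth math, built from the same near/far terms Unity packs into _ZBufferParams. The near/far values are made up:

    ```python
    near, far = 0.3, 100.0

    def z_buf(d):
        """Projection-space depth: 0 at the near plane, 1 at the far plane, non-linear."""
        return (far / (far - near)) * (1.0 - near / d)

    def linear_eye_depth(z):
        """Invert the projection: recover linear eye depth from the stored value."""
        zx = 1.0 - far / near          # _ZBufferParams.x
        zy = far / near                # _ZBufferParams.y
        return 1.0 / ((zx / far) * z + (zy / far))

    halfway = (near + far) / 2.0       # 50.15 units from the camera
    print(z_buf(halfway))              # ~0.997, not 0.5: depth is heavily non-linear
    print(linear_eye_depth(z_buf(3.0)))  # ~3.0: linear eye depth is recovered
    ```

    Note how almost the entire 0-1 range is spent on the first few units in front of the camera; that's why comparing raw depth buffer values directly against view-space distances doesn't work.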

    Probably, though there's also a LinearEyeDepth01 function (might be named something slightly different) that you can use to get normalized 0.0 to 1.0 linear depth. I also don't remember what exactly the components of the _ProjectionParams are, but those look like they should work if .z is the far plane, and .y is the near plane. (Which is what the documentation says, so you should be good.)
     
  3. Ewanuk

     Joined: Jul 9, 2011
     Posts: 257
    Thank you for the replies! This helps a lot; the big thing was understanding the difference between eye space and clip space. The linear vs. non-linear depth values were also confusing me.


    I'm guessing it's just an idiosyncrasy of graphics and the underlying math? Or is there a theoretical reason "forward" should be negative at that point?


    For future readers: it's Linear01Depth (as of Unity 5.5.1) that gets the normalized 0-1 linear value. This documentation helps too: https://docs.unity3d.com/Manual/SL-DepthTextures.html