
Object Depth shader?

Discussion in 'Shaders' started by kebrus, Sep 20, 2016.

  1. kebrus

    kebrus

    Joined:
    Oct 10, 2011
    Posts:
    415
    I kinda feel stupid because I could have sworn I did this a couple of years ago, but I can't remember how. I want to render a single object's depth. NOT the whole scene, just one object.

    I can't seem to figure out what I'm doing wrong. Shouldn't it be possible to just use the object's z position and do some math with the near/far clip planes?

    I'm finding old code but it's not working.

    Any hints?
     
  2. macdude2

    macdude2

    Joined:
    Sep 22, 2010
    Posts:
    686
    Couldn't you just put the object on a given layer and make the camera only render that layer?
     
  3. kebrus

    kebrus

    Joined:
    Oct 10, 2011
    Posts:
    415
    That's like using a bazooka to kill a fly; I just want to create an object shader that displays its depth.
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Are you trying to show depth as it's stored in the depth texture?
    float depth = COMPUTE_DEPTH_01;

    Are you trying to show depth in world space z depth from the camera?
    float depth;
    COMPUTE_EYEDEPTH(depth);


    You'll need to run those in the vertex shader and pass the value to the fragment shader through the v2f struct.
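    To sketch how those pieces fit together, here's a minimal object shader along those lines (assuming the built-in pipeline and UnityCG.cginc; it just visualizes the 0..1 depth as grayscale). Note that COMPUTE_DEPTH_01 reads v.vertex directly, so the vertex input must be named v:

    ```hlsl
    Shader "Unlit/ObjectDepth01"
    {
        SubShader
        {
            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                struct v2f
                {
                    float4 pos : SV_POSITION;
                    float depth : TEXCOORD0;
                };

                v2f vert (appdata_base v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.depth = COMPUTE_DEPTH_01; // must run in the vertex shader
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    return fixed4(i.depth.xxx, 1); // 0 at the camera, 1 at the far plane
                }
                ENDCG
            }
        }
    }
    ```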
     
  5. macdude2

    macdude2

    Joined:
    Sep 22, 2010
    Posts:
    686
    Not sure I'd agree with that? A camera rendering only one layer is quite efficient.
     
  6. kebrus

    kebrus

    Joined:
    Oct 10, 2011
    Posts:
    415
    Correct me if I'm wrong, but the first one would require a screen space shader solution that renders all of the camera's depth, right? I believe I want the second one, because I want to calculate it for just one object in that object's shader, using its z depth from the camera like you said.

    I searched for it and tried changing the respective lines from the example on this page: https://docs.unity3d.com/Manual/SL-DepthTextures.html

    It does render something at close range, but I believe it's not in the proper range. Am I right to use something like:
    Code (CSharp):
    return i.depth.x / _ZBufferParams.y;
    Somehow that doesn't seem totally right.

    EDIT: missed one, maybe it's this one?
    Code (CSharp):
    return i.depth.x * _ProjectionParams.w;
     
    Last edited: Sep 24, 2016
  7. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Neither of those macros has anything to do with the _CameraDepthTexture, though COMPUTE_DEPTH_01 is what Unity's shaders use when rendering the _CameraDepthNormalsTexture.

    From UnityCG.cginc
    Code (CSharp):
    // Depth render texture helpers
    #define DECODE_EYEDEPTH(i) LinearEyeDepth(i)
    #define COMPUTE_EYEDEPTH(o) o = -UnityObjectToViewPos( v.vertex ).z
    #define COMPUTE_DEPTH_01 -(UnityObjectToViewPos( v.vertex ).z * _ProjectionParams.w)
    #define COMPUTE_VIEW_NORMAL normalize(mul((float3x3)UNITY_MATRIX_IT_MV, v.normal))
    However, it is important to note that the depth stored in the _CameraDepthNormalsTexture is a linear depth (which is expanded to world units with a simple depth *= _ProjectionParams.z;), whereas _CameraDepthTexture is not a linear depth and needs to be expanded with either Linear01Depth( depth ) or LinearEyeDepth( depth ).

    Also from UnityCG.cginc
    Code (CSharp):
    // Z buffer to linear 0..1 depth (0 at eye, 1 at far plane)
    inline float Linear01Depth( float z )
    {
        return 1.0 / (_ZBufferParams.x * z + _ZBufferParams.y);
    }
    // Z buffer to linear depth
    inline float LinearEyeDepth( float z )
    {
        return 1.0 / (_ZBufferParams.z * z + _ZBufferParams.w);
    }
    It really depends on exactly what you're trying to get when you say "depth". Are you trying to match exactly what comes out of the z buffer or _CameraDepthTexture, which is a non-linear depth? Or are you just trying to get the distance from the camera to the object, either in world space units or in fraction of the distance from the camera to the far plane?
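    For contrast with the macros, a hedged sketch of what sampling the non-linear _CameraDepthTexture and linearizing it with the functions above could look like (the screenPos interpolator is an assumption, filled in the vertex shader with o.screenPos = ComputeScreenPos(o.pos);):

    ```hlsl
    sampler2D _CameraDepthTexture;

    // v2f assumed to carry: float4 screenPos : TEXCOORD0;

    fixed4 frag (v2f i) : SV_Target
    {
        // Raw, non-linear z-buffer value
        float rawZ = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPos));
        float linear01 = Linear01Depth(rawZ);  // 0 at the eye, 1 at the far plane
        float eyeZ     = LinearEyeDepth(rawZ); // world-space units from the camera
        return fixed4(linear01.xxx, 1);
    }
    ```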
     
  8. kebrus

    kebrus

    Joined:
    Oct 10, 2011
    Posts:
    415
    Sorry if I'm about to sound completely clueless, but you got me confused, so my answer might be wrong. I've used those macros before in screen shaders successfully. I'm not trying to create a screen shader though. While it could be done the way macdude2 said, what I want to create is an object shader that renders the object's depth in the 0..1 range and uses it for some screen-dependent effect. For instance, let's say I want to fade an object out instead of clipping it when it gets close to the camera. I want to learn the right way of doing it first and then adapt it to do something else.

    So I'm not trying to access the camera depth texture or anything; I believe what I want is your very last question. COMPUTE_DEPTH_01 seems to be what I'm looking for?
     
  9. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Yes, you want COMPUTE_DEPTH_01 for the initial test of getting it to show something. For fading out when getting close to the camera, you'll probably want to use COMPUTE_EYEDEPTH instead, as that'll be the world-unit distance from the camera to the object, which makes adjusting the fade-out range much easier to deal with. You may even want to pass the view-space position to the fragment shader and calculate the length.
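    A minimal sketch of that fade idea using COMPUTE_EYEDEPTH (the _FadeStart and _FadeEnd properties are made-up names for illustration, not Unity built-ins, and the pass would need alpha blending enabled for the fade to be visible):

    ```hlsl
    float _FadeStart; // eye depth (world units) at which the object is fully faded out
    float _FadeEnd;   // eye depth at which the object is fully opaque

    struct v2f
    {
        float4 pos : SV_POSITION;
        float depth : TEXCOORD0;
    };

    v2f vert (appdata_base v)
    {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        COMPUTE_EYEDEPTH(o.depth); // view-space z in world units
        return o;
    }

    fixed4 frag (v2f i) : SV_Target
    {
        // 0 alpha near the camera, ramping to 1 at _FadeEnd
        float fade = saturate((i.depth - _FadeStart) / (_FadeEnd - _FadeStart));
        return fixed4(1, 1, 1, fade);
    }
    ```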
     
  10. kebrus

    kebrus

    Joined:
    Oct 10, 2011
    Posts:
    415
    Got it now! Thanks a bunch. :)

    Just one more thing, if you don't mind. What are you referring to in your last sentence? Do you mean calculating the depth in the fragment function instead of the vertex function?
     
  11. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,329
    Those two macros do basically the same thing, which is to transform the vertex position into view space ... that is to say, calculate its position relative to the camera's position and orientation.

    So UnityObjectToViewPos( v.vertex ).xyz gives you the position relative to the camera in the vertex shader, which you can pass to the pixel shader and take length( viewPos ) of to get the actual distance from the camera. This is nice because it gives you the true distance rather than just the depth.

    Here's a diagram of the difference between z depth and distance.
    zdepthvsdistance.png

    And here's another comparison of z depth and distance.


    Basically, if your fade distances are such that you notice things fading in and out as you turn the camera (much like what happens with Unity's fog), then using distance can give better results. However, note that both of those macros and the function above don't take the near clip plane into account, so if you have stuff fade out at "zero" it'll get clipped by the near clip plane before it completely fades out. With the z-depth approach this is easy to account for by just subtracting out the near clip range; it's a little harder with distance, as just subtracting the near clip from the distance will still cause clipping at the sides of the view.
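    The distance-based variant can be sketched like this (again, _FadeStart and _FadeEnd are hypothetical property names introduced only for this example):

    ```hlsl
    float _FadeStart; // distance at which the object is fully faded out
    float _FadeEnd;   // distance at which the object is fully opaque

    struct v2f
    {
        float4 pos : SV_POSITION;
        float3 viewPos : TEXCOORD0;
    };

    v2f vert (appdata_base v)
    {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        o.viewPos = UnityObjectToViewPos(v.vertex).xyz; // position relative to the camera
        return o;
    }

    fixed4 frag (v2f i) : SV_Target
    {
        float dist = length(i.viewPos); // true distance, stable as the camera rotates
        float fade = saturate((dist - _FadeStart) / (_FadeEnd - _FadeStart));
        return fixed4(1, 1, 1, fade);
    }
    ```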
     
  12. kebrus

    kebrus

    Joined:
    Oct 10, 2011
    Posts:
    415
    You just went to the top of my favorite-person list in this community. Thank you a ton for taking the time to explain it; not many people would do that.

    You even addressed something I didn't ask about and had been noticing in my experiments with the near clip plane.

    Really, thank you :)
     
  13. EdyH

    EdyH

    Joined:
    Jun 20, 2019
    Posts:
    13
    @bgolus thank you very much for that explanation.