Possible to write temporary depth that only applies to additional passes of single shader?

Discussion in 'Shaders' started by HadynTheHuman, Jul 22, 2017.

  1. HadynTheHuman

    HadynTheHuman

    Joined:
    Feb 15, 2017
    Posts:
    14
    Situation:
    I have something similar to the (very useful) transparent shader with depth writes from the docs, except that the object is also visible through walls (at reduced opacity).

    It seems to me that I require both the camera depth texture (to determine whether I'm in front of or behind a wall) as well as additional depth information about the object I'm rendering - even when it's behind a wall. In other words, I need two layers of depth; the camera layer, and the object layer. That second layer only needs to exist within the scope of the shader.

    I can think of some tricks to make this work with a second camera, but I'd rather do it all within the shader if possible. Any advice is welcome!

    Kind regards,
    Hadyn
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    It might be possible with multiple passes, and using the camera depth texture.

    1. Render the object using a depth "punch out" shader with ZTest Always that sets the depth buffer to the far plane.
    2. Render the object using a depth-only pass, like in the example from the docs.
    3. Render the transparent shader, comparing its depth with the camera depth texture and adjusting the opacity accordingly.
    4. Render the object again using a depth pass that outputs the camera depth texture to "reset" the depth. Depending on the use case you may want to only render depth closer to the camera than the object.
    The middle two passes are functionally no different from the example, apart from reading from the depth texture. The first and last passes will need to write directly to depth from the fragment shader. This will work best if you're not using MSAA, as the camera depth texture isn't anti-aliased, but it will still work.
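    The "punch out" pass in step 1 might look something like this (a sketch, not tested; the SV_Depth output and the reversed-Z handling are the important parts):

    ```
    // Sketch of step 1: force the depth buffer to the far plane wherever
    // the object covers the screen, so later passes can rebuild it.
    Pass
    {
        ZTest Always
        ZWrite On
        ColorMask 0 // depth only, no color output

        CGPROGRAM
        #pragma vertex vert
        #pragma fragment frag
        #include "UnityCG.cginc"

        float4 vert (float4 vertex : POSITION) : SV_POSITION
        {
            return UnityObjectToClipPos(vertex);
        }

        // Write the far plane directly to depth. With a conventional depth
        // buffer the far plane is 1.0; on reversed-Z platforms it is 0.0.
        float frag (float4 pos : SV_POSITION) : SV_Depth
        {
        #if defined(UNITY_REVERSED_Z)
            return 0.0;
        #else
            return 1.0;
        #endif
        }
        ENDCG
    }
    ```

    The last pass would use the same structure, but sample the camera depth texture and output that value from the fragment shader instead of a constant.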
     
    HadynTheHuman likes this.
  3. RyanFavale

    RyanFavale

    Joined:
    May 30, 2014
    Posts:
    28
    I did this with a 2nd camera, but the depth values don't line up for some reason. [ PLEASE HELP!!! ]

    1.) I draw my "wall" depth (in my case a water plane) to a RenderTexture with the same dimensions as the back buffer, using a camera identical to Camera.main (I compared all the matrices per frame to make sure the cameras are the same).
    2.) For debugging, I render a frag shader on my object that reads the value from the RenderTexture and compares it with the depth of my fragment. The depth read from the RenderTexture is further than expected: 1 meter of depth is spread across about 2 meters.

    @bgolus - I'm not sure your steps will work for me, since my "wall" is semi-transparent as well, so I don't want it to ever write depth to the main camera's depth buffer. Also, I'm not sure I follow how you're saying to reset the depth values. I think you're saying to render a temporary depth into the main camera for the wall and then compare it with the object's depth by reading the depth buffer, then restoring the depth somehow to the main camera.

    What I am trying to do is render semi-transparent fur either behind or in front of a semi-transparent water surface depending on depth. And it has to be done per pixel on the object.

    [Attached: image is showing the fur object depth on the left, and the water plane depth going out from the camera. You can see the depths do not match.]
     

    Attached Files:

    HadynTheHuman likes this.
  4. HadynTheHuman

    HadynTheHuman

    Joined:
    Feb 15, 2017
    Posts:
    14
    @bgolus Cool idea! The MSAA thing might hurt a bit (I'm in forward rendering for VR), but maybe I can counter some of the artefacting in the shader - and it's probably still workable either way. Thanks for the input :)
     
  5. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    A trick I used for drawing things behind walls in VR that's MSAA friendly is draw the object once normally, but with a stencil, then draw the object again with a shader using ZTest Greater and Stencil Comp NotEqual. It won't be able to do the nice transparency, but it is MSAA friendly as it never uses the camera depth texture, which is the only thing that prevents this stuff from working well.
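    A sketch of that two-pass stencil setup (the Ref value 2 is arbitrary, and the shading in each pass is left out):

    ```
    // First pass: draw the object normally and mark the stencil buffer.
    Pass
    {
        Stencil
        {
            Ref 2
            Comp Always
            Pass Replace
        }
        // ... normal shading ...
    }

    // Second pass: draw only where the object failed the depth test
    // (it's behind something) AND the stencil wasn't marked by the
    // first pass, i.e. the occluder is another object, not itself.
    Pass
    {
        ZTest Greater
        ZWrite Off
        Stencil
        {
            Ref 2
            Comp NotEqual
        }
        // ... "seen through walls" shading ...
    }
    ```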

    Supposedly some version of 2017 will add support for directly sampling multi-sample depth textures, but who knows, maybe it'll be the same version that has nested prefabs...

    What do your shaders look like for the water and the rock? What are you actually drawing on the screen? I suspect they're different because you're drawing different things.

    The camera depth texture stores clip-space depth values, which are non-linear; there are various macros for converting that non-linear depth into linear depth.
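    For example, in a fragment shader (assuming _CameraDepthTexture is bound and uv is the screen-space UV), the UnityCG.cginc helpers look like:

    ```
    // Raw non-linear depth, as stored in the depth buffer:
    float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv);
    // Linear depth in the 0..1 range (0 at the camera, 1 at the far plane):
    float linear01 = Linear01Depth(rawDepth);
    // Linear depth in world units from the camera:
    float eyeDepth = LinearEyeDepth(rawDepth);
    ```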
     
    HadynTheHuman likes this.
  6. RyanFavale

    RyanFavale

    Joined:
    May 30, 2014
    Posts:
    28
    I am drawing a boar standing in water. You can see in my new screenshot where his legs intersect the water. But their depths are quite different on neighboring pixels.

    I am writing the water depth to an RFloat RenderTexture.

    Debug Shaders:
    // Rock/FurObject:
    v2f vert(appdata v)
    {
        v2f OUT;
        OUT.pos = mul(UNITY_MATRIX_MVP, v.vertex);
        return OUT;
    }

    half4 frag(v2f IN) : SV_Target
    {
        float2 dUV = IN.pos.xy / _ScreenParams.xy;
        float waterDepth = tex2D(_WaterDepth, dUV).r; // Linear01Depth
        float d = IN.pos.z; // - waterDepth;
        return half4(d, d, d, 1);
    }

    // Water depth pass (surface shader):
    void vert(inout appdata v, out Input OUT)
    {
        UNITY_INITIALIZE_OUTPUT(Input, OUT);
        OUT.depth = mul(UNITY_MATRIX_MVP, v.vertex).z;
    }

    // ...and the frag then writes OUT.depth to the RFloat target.
     

    Attached Files:

    Last edited: Jul 26, 2017
  7. RyanFavale

    RyanFavale

    Joined:
    May 30, 2014
    Posts:
    28
    Wow, and I just swapped between a surface shader and a fragment shader to write the water depth and it completely changes the values for depth. :S
     
  8. RyanFavale

    RyanFavale

    Joined:
    May 30, 2014
    Posts:
    28
    Ok, I just realized I was getting the perspective divide in there. I think I got it working pretty close now. There's still a bit of difference, but for my case this is OK.
     

    Attached Files:

  9. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    Yep, for the debug shader IN.pos.z is actually the result of OUT.pos.z / OUT.pos.w, because the pos value the fragment shader receives is the post-rasterization position. Similarly IN.pos.xy is the pixel position, and IN.pos.w is, I think, 1.

    I believe the correct way to do it is to pass both the clip space z and w from the vertex shader and do the divide manually in the fragment shader.
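    Passing z and w through manually might look like this (a sketch; the struct and names are illustrative, not from the thread's actual shaders):

    ```
    // Carry clip-space z and w to the fragment shader and divide there,
    // reproducing the same non-linear value the depth buffer stores.
    struct v2f
    {
        float4 pos : SV_POSITION;
        float2 zw : TEXCOORD0; // clip-space z and w
    };

    v2f vert (appdata_base v)
    {
        v2f OUT;
        OUT.pos = mul(UNITY_MATRIX_MVP, v.vertex);
        OUT.zw = OUT.pos.zw;
        return OUT;
    }

    half4 frag (v2f IN) : SV_Target
    {
        // Manual perspective divide, matching what the rasterizer
        // does to SV_POSITION's z:
        float d = IN.zw.x / IN.zw.y;
        return half4(d, d, d, 1);
    }
    ```

    Writing this same divided value from both the water-depth pass and the debug pass should make the two depths comparable.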
     
  10. RyanFavale

    RyanFavale

    Joined:
    May 30, 2014
    Posts:
    28
    Thanks, ya, I just passed z without the w divide and used that.