
[SOLVED] Building screen space depth from world position

Discussion in 'Shaders' started by N3zix, Oct 26, 2016.

  1. N3zix

    Joined:
    Oct 22, 2014
    Posts:
    23
    In a fragment shader, I am trying to compute a screen space depth from a 3D world space position (also computed in the fragment shader). The goal is to compare this reconstructed depth to the corresponding pixel in the depth texture.
    I have to compute the 3D position from the fragment's screen position. For debugging, the computed 3D object is a simple sphere placed at (0,0,0) with a radius of 0.5.

    Code (CSharp):
    float4 frag(v2p i){
 
        float4 pos3DWorld = ComputeWorldSpacePos(...); //pos3DWorld.w is 1.0
        //_CustomP and _CustomV are the camera matrices from the camera used to generate the depth texture
        float4 pos3DProj = mul(_CustomP, mul(_CustomV, pos3DWorld));
        //Should I do that?
        pos3DProj.xyz /= pos3DProj.w;
 
        //Screen space pixel position [0,1]
        float4 pos3DScreen = ComputeScreenPos(pos3DProj);
 
        //Get depth value of the corresponding pixel in the depth texture
        float depthValue = Linear01Depth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(pos3DScreen)).r);
 
        //Depth should be in [0,1] range to be compared with the other depth value
        float posDepth = Linear01Depth(pos3DProj.z);
 
        if(depthValue < posDepth)
            return float4(0,0,0,1); //Color in black
        return float4(1,1,1,1); //Color in white
    }
    I don't know what is wrong, but I don't get the expected result. If I output depthValue, I should only see the sampled scene depth inside the screen-space projection of my 3D object, but it doesn't look right.

    The other way to do it would be to reconstruct the world-space position from the depth texture, as in https://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth, but I have tested the first approach more thoroughly and it seems simpler.
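
    A rough sketch of that alternative, for comparison: sample the depth texture, rebuild the stored position with the inverse of the custom view-projection matrix, and compare positions directly. Here _CustomInvVP is a hypothetical extra uniform (set from script, not part of my shader), and the sketch assumes a D3D-style [0,1] depth range and ignores the possible y-flip that ComputeScreenPos handles.

    Code (CSharp):
    //Depth of whatever the custom camera rendered at this pixel
    float rawDepth = tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(pos3DScreen)).r;
    //Rebuild a clip-space position for that pixel (on OpenGL the depth would need remapping to [-1,1])
    float2 ndcXY = pos3DScreen.xy / pos3DScreen.w * 2.0 - 1.0;
    float4 clipPos = float4(ndcXY, rawDepth, 1.0);
    //Back to world space with the inverse view-projection matrix (_CustomInvVP is hypothetical)
    float4 scenePosWorld = mul(_CustomInvVP, clipPos);
    scenePosWorld /= scenePosWorld.w; //perspective divide
    //scenePosWorld.xyz can now be compared against pos3DWorld.xyz directly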

    Is there something I'm doing wrong, or a step I missed?

    UPDATE: I think I should not do "pos3DProj.xyz /= pos3DProj.w;" because the perspective divide is handled later by tex2Dproj (via UNITY_PROJ_COORD), not by ComputeScreenPos. I still need that division to get the depth of the point in screen space: "Linear01Depth(pos3DProj.z / pos3DProj.w);"
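
    To illustrate where the perspective divide actually happens (just a sketch of the relevant lines, not the full shader):

    Code (CSharp):
    //ComputeScreenPos keeps the coordinate homogeneous; tex2Dproj does the divide by w while sampling
    float4 screenPos = ComputeScreenPos(pos3DProj); //no divide yet
    float sceneRaw = tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(screenPos)).r;
    //...essentially the same as dividing by hand:
    //float sceneRaw = tex2D(_CameraDepthTexture, screenPos.xy / screenPos.w).r;
    //The fragment's own depth still needs an explicit divide:
    float posDepth = Linear01Depth(pos3DProj.z / pos3DProj.w);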
     
    Last edited: Oct 26, 2016
  2. N3zix

    Joined:
    Oct 22, 2014
    Posts:
    23
    I solved my problem by doing this:

    Code (CSharp):
    float4 frag(v2p i){
        float4 pos3DWorld = ComputeWorldSpacePos(...); //Make sure that pos3DWorld.w is 1.0!!
 
        float4 pos3DProj = mul(_CustomP, mul(_CustomV, pos3DWorld));
        //Should I do that? = No! Don't normalize twice
        //pos3DProj.xyz /= pos3DProj.w;
 
        //Screen space pixel position [0,1]
        float4 pos3DScreen = ComputeScreenPos(pos3DProj);
 
        //Get depth value of the corresponding pixel in the depth texture
        float depthValue = Linear01Depth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(pos3DScreen)).r);
 
        //Depth should be in [0,1] range to be compared with the other depth value
        //Normalize here!
        float posDepth = Linear01Depth(pos3DProj.z / pos3DProj.w);
 
        if(depthValue < posDepth)
            return float4(0,0,0,1); //Color in black
        return float4(1,1,1,1); //Color in white
    }
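
    For the record, _CustomV and _CustomP have to match the camera that rendered the depth texture; from script that usually means passing camera.worldToCameraMatrix for the view matrix and running the projection matrix through GL.GetGPUProjectionMatrix so it matches the platform's clip-space conventions.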