
Direct3D -> OpenGL camera depth difference

Discussion in 'Shaders' started by Baste, Aug 18, 2017.

  1. Baste

    Baste

    Joined:
    Jan 24, 2013
    Posts:
    6,333
    Hi!

    I'm working on getting a shader effect to work on OpenGL platforms, and I'm stumped. The shader is for a fadeout plane for Infinite Depth Pits Of Death. It fades out based on the difference between the camera-to-plane distance and the scene depth sampled from the camera's depth texture, which gives a nice, fog-like effect.

    It's not giving good results on OpenGL, though - I'm using OpenGLES3. Here are screenshots showing the effect, and the difference between the platforms:

    [Attached screenshot: D3DOpenGLDiff.png]

    Here's the shader code:

    Code (csharp):

    Shader "Custom/HeightFogShader" {
        Properties {
            _Color ("Main Color", Color) = (1,1,1,1)
            _MainTex ("Base (RGB) Trans (A)", 2D) = "white" {}

            _DistanceMultiplier("Distance multiplier", Float) = 1
        }
        SubShader {
            Tags {"Queue"="Transparent+2" "IgnoreProjector"="True" "RenderType"="Transparent"}

            Pass{

                Blend SrcAlpha OneMinusSrcAlpha
                ZWrite Off
                Cull Off

                CGPROGRAM
                #pragma fragment frag
                #pragma vertex vert
                #include "UnityCG.cginc"

                fixed4 _Color;
                uniform sampler2D _CameraDepthTexture; //Depth Texture
                uniform sampler2D _MainTex;
                uniform float _DistanceMultiplier;
                float4 _MainTex_ST;

                struct v2f{
                    float2 uv : TEXCOORD0;
                    float4 pos : SV_POSITION;
                    float4 projPos : TEXCOORD1; //Screen position of pos
                };

                v2f vert(appdata_base v){
                    v2f o;
                    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                    o.projPos = ComputeScreenPos(o.pos);
                    o.uv = TRANSFORM_TEX (v.texcoord, _MainTex);
                    return o;
                }

                half4 frag (v2f i) : SV_Target {
                    float sceneZ = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, i.projPos));
                    float projZ = i.projPos.z;

                    half4 c = tex2D(_MainTex, i.uv);
                    c.r *= _Color.r;
                    c.g *= _Color.g;
                    c.b *= _Color.b;

                    float distVal = (sceneZ - projZ) * _DistanceMultiplier * 1.5;
                    c.a *= _Color.a * distVal * c.a - 0.6f;

                    return c;
                }
                ENDCG
            }
        }

        Fallback "Transparent/VertexLit"
    }

    From doing some debugging, it seems like

    Code (csharp):
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.projPos = ComputeScreenPos(o.pos);
    gives different values on the platforms. So if I create a debug color based on i.projPos.z, that changes. On the other hand,

    Code (csharp):
    tex2Dproj(_CameraDepthTexture, i.projPos)
    gives the same result on both platforms. So somehow sampling the depth texture takes the above difference into account. That mismatch is what causes the discrepancy between the platforms, but I can't figure out how to counteract it.
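
    (If it matters, my understanding is that tex2Dproj just divides the coordinate by its w component before sampling, i.e. something roughly like the sketch below - which would explain why the xy part ends up consistent while the raw z doesn't:)

    Code (csharp):
    // roughly what the tex2Dproj call amounts to, as far as I understand it -
    // the divide by w is what ComputeScreenPos sets the xy components up for
    float rawDepth = tex2D(_CameraDepthTexture, i.projPos.xy / i.projPos.w).r;
    float sceneZ = LinearEyeDepth(rawDepth);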


    Can anyone explain what's going on and how I should go about getting the same result in OpenGL?
     
  2. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,339
    OpenGL and DirectX use different projection matrices. ComputeScreenPos() adjusts the x and y components in a way that is consistent across platforms, but it doesn't touch the z or w components of the float4 passed to it. If you need a depth value that matches the output of LinearEyeDepth you need to use the COMPUTE_EYEDEPTH macro.

    o.projPos = ComputeScreenPos(o.pos);
    COMPUTE_EYEDEPTH(o.projPos.z);


    See the built-in particle shaders for further reference if you want, but the above should solve your issue.
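
    In context it would look something like this (just sketching against your shader with the same names, untested):

    Code (csharp):
    v2f vert(appdata_base v){
        v2f o;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
        o.projPos = ComputeScreenPos(o.pos);
        // overwrite z with a linear eye depth so it matches what
        // LinearEyeDepth() gives you in the fragment shader on every platform
        COMPUTE_EYEDEPTH(o.projPos.z);
        o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
        return o;
    }

    The fragment shader can keep using i.projPos.z as projZ, since it's now a linear eye depth just like sceneZ.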

    Also, in case you're wondering "why did Unity make them different?": this isn't a Unity "thing", the two APIs chose to implement clip space differently long before Unity existed. OpenGL predates DirectX by about 4 years, and DirectX's clip space is considered by many to be the "correct" way, so much so that OpenGL 4 and Vulkan offer extensions / options to emulate DirectX's clipping planes. The short version is that OpenGL's projection space Z runs from -1 to 1, where DirectX's runs from 0 to 1. OpenGL seems "cleaner" since X and Y are also in -1 to 1 for both APIs, but because of floating point precision a -1 to 1 depth range puts the highest precision midway between the near and far planes rather than close to the camera, where you usually want it.
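
    To make the ranges concrete, here's a rough sketch (my own illustration, not something from UnityCG.cginc):

    Code (csharp):
    float4 clipPos = mul(UNITY_MATRIX_MVP, v.vertex);
    float ndcZ = clipPos.z / clipPos.w; // what the rasterizer sees after the perspective divide
    // OpenGL-style: ndcZ goes from -1 at the near plane to +1 at the far plane,
    // so it needs remapping before comparing against a 0..1 depth buffer value
    float glDepth01 = ndcZ * 0.5 + 0.5;
    // DirectX-style: ndcZ is already in the 0..1 range
    // (and on recent Unity versions it's also reversed for the precision reasons above)
    float d3dDepth01 = ndcZ;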
     
    Last edited: Aug 18, 2017
  3. Baste

    Baste

    Joined:
    Jan 24, 2013
    Posts:
    6,333
    Hey, thanks a lot for the help!

    I had an inkling that this was down to -1 to 1 vs. 0 to 1, but I had no idea how to fix it. You put me on the right track!
    After adding the COMPUTE_EYEDEPTH macro, I had to remove the -0.6f from the final line of the frag function. It seems like that was there to compensate for... something relating to this.
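
    For reference, the change in the frag function was roughly this (same names as the shader above):

    Code (csharp):
    // before - the -0.6f was (apparently) compensating for projPos.z not being a linear depth:
    // c.a *= _Color.a * distVal * c.a - 0.6f;
    // after adding COMPUTE_EYEDEPTH in the vertex shader, the offset isn't needed:
    c.a *= _Color.a * distVal * c.a;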

    It looks the same on both platforms now. Again, thanks a bunch!


    Man, shaders are hard. Imagine if application code were like this: "Well, on consoles, the Z-direction in the scene is reversed, so you have to do this:"

    Code (csharp):
    #if UNITY_STANDALONE
    transform.Translate(direction * Time.deltaTime);
    #else
    transform.Translate(new Vector3(direction.x, direction.y, -direction.z) * Time.deltaTime);
    #endif
    Nobody would accept it! But for some reason shader languages push handling platform differences onto the user. Probably for performance reasons, but there has to be a better way to do this :p
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,339
    It is. Unity just hides most of that weirdness behind their APIs. If you try to do stuff outside of Unity (in a native plugin) or just go beyond the built-in APIs (like any kind of manual file handling), you're going to be using a lot of #if platform switches.

    Funny you should mention that, because the view depth is reversed on consoles. Again, Unity's code handles all of that for you so you don't have to think about it. For the most part Unity's shader macros try to make it all work without you having to worry about it too. In shaders there are separate UNITY_MATRIX_V and unity_WorldToCamera matrices; one matches Unity's scene space and the other matches what the rendering APIs actually want.
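
    As a quick sketch of that last point (assuming some world space position called worldPos):

    Code (csharp):
    // UNITY_MATRIX_V is the view matrix the graphics API actually renders with,
    // so the camera looks down -Z: points in front of the camera get a negative z
    float3 viewSpacePos = mul(UNITY_MATRIX_V, float4(worldPos, 1.0)).xyz;
    // unity_WorldToCamera follows Unity's scene space convention instead,
    // so the camera looks down +Z: the same points get a positive z here
    float3 cameraSpacePos = mul(unity_WorldToCamera, float4(worldPos, 1.0)).xyz;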