How do I get increased depth data from shader using ReadPixels() or something else?

Discussion in 'Shaders' started by BlackPete, Nov 29, 2016.

  1. BlackPete

    Joined:
    Nov 16, 2016
    Posts:
    970
    I'm looking at implementing a basic collision detection system using the depth data of a render texture. I've gotten this working with a simple grayscale depth shader and a render texture whose color format is set to ARGB32. The shader is as follows:

    Code (csharp):
    Shader "Custom/DepthGrayscale"
    {
      SubShader
      {
        Tags{ "RenderType" = "Opaque" }

        Pass
        {
          CGPROGRAM

          #pragma target 5.0
          #pragma vertex vert
          #pragma fragment frag
          #include "UnityCG.cginc"

          uniform sampler2D _CameraDepthTexture;

          struct v2f
          {
            float4 pos : SV_POSITION;
            float4 scrPos : TEXCOORD1;
          };

          v2f vert(appdata_base v)
          {
            v2f o;
            o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
            o.scrPos = ComputeScreenPos(o.pos);

            return o;
          }

          half4 frag(v2f i) : COLOR
          {
            float depthValue = Linear01Depth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.scrPos)).r);
            half4 depth;

            depth.r = depthValue;
            depth.g = depthValue;
            depth.b = depthValue;
            depth.a = 1;

            return depth;
          }

          ENDCG
        }
      }
    }

    I then use Texture2D.ReadPixels() to get at the depth data, using the R channel for depth. That's all well and good.
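
    For reference, the readback looks roughly like this (a simplified sketch rather than my exact code; the camera setup and texture sizes are trimmed):

    Code (csharp):
    // Render the camera into an ARGB32 target, copy it back with ReadPixels(),
    // then use the R channel as the depth value.
    RenderTexture rt = new RenderTexture(width, height, 24, RenderTextureFormat.ARGB32);
    _camera.targetTexture = rt;
    _camera.Render();

    RenderTexture.active = rt;
    Texture2D tex = new Texture2D(width, height, TextureFormat.ARGB32, false);
    tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    tex.Apply();
    RenderTexture.active = null;

    Color32[] pixels = tex.GetPixels32();
    byte depth = pixels[0].r; // 0..255, hence only 256 depth steps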

    However, that only gives me a granularity of 256 depth steps (8 bits per channel). I'm looking to get at the full 16-bit or 24-bit depth data. I've tried changing the render texture's color format to Depth, like so:

    Code (csharp):
    _camera.targetTexture = new RenderTexture(1, 1, 24, RenderTextureFormat.Depth, RenderTextureReadWrite.Linear);
    Now the shader returns nothing useful. Even more worryingly, if I change the frag function to return hardcoded values, I still can't get at those values through ReadPixels(): it always gives me a constant value of 205 in every ARGB channel, for some reason.

    Ultimately, I want to pack 16-bit depth and 16-bit normal data into a 32-bit integer and pass that back to the CPU using ReadPixels()... or some other method, if one is available.
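
    Conceptually, the fragment shader side of what I'm after would be something like this (just a sketch of the packing idea, not code I actually have working):

    Code (csharp):
    // Sketch: split a 16-bit depth value across two 8-bit channels of an
    // ARGB32 target; the remaining channels would hold the packed normal.
    half4 frag(v2f i) : COLOR
    {
      float depthValue = Linear01Depth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.scrPos)).r);

      float d16 = floor(depthValue * 65535.0); // 16-bit fixed point
      half4 packedDepth;
      packedDepth.r = floor(d16 / 256.0) / 255.0; // high byte
      packedDepth.g = fmod(d16, 256.0) / 255.0;   // low byte
      packedDepth.b = 0; // would hold packed normal data
      packedDepth.a = 1;
      return packedDepth;
    }

    The CPU side would then reconstruct it from the ReadPixels() result as depth16 = r * 256 + g, but that still means rebuilding the value from an ARGB32 target rather than reading the actual depth buffer.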

    Documentation for this seems to be really sparse, and I've tried whatever code I could find, but no luck. The Depth color format simply doesn't seem to be well documented at all.

    Does anyone know how I can get 16-bit or 24-bit depth data back to the script?
     
  2. BlackPete

    Joined:
    Nov 16, 2016
    Posts:
    970
    Also, to pre-empt someone posting a link to the documentation here: https://docs.unity3d.com/Manual/SL-DepthTextures.html

    I tried that and got no data. I dug into UnityCG.cginc and found this gem:

    Code (csharp):
    // Legacy; used to do something on platforms that had to emulate depth textures manually. Now all platforms have native depth textures.
    #define UNITY_TRANSFER_DEPTH(oo)
    // Legacy; used to do something on platforms that had to emulate depth textures manually. Now all platforms have native depth textures.
    #define UNITY_OUTPUT_DEPTH(i) return 0
    So that's no good.