What are the values passed in _ZBufferParams? My understanding of how to linearize and then expand the OpenGL z-buffer seems to be sub-par. I can't use the DECODE_EYEDEPTH helper because my rendered depth buffer has different clipping planes from the actual scene camera, so I'm rebuilding the functionality myself. This was simple enough in DirectX: all I needed to do was pass the camera's far clip plane to the shader and multiply it by the pixel's depth. However, OpenGL seems to be a bit trickier.
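For context, here's a minimal sketch of the D3D-style approach I mean. The names _MyDepthTex and _MyFarClip are hypothetical: the custom-rendered depth texture and the far clip plane of the camera that rendered it, passed in from script.
Code (csharp):
// Sketch only. _MyDepthTex / _MyFarClip are made-up names for the custom depth
// texture and its camera's far clip plane.
#include "UnityCG.cginc"

sampler2D _MyDepthTex;
float _MyFarClip;

float4 frag (v2f_img i) : COLOR
{
    // On D3D the depth texture holds linear 0..1 depth over the far plane range,
    // so eye-space depth is just a multiply by the far clip distance.
    float sceneDepth = tex2D(_MyDepthTex, i.uv).r * _MyFarClip;
    return float4(sceneDepth, sceneDepth, sceneDepth, 1);
}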
This text is helpful: Linearize Depth by Humus. But values in depth component textures on OpenGL are actually in the 0..1 range (just like the depth buffer in D3D), so the D3D math actually has to be used. Here's a paste from the Unity code that sets _ZBufferParams:
Code (csharp):
double zc0, zc1;
// OpenGL would be this:
// zc0 = (1.0 - m_FarClip / m_NearClip) / 2.0;
// zc1 = (1.0 + m_FarClip / m_NearClip) / 2.0;
// D3D is this:
zc0 = 1.0 - m_FarClip / m_NearClip;
zc1 = m_FarClip / m_NearClip;
// now set _ZBufferParams with (zc0, zc1, zc0/m_FarClip, zc1/m_FarClip)
Thanks! Using the D3D math that you provided worked like a charm in OpenGL. I'm still not understanding why, though. In practice in Unity it seems to come down to the following:
Code (csharp):
// D3D:
float sceneDepth = z * farClip;
// OpenGL:
float sceneDepth = 1.0 / (zc0/farClip * z + zc1/farClip);
But why does OpenGL use this and not the OpenGL math you provided? I'm just trying to make sense of the solution. Thanks again!
Yeah. Currently on D3D, "depth" textures are single-channel floating point textures, and we output linear 0..1 depth over the far plane range when rendering into them. On OpenGL, the "depth" texture is much like a depth buffer, i.e. it has a non-linear range. Usually the depth buffer range in OpenGL is -1..1, so going by Humus' text, the OpenGL math would have to be used. However, OpenGL depth textures seem to actually have a 0..1 range, i.e. just like the depth buffer in D3D. It's not explicitly written in the specification, but I found that out by trial and error. So since it matches D3D's depth buffer range, the D3D math has to be used.
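To spell out where that formula comes from, here's a sketch of the algebra (my own working, using the same symbols as the snippet above):
Code (csharp):
// Starting from the OpenGL expression, with zc0 = 1 - far/near and zc1 = far/near:
//   1.0 / (zc0/far * z + zc1/far)
// = far / (zc0 * z + zc1)
// = far / ((1 - far/near) * z + far/near)     // multiply top and bottom by near
// = far*near / (far - (far - near) * z)
//
// A 0..1 (D3D-style) depth buffer stores  z = far * (eye - near) / (eye * (far - near)),
// and solving that for eye gives          eye = far*near / (far - (far - near) * z),
// i.e. the same thing. The expression is just the inverted 0..1 projection, so it
// recovers eye-space depth; that's why the "D3D" constants also work for GL depth textures.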
Does this mean that if you scale and bias your Z value (z*2 - 1) as retrieved from the depth texture in OpenGL, you could use LinearEyeDepth() from UnityCG? Edit: Never mind. I just had a conversation with Shawn and realized that of course, the UnityCG version of the function "just works" because it's expecting the Z value in the [0,1] range.
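In case it helps anyone else, here's a minimal sketch of using the built-in helper on the camera's depth texture (assuming the standard UnityCG.cginc helpers and a camera with depth texture rendering enabled):
Code (csharp):
#include "UnityCG.cginc"

sampler2D _CameraDepthTexture;

float4 frag (v2f_img i) : COLOR
{
    // Raw depth is already in the 0..1 range on both D3D and GL depth textures,
    // so no *2-1 scale/bias is needed before linearizing.
    float rawDepth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
    float eyeDepth = LinearEyeDepth(rawDepth);   // distance from the camera, in world units
    return float4(eyeDepth, eyeDepth, eyeDepth, 1);
}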
Hey Aras, could you see about popping that on this page? http://docs.unity3d.com/Documentation/Components/SL-BuiltinValues.html It would be put under Various (at the bottom), as:
float4 _ZBufferParams (0-1 range):
x is 1.0 - (camera's far plane) / (camera's near plane)
y is (camera's far plane) / (camera's near plane)
z is x / (camera's far plane)
w is y / (camera's far plane)
Besides this being a 2-year necro, this is the current content of shadervariables.cginc:
Code (CSharp):
// Values used to linearize the Z buffer (http://www.humus.name/temp/Linearize%20depth.txt)
// x = 1-far/near
// y = far/near
// z = x/far
// w = y/far
uniform float4 _ZBufferParams;
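And for reference, the helpers in UnityCG.cginc that consume those values look like this (quoting from memory, so double-check against your Unity version):
Code (CSharp):
// Z buffer to linear 0..1 depth (0 at eye, 1 at far plane)
inline float Linear01Depth( float z )
{
    return 1.0 / (_ZBufferParams.x * z + _ZBufferParams.y);
}
// Z buffer to linear depth, in eye-space units
inline float LinearEyeDepth( float z )
{
    return 1.0 / (_ZBufferParams.z * z + _ZBufferParams.w);
}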