
Screen space raytracing and depth buffer sample filtering.

Discussion in 'Shaders' started by brn, Nov 5, 2012.

  1. brn (Joined: Feb 8, 2011, Posts: 320)
    I've been working on the core functionality of a screen-space ray tracing shader for reflections and global illumination. Crude, but I'm pretty happy with the results so far. Unfortunately, I'm running into a depth buffer sampling issue when comparing the reflection ray's depth with the screen pixel's depth.

    What I think is happening is this: when the sampling screen-space UV doesn't line up cleanly with a pixel and instead falls between a group of them, I get an averaged depth for that sample (as expected). What would be very handy is to be able to use point filtering so I don't get false collisions. The logic to detect a collision between the pixel depth and the marching ray is pretty simple; adding complexity would be a real performance hit, as would setting up UVs that always land exactly on a pixel.
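    For context, the loop is along these lines (a simplified sketch with placeholder names, not the actual shader; ViewToScreenUV and DecodeDepth stand in for the projection and depth-decode steps):

    Code (csharp):
        // March the reflection ray and compare its depth against the scene depth.
        float3 rayPos = viewPos;                      // view-space start position
        for (int i = 0; i < NUM_STEPS; i++)
        {
            rayPos += rayStep;                        // advance along the reflection ray
            float2 uv = ViewToScreenUV (rayPos);      // placeholder: project to screen-space UV
            float sceneDepth = DecodeDepth (tex2D (_CameraDepthTexture, uv));
            // Bilinear filtering averages neighbouring depths in this sample,
            // which is what produces the false collisions at depth edges:
            if (rayPos.z > sceneDepth)
                return tex2D (_MainTex, uv);          // hit: use this pixel as the reflection
        }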

    Hoping someone might come to the rescue with an "It's here in the docs". I've looked everywhere.



    Cheers
    Brn
     
  2. Martin-Kraus (Joined: Feb 18, 2011, Posts: 617)
    If you cannot activate point sampling, you could sample at those texture coordinates that correspond to "pure" texel colors. See Figure 8.3 on page 188 of the OpenGL 4.3 specification (http://www.opengl.org/registry/doc/glspec43.core.20120806.pdf): in OpenGL you get pure texel colors by sampling at the center of a texel. For n texels, the coordinate of the i-th texel (i = 0 ... n-1) is (i + 0.5)/n. (I'm not sure, but Direct3D might work similarly.)
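    In shader code, snapping a coordinate to the nearest texel center would be something like this (just a sketch; texSize is assumed to hold the texture's width and height in texels):

    Code (csharp):
        // i = floor(uv * n); pure-texel coordinate = (i + 0.5) / n
        float2 snapToTexelCenter (float2 uv, float2 texSize)
        {
            return (floor (uv * texSize) + 0.5) / texSize;
        }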
     
  3. brn (Joined: Feb 8, 2011, Posts: 320)
    Hi Martin,

    I might just have to go down that road. Thanks for the link. It's frustrating when, technically, a cheaper option could have worked. Every extra calculation I put into the ray-march loop means fewer samples :( Might have to move it over to DX11.
     
  4. Martin-Kraus (Joined: Feb 18, 2011, Posts: 617)
  5. brn (Joined: Feb 8, 2011, Posts: 320)
    I was just trying to keep it simple and avoid using render textures. I was hoping to put something together that didn't rely on scripts or on the end user to manage the rendering of the scene. I'll see if I can set the filter mode of the depth texture by treating it as a render texture in a moment. Interestingly, the properties of the depth texture are set at the same time as the creation of the render texture. http://docs.unity3d.com/Documentation/ScriptReference/RenderTexture.RenderTexture.html
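    If that works, setting the filter mode from a script should just be something like this (untested sketch; assumes you actually have a reference to the depth texture as a RenderTexture):

    Code (csharp):
        using UnityEngine;

        public class PointFilteredDepth : MonoBehaviour
        {
            public RenderTexture depthRT;   // hypothetical: the RT holding depth

            void Start ()
            {
                depthRT.filterMode = FilterMode.Point;   // no bilinear averaging
            }
        }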
     
  6. brn (Joined: Feb 8, 2011, Posts: 320)
    I've been working on the technique some more and have made good progress. The shader in the vid below uses 16 steps/samples per ray and 1 ray per pixel. Performance is much better than I expected. Optimisation and integration with the Hardsurface shaders will be my next step.

     
  7. Lars-Steenhoff (Joined: Aug 7, 2007, Posts: 3,527)
    Really nice effects! I'll put them on my to-buy list.
     
  8. pvloon (Joined: Oct 5, 2011, Posts: 591)
    That looks so awesome!

    I've sent you a PM :)
     
  9. alleycatsphinx (Joined: Jan 25, 2012, Posts: 57)
    Badass dude. I'd love to hear more details about your approach.
     
  10. brn (Joined: Feb 8, 2011, Posts: 320)
    Another WIP vid, implemented as an image effect this time.

     
  11. castor76 (Joined: Dec 5, 2011, Posts: 2,517)
    Brn, I am trying to do this myself as a learning experience, and I managed to get a basic version running. However, when I try to make it a post-processing effect, I can't get my view-space position (plus reflection vector) into screen space.

    I thought this would be as simple as multiplying by UNITY_MATRIX_P and then dividing by .w, but for some reason that doesn't seem to work in a post-process shader.

    http://forum.unity3d.com/threads/205549-Screen-Space-Local-Reflection?p=1389788#post1389788

    Any ideas or comments welcome.

    Thanks.
     
  12. Dolkar (Joined: Jun 8, 2013, Posts: 576)
    UNITY_MATRIX_P in a post-process shader is an identity matrix. You'll have to pass camera.projectionMatrix yourself as another shader variable.
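    Something like this in the image effect script (a minimal sketch; _ProjMatrix is just whatever name your shader expects the matrix under):

    Code (csharp):
        using UnityEngine;

        [RequireComponent (typeof (Camera))]
        public class PassProjMatrix : MonoBehaviour
        {
            public Material ssrMaterial;   // your post-process material

            void OnRenderImage (RenderTexture src, RenderTexture dst)
            {
                // UNITY_MATRIX_P is identity for the full-screen quad,
                // so hand the shader the camera's real projection matrix:
                ssrMaterial.SetMatrix ("_ProjMatrix", GetComponent<Camera> ().projectionMatrix);
                Graphics.Blit (src, dst, ssrMaterial);
            }
        }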
     
  13. castor76 (Joined: Dec 5, 2011, Posts: 2,517)
    Yeah, I thought so...

    But I have already tried that, and it's still not working right. Dividing by .w after the projection-matrix multiply still gives the wrong result:

    float4 vspPosReflectT = mul (_ProjMatrix, viewPos + float4(vspReflect, 1));

    I am passing the _ProjMatrix matrix from the camera (as camera.projectionMatrix).
     
  14. Dolkar (Joined: Jun 8, 2013, Posts: 576)
    Try:
    float4 vspPosReflectT = mul (_ProjMatrix, float4(viewPos.xyz + vspReflect, 1.0));
    and then divide.
     
  15. castor76 (Joined: Dec 5, 2011, Posts: 2,517)
    Hmm, I have tried that already.

    But I have found that:

    UNITY_MATRIX_P != camera.projectionMatrix

    Basically, they are not equal matrices. I thought they were, but the results suggest otherwise. Maybe I am wrong here, but is there more to it than just getting camera.projectionMatrix and passing that in place of UNITY_MATRIX_P?

    I have even tried this in a working non-post-processing shader, and it still looks to me like UNITY_MATRIX_P != camera.projectionMatrix.

    ???
     
  16. castor76 (Joined: Dec 5, 2011, Posts: 2,517)
    I think I have boiled the problem down to the following:

    1. UNITY_MATRIX_P != camera.projectionMatrix, mentioned above

    2. Reconstructing the view-space position using the z value read from the depth buffer and the screen position

    The function I use to reconstruct the view-space position is the one from Unity's ambient obscurance shader. My initial guess is that my application may be slightly different as far as the usage of Z values goes.

    Also, the extra matrices the ambient obscurance shader uses (like _ProjInfo, _ProjMatrix, _ProjectionInv, which I set too) are there and should be working for the obscurance shader, so a straightforward reuse may not fit my case.

    So, two issues:

    1. getting the correct view-space position
    2. getting the correct projection matrix

    all in a post-processing shader.

    The code for setting the extra matrices is as follows:

    Code (csharp):
        Matrix4x4 P = this.camera.projectionMatrix;

        Vector4 projInfo = new Vector4
            ((-2.0f / (Screen.width  * P[0])),
             (-2.0f / (Screen.height * P[5])),
             ((1.0f - P[2]) / P[0]),
             ((1.0f + P[6]) / P[5]));

        Shader.SetGlobalVector ("_ProjInfo", projInfo);
        Shader.SetGlobalMatrix ("_ProjMatrix", P);
        Shader.SetGlobalMatrix ("_ProjectionInv", P.inverse);
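    For reference, the matching shader-side reconstruction (the shape the obscurance shader uses, as far as I can tell; ssP is the pixel coordinate and z the positive view-space depth) is roughly:

    Code (csharp):
        // View-space position from pixel coordinate + depth, using _ProjInfo above:
        float3 reconstructViewPos (float2 ssP, float z, float4 projInfo)
        {
            return float3 ((ssP * projInfo.xy + projInfo.zw) * z, z);
        }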
     
    Last edited: Oct 18, 2013
  17. Dolkar (Joined: Jun 8, 2013, Posts: 576)
    Hmm... the z component of the projected vector might be negated (or the z of the input one)... that's about the only difference I can think of.

    I had to do the same when I was passing my custom IT_MV matrix.
     
    Last edited: Oct 18, 2013
  18. castor76 (Joined: Dec 5, 2011, Posts: 2,517)
    If you are talking about:

    viewPos.z = -z;

    then yes, I did change it to -z just to experiment. Neither (+ or -) quite works for me.
     
  19. castor76 (Joined: Dec 5, 2011, Posts: 2,517)
    Ok, I have done some experimenting. I used both UNITY_MATRIX_P and _ProjMatrix (from the camera script), did the perspective division for each, then took the difference and magnified it to visualize it. Clearly, the differences are too great to call them minor computational error.

    Code (csharp):
        float4 vspPosReflectT  = mul (UNITY_MATRIX_P, float4(viewPos + vspReflect, 1));
        float3 vspPosReflect   = vspPosReflectT.xyz  / vspPosReflectT.w;

        float4 vspPosReflectT2 = mul (_ProjMatrix,    float4(viewPos + vspReflect, 1));
        float3 vspPosReflect2  = vspPosReflectT2.xyz / vspPosReflectT2.w;

        return float4( abs(vspPosReflect - vspPosReflect2) * 100, 0);
    [Attached screenshot: ssr02.JPG]

    The blue component alone means something is not quite the same in the z components... :eek:

    Edit:

    I "think" the difference is not in the perspective division, since that seems to work for the x and y components; the resulting z value after the mul with the matrix is where the issue lies. This leads me to think... maybe the coordinate system of the Unity camera (in script) is different from the UNITY_MATRIX_P used in the shader???
     
    Last edited: Oct 19, 2013
  20. castor76 (Joined: Dec 5, 2011, Posts: 2,517)
    Ok! I think I've made some progress with the projection matrix issue.

    It turns out it was indeed a difference in the way z is handled. I added the following to the script code, and now my projection matrix works correctly.

    I got the info from this search result:

    http://answers.unity3d.com/questions/12713/how-do-i-reproduce-the-mvp-matrix.html

    Code (csharp):
        bool d3d = SystemInfo.graphicsDeviceVersion.IndexOf("Direct3D") > -1;
        if (d3d)
        {
            // Scale and bias from OpenGL -> D3D depth range
            for (int i = 0; i < 4; i++)
            {
                P[2,i] = P[2,i]*0.5f + P[3,i]*0.5f;
            }
        }
    Now if only I can reconstruct the view-space position correctly...
     
    Last edited: Oct 19, 2013
  21. WGermany (Joined: Jun 27, 2013, Posts: 78)
    This is great. I haven't dug into any code yet, but I have been researching this for a few weeks now. I hope things go well, and if I need help, will you be there? :) I always seem to struggle with the CPU portion when integrating complex effects; that's probably where I will get stuck. If it's not too much to ask, could you keep explaining any issues and how you resolved them as you go along, so it can be a reference for me when I start getting into the code?
     
  22. castor76 (Joined: Dec 5, 2011, Posts: 2,517)