
Skybox render queue value

Discussion in 'Shaders' started by veganders, Mar 21, 2011.

  1. veganders

    veganders

    Joined:
    Mar 8, 2011
    Posts:
    5
    Hi,

    I'm currently trying to get a skybox to be seen through a window. What I do is basically attach a Skybox component to the camera and then render the window with the following shader, taken from the Unity documentation.

    Code (csharp):

    Shader "ComplexInvisible" {
        SubShader {
            // Draw ourselves after all opaque geometry
            Tags { "Queue" = "Transparent" }

            // Grab the screen behind the object into _GrabTexture, using default values
            GrabPass { }

            // Render the object with the texture generated above.
            Pass {
                SetTexture [_GrabTexture] { combine texture }
            }
        }
    }

    The skybox documentation (http://unity3d.com/support/documentation/Components/class-Skybox.html) states that it is rendered before everything else, and the skybox shader code also sets the render queue value to Background.

    This would lead me to think that the above shader would simply take the skybox already rendered and display it on the window primitives.

    This, however, does not seem to be the case. Without the skybox the fill color is shown in the window, but when the skybox is added the color in the window seems to be smeared out all over it, as if the buffer isn't being cleared.
    The skybox is rendered correctly where there is no geometry, though. The camera documentation, on the other hand, states that

    which leads me to think that the skybox really is rendered last, contrary to what the skybox documentation states, and that it tests against the depth buffer, which would also explain why the above shader would block the skybox.

    Could anyone shed some light on why the above approach does not work, or have any idea on how to solve it?
     
  2. amartinez1660

    amartinez1660

    Joined:
    Mar 4, 2011
    Posts:
    2
    Did you manage to fix your issue?
    I'm really interested to finally know how it all works out... the render queues and all that.
    I recently started toying around with Unity and it's an awesome tool, I mean really, really awesome, but to be honest it's sometimes hard to find out how things work, and the documentation sometimes misses explaining tiny but very important bits.

    Skyboxes should indeed be drawn before alphablended geometry but AFTER opaque geometry or there will be unnecessary screen overdraw.

    The ideal scenario would be:
    1. Clear the device (cheap).
    2. Draw opaque and alpha-tested (not blended) geometry.
    3. If there is a skybox, draw a full-screen quad with its vertices' Z position at 1, i.e. (x, y, 1) on the far clip plane, so parts of the screen that were already drawn are automatically culled, with no need to change the device depth-stencil state.
    4. Transparent geometry.
    5. Overlays.

    So you're saying it is working like this for you: clear -> geometry -> transparent -> overlay -> skybox (with a garbled bonus where the invisible geometry is)?

    Scripting in Unity being so powerful, I guess it is entirely possible to override the skybox system Unity uses and use your own shader with a render queue of something like "Transparent-1"; a rough sketch of what I mean is below.
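    For illustration, here's a minimal fixed-function sketch of that idea in the same style as the shader above, assuming a large textured sky sphere (or inverted cube) parented to the camera. The shader name and the _MainTex property are made up for this example, not anything built into Unity:

    Code (csharp):

    Shader "Custom/ManualSkySphere" {
        Properties {
            _MainTex ("Sky Texture", 2D) = "white" {}
        }
        SubShader {
            // Drawn after all opaque geometry but before transparent objects,
            // so pixels already covered by opaque geometry fail the depth test.
            Tags { "Queue" = "Transparent-1" }
            Pass {
                ZWrite Off   // the sky should never occlude anything
                Cull Front   // we're inside the sphere, so draw its back faces
                SetTexture [_MainTex] { combine texture }
            }
        }
    }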
     
  3. jjxtra

    jjxtra

    Joined:
    Aug 30, 2013
    Posts:
    1,464
    Unity's skybox render order appears quite bugged. The shader says the queue is Background, but it is actually rendering as the last thing in the Geometry queue.
     
  4. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    A lot has happened in the 6 years since this thread was started, and Unity's skybox documentation is quite out of date; almost none of it has been accurate since the release of Unity 5.0.

    The default skybox for Unity 5.0 is a procedural skybox that does physically based sky color calculations based on the sky material's atmospheric settings and the scene's sun direction. However, because it's more expensive than the old "cube" style skybox, having it render first is inefficient: the whole screen would calculate the procedural sky color only to get drawn over. So instead Unity renders the skybox between the opaque queues (<=2500) and the transparent queues (>2500), effectively at queue "2500.5", regardless of the queue set in the shader or on the material. It's also rendered with a special high-poly sphere mesh designed to work with the procedural sky, and a special transform matrix that centers the sphere on the camera and scales it to the far draw distance. This means the skybox will only render where nothing has been written to the depth buffer.

    The side effect of this is that a lot of effects you could do in Unity 4 don't work in Unity 5 without some extra work. So while your observations of the behavior are accurate, this isn't a bug but intentional, even if it still isn't documented.
     
    krzys_h, Xury46 and Liam-Lam like this.
  5. jjxtra

    jjxtra

    Joined:
    Aug 30, 2013
    Posts:
    1,464
    Good to know. I wish this was documented. I'll have to look at this more for my sky sphere. Using the depth buffer should save a lot of pixels.
     
  6. jobigoud

    jobigoud

    Joined:
    Apr 13, 2017
    Posts:
    8
    Liam-Lam likes this.
  7. SuzukaChan

    SuzukaChan

    Joined:
    Oct 14, 2016
    Posts:
    5
    I have a question: since the depth buffer exists before the skybox and the opaque objects are drawn, it seems that no matter what the rendering order is, the skybox can only process pixels that are visible. So why does Unity move skybox rendering to after the opaque objects?
     
    bleater likes this.
  8. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,342
    Two reasons:

    1) The depth isn't guaranteed to exist prior to rendering the opaques.
    The camera depth texture that gets rendered before the opaque pass is only rendered if there is a real-time shadow casting directional light (on non-mobile devices), soft particles are enabled in the quality settings, or it's been enabled for that camera from C#, the latter of which is often done for post processing (a sketch of that case is below). If none of those are true, the camera depth texture is not rendered and the depth is not known until after the opaques have been rendered.
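    For reference, a minimal sketch of that last case, turning the depth texture on from a script. The component name here is made up for the example; Camera.depthTextureMode is the standard API for requesting it:

    Code (csharp):

    using UnityEngine;

    [RequireComponent(typeof(Camera))]
    public class ForceDepthTexture : MonoBehaviour
    {
        void OnEnable()
        {
            // Ask Unity to render _CameraDepthTexture before the opaque pass,
            // even if no shadow casting directional light or soft particles
            // would otherwise trigger it.
            GetComponent<Camera>().depthTextureMode |= DepthTextureMode.Depth;
        }
    }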

    2) The camera depth texture is not the camera's depth buffer. It is a camera's depth buffer, but it's not the one used when rendering the visible scene. Unity renders the opaque objects using their shadow caster pass to a separate temporary depth buffer that gets copied to a render texture so it can be sampled by shaders later. That temporary depth buffer and render texture both match the resolution of the eventual visible camera target, but the depth buffer has no MSAA enabled, regardless of whether it's enabled on the camera, because sampling a depth render texture with MSAA results in incorrect depth values.

    This gets a bit complicated, but I'll try to go through it.

    MSAA works by rendering multiple values from slightly offset positions within the pixel, aka sub-samples. The image you see on screen when using MSAA is the average of each of those sub samples per pixel. The result is an anti-aliased image. (Note: MSAA is actually way more complex than that, but that description will suffice for this topic.)

    If you sample a render texture that is storing MSAA values, the GPU will "resolve" the render texture to a non-MSAA texture, which is that averaging. You don't want that for a depth texture, as it means no value in the resulting texture is accurate. This is especially true anywhere there's a large depth difference between the sub-samples, as the average will be some position between the near and far surface, i.e. a position where nothing actually is. The "obvious" option would be to not resolve the texture and to sample the individual sub-sample values instead, but that is either not possible on older hardware or excessively expensive on the hardware where it is possible.

    The easier solution is to render the depth texture without MSAA which avoids the resolve / averaging and ensures the values in the depth texture are actually for surfaces that exist rather than ghost "average" surfaces that don't.

    But if the camera is rendering using MSAA and the camera depth texture isn't using MSAA, it means they still don't match. You can't use the depth from the non-MSAA depth texture for the main rendering since the depth buffer needs MSAA for the MSAA to work. So if you use the non-MSAA depth texture to reject the skybox you'd get an aliased edge where it meets geometry in the scene since they do not perfectly match.


    Now you might think "but wait, if the depth in the depth texture doesn't match the depth buffer in the scene, won't that cause issues everywhere else the depth texture is used?" And the answer is yes, it does. Directional lighting and soft particles break on geometry edges if MSAA is enabled, and always have. But most of the time people don't notice these artifacts.


    Ignoring all of that, and even if you're not using MSAA, sampling the depth texture in the skybox shader to make sure it doesn't render where the opaque geometry is doesn't improve performance at all. If anything it'll actually make performance worse, since you'd still be calculating the skybox everywhere and just throwing away the work at the pixels where it should be hidden. It's only a useful optimization if it is the actual current depth buffer, as then the GPU can skip running the shader on hidden pixels completely. It's true that in this case the depth could theoretically be reused for the main rendering passes, but it's not. *shrug*
     
    SuzukaChan likes this.