
N00b Shader questions & advice

Discussion in 'Shaders' started by Snouto, May 24, 2017.

  1. Snouto

    Snouto

    Joined:
    May 27, 2013
    Posts:
    9
    Hi everyone

    Although I'm a seasoned programmer with some experience of Unity I have very little knowledge of Shaders. What knowledge i do have has come from reading various sources and looking at source code over the last few days, so it's still very raw.

    I'm working on a volume rendering shader and I'm at the stage of wanting to optimise the rendering by progressively degrading the quality of the volume away from the camera view. I've got something working along a specific axis; however, as the volume cube is rotated there is an obvious step change between the high-quality front-facing part of the volume and the lower-quality back part. That is to say, the point where the volume quality changes is clearly delineated, but what I want is for the part of the volume behind the current view to always be degraded, even as the volume rotates.

    I'm not sure this is possible, and that's part of the problem, but more fundamentally I have not been able to find any solid information that describes how often Unity will fire a shader. What I mean is, do the shader Pass sections run constantly, over and over, like the standard scripting Update() method does, or do a shader and all of its passes only get fired once, and then only again if any of the material properties change?

    If I presume for a moment that the shader only runs when some property is updated, and not on an Update-like cycle, how might one update a rendered volume so that the middle-to-back region is of lower quality than the front of the volume, and have that degradation follow along as the cube (or perhaps the camera) is rotated?
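    (For illustration only, a rough sketch of one way this is commonly handled: measure the distance each sample has travelled from the camera along the view ray and grow the step size with it, rather than degrading along a fixed object axis. The helper name, the step constants, the 0.5 cube half-size and the front-to-back blend are all hypothetical placeholders, not code from this project.)

    // Sketch: ray march whose step size grows with distance from the camera.
    fixed4 raymarchWithDistanceLOD(float3 camPosObj, float3 entryPointObj,
                                   float3 rayDirObj, sampler3D volume)
    {
        fixed4 acc = 0;
        float3 p = entryPointObj;        // start where the ray enters the cube
        float baseStep = 1.0 / 128.0;    // finest step size, used near the camera
        float lodFalloff = 2.0;          // how quickly quality falls off with distance

        [loop]
        for (int i = 0; i < 128; i++)
        {
            // Distance is measured from the camera along the ray, not along a
            // fixed axis, so the coarse region always sits behind the fine one
            // no matter how the cube or the camera is rotated.
            float distFromCamera = distance(p, camPosObj);
            float stepSize = baseStep * (1.0 + lodFalloff * distFromCamera);

            fixed4 samp = tex3Dlod(volume, float4(p + 0.5, 0));  // cube spans -0.5..0.5
            acc.rgb += (1 - acc.a) * samp.a * samp.rgb;          // front-to-back blend
            acc.a   += (1 - acc.a) * samp.a;

            p += rayDirObj * stepSize;
            if (acc.a > 0.99 || any(abs(p) > 0.5))               // opaque, or left the cube
                break;
        }
        return acc;
    }

    Because the distance is measured from the camera position rather than along an object axis, the coarse region stays behind the finely sampled front of the volume however the cube or the camera is oriented.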

    In connection with the above, how do we determine or ensure that the rays being cast into the volume originate from the position of the camera as it is viewed on screen? I am aware of functions such as ObjSpaceViewDir() and UnityObjectToClipPos(); however, I'm already using these in the vertex program, yet the output of the volume render always remains the same even if I adjust the initial position of the camera.
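    (Again purely as an illustrative sketch, not the project's code: the usual pattern is to transform _WorldSpaceCameraPos into the cube's object space in the vertex program and build the per-fragment ray from it. The shader name, _Volume and NUM_STEPS are placeholders; a unit cube centred at the origin, a camera outside the cube and default back-face culling are assumed.)

    Shader "Custom/VolumeRayFromCameraSketch"
    {
        Properties
        {
            _Volume ("Volume (3D)", 3D) = "" {}
        }
        SubShader
        {
            Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
            Blend SrcAlpha OneMinusSrcAlpha

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                sampler3D _Volume;
                #define NUM_STEPS 64

                struct v2f
                {
                    float4 pos    : SV_POSITION;
                    float3 objPos : TEXCOORD0;   // this vertex in object space
                    float3 objCam : TEXCOORD1;   // the camera in object space
                };

                v2f vert (appdata_base v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.objPos = v.vertex.xyz;
                    // The camera's world position transformed into the cube's
                    // object space; this is what ties the rays to the on-screen
                    // camera even as the cube or the camera rotates.
                    o.objCam = mul(unity_WorldToObject, float4(_WorldSpaceCameraPos, 1.0)).xyz;
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    // The ray starts at the camera and passes through this
                    // fragment's point on the cube surface.
                    float3 rayDir = normalize(i.objPos - i.objCam);
                    float3 p = i.objPos;

                    fixed4 acc = 0;
                    [loop]
                    for (int s = 0; s < NUM_STEPS; s++)
                    {
                        fixed4 samp = tex3Dlod(_Volume, float4(p + 0.5, 0));
                        acc.rgb += (1 - acc.a) * samp.a * samp.rgb;
                        acc.a   += (1 - acc.a) * samp.a;

                        // A fixed step is used here; the distance-based step
                        // sizing sketched above could slot in instead.
                        p += rayDir * (1.0 / NUM_STEPS);
                        if (acc.a > 0.99 || any(abs(p) > 0.5))
                            break;
                    }
                    return acc;
                }
                ENDCG
            }
        }
    }

    If the rendered volume never changes as the camera moves, it is usually a sign that the ray origin is being taken from a fixed point (or a stale property) rather than recomputed from the camera position each frame, as above.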

    Finally, if anyone has any thoughts or ideas on a best-practice approach to this sort of thing, I'd very much appreciate hearing them. My eventual aim is to have e.g. 256x256x256 volumes rendering at a good FPS on mobile devices (probably iPad Pro).

    I hope someone can enlighten me. It's often the simplest of things that create the biggest roadblock to progress!

    Cheers

    Lee
     
  2. AcidArrow

    AcidArrow

    Joined:
    May 20, 2010
    Posts:
    11,741
    For vert/frag stuff, assuming meshes with the shader are visible, vert is run once per vertex per frame and frag is run once per pixel per frame.
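    (As a generic illustration, not code from this thread: in a bare-bones vert/frag shader like the one below, vert executes once for every vertex of each visible mesh using it, and frag once for every pixel that mesh covers, on every frame it is drawn.)

    Shader "Custom/PerVertexPerPixelSketch"
    {
        SubShader
        {
            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                struct v2f { float4 pos : SV_POSITION; };

                // Runs once per vertex of the mesh, every frame the mesh is drawn.
                v2f vert (appdata_base v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    return o;
                }

                // Runs once per covered pixel, every frame the mesh is drawn.
                fixed4 frag (v2f i) : SV_Target
                {
                    return fixed4(1, 1, 1, 1);
                }
                ENDCG
            }
        }
    }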
     
  3. bgolus

    bgolus

    Joined:
    Dec 7, 2012
    Posts:
    12,343
    The important part to understand is that this isn't a "Unity" thing, this is a how-GPUs-work thing. In effect Unity never "fires" a shader at all, ever. Instead it passes information to the graphics API: "here's a bunch of information, do something with it." It's essentially up to the graphics API, the GPU drivers, and ultimately the GPU to decide what to do with that information, and it's the GPU / drivers that eventually "fire" the shader code.

    If you want to understand what's happening, there are plenty of documents out there outlining how modern graphics rendering works, ranging from in-depth and generalized breakdowns like Fabian "ryg" Giesen's A trip through the Graphics Pipeline, to significantly more high-level and Unity-specific pieces like Alan Zucconi's A gentle introduction to shaders in Unity3D, to something in between like Keith O'Conor's GPU Performance for Game Artists.

    At a high level, Unity "fires" a shader every time the screen renders, which could be simplified as "every Update() of the Camera object", but it's more complex than that.
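    (A simple way to see that in practice, again a generic sketch rather than anything from this thread: drop a fragment program like the one below into a standard vert/frag pass such as the bare-bones shader sketched a couple of posts up. Nothing on the CPU side changes between frames, yet the colour animates, because the GPU re-runs the fragment program for every covered pixel on every frame the object is rendered and the built-in _Time value is different each time.)

    fixed4 frag (v2f i) : SV_Target
    {
        // _Time is a built-in shader variable Unity updates every frame; the
        // pulse only works because this code is re-executed each rendered frame.
        float pulse = 0.5 + 0.5 * sin(_Time.y);
        return fixed4(pulse, pulse, pulse, 1);
    }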
     
    Snouto likes this.