
Understanding Render to texture applied in Shaders

Discussion in 'Shaders' started by Guirao, Oct 27, 2014.

  1. Guirao

    I would like to understand render to texture combined with shaders, because it's still a little unclear to me.

    For example, a cool-looking outline glow effect (not the one in the example shaders). I'm not asking for the code so I can copy-paste it; I really want to know how everything is done. As far as I know, the theory is:



    1- Render the object in a single color (vertices expanded through a shader? Or expanded later in texture space after rendering to a texture? If so, by assigning a RenderTexture to the Camera? See the sketch after this list for what I mean).

    2- Apply some kind of blur to the texture (OK, if I get to this point I think I'll be able to figure that out by looking around and searching, so no problem here; not a priority).

    3- Render the base object on top (any basic shader should be enough, right? It isn't done the same way as the example outline shader, where there are two passes, the first for the outline and the second for the base, correct?)
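
    Here is roughly how I imagine step 1 as a script (just a sketch of my guess; GlowPrePass, solidColorShader, and the glow layer are names I made up, so please correct me if this is the wrong way to render into the texture):

    Code (CSharp):
    using UnityEngine;

    // Sketch of step 1: a second, disabled camera renders only the objects on a chosen
    // layer into a RenderTexture, replacing their shaders with a flat-color one.
    [RequireComponent(typeof(Camera))]
    public class GlowPrePass : MonoBehaviour
    {
        public Shader solidColorShader;   // a minimal unlit shader that outputs one color
        public LayerMask glowLayer;       // put the objects to outline on this layer
        public RenderTexture glowRT;      // steps 2 and 3 read from this

        Camera mainCam;
        Camera glowCam;

        void Start()
        {
            mainCam = GetComponent<Camera>();
            glowCam = new GameObject("GlowCam").AddComponent<Camera>();
            glowCam.enabled = false;      // we trigger its rendering manually every frame
            glowRT = new RenderTexture(Screen.width, Screen.height, 16);
        }

        void LateUpdate()
        {
            // Match the main camera, but only draw the glow layer, in one color, into the texture.
            glowCam.CopyFrom(mainCam);
            glowCam.transform.position = mainCam.transform.position;
            glowCam.transform.rotation = mainCam.transform.rotation;
            glowCam.cullingMask = glowLayer;
            glowCam.clearFlags = CameraClearFlags.SolidColor;
            glowCam.backgroundColor = Color.clear;
            glowCam.targetTexture = glowRT;                  // this is the "render to texture" part
            glowCam.RenderWithShader(solidColorShader, "");  // every material is replaced by the flat color
        }
    }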

    This is the theory I have in my head (roughly what the sketch above tries to do), which I think is correct, but there are a lot of things I can't connect:

    a- To render the outline to a texture first, is it a shader pass in which you can somehow get the texture about to be rendered (I know you can grab the background texture with GrabPass)? Or is it done with Camera.Render() into a texture?

    b- Then, once you have the texture, do you process the image on the CPU (C#) or the GPU (shader)? If it's in a shader, do you pass the texture in as a sampler property (I've sketched my guess at this after these questions)? And is that where you apply the grow effect and all the glossiness?

    c- If it's on the CPU, I presume you grow the object, apply the glow, and render the texture, but how? OnGUI? Two cameras?
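
    To make (b) concrete, here is my guess at steps 2 and 3 with everything on the GPU (again just a sketch; blurMaterial, compositeMaterial, and the _GlowTex property name are assumptions):

    Code (CSharp):
    using UnityEngine;

    // Sketch of steps 2-3: the blur and the final composite are full-screen
    // material passes driven by Graphics.Blit, so the image processing stays on the GPU.
    [RequireComponent(typeof(Camera))]
    public class GlowComposite : MonoBehaviour
    {
        public GlowPrePass prePass;         // the script from the sketch above, holding the flat-color texture
        public Material blurMaterial;       // any blur shader wrapped in a material
        public Material compositeMaterial;  // adds the blurred glow on top of the scene

        // OnRenderImage hands us the camera's normal rendering in 'src';
        // whatever we write into 'dest' is what ends up on screen.
        void OnRenderImage(RenderTexture src, RenderTexture dest)
        {
            // Step 2: blur the flat-color render (done on the GPU, inside the blur shader).
            RenderTexture blurred = RenderTexture.GetTemporary(src.width / 2, src.height / 2, 0);
            Graphics.Blit(prePass.glowRT, blurred, blurMaterial);

            // Step 3: the composite shader samples _GlowTex (an ordinary sampler2D property)
            // and adds it over the normally rendered scene, so the base object sits on top.
            compositeMaterial.SetTexture("_GlowTex", blurred);
            Graphics.Blit(src, dest, compositeMaterial);

            RenderTexture.ReleaseTemporary(blurred);
        }
    }

    If that is right, then the answer to (c) would be that nothing happens on the CPU apart from the SetTexture call and the Blit calls; the growing/blurring itself happens in the blur shader. Is that how it's usually done?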
     
  2. Guirao

    Bump! please...