Avoiding GrabPass?

Discussion in 'Shaders' started by BigRedSwitch, Mar 19, 2013.

  1. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    Hey, all,

    I've written a multipass shader that uses the results of the first pass as the source for the next pass. The issue is, in order to do this, at the moment, I'm using GrabPass. This has two key issues:

    1) It's slow.
    2) It grabs the whole screen.

    I want neither of these things! :) I just want to be able to use the image generated on the surface from the first pass so I can use it in the second pass.

    Is this possible, and if so, how?

    Thanks, in advance...

    SB
     
  2. BIG-BUG

    BIG-BUG

    Joined:
    Mar 29, 2009
    Posts:
    457
    What do you want to do exactly? Could it be done in just a single pass using a CG program?
     
  3. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    Nope - the code is already in CG, and I've run out of registers to do it in a single pass... :-(
     
  4. brianasu

    brianasu

    Joined:
    Mar 9, 2010
    Posts:
    369
    Can you create a camera and do a RenderWithShader, or use Graphics.DrawMesh to render into a specific RenderTexture using the first pass? The latter would probably be more efficient. Then set that RenderTexture with Shader.SetGlobalTexture or yourMaterial.SetTexture and render the second pass.

    GrabPass is quite slow because it actually does some pixel copy operations on the CPU.
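    In script form, that suggestion might look roughly like this - just a minimal sketch, where the camera, material and the "_FirstPassTex" property name are placeholders rather than anything specific from this thread:

    Code (csharp):
    using UnityEngine;

    // Render the "first pass" into a RenderTexture with a dedicated camera,
    // then expose that texture to the material that runs the "second pass".
    public class TwoStepEffect : MonoBehaviour
    {
        public Camera firstPassCamera;      // renders only the object, with the first-pass shader
        public Material secondPassMaterial; // material whose shader does the second pass
        RenderTexture firstPassRT;

        void Start()
        {
            firstPassRT = new RenderTexture(1024, 1024, 16);
            firstPassCamera.targetTexture = firstPassRT;
        }

        void LateUpdate()
        {
            // Make the first-pass result available to the second-pass shader.
            secondPassMaterial.SetTexture("_FirstPassTex", firstPassRT);
            // Or globally: Shader.SetGlobalTexture("_FirstPassTex", firstPassRT);
        }
    }

    The second camera's culling mask can be set so it only renders the objects you actually want in the first pass.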
     
  5. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    Well, this uses a render texture to begin with (it grabs from a second camera as the source texture), so that'd be 2 render textures just to do this one effect?

    There's really no way to just drop the output of the first pass into the second? The only reason I really don't want to use GrabPass is the fact that it grabs *the whole screen*, not just the previous shader pass.

    Does no one else think that's a fairly major limitation?
     
  6. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    Also, I'm not really sure I understand how I'd use your suggestion (sounds like you're saying I should split the shader in two?) - any chance you could walk me through it?
    Thanks! :)
     
  7. Farfarer

    Farfarer

    Joined:
    Aug 17, 2010
    Posts:
    2,249
    Well, the *whole screen* is there anyway. It's not like it's re-rendering everything again for the GrabPass; it might be a bit slower now that it's copying the full frame buffer, but I don't imagine by a staggering amount, or it would have been a silly move on Unity's part.

    If it's that much of a requirement, perhaps drop back to the last Unity version where GrabPass captured only the screen-space bounding box of the mesh.

    What effect are you trying to do, exactly? Perhaps we can suggest a better way or suggest optimisations.
     
  8. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    It's just a special effect which requires the splitting of RGB channels, with the addition of various alpha passes (combining moving textures along the way). This is to create an effect on the output of a second camera, which is used as an image in the main view.

    GrabPass isn't usable for what I want. The game I'm writing is pretty simple, graphically, but it needs some nice effects (like the one I describe), so I could take the hit on the GrabPass command, speed wise.

    The issue comes from the fact that GrabPass grabs screen space, not texture space. If I'm using a render texture to process an image grabbed from a second camera, then applying that RenderTexture to an object in the scene (main camera), GrabPass grabs the whole screen, which includes everything rendered, including stuff I don't want in the second pass (like the background, for example).

    If GrabPass worked in Texture Space, I'd have no issues. I'd also have no issues if I could pass the output from one pass into the second pass. I'm struggling to believe that there's no way to do this??
     
  9. Farfarer

    Farfarer

    Joined:
    Aug 17, 2010
    Posts:
    2,249
    Oh, you mean you essentially want to bake out a texture?

    There are ways... have a shader that outputs UV coordinates rather than vertex coordinates as the sv_position, do the base pass you want with your shader, grabPass that and then use that grabPass as a regular texture in the next pass.

    Might need to be done on a second camera to ensure it's not drawing on-screen during the game...
     
  10. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    Well, that's it - that's pretty much what I'm doing - like so:

    (abstracted - code not with me at the moment):

    Code (csharp):
    SubShader
    {
        Pass
        {
            // CG code here
        }

        GrabPass { }

        Pass
        {
            // CG code here
        }
    }
    The issue being that the first pass occurs, is rendered to the texture, which is applied to a model in the scene, then GrabPass occurs, grabbing EVERYTHING in the scene (including the background etc), and the second pass then works on the captured texture.

    All I want to do is to take the texture output of Pass 1 and feed it into Pass 2, unpolluted by anything else.
     
  11. Farfarer

    Farfarer

    Joined:
    Aug 17, 2010
    Posts:
    2,249
    Well, GrabPass won't grab in texture space, it just grabs the framebuffer as it currently stands. So it sounds like GrabPass won't do what you want.

    Perhaps try using a camera with a replacement shader that renders the object to UV coordinates to a render texture, then you can use that render texture in your regular scene as a normal UV mapped texture?

    So for the render texture pass you probably want o.pos to be something like
    o.pos = mul(UNITY_MATRIX_MVP, float4(v.texcoord.x, v.texcoord.y, 0, 0)); // Might have to do some normalizing to get it to draw full-screen.

    Then in your regular scene, just use the render texture like a regular texture.
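    The scripting side of that could be sketched roughly like this (the "unwrapShader" and "_BakedTex" names here are just placeholders for whatever replacement shader and property you end up using):

    Code (csharp):
    using UnityEngine;

    // A helper camera renders the object with a replacement shader (one that writes
    // into UV space) into an off-screen RenderTexture, which the scene material
    // then samples like a normal UV-mapped texture.
    public class UVSpaceBake : MonoBehaviour
    {
        public Camera bakeCamera;      // sees only the target object
        public Shader unwrapShader;    // replacement shader that outputs UVs as positions
        public Material sceneMaterial; // material used on the object in the main scene
        RenderTexture bakedRT;

        void Start()
        {
            bakedRT = new RenderTexture(1024, 1024, 0);
            bakeCamera.targetTexture = bakedRT;
            bakeCamera.enabled = false; // rendered manually below
        }

        void Update()
        {
            bakeCamera.RenderWithShader(unwrapShader, "");
            sceneMaterial.SetTexture("_BakedTex", bakedRT);
        }
    }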
     
  12. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    So there's really no way to actually pass the output of one pass to another?
     
  13. Farfarer

    Farfarer

    Joined:
    Aug 17, 2010
    Posts:
    2,249
    Not in texture space, no.

    Other than what I described, at least.
     
  14. Gibbonator

    Gibbonator

    Joined:
    Jul 27, 2012
    Posts:
    204
    If you want the output available in a shader register then you'll need to use an intermediate texture (either a RenderTexture or by using GrabPass). However you can access the current framebuffer contents at the alpha blend stage using DstColor, DstAlpha, OneMinusDstColor or OneMinusDstAlpha.
     
  15. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    OK - so I can add a second RenderTexture to this and 'ping-pong' the image. How would I go about that?
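    One common way to ping-pong from script is Graphics.Blit with a material per step - a rough sketch, with all of the material and texture names being placeholders:

    Code (csharp):
    using UnityEngine;

    // Ping-pong the image between two RenderTextures, one material per step:
    // the first material does the combining work, the second does the blur.
    public class PingPongEffect : MonoBehaviour
    {
        public Texture sourceTexture;     // e.g. the render texture fed by the second camera
        public Material combineMaterial;  // shader for the first step
        public Material blurMaterial;     // shader for the second step
        RenderTexture rtA, rtB;

        void Start()
        {
            rtA = new RenderTexture(1024, 1024, 0);
            rtB = new RenderTexture(1024, 1024, 0);
        }

        void Update()
        {
            Graphics.Blit(sourceTexture, rtA, combineMaterial); // step 1 writes rtA
            Graphics.Blit(rtA, rtB, blurMaterial);              // step 2 reads rtA, writes rtB
            // rtB now holds the final image; assign it to the in-scene material as usual.
        }
    }

    Graphics.Blit draws a full-screen quad with the given material, so each step's shader just samples its input as _MainTex.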
     
  16. R-Type

    R-Type

    Joined:
    Oct 31, 2012
    Posts:
    44
    I had similar problems with my shaders in a research project, where I have to visualize special data consisting of several textures and a lot of parameterization, all done in fragment shaders. When the calculations got too complicated, I ran out of registers too.
    So this might be an option you probably haven't thought about so far: Unity uses SM2 by default (at least it did for me). After changing to SM3, the number of registers increased dramatically and gave me enough room for my stuff.
     
  17. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    R-Type - that'd be great, but I'm writing this for iOS, so I'm not sure that'd work. If it'd work on certain iPhones and not others, I'd be OK with that, but as I understand it, iOS is SM2 only?
     
  18. Daniel_Brauer

    Daniel_Brauer

    Unity Technologies

    Joined:
    Aug 11, 2006
    Posts:
    3,355
    My guess is you can get all this done in one pass. Pick your worst target hardware, and try it out. iOS is nice because there are very few devices, and their graphics horsepower only increases from one generation to the next.
     
  19. R-Type

    R-Type

    Joined:
    Oct 31, 2012
    Posts:
    44
    Don't know about iOS. I'm working with D3D on Windows. Maybe just give it a try and see how it compiles for OpenGL...
     
  20. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    Daniel - the big problem is that the second pass is a blur, which means that in the vertex code I've got to grab essentially 4 sets of UVs for even the simplest effect. In addition to this, the previous pass uses 4 textures, two of which it splits into separate (and moveable) RGB layers, so I'm pretty sure it can't be done in one pass. If you have a lightning-fast way to do a blur that doesn't need 4 sets of UVs, I'd love to hear it. Unfortunately, iOS doesn't support tex2DARRAY, so I can't grab surrounding pixels all at once...
     
  21. Daniel_Brauer

    Daniel_Brauer

    Unity Technologies

    Joined:
    Aug 11, 2006
    Posts:
    3,355
    Blurs are usually split into multiple iterations. Can everything before the blur be done in texture space, too?
     
  22. Farfarer

    Farfarer

    Joined:
    Aug 17, 2010
    Posts:
    2,249
    Does it have to be blurred with multiple samples? Can't you just use tex2Dlod to grab a lower mip level?
     
  23. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    Daniel - the iterations still require UV offsets:

    Code (csharp):
    -1, -1    0, -1    1, -1
    -1,  0    0,  0    1,  0
    -1,  1    0,  1    1,  1

    I'm only using 4 (the corners), and I may be missing something with how I could do this outside of a separate pass.

    The current format is:

    Code (csharp):
    Pass
    {
        // Do all graphical work to alter the image
    }
    Pass
    {
        // Blur
    }
    If there's a better way to do it, I want to know!

    Farfarer - I could use a lower mip, I guess? Are these generated live for RenderTextures? Remember, the source of the input to the shader is a feed from a camera...
     
    Last edited: Mar 20, 2013
  24. Farfarer

    Farfarer

    Joined:
    Aug 17, 2010
    Posts:
    2,249
    They can be set to generate mips, yeah. Gotta be a power of two, though.

    myRenderTexture.useMipMap = true;
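    For a texture created in code, that would look roughly like this (a minimal sketch; note that useMipMap generally needs to be set before the texture is actually created):

    Code (csharp):
    using UnityEngine;

    // Create a mipmapped RenderTexture from code; the dimensions must be powers
    // of two, and useMipMap is set before the texture is created.
    public class MippedRT : MonoBehaviour
    {
        public Camera sourceCamera;
        RenderTexture rt;

        void Start()
        {
            rt = new RenderTexture(1024, 1024, 16);
            rt.useMipMap = true; // set before Create() / first use
            rt.Create();
            sourceCamera.targetTexture = rt;
        }
    }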
     
  25. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    How do I access the lower mips in the shader? I was under the impression that tex2Dlod wasn't supported in Unity?
     
  26. Farfarer

    Farfarer

    Joined:
    Aug 17, 2010
    Posts:
    2,249
    Works fine. You'll have to add
    #pragma glsl
    if you want it to compile to OpenGL.

    To access the mips, you pass in a float4 where the regular float2 UV coords would go.

    Code (csharp):
    float4 uv;
    uv.xy = yourRegularUVs.xy; // Your regular UV coords.
    uv.z = 0;                  // Doesn't matter what this is.
    uv.w = mipLevel;           // Lower = higher resolution.
    fixed4 myBlurredTexture = tex2Dlod(_MyTexture, uv);
    mipLevel is 0 to access the highest resolution, and each increment will give you the next mip down (i.e. more blurred, as it's half-res).
    i.e. 1 gets you mip 1, 2 gets you mip 2, 3 gets you mip 3, etc...
     
  27. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    Thanks, Farfarer - that's really useful!

    Just one thing, though - seems I can't use tex2Dlod on the PC? What's the alternative so I can have the same effect run across all platforms?
     
  28. Farfarer

    Farfarer

    Joined:
    Aug 17, 2010
    Posts:
    2,249
    Should run on PC. Works fine for me, at least.

    You'll need to add
    #pragma glsl
    to your pragmas.

    As far as I understand it, OpenGL only allows it to be used in the vertex shader, but GLSL will allow it to run in fragment shaders. That pragma will force the OpenGL shader to use GLSL. Or something.

    Any errors in the console?
     
  29. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    Yeah -

    Program 'frag', texlod not supported on this target (compiling for d3d11_9x) at line 38
    Program 'frag', invalid internal function declaration for "float tex2Dlod(sampler2D, float4)" at line 47

    That's it.

    I have #pragma glsl before the pragmas defining the shader program names...

    If I comment out the line with tex2Dlod on it (assign the var to 0), it all works fine...
     
  30. Farfarer

    Farfarer

    Joined:
    Aug 17, 2010
    Posts:
    2,249
    Hmm, a bit of fiddling suggests it will need
    #pragma target 3.0
    if you're compiling to d3d9. Everything else seems cool with it being 2.0.

    I didn't realise 'cause the shader I was using it on had target 3.0 set anyway. So you can put in a keyword check to bump it up to 3.0 if you're on D3D9.

    So with my pragma block looking like this, I get no errors;
    Code (csharp):
    #pragma vertex vert
    #pragma fragment frag
    #pragma fragmentoption ARB_precision_hint_fastest
    #pragma glsl
    #if defined(SHADER_API_D3D9)
        #pragma target 3.0
    #endif
     
  31. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    I'm doing 4 passes per frame with tob, and it's around 60fps on the 3GS, iPhone 4 and iPad, with some variation, but never lower than 50 or so. For all current devices it's a solid 60fps.

    It's not what you're grabbing, it's what you do with it (shader complexity goes a long way towards keeping those pixels moving fast).
     
  32. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    hippo - this wasn't really about speed, as I've said; it was about what was being grabbed by GrabPass. I ONLY wanted to capture what was being output by the previous pass, NOT the entire frame buffer....
     
  33. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    Farfarer - you win, man. Works, and it's awesome. :)

    Thanks for your help!

    Still would be nice to be able to use the output of one pass in another, though... :p
     
  34. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    Gah! I was wrong!! :-(

    The blur needs to occur on the texture generated by the previous passes (as it creates an image from multiple sources) - process is this:

    1) Combine _MainTex and Tex2
    2) Combine result with Tex3
    3) Combine result with Tex4
    4) Blur

    So now I need to access the resultant image as a texture before I can use the blur properly! :-(

    Is there any way to do that? :-(
     
  35. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    UPDATE: I reversed the shader, and decided to 'pre blur' it; I don't know if this will work properly yet, but I'm giving it a go.

    The issue now seems to be that I can't get the rendertexture to generate mip-maps. Does it need to be a code generated RenderTexture to do this?

    At the moment, I'm doing the following:

    Code (csharp):
    public RenderTexture RT;

    void Start ()
    {
        RT.useMipMap = true;
    }
    and assigning the RenderTexture in the editor to the public RT slot.

    When I examine the RenderTexture while the game is running, guess what - NO MipMaps.

    What's going on?
     
  36. Farfarer

    Farfarer

    Joined:
    Aug 17, 2010
    Posts:
    2,249
    What size is your render texture? It has to have power of two dimensions for mip maps to be generated (i.e. 256x256 or 1024x512).
     
  37. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    1024x1024 - always powers of 2...
     
  38. BigRedSwitch

    BigRedSwitch

    Joined:
    Feb 11, 2009
    Posts:
    724
    Any ideas?
     
  39. ChiuanWei

    ChiuanWei

    Joined:
    Jan 29, 2012
    Posts:
    131
    How about using Command Buffers?
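    A rough sketch of that idea (Unity 5+), with all names here being placeholders: draw the object into a temporary render target with the first-pass material, then expose it as a global texture for the second pass to sample.

    Code (csharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // Draw the object into a temporary render target with the first-pass material,
    // then make that target available to shaders as _FirstPassTex.
    public class CommandBufferPasses : MonoBehaviour
    {
        public Camera cam;
        public Renderer targetRenderer;
        public Material firstPassMaterial;

        void OnEnable()
        {
            var cmd = new CommandBuffer { name = "First pass to texture" };
            int texID = Shader.PropertyToID("_FirstPassTex");
            cmd.GetTemporaryRT(texID, 1024, 1024, 0);
            cmd.SetRenderTarget(texID);
            cmd.ClearRenderTarget(true, true, Color.clear);
            cmd.DrawRenderer(targetRenderer, firstPassMaterial);
            cmd.SetGlobalTexture("_FirstPassTex", texID);
            cam.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, cmd);
        }
    }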
     
  40. TestingTinybop

    TestingTinybop

    Joined:
    Mar 9, 2015
    Posts:
    1
    How about frame buffer fetch? It's mostly limited to iOS devices but is very fast I think.
     
  41. Invertex

    Invertex

    Joined:
    Nov 7, 2013
    Posts:
    1,546
    So, I came across this problem for our character customizer: we need some 38+ texture inputs for the level of customization we want (the result gets combined after the customization, so this shader doesn't need to perform well in a more complex setting). I couldn't find anywhere talking about how to use GrabPass to simply pass your texture computations from one pass to another, and this thread seemed to imply you couldn't really do that. But I think I've figured out a really simple formula that seems to work and doesn't even require a vert program to compute anything; you can do all your passes with just surface shaders if you want.

    Simply do these calculations with your base UV values before sampling your _GrabTexture.

    Code (CSharp):
    // Retrieve the screen dimensions, which are the dimensions of the GrabPass,
    // and normalize them so we can use them in a meaningful way.
    float2 screen = normalize(_ScreenParams.xy);

    // Divide screen Y by X to get the ratio of the GrabPass's height to its width.
    float uvOff = screen.y / screen.x;

    // Multiply our horizontal UV by that ratio to scale our UV dimensions
    // to the size of our previous pass's data in the GrabPass.
    mainUV.x *= uvOff;

    // Subtract the ratio from 1 to get the scale of the extra blank data on the sides
    // of the GrabPass, then split it in half and add it to our horizontal UV
    // so that we sample from the center of the GrabPass.
    mainUV.x += (1 - uvOff) * 0.5;

    // If we're rendering in DirectX, we need to flip the UV vertically for the texture to match.
    #ifdef SHADER_API_D3D11
        mainUV.y = 1 - mainUV.y;
    #elif SHADER_API_D3D9
        mainUV.y = 1 - mainUV.y;
    #endif

    o.Albedo = tex2D(_GrabTexture, mainUV).rgb;
    Basically what I'm doing is computing all my Albedo info in the first pass, then doing my Normals and Specular in my second pass.

    I'm still not much of a shader expert though, so I would love some input on this from any of the Unity devs or other knowledgeable folks if this might cause some unforeseen issues...
     
    Last edited: Aug 25, 2016