
MRT (Multiple Render Target) in Unity5, how to/best practice?

Discussion in 'Shaders' started by Lost-in-the-Garden, Feb 3, 2016.

  1. Lost-in-the-Garden

    Lost-in-the-Garden

    Joined:
    Nov 18, 2015
    Posts:
    176
    For the last few days I have been desperately trying to get a simple MRT setup running. We want to do custom effects (glow and others) that require separate layers, but we don't want to do multi-pass rendering.

    I tried countless combinations of Graphics.SetRenderTarget() and Camera.SetTargetBuffers() with no success. The overarching question floating over my head now is:

    How are MRTs actually supposed to work in Unity 5 and how are we intended to use them?

    The documentation is very thin in this area, and does not offer more insights besides the method signature in most cases. I created a test project that you can find here: https://drive.google.com/folderview?id=0B-NQQxq4JO8AOVV0WHBRakJMRkU&usp=sharing

    When using Camera.SetTargetBuffers I get it to use the smaller dimensions of the render buffer I supply, but my glow shader, which writes to SV_Target0 (I also tried COLOR0), does not work. When using Graphics.SetRenderTarget() I can clear the render buffer with GL.Clear... but I still can't write to it.

    In the frame debugger I can see that stuff is rendered, but it looks like it goes into the default back buffer and not into the MRT.

    What's the difference (in meaning or intent) between using Graphics.SetRenderTarget() and Camera.SetTargetBuffers()? Is it just global vs. local, or is there more to it? Neither of them worked for me, which leaves me guessing.

    Another question is how to blit the MRT to the screen afterwards. The docs mention:
    But this sounds unnecessarily complicated and also smells of OpenGL immediate mode. I just set the render buffers as textures on the shader. Is this also a viable way to go?


    Any hints on the topic are appreciated!
     
    Deleted User likes this.
  2. Lost-in-the-Garden

    Lost-in-the-Garden

    Joined:
    Nov 18, 2015
    Posts:
    176
  3. jvo3dc

    jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    Really pretty much the same as how you use them in DirectX or OpenGL.
    All render targets should have the same dimensions, and usually also the same total number of bits per pixel.
    That should work: SV_Target0, SV_Target1, SV_Target2 and SV_Target3. Use a struct as the output of the pixel shader to write to multiple targets.
    One method takes RenderTextures as parameters, the other takes RenderBuffers (which are the parts inside a RenderTexture). The intent is the same.
    You can use a full screen effect that reads in the MRT targets as inputs and outputs them to the backbuffer.
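    For example, a minimal sketch of such a resolve step (rt0/rt1 stand for the RenderTextures used as MRT targets; resolveMaterial and the texture property names are placeholders, not anything Unity defines):
    Code (CSharp):
    // Hedged sketch: feed the MRT textures to a full screen material and blit to the back buffer.
    void ResolveMrtToScreen(Material resolveMaterial, RenderTexture rt0, RenderTexture rt1)
    {
        resolveMaterial.SetTexture("_MrtColor", rt0);   // placeholder property names
        resolveMaterial.SetTexture("_MrtGlow", rt1);
        Graphics.Blit(rt0, (RenderTexture)null, resolveMaterial);   // null destination = back buffer
    }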
     
    Last edited: Feb 4, 2016
  4. Lost-in-the-Garden

    Lost-in-the-Garden

    Joined:
    Nov 18, 2015
    Posts:
    176
    I probably worded my question in an odd way, but your answer is not very helpful to me.

    The question of how to use MRTs is meant in the sense of how we are supposed to use them in Unity. At what point in the pipeline can I call Graphics.SetRenderTarget? Just once when I start the application, or every frame, or for each camera individually, etc.? The question is also whether to use the global Graphics.SetRenderTarget call or the per-camera Camera.SetTargetBuffers call.

    Apparently Graphics.SetRenderTarget and Camera.SetTargetBuffers do different things, so I suspect the intent is different.

    All the other points you raise are already addressed in the example project I provided. I did everything as you said, but it still does not work, or at least I haven't found the error yet.

    Still, thanks for your answer, I appreciate it (really, I am not trying to be sarcastic here)
     
  5. jvo3dc

    jvo3dc

    Joined:
    Oct 11, 2013
    Posts:
    1,520
    Well, it depends on how you want to use it. That's not really specific to MRT; it applies to any rendering to a RenderTexture. There are basically two ways:

    1. Most commonly you just make another camera that renders specific objects (via a layer mask). By changing the camera depth you can make this camera render before or after the main camera. You only need to set the targets once, since it's just a property of the Camera. It's common to let this camera mimic the main camera; in that case you can copy the main camera into this camera every frame, and you'll need to adjust the layer mask and target every frame too. Any Camera can do MRT.

    2. If you need to fill the RenderTexture(s) between some steps in the Unity pipeline, you can use a CommandBuffer instead of a Camera. This way you can also add RenderTexture commands while the main camera is running (instead of before or after). Any command in a CommandBuffer can do MRT (see the sketch further below).

    For planar reflections, for example, I've made extra reflected cameras that render before the main camera. Then, during the (deferred) reflection calculations of the main camera, I've inserted a CommandBuffer that filters and applies the planar reflections. This doesn't use MRT, but if I wanted to, I could use it at pretty much any step.

    So, simply put: every time anything is rendered to a RenderTexture, you could also choose to render to 2, 3 or 4 RenderTextures using MRT.
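    To illustrate option 2, here is a minimal sketch of binding two RenderTextures as an MRT from a CommandBuffer (the class name, fields and the chosen CameraEvent are just examples, not something from this thread):
    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch only: the shader on mrtMaterial is assumed to write SV_Target0 and SV_Target1.
    [RequireComponent(typeof(Camera))]
    public class CommandBufferMrtSketch : MonoBehaviour
    {
        public Material mrtMaterial;      // writes to both targets
        public Renderer targetRenderer;   // the object to draw into the MRT

        RenderTexture rt0, rt1;
        CommandBuffer cb;

        void OnEnable()
        {
            rt0 = new RenderTexture(Screen.width, Screen.height, 24);  // also supplies the depth buffer
            rt1 = new RenderTexture(Screen.width, Screen.height, 0);

            cb = new CommandBuffer { name = "MRT sketch" };
            // Bind both color targets and rt0's depth buffer in one call.
            cb.SetRenderTarget(new RenderTargetIdentifier[] { rt0, rt1 }, rt0);
            cb.ClearRenderTarget(true, true, Color.clear);
            cb.DrawRenderer(targetRenderer, mrtMaterial);
            GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeForwardOpaque, cb);
        }

        void OnDisable()
        {
            GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, cb);
            cb.Release();
        }
    }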

    I'm guessing that you were just not in your happy place. That tends to happen when nothing is making sense. So that's ok.

    I haven't opened your project, because, well, that's what happens on a forum. ;-)
     
  6. Lost-in-the-Garden

    Lost-in-the-Garden

    Joined:
    Nov 18, 2015
    Posts:
    176
    As mentioned in the introduction, we don't want to use multi-pass rendering. We are constrained on SetPass/draw calls and thus want to keep it lean.

    My problem is that I am very specific about what I want, and I know exactly how I would do it in OpenGL; it just does not work here in Unity.

    So, someone, please have a brief look at the project (I specifically created it as a reduced error case; it only has a handful of files) and tell me what I am doing wrong.
     
  7. Lost-in-the-Garden

    Lost-in-the-Garden

    Joined:
    Nov 18, 2015
    Posts:
    176
    Bumping it one more time; maybe someone has a hint for us.
     
  8. PatHightree

    PatHightree

    Joined:
    Aug 18, 2009
    Posts:
    297
    Lost-in-the-Garden likes this.
  9. Lost-in-the-Garden

    Lost-in-the-Garden

    Joined:
    Nov 18, 2015
    Posts:
    176
    Hi Pat, hi everybody!

    Thanks for posting the link and reminding me of this thread. After some more poking, we found a solution that seems to work for us. In case someone else tries to do MRTs (and you should, there is much more you can do with them in post processing), here are the key steps we took. I am not sure that all steps are actually needed, so there might be some room for optimization and removal of redundant or unnecessary steps:

    First, we have to secure a reference to our default render buffers. I found no other way of getting them back once you set them, so hold on to those pointers. After rendering to the MRT we need to switch back to the default buffers so that we can resolve our MRT to the screen. We do this once during init of the camera. Btw: all our code lives inside a script that is attached to the camera.

    Code (CSharp):
    defaultColorBuffer = Graphics.activeColorBuffer;
    defaultDepthBuffer = Graphics.activeDepthBuffer;
    Also during init we set up the render textures, store their color buffers in an array and set them as the camera's target buffers:
    Code (CSharp):
    color0 = new RenderTexture(width, height, 24, RenderTextureFormat.ARGB2101010);
    glowRGB = new RenderTexture(width, height, 0);
    reflectionParameters = new RenderTexture(width, height, 0);
    viewSpaceNormal = new RenderTexture(width, height, 0);

    buffers = new RenderBuffer[] { color0.colorBuffer, glowRGB.colorBuffer, reflectionParameters.colorBuffer, viewSpaceNormal.colorBuffer };
    ShipCamera.SetTargetBuffers(buffers, color0.depthBuffer);
    Now the really interesting part comes in the OnPostRender() callback of the camera. Since we set the MRT as the target buffers, the scene is already rendered into the MRT correctly; now we have to resolve it into the default buffers:

    Code (CSharp):
    void OnPostRender()
    {
        // some temp buffers and filtering like blurs and such
        // ...

        // resolve
        MrtResolveMaterial.SetTexture("color0", color0);
        MrtResolveMaterial.SetTexture("glowTex", glowOutput);
        MrtResolveMaterial.SetTexture("viewspaceNormal", viewSpaceNormal);
        MrtResolveMaterial.SetTexture("diffuseReflectionTex", diffuseReflectionOutput);
        MrtResolveMaterial.SetTexture("parametersTex", reflectionParameters);

        // draw to default buffer
        // note:
        // We are rendering a split screen scene and Unity weirdly constrains the area where we can draw to.
        // Setting the camera's target buffers to a full screen buffer solves this somehow...
        ShipCamera.SetTargetBuffers(defaultColorBuffer, defaultDepthBuffer);
        Graphics.SetRenderTarget(defaultColorBuffer, defaultDepthBuffer);

        // activate material and draw full screen quad
        MrtResolveMaterial.SetPass(0);
        Graphics.DrawMeshNow(Quad, Matrix4x4.identity);

        // set the camera's target buffers back to the MRT for the next frame
        ShipCamera.SetTargetBuffers(buffers, color0.depthBuffer);
    }
    Addendum A: the full screen quad
    Code (CSharp):
    Quad = new Mesh();
    Quad.vertices = new Vector3[]
    {
        new Vector3(0, 0, 0),
        new Vector3(1, 0, 0),
        new Vector3(1, 1, 0),
        new Vector3(0, 1, 0)
    };

    Quad.uv = new Vector2[]
    {
        new Vector2(marginOffsetUV.x, marginOffsetUV.y),
        new Vector2(1 - marginOffsetUV.x, marginOffsetUV.y),
        new Vector2(1 - marginOffsetUV.x, 1 - marginOffsetUV.y),
        new Vector2(marginOffsetUV.x, 1 - marginOffsetUV.y)
    };

    Quad.triangles = new int[] { 0, 1, 2, 0, 2, 3 };
    Quad.UploadMeshData(false);
    Quad.name = "full screen quad";
    Addendum B: bare bones shader to draw to MRT:
    Code (CSharp):
    Shader "Custom/Trail"
    {
        Properties { ... }

        SubShader
        {
            Tags { ... }

            Pass
            {
                CGPROGRAM

                #pragma vertex vert
                #pragma fragment frag
                #pragma target 4.0

                #include "UnityCG.cginc"

                struct vertexData
                {
                    float4 vertex : POSITION;
                    float3 normal : NORMAL;
                    float4 color : COLOR;
                    float4 texcoord0 : TEXCOORD0;
                    float4 texcoord1 : TEXCOORD1;
                    //...
                };

                struct fragmentData
                {
                    float4 position : SV_POSITION;
                    float4 color : COLOR;
                    //...
                };

                struct fragmentOutput
                {
                    float4 color : SV_Target0;
                    float4 glow : SV_Target1;
                    float4 parameters : SV_Target2;
                    fixed4 viewSpaceNormal : SV_Target3;
                };

                fragmentData vert(vertexData v)
                {
                    fragmentData o;
                    // vertex transform...

                    return o;
                }

                fragmentOutput frag(fragmentData fragment)
                {
                    fragmentOutput output;
                    output.color = fragment.color;
                    output.glow = fragment.color;
                    //... write to the other render targets

                    return output;
                }

                ENDCG
            }
        }
    }
    Addendum C: post processing resolve shader
    Code (CSharp):
    Shader "b35k/post/MrtResolve"
    {
        Properties
        {
            //...
        }
        SubShader
        {
            // No culling or depth
            Cull Off ZWrite Off ZTest Always

            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #pragma target 4.0

                #include "UnityCG.cginc"
                #include "../baseInclude.cginc"

                int enablePostProcessing;

                sampler2D color0;
                sampler2D glowTex;
                sampler2D viewspaceNormal;
                sampler2D diffuseReflectionTex;
                sampler2D parametersTex;
                //...

                struct vertexData
                {
                    float4 position : POSITION;
                    float2 uv : TEXCOORD0;
                };

                struct fragmentData
                {
                    float2 uv : TEXCOORD0;
                    float4 position : SV_POSITION;
                };

                fragmentData vert(vertexData vertex)
                {
                    fragmentData output;

                    output.position = vertex.position;
                    output.uv = vertex.uv;
                    return output;
                }

                fixed4 frag(fragmentData fragment) : SV_Target
                {
                    // fetch and unpack render targets
                    fixed4 color = tex2D(color0, fragment.uv);
                    fixed4 glow = unpackGlow(tex2D(glowTex, fragment.uv));
                    fixed4 normal = unpackNormal(tex2D(viewspaceNormal, fragment.uv));
                    float4 reflectionParams = tex2D(parametersTex, fragment.uv);

                    // do the magic...

                    return finalColor;
                }

                ENDCG
            }
        }
    }
    @Unity In case someone from Unity stumbles over this: we would love to have more fine-grained control over the rendering pipeline in terms of having callbacks when certain events happen, like being able to swap out the MRT for another combination of buffers when drawing transparent objects. Something like an OnRenderQueueStart(int queue) and an OnRenderQueueFinished(int queue).

    I hope this helps anyone who also tries this, and as mentioned in the beginning, you should. :D
     
  10. XRA

    XRA

    Joined:
    Aug 26, 2010
    Posts:
    265
    Very clear example. The thing I'm wondering is: is there any way to read the depth buffer that is shared/written to by the MRT (the depthBuffer of color0)?

    Let's say I have some rendering passes which render additional geometry into the MRT, and at the end of these extra passes I need to store a copy of the depth as the previous depth. But _CameraDepthTexture only gives me the depth of just the camera, not any of the additional stuff that has been drawn into color0.depthBuffer.

    **EDIT**

    I'm thinking that the depth probably needs to be encoded into some of the color channels, correct? That way the depth can be read/modified in other passes (this seems to be why most engines with G-buffers pack normals and depth together).
     
    Last edited: May 26, 2016
  11. Tudor

    Tudor

    Joined:
    Sep 27, 2012
    Posts:
    150

    Hey, your "Addendum B: bare bones shader to draw to MRT:" shader example doesn't seem to work. I don't register any changes done to the MRTs. I also wrote a second pass just under it which should sample all the previous buffer changes but all I see is the default (unchanged) MRTs as if the writing didn't happen.

    Could you please post a more complete version of the shader that's verified to work? E.g. you could set the normal G-buffer to (1,0,0,0) and then in a second pass read the red channel to the screen?
     
    Last edited: May 26, 2016
  12. Lost-in-the-Garden

    Lost-in-the-Garden

    Joined:
    Nov 18, 2015
    Posts:
    176
    @XRA we don't use the depth buffer, so I never investigated how to use it. The API has it, so there should be a way to access it. However, it can still be cheaper to write it to the MRT by hand if you have a free channel. If you already need the additional texture, then the read/write is basically free, and you might also not need the full bit resolution of the device's native depth buffer. But that's just guessing... :D

    @Tudor Unfortunately I don't have the time yet to cook up a full working example with proper documentation; our production plan does not leave room for that right now. But I will try to answer further questions here.

    To your question: the important part of the Addendum B shader is the semantics in the fragment output struct:

    Code (CSharp):
    // copied from the original post
    struct fragmentOutput
    {
        float4 color : SV_Target0;
        float4 glow : SV_Target1;
        float4 parameters : SV_Target2;
        fixed4 viewSpaceNormal : SV_Target3;
    };
    Instead of setting the semantic to 'COLOR' on the fragment function, we return the struct that packs the various targets. Try the frame debugger to see if the MRT is set up correctly. If it is, you should be able to see the various render buffers in the drop-down (see screenshot below).

    Bildschirmfoto 2016-05-27 um 09.28.21.png
     
  13. Tudor

    Tudor

    Joined:
    Sep 27, 2012
    Posts:
    150
    Thanks for the info. I got -something- to work, but as Bgolus pointed out in this thread, apparently you can only write to MRT if you (in addition to what you wrote in your example):
    1) write to all the targets (SV_Target0, SV_Target1, SV_Target2, SV_Target3)
    2) set the "LightMode" = "Deferred" tag on your pass.

    Do you know if you can write to just one target and leave the rest untouched? (isn't it slower to write to all targets?)

    Props on pointing out that I can see the render targets on the object in the frame debugger!
     
  14. Lost-in-the-Garden

    Lost-in-the-Garden

    Joined:
    Nov 18, 2015
    Posts:
    176
    1) The question of whether or not you can selectively write to targets is platform dependent and has only recently been addressed in certain APIs. The default is that you write to all targets. The behaviour is undefined in some cases, and thus the result heavily depends on the hardware/driver/API combination.

    The problem is that Unity does not let us finely control the rendering. It's either all MRT or all the standard way, unless you want to go down the route of multi-pass rendering. This can be a problem, for instance, with transparent objects, where you want to leave some targets untouched. We solved this by only using additive transparent blending; outputting 0 for all the targets we don't intend to write to does the trick for us there.

    2) We use the standard forward rendering path in Unity. I guess when you select the deferred path, Unity does some more magic, like lighting with many light sources and such.

    Edit: to be a bit more precise on 1): afaik per-render-target blend modes are a fairly recent addition to the graphics APIs. Unity supports only one blend mode per shader, though.
     
    Last edited: May 27, 2016
    Tudor likes this.
  15. Lost-in-the-Garden

    Lost-in-the-Garden

    Joined:
    Nov 18, 2015
    Posts:
    176
    Here is a minimal working setup using the OnPostRender callback in the camera controller. This should be a good starting point for everyone who wants to implement MRTs: https://drive.google.com/open?id=0B-NQQxq4JO8AOVV0WHBRakJMRkU

    The next thing I am trying is to use command buffers to inject the render buffer swaps into the right places in the rendering pipeline.
     
    Tudor likes this.
  16. Tudor

    Tudor

    Joined:
    Sep 27, 2012
    Posts:
    150
    My experience is that you can do pretty much everything you want if you start at the AfterLighting command buffer stage and ignore the whole pipeline before that; I haven't been successful with the earlier stages.

    For example, there's that issue with deferred decals not working in the shadows because the emission render target is used by Unity internally. But you can do your decals at AfterLighting, as long as you have also implemented your own AfterLighting point lights, directional lights, etc. Which honestly you should do, since Unity's point lights are atrocious (shadows are calculated every frame, they're low-res, there's peter-panning, there's shadow acne, etc.).
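    For reference, hooking into that stage boils down to registering a CommandBuffer at CameraEvent.AfterLighting. A minimal sketch, with placeholder names and assuming the deferred rendering path:
    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // Sketch only: draws one renderer with a custom material after Unity's lighting pass.
    [RequireComponent(typeof(Camera))]
    public class AfterLightingSketch : MonoBehaviour
    {
        public Renderer decalRenderer;   // placeholder: whatever should be drawn after lighting
        public Material decalMaterial;

        CommandBuffer cb;

        void OnEnable()
        {
            cb = new CommandBuffer { name = "after lighting work" };
            cb.DrawRenderer(decalRenderer, decalMaterial);
            GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterLighting, cb);
        }

        void OnDisable()
        {
            GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterLighting, cb);
            cb.Release();
        }
    }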