
Deferred Renderer + Multiple Cameras + Posts/SSAO = Bugged! (UFPS, OnRenderImage())

Discussion in 'Editor & General Support' started by Andy2222, Sep 2, 2013.

  1. David-Williamson

    David-Williamson

    Joined:
    Jan 6, 2015
    Posts:
    3
    OnPostRender is called after rendering but before OnRenderImage. Using the solution already posted above, you can move the code from OnPreRender to OnPostRender and it works.
    Code (csharp):
    using System.Collections;
    using UnityEngine;

    // Fixes the deferred lighting path missing its final copy & resolve, so the next camera
    // gets the correctly processed final image in the temporary screen RT as input.
    // NOTE: This script must be the last in the image effect chain, so order it accordingly in the inspector!
    [ExecuteInEditMode]
    public class CopyToScreenRT : MonoBehaviour
    {
        private RenderTexture activeRT; // holds the original screen RT

        private void OnPostRender()
        {
            if (GetComponent<Camera>().actualRenderingPath == RenderingPath.DeferredShading) {
                activeRT = RenderTexture.active;
            }
            else {
                activeRT = null;
            }
        }

        private void OnRenderImage(RenderTexture src, RenderTexture dest)
        {
            if (GetComponent<Camera>().actualRenderingPath == RenderingPath.DeferredShading && activeRT) {
                if (src.format == activeRT.format) {
                    Graphics.Blit(src, activeRT);
                }
                else {
                    Debug.LogWarning("Can't resolve texture because of different formats!");
                }
            }

            Graphics.Blit(src, dest); // just in case this script isn't last in the chain
        }
    }
     
  2. danger726

    danger726

    Joined:
    Aug 19, 2012
    Posts:
    184
    I've been using @Andy2222 's CopyToScreen solution for a long time and it worked great, even after upgrading to Unity 5. However, once I switched from the Legacy Deferred (light prepass) rendering path to the Deferred path, it no longer worked reliably.
    This didn't help for me; seemingly depending on how many image effects I happened to have enabled, it would either work, or the screen would go totally black.

    So, here's the solution I came up with. Using the same principle as CopyToScreen, I use a command buffer to blit the result after image effects back into the camera's render target.
    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    // UNITY BUG: When using the deferred rendering path it seems that the result after image effects does not get copied back into the
    // camera's render target.  So, when daisy chaining multiple camera passes with image effects, the final processed image from one
    // camera does not get used as input to the next camera.  In other words, image effects from all but the last camera are lost.

    // FIX: Add this to each camera in the chain; it copies the final processed image back into the camera's render target.
    // Note that this script must be ordered after all other image effects!

    [RequireComponent(typeof(Camera))]
    public class MultiCameraImageEffectFix : MonoBehaviour
    {
        RenderTexture resultAfterImageEffects = null;

        void Awake()
        {
            // Add a command buffer to blit the result after image effects back into the camera's render target.
            CommandBuffer commandBuffer = new CommandBuffer();
            commandBuffer.name = "MultiCameraImageEffectFix";

            commandBuffer.Blit( resultAfterImageEffects as Texture, BuiltinRenderTextureType.CameraTarget );

            GetComponent<Camera>().AddCommandBuffer( CameraEvent.AfterImageEffects, commandBuffer );
        }

        void OnRenderImage( RenderTexture src, RenderTexture dest )
        {
            resultAfterImageEffects = src;
        }
    }
    Hopefully this is helpful to others suffering from this longstanding and frustrating issue!

    Cheers,
    Sam
     
    UnLogick likes this.
  3. jhughes_otherside

    jhughes_otherside

    Joined:
    Jul 18, 2016
    Posts:
    4
    I've been fighting with this exact issue all day. I must have tried a few dozen solutions, and none of them worked. Not even your solution, although it's much more compact and elegant than most I've put together myself. I'm guessing I just have my cameras configured in a way nobody else has, or I missed something critical.

    I have two cameras, A and B. Neither have a render texture set. Both are Deferred, HDR. I added your script to them both (A is at depth -1, B is at depth 0, so they render A, B order). A is set to solid color clear, B is set to none. I think this should have worked.

    What I did find is that I could create an RT and assign it to A, then assign it to B, then create a camera C that renders a quad with that RT as a material to the full screen and it works 100%. But it requires that 3rd camera or Unity complains about no camera rendering to the screen, and I kind of wanted to avoid both the 3rd camera and the warning.

    Ideas? Other than Unity fixing their own bug, I mean. :-S
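    For anyone trying to reproduce this, the two-camera setup described above could be configured in code roughly like this (a sketch only; the class and field names are my own, and `Camera.allowHDR` was `Camera.hdr` in older Unity versions):

```csharp
using UnityEngine;

// Sketch of the setup described above: camera A renders first (depth -1, solid
// color clear), camera B renders on top of it (depth 0, no clear). Both use
// the deferred path with HDR enabled, and neither has a render texture set.
public class TwoCameraSetup : MonoBehaviour
{
    public Camera camA;
    public Camera camB;

    void Awake()
    {
        camA.depth = -1;
        camA.clearFlags = CameraClearFlags.SolidColor;
        camA.renderingPath = RenderingPath.DeferredShading;
        camA.allowHDR = true;

        camB.depth = 0;
        camB.clearFlags = CameraClearFlags.Nothing;
        camB.renderingPath = RenderingPath.DeferredShading;
        camB.allowHDR = true;
    }
}
```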
     
  4. jhughes_otherside

    jhughes_otherside

    Joined:
    Jul 18, 2016
    Posts:
    4
    Update. So, I managed to solve the issue in a pretty straightforward way. The above code didn't work for me, but it was a good starting point. In the end, I embraced the 3rd camera and it worked straight away. I have absolutely no idea why I could never get a camera to blit a render texture created by another camera to the screen before rendering to it. Just never worked for me.

    1. All deferred rendering cameras should be set to render to the same texture. This properly resolves all post effects.
    2. Create a final RT camera that does NOT render to the texture, but instead has a script on it that refers to that texture, and have it draw to screen. I haven't tried to use this on a subsection of the screen for like a rear view mirror, so that probably doesn't final-render right.

    Put this script on the final RT camera in your scene:
    Code (CSharp):
    using UnityEngine;
    using UnityEngine.Rendering;

    [RequireComponent(typeof(Camera))]
    public class FinalDeferredRTQuad : MonoBehaviour
    {
        public RenderTexture deferredRT;
        public Material mat;

        void Awake()
        {
            Debug.Assert(deferredRT != null, "Needs RT assigned.");

            RebuildRT();

            CommandBuffer buf = new CommandBuffer();
            buf.Blit(deferredRT as Texture, BuiltinRenderTextureType.CameraTarget, mat);
            GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterEverything, buf);
        }

        void Update()
        {
            Camera c = GetComponent<Camera>();
            if (deferredRT.width != c.pixelWidth || deferredRT.height != c.pixelHeight)
            {
                RebuildRT();
            }
        }

        private void RebuildRT()
        {
            Camera c = GetComponent<Camera>();
            deferredRT.Release();
            deferredRT.width = c.pixelWidth;
            deferredRT.height = c.pixelHeight;
            deferredRT.depth = 24;
            deferredRT.format = RenderTextureFormat.DefaultHDR;
            deferredRT.anisoLevel = 0;
            deferredRT.antiAliasing = 1;
            deferredRT.dimension = TextureDimension.Tex2D;
            deferredRT.filterMode = FilterMode.Point;
            deferredRT.generateMips = false;
            deferredRT.hideFlags = HideFlags.DontSave;
            deferredRT.useMipMap = false;
            deferredRT.wrapMode = TextureWrapMode.Clamp;
            deferredRT.Create();
        }
    }
    As you can see, it knows when your RT is the wrong size and updates it. Note that Unity has a bug where cameras that render to texture do NOT recalculate their new aspect ratio properly when the resolution changes (even though they do pick up the resolution change... stupid). So I throw this bit into every camera's LateUpdate:
    Code (csharp):
    Camera c = GetComponent<Camera>();
    Rect oldRect = c.rect;
    c.rect = oldRect;  // trick the camera into recalculating its aspect ratio
    As for the rendering material, I simply used Legacy/Diffuse and attached the render texture as the diffuse texture.
     
  5. danger726

    danger726

    Joined:
    Aug 19, 2012
    Posts:
    184
    Hrm, odd that this didn't work. By the way, my script doesn't need to go on the last camera in the chain (so in your case it should only go on camera A), though I don't think that would have prevented it from working for you. The only suggestion I can think of would be to double check that the script is definitely ordered after all other image effects.

    Well anyway, good to know you got things working with your extra camera solution at least!

    Ha, hadn't come across this one before, good to know!

    Interesting, I'm curious why you needed to pass a material into the Blit? It's an optional parameter and AFAIK not required if all you're doing is a copy without any image processing.
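    For reference, a copy-only version of the command buffer blit (mirroring the script posted above, just without the material argument; the class name here is my own) would look like:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch: a plain copy needs no material argument; Unity uses its internal
// copy shader when the material parameter of CommandBuffer.Blit is omitted.
[RequireComponent(typeof(Camera))]
public class CopyOnlyBlit : MonoBehaviour
{
    public RenderTexture sourceRT; // assumed to be assigned in the inspector

    void Awake()
    {
        CommandBuffer buf = new CommandBuffer();
        buf.name = "CopyOnlyBlit";
        buf.Blit(sourceRT as Texture, BuiltinRenderTextureType.CameraTarget); // no material
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterEverything, buf);
    }
}
```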
     
  6. jhughes_otherside

    jhughes_otherside

    Joined:
    Jul 18, 2016
    Posts:
    4
    The blits don't work for me, for some reason. Your script runs ok, but I never get post effects in the second camera. I literally tried a dozen or so ways to make this happen, but never got there any other way than via render-to-texture. Very frustrating. I'm not on oddball hardware, either, it's a desktop with an NVidia 970. The command buffer code works in some things, but I just had zero luck getting a render texture to draw when it existed outside a single command buffer. I tried a bunch of things and they always failed if I referred to a buffer that wasn't a temporary RT created in that camera's command buffer during that frame. It should work, but doesn't for me.

    Good point about the blit not needing a material. I'll try that. Thanks!
     
  7. DDNA

    DDNA

    Joined:
    Oct 15, 2013
    Posts:
    116
    This has been causing me problems for years. I don't know why they refuse to fix this. Danger... this didn't work for me either. I have noticed that some effects work differently than others. I am trying this with GlobalFog. Have you tried that with your solution?
     
  8. danger726

    danger726

    Joined:
    Aug 19, 2012
    Posts:
    184
    I just tried out my solution in combination with GlobalFog, and you're right, it doesn't work. I had a quick look at the GlobalFog code and found the problem. It's because GlobalFog's OnRenderImage function is tagged with [ImageEffectOpaque], which changes where it gets called in the render pipeline (i.e. after opaque geo but before transparent geo, as opposed to after everything). I'm not exactly sure why this stops my fix from working, I'd have to do some further investigation.

    Anyway, if you comment out the [ImageEffectOpaque] tag in GlobalFog, it works with my solution. Of course that means you'll now get fog on transparent stuff which is probably not what you want. To work around this you could render your transparent geo in another camera pass after the opaque + fog pass.
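    A sketch of that workaround (the "Transparent" layer name and class name are assumptions for this example; the exact split depends on your project): put the transparent geometry on its own layer, exclude it from the opaque + fog camera, and render it with a second camera afterwards:

```csharp
using UnityEngine;

// Sketch: split opaque + fog and transparent geometry into two camera passes,
// so GlobalFog (with [ImageEffectOpaque] commented out) only affects opaques.
public class SplitTransparentPass : MonoBehaviour
{
    public Camera opaqueCam;      // has GlobalFog on it
    public Camera transparentCam; // renders afterwards, on top

    void Awake()
    {
        int transparentMask = 1 << LayerMask.NameToLayer("Transparent");

        opaqueCam.cullingMask &= ~transparentMask; // everything except transparent geo
        opaqueCam.depth = 0;

        transparentCam.cullingMask = transparentMask; // only transparent geo
        transparentCam.clearFlags = CameraClearFlags.Nothing;
        transparentCam.depth = 1; // renders after the opaque + fog pass
    }
}
```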
     
    chelnok likes this.
  9. greg-harding

    greg-harding

    Joined:
    Apr 11, 2013
    Posts:
    523
    Basic multi-camera compositing using deferred rendering doesn't work correctly, let alone with image effects or fog or anything else going on.

    A simple scene with 2 cameras and 2 cubes, layered so each camera only sees a single cube, fails to render as expected when using various clearing modes on the second camera. The depth buffer gets very messy, so neither respecting and adding to the existing depth nor clearing the depth in higher cameras works well for us at all. It gets even more messed up when applying image effects on any of the cameras, even just the last one, and particularly when they use the depth buffer for calculations.

    I'd like it to be magic and just work but I suspect the deferred plumbing might not really accommodate it.

    I've seen various other threads talk about fixes using render textures, other pass-through cameras that just blit things around, etc. but none of them fix the basic problems I have. I filed a simple bug report (again) a few months ago with a simple scene that shows the basic deferred compositing issue to try and get an answer from Unity about it but it has not been looked at yet. I assume Unity are aware of the issue and would probably just recommend using forward rendering when doing any complex compositing. It looks like tradeoffs in all directions at the moment.
     
    jhughes_otherside likes this.
  10. ahokinson

    ahokinson

    Joined:
    Apr 28, 2016
    Posts:
    22
    Gonna bump this to say that in 5.5 I am able to use deferred on both cams and use Amplify Color. Was not able to do this before 5.5. Maybe they finally fixed it?
     
    chelnok likes this.
  11. danger726

    danger726

    Joined:
    Aug 19, 2012
    Posts:
    184
    I don't think anything's changed unfortunately, 5.5 still seems broken in the same way it's always been, at least for the use cases in my game and test projects.
     
  12. greg-harding

    greg-harding

    Joined:
    Apr 11, 2013
    Posts:
    523
    I've retested my test project in 5.5.0p1 and deferred compositing is still broken/not particularly useful. My bug reports still haven't been looked at.
     
    chelnok likes this.
  13. bigrip

    bigrip

    Joined:
    Mar 25, 2015
    Posts:
    9
    Where is an official response from the developers?
     
  14. Jesus

    Jesus

    Joined:
    Jul 12, 2010
    Posts:
    501
    Sorry for the revive, but is there any news of this?

    I'm on 5.5.1 and I'm using the new Post-processing stack. Linear, HDR, Deferred for both cameras of course.

    There's a way to get it to work, kind of, and that's to apply tonemapping to the 'far' camera so that it's brought into LDR range (0-1). Setting the tonemapping to natural or filmic does it, but 'none' doesn't and it bugs out.

    Problem being, then the near camera is tonemapping the stuff it renders, and the far camera output is getting double-tonemapped.

    The result is that there's a visible seam based on the near camera's far plane.

    Ideally there'd be a way that the far camera could exist in a non-tonemapped way and just apply it once on the near camera.

    Maybe, just maybe, if you could get a close-to-no tonemapping on the far camera so that the near one would be doing 99% of the work?

    Or if you break up the far camera's 0-8 HDR range into an 8-deep stack of 0-1 additive particles at the far plane of the near camera to keep the colour depth there? Is there HDR render texture options perhaps?


    I heard there's some sort of camera refactor coming with 5.6, would that perhaps make some headway on this?
     
  15. Quatum1000

    Quatum1000

    Joined:
    Oct 5, 2014
    Posts:
    889
    Why not use one post-processing stack on each cam, and enable only what is required on each? Or render into one RT and apply a selected post-processing stack before the second cam starts. There are a lot of possible solutions in this case; I think that's why the Unity devs didn't reply here.
     
  16. Jesus

    Jesus

    Joined:
    Jul 12, 2010
    Posts:
    501
    Tried that, can't.

    If I use this setup:
    Far Cam: Def, HDR, PPS with NO tonemapping - either unticked or set to none (so it stays in hdr range)
    Near Cam: Def, HDR, Clear Flags Depth Only, PPS With tonemapping

    It bugs out.

    Basically deferred doesn't seem to accept 'lower' or 'far' cameras with a HDR output, only a LDR output. And rendertextures don't seem to be a viable solution either. Since to combine them, I'd need a pixel-perfect mask for the near camera to lerp to it over the far camera's output, which means no AA, bloom, etc. And it'd end up a frame behind (unacceptable).

    Right now I'm trying to tonemap the far camera as lightly as possible, because since I need the close camera tonemapped as well, it gives the stuff in the distance a second pass (one from far cam, one with near cam) and starts to get washed out.


    I could be wrong here, so please correct me if someone can post a working example of how to get a 2-deep Deferred HDR camera setup behaving nicely and as expected (each in full colour range) with no visible seam at the join.
     
  17. Quatum1000

    Quatum1000

    Joined:
    Oct 5, 2014
    Posts:
    889
    What range is your far cam set to, and does the far cam render only a backdrop scene?
     
  18. Jesus

    Jesus

    Joined:
    Jul 12, 2010
    Posts:
    501
    Same scene, for now anyway. This might change to a miniature for the far camera at some point down the line, but that shouldn't have any effect here since it'd still have the same camera depth, etc.

    Near camera is 0.1 to 100 (factor of 1000), far camera is 100 to 100,000 (factor of 1000). I've been experimenting with pushing the near camera to 0.5-250 and far to 250-100,000 to try and get some better factor numbers at the expense of not being able to push right up on close things.

    Can't push the near plane on the far camera back very far (even 400m is way too much) because I've got an image effect that does need to be applied at close-ish ranges.
     
  19. tanoshimi

    tanoshimi

    Joined:
    May 21, 2013
    Posts:
    297
    Just encountered this in 5.5 too. Don't have much to add that hasn't already been mentioned other than, in my case, I want the opposite of what @Luckymouse described in the first page - that is, two deferred cameras layered on top of each other but with image effects on the top camera, not the bottom. Every combination of solutions suggested so far leads to a different selection of unwanted artefacts :(
     
  20. Quatum1000

    Quatum1000

    Joined:
    Oct 5, 2014
    Posts:
    889
    Some artefacts occur because of the nature and logic of the effects themselves. That is, you have one part of the screen with an effect and another part without. The reason is simple: most effects don't calculate per pixel only.

    They sample pixels in a matrix around the original pixel position. So, at the border of a screen area there is nothing to sample (or just black), and those pixels feed into the calculation too, e.g. the depth, emission, reflection, etc. buffers.

    Perhaps it's necessary to combine the depth buffer (or whichever other buffer is causing the artefacts) with the one used for the image effect.

    Anyway, it's always pretty difficult to tell exactly what's happening without any screenshots.
     
  21. Quatum1000

    Quatum1000

    Joined:
    Oct 5, 2014
    Posts:
    889
  22. Steve-Tack

    Steve-Tack

    Joined:
    Mar 12, 2013
    Posts:
    1,240
  23. greg-harding

    greg-harding

    Joined:
    Apr 11, 2013
    Posts:
    523
    (Crossposting from the SSAO Pro thread)

    Our closed issue: #835332 "Deferred rendering path does not support 'Clear Flags: Don't Clear'" - showed rendering problems compositing multiple deferred cameras with no effects being used.

    Active issue: https://issuetracker.unity3d.com/is...s-of-two-cameras-is-rendered-completely-white

    Our original issue still isn't working in Unity 5.6 - compositing multiple deferred cameras does not work.
     
  24. Steve-Tack

    Steve-Tack

    Joined:
    Mar 12, 2013
    Posts:
    1,240
    D'oh!
     
  25. jhughes_otherside

    jhughes_otherside

    Joined:
    Jul 18, 2016
    Posts:
    4
    For what it's worth, I spent about a week on multiple deferred compositing methods in 5.5 and found no workable solution, whether using render textures, CommandBuffer solutions, capturing depth and manually re-blitting, etc. None work. The best you can do is to NOT USE TWO CAMERAS. Manually render all your second camera objects with a CommandBuffer, attached to the FIRST camera, while the depth buffer is still there, to a different render texture along with any post effects you want. Clearly, not a way forward.

    However, in 5.6, you can completely script your own render loop. I plan to do that, and expect to get exactly what I want out of it. It seems like there's something wrong with multiple cameras and deferred in general that blows away the G buffer, so avoiding multiple cameras is the way forward. At least there is a way.
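    A very rough sketch of that single-camera idea (all names here are mine, and the event choice is an assumption, not tested code): attach a command buffer to the first camera that draws the renderers a second camera would have handled, while the first camera's depth buffer is still bound:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Sketch of the "don't use two cameras" approach: draw the objects that would
// have belonged to a second camera via a command buffer on the first camera,
// while its depth buffer is still available.
[RequireComponent(typeof(Camera))]
public class SecondPassViaCommandBuffer : MonoBehaviour
{
    public Renderer[] secondPassRenderers; // objects the "second camera" would have drawn

    void Awake()
    {
        CommandBuffer buf = new CommandBuffer();
        buf.name = "SecondPassViaCommandBuffer";

        foreach (Renderer r in secondPassRenderers)
        {
            // Draw each renderer with its own material, submesh 0, shader pass 0.
            buf.DrawRenderer(r, r.sharedMaterial, 0, 0);
        }

        // AfterForwardAlpha runs after opaque deferred shading is done, while
        // the camera's depth buffer should still be bound.
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterForwardAlpha, buf);
    }
}
```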
     
  26. greg-harding

    greg-harding

    Joined:
    Apr 11, 2013
    Posts:
    523
    If you can, try enabling HDR rendering - for some reason compositing with deferred cameras sometimes works when HDR is enabled. The issue is still open with Unity.
     
  27. danger726

    danger726

    Joined:
    Aug 19, 2012
    Posts:
    184
    I've just upgraded to 5.6, and found that this issue* is now finally fixed (at least for me). I no longer need the "copy to screen" hack / fix, awesome!

    * NOTE: Just to be clear, the issue I mean here is specifically the one where, when using multiple camera passes with deferred rendering, HDR enabled, and image effects, the image effects would disappear.

    There is another separate problem where, when using multiple camera passes with deferred rendering and HDR disabled, earlier camera passes show up white. This is the bug Greg is referring to I believe. I was never able to work around this one, even with the copy-to-screen thing. I just checked, and yeah it is still broken in 5.6.
     
  28. DDNA

    DDNA

    Joined:
    Oct 15, 2013
    Posts:
    116
    Just ran into this myself. It works with HDR, but without it everything is white on the last camera.

    I can't believe after all this time such a basic feature is still broken in Unity.