
The Wish for an Alternative Rendering Path

Discussion in 'Works In Progress - Archive' started by Dolkar, Sep 1, 2013.

  1. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    Hello there, fellow Unity users! I'm a self-taught graphics programmer working in an indie group on an as-yet-undisclosed project. Totally not hiding a thing from you guys! What's important, though, is that we're aiming for high-end machines and AAA graphics quality. Unfortunately, Unity doesn't meet all of our expectations in that area, so instead of hopping to a different engine once again, I decided to try to implement a custom rendering path in Unity itself, which I'm willing to share once it's done. It has been about a month since I started, and I'd like to share my progress with you.

    First things first: this post assumes basic knowledge of the standard rendering paths as well as general graphics programming. If you don't understand a term, Google is your friend! Now sit comfortably...


    So what's wrong with Unity's rendering process?

    To understand that, let's look at our wish list: a nice BRDF, up to 10 filtered shadow-casting lights (often point lights) visible at once, even more unshadowed lights, efficient particle effects, fast decal rendering, and the ability to handle high vertex counts. Dynamic GI would be quite nice as well. Sounds like the best way to turn your favorite card into a garden grill!

    So, the forward rendering path is out of the question. While it's possible to implement most of these features in it, we'd like to make a game, not an interactive slideshow. Unity's deferred (lighting) path partially solves the high-light-count + high-vertex-count performance problem, but it comes with many problems of its own. For example, you have to squeeze the lighting information for the final pass into a single ARGB buffer, which usually means sacrificing the specular color and passing only its intensity, the same way Unity does. That might be a problem if a white specular reflection from a colored light breaks the immersion for you. Having to render the geometry twice is not free either. There's also the everlasting problem with transparency. While you can't effectively support the general case without killing quality, bandwidth or the designers, it's still possible to render decals or particle effects without falling back to forward rendering.

    The major issue, though, is shadows. Although the quality of spot and directional light shadows can be tolerated and the quality of point light shadows can be understood, there seems to be no way to bake shadowmaps! Yes, you do have lightmaps, but you can't cast high-frequency shadows on dynamic objects with them, and you can't correctly blend them with additional shadows from dynamic shadow casters. Yes again, that's why Unity supports dual lightmapping, but the distance has to be quite large for the transition between a lightmap and real-time lighting to become barely noticeable, and if you happen to have more than three shadow-casting lights in that radius, you're screwed.

    And because Unity behaves like a big black closed box (for reasons I can understand), I have to reimplement the whole rendering pipeline to change anything. So I did just that! :)


    Pen, paper... and a light, sigh...

    The first task was to choose the actual rendering path. From a study of the subject, it became apparent that deferred shading wins performance-wise, as it scales well in both vertex count and light count. While the extra bandwidth cost of the fatter G-buffer poses a problem for consoles, it's bearable on PC. Another argument against it is that you're limited in the materials you can use. Deferred lighting doesn't solve this issue completely either, though: you're still stuck with only a single BRDF at a time. I don't like inferred lighting, as the thought of sacrificing quality isn't all that pleasant to me. On the other hand, there are tile-based deferred shading and tile-based forward shading. Those don't work on D3D9, though, and look like more trouble than they're worth.

    So deferred shading it is! That needs a solid G-buffer design! First things first, though: we need to establish the lighting BRDFs. During my hunt for these, I fell in love with Oren-Nayar. It's quite a subtle change in the end, but... it simply packs that extra punch. Rough things should look rough, right? It's a generalization of Lambert's diffuse model; the two are equivalent at roughness = 0. So, here's a quite heavily optimized and approximated version of it, where I combined stuff from various sources (I can't remember which... but I love you!) to save as many instructions as humanly possible, and then optimized it some more:
    Code (csharp):
    float A = 1.0 - (gbuf.roughness * gbuf.roughness * 0.3184713);
    float B = gbuf.roughness * 0.412844;

    float t = LdotV - NdotL * NdotV;
    if (t > 0.0) t /= max(NdotL, NdotV); // Please tell me this does not branch...
    return max(0.0, NdotL) * (A + B * t);
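    For anyone who wants to sanity-check the math outside a shader, here's the same approximation as a Python sketch (a hypothetical host-side port of the snippet above, with the dot products passed in as plain floats). At roughness = 0 it should collapse to plain Lambert:

```python
def oren_nayar(NdotL, NdotV, LdotV, roughness):
    # Same constants as the shader snippet above.
    A = 1.0 - roughness * roughness * 0.3184713
    B = roughness * 0.412844
    t = LdotV - NdotL * NdotV
    if t > 0.0:
        t /= max(NdotL, NdotV)
    return max(0.0, NdotL) * (A + B * t)
```

    With roughness = 0, A = 1 and B = 0, so the whole thing reduces to max(0, NdotL) — exactly Lambert.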
    Specular BRDFs are a bit more complicated. Since I was already going for Oren-Nayar, I figured I could also use a physically based way of making pixels glow under mysterious conditions. For that we need one cup of Fresnel, a geometry term and finally a slice of a properly normalized distribution term, then mix it all up with the Nayar goodness nice and easy to conserve all that energy. Here's the recipe in more detail. The actual terms I used reflect the fact that I don't care about being super accurate, and that I'm also obsessed with optimizations.

    For the distribution term, I used the somewhat rare Trowbridge-Reitz instead of the classic Blinn-Phong. It looks smoother and seems comparable in performance, thanks to the lack of a pow instruction:
    Code (csharp):
    float specStrength = exp(shininess * 6.0); // I use an exp instead of Unity's *256 to make it scale nicely
    float baseSpec = 1.0 / (specStrength - specStrength * NdotHSquared + NdotHSquared); // A personal optimization
    float distribution = baseSpec * baseSpec; // Surprise square!
    float normalizationTerm = specStrength / 3.14159 * 8.0; // Hand-tuned... looks about right
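    As a quick cross-check of that "personal optimization": the denominator s - s*x + x is just lerp(s, 1, x) with x = NdotH², written with one fewer operation. Here's a host-side Python sketch of the whole term (a hypothetical port, not the actual shader):

```python
import math

def trowbridge_reitz(shininess, NdotH):
    spec_strength = math.exp(shininess * 6.0)  # exp scale, as in the snippet above
    x = NdotH * NdotH
    # The denominator is algebraically lerp(spec_strength, 1.0, x).
    base = 1.0 / (spec_strength - spec_strength * x + x)
    distribution = base * base
    normalization = spec_strength / math.pi * 8.0  # hand-tuned factor
    return distribution * normalization
```

    At NdotH = 1 the denominator collapses to 1, so the lobe's peak is exactly the normalization term.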
    I have no idea where I found this Fresnel approximation, so sorry about the lack of a source. It doesn't simulate the color shift, but oversaturation often takes care of that. So it's quite a simple one...
    Code (csharp):
    float minimalisticFresnel = 1.0 / max(1.0 - shininess, LdotH * LdotH * LdotH);
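    To see what this approximation actually does, here's a Python sketch (again a hypothetical host-side port): head-on it returns 1, toward grazing angles it grows like 1/LdotH³, and the 1 - shininess term caps how bright it can get on rough surfaces:

```python
def minimalistic_fresnel(shininess, LdotH):
    # Grazing angles (small LdotH) brighten the reflection; the floor
    # 1 - shininess keeps rough surfaces from blowing up.
    return 1.0 / max(1.0 - shininess, LdotH * LdotH * LdotH)
```

    For example, a smooth surface (shininess = 0.9) saturates at 1 / 0.1 = 10 near grazing angles, while a fully rough one (shininess = 0) stays at 1 everywhere.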
    But where is the geometry term? It turns out Oren-Nayar takes care of that already. So, even though it's probably the best way to ensure this model is NOT physically based, I multiply the specular term by the diffuse term. Looks fine, though... As for energy conservation... I included it at first, as a simple lerp between diffuse and specular, but it seemed to steal a ton of power from the artists, so I dropped it.

    And this is how the lighting looks in quite a simple scene with a bunch of rough and glossy balls, concrete walls and golden... things using the first bump map I could find:

    $bwuk1.png

    As you can see, the shadows are missing, because that's where I got stuck...


    The wrath of Multiple Render Targets

    So all was going fine and dandy. I went in with the basics designed and an idea of how to implement them. The whole thing resides in a camera script, but I'm not really using the camera itself. I can't just disable it, because then the OnPostRender event wouldn't fire. So I set the culling mask to zero, disable all lights and change the rendering path to VertexLit. It still triggers a clear, but that's actually useful. Then came the time to build the actual pipeline.

    The first thing on the list is to render the G-buffer via Multiple Render Targets. The only function that supports these is Graphics.SetRenderTarget. But when I call Camera.Render afterwards, the active render targets reset back to Camera.targetTexture... it's the expected behavior, after all... but you can't set multiple buffers as a Camera.targetTexture! That's kinda shortsighted. You can't render the whole camera view to MRTs or RWTs! The only way around it seems to be to waste a LOT of time reimplementing all of Unity's batching, culling, sorting and drawing of meshes, and make it all behave similarly to the way Unity does it. Even I'm not crazy enough to do that! ... right?

    Well, I thought I could get away with it. I found a method to pack enough data into a single ARGBFloat (32 bits per channel) texture. The issue is that I can use only 24 bits per channel; there is no way to put the extra exponent bits to work. Also, float targets are quite expensive, so when I finally brought myself to do a proper performance test, it was performing similarly to Unity's approach. Unacceptable! I had half the draw calls!

    The final imaginary nail in the imaginary coffin was the not-so-imaginary point light shadows. Our scene makes rather heavy use of them, so their rendering needs to be as fast as possible. Shadow cubemaps work quite well. They are not THAT expensive and allow for fine culling and caching of the faces. Assuming I had static objects cached in a separate shadowmap, I would render a face only when its frustum intersects the camera's frustum AND when that face contains some dynamic objects. On average, that means one face per light per frame. Bearable. The problem with cubemaps is that they can't be easily prefiltered. Not only is it slow, because you need to filter every face separately, but you also get seams, for the very same reason. PCF is out of the question as well, because I must filter both static and dynamic shadow maps each frame, and PCF is expensive as it is. I'd choose a separable Gaussian blur any day.

    The alternative is dual-paraboloid mapping. It's rendered in only two passes and there is just one seam to hide. I assume only a single face would need rendering for each light per frame, as long as the light is rotated appropriately. Unity's frustum culling broke everything, though... There is no way to turn it off, and setting the FoV to 180 doesn't help much either. I couldn't find any solution other than to... render everything by myself! Yay!


    The G-Buffer Playground

    I was having dreams about this.
    The goal is to maximize the variety of materials and objects that can be rendered via deferred shading, while using as little bandwidth as possible. I'm not willing to go over 16 bytes per pixel, or 4 32-bit render targets. Let's first list what I'd like to handle with this path:
    • General opaque materials: Everything from wood to metals... vast majority of the scene.
    • Translucent materials: Grass, foliage and maybe even hair would benefit from some kind of translucent lighting. If the light is behind the object, some of the light should bleed through.
    • Glowing materials: It would be a shame if we couldn't do this as well.
    • Transparent decals: You can't normally render transparent objects to the G-Buffer, because what you'll get out of it after shading won't look like one material over another, it will look like two materials blended together, but that's exactly what we want to achieve with decals!
    • Some particle effects: This is a totally different type of magic. I'm not even sure how it will look or perform, but the idea is to use a separate G-buffer to blend the individual particles together in a similar way to the decals, then shade the whole particle system at once and alpha blend it with the rest of the frame as usual. It should look fine for thick smoke effects and explosions.
    That's quite a range, considering the quite general BRDFs we are using. So, what toys do we need? Ideally, we would have depth, normals, albedo color, specular color, translucency color, HDR glow color, roughness and shininess. That sums up to 32 + 24 + 24 + 24 + 24 + 48 + 8 + 8 bits = 24 bytes... not good. We can compress depth and normals into a single 32-bit texture by transferring linear depth (16 bits are enough for reconstructing position) and by shrinking the normals using some of these methods (thanks Aras!). 3 bytes saved; that's not enough.

    We can't compress much else at this point without losing quality or sacrificing some material support. There are way too many colors for my taste, though. Do we really need a separate glow color? Can something glow in a different color than it reflects? I think it's safe to cross that one out and use albedo color * glow factor instead. The same goes for the translucency color. Even though leaves get a tad more yellowish when a light shines through them, that can be faked... somehow... please? Anyway, since we're going for semi-physically-based shading, and since object shininess basically depends on roughness... I think they can be merged into a single attribute just fine.

    32 + 24 + 24 + 8 + 8 + 8 bits = 13 bytes. We're under the limit... but... you know, it would be a waste to use one MRT for just a single byte of data. Where can we squeeze some more from? Specular color... what do you set it to most often? Plastics have a whitish specular reflection, but metals like gold or copper reflect the light tinted by their own color. It seems like all physical materials reflect something between their own color and the color of the light, so let's use that blend as an attribute! We'll need a glossiness kind of attribute as well, to control the strength of the specular. That conveniently saves us a byte, so we can fit everything in just three 32-bit MRTs! Sweet.

    So, to reiterate:
    • 2B | Depth: Distance to camera divided by the distance of the far clip plane. Encoded in two bytes as (depth, frac(depth * 256)).
    • 2B | Normals: View space normals encoded into two bytes with either the spheremap transform or stereographic projection.
    • 3B | Color: We're using only one color that controls everything.
    • 1B | Roughness: Controls both the roughness component of Oren-Nayar and the specular exponent. I might invert it and call it Shininess instead.
    • 1B | Glossiness (Specular): This is a tough one. It can either affect just the brightness of the specular reflection or I could just as well go full physical and make it lerp between diffuse and specular. But as I said, it steals power from the artists.
    • 1B | Specular color blend: Lerp between (matColor * lightColor) and lightColor
    • 1B | Glow factor: How much the material emits light. Probably on a scale from 0 to 10?
    • 1B | Translucency: Controls how much light can shine through. I haven't looked into how exactly this is computed... Hopefully something as simple as an inverse LdotN? :)
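    The two packed attributes above are easy to sanity-check outside the shader. Here's a Python sketch (my own illustration, with the quantization written as an integer split) of a 16-bit depth divided across two 8-bit channels, plus the spheremap normal encoding from the methods linked earlier:

```python
import math

def encode_depth(depth):
    # Quantize a [0, 1) depth to 16 bits and split it into two bytes --
    # the integer equivalent of the (depth, frac(depth * 256)) idea.
    d16 = min(int(depth * 65536.0), 65535)
    return d16 >> 8, d16 & 0xFF

def decode_depth(hi, lo):
    return (hi * 256 + lo) / 65536.0

def encode_normal(n):
    # Spheremap transform; expects a normalized view-space normal
    # with n.z > -1 (the degenerate back-facing case is not handled).
    x, y, z = n
    p = math.sqrt(z * 8.0 + 8.0)
    return (x / p + 0.5, y / p + 0.5)

def decode_normal(e):
    fx, fy = e[0] * 4.0 - 2.0, e[1] * 4.0 - 2.0
    f = fx * fx + fy * fy
    g = math.sqrt(1.0 - f / 4.0)
    return (fx * g, fy * g, 1.0 - f / 2.0)
```

    The depth round-trip error is at most one part in 2^16 of the far plane (the "16 bits are enough" claim from earlier), and the normal round-trip is exact up to floating point, before the two channels are themselves quantized to 8 bits each.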
    By the way, here is a handy chart I found of some of the physical attributes:

    $dontnodgraphicchartforblinnmicrofacet_lowres2.png


    Now for the actual layout... here's a rather obvious one:
    Code (csharp):
            R        G        B        A
    1:  <    NORMALS     ><  16-BIT DEPTH  >
    2:  <       MAIN COLOR        >< GLOW  >
    3:  < ROUGH >< GLOSS >< SPCOL >< TRANS >
    It looks quite alright. The grouping seems sensible... but... will it blend?
    It would work just great if we didn't want decals. To blend them in, we need to use the alpha channel to control transparency, which means a decal can't write anything into alpha. The most logical thing, then, is to store in alpha the values a decal should leave untouched anyway. It would make no sense for decals to change the depth. You can't really change the translucency either... bullets would shoot right through leaves, leaving a hole instead. So let's move those to the alpha channels:

    Code (csharp):
            R        G        B        A
    1:  <    NORMALS     >< GLOW  >< DEPTH >
    2:  <       MAIN COLOR        >< DEPTH >
    3:  < ROUGH >< GLOSS >< SPCOL >< TRANS >
    Looks fine... I'll have to sample two textures now to get the precise depth, but that shouldn't be a huge problem. The reason I put the glow factor with the normals is to minimize the need to touch that texture for decals. A painting will more than likely change the roughness, glossiness and maybe the specular color, but it probably won't make the area glow or change the normals. For such decals, I can write only the last two textures.​
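    The decal trick above boils down to ordinary alpha blending on the RGB channels of each target while the destination alpha (depth or translucency in the layout above) survives. A Python stand-in for what the blend state does per texel (illustrative values only, not engine code):

```python
def blend_decal(dst, src, alpha):
    # dst and src are (R, G, B, A) G-buffer texels. Standard alpha
    # blending is applied to RGB only; the destination alpha is kept,
    # which is what leaves depth/translucency intact under a decal.
    out_rgb = [s * alpha + d * (1.0 - alpha) for s, d in zip(src[:3], dst[:3])]
    return (out_rgb[0], out_rgb[1], out_rgb[2], dst[3])
```

    A half-transparent decal thus pulls roughness, gloss and specular blend halfway toward its own values, while the depth written by the underlying surface stays exactly as it was.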


    That's about it for now! Kudos to you if you've read through it all. I'll update this post once I get the process of rendering a camera view with MRTs done, along with a performance comparison. I'll hop onto shadows right after that (brace yourselves, screenshots are coming!). I might include some sort of GI as well or something... probably using a compute shader to generate and sample light probes in screen space, or at least that's what I have in mind now.


    I plan to release this for you this year, hopefully in two versions: one with just the basic lighting and no shadows, but still a fully customizable, robust renderer, and one with all the sexy stuff like physically based lighting and soft shadows in a ready-to-go package. No closed source. Both should also perform better than Unity's standard deferred lighting on a PC. It might not work as well, or at all, on mobiles.


    TL;DR: ... ehm... just wish me luck :)
     
  2. Kridian

    Kridian

    Joined:
    Jan 24, 2012
    Posts:
    55
    You had me at, "Whats important though, we're aiming for high-end machines and AAA graphics quality."

    Good luck in your journey to higher-tier rendering!
     
  3. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
    Good luck, guys. I consider Crytek's shadows to be the best in the business, so here's a paper on how they do shadows in their next-gen games. Wouldn't hurt to throw in that soft shadow approximation ;)
    http://www.crytek.com/download/Playing with Real-Time Shadows.pdf

    EDIT: lol, I just realized you are the exact same guy I was chatting with in the other thread, so you've seen all this before. haha, anyway, good luck!
     
    Last edited: Sep 2, 2013
  4. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    Well... more the former than the latter ;) but thanks!
     
  5. BIG-BUG

    BIG-BUG

    Joined:
    Mar 29, 2009
    Posts:
    457
    Good Luck!

    Do you really think it is such a good idea to split the depth? It just feels cluttered to me. You would have to account for it every time you handle depth, writing as well as reading; surely that has some impact on performance?
    Maybe you could pack "Glow" together with "Specular color blend", giving each 4-bit precision, or would that be a bad idea?
     
  6. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    I'll have to split the depth into two channels either way. The only performance impact is when I want the depth alone; in that case, I'll need to sample two textures instead of one. But the decoding process stays the same.
    And yes, I could do that... I'm not sure about glow, but translucency and the specular color blend would be fine with just a 16-value range. It would add yet another encoding and decoding cost, though, and I'd rather avoid that.

    Thanks for your input!
     
  7. carking1996

    carking1996

    Joined:
    Jun 15, 2010
    Posts:
    2,609
    Nice job. Will be waiting for more.
     
  8. brianasu

    brianasu

    Joined:
    Mar 9, 2010
    Posts:
    369
    Awesome stuff.

    About the dual-paraboloid mapping: I've implemented it and ran into the same problem. The solution was to attach a script that creates some invisible vertices that expand the object's bounding box. It makes the culling less efficient, but that is how I worked around it. Another disadvantage of dual-paraboloid is that the scene has to be highly tessellated, since the verts are warped in the vertex shader.

    Another option is to use something similar to imperfect shadow maps. You render using dual paraboloids, but rather than rendering the mesh, you kind of voxelize the scene by rendering a bunch of point sprites that represent the scene into a really low-res shadowmap, then upsample. This gets around the tessellation problem, but it's a bit hard on non-dynamic objects.

    I also implemented a hybrid Unity/custom deferred renderer for realtime GI code (splatting direct illumination). I didn't render out the G-buffer myself; I just used the DepthNormalsTexture and did the lighting myself. I also tried implementing point sprites via the method I mentioned above, but never finished it.
     
  9. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    Thanks... I was thinking about something similar, but "less efficient" isn't an option for me :) My current renderer, without any optimizations, is around 10 - 30% faster than Unity's... mostly because deferred shading renders the geometry in just a single pass, albeit an export-heavy one.

    Splatting indirect illumination? Isn't it way too slow for larger scenes? I like this approach: http://www.youtube.com/watch?v=olDIFEs76CE But CryEngine's is probably faster.
     
  10. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
  11. brianasu

    brianasu

    Joined:
    Mar 9, 2010
    Posts:
    369
    Yeah it's slow for large scenes. The only good methods for large scale environments are light propagation volumes or radiance hints but splatting is probably the simplest method to grasp. I think Crysis even mixes in some screen space GI for really far objects.
     
  12. virror

    virror

    Joined:
    Feb 3, 2012
    Posts:
    2,963
    Really interesting read, good luck : )
     
  13. nofosu1

    nofosu1

    Joined:
    Jan 13, 2011
    Posts:
    73
    Just wow :eek:. This topic is way too advanced for me, and I don't know if I'll ever get there, but good luck on your journey!
     
  14. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    Amazing! Great find! I'll need to do some testing, but it might even beat prefiltered VSM/ESMs! Which means I don't have to bother with dual paraboloid mapping (which don't solve the seam problem completely anyways) and go with the better behaving cubemaps.

    Hmm... I think this could somehow work with GI from reflective shadow maps as well! It's strangely similar to PCF.
     
  15. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
    Any progress on them shadows? :p
     
  16. virror

    virror

    Joined:
    Feb 3, 2012
    Posts:
    2,963
    You might want to give him more than one day since he said he was going to investigate : p
     
  17. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    More like a month :D I didn't have as much time as I wanted over the last month, but I'm working on this almost every day now. This week I changed the lighting pass to be computed in view space, which saves a few transforms and also lets me improve the precision of the normals... I also added a more realistic light falloff instead of Unity's texture-lookup one. By the way, I found an open-source dynamic occlusion culling engine, which I hope can be made to work with Unity. For the next few days I plan to refactor the shaders, put all the helper functions in one big cginc file and gradually implement a forward renderer for transparent objects. I also need to add spotlight support :D
    Then come the shadows, GI, etc., etc...
     
  18. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
    Last edited: Oct 17, 2013
  19. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    Thanks! I didn't know about that presentation either! Keep 'em coming :)
    It seems they are using the light buffer for forward-rendered objects? But that can't simply work for transparent objects! You don't solve the problem that way.
     
  20. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
    Haha, no problem! I try my best; I have no graphics programming knowledge myself, so this is how I help out! What did you think about the MSAA and SMAA?
     
  21. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    SMAA > MSAA. It's faster and it's a post-process. Why am I listing that as an advantage? Because it gets rid of specular aliasing as well. Otherwise, I think it looks quite comparable to 4x MSAA.
     
    Last edited: Oct 17, 2013
  22. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
  23. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    It's not perfect, but yeah.. a bit...
    http://www.hardocp.com/images/articles/1362959270a9V2nme9e6_8_14_l.png

    It also smooths out alpha tested geometry.. or just about any aliased edge that might show up somewhere.


    EDIT: I just came up with a possibly great idea for the GI: generate reflective shadow maps as a base... As far as I know, it would work just fine if I were to sample them for every pixel. BUT indirect lighting is quite low-frequency, which is why methods like radiance hints reduce the amount of sampling you have to do by sampling at points in a volume instead. Now, if you have a vast outdoor scene, that might actually end up performing worse than the per-pixel approach, because most of the hints will be very far away, very small, and probably end up occupying empty space anyway.
    So... how do you reduce the number of sample points you have to process? Do stuff in screen space again! Since it's low-frequency, even a buffer 16 times smaller in each dimension would do... that's 256 times fewer computations! So yeah, for every fragment of this buffer, you'd output spherical harmonics, to support even high-frequency normals, and then bilaterally upsample to the frame buffer size to handle depth discontinuities. On top of that, if you prefilter the RSM first, you don't have to take as many samples.
    Unless I'm missing something horribly obvious, the most demanding part of this might be just the RSM rendering.
     
    Last edited: Oct 17, 2013
  24. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
    How expensive do you think that GI solution will be?
     
  25. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    No idea... but it should be fairly cheap. I'm hoping for < 2ms
     
  26. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
    Also, about this: it seems Unreal Engine 4 uses a deferred renderer with one forward pass. That's two of the most well-known engines using a hybrid technique; how do they handle transparency?
    Edit: source http://www.unrealengine.com/files/misc/The_Technology_Behind_the_Elemental_Demo_16x9_(2).pdf
     
  27. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    I don't think they're doing anything fancy with the forward pass... They just render everything they can't render with the deferred path in it, which is quite a standard approach. Unity does it too.
    I like their bloom effect, by the way.
     
  28. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
    Oh, I didn't know that! :p Can you explain what you meant when you said this? Unity doesn't support transparent objects?
     
  29. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    It does, of course, but it has the very same problem as the deferred renderer: Camera.Render() is a big black box, so I can't pass in my custom objects and lights easily. It's no biggie, though. All I have to do is gather all the objects influenced by each light and simply render them with a forward shader instead of a G-buffer-writing shader. I could also ensure that the pixels covered by them won't get shaded in the deferred lighting pass by marking the depth as infinite or something, which also serves as a depth pre-pass of sorts for them, so no pixel gets shaded twice :)
     
  30. WGermany

    WGermany

    Joined:
    Jun 27, 2013
    Posts:
    78
    Now this is the good stuff! I wish for the same, and I also wish you luck! A lot more people should chime in on this and help out as a community; this is something Unity really needs. I'm just here to learn from you guys, the pros. I'm just a beginner, but hopefully over time I can do just as well and completely change up Unity's rendering. *grabs popcorn* *waits for updates*
     
  31. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
    Are you looking for suggestions on GI, or do you want to try your idea first?
     
  32. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    Thanks for the support :) And yes, please post any and all suggestions you have!

    EDIT: By the way... I really need a scene of sorts to test stuff on. Is the Sponza scene for Unity available to download for free somewhere?
     
    Last edited: Oct 21, 2013
  33. Breyer

    Breyer

    Joined:
    Nov 10, 2012
    Posts:
    412
  34. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    Oh, it's a single model... that's convenient. Thanks, but this one is not the one I wanted... It doesn't have those curtains and stuff. Not to mention the bump maps seem to be missing.

    I found the one I meant here: http://www.crytek.com/cryengine/cryengine3/downloads But I unfortunately don't have 3ds Max, and the .obj version doesn't work properly.
     
  35. larsbertram1

    larsbertram1

    Joined:
    Oct 7, 2008
    Posts:
    6,900
    simply wow! i wish you good luck!
     
  36. Pyromuffin

    Pyromuffin

    Joined:
    Aug 5, 2012
    Posts:
    85
    Hah! You remind me of me. I've spent a lot of time considering re-writing unity's deferred renderer (struggling with lack of MRT and weird RWT bugs), but I didn't think someone would actually do it! I appreciate what you're doing, and I wish you the best of luck.

    Thanks!
     
  37. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
    How do you feel about voxel cone tracing for GI? It offers quality comparable to an offline renderer and naturally handles reflections and everything! http://youtu.be/4-KSMRjUqGU
     
  38. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    It's DX11 only, doesn't handle glossy reflections in real time and overall feels too slow.
     
  39. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
    Last edited: Oct 24, 2013
  40. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    Oh, I must have seen a different implementation then... one done in a compute shader. Anyway, I'll try my method first... It's trivial to implement: just extend the shadow maps to render color and normals as well, sample the indirect light in screen space into a tiny buffer as spherical harmonics, and finally upsample and apply it to the frame buffer. Glossy reflections could be done by building a mipmap tree for the RSM and then sampling it by the reflection vector with LoD = specular power.
     
  41. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
    Is there a specific paper you got your gi idea from?
     
  42. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    No, but it's just screen-space radiance hints, really.

    By the way... I just realized I can't base glossy reflections on reflective shadow maps... the reflected image wouldn't include radiance information! So I need to find another way to get reflections in real time. I suppose I could bake them into tons of parallax-corrected cubemaps all over the level. But how do you make them react to lighting changes? Modulate them with the final radiance buffer? I remember reading that this is used in Battlefield 3, but I can't imagine how it could work accurately.

    So yeah... if you like searching around, please research the possibilities in this area. Even perfectly sharp reflections are fine, as I suppose they can be blurred in screen space... hopefully :D
     
  43. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
    Last edited: Oct 25, 2013
  44. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    Interesting... so spherical harmonics, now in per-vertex form? Too bad they capture only low-frequency lighting... so you'll never get mirror-like reflections out of it either, not to mention the bunny was rendered at 20 fps. I'd love to see a method that supports both sharp specular and blurry glossy reflections in real time... I guess I want too much :D
     
  45. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
    Yeah, I'm 99.9 percent sure only ray tracing or voxel cone tracing can offer truly real-time reflections. Screen-space reflections are also an option, but they only cover things on screen and would have to be mixed with something else, like cubemaps, which brings us back to square one.
     
    Last edited: Oct 25, 2013
  46. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    Yeah... erm... I guess you're right. You have to somehow access scene information in areas that are off-screen as well. A single dynamic cubemap at the camera position could do the trick, but that still doesn't cover all the cases.

    Anyway, I won't be doing voxel cone tracing. It's slow, and an implementation for Unity is already being worked on. Thanks for the effort, though! :)
     
  47. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
    Will Asset Store products still work with what you're doing?
     
  48. Dolkar

    Dolkar

    Joined:
    Jun 8, 2013
    Posts:
    576
    Depends... The way I see it, shaders not designed for this won't work. They could technically be rendered in a forward pass after everything else, but they wouldn't benefit from GI and shadows, so the lighting would be inconsistent. I think in that case it's better to just disable them and pop a warning. Some image effects that depend on Unity's depth/normal textures also won't work, for obvious reasons. Other than that, anything that doesn't touch the graphics should work just fine.

    It's simply not possible to design this in a way that makes it all work flawlessly just by drag-and-dropping a few scripts. I believe most of the people around here with a Pro license have a clue about what they're doing, though... so I'm trying to make it as easy to modify and extend as possible.

    On the other hand... I could make the parts like the occlusion culling engine / GI / reflections / whatever work with Unity and release them separately as well..

    Edit: Just wondering.. how should I name it?
     
    Last edited: Oct 27, 2013
  49. blueivy

    blueivy

    Joined:
    Mar 4, 2013
    Posts:
    632
    I'm not really good at picking names, but it should probably be informative and to the point about what the product does. I don't think you need to get too creative with it :p
     
  50. xalener

    xalener

    Joined:
    Nov 28, 2011
    Posts:
    20
    Oh god, don't do it. It looks really really bad. I don't think BF3 has it, but BF4 does and... It's distracting.