
Jove 2.0: DX11 Rendering System (Alpha Release)

Discussion in 'Assets and Asset Store' started by Aieth, Aug 17, 2014.

  1. lazygunn

    lazygunn

    Joined:
    Jul 24, 2011
    Posts:
    2,749
    I've had MegaDaz and got the refund; it's a redundant asset now that Daz Studio has its own morph export tools, but it did a great job when it was needed. You can simply export any morphs from Daz Studio through its morph rules section (it takes a bit of getting used to): any morphs marked as baked will appear in your model as they appear, and any morphs marked as exported will appear in your Skinned Mesh Renderer component as blendshapes after import into Unity.
     
  2. HappyCoder84

    HappyCoder84

    Joined:
    Aug 6, 2014
    Posts:
    72
    :) Wow, Jove, are you doing this all by yourself?
    If so, could you shed some light on how you've managed to get your skills?
    I've been studying rendering programming (mainly DX, OpenGL, some published rendering-related papers, open source code) from time to time.
    It is one of the programming fields I find fascinating. :)
    I'm currently using CryEngine and working on some physics-related code. I have strong physics and math skills (although still rough). I know it is a long hard journey.
    Any tip will be appreciated. Cheers.
     
  3. lazygunn

    lazygunn

    Joined:
    Jul 24, 2011
    Posts:
    2,749
    While I can't speak for Aieth, I have to offer these tutorials on DirectCompute: http://scrawkblog.com/ They've really opened my eyes to the whole black art of such things, and I'm quite excited at the prospect of implementing my own in the future.
     
  4. Licarell

    Licarell

    Joined:
    Sep 5, 2012
    Posts:
    434
    To All - This question is off topic, but I'm sure many here are well versed in this area. @lazygunn, how did you handle the textures for your interior scene? Or, more generally, how would one go about working with multiple meshes with multiple materials: could all meshes point to one large texture atlas to reduce draw calls?

    Say you have a 5k image and every mesh texture in your game points to it... would that reduce draw calls to one, or would you still have one draw call per mesh? How does that work exactly?

    Sorry if this is an ultra-noob question, but inquiring minds want to know...
     
  5. lazygunn

    lazygunn

    Joined:
    Jul 24, 2011
    Posts:
    2,749
    In my scene, the static objects (they never move) use both of the UV sets available in Unity. The first set is specific to each object and is used for the direct colour and other attributes; this is pretty much the standard thing in graphics and something you should recognise. To simplify the scene in terms of the number of separate objects, I attached closely grouped objects together, which resulted in some meshes with multiple materials. These still use the first UV set for their primary material properties, but list several materials in the Mesh Renderer. This is not ideal, as each material on a model counts as an extra draw call, so a multi-material mesh can add up to several draw calls. To get this down you can add an extra UV set for the entire model and bake the texture information into a single atlas: fewer draw calls, but any large objects with heavily tiled textures might suffer in texture resolution.
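    In Unity terms the combine step can be sketched like this (a minimal sketch, assuming every object already shares the single atlas material and the root object starts empty; also remember a single mesh tops out around 65k vertices):

    using UnityEngine;

    // Merge static children that share one atlas material into a single
    // mesh, so the whole group renders as one draw call.
    public class AtlasCombiner : MonoBehaviour
    {
        public Material atlasMaterial; // the one material pointing at the atlas

        void Start()
        {
            MeshFilter[] filters = GetComponentsInChildren<MeshFilter>();
            var combine = new CombineInstance[filters.Length];
            for (int i = 0; i < filters.Length; i++)
            {
                combine[i].mesh = filters[i].sharedMesh;
                // Bake each child's transform into the combined vertices.
                combine[i].transform = filters[i].transform.localToWorldMatrix;
                filters[i].gameObject.SetActive(false);
            }

            Mesh combined = new Mesh();
            // mergeSubMeshes = true collapses everything into one submesh,
            // i.e. one material slot and therefore one draw call.
            combined.CombineMeshes(combine, true, true);

            gameObject.AddComponent<MeshFilter>().sharedMesh = combined;
            gameObject.AddComponent<MeshRenderer>().sharedMaterial = atlasMaterial;
        }
    }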

    In my scene the second UV set given to the static objects groups areas of the scene into sections, each with its entire area unwrapped into UV2. This is used for the lightmap, which I rendered in 3ds Max using V-Ray in a render-to-texture pass. As I was only baking indirect light, no shadows, I used the VRayRawGlobalIllumination render element, which was then added as a map to each object in each group; the objects used a specific flavour of Jove shader combining the functionality of the deferred diffuse bump with a lightmap slot. These textures tend to be quite large as they have a lot of surface to cover, but since there can currently be only a single shadow-casting light and there is no GI, they add a huge deal of attractive shading to the scene. It's worth noting I used Substance Designer's baking tools to get an AO map for each section, which was stored in the alpha channel of the lightmap and used for specular occlusion.

    I'd usually use Knald for baking AO, but it seems better suited to single refined objects than to chunks of level geometry.

    It's worth mentioning Knald's new product, Lys: https://www.knaldtech.com/lys-open-beta/ Could this prove useful to Jove users?
     
  6. Licarell

    Licarell

    Joined:
    Sep 5, 2012
    Posts:
    434
    @lazygunn Thanks for the mini tutorial. I've been a maxer since the Kinetix days... and am very new to the world of 3D games and all the black arts that come with them. Getting my head around mesh and texture conservation has been a bit of a hurdle, and I'm trying to understand the "best practices" for getting the most out of the GPU/CPU.


    BTW, my plan when I build my models in Max, both static and dynamic, is to pack all the UVs onto one map and use that in my game.

    Good idea or crazy?
     
  7. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    Licarell, questions like that are best answered at polycount.com; it's an incredibly good community dedicated to realtime 3D. On the topic of your question:

    - Draw calls are separated by material (textures mean nothing; always think materials - if two materials share one texture, they don't magically become one draw call)
    - Draw calls are separated per mesh
    - Meshes with the same material can be batched and rendered in one draw call by the engine, but this only works up to a relatively low vertex limit; there is no point in counting on batching with 10k+ vertex objects
    - Draw calls differ between shaders - a transparent normal-mapped shader with a GrabPass will be inherently more expensive than an opaque unlit shader, taking more draw calls

    Aside from draw calls, keep an eye on vertices:

    - It's impossible to represent an edge with a UV seam using two vertices (you can't store two sets of UV coords from the same channel on the same vertex). All your UV seams hide double the vertex count, so the fewer of them you have, the better.

    - It's impossible to represent a hard edge using two vertices (you can't store two normals at once). All your hard edges hide double the vertex count, so again, the fewer of them you have, the better. You can also use one-face bevels with smooth edges at zero vertex-count cost, because the same number of vertices would be needed to represent a hard edge without a bevel.

    - It's impossible to represent an edge between two materials using two vertices, as that would require storing two material IDs in them. Again, all edges like that double the vertex count. You can verify the duplication with the quick test below.
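    As a hedged illustration: Unity's built-in cube reports 24 GPU vertices for just 8 corner positions, because every edge is hard and every face has its own UV island.

    using UnityEngine;
    using System.Collections.Generic;

    // Quick check of hidden vertex duplication: a cube has 8 corner
    // positions, but hard edges and UV seams split them into 24 vertices.
    public class VertexCountDemo : MonoBehaviour
    {
        void Start()
        {
            GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
            Mesh mesh = cube.GetComponent<MeshFilter>().sharedMesh;

            var unique = new HashSet<Vector3>(mesh.vertices);
            Debug.Log("GPU vertices: " + mesh.vertexCount +    // prints 24
                      ", unique positions: " + unique.Count);  // prints 8
        }
    }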

    A good introductory tutorial, even though it's pretty old:

    http://www.ericchadwick.com/examples/provost/byf1.html
    http://www.ericchadwick.com/examples/provost/byf2.html

    Relevant image to the last points:



    Other essential articles:

    http://www.polycount.com/forum/showthread.php?t=81154
    Understanding averaged normals and ray projection/Who put waviness in my normal map?

    http://www.polycount.com/forum/showthread.php?t=107196
    Making sense of hard edges, uvs, normal maps and vertex counts

    Regarding packing everything onto one map: only do that when it's convenient to do so - for example, when making a set of objects that are always placed together, like an atlas for Skyrim-style modular cave walls. Putting in things like a character that can also appear in a different environment is wasteful and will impair your productivity.

    There is a far better way to atlas and merge whole scenes, if you're into that. It's called megatexturing; it's done automatically and is used in id Tech 5 (Rage, Wolfenstein: The New Order). It's also available for Unity through the Amplify Textures 2 asset. Very useful tech if you are able to produce an enormous amount of artistic content, like individual textures for every meter of every wall of every room of every level - it keeps performance requirements very low while letting you present tons of content.
     
    Last edited: Sep 18, 2014
  8. Licarell

    Licarell

    Joined:
    Sep 5, 2012
    Posts:
    434
    @bac9-flcl - Thank you for taking the time and locating all that information... I greatly appreciate it!
     
  9. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    @Aieth, is there a way to render unlit materials at the moment? Specifically, I'm interested in rendering UI, which I achieve by placing a quad in front of the camera with a render texture fed to its material. I'd like it to ignore all lighting, completely maintaining the colors as they were written by the RT camera - how should I go about that? A nice alternative is to simply make the shader emissive (I get a neat glow from the UI for free too), but I'm still curious whether an unlit option is possible.

    On a related note, is there a way to render such a quad on top of everything, to prevent it from clipping into walls?

    Edit: Oh wait, I think I know the first one. I have complete access to the out color of the forward shaders, since they call/calculate absolutely everything inside and output the final color directly, in contrast with deferred shaders, which just pass properties to the GBuffer. Slicing off the whole ShadingProperties section and outputting the color directly will probably achieve what I want, with the exception of it being truly unlit (the output will still be subject to the tonemapping in the camera).
     
    Last edited: Sep 19, 2014
  10. Jesse_Pixelsmith

    Jesse_Pixelsmith

    Joined:
    Nov 22, 2009
    Posts:
    296
    Went ahead and bought this just now. I hadn't even heard of it until someone mentioned it to me in a random conversation about Unity 5 (which got into GI and lighting, so I guess not so random). I had just read the Unity blog post about Enlighten, where they basically said they had to compromise on some of the dynamic lighting to cater to mobile and lower-end machines. A smart move IMO on the whole, but I couldn't help feeling a little disappointed, as my current project is more high-end/current-gen.

    Anyways, I've been reading this thread - actually went through all the posts - because it's genuinely interesting. I'm very much a novice when it comes to rendering knowledge past the basic fundamentals, and my shader-writing skills leave a lot to be desired. But @Aieth seems to break it down into understandable language.

    More important for me, though (as a developer who's making a game), is the practical usability now and over the next six months of a project. I have to admit, while the logic you gave in your first post for completely rewriting the rendering pipeline and not being compatible with third-party rendering assets is very sound... it also seems a bit all or nothing. Speaking strictly from my point of view: if I use this for a game, I pretty much have to kiss some other assets goodbye and hope that equivalent features can be developed in a reasonable time frame.

    For example, not long ago I bought and started playing around with Sunshine (http://unitysunshine.com/), which is a pretty cool asset in its own right, but it's likely not going to be compatible (and indeed it looks like Jove has, or soon will have, all of the features that Sunshine does).

    I'm excited for the basic terrain shaders coming soon, as I'll then be able to start bringing Jove into my game, which is a "sandbox" crafting/survival sim with large-ish outdoor areas (4 km x 4 km+), player-created buildings, and dynamic trees that can be cut down.

    I actually was about to pull the trigger on RTP this weekend, and then I saw it called out specifically earlier in the thread. Again, it looks like a great asset in its own right. I'm messing with terrain stuff now, and Unity's terrain shaders look... well, they still look like they did in 2009. I don't see anything else that rivals RTP currently (Marmoset Skyshop maybe? But I've heard buzzings that their stuff doesn't scale well for large areas). Anyways, it sounds like RTP would be fundamentally incompatible. Not a negative, and I get the reasons, but it still kind of feels like putting all the eggs in one basket, if you can see where I'm at :)

    It also seems incredible that as one person (correct me if I'm wrong) you've been able to create lighting that looks better than anything in Unity 4 to date, and like most of Unity 5 (minus the GI, which I guess is coming - but as I mentioned, I'm doing a lot of procedural stuff in my game, so Unity's GI probably won't help me) - while Unity has a team of engineers working on it (+ Geomerics).

    Anyways - my hat is off to you. I'm hoping this will be usable in my project in the near future, but I realize it's alpha, and there's that whole rule about not planning your project around presently developing technology... (but so purrrrrrrty!) In any case, like Fholm said, it's worth the $100 price of admission just to play around with :)
     
    Last edited: Sep 21, 2014
  11. Jesse_Pixelsmith

    Jesse_Pixelsmith

    Joined:
    Nov 22, 2009
    Posts:
    296


    Dat volumetric scattering + fog... a screenshot doesn't do it justice; it's so much more impressive in realtime (and I'm just futzing around with sliders). I could totally get a creepy night-time village thing going:

    Hey, who the heck are you and what have you done with Unity?
     
    Last edited: Sep 21, 2014
    shkar-noori, hopeful and bac9-flcl like this.
  12. elias_t

    elias_t

    Joined:
    Sep 17, 2010
    Posts:
    1,367
    I agree with the above!!

    Even though I don't need Jove for my current project, I will buy it just to support the development!

    Keep it up, Aieth.
     
  13. Aieth

    Aieth

    Joined:
    Apr 13, 2013
    Posts:
    805
    I am the one who primarily works on Jove, and most of the code in it is mine, but I work with two other people whose help and input have been invaluable. The way I learn best is by diving in way over my head and then keeping at it until it makes sense ;) I remember when starting out I spent days just figuring out how to sample a cubemap on the CPU. Basically, decide on something you want to do and then keep at it. The secret is that there is no secret. It is all hard, and at times very frustrating, work :p
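    For the curious, the core of that exercise looks roughly like this: pick the dominant axis of the direction, project onto that cube face, and fetch the texel. This is only an illustrative sketch (face orientation conventions differ between APIs, so the UV signs are approximate, and the cubemap must be CPU-readable):

    using UnityEngine;

    public static class CpuCubemap
    {
        public static Color Sample(Cubemap cube, Vector3 dir)
        {
            float ax = Mathf.Abs(dir.x), ay = Mathf.Abs(dir.y), az = Mathf.Abs(dir.z);
            CubemapFace face;
            float u, v, ma;

            if (ax >= ay && ax >= az)       // X-dominant direction
            {
                face = dir.x > 0f ? CubemapFace.PositiveX : CubemapFace.NegativeX;
                ma = ax; u = dir.x > 0f ? -dir.z : dir.z; v = -dir.y;
            }
            else if (ay >= az)              // Y-dominant direction
            {
                face = dir.y > 0f ? CubemapFace.PositiveY : CubemapFace.NegativeY;
                ma = ay; u = dir.x; v = dir.y > 0f ? dir.z : -dir.z;
            }
            else                            // Z-dominant direction
            {
                face = dir.z > 0f ? CubemapFace.PositiveZ : CubemapFace.NegativeZ;
                ma = az; u = dir.z > 0f ? dir.x : -dir.x; v = -dir.y;
            }

            // Map [-1, 1] face coordinates to texel indices and fetch.
            int size = cube.width;
            int x = Mathf.Clamp((int)((u / ma * 0.5f + 0.5f) * size), 0, size - 1);
            int y = Mathf.Clamp((int)((v / ma * 0.5f + 0.5f) * size), 0, size - 1);
            return cube.GetPixel(face, x, y);
        }
    }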

    You can throw whatever unlit shader you like into a forward layer and it should work, since it doesn't use any lighting. Is there a specific reason you don't want to use a second camera? My work with UI in Unity has been very limited, but I seem to recall the most painless way of doing it is having a second camera with a higher depth and a "GUI" layer mask, onto which you throw all your GUI stuff.
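    Something like this, in rough C# (a sketch assuming a user-created "GUI" layer exists in Tags & Layers; clearing only depth keeps the scene visible and also stops the quad from clipping into walls, which covers the second question):

    using UnityEngine;

    public class GuiCameraSetup : MonoBehaviour
    {
        void Start()
        {
            int guiLayer = LayerMask.NameToLayer("GUI");

            var go = new GameObject("GUI Camera");
            Camera guiCam = go.AddComponent<Camera>();
            guiCam.depth = 10;                          // renders after the main camera
            guiCam.clearFlags = CameraClearFlags.Depth; // keep the scene, clear depth
            guiCam.cullingMask = 1 << guiLayer;         // draw only GUI objects

            // The main camera must not render the GUI layer itself.
            Camera.main.cullingMask &= ~(1 << guiLayer);
        }
    }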

    I completely understand where you are coming from with the eggs in one basket; I would have been cautious too :) The only alternative is trying to combine a bunch of other assets in the same project, which I can easily see turning into a major pain. That, or moving to UE4, but that is a whole other can of worms with its own issues. It could have been worse, though: instead of several alternatives with various advantages and disadvantages, you could have had zero alternatives ;)

    As for other assets, the simple rule of thumb is that any shader that somehow interacts with Unity's lighting system (e.g. it responds to a directional, point or spot light), or that uses the camera depth texture, won't work.

    Just so you know, it is looking like I might have to drop shadow casting for terrains, at least until I can make a proper terrain system. Terrain LODs are screwing with the shadow rendering, leading to very funky results.

    Your support is much appreciated :)

    Thank you, I hope you will enjoy it!







    On an unrelated note, I have decided to switch around my priorities slightly: ambient light is my next priority, including a GI solution. I want Jove to be as general-case a solution as possible; it should work out of the box in as many scenarios as possible.

    First, for those of you unfamiliar with the term: ambient lighting covers everything except the direct light. This includes reflections, diffuse bounces, etc.

    Currently the ambient system works like this. You have a global ambient cubemap which is used both for global specular and global diffuse (although the diffuse is an SH convolution, for those of you who would question two lookups per pixel!). You can also place reflection probes, with either a sphere or a box influence volume. These probes provide local data, and contrary to what the name indicates, they supply not only specular ambient but also diffuse ambient (sorry about that :p).
    This system works fairly well for outdoor scenes, as a general cubemap baked one meter above the ground is a reasonable approximation of most lighting conditions (although, for example, the ground in a dense forest would end up too bright). However, it quickly breaks down for indoor scenes. The lighting data from a single cubemap cannot capture the detail required to light a room with furniture and other occluders; it simply looks flat.
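    To make the "SH convolution" remark concrete: once the cubemap has been convolved into nine RGB spherical harmonic coefficients offline, the per-pixel diffuse ambient reduces to a small polynomial in the surface normal. An illustrative sketch using the standard cosine-convolved constants from Ramamoorthi and Hanrahan (not Jove's actual code):

    using UnityEngine;

    public static class ShAmbient
    {
        // c[0..8] = L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22 (RGB each)
        public static Vector3 EvalDiffuse(Vector3[] c, Vector3 n)
        {
            const float c1 = 0.429043f, c2 = 0.511664f,
                        c3 = 0.743125f, c4 = 0.886227f, c5 = 0.247708f;
            return c4 * c[0]
                 + 2f * c2 * (c[1] * n.y + c[2] * n.z + c[3] * n.x)
                 + 2f * c1 * (c[4] * n.x * n.y + c[5] * n.y * n.z + c[7] * n.x * n.z)
                 + c3 * c[6] * n.z * n.z - c5 * c[6]
                 + c1 * c[8] * (n.x * n.x - n.y * n.y);
        }
    }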

    Note that the following list is subject to change; nothing is guaranteed at this point.
    Future Jove ambient modes
    • Global ambient cubemap with reflection probes (currently available)
    • Static light map with light probes for dynamic objects
    • Static light probes for all objects
    • Static radiance probes for all objects, GI for dynamic lighting

    Now, I am well aware most of you are tired of light maps. But let's face it, light maps exist for a reason: they allow for very high quality, albeit diffuse only, not specular.
    There are two major issues with light maps, however. The first is that they only support static geometry. Even with light probes added for dynamic objects, that only works when your dynamic objects are few and small, e.g. humans running around in a building. You can't tag a wall dynamic and expect it to fit in with the other walls. The second issue is baking times, with complex scenes possibly taking hours or worse.
    Now, while my implementation of light maps cannot solve the static issue, as that is an insurmountable obstacle due to the nature of the technique, I can do something about baking times. Given the general sexiness of DX11: cue GPU-powered radiosity. My implementation is only in its infancy as of yet, but no scene should ever really take longer than 15 minutes. The best part: it doesn't compromise on quality, and it scales very well with larger scenes.

    Lightmap.png
    GPU baked radiosity light map calculated in 25 seconds
    Okay, what if you don't like light maps? Or what if you actually have a wall that you don't want to tag static? Well, then there's the static light probe mode. It is similar to light maps in that it only receives color from static objects, but the resulting light is distributed among objects the same way, no matter whether they are dynamic or static. This mode is basically the bastard child of the radiance probe solution and light maps. Light probes in regular Unity are calculated on a per-object basis, meaning that they work poorly for large meshes: which probe gets assigned to a large room that has 10 probes inside of it?
    This solution instead uses a world-aligned grid to distribute the probes. It does have its own issues, though. First off, you need a 3D texture following the camera. If one voxel of the 3D texture equals 1 meter in the world, you get light bleeding over 1 meter. This may or may not be an issue; it will, for example, cause light to bleed through thin walls, but on the other hand all lighting is smooth and there are no harsh transitions. Also, you don't want a 3D texture of infinite size, as that would also mean infinite memory. This likely means that you will want/have to fade out the local ambient at a distance of ~100 meters from the camera. You can of course fade to something else, e.g. a global cubemap or just a flat ambient, but the local data is limited to a local area around the camera.
    The downside to using this over light maps is lower quality: everything is averaged into a probe. The upside is far smaller storage requirements; a scene with 10,000 probes is only a single megabyte of storage. Also, since the lighting solution is unified, everything will fit together properly and dynamic objects will look the same as static objects (although only static objects contribute to the lighting).
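    A sketch of the grid lookup idea (sizes and names are illustrative, not Jove's): the volume covers a fixed world-space box around the camera, the origin snaps to whole voxels so probes stay world-aligned as the camera moves, and any shaded point quantises to a probe index.

    using UnityEngine;

    public class AmbientGrid : MonoBehaviour
    {
        public float voxelSize = 1f; // 1 voxel = 1 m => light bleeds over ~1 m
        public int resolution = 64;  // 64^3 voxels => 64 m of local ambient data

        public int WorldToProbeIndex(Vector3 worldPos, Vector3 cameraPos)
        {
            // Snap the volume origin to whole voxels so probes don't "swim".
            float extent = resolution * voxelSize * 0.5f;
            var origin = new Vector3(
                Mathf.Floor((cameraPos.x - extent) / voxelSize) * voxelSize,
                Mathf.Floor((cameraPos.y - extent) / voxelSize) * voxelSize,
                Mathf.Floor((cameraPos.z - extent) / voxelSize) * voxelSize);

            Vector3 local = (worldPos - origin) / voxelSize;
            int x = Mathf.Clamp((int)local.x, 0, resolution - 1);
            int y = Mathf.Clamp((int)local.y, 0, resolution - 1);
            int z = Mathf.Clamp((int)local.z, 0, resolution - 1);
            return x + resolution * (y + resolution * z); // flattened 3D index
        }
    }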

    As for the final mode, this is something I've been fiddling with for the past 10 months. My first implementation never made it to production-ready status (http://forum.unity3d.com/threads/jove-gi-precomputed-radiance-transfer-volumes.216868/), mainly due to issues with integration into the Unity lighting pipeline, but also precomputation times. What's that you say? Use the lightmap algorithm I am now working on for this as well? Well, that's a great idea!
    I have yet to start on parts of this system, but given that it is probe based instead of texel based like a lightmap, the precomputation requirements ease up heavily (it does not matter if the calculation is noisy, as the noise disappears when stored in a probe). I estimate the baking times to land at minutes, even for the largest of scenes. This system will function similarly to the global illumination in Far Cry 3, or (I'm almost sure of this, but don't shoot me if it turns out I'm wrong) Tom Clancy's The Division.
    By now it might sound like this is years away, but the fact is all of the hard parts are nearly done! The probe distribution algorithm is in, and the light map baker is well on its way. When both of those are done, the rest of the system is actually pretty simple.

    I'm gonna be upfront about the disadvantages of this system. The major ones are that it bleeds over distances of a 3D texture voxel (you can set the 3D texture resolution and the world size of each voxel) and that it does not factor in dynamic objects (they do get lit, they just don't contribute). The major advantage is that it's fast: this system is probably going to be faster than light maps, given the size of a large light map, or at the very least around the same speed. It is unlikely to reach the same peak quality as Enlighten, but it won't suffer from the issues Enlighten has with flashes of light (since Enlighten converges on the solution, it always trails a few frames behind). It will also precompute faster, and the stored data will be very lightweight (likely at most a few megabytes). It is also going to scale very well with point and spot lights; the overhead per light is negligible.

    After I'm done with this, I am moving on to ambient specular: screen space reflections. When that is done, Jove will support the lighting requirements of all the most common genres. The only thing possibly lacking is proper sky occlusion for completely dynamic scenes, but given how rarely such a feature is required (compared to the others), it will have to be put on the back burner for now.
     
    braaad, vivi90, JecoGames and 3 others like this.
  14. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    Sounds absolutely incredible! I agree that general-use features like ambient lighting and screen space reflections should take priority over features like improved sky rendering. Great call. :)

    One question: how would such a system handle environments that are not assembled until runtime? Nothing radical like Minecraft, though. Let's say I have a set of Skyrim-style dungeon segments that are procedurally assembled into a random dungeon when the player enters a level. Some segments feature openings in the ceiling that let the sky light in. What would be the best way to handle ambient light precomputation in that case?

    Some things, like high-frequency AO, do not depend at all on which segment is attached to which, so I can easily precompute them independently and store the result in per-segment prefabs. But ambient lighting is a different beast: it varies over the level, gradually falls off over dozens of segments when some light shines into the interior, and so on.
     
    Last edited: Sep 21, 2014
  15. Aieth

    Aieth

    Joined:
    Apr 13, 2013
    Posts:
    805
    You would handle that the way you handle baking lightmaps for such a situation: take a bunch of your assets and throw a probe in the middle, hit precalculate, and then store it in a prefab. I suppose I can work in a system that lets you do something like "bake only targeted", so you don't have to rebake the entire scene. In such a case, however, you want to target all objects that are to contribute to that probe's lighting. If you fail to target the ceiling, for example, you would get an outdoor probe :p
     
  16. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    What information do those probes store, generally speaking? "This area can propagate ambient lighting in such and such direction"? Or a cubemap (or some other format) describing the literal ambient light at the moment of the bake?

    I'm not sure how the latter would be of any help in the use case I described, as the environment would most certainly not match the environment that was around during per-segment probe creation.
     
    Last edited: Sep 21, 2014
  17. Aieth

    Aieth

    Joined:
    Apr 13, 2013
    Posts:
    805
    The probes store a radiance transfer encoded in spherical harmonics. I'm not sure what you are unsure about? :p Basically, if you can bake a lightmap for your modular geometry, you can bake a probe for it, with the added bonus that you are far less likely to have seams like you would in a lightmap, since the probes are averaged together.
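    In other words (an illustrative sketch, not Jove's actual data layout): the bake stores a transfer vector of SH coefficients per probe, the current lighting is projected into the same SH basis at runtime, and the received radiance reduces to a dot product per colour channel. That is why the light's colour, intensity and direction can change freely at no extra cost.

    public static class PrtSketch
    {
        // transferSH: baked response of the surrounding geometry (per probe).
        // lightSH:    current light/environment projected to the same basis.
        public static float Radiance(float[] transferSH, float[] lightSH)
        {
            float sum = 0f;
            for (int i = 0; i < transferSH.Length; i++)
                sum += transferSH[i] * lightSH[i];
            return sum;
        }
    }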
     
  18. TheHenk

    TheHenk

    Joined:
    Mar 24, 2011
    Posts:
    13
    Aah, you have to stop talking about future stuff like this! As soon as you mention something I suddenly need it right now..
     
  19. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    @Aieth, I'm unsure what precomputed radiance transfer actually describes. Is it describing ambient specific to the environment it was calculated in (how ambient light falls on a piece floating in empty air, for example), or is it describing properties of the space that are independent from the environment of the affected area (for example, "surrounding space transfers ambient light through such and such vector and occludes it through other vectors")? Simple illustration :D



    Basically, I'm wondering if I can get the result on the bottom right if I only ever have the isolated pieces from the top to precompute data with.
     
  20. Aieth

    Aieth

    Joined:
    Apr 13, 2013
    Posts:
    805
    Haha, I feel the same way when designing the system on a piece of paper :p

    Nice picture :p It would be the left example that is correct: "describing ambient specific to the environment it was calculated in". What it does, simply put, is calculate how light bounces when coming from different directions, without assuming any properties of the light (e.g. color, intensity or direction), based on the environment it is in.
     
  21. lazygunn

    lazygunn

    Joined:
    Jul 24, 2011
    Posts:
    2,749
    Loads of great stuff! This pretty much provides a direct answer to all of the features I was still hankering for via Unity 5, but here it's all fully integrated, one system. I'm very much looking forward to the results, and it's now an almost indisputable shoo-in for the main application I've had in mind for a while.

    On the subject of that, Unity are implementing a VR 'mode' as standard into Unity 5.x, and even patching it into 4.5.x. This is quite surprising, in a good way, and very fortunate for me. While it's not there yet (although word is it won't be long), I'll be spending a lot of time with it, so I'll likewise be giving Jove some good testing with it too.
     
  22. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    Are you saying that ambient light is not propagated at all (it is static/isolated to each precalculated object), and the final result would be the zebra-like one on the bottom left, with ambient contribution completely fixed no matter what geometry and probes surround the segment?

    The fundamental question I still don't understand is whether that implementation does anything at realtime - whether there is any interplay between precalculated points. If I add another corridor segment to the end of an existing one, with the same probe inside of it, will the amount of ambient they receive through the connection point drop? Or will they stay completely isolated, only considering themselves and subtracting global ambient within their bounds?
     
  23. Aieth

    Aieth

    Joined:
    Apr 13, 2013
    Posts:
    805
    Each probe is completely isolated and knows nothing of any other probes. They don't interact or talk at all, all they care about is the position and direction of light sources. So it would indeed be the zebra one.
     
  24. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    Aha, so what would I have to do to get the bottom-right example working? Is there a way to, let's say, scatter and precalculate the probes over the level immediately after loading? I would not mind stalling the player on a loading screen or something like that if it's the only way to get lighting that is correct not for individual pieces but for the procedural structure as a whole.
     
    Last edited: Sep 21, 2014
  25. Aieth

    Aieth

    Joined:
    Apr 13, 2013
    Posts:
    805
    You would have to recalculate. We'll see how fast I can get that, but it is probably going to be slower than what you might want for a loading screen. How much are your modular pieces going to vary in color? Couldn't you get away with, say, tiling the same corridor 5 times and then baking the center corridor?
     
  26. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    The obvious approach would be to simply use far bigger segments for procedural generation, with some standardized ambient level at the attachment points (let's say zero, achieved by always placing attachment points multiple turns away from the closest ambient light source). That would make it work. But I'm not sure what the benefit of the new system is in that use case, because you can achieve the same using Beast-lightmapped ambient occlusion in UV2. Transfer of light to dynamic objects passing through those environments?
     
  27. Aieth

    Aieth

    Joined:
    Apr 13, 2013
    Posts:
    805
    I might be missing something; I don't really understand the issue. When you say Skyrim-sized, what exactly does that mean? :p Perhaps it is much smaller than what I imagine.
     
  28. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829


    Imagine a level generated from a linear sequence of turns and straight sections of different types. They can be 4 meters high and 8 meters long, or 16 meters long, or maybe 16 meters high and 64 meters long. Basically a snail-like dungeon made from isolated chunks. No complex cases like tile-based rooms, just a linear sequence of corridors.

    Let's say one of the segments has a window to the sky. How can I make an ambient spike propagate nicely from such a segment no matter what segments surround it? The obvious solution is to manually place a relatively low-radius ambient light within such a segment. Would precalculating anything help at all in this case?

    Sorry if I'm not clear with the description; I can make an illustration depicting the particular case a bit later.
     
    Last edited: Sep 21, 2014
  29. Aieth

    Aieth

    Joined:
    Apr 13, 2013
    Posts:
    805
    The best way to handle that would be, I think, precalculating say 10 probes in different combinations of geometry, and then just assigning the "most fitting" probe to each "chunk" of modular stuff. It's really not gonna make a difference whether there are pipes there or not, but the color of the ground/walls will affect it.
     
  30. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    Hmm, okay. So, in the most primitive case, that can be achieved by creating three probe sets for each segment:

    - Illuminated from endpoint A, occluded from endpoint B
    - Illuminated from endpoint B, occluded from endpoint A
    - Occluded from both endpoints

    And then (having access to info about neighbor segments from the generation code) activating the set that fits those neighbors?
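    In generator code that could look something like this (illustrative C#, not Jove API; the both-ends-lit case is left out to match the three sets above):

    using UnityEngine;

    public class DungeonSegment : MonoBehaviour
    {
        // One baked probe-set variant per illumination case.
        public GameObject litFromA, litFromB, occluded;

        public void ApplyProbes(bool neighbourALit, bool neighbourBLit)
        {
            litFromA.SetActive(neighbourALit && !neighbourBLit);
            litFromB.SetActive(neighbourBLit && !neighbourALit);
            occluded.SetActive(!neighbourALit && !neighbourBLit);
        }
    }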
     
  31. nasos_333

    nasos_333

    Joined:
    Feb 13, 2013
    Posts:
    13,360
    A question:

    Is there a specific way Jove handles point lights that are close to one another?

    Or something like Unity's "not important" render setting, etc.?

    I plan to make a manager to handle point lights that get very close (merge or eliminate, etc.) for the GI Proxy system. This should be especially useful for the GI case, where each GI light spreads bounce lights that may overlap if those GI systems come close.

    I am currently looking at how exactly Unity handles lights, so I don't create something redundant (and the same goes for Jove).
     
  32. Aieth

    Aieth

    Joined:
    Apr 13, 2013
    Posts:
    805
    Yeah, that sounds about right. Although, I'm not sure if this is what you meant, but you can ignore anything that has to do with lights. Lights should "just work" when plopped in; what's relevant is the color and/or occlusion from the sky provided by the geometry around the probe.

    Jove does not do anything special with lights :) There will be a "not important" setting in the future when DX9 support is added (you don't want hundreds of lights in DX9 :p), but at the moment no such thing exists, and it won't for a while to come.
     
  33. nasos_333

    nasos_333

    Joined:
    Feb 13, 2013
    Posts:
    13,360
    There is a DX9 mode coming? :)

    That is some stunning news. I don't use DX11 in my game, and a DX9 mode would be amazing to have as an option.

    Thanks for the clarification. I will roll the manager for both Unity and Jove; it will be interesting to have in both cases, I guess.

    Also, I can't wait to use GI Proxy with Jove DX9; I will tailor the system to give the best effect possible for sure.

    The results with DX11 are also stunning (IanStanbridge kindly offered these pics):

    http://forum.unity3d.com/attachments/lots-of-gi-png.112841/

    This uses 300 lights with 80 GI casters though, so I need to lower the number :), since raycasting gets heavy at that point.
     
  34. lazygunn

    lazygunn

    Joined:
    Jul 24, 2011
    Posts:
    2,749
    Maybe the DX9 version could be really useful for swapping a scene to a mobile-friendly flavour from the DX11 Jove?
     
  35. nasos_333

    nasos_333

    Joined:
    Feb 13, 2013
    Posts:
    13,360
    I don't use DX11 because I target PS4, PC and maybe Xbox One. Hopefully the DX11 mode will be extended with OpenGL support in Unity 5, so the GPU can be used in all modes.

    Until then I would not limit the game to DX11; it seems too limiting for now, platform-wise.
     
  36. Aieth

    Aieth

    Joined:
    Apr 13, 2013
    Posts:
    805
    It still uses full deferred shading, so most mobile platforms, if not all, would choke on the bandwidth. I might be wrong though; the technology advances quickly.

    I'm not sure how a Unity compute shader compiles to PS4/XB1, but hypothetically it should work just fine with Jove. PS4/XB1 support everything required to run Jove, although, as I said, I am not sure how Unity as an engine handles it.
    IIRC, the issue with OpenGL isn't really with Unity. Unless things have changed recently, I recall that Apple does not even support the version of OpenGL that runs compute shaders. So Unity supporting it would make no difference in that regard.
     
  37. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    You are correct, Apple is behind with its OpenGL support. Not really surprising though; the GPU-related things they are concerned about are usually of the "let's accelerate 4K video editing in Final Cut Pro" sort. They don't seem very concerned with that vi-de-o-ga-me fad today's youth keeps nagging about :D
     
  38. nasos_333

    nasos_333

    Joined:
    Feb 13, 2013
    Posts:
    13,360
    I don't know how they handle it. Since the PS4 does not use DX11, the compute shaders will probably just not compile for a platform that doesn't support DX11.

    But I can't be 100% sure of that, though the fact that the mode is called DX11 maybe means it uses DX libraries to do the final compilation, and thus would probably not work on PS4.
     
  39. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    nasos_333, PS4/Xbox One fully support compute shaders, and with Unity using an intermediate shader language instead of direct HLSL/GLSL, I see absolutely no issue with existing compute shaders being properly compiled for PS4/XBO builds.
     
  40. nasos_333

    nasos_333

    Joined:
    Feb 13, 2013
    Posts:
    13,360
    Do compute shaders work on OpenGL?

    Or if I compile with DX11 mode on, will this be compatible with non-DX11 systems (that use OpenGL of the same level)?

    I thought DX11 mode was limited to just DX11 platforms :)
     
  41. Aieth

    Aieth

    Joined:
    Apr 13, 2013
    Posts:
    805
    What Unity calls DX11 isn't really DX11; it is "this bunch of features that applies to systems with DX11-like capabilities". PS4/XB1 both have their own versions of compute, which I assume Unity's compute shaders compile to. But then, I have never developed for consoles with Unity, so I'm not sure exactly how it works.
     
  42. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    Just looked it up on the net, and yeah, PS4 builds have supported the same SM5.0 and compute shaders since Unity 4.3.
     
  43. nasos_333

    nasos_333

    Joined:
    Feb 13, 2013
    Posts:
    13,360
    This is incredible, thanks for that. It opens up a whole new world :)

    I will have to contact Unity, I suppose, for details on how I should prepare my game to be compatible with the "Unity for PS4" version.

    BTW, has Jove been tested on PS4, i.e. has a project been compiled in the PS4 Unity version?
     
  44. IanStanbridge

    IanStanbridge

    Joined:
    Aug 26, 2013
    Posts:
    334
    Unity already announced that in Unity 5, Xbox One, PS4, DirectX 11, DirectX 12 and Apple Metal will all use the same shaders; they will all just be recompiled into the format needed by each platform. They also said that if you create a DirectX 11 shader in Unity 4, it will be able to be recompiled like this in Unity 5.

    They mentioned it in the blog post about Apple Metal support in Unity 5. All of these platforms use basically the same shader model 5 capabilities; they just use different drivers in different ways to reduce the strain of the CPU dealing with driver communication, and so handle draw call processing more efficiently.

    For example, think of DirectX 12 as identical to DirectX 11, except that you will be able to get away with more draw calls without bringing the CPU to its knees.
     
  45. JecoGames

    JecoGames

    Joined:
    Jan 10, 2013
    Posts:
    135
    Could you maybe look into light propagation volumes for more dynamic use? I loved the GI in Far Cry, but feel that having the choice of LPV would be nice.
     
  46. nasos_333

    nasos_333

    Joined:
    Feb 13, 2013
    Posts:
    13,360
    Thanks for the info; it seems the new approach will be great. Since it will work with Unity 4 shaders, I could start working on some DX11-controlled integration, I guess (just in case something goes bad in the transition).
     
  47. Aieth

    Aieth

    Joined:
    Apr 13, 2013
    Posts:
    805
    We'll see in time. What I am doing now is very daunting on its own, and adding a whole other solution on top of that would be hard :p Also, LPVs work badly with multiple light sources, and that doesn't really roll with the "hundreds of lights!" thing Jove has got going. But we'll see in the future.
     
  48. JecoGames

    JecoGames

    Joined:
    Jan 10, 2013
    Posts:
    135
    But with GI you generally don't need hundreds of light sources ;) I understand that this wouldn't be implemented at the same time as the main solution, but for open-world games that require a lot of realtime specular it could be something to add to the roadmap ;)
     
  49. Jesse_Pixelsmith

    Jesse_Pixelsmith

    Joined:
    Nov 22, 2009
    Posts:
    296
    Not to break up the PS4/Jove discussion (I'm very interested in targeting the platform), but I just wanted to clarify Jove's plans for terrain, as I've only seen bits of it here and there.

    So, first off, when you say you're doing a proper terrain system, I assume you just mean the rendering part, right? As in, it will still use Unity's native terrain tools for composing?

    Second, you mentioned that you'd be cloning Unity's standard terrain shader to work with Jove fairly soon, although now that will be without terrain shadow casting (no shadows cast by a mountain etc., but a tree might still cast a shadow on the terrain?) - until you create the new terrain system.

    As far as the new terrain system goes, what are the key features you have planned? I'm not clear on all the buzzwords, but I assume something to make vertical cliffs look good (tri-planar mapping?), the terrain shadow casting you mentioned, some PBR stuff for metals and wet rocks and such, and stuff to make it look good up close (POM or tessellation)?

    Other things that I would not assume, but that I think look awesome:
    - Dynamic snow: http://www.stobierski.pl/unity/RTP31_Webplayer8L/Webplayer8.html
    - Water flow shader:

    (though I guess this is mostly based on POM? Would a third party be able to make a shader like this as an extension once that's in?)

    When I type all this out, it definitely seems like a tall order. At the risk of being "that guy": do you have an ETA for some of this stuff?

    We'll be in the prototyping stage for the next couple of months while we get other systems working. At that point we're going to start cranking up the visual factor, which includes lighting, and that might be a good time to switch over to Jove. I assume the terrain system won't be in by then, but maybe the basics will (i.e. it won't be black)? Then we plan on showing some player-facing gameplay footage in Jan/Feb early next year. While we won't be releasing for a good while after that, it would be nice to start showing a trailer for crowdfunding around then, so if some of the terrain system were slated for around that time, we could probably strongly consider putting Jove on the roadmap.

    Otherwise we'll probably need to bite the bullet and go the initial route, which is using Unity's stock system + RTP + Sunshine + probably a mix of other camera effects etc. Your point about too many things in the pot makes a lot of sense, and getting things to work together nicely can be a pain, but I've been on a project that shipped with those tools, so that will be the "Plan B" :)
     
  50. bac9-flcl

    bac9-flcl

    Joined:
    Dec 5, 2012
    Posts:
    829
    I'm still convinced that porting the existing RTP shader (either yourself or by asking the RTP developer) to Jove lighting is the most reasonable way to approach the terrain problem.

    The feature set of the RTP shader and the tools tied to it is absolutely insane; it literally contains more than a year of non-stop work, and in my opinion tops some triple-A solutions like the CryEngine terrain system both in its features and its tools. Starting from scratch on that is an extremely big and somewhat unreasonable undertaking.
    The feature set of the RTP shader and tools tied to it is absolutely insane, it literally contains more than a year of non-stop work and in my opinion tops some triple-A solutions like CryEngine terrain system both in it's features and it's tools. Starting from scratch on that is an extremely big and somewhat unreasonable undertaking.