
3D Scanning for 3D scenes

Discussion in 'Editor & General Support' started by thempus, Sep 5, 2014.

  1. thempus

    thempus

    Joined:
    Jul 3, 2010
    Posts:
    61
    Is there any technology similar to Kinect Fusion that, instead of scanning real-world scenes, could be used to scan 3D scenes?



    I'm looking for this kind of tech because there are 3D scenes made for high-quality architectural rendering that are not optimized for real-time use. Optimizing these scenes manually takes a lot of time if you want everything done perfectly. Especially when you want to bake the light into a lightmap, there are many problems that can generate light leaks if the modelling is not perfect.

    I imagine that if it were possible to scan a 3D scene by defining a navigable area, we could generate only the polygons necessary for the intended visible detail:

    If a "limited" technology like the Kinect can already give good results scanning real-world places, imagine what you could get with a "virtual Kinect".

    For the lightmap-only case I'm already using a similar solution with reasonable success: I define the navigable area with a grid of points and calculate the GI at each of those points one by one, so if there is a problematic area like the one in the picture above, the light leaks from the inside to the outside, where it isn't visible. But this still leaves wasted space in the lightmap that gets loaded into memory, and in some cases, when a thin wall divides two rooms and a single continuous polygon defines the ceiling, you can still get light leaking through.
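
    Here is a minimal sketch of that grid-of-points setup in Unity C#, assuming the navigable area can be approximated by a simple axis-aligned box; the component name, the box and the spacing are just placeholders I made up for illustration, and the actual GI calculation per point still happens in the external renderer:

    Code (CSharp):

        using System.Collections.Generic;
        using UnityEngine;

        // Hypothetical helper: enumerates a regular grid of sample points inside a box
        // that approximates the navigable area. Each point can then be fed, one by one,
        // to the external GI bake described above.
        public class NavigableAreaGrid : MonoBehaviour
        {
            public Bounds navigableArea = new Bounds(Vector3.zero, new Vector3(10f, 2f, 10f));
            public float spacing = 0.5f; // distance between neighbouring sample points

            public List<Vector3> GeneratePoints()
            {
                var points = new List<Vector3>();
                for (float x = navigableArea.min.x; x <= navigableArea.max.x; x += spacing)
                    for (float y = navigableArea.min.y; y <= navigableArea.max.y; y += spacing)
                        for (float z = navigableArea.min.z; z <= navigableArea.max.z; z += spacing)
                            points.Add(new Vector3(x, y, z));
                return points;
            }

            // Draw the grid in the Scene view so the navigable volume can be checked visually.
            private void OnDrawGizmosSelected()
            {
                Gizmos.color = Color.cyan;
                foreach (var p in GeneratePoints())
                    Gizmos.DrawSphere(p, 0.05f);
            }
        }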
     
    Last edited: Sep 5, 2014
  2. thxfoo

    thxfoo

    Joined:
    Apr 4, 2014
    Posts:
    515
    http://pointclouds.org/
    Plus some coding. I think a fusion algorithm is integrated, along with a lot of other cool stuff. If you have a Kinect, you should play with it if you haven't already. (I have 2 Kinects but no Xbox.)

    So you could use the Kinect Fusion algorithm on your virtual world. But I didn't really get why you want to do that.

    I think there are much better algorithms that just reduce the polys in your world, because reducing the noise from the fusion output will be much harder than doing a good reduction of the high-poly geometry with one of the available reduction algorithms.

    Edit:
    Maybe I got you wrong? You don't want to use a fusion algorithm to generate a reduced-poly version of your world?

    If you want to remove stuff never seen and reduce the rest:

    1) find the triangles visible from all viewpoints relevant to you
    2) remove those that are never visible
    3) reduce the poly count with a standard reduction algorithm and bake normal/AO/... maps

    For 1) you can just assign an id to each triangle, render each triangle with that id as its colour, render the scene from the current viewpoint, check all pixels and add the colour values found to a list (visibleTrianglesList). Do that for each viewpoint you have; a rough sketch is below. It's like 20 lines of code in OpenGL or DirectX, but I have no idea how difficult it is in Unity. The problem is that you have a huge number of viewpoints and can still miss some, so this will either have errors or take very long for large scenes.
    There is a lot of optimization potential (e.g. don't re-render triangles already found to be visible, or use smart viewpoint selection).
    I think you'd better study the available literature if you really want to solve this problem; there are surely better ways than this. Still, my 2 cents.
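
    Here is a rough, untested sketch of that id pass in Unity C#, just to make the idea concrete; it assumes an unlit vertex-colour shader and the camera/RenderTexture setup exist elsewhere, and all the names here are placeholders, not an existing API:

    Code (CSharp):

        using System.Collections.Generic;
        using UnityEngine;

        // Sketch of step 1): give every triangle a unique colour, render the scene with an
        // unlit vertex-colour shader into a RenderTexture, then read the pixels back and
        // decode which triangle ids actually ended up on screen.
        public static class TriangleVisibility
        {
            // Pack a 24-bit triangle id into an opaque colour, and unpack it again.
            public static Color32 EncodeId(int id)
            {
                return new Color32((byte)(id & 0xFF), (byte)((id >> 8) & 0xFF), (byte)((id >> 16) & 0xFF), 255);
            }

            public static int DecodeId(Color32 c)
            {
                return c.r | (c.g << 8) | (c.b << 16);
            }

            // Build a copy of a mesh where every triangle gets its own three vertices,
            // coloured with that triangle's id (offset by firstId so ids stay unique scene-wide).
            // Note: meshes that end up with more than ~65k vertices need to be split first.
            public static Mesh BuildIdColoredMesh(Mesh source, int firstId)
            {
                Vector3[] verts = source.vertices;
                int[] tris = source.triangles;
                var newVerts = new Vector3[tris.Length];
                var newColors = new Color32[tris.Length];
                var newTris = new int[tris.Length];
                for (int t = 0; t < tris.Length / 3; t++)
                {
                    Color32 idColor = EncodeId(firstId + t);
                    for (int k = 0; k < 3; k++)
                    {
                        int i = t * 3 + k;
                        newVerts[i] = verts[tris[i]];
                        newColors[i] = idColor;
                        newTris[i] = i;
                    }
                }
                var mesh = new Mesh();
                mesh.vertices = newVerts;
                mesh.colors32 = newColors;
                mesh.triangles = newTris;
                return mesh;
            }

            // After rendering the id-coloured scene into 'rt' from one viewpoint,
            // collect every triangle id visible in that frame (background cleared to alpha 0).
            public static void CollectVisibleIds(RenderTexture rt, HashSet<int> visible)
            {
                RenderTexture previous = RenderTexture.active;
                RenderTexture.active = rt;
                var tex = new Texture2D(rt.width, rt.height, TextureFormat.RGBA32, false);
                tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
                RenderTexture.active = previous;
                foreach (Color32 c in tex.GetPixels32())
                {
                    if (c.a == 255)
                    {
                        visible.Add(DecodeId(c));
                    }
                }
                Object.Destroy(tex);
            }
        }

    Run CollectVisibleIds once per viewpoint into the same HashSet and you end up with the visibleTrianglesList for step 2).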

    Edit2:
    Note: both approaches described above (fusion or the custom algorithm) will create a ton of holes in your geometry (not visible to the player, though) except in the most basic cases (e.g. your room is just a cube). So for your lighting problems you would have to create a "hole closer" algorithm too, and that is hard (because none of your new patches may be visible from any viewpoint).

    So I would not try either of the 2.

    Or do you just want geometry that has no "wrong holes"?
    That is much easier. Some of the reduction algorithms can merge pieces that are separate in the models, so just use those. Or just do a boolean merge before you reduce? I just don't get what exactly your problem is.

    Short answer: whatever exactly it is you want, SIGGRAPH is your friend. I'm sure you'll find a solution there (if you know how to search).
     
    Last edited: Sep 5, 2014
  3. thempus

    thempus

    Joined:
    Jul 3, 2010
    Posts:
    61
    So I used Kinect Fusion as my example because it was the closest tech I could find to explain what I need. The similarity is that you move the Kinect around wherever you can physically reach, scanning only what you can see, and in the end you get some kind of continuous mesh with no holes.

    If you scan a virtual 3D scene with a "virtual Kinect" you could keep as much of the original geometry as needed. That doesn't necessarily mean I want the geometry to be generated the same way Kinect Fusion does it. It would be much more a mix of booleans and cutting polygons away, all based on a predefined navigable volume that tells the system what should be cut out and what shouldn't. You could compare it a little to occlusion culling, but so aggressive that it cuts away even the parts of polygons that are never visible, and instead of being based on a single point of view it should consider a volume with infinitely many viewpoints.

    Imagine, for example, that I have a chair whose legs go through the floor a bit. I would want the system to cut off the part of each leg that goes through the floor and leave a hole in the floor. But for the system to decide that it should cut away the part of the leg that pokes through, and not the other part, it should consult a predefined volume (the blue area in the picture I included in my first post) to know what is not visible.
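
    To make that a bit more concrete, here is an untested first-pass sketch in Unity C# of how one could flag triangles that are unreachable from the navigable volume. It only checks a straight line from sample points to each triangle's centroid, relies on colliders being present, and does none of the actual cutting; the names, sampling and tolerances are just my own illustration, not an existing tool:

    Code (CSharp):

        using System.Collections.Generic;
        using UnityEngine;

        // Rough first pass at "consult the navigable volume": a triangle is kept if at
        // least one sample point inside the volume has an unobstructed straight line to
        // the triangle's centroid. Requires colliders (e.g. MeshColliders) on the scene
        // geometry so Physics.Linecast has something to hit. This only approximates
        // visibility and does not do the actual cutting/boolean step.
        public static class NavigableVolumeVisibility
        {
            public static bool[] FlagReachableTriangles(Mesh mesh, Transform meshTransform,
                                                        IList<Vector3> volumeSamplePoints)
            {
                Vector3[] verts = mesh.vertices;
                int[] tris = mesh.triangles;
                int triangleCount = tris.Length / 3;
                var reachable = new bool[triangleCount];

                for (int t = 0; t < triangleCount; t++)
                {
                    // Triangle centroid in world space.
                    Vector3 a = meshTransform.TransformPoint(verts[tris[t * 3]]);
                    Vector3 b = meshTransform.TransformPoint(verts[tris[t * 3 + 1]]);
                    Vector3 c = meshTransform.TransformPoint(verts[tris[t * 3 + 2]]);
                    Vector3 centroid = (a + b + c) / 3f;

                    foreach (Vector3 p in volumeSamplePoints)
                    {
                        // Stop the line just short of the surface so it is not blocked
                        // by the triangle's own collider.
                        Vector3 target = centroid + (p - centroid).normalized * 0.01f;
                        if (!Physics.Linecast(p, target))
                        {
                            reachable[t] = true;
                            break; // one clear line of sight is enough
                        }
                    }
                }
                return reachable; // triangles left 'false' are candidates for removal/cutting
            }
        }

    The volumeSamplePoints could come from the same kind of grid I described earlier; a centroid test like this would still miss partially visible triangles, so a real tool would have to be a lot smarter.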

    The main purpose is to let me take complex scenes made for architectural rendering in 3ds Max. In many cases the base geometry already comes from BIM architectural software that generates messy geometry. I would want the system I'm describing to clean up that mess, so I can bring the scene into Unity without a lot of manual work fixing every polygon that might produce light leaks.

    1 - could be part of the process
    2 - also important
    3 - not so important in my particular case, but would be nice to have
    4 - Not only must polygons that are never visible be deleted, but the excess parts of polygons should also be cut away according to the predefined navigable volume.


    I will try to search specifically through SIGGRAPH material. Thanks for that tip.
     
    Last edited: Sep 5, 2014
  4. CaoMengde777

    CaoMengde777

    Joined:
    Nov 5, 2013
    Posts:
    813
    Not so sure... but do you want polygon reduction? Polygon decimation?
    You might want to look at a program that does that,
    like Simplygon, which is usually used to generate LOD meshes for games. There are probably many other programs out there too.

    But then I'm not sure it'd work for you, since those programs try to keep as much detail as they can while still removing polys. But maybe it'd work, I dunno?
     
  5. thempus

    thempus

    Joined:
    Jul 3, 2010
    Posts:
    61
    The idea is not polygon reduction or decimation; it's more like automated cutting of polygon >>excess<< so you don't get light leaks when baking lightmaps.

    You can see an example of a light leak in this image:
    http://www.shadowood.uk/Store/Store/Misc/Y2014-Mo09-Unity5/Images/img-1PastedGraphic77.jpg

    This image is from a blog post by a user of this forum called HeliosDoubleSix:
    http://www.shadowood.uk/Store/?u=2014-09-01&&ln=Unity 5 Realtime GI#Unity 5 Realtime GI
     
  6. thempus

    thempus

    Joined:
    Jul 3, 2010
    Posts:
    61
    Here is another attempt to explain what I'm looking for:

     
  7. Zomby138

    Zomby138

    Joined:
    Nov 3, 2009
    Posts:
    659
    I'm not sure I agree with that practice. While it should help make the baked light less leaky, with fewer black lines in the cracks, it will also cause the real-time lights to "peter pan" their way through the walls from the outside.

    Personally I would think an optimal solution for your example would be one solid, closed, L-shaped mesh with no internal polygons.
     
  8. thempus

    thempus

    Joined:
    Jul 3, 2010
    Posts:
    61
    I think that solving the real-time light problem would be much simpler than solving the lightmap problems. You could just model some sort of cover to stop the light from going through. If you are using lightmaps you are less dependent on real-time light anyway.

    Having an L-shaped mesh with no internal polygons would definitely help (apart from the wasted space in the lightmap, or the extra work if you exclude the non-visible parts from the lightmap). I would use this tool on architectural models that come from BIM software that generates tons of unnecessary polygons.
     
  9. HeliosDoubleSix

    HeliosDoubleSix

    Joined:
    Jun 30, 2013
    Posts:
    148
    Light 'leaks' are caused by all sorts of issues; it is not as simple as objects passing through others, and making the objects no longer intersect, or making one big continuous model, would not solve anything. Enlighten already does some clever internal retopology, but it is limited to one 'lightmap' per object max, and it actually aims to put multiple objects onto one 'system', which is just a huge lightmap of sorts, so you get limited by resolution pretty quickly. They basically invented a new form of geometric algebra to create Enlighten, so I'm sure if anyone can figure it out, they will. Also, some of the issues are actually in Unity's interface to Enlighten and not in Enlighten itself, I think, such as how Unity lays out UVs and pads them. A post-process could in theory use line of sight and other things to clean up the leaks, but it would probably take forever to process, and time, man-wise and CPU-wise, is better spent tackling the core reasons for the problem rather than trying to patch up the errors.

    It is a nice thought to auto-boolean everything into one mesh, but it's not really practical: in many cases booleans will just destroy your model, add impossible triangles everywhere and screw things up far worse, as the real world is actually not made of triangles at all :)
     
  10. thempus

    thempus

    Joined:
    Jul 3, 2010
    Posts:
    61
    With baked lightmaps, and in my specific scenes, this is what causes 99% of the light leaks, and I have to do manually what an auto-boolean tool would do. Of course not everything has to go through the "auto-boolean" process; you could use it mostly on static environmental geometry, and you can keep the original geometry stored in case you make changes to the scene. I do it all manually and it is not that difficult.

    What I think can solve the light leaks is better described as a kind of "inverse light leak", where the light leaks from the visible area into the area that is not visible, like the floor running under the wall. When you calculate the lighting there is no way for the renderer to decide which side should have priority; the solution in that case is to increase the quality of the lighting computation, and even then you still get a visible line in some cases. If you could tell the renderer "this is where the camera will be, so don't worry about rendering the lighting on the part that is inside the wall", I think it could solve many problems even at very low settings. I do this manually using V-Ray, for example, and it works.

    The auto-boolean suggestion is, in my mind, a step further, because there are cases where the "inverse light leak" could reach another visible area, like the floor going under the wall into the next room; if the wall is too thin you will still get leaks.

    "...such as how Unity tackles laying out UV's and padding them, a post process could in theory use line of sight and other things to clean up the leaks..."

    That is awesome; if they get this right it could solve a lot of problems!

    "It is a nice thought to auto boolean everything into one mesh, but not really practical,"

    We don't need to merge everything together. If you have 2 models in a scene, like a floor and a chair, and the chair goes through the floor, you only cut the legs of the chair and you are left with the same 2 models. You can still choose whether to have 1 lightmap per model or put both on the same lightmap.
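
    Just to illustrate the simplest possible case of that chair-leg cut, here is an untested sketch in Unity C# that clips one mesh against a single plane (say, the floor) and keeps only the part on the visible side; it ignores UVs, does not re-close the hole in the floor, and all of it is my own toy example rather than an existing tool:

    Code (CSharp):

        using System.Collections.Generic;
        using UnityEngine;

        // Toy version of "cut the chair legs at the floor": clip a mesh against a single
        // plane and keep only the triangles, or triangle parts, on the positive side of
        // the plane normal. Re-closing the hole in the floor and handling arbitrary
        // intersecting geometry is the hard part and is not attempted here.
        public static class PlaneClipper
        {
            public static Mesh ClipAbovePlane(Mesh source, Vector3 planePoint, Vector3 planeNormal)
            {
                Vector3[] verts = source.vertices;
                int[] tris = source.triangles;
                var outVerts = new List<Vector3>();
                var outTris = new List<int>();

                for (int t = 0; t < tris.Length; t += 3)
                {
                    ClipTriangle(verts[tris[t]], verts[tris[t + 1]], verts[tris[t + 2]],
                                 planePoint, planeNormal, outVerts, outTris);
                }

                var mesh = new Mesh();
                mesh.vertices = outVerts.ToArray();
                mesh.triangles = outTris.ToArray();
                mesh.RecalculateNormals();
                return mesh;
            }

            static void ClipTriangle(Vector3 a, Vector3 b, Vector3 c,
                                     Vector3 p0, Vector3 n,
                                     List<Vector3> outVerts, List<int> outTris)
            {
                float da = Vector3.Dot(a - p0, n);
                float db = Vector3.Dot(b - p0, n);
                float dc = Vector3.Dot(c - p0, n);

                // Sutherland-Hodgman style clip of one triangle against the plane:
                // walk the edges and keep the polygon on the positive side.
                var poly = new List<Vector3>();
                AddEdge(a, b, da, db, poly);
                AddEdge(b, c, db, dc, poly);
                AddEdge(c, a, dc, da, poly);

                // The result has 0, 3 or 4 vertices; triangulate it as a fan.
                for (int i = 1; i + 1 < poly.Count; i++)
                {
                    int baseIndex = outVerts.Count;
                    outVerts.Add(poly[0]);
                    outVerts.Add(poly[i]);
                    outVerts.Add(poly[i + 1]);
                    outTris.Add(baseIndex);
                    outTris.Add(baseIndex + 1);
                    outTris.Add(baseIndex + 2);
                }
            }

            // Emit the part of edge (v0 -> v1) that lies on the keep side of the plane.
            static void AddEdge(Vector3 v0, Vector3 v1, float d0, float d1, List<Vector3> poly)
            {
                if (d0 >= 0f)
                {
                    poly.Add(v0);
                }
                if ((d0 >= 0f) != (d1 >= 0f)) // the edge crosses the plane
                {
                    poly.Add(Vector3.Lerp(v0, v1, d0 / (d0 - d1)));
                }
            }
        }

    For the chair example you would call ClipAbovePlane(chairMesh, floorPoint, Vector3.up) on the chair; cutting the matching hole into the floor is the part that needs a real boolean.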

    If there were an automated tool for this, you could regenerate the optimized geometry as you make changes to the original models in the scene.

    "real world is actually not made of triangles at all "

    I know, but we represent everything with triangles, right? :)

    Thanks a lot for taking the time to comment here. It's nice to know more about some of the features of Enlighten; I will keep following your blog and thread to learn more.
     
  11. topofsteel

    topofsteel

    Joined:
    Dec 2, 2011
    Posts:
    999
    I wanted to chime in on this. I do design visualization from BIM models. I have been developing with Unity for over 4 years, and the vast majority of my time on every project is spent on or around lightmapping: geometry, UVs, Beast settings, lights, waiting... repeat. I was also looking for an 'auto boolean' tool, mostly for optimization purposes. I wanted to minimize the surface area being lightmapped and also the vertex count. One of the ways BIM software exports geometry is that each material in an element becomes its own mesh. They are combined, but that just means ordinary faces touching each other. For example, in a simple wall consisting of gyp, studs and another layer of gyp, two thirds of my lightmapped faces weren't even visible to the camera. I found ways to fix that and took care of the light leaks with planes in Max or quads in Unity.

    Enter Enlighten. I am trying to develop a workflow for precomputed real-time GI, and my problem is completely different. I still need to block the real-time light leaks, but the biggest obstacle has been the shadows created by those faces that touch each other in various circumstances. Each element exported from a BIM model also has polygons on all sides, so where a floor meets a wall, or a wall meets a ceiling, you still have touching faces. In the screenshots below, the only difference between the 2 sets is removing the red polygons shown in the last image. But even with all the offending polygons removed, there are still slight 'shadow leaks'. They can be seen in the 4th image, where the 45-degree wall meets the floor. I believe I can fix that too, but I didn't bother; I just went back to Unity 4 for what I was doing. In short, the solution will take an exponentially greater amount of time. I can no longer drop a Revit model into Unity and create a preliminary massing/lighting model.

    Edit: Light leakage in Unity 5 is significantly worse than in Unity 4. In the 6th image there is no break in the exterior wall; the polygons go from the ground to the roof, and it is all one mesh. That may be fixed by using a 2nd mesh for the floors. It's leaking across boundaries in the lightmap, not through the wall.

     
    Last edited: Feb 20, 2015
  12. AlanAlanAlanAlan

    AlanAlanAlanAlan

    Joined:
    Feb 28, 2015
    Posts:
    2
    Beast has a property, ptCheckVisibility. It worked perfectly: no leaks in the bake. It was the reason I came to Unity and left Unreal. Now, with 5, I'm back to stuffing around with geometry.
     
  13. topofsteel

    topofsteel

    Joined:
    Dec 2, 2011
    Posts:
    999
    Yeah, I'm having to isolate each space to get the best results. I've considered generating lightmaps in Unity 4 early in the project, while the geometry is still changing, and bringing them into 5. I create my own UV2s, so they would line up if the formats work. I could get decent-looking lightmaps without bending over backwards working on the geometry.
     
  14. AlanAlanAlanAlan

    AlanAlanAlanAlan

    Joined:
    Feb 28, 2015
    Posts:
    2
    Maybe Enlighten has a similar property.
    It's a good thought. I had considered baking externally as well. I would like to have that real-time GI.
    You know, UE4's own demos make allowances with thicker geometry and shelves. I would like to stay with Unity because of its lower demand on resources in smaller scenes.