
Cubiquity - A fast and powerful voxel plugin for Unity3D

Discussion in 'Assets and Asset Store' started by DavidWilliams, Jun 2, 2013.

  1. MartinLyne

    MartinLyne

    Joined:
    Apr 25, 2013
    Posts:
    30
    I totally understand (I've already pestered you multiple times, and time is money). I was just curious whether it was a technical limitation or a rule, as it were, after stumbling on a post mentioning it (I bought the asset just now, in fact).

    One other thing, I noticed some months ago you mentioned Dual Contouring was in the background code but not exposed yet - would you ever consider adding it, perhaps to a beta version? I'd be very interested in helping to test it (sharp features would be a great addition and I'm already generating hermite data) if it was under consideration.

    Thanks for your help,
    Martin
     
  2. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    Thanks! It's nice to have another sale and I hope it works out for you :)

    It won't be in the foreseeable future (next 12 months) but long term I do think it will happen. The biggest question is how the tools should work, as real time editing of Hermite data is more complex and less intuitive than working with a density field. At least in some ways... I've seen Everquest Next do some nice CSG-type operations (mostly for buildings) but no sculpting of the terrain for example.

    I'd really like to get better shaders in place though, and it's possible that small-scale sharp features can come from displacement maps. Lots of research to do here, but I believe Crysis 2 used Marching Cubes terrain and managed to look quite nice.
     
  3. Sinfritz

    Sinfritz

    Joined:
    Nov 18, 2012
    Posts:
    72
    Hi!

    I've been playing around with the evaluation version of the plugin, and it's looking good so far! (From my understanding, the evaluation version has limits?) I'm mainly interested in the ColoredCubesVolume aspect of the plugin.

    I have a couple questions:

    1) I notice that the only way to load in .vdb is via the ColoredCubesVolumeData. Is there a way to load .vdb in-game? I would like to be able to load in separate voxels and merge them into one landscape (locally or online).

    2) There doesn't seem to be anything in regards to occlusion culling for both the voxel landscape and the game objects. Is this something that is planned?

    3) Currently, the only attribute for each cube in a volume is a color. Are there any plans to expand on this to allow more data? E.g.: flags, texture offset for texture atlas.

    4) The current limit of the voxel volume to my understanding is around 256^3, and that you guys are looking into expanding that limit. For my purpose, 512^3 would be ideal. Is there sort of a rough estimate as to when that limit is expanded? I'm seeing if there are ways to go about having large landscapes with the current limitation. (perhaps having a grid of voxel volumes).

    Thanks in advance!
     
  4. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    Only licensing limitations (non-commercial use only), the evaluation is fully functional.

    The ColoredCubesVolumeData is basically just a thin wrapper around the .vdb file. You can create a ColoredCubesVolumeData without actually rendering it, if you just want to read the data. You could also use this approach to copy data between .vdb files.
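    For example, something like this lets you read voxels from a .vdb without rendering anything (a rough sketch from memory - check the API docs for the exact method names and path handling in your version):

    Code (CSharp):
    using UnityEngine;
    using Cubiquity;

    public class ReadVdbExample : MonoBehaviour
    {
        void Start()
        {
            // Wrap the .vdb in a data object; no volume or renderer components are involved.
            ColoredCubesVolumeData data =
                VolumeData.CreateFromVoxelDatabase<ColoredCubesVolumeData>("VoxelDatabases/MyBuilding.vdb");

            // Read a single voxel from the data.
            QuantizedColor color = data.GetVoxel(0, 0, 0);
            Debug.Log("Voxel (0,0,0) alpha = " + color.alpha);
        }
    }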

    Be sure to read this when considering multiple voxel models and/or merging them: http://www.cubiquity.net/cubiquity-for-unity3d/1.1/docs/page_duplication.html

    It just feeds the meshes to Unity for rendering, which I assume by default only does frustum culling (perhaps also sorting near-to-far). We can add something else here if it seems to be the bottleneck, but some testing will be needed to determine that.

    Internally each ColoredCube is 32 bits, and so far we're only using the first 16 for the (quantized) color. We will want to add alpha, and possibly other visual properties (emissive value? specular strength?). It's possible that one byte might get reserved for user values but this isn't really decided yet. We're not focused on supporting texturing for ColoredCubesVolumes at the moment though.

    There isn't actually a hard coded limit, though the largest we have really tested is 512x512x64 (e.g. in our Tank Demo). When you create a new 'Empty Volume Data...' you can specify the size. 512^3 seems to work but is a little slow to load (one minute or so). Volume size/performance is indeed what we are working on at the moment.
     
  5. Sinfritz

    Sinfritz

    Joined:
    Nov 18, 2012
    Posts:
    72
    Thanks for the answers!

    So long as it's possible to load/save .vdb in-game, I'm happy! What I meant for 1) in terms of merging is appending voxel data onto the main one. I did see the caution about ensuring each game object references its own voxel data.

    Occlusion culling in Unity needs to be baked into the scene, so the only culling that will occur is against the view frustum. Due to the dynamic nature of voxels, Unity's occlusion system will not work very well. I assume Cubiquity already does some of this for its voxel display, but it would be nice to have something like that available for game objects.

    I've thought about question 3 some more. Perhaps if there's a way to alter what gets passed into the shader based on the "color" value, we can have more flexibility in its interpretation. As an example: I could set half of that value to represent an offset into a texture, and the other half as game-specific attributes. Pretty much as you described. So it all boils down to how much of this we're allowed to do, and how easy it would be to set up.
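    Just to illustrate what I mean, the packing itself would be trivial if a raw integer per voxel were exposed - this is purely hypothetical, not something Cubiquity offers today:

    Code (CSharp):
    public static class VoxelPacking
    {
        // Hypothetical layout: high 16 bits = texture index, low 16 bits = game flags.
        public static uint Pack(ushort textureIndex, ushort flags)
        {
            return ((uint)textureIndex << 16) | flags;
        }

        public static void Unpack(uint packed, out ushort textureIndex, out ushort flags)
        {
            textureIndex = (ushort)(packed >> 16);
            flags = (ushort)(packed & 0xFFFF);
        }
    }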

    512^3 is where I noticed a huge performance loss. So roughly 16 million voxels (about 256^3) seems to be the current limit before things start to slow down drastically.
     
  6. Sinfritz

    Sinfritz

    Joined:
    Nov 18, 2012
    Posts:
    72
    If I wanted to look into the DLL code of Cubiquity, would it be possible to ask for the source code? I suppose it would be appropriate to have a license for that kind of support. If I ever need to debug or modify certain aspects of the library for a specific game, it would be nice to have that option.

    Along the same lines, access to the code for ConvertToVDB would let us add our own file formats to be imported.
     
  7. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    Cubiquity does not handle any occlusion or rendering directly, it simply passes the mesh data to Unity for rendering. On this basis there is probably no occlusion culling beyond frustum culling. There are things which could be done if Unity supports them (e.g. occlusion queries) or which could be added to Cubiquity (e.g. raycasting in software to test chunk visibility) but I'm not sure exactly what will change here.

    At the lowest level, the PolyVox library which powers Cubiquity is very flexible in this regard. You can define your own voxel types as the volume is templatized ('generic', in C# terminology) and you can also provide functions to define how the voxel type should be interpreted by the surface extraction. But this also makes it rather complex, and so with Cubiquity we wanted to just provide a couple of common predefined voxel types which people can drop into their projects.

    There is some chance that we expose more of the underlying flexibility, but it's not high priority at the moment. There may be some challenges, as it will be hard to define structures and callback functions in C# and pass them across the C native code interface. So yeah, it's on my mind but I don't know what will happen.

    At some point it will be possible to access the source code, though this isn't possible yet. The aim is to build a C++ SDK which is independent of Unity, and then separately sell the Unity bindings (as well as other bindings, we have UE4 in the works).

    When we have the C++ SDK, I imagine source code for the converters will be given away for free because it will serve as an example of how to use the C API.
     
  8. Sinfritz

    Sinfritz

    Joined:
    Nov 18, 2012
    Posts:
    72
    All the more reason to make the underlying code accessible, be it under another license or as part of the Unity package. I have looked into the code more, and the Unity scripts for the most part just communicate with the DLL.

    I think having a way to do occlusion based on the camera's frustum would certainly benefit performance. At least having some sort of query for whether a chunk is visible or not (based on the frustum) would be a good start.
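    Something like this on the Unity side is roughly what I have in mind - a sketch using the built-in GeometryUtility helpers, where the bounds are whatever world-space boxes the chunks occupy:

    Code (CSharp):
    using UnityEngine;

    public static class ChunkVisibility
    {
        // True if the chunk's world-space bounding box intersects the camera frustum.
        public static bool IsChunkVisible(Camera camera, Bounds chunkBounds)
        {
            Plane[] planes = GeometryUtility.CalculateFrustumPlanes(camera);
            return GeometryUtility.TestPlanesAABB(planes, chunkBounds);
        }
    }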

    Unity's occlusion system mostly depends on static occlusion data that you bake into the scene. There are occlusion portals too, but those are still tied into the baked occlusion data.

    As for me, I wouldn't mind buying a separate license for the dll code, even if there is a point you guys have to refactor several aspects of the API.
     
  9. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    I just tested this and it seems Unity does already perform such frustum culling. At least, using the Cubiquity example scene I was able to see the draw count in the stats window drop when rotating the camera to show only part of the terrain. I've heard this might only work in game mode though (but I'm not clear on why). Also, be sure to use the latest 1.1.3 version of Cubiquity, as older versions might not have set the bounding boxes correctly.

    However, I do agree that further occlusion culling approaches could be useful to reduce overdraw.

    Perhaps we can bring forward access to the C++ source and provide an early version to interested parties. Providing a real C/C++ SDK is quite some work (polish, documentation, examples, etc) because it should be usable from OpenGL and custom engines, but much of this is not needed when only using it with the existing Unity integration. So we could probably make the C++ source to the library available by the end of the year.

    Would this fit in with your time scale? You don't need to commit of course, but other people have also expressed interest in getting the C++ code so we need to make it available anyway.
     
  10. Sinfritz

    Sinfritz

    Joined:
    Nov 18, 2012
    Posts:
    72
    Awesome, I'll definitely keep an eye out for when it's ready! I'm happy to wait - having a stable release of the code would be better.

    As far as gameplay goes, I think having a complementary set of features to support it would go a long way for the plugin itself, e.g. a data store for block states (the flyweight pattern). I'm looking into making one for this purpose.
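    (For anyone unfamiliar with the flyweight idea, it's roughly this - a sketch that is independent of Cubiquity itself: each block type's data is stored once, and voxels only carry a small ID.)

    Code (CSharp):
    using System.Collections.Generic;

    // One shared, immutable description per block type (the 'flyweight').
    public class BlockState
    {
        public readonly string Name;
        public readonly bool IsFlammable;

        public BlockState(string name, bool isFlammable)
        {
            Name = name;
            IsFlammable = isFlammable;
        }
    }

    // Voxels only store a small ID; the heavy data lives once in this registry.
    public static class BlockStateRegistry
    {
        static readonly List<BlockState> states = new List<BlockState>();

        public static ushort Register(BlockState state)
        {
            states.Add(state);
            return (ushort)(states.Count - 1);
        }

        public static BlockState Get(ushort id)
        {
            return states[id];
        }
    }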

    The other thing I wanted to request is the ability, in the API, to load a .vdb into an existing Volume or VolumeData. I was trying to figure out how to approach this, but it looks like you can only have one .vdb loaded into a Volume object at a time. If there were a way to do it with a provided offset, that would let us construct one single Volume and append multiple loaded .vdb files to it - e.g. a city with different types of buildings would just be one Volume, instead of loading each building into a separate Volume object.
     
  11. Sinfritz

    Sinfritz

    Joined:
    Nov 18, 2012
    Posts:
    72
    I was playing around with Cubiquity some more and trying out the TerrainVolume part of the package, but I came across an issue in regards to extending TerrainVolume.

    The problem is that the octree explicitly looks for the type TerrainVolume or ColoredCubesVolume during mesh building (see line 99 of OctreeNode.cs). Basically I wanted to make a TerrainVolume that uses solid colors rather than textures. The reason I wanted to extend TerrainVolume is so that I can change the editor to have a selection of colors, which means creating a separate class in order to have its own Inspector script.

    I basically just modified OctreeNode so that it also takes my class into account and calls the correct mesh-building code.

    I know you guys are looking into adding more changes to the Cubiquity plugin, so I figure at some point OctreeNode will change. I just wanted to point this out, so that perhaps a newer version will allow users to extend either TerrainVolume or ColoredCubesVolume. That part of the code probably needs to change anyhow for performance reasons (for whenever the terrain is modified at runtime, which may or may not be frequent). A lot of the component queries in the code could be cached.
     
  12. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    Interesting idea - I hadn't considered this. At the moment it is not possible; the best you can do is load your building (or whatever) into one volume, and then copy the contents of that volume into your main 'world' volume.
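    To make that concrete, the copy is just a triple loop along these lines (a sketch only - it assumes GetVoxel()/SetVoxel() on ColoredCubesVolumeData as in the API docs, and leaves out any bounds checking):

    Code (CSharp):
    using Cubiquity;

    public static class VolumeStamping
    {
        // Copies a sizeX x sizeY x sizeZ block of voxels from 'building' into 'world',
        // shifted by the given offset. Bounds checking is omitted for brevity.
        public static void Stamp(ColoredCubesVolumeData building, ColoredCubesVolumeData world,
                                 int sizeX, int sizeY, int sizeZ,
                                 int offsetX, int offsetY, int offsetZ)
        {
            for (int z = 0; z < sizeZ; z++)
                for (int y = 0; y < sizeY; y++)
                    for (int x = 0; x < sizeX; x++)
                    {
                        QuantizedColor color = building.GetVoxel(x, y, z);
                        world.SetVoxel(x + offsetX, y + offsetY, z + offsetZ, color);
                    }
        }
    }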

    Yes, Cubiquity comes with two predefined volume types and there is not much you can do to extend these. As mentioned previously, the internal C++ library is quite flexible in this regard, but only limited functionality is exposed via the C interface. I do think we should move towards making it more flexible, but I would say it is a long-term goal as we should get the existing volume types working first.

    For using solid colors rather than textures, I think the best you can do is to use textures which are just red, green, and blue, and mix them. However, there are serious constraints here - you can't make black, for example, because it would be considered an empty voxel. Hmmm... though maybe you could scale the brightness of the color in a shader. Ok, you might get something to work, but you'll be bending the system :)

    I do think we will be looking at this soon (in the next few weeks) as performance is fairly high priority at the moment. As well as the issue you identified, we also need to be more careful with pooling, and we should reduce the number of C API calls. Perhaps we can use fewer temporary buffers as well. These are all things which were coded to 'just work', with the idea that they could be tidied up later. That time has come!
     
  13. Sinfritz

    Sinfritz

    Joined:
    Nov 18, 2012
    Posts:
    72
    Yeah, that would be great to have. Having a dummy Volume object for loading in the meantime works for me, thanks for the suggestion!

    Actually, what I did was just replace all of the textures in the shader with colors and use those. So it's no longer triplanar, just solid colors with blend weights. The weight data is still used for blending between the colors, much like with textures (though I can see using some of those weights for other purposes like brightness, etc.). I was aiming for a flat-shaded look, something like this:



    I guess having a way to unsmooth the normals of each triangle would be another request? :p
     
  14. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    Actually it's high on the todo list and flagged as 'easy'. Except I tried it last night with a shader trick and it's not as easy as I hoped!

    However, in your case (no textures) you can probably hack it. The easiest solution is to split the triangles so that the mesh is a set of individual triangles rather than an indexed mesh (see here). Then recalculate the triangle normals and you should be good to go. This should work if you just have a colored surface, but I can't adopt it as the 'official' solution as we also need the smooth normals for triplanar texturing (plus it bloats the vertex count, of course).
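    Roughly like this on a standard Unity Mesh (a sketch only - note that it triples the vertex count, and it assumes the surface color lives in the vertex colors):

    Code (CSharp):
    using UnityEngine;

    public static class FlatShadingUtil
    {
        // Gives every triangle its own three vertices so that RecalculateNormals()
        // produces per-face (flat) normals instead of smooth shared ones.
        public static void MakeFlatShaded(Mesh mesh)
        {
            Vector3[] oldVertices = mesh.vertices;
            Color32[] oldColors = mesh.colors32;
            int[] oldTriangles = mesh.triangles;

            Vector3[] newVertices = new Vector3[oldTriangles.Length];
            Color32[] newColors = new Color32[oldTriangles.Length];
            int[] newTriangles = new int[oldTriangles.Length];

            for (int i = 0; i < oldTriangles.Length; i++)
            {
                newVertices[i] = oldVertices[oldTriangles[i]];
                if (oldColors.Length > 0) newColors[i] = oldColors[oldTriangles[i]];
                newTriangles[i] = i;
            }

            mesh.vertices = newVertices;
            if (oldColors.Length > 0) mesh.colors32 = newColors;
            mesh.triangles = newTriangles;
            mesh.RecalculateNormals();
        }
    }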
     
  15. neveup

    neveup

    Joined:
    Sep 9, 2013
    Posts:
    2
    Hi,

    I have tried to download the non-commercial version to have a play with, but when I download it on my Mac the zip just opens up to a list of files with integer names and I cannot find the Unity package file to import into my project.

    Is there a separate place to download a Mac version, or am I doing something wrong?
     
  16. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    The problem is that a .unitypackage file is basically a compressed archive with a similar structure to a .zip. When you download the .unitypackage file some web browsers or file archive utilities attempt to automatically decompress it, whereas what you actually want is to import the .unitypackage into Unity.

    I put the .unitypackage in a .zip file to try and avoid this, but some tools see through this and just decompress both levels.

    Basically, use a different browser for the download (one that doesn't expand the package) and/or a command-line tool to extract the zip. You want to end up with a single .unitypackage file that you can import the usual way. Some of these tools are just being too smart for their own good :)
     
  17. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    Hi guys,

    It's time for a quick update! Here's an early version of the smooth terrain level-of-detail system in action:


    Hopefully there's nothing too eye-catching about that screenshot - a good LOD system should be hard to spot :) But the wireframe overlay shows that the meshes in this volume are actually constructed at three different resolutions. The system works by downsampling the volume data and then running the Marching Cubes algorithm on the appropriate version depending on the distance to the camera.

    It looks pretty good in this image though I must admit I chose my camera angle carefully. It's currently quite easy to find view angles or volume data where it breaks down and reveals cracks between the different LOD levels. But I expect some improvements can be made here so watch this space!

    P.S. This is all still running in Unity Free. I think LOD is a Pro feature of Unity (right?), but the Cubiquity LOD is independent of that.
     
  18. SirStompsalot

    SirStompsalot

    Joined:
    Sep 28, 2013
    Posts:
    112
    @jefrosmash

    I'm a Mac user as well and the easiest way to do this is to not use the native Mac unarchive utility. A quick google will reveal free download alternatives.
     
  19. SirStompsalot

    SirStompsalot

    Joined:
    Sep 28, 2013
    Posts:
    112
    David, are there any plans to support LOD for cubic volumes? I'm not sure how that would work, but has it been considered?
     
  20. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    Yes, but it's not yet clear how well it will work. Some of the challenges are different from the smooth terrain case. It is quite possible to replace sets of eight cubic voxels with a single larger cubic voxel, but this can have an effect on the silhouette of the object (it tends to change size). I think it will be appropriate in some cases but perhaps not all.

    Actually, we already implemented it in one of our old prototypes:


    As you can see, it works pretty well here. But this volume gets most of its detail from the color data, rather than from the shape, so the silhouette doesn't matter too much.

    I think some more research will be needed but it has potential.
     
  21. Ellandar

    Ellandar

    Joined:
    Jan 5, 2013
    Posts:
    207
    G'day David,

    I might have a bug to report. It looks like the collision mesh in the cubic terrain is being generated .5 units higher than the surface:

    (Attached screenshots: CollisionMesh 1 unit too high.png, CollisionMesh 1 unit too high #2.png)


    The setup here is:
    64x64x32 terrain with floor generated.
    Create some cylinders with Rigidbodies attached.
    Allow to drop to the ground.

    - Ell
     
  22. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    You're right, but by coincidence I was just refactoring that code and the bug got fixed in the process (though I hadn't noticed it). Looks like I was committing as you typed!

    Edit: Ah, but you might not want to grab that yet. There are some compile issues I need to resolve related to the use of the 'unsafe' keyword.
     
    Last edited: Sep 29, 2014
  23. Ellandar

    Ellandar

    Joined:
    Jan 5, 2013
    Posts:
    207
    LOL, nice timing!

    Too easy, I'll hold off on downloading the latest version for a bit.
    It's really not causing any issues right now.

    - Ell
     
  24. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    Hi all,

    I've recently checked in some changes to the way that communication is performed between Cubiquity for Unity3D and the native code library. Basically, it can now make use of 'unsafe' code to transfer the mesh data via direct pointer access. You have to take some steps to enable this - I'll just quote from the documentation I've been writing on the topic (please let me know if this is not clear):

    This affects the develop branch on Git and will be in the next release (if all goes smoothly).

    @Ellandar - It should now be safe to update to get those collision fixes :)
     
  25. AlanGreyjoy

    AlanGreyjoy

    Joined:
    Jul 25, 2014
    Posts:
    192
    I could not find anything on procedural generation?

    How can you use this to build procedural terrains?

    Thanks.
     
  26. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    The API provides direct access to the voxel data through the SetVoxel() function on the VolumeData classes. So you can do procedural terrain by taking the output of your noise function and feeding it through this function.

    There is a ProceduralGeneration example in the 'Examples' folder (the actual location depends on your exact version...) and the latest Git code has a simpler example in 'Examples/CreatingVolumes/CreateVoxelDatabase'.

    Working with the colored cubes is easier, but if you want to work with the smooth (Marching Cubes) terrain then be sure to read the information in the MaterialSet API docs.

    Hope that helps, just let me know if you need some more pointers.
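    For a colored cubes volume the core loop looks something like this (a rough sketch - the real thing is in the example mentioned above, and the CreateEmptyVolumeData/Region calls should be checked against the API docs for your version):

    Code (CSharp):
    using UnityEngine;
    using Cubiquity;

    public class SimpleProceduralExample : MonoBehaviour
    {
        void Start()
        {
            int width = 128, height = 32, depth = 128;

            // Create an in-memory volume data object covering the desired region.
            ColoredCubesVolumeData data = VolumeData.CreateEmptyVolumeData<ColoredCubesVolumeData>(
                new Region(0, 0, 0, width - 1, height - 1, depth - 1));

            QuantizedColor grass = new QuantizedColor(64, 128, 32, 255);

            for (int z = 0; z < depth; z++)
                for (int x = 0; x < width; x++)
                {
                    // Sample 2D Perlin noise to get a column height, then fill the column below it.
                    int columnHeight = (int)(Mathf.PerlinNoise(x * 0.05f, z * 0.05f) * height);
                    for (int y = 0; y < columnHeight; y++)
                        data.SetVoxel(x, y, z, grass);
                }

            // To see the result, attach this data to a ColoredCubesVolume (plus its renderer and
            // collider components) as shown in the example scenes, or write it out as a .vdb.
        }
    }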
     
  27. Sinfritz

    Sinfritz

    Joined:
    Nov 18, 2012
    Posts:
    72
    I was trying to use the ConvertToVDB to generate a vdb from image slices, but it seems to not work.

    What I'm trying to do:

    ConvertToVDB -i C:\SomeProject\Assets\Cubiquity\Examples\VolumeData\Voxeliens3 -o test.vdb

    It gives me this error:

    Failed to open input file!
    Unrecognised input format

    I'm not sure what's wrong, but the document specifies you can convert a directory of image slices to vdb.

    Currently using 1.1
     
  28. Sinfritz

    Sinfritz

    Joined:
    Nov 18, 2012
    Posts:
    72
    Here's also a good starting reference on noise generators for building procedural terrain in both 2D and 3D:

    http://libnoise.sourceforge.net/

    There have been several ports of libnoise to Unity, one in particular:

    https://github.com/ricardojmendez/LibNoise.Unity

    You can easily use any of these ports to generate an array of values in [-1, 1] and just convert them to weights to pass to Cubiquity.
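    The conversion itself is just a remap - for the smooth terrain, something like this (a sketch, using the MaterialSet/weights described in the API docs David mentioned):

    Code (CSharp):
    using Cubiquity;

    public static class NoiseToWeights
    {
        // Maps a noise sample in [-1, 1] to a material weight in [0, 255], which is
        // what the smooth (Marching Cubes) terrain expects per voxel.
        public static MaterialSet ToMaterialSet(float noiseValue)
        {
            MaterialSet materialSet = new MaterialSet();
            float normalized = (noiseValue + 1.0f) * 0.5f;         // [-1, 1] -> [0, 1]
            materialSet.weights[0] = (byte)(normalized * 255.0f);  // values above 127 read as solid
            return materialSet;
        }
    }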

    GPU Gems 3, chapter 1, is also a good read to get an idea of how to go about it, as mentioned earlier in the thread.

    Beware, though: generating a noise cube is pretty slow. Something like 128^3 seems reasonable with just Perlin noise and a small octave count, like 4.
     
  29. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    It's just a typo - you meant:

    ConvertToVDB -i C:\SomeProject\Assets\Cubiquity\Examples\VolumeData\VoxeliensLevel3 -o test.vdb

    Note the 'Level' in 'VoxeliensLevel3'. I agree the 'Unrecognised input format' message doesn't make any sense in that context though. Also, be sure to move your created .vdb into the 'VoxelDatabases' folder.

    As for procedural generation, I included SimplexNoise in the examples folder because it was under a public domain license. I'm sure LibNoise is much more flexible though. Actually, I noticed recently that Unity has some hidden noise functions built in:


    Haven't tested them much but I thought it was a good find :)

    Using noise is indeed pretty slow, but I haven't yet checked whether the bottleneck is evaluating the noise itself or writing it into Cubiquity. Maybe there can be some improvements in the latter case. At any rate, I've been generating noise as an offline process and writing it into .vdbs (see the CreateVoxelDatabase example), which I then load at runtime and it's much faster.
     
  30. Sinfritz

    Sinfritz

    Joined:
    Nov 18, 2012
    Posts:
    72
    Ah, that was a typo! But it still didn't work with something I generated, which is why I was trying to convert VoxeliensLevel3 to make sure it works.

    This is what I'm trying to convert: https://dl.dropboxusercontent.com/u/73401173/imageslices_test.rar

    I'm not sure what the difference is... the test's dimensions are 31x14x31.

    Interesting to see those in Unity, that's good to know! But yeah, most of the noise algorithms are slow, and simplex speeds things up dramatically.
     
  31. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    @Sinfritz - It seems to work for me, running the latest 'develop' branch on Windows. I extracted your 'imageslices_test' folder and placed it in the SDK folder, and ran:

    C:\code\cubiquity-for-unity3d\Assets\StreamingAssets\Cubiquity\SDK>ConvertToVDB -i imageslices_test -o my_test.vdb
    Identified input as image slices
    Importing images from 'imageslices_test' and into 'my_test.vdb'
    Found 14 images for import
    [Info ]: Creating empty voxel database as 'my_test.vdb'
    [Debug ]: Memory usage limit for volume initially set to 4Mb (32 chunks of 128Kb each).
    [Debug ]: Memory usage limit for volume now set to 64Mb (512 chunks of 128Kb each).
    [Debug ]: Created new colored cubes volume in slot 0
    Importing image 0
    Importing image 1
    Importing image 2
    Importing image 3
    Importing image 4
    Importing image 5
    Importing image 6
    Importing image 7
    Importing image 8
    Importing image 9
    Importing image 10
    Importing image 11
    Importing image 12
    Importing image 13
    [Info ]: Resizing compressed data buffer to 144307bytes. This should only happen once
    [Debug ]: Deleting volume with index 0

    I've attached the resulting .vdb file. Can you show what it is outputting in your case (it might be different but should be similar)? And which platform are you on?
     

    Attached Files:

  32. Sinfritz

    Sinfritz

    Joined:
    Nov 18, 2012
    Posts:
    72
    It was the same type of error as before:

    Failed to open input file!
    Unrecognised input format

    It might be something to do with the folder path, or maybe even permission to access the files? (I produce the images in a temp path, so it's somewhere in "c:/Users/Username/AppData/Local/Temp/Company Name/Project Name/".) This is on Windows 8.1. I'll try again with a different path. I'm also going to go ahead and pull from Git.
     
  33. Sinfritz

    Sinfritz

    Joined:
    Nov 18, 2012
    Posts:
    72
    Ah, it was the spaces in the folder path that were causing all this. I need to add quotes. So that's totally my fault; it didn't occur to me till now.

    Thanks for looking into it! I guess some sort of error indicating invalid arguments when using the tool would help in the case of this kind of blunder :p
     
  34. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    @Sinfritz - Great, I'm glad you worked it out. If you grab the latest develop version then be aware the latest commit is slightly dodgy - it forces you to put your .vdb in a subfolder of VoxelDatabases rather than in the folder directly. I'll commit a fix for this tomorrow.

    Edit: Now fixed.
     
    Last edited: Oct 19, 2014
  35. Sinfritz

    Sinfritz

    Joined:
    Nov 18, 2012
    Posts:
    72
    Cool, thanks! I did ensure all output is generated to the correct path. I'm using the tool via Unity, so I made sure to use the appropriate paths via Cubiquity.Paths.

    Also, a minor error: when I cancel out of choosing a .vdb file while creating volume data, it gives me this error:

    ArgumentException: Invalid path
    System.IO.Path.GetDirectoryName (System.String path) (at /Users/builduser/buildslave/mono-runtime-and-classlibs/build/mcs/class/corlib/System.IO/Path.cs:215)
    Cubiquity.MainMenuEntries.CreateColoredCubesVolumeDataAssetFromVoxelDatabase () (at Assets/Cubiquity/Editor/MainMenuEntries.cs:102)
     
  36. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    Thanks for this, I'll get it fixed.
     
  37. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    Hey guys,

    We're still hard at work and I wanted to show off the progress we've made on the LOD system. I've mentioned before that we had something in Cubiquity, but it was just a proof of concept and was disabled in previous releases. But I've done quite some work over the last few weeks to bring it up to scratch. The screenshot below shows the different LOD levels rendered in different colors, and if you enlarge the image you can also see the wireframe overlay:


    The system is based on an octree in which the nodes are subdivided based on their distance from the camera. As the camera moves through the scene the octree is updated and new meshes are created on demand. I've only shown a heightmap here but it also works with full volumes (including overhangs, etc) and you can still modify the terrain on the fly.
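    The subdivision criterion itself is simple in principle - something along these lines (purely illustrative, not the actual Cubiquity code):

    Code (CSharp):
    using UnityEngine;

    public static class LodSelection
    {
        // Subdivide an octree node when the camera is closer than some multiple of the
        // node's size, so nearby regions get meshed at a higher resolution.
        public static bool ShouldSubdivide(Bounds nodeBounds, Vector3 cameraPosition, float lodThreshold)
        {
            float distance = Vector3.Distance(nodeBounds.center, cameraPosition);
            return distance < nodeBounds.size.magnitude * lodThreshold;
        }
    }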

    Note that this is done with the smooth terrain. Applying it to the cubic terrain is more difficult (though not impossible), but a different set of constraints applies there anyway, because we already perform 'greedy meshing' to reduce the triangle count even at the highest LOD.

    You can't test this feature quite yet but I think it should be merged into the develop branch in the next couple of weeks. I expect that level of detail for smooth terrains will then be part of the next official release.

    P.S. I should mention that this all works in Unity Free! Although Unity Free doesn't support LOD by itself, this is just a thin wrapper around our own LOD system so it works fine :)
     
    SirStompsalot likes this.
  38. rahul_

    rahul_

    Joined:
    Jul 25, 2013
    Posts:
    24
    Hey David,

    I'm thinking of using Cubiquity to construct volumetric renderings from medical scan images (MRI/CT, etc.). I've been looking at algorithms, and most of them use ray marching to project rays from the camera and then construct a 2D representation of the 3D data from a particular perspective.
    I think such outputs look like the one shown in the following video:


    I talked to you over email and you suggested that Cubiquity would work for creating a 3D representation from the medical scans - however it is different in the sense that it actually creates a 3D mesh. I wanted to ask you about the algorithms that the asset uses to do this. If I understand correctly, it creates a surface-rendering output from the medical scans which can be used in Unity like any other 3D model. Right?

    Does it make a difference whether the final output is a triangle mesh or a 2D projection produced by raycasting? Is there an advantage of one method over the other?

    The second thing you pointed out to me was the missing pipeline to import the images and send them to Cubiquity for processing into 3D output. If it's not too much effort, I can go ahead and implement this, but with my lack of experience in building assets/plugins I would need some help with the detailed steps to follow. If there's a particular section inside your asset where you already create a volume from images and I have to do something similar, please point me in that direction as well.
     
  39. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    Yes, this is correct. Cubiquity can use the Marching Cubes algorithm to convert a scalar field into a polygon mesh. A scalar field just means that each voxel is a single number, and in the case of CT data this represents the density of the material at that point in space. Despite the 'Cubes' in the name, the Marching Cubes algorithm can generate a smooth mesh:

    https://en.wikipedia.org/wiki/Marching_cubes
    http://paulbourke.net/geometry/polygonise/

    You can use Marching Cubes without Cubiquity - just searching for 'Unity Marching Cubes' will throw up some good results. Cubiquity adds a number of features such as level-of-detail and the ability to paint textures onto the mesh (which is how we use it for terrain rendering).

    The method you choose is highly dependent on your application. As a very general rule of thumb, if you are interested in just visualizing the data then raycasting has some advantages, in that you have more control over the way scalar values are mapped to colors, and you can also get transparency. You can also change these parameters at runtime to better explore the data. On the other hand, if you are interested in interaction (e.g. virtual surgery) then a mesh might be better, as you can combine it with other elements of your scene, edit it, apply shaders, collide with it, export it for use in other programs, etc.

    The two techniques are complementary (and can even be combined), and a real medical visualization application will probably make use of both.

    There are some existing examples (e.g. procedural generation), but I think they are a little complex compared with what you are trying to do. Actually a much simpler example would probably benefit other people as well, and shouldn't take me long to write. The example will just show how to create a volume at runtime and write example data into it. You will then need to use Unity or .Net image classes to read your real data, and write it into the volume in the same way that the example will do (I can help with this too when the time comes).
     
  40. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    @rahul_ - I have now added a simple example of building a volume from code, which you could use as a starting point for your system. The example creates a volume and fills it with Simplex noise.


    The code can be seen here: https://bitbucket.org/volumesoffun/...rrainVolume/SimpleTerrainVolume.cs?at=develop

    That script should be attached to an empty game object in Unity.

    The code has been committed to our Git repository on BitBucket. You will want to check out the 'develop' branch, open it in Unity, and open the 'Cubiquity->Examples->CreatingVolumes->SimpleTerrainVolume->Main' scene. When you press 'Play' the volume should then appear.

    You will want to modify the code to read voxel values from your images, and write them into the volume instead of the example simplex noise values. When trying to understand the code, be sure to read the API docs and in particular the MaterialSet docs.

    I can help you further (e.g. changing the material), but you have a lot to take in first :)
     
  41. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    On another note I've been working on a heightmap importer for Cubiquity. This allows you to take heightmaps generated with tools such as WorldMachine and L3DT (as well as other programs) and convert them into Cubiquity volumes as an offline process. You can then apply the usual Cubiquity effects such as sculpting and other runtime editing of the terrain.

    Here's an example of a heightmap imported into Cubiquity:



    Looks pretty good huh? Obviously it needs a color map added, and I should be able to show the LOD system working on it as well. More screenshots of that soon :)

    P.S. This terrain is 512x512x64 voxels, and was generated from a heightmap of 2048x2048 pixels (downsampling improves the quality). The resulting voxel database (.vdb) is about 11 MB.
     
  42. rahul_

    rahul_

    Joined:
    Jul 25, 2013
    Posts:
    24
    Hey David,

    Thanks a lot for the detailed explanation. I was able to replicate the SimplexNoiseExample easily at my end to get some good ideas about this. Nice to see that there's a lot of documentation around for Cubiquity - I skimmed through the explanation of Marching Cubes on Paul Bourke's website but didn't want to get into too many of the gory details. There's a section on Cubiquity's website that mentions that the algorithm mostly makes use of the density field, which I wanted to clarify in simple terms. What exactly is the density field in this context?

    From what I understand, in Marching Cubes a mesh is created from triangle facets passing through cube edges. What is the density field then? How does it help in building the model, and how is it captured from the scalar field values of the medical data?
     
  43. rahul_

    rahul_

    Joined:
    Jul 25, 2013
    Posts:
    24
    About adapting the algorithm for medical image slices:

    I just tried to change the algorithm for SimpleTerrainVolume from using noise values to using data from image slices (256x256 in my case). Initially, I did this with only ten images to see if the output was working - I did get an output volume, but I couldn't really judge the correctness of the code from just 10 image scans, so I tried to use all hundred image slices.

    Here’s what I’m doing right now: http://pastebin.com/3GHm10i2 for the this particular medical images dataset.
    My understanding was to:
    - load the textures (after enabling their read/write properties in Unity) into the application,
    - go through the textures calling GetPixel on the images (also getting the grayscale values of only the highlighted features in the scans),
    - assign weights to a MaterialSet (mapping the values obtained in the previous step onto a 0-255 scale) and keep adding voxels to the volume data via volumeData.SetVoxel(x, y, z, materialSet);

    Q1. However, I'm not sure if that's the proper way to get the final geometry. Do all the pixel values from the medical image scans have to be read and assigned weights to get the shape?

    Q2. In the simple noise example, I don't really understand how the shape is formed just by assigning values to the material weights. I read on the website that "the density value (required for Marching Cubes) is not stored explicitly but is instead computed on-the-fly as the sum of these weights", but I don't really understand what that means in plain, simple language.

    Q3. Is this process usually slow because it entails A LOT of calculations? I just tried the code in my pastebin link once and it kept running for more than 10 minutes while trying to go through all the pixel values in the images.

    Q4. I couldn't really find a way to change the material to get something like this. What I was actually getting was some kind of default material for the terrain. Where do I change the material, and how do I give the model color variation as per the MRI/CT scan image?
     
  44. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    By the sounds of it you are going in the right direction :)

    A density field is a kind of scalar field - for your purposes they are the same thing. Technically a CT scanner is measuring the density at each point of the volume, and this density is a single number (that is to say, it is a scalar rather than a vector). The Marching Cubes algorithm looks at each of your voxel values and compares it to a threshold (127, in the case of Cubiquity) to decide if it is solid or empty, and it generates a mesh separating these.

    You can also ignore all the weights except for weight[0]. These are for more advanced uses related to multiple materials. Just set weight[0] to your desired value, leave the others at zero, and the sum will then be the same as weight[0].

    It sounds like you are doing it about right. You read a pixel value, map it to the 0-255 range, and write it to weight[0] of the corresponding voxel. Voxels above 127 will then be considered solid and the others will be empty.

    As mentioned above, for your purposes you can ignore the summing aspect and just work with weight[0].
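    In code, one image slice would go in roughly like this (a sketch - it uses MaterialSet/SetVoxel as in the API docs, and assumes your slices are stacked along the z axis):

    Code (CSharp):
    using UnityEngine;
    using Cubiquity;

    public static class SliceImporter
    {
        // Writes one grayscale image slice into the volume at depth 'zSlice'.
        // The texture needs Read/Write enabled in its import settings.
        public static void WriteSlice(TerrainVolumeData volumeData, Texture2D slice, int zSlice)
        {
            for (int y = 0; y < slice.height; y++)
            {
                for (int x = 0; x < slice.width; x++)
                {
                    // Map the pixel's grayscale value (0..1) to a weight in 0..255.
                    byte density = (byte)(slice.GetPixel(x, y).grayscale * 255.0f);

                    MaterialSet materialSet = new MaterialSet();
                    materialSet.weights[0] = density; // values above 127 will be treated as solid

                    volumeData.SetVoxel(x, y, zSlice, materialSet);
                }
            }
        }
    }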

    The process of building the mesh should be very fast (just a few seconds). Was it fast when doing the simplex noise? Actually, I suspect your image loading is the problem, as you are using Unity textures. This *possibly* means the images are being loaded onto the GPU and then retrieved from the GPU one pixel at a time. You should test this, perhaps by commenting out the call to GetPixel() and just setting weight[0] to 255 or something.

    If the image loading is indeed the bottleneck then perhaps look for other image loading approaches, such as whatever is built into .Net (I'm a bit vague here as to what is available).

    It is possible to change the material used by the 'Terrain Volume Renderer' component. Try something like this at the end of your script:

    Code (CSharp):
    gameObject.GetComponent<TerrainVolumeRenderer>().material = Instantiate(Resources.Load("Materials/MaterialSetDebug", typeof(Material))) as Material;
    It should then be red. To change the color have a look at this line of the MaterialSetDebug shader:
    Code (CSharp):
    o.Albedo = half4(1.0, 0.0, 0.0, 1.0) * materialStrengths.x;
    Getting different colors across the surface is more complex - how would you choose the color? All points on the surface have the same density (given by the threshold) so you can't color by that. Having multiple colors in a single volume is where raycasting starts to have an advantage.
     
  45. rahul_

    rahul_

    Joined:
    Jul 25, 2013
    Posts:
    24
    Yes, it was indeed quite fast with the simplex noise example. I just tried commenting out the GetPixel part and hard-wiring a value, which led to a Terrain Volume on the screen in about a minute (30 secs for the main for loop that goes through the x, y and z values, and approximately another 30 for whatever processing the plugin does with the volume data).

    Just for the record, I'm currently on a MacBook Pro with an Intel Core i7 and 4 GB RAM, but a stupid Intel HD 4000 card. I'll try and see if changing to a machine with a good graphics card makes a difference, and also if there's something I can use instead of Unity's GetPixel().
     
    Last edited: Nov 12, 2014
  46. rahul_

    rahul_

    Joined:
    Jul 25, 2013
    Posts:
    24
    @DavidWilliams Ok, so I just tried it out on a decent PC and was getting quite good results with the same code above. Here's what the output looks like:



    Some problems that I wanted to clarify:
    i. The depth of the generated model volume seems a bit abnormal. The primary loop that goes through the image slices looking at the pixels uses the image's y dimension for the depth of the model, but I'm not sure why the model comes out like this (a bit elongated) in the second image above.

    ii. For interaction in the application, I'm looking to create a movable cross-section plane. The cross-section plane should remove the part of the mesh above it and show the internal parts of the model behind the plane. Something like the following figure (borrowed from another forum):


    This seems to be such a common use case for these applications, but I wonder how it is done. Would a shader like this work for this kind of clipping? http://forum.unity3d.com/threads/shader-for-clipping-plane.11832/#post-196958

    The author applies a vertex clipping shader to the objects that have to be clipped. So, technically, if I just apply that same "vertex lit clipped" material to the generated model, will it behave as expected?
     
    Last edited: Nov 13, 2014
  47. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    Nice to see it's coming together! The performance may indeed struggle on a laptop, and of course your volume is larger than the simplex noise one I made as an example. If you do want to develop on your laptop you could always try batch resizing your input images to a more manageable size.

    CT image data often has different scaling along different axes. Image data is often supplied in DICOM format, in which case you can determine the correct scaling, but this is probably lost in your case as you only have the .jpg slices to work with. You should probably just change the scaling on the transform component until it looks 'about right'.
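    That's just a per-axis scale on the volume's game object - for example (the factor here is made up; adjust until the proportions look right):

    Code (CSharp):
    using UnityEngine;

    public class SliceSpacingFix : MonoBehaviour
    {
        // Hypothetical ratio of slice spacing to in-plane pixel spacing.
        public float sliceSpacingRatio = 2.5f;

        void Start()
        {
            // Assumes the slices were written along the z axis of the volume.
            transform.localScale = new Vector3(1.0f, 1.0f, sliceSpacingRatio);
        }
    }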

    This is relatively difficult when working with a mesh compared to raycasting the volume data, but it should still be possible to some degree. It's a bit outside my area of expertise though. You will probably be able to find ways to use custom clip planes or shaders to discard part of the geometry, but filling in the 'gap' (the hatched grey area in your figure) may be hard. You'll have to investigate which existing solutions can do this to a mesh, and consider it largely separate from Cubiquity.

    It may work, and I also found this one: http://forum.unity3d.com/threads/simple-cross-section-shader.34508/

    However, I would encourage you to test these out on 'normal' Unity meshes first. When you find an effect you like and get it working on normal meshes, we can then think about how it can be combined with Cubiquity (maybe it just works, but I'm not sure).
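    For those experiments, the C# side usually just passes a plane to the material and lets the shader do the clipping - a sketch below, where the '_PlanePoint'/'_PlaneNormal' property names are hypothetical and need to match whichever clipping shader you end up using:

    Code (CSharp):
    using UnityEngine;

    public class ClipPlaneDriver : MonoBehaviour
    {
        public Transform clipPlane;      // an empty transform marking the cutting plane
        public Material clippedMaterial; // a material whose shader discards fragments behind the plane

        void Update()
        {
            // The shader itself must implement the actual clip/discard against these values.
            clippedMaterial.SetVector("_PlanePoint", clipPlane.position);
            clippedMaterial.SetVector("_PlaneNormal", clipPlane.up);
        }
    }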
     
  48. rahul_

    rahul_

    Joined:
    Jul 25, 2013
    Posts:
    24
    Why would it be harder to achieve with the mesh approach than with raycasting? I'm trying to get my head around the fact that raycasting produces a 2D projection (it doesn't have any actual geometry inside), yet the clipping plane will still be able to expose internal parts. Does this mean writing the clipping-plane handling inside the raycasting shader itself, so that the shader also renders the part of the volume exposed by the clipping plane?
     
  49. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    Because the input to a raycasting shader is the full volume data, so when a clipping plane exposes the internals the raycasting shader can work out what is inside. By contrast, an extracted mesh is hollow and is basically just a shell. This makes cut-away views a bit more challenging.

    Basically yes, the raycasting shader has to be written to handle custom clipping planes. I don't know whether the Unity raycasting shader I mentioned previously will provide this feature but I would suspect not.
     
  50. DavidWilliams

    DavidWilliams

    Joined:
    Apr 28, 2013
    Posts:
    522
    Hi guys, we're making more progress on the LOD support and heightmap import, and can now show these two things working together :) Here is the input image:


    And here are some views of the voxelized version with working level of detail (even in Unity free).






    Next up I'll try to get a color map applied :)