
3D Volume Rendering using Raymarching Demo

Discussion in 'Made With Unity' started by brianasu, Jan 25, 2012.

  1. LinQuestSMSA

    LinQuestSMSA

    Joined:
    Apr 19, 2014
    Posts:
    8
    Hi Brianasu,
    I am very impressed with your work. I have a Unity5 3D world for immersive collaboration and analytic fusion. For the medical domain I'd like to extend this to operate with the Rift and have remote surgeons, as avatars, interact with the 2D MRI atlas. Have you looked at Oculus at all? If so, did you have any issues with the Oculus fov? Frame pipeline? Any tips?
    Thanks,
    Art
     
  2. brianasu

    brianasu

    Joined:
    Mar 9, 2010
    Posts:
    369
    Sorry, I wouldn't know. I don't have any VR hardware to test on.
     
  3. micma909

    micma909

    Joined:
    Apr 16, 2015
    Posts:
    1
    Hi, this is such a great project!
    What would you say it would take to get this running in an Android/iOS environment? I'm fairly new to programming on mobile devices and I tried deploying it to Android but never got it past rendering anything but the back-buffer...
     
  4. elenzil

    elenzil

    Joined:
    Jan 23, 2014
    Posts:
    73
    Very very nice, thanks for sharing this.

    Note: if you have 3D textures, you can do volumetric rendering in a fixed-function manner by manipulating the texture coordinates on a stack of camera-aligned quads. Here's an ancient paper I helped author on that approach - ah, but the actual paper isn't accessible; well, the abstract is there.
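    (If it helps, here is a rough sketch of that quad-stack idea as a Unity C# script. It only supplies the geometry and the 3D texture coordinates; it assumes you already have a Texture3D and a material whose shader samples it with the interpolated coordinate, and the names and defaults below are placeholders rather than anything from this project.)

    Code (CSharp):
    using UnityEngine;

    // Draws a volume as a stack of view-aligned quads whose 3D texture
    // coordinates are derived from their object-space positions.
    [RequireComponent(typeof(MeshFilter), typeof(MeshRenderer))]
    public class QuadStackVolume : MonoBehaviour
    {
        public int sliceCount = 128;   // number of quads in the stack
        public float halfSize = 0.5f;  // half-extent of each quad

        Mesh mesh;

        void LateUpdate()
        {
            if (mesh == null)
            {
                mesh = new Mesh();
                GetComponent<MeshFilter>().sharedMesh = mesh;
            }

            Camera cam = Camera.main;
            if (cam == null) return;

            // Camera basis expressed in this object's local space,
            // so the quads stay view-aligned as the volume rotates.
            Vector3 right   = transform.InverseTransformDirection(cam.transform.right).normalized * halfSize;
            Vector3 up      = transform.InverseTransformDirection(cam.transform.up).normalized * halfSize;
            Vector3 forward = transform.InverseTransformDirection(cam.transform.forward).normalized;

            var vertices = new Vector3[sliceCount * 4];
            var uvw      = new System.Collections.Generic.List<Vector3>(sliceCount * 4);
            var indices  = new int[sliceCount * 6];

            for (int i = 0; i < sliceCount; i++)
            {
                // Slices ordered back-to-front for alpha blending.
                float t = 0.5f - (float)i / (sliceCount - 1);
                Vector3 center = forward * t;

                int v = i * 4;
                vertices[v + 0] = center - right - up;
                vertices[v + 1] = center + right - up;
                vertices[v + 2] = center + right + up;
                vertices[v + 3] = center - right + up;

                // 3D texture coordinate = local position shifted so the
                // volume centre maps to (0.5, 0.5, 0.5).
                for (int k = 0; k < 4; k++)
                    uvw.Add(vertices[v + k] + new Vector3(0.5f, 0.5f, 0.5f));

                int n = i * 6;
                indices[n + 0] = v + 0; indices[n + 1] = v + 1; indices[n + 2] = v + 2;
                indices[n + 3] = v + 0; indices[n + 4] = v + 2; indices[n + 5] = v + 3;
            }

            mesh.Clear();
            mesh.vertices = vertices;
            mesh.SetUVs(0, uvw);
            mesh.triangles = indices;
            mesh.RecalculateBounds();
        }
    }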
     
  5. LinQuestSMSA

    LinQuestSMSA

    Joined:
    Apr 19, 2014
    Posts:
    8
    Hi Brianasu,
    I'm revisiting your code and had a quick question. I'm using Unity3D 5.3.1f1 Pro. How would I change the opacity at runtime? Without changing it, the user can move around/through the volume and everything looks great. When I try to change it through a GUI control, I get weird clipping as the user moves around, but it restores the image correctly after a second or two with the correct opacity.
    Thanks,
    Art

    [SOLVED]
    The issue has to do with the MouseLook component on the avatar. The user can disable the MouseLook camera movement via a GUI button so that he/she can manipulate the GUI slider (screen space) to change the opacity. When the MouseLook component was re-enabled, the weird clipping was there. I'm no longer disabling the MouseLook component; I'm just using a simple public bool to shut off the Update code in MouseLook, and that seemed to work. Don't understand it, but at least I'm no longer getting the strange clipping. Thx.
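    (In case it helps anyone hitting the same thing: the "public bool" gate I mean is nothing fancier than the snippet below. It is a stripped-down stand-in, not the stock MouseLook script, which differs between Unity versions.)

    Code (CSharp):
    using UnityEngine;

    // Instead of disabling the whole MouseLook component, keep it enabled
    // and just skip its per-frame work while the user is using the GUI.
    public class MouseLookGate : MonoBehaviour
    {
        public bool lookEnabled = true;   // toggled from a GUI button
        public float sensitivity = 2f;

        void Update()
        {
            if (!lookEnabled)
                return;   // component stays enabled; only the look update is skipped

            float yaw   = Input.GetAxis("Mouse X") * sensitivity;
            float pitch = -Input.GetAxis("Mouse Y") * sensitivity;
            transform.Rotate(pitch, yaw, 0f, Space.Self);
        }
    }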
     
    Last edited: Feb 10, 2016
  6. brianasu

    brianasu

    Joined:
    Mar 9, 2010
    Posts:
    369
    @elenzil I've actually implemented it that way before: basically, you use the GPU to align the quads in the vertex shader and render them through the volume. I haven't really compared which is faster, but I guess it would run on lower shader models.

    @ArtHughes The script on the camera actually clips the cube mesh in real time, so it might be that. It's probably better to just clip it in the shader. I've done this before, but unfortunately I lost the original code.
     
    arumiat likes this.
  7. LinQuestSMSA

    LinQuestSMSA

    Joined:
    Apr 19, 2014
    Posts:
    8
    Hi Brianasu,
    Thanks for the response. The run-time opacity adjustment by the user is working great. I'm just using OnRenderImage to do that. I have also extended it to cube RGB and black threshold controls.

    FYI: You can apply this to hyperspectral analysis. I have cube normalization working and am currently working on allowing the users to enable/disable selectable ENVI slices including the manipulation of an individual slice's intensity/alpha through the GUI and shader.
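    (For reference, the runtime opacity control is essentially just the snippet below: OnRenderImage sets a float on the raymarching material before blitting. The "_Opacity" property name is only illustrative; use whatever your shader actually exposes.)

    Code (CSharp):
    using UnityEngine;

    // Drives an opacity parameter of an image-effect style volume shader at runtime.
    [RequireComponent(typeof(Camera))]
    public class VolumeOpacityControl : MonoBehaviour
    {
        public Material volumeMaterial;               // material used by the raymarching pass
        [Range(0f, 1f)] public float opacity = 1f;    // driven by a GUI slider

        void OnRenderImage(RenderTexture src, RenderTexture dst)
        {
            volumeMaterial.SetFloat("_Opacity", opacity);
            Graphics.Blit(src, dst, volumeMaterial);
        }
    }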
     
  8. kw123472

    kw123472

    Joined:
    Mar 22, 2016
    Posts:
    1
    Hello ArtHughes, can you run this demo successfully? I just get a black scene with a blue background. What version of Unity and the Web Player are you using? Thank you.
     

    Attached Files:

  9. Dazzid

    Dazzid

    Joined:
    Apr 15, 2014
    Posts:
    61
    Hi Brains,
    Very nice project! Thanks a lot for sharing.
    I would like to ask how to tell Unity to read another folder with MRI images. I can't find where it loads the layers of MRI images. I have other subjects and the files are .png. Is that a problem?
    Thanks for the help
     
  10. zalo10

    zalo10

    Joined:
    Dec 5, 2012
    Posts:
    21
    It does work in VR! A very neat effect; I just spent 15 minutes following the trail of the "empty cavity" that runs from ear to ear in the Vive in 5.4b17.

    However, the "SliceMesh" script seems to be breaking it, even without VR in 5.4b17. Almost everything appears to work with it disabled; I assume its purpose is to reshape the volume such that the camera can "intersect" the object...

    Anywho, top notch effect! It's visually stunning.
     
  11. gferrand_UofM

    gferrand_UofM

    Joined:
    Jan 19, 2016
    Posts:
    8
    Thank you very much for sharing this. This year I have been working on a project for scientific visualization using Unity, and it's the only usable demo I found that does volume rendering with Unity, so it was really helpful to get me started (I have some background in computer science, but not specifically in computer graphics).

    To me, what you are doing really is "ray casting"; I believe the 3D community uses "ray marching" for the method that relies on distance functions to draw pre-defined shapes. The point of what we are doing here, sampling the data cube in a systematic way, is that the content of experimental/observational data reveals itself.

    Regarding the ray casting, I noticed a problem: you are using a fixed number of steps per ray, so shorter rays (when looking at the cube at an angle) are more densely sampled, which artificially increases the intensity and creates artefacts at the edges and corners of the cube (when it is filled with data). To render the volume uniformly you want regularly spaced steps. I naively tried to set the number of iterations as a function of the ray length, but GPUs want to know the number of iterations in advance, so instead one can set a maximum number of steps and do early termination.
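    (To make that concrete, here is a rough sketch of the sampling loop I mean, written as plain C# for readability rather than as the actual shader code; the sampleVolume delegate stands in for the 3D texture lookup, and the constants are arbitrary.)

    Code (CSharp):
    using UnityEngine;

    // Uniform ray sampling with a fixed step length, a constant maximum
    // iteration count (which the GPU wants to know in advance), and early
    // termination once the ray is exhausted or the accumulated opacity saturates.
    public static class UniformRayMarch
    {
        public const int   MaxSteps   = 512;
        public const float StepLength = 0.01f;   // in volume (texture) units

        public static Color March(Vector3 entry, Vector3 dir, float rayLength,
                                  System.Func<Vector3, Color> sampleVolume)
        {
            Color accum = Color.clear;

            for (int i = 0; i < MaxSteps; i++)
            {
                float t = i * StepLength;
                if (t > rayLength || accum.a >= 0.99f)
                    break;   // early termination

                Color s = sampleVolume(entry + dir * t);

                // Front-to-back "over" compositing with the fixed step weight,
                // so intensity no longer depends on how long the ray is.
                accum.r += (1f - accum.a) * s.a * s.r;
                accum.g += (1f - accum.a) * s.a * s.g;
                accum.b += (1f - accum.a) * s.a * s.b;
                accum.a += (1f - accum.a) * s.a;
            }
            return accum;
        }
    }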

    The algorithm used to compute all the rays at once, by taking the difference of the front and back face positions, is really neat, but I was concerned that it requires 3 shader passes. Apparently in Unity this requires the use of render textures, and these apply to the entire screen, so it breaks the metaphor of a shader/material painting a given object in the scene. To me, being volume-rendered should be a property of the data cube, not of any particular camera looking at it. And when I tried the demo in VR (in a CAVE), where multiple cameras are automatically spawned by the middleware, the stereoscopy was broken. I don't know if this can be fixed; I didn't look into it. Instead I made my own version, with a single shader, that computes the ray intersections and does the integration in a single pass. For this I re-used a demo from the NVIDIA OpenGL SDK 10 Code Samples (http://developer.download.nvidia.com/SDK/10/opengl/samples.html). I have also included slicing and thresholding of the data cube, so that everything is done on the GPU. I have posted a demo here if you want to try it: https://github.com/gillesferrand/Unity-RayTracing
     
    Circool, willyci and elenzil like this.
  12. elenzil

    elenzil

    Joined:
    Jan 23, 2014
    Posts:
    73
    awesome.
     
  13. Fomstat

    Fomstat

    Joined:
    Aug 2, 2016
    Posts:
    1
  14. ktaswell

    ktaswell

    Joined:
    Oct 8, 2015
    Posts:
    2
    I've also gotten it working in the Oculus Rift. For some reason I can't get the rotation of the rendered material to work.
    Logically it would just mean controlling the rotation of the cube the shader is rendered on, right? Except it's rotating the cube with the material remaining static, unaffected by the cube's rotation. Anyone able to help me with this?
     
  15. gferrand_UofM

    gferrand_UofM

    Joined:
    Jan 19, 2016
    Posts:
    8
    Myself I have not looked into any optimization techniques yet, but there is most probably room for performance improvements. (My own data are astronomy data, but not unlike MRI data.)
     
  16. gferrand_UofM

    gferrand_UofM

    Joined:
    Jan 19, 2016
    Posts:
    8
    In the original version, the volume-rendering shader is not attached to the cube but to the camera. This caused me trouble in VR. Maybe the last paragraph of my first post above can help you with this.
     
  17. ktaswell

    ktaswell

    Joined:
    Oct 8, 2015
    Posts:
    2
    Ah right! Thanks. I should've checked out your demo sooner, it's great!

    In your demo, from what I can see, the Data Cube object does in fact have the shader attached instead of it being attached to the camera. However, your starting data is a 3D texture (and I'm starting with Texture2Ds), so I'm going to try to modify it to see if I can create that Texture3D from a set of Texture2Ds and then see if that works well.
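    (Something along these lines should do the 2D-to-3D conversion; it is a rough sketch that assumes all slices are the same size, readable, and already ordered, with names that are just placeholders.)

    Code (CSharp):
    using UnityEngine;

    // Builds a Texture3D from an ordered array of same-sized, readable Texture2D slices:
    // slice 0 becomes depth layer 0, slice 1 becomes layer 1, and so on.
    public static class VolumeFromSlices
    {
        public static Texture3D Build(Texture2D[] slices)
        {
            int width  = slices[0].width;
            int height = slices[0].height;
            int depth  = slices.Length;

            var voxels = new Color[width * height * depth];
            for (int z = 0; z < depth; z++)
            {
                // GetPixels returns the slice row by row, matching the
                // x-then-y ordering Texture3D.SetPixels expects within a layer.
                Color[] slice = slices[z].GetPixels();
                System.Array.Copy(slice, 0, voxels, z * width * height, slice.Length);
            }

            var volume = new Texture3D(width, height, depth, TextureFormat.RGBA32, false);
            volume.wrapMode = TextureWrapMode.Clamp;
            volume.SetPixels(voxels);
            volume.Apply();
            return volume;
        }
    }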
     
  18. willyci

    willyci

    Joined:
    Feb 10, 2017
    Posts:
    6
    Hi gferrand_UofM, thanks for sharing your code, it works great in Unity 5.5.0f3. I've got a question: how did you convert all the DICOM files into a single raw file? Thanks!
     
  19. gferrand_UofM

    gferrand_UofM

    Joined:
    Jan 19, 2016
    Posts:
    8
    Hi willyci, I have never used DICOM files, I don’t work in medicine but astronomy, and our own data came in the form of 3D cubes (and I prefer to manipulate a single file and load it as a single texture). For the purpose of this demo, I included the skull.raw file from the original XNA demo by Kyle Hayward, I don’t know how it was made (I believe it comes from the volvis.org website, which is down). In Brian Su’s original Unity project the 3D texture is built from a list of images, so you can probably re-use his code.

    If you assemble the data cube outside of Unity, you just have to be careful about the order in which the axes are stored in memory, so that data coordinates are mapped correctly inside Unity – you may need to do some transpositions. Note that, in my shader, I swap the last two coordinates when doing the texture lookup, to be in the (right-handed) coordinate system made of Unity’s axes x (red), z (blue), and y (green) – then the skull looks upright as you’d expect.
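    (For what it's worth, loading such a headerless .raw file at run time can be as simple as the sketch below, assuming an 8-bit volume whose dimensions you know in advance and which is stored with x varying fastest; the file name, dimensions, and any axis permutation depend entirely on your data.)

    Code (CSharp):
    using System.IO;
    using UnityEngine;

    // Loads a headerless 8-bit .raw volume into a Texture3D.
    public static class RawVolumeLoader
    {
        public static Texture3D Load(string path, int width, int height, int depth)
        {
            byte[] bytes = File.ReadAllBytes(path);

            var voxels = new Color[width * height * depth];
            for (int z = 0; z < depth; z++)
                for (int y = 0; y < height; y++)
                    for (int x = 0; x < width; x++)
                    {
                        // If your cube was written with a different axis order,
                        // permute (x, y, z) here instead of transposing in the shader.
                        int i = x + width * (y + height * z);
                        float v = bytes[i] / 255f;
                        voxels[i] = new Color(v, v, v, v);
                    }

            var volume = new Texture3D(width, height, depth, TextureFormat.RGBA32, false);
            volume.wrapMode = TextureWrapMode.Clamp;
            volume.SetPixels(voxels);
            volume.Apply();
            return volume;
        }
    }

    A call like RawVolumeLoader.Load(Path.Combine(Application.streamingAssetsPath, "skull.raw"), 256, 256, 256) then gives you a texture to assign to the material; the 256s are placeholders for your cube's actual dimensions.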
     
  20. grobm

    grobm

    Joined:
    Aug 15, 2005
    Posts:
    217
    Hello,

    I have been reading this thread for a while now. I have a simple question about getting CT scan image(s) to work with the 3D texture method above. I am looking to use this in an educational cancer-detection simulation. I had something like this working in Unity 5.3 a while ago, but now the client wants to upgrade and it's not working anymore. I am looking for a simple DICOM-style effect that can be culled by another object's collision.

    I have been looking at:
    https://github.com/gillesferrand/Unity-RayTracing

    but I am not getting it to work in Unity 5.5.1f1.
     

    Attached Files:

  21. grobm

    grobm

    Joined:
    Aug 15, 2005
    Posts:
    217
    Never mind... with very little effort I was able to change the code from using a single texture to a grid of images. Thanks for this post and for sharing. I will also post a package here once I clean it up. It is a fun VR experience with the Oculus Touch, HoloLens or Vive. This group rocks!!

     
  22. willyci

    willyci

    Joined:
    Feb 10, 2017
    Posts:
    6

    I am doing the same, looking forward to your package.
     
  23. willyci

    willyci

    Joined:
    Feb 10, 2017
    Posts:
    6
    After a few days of research, I finally figured out the RAW file format.
    The reason I want to use it is that it is a single file, easy to load at run time.
    First, it needs to be non-diffusion-weighted DICOM images.
    Using 3D Slicer, open the DICOM files, click Save, and change the format to NRRD (.nhdr); it will give you two files, skull.nhdr and skull.raw.gz. Uncompress the .gz with 7-Zip, and that will give you the raw-format file.
    I hope this info will help someone.

    other info
    https://www.slicer.org/wiki/Documentation/4.6/Modules/DWIConverter
    For non-diffusion weighted dicom images, it loads in an entire dicom series and writes out a single dicom volume in a .nhdr/.raw pair.

    http://teem.sourceforge.net/nrrd/
    Besides dimensional generality, nrrd is flexible with respect to type (8 integral types, 2 floating point types), encoding of written files (raw, ascii, hex, or gzip or bzip2 compression), and endianness (the byte order of data is explicitly recorded when the type or encoding expose it)
     
    tlooms and Tom-Mensink like this.
  24. Snouto

    Snouto

    Joined:
    May 27, 2013
    Posts:
    9
    Just wondering if you ever got around to cleaning up your package? I'm looking in the same area by the sounds of it so would very much like to see what you put together with regards to loading multiple DICOM slices.

    Cheers
     
  25. tlooms

    tlooms

    Joined:
    Sep 24, 2016
    Posts:
    1
     
  26. gferrand_UofM

    gferrand_UofM

    Joined:
    Jan 19, 2016
    Posts:
    8
    Thanks, but you are relying on analytic distance functions, so you have defined beforehand what shapes you want to draw. This is not really relevant to the demos in this thread, which are meant for data visualization, where you don't know in advance what there is to be drawn.
     
  27. SimonTsungHanGuo

    SimonTsungHanGuo

    Joined:
    Dec 19, 2017
    Posts:
    1
    Hi brianasu,

    Thanks for sharing! I have downloaded this project from GitHub and run it as shown in the attached screenshot (螢幕快照 2017-12-31 下午3.56.19.png), but I cannot control it by holding the right mouse button. Any suggestions?
     
    Last edited: Dec 31, 2017