
Kinect plugin

Discussion in 'Made With Unity' started by bliprob, Nov 21, 2010.

  1. bliprob

    bliprob

    Joined:
    May 13, 2007
    Posts:
    901
I've got a Unity plugin for Kinect working. Here's a screenshot. For simplicity's sake, the plugin is stuffing the depth info into the alpha channel of the texture.



    Next I'll work on polygonizing the depth buffer... although a shader might be simpler. Does anyone have a parallax shader that uses one RGBA texture, where the alpha channel is depth?
     
  2. elias_t

    elias_t

    Joined:
    Sep 17, 2010
    Posts:
    1,367
    Cool!
    Do you plan to release it?
     
  3. MrRudak

    MrRudak

    Joined:
    Oct 17, 2010
    Posts:
    159
Coooooool!!!
     
  4. LamentConfig

    LamentConfig

    Joined:
    Sep 28, 2010
    Posts:
    292
That's pretty dang sweet :)
     
  5. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
Are you sure you want to map it to a parallax shader? It doesn't seem like you would get the full representation of depth that way, or, more importantly, the ability to just use the depth values from the image to script objects or particles into the scene, etc. If you have the plugin reading the camera images into a texture, that's the basic functionality for any number of implementations after that - I would love to see what you've got working.

I have been working on getting Kinect into Unity this weekend also. I got the motor and the LED lights working, and I can read the serial data and turn the cameras on and off, but I am having difficulty figuring out how to read the image data from the pointer in the DLL.

I am using the DLL from codelaboratories.com/nui/ - are you using that, or are you using the OpenKinect code from
    https://github.com/OpenKinect/openkinect/ ?

This is the format of the camera capture function in the DLL - any suggestions?

    It's the PBYTE pData I can't figure out. I am trying this in the Unity C# code:
    Code (csharp):

        [DllImport("CLNUIDevice.dll", EntryPoint = "GetNUICameraColorFrameRGB24")]
        private static extern bool GetCameraColorFrameRGB24(System.IntPtr camera, System.IntPtr data, int timeout);

    but getting what I think should be a byte array out of the data is not working out for me so far.
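
    For reference, the pattern I am attempting is to pin a managed array and pass its address to the call above. A rough sketch - the buffer size assumes the 640x480 RGB24 frame, and everything beyond the DllImport is my own guesswork, not CL NUI sample code:
    Code (csharp):

        using System;
        using System.Runtime.InteropServices;

        // 640 x 480 pixels x 3 bytes per pixel (RGB24)
        private byte[] frame = new byte[640 * 480 * 3];

        private bool GrabColorFrame(IntPtr camera)
        {
            // Pin the managed array so the GC can't move it while the native
            // call writes into it, then pass its address as the data pointer.
            GCHandle handle = GCHandle.Alloc(frame, GCHandleType.Pinned);
            try
            {
                return GetCameraColorFrameRGB24(camera, handle.AddrOfPinnedObject(), 100);
            }
            finally
            {
                handle.Free();   // always unpin, even on failure
            }
        }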
     
  6. bliprob

    bliprob

    Joined:
    May 13, 2007
    Posts:
    901
    I wrote a plugin that uses the OpenKinect library. Seemed simplest at the time.

The plugin will also give you the texture without alpha, and an array of depth values at full precision. I added the depth to the texture alpha solely so I could quickly determine whether things were working -- I'm not sure how long it will take me to write code to polygonize the depths. (And it seemed like a no-brainer to add depth in a shader rather than churning out fifty thousand polygons.) It's only three bits less precision, and it avoided the pain of passing arrays around to native code, which I always find annoying.

I'm not sure how that library works, but the depths aren't bytes -- they are 11-bit integers, and OpenKinect stores them as unsigned 16-bit ints, which Mono maps to the ushort type. So maybe you need to make your array ushort instead of byte.
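
    To illustrate the alpha trick, the packing is basically just a shift per pixel - a rough sketch, with 'depths' and 'tex' as made-up names rather than my actual plugin code:
    Code (csharp):

        // Pack the 11-bit depths into the texture's 8-bit alpha channel.
        Color32[] pixels = tex.GetPixels32();
        for (int i = 0; i < depths.Length; i++)
        {
            pixels[i].a = (byte)(depths[i] >> 3);   // 11 bits -> 8: drop the low 3
        }
        tex.SetPixels32(pixels);
        tex.Apply();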
     
  7. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
Alright - thanks for the tip. I will keep at it. Do you have plans to share your plugin?
    I will share mine if I get it working, but I suspect it will be pretty messy. :(
    Thanks
     
  8. KITT

    KITT

    Joined:
    Jul 17, 2009
    Posts:
    221
    That's awesome Rob!
     
  9. bliprob

    bliprob

    Joined:
    May 13, 2007
    Posts:
    901
    I'd like to get something more interesting working before I make any releases. Right now I feel confident it works well, but until I can really visualize the depth in 3D, I'm not 100% certain.
     
  10. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
Your first question was about a parallax shader that uses one RGBA texture where the alpha channel is depth. I started to look into that, but wondered: why not use the built-in Parallax Diffuse shader and assign the one texture you are writing to both the Base and the Heightmap slots? The Heightmap input uses the A of the image and the Base uses the RGB. What are you looking for that would be different from that?
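
    In other words, something like this - with kinectTex standing in for whatever texture your plugin writes, and the property names taken from Unity's built-in Parallax Diffuse shader:
    Code (csharp):

        // Feed one RGBA texture to both the Base and Heightmap slots of the
        // built-in Parallax Diffuse shader. kinectTex is a stand-in name.
        Material m = renderer.material;
        m.shader = Shader.Find("Parallax Diffuse");
        m.SetTexture("_MainTex", kinectTex);      // Base (RGB)
        m.SetTexture("_ParallaxMap", kinectTex);  // Heightmap (A)
        m.SetFloat("_Parallax", 0.08f);           // height amount; tune to taste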

What might also be interesting is to use something like the "heightmap generator" scene in the procedural mesh examples in the resources section of the Unity site. This takes a texture and displaces the vertices of a mesh.
     
  11. rahuxx

    rahuxx

    Joined:
    May 8, 2009
    Posts:
    537
Great work, man - I would like to use something like this in my projects too.

    If you want some testing in other projects, please PM me.
     
  12. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    Last edited: Nov 22, 2010
  13. Jordii

    Jordii

    Joined:
    Sep 14, 2010
    Posts:
    135
Hahahahaha holy S***, I've seen many homebrew Kinect applications, but this is one of the coolest.
     
  14. dart

    dart

    Joined:
    Jan 9, 2010
    Posts:
    211
Really cool, man. I made some tests with openFrameworks and OpenKinect and was planning on doing something to integrate it with Unity too, but you got there first. Congrats.
     
  15. dragonstar

    dragonstar

    Joined:
    Sep 27, 2010
    Posts:
    222
I do have a question about Kinect and C#: is there any way to convert a Unity game into a Visual C# project and then put it on XNA? If you are working with Kinect and C#, how hard could that be?

    My reason for this question is that I have an XNA license and can deploy games on my Xbox 360 - I wonder if anybody has had the idea of converting games made with Unity and putting them on Xbox Live.
     
  16. bliprob

    bliprob

    Joined:
    May 13, 2007
    Posts:
    901
Slightly more interesting example: a grid of spheres offset by their distance values (with the texture drawn on the plane below). I pointed the camera at the wall to remove the background noise. The shader on the plane is drawing the depths as grayscale, so you can see how the depth values match the pixel values (and you can see why the Xbox is always telling you to back up).



I heard that XNA licenses don't include Kinect support, so you might want to ask Microsoft directly about that. If you did, you wouldn't have to be hacking around with open source drivers like the rest of us.
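
    The sphere update itself is nothing fancy - roughly this, with every name (depths, spheres, step, gridW, gridH, depthScale) made up for the example rather than taken from the plugin:
    Code (csharp):

        // Push each sphere in a pre-built grid back along Z by its sampled depth.
        for (int y = 0; y < gridH; y++)
        {
            for (int x = 0; x < gridW; x++)
            {
                // sample the 640x480 depth frame at a coarse step
                ushort d = depths[(y * step) * 640 + (x * step)];
                Transform s = spheres[y * gridW + x];
                Vector3 p = s.localPosition;
                p.z = d * depthScale;
                s.localPosition = p;
            }
        }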
     
  17. Jordii

    Jordii

    Joined:
    Sep 14, 2010
    Posts:
    135
Would love to see some video, bliprob! Any chance? :)
     
  18. reissgrant

    reissgrant

    Joined:
    Aug 20, 2009
    Posts:
    726
  19. nsx_nawe

    nsx_nawe

    Joined:
    Nov 20, 2010
    Posts:
    2
Hi! This would certainly be astonishing.
    I'll keep an eye on this thread.

    Cheers and keep up the good work. =)
     
  20. bliprob

    bliprob

    Joined:
    May 13, 2007
    Posts:
    901
reissgrant: awesome, thanks! I'll give it a try. I've hacked at the marching cubes code from the wiki some, too, and added simple background removal.
     
  21. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
Hello -
    Well, I got back to it for a little while since I last wrote, and I am now able to get the color depth image into a texture with a good frame rate. I don't think I am getting the full color bit depth, though - not all the values are there - but I will keep at it until I have something to work with!
     

    Attached Files:

  22. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
It was just an oversight casting a float - I now have Kinect running in Unity!
     

    Attached Files:

  23. liverolA

    liverolA

    Joined:
    Feb 10, 2009
    Posts:
    347
Cool - any demo to show how this works?
    Please keep updating the progress; really impressed!!
     
  24. bliprob

    bliprob

    Joined:
    May 13, 2007
    Posts:
    901
priceap: nice job! Yeah, I get the same seven-finger effect on my Kinect. The camera sees too-close objects as double. You're using the .NET library, right? How's the performance?
     
  25. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
Hi bliprob - thanks.

Yes - I think the "seven finger" effect is a "shadow" that appears down one side of everything: the sensor that sees the IR pattern projected from the Kinect relies on the distance offset between the two lenses to find the depth, and so that offset also introduces shadowed, unseen areas.

Now I have the raw depth frames coming in, and I get a nice full-range grayscale. I paused the scene to capture these two angles of using the depth image as the height map for the mesh deformation. There is a plane with the depth texture updating on it in the scene, which I rotated in the right shot to see how it looks. Right now it is only sampling a block of pixels from the depth image.

I can try to get some video - I am not adept at screen video grabbing, but will try that soon.

Updating the texture without the mesh deformation runs at about 18 fps. However, I think that can be sped up a lot by reading/writing the depth texture at a power of 2 instead of 640x480, and maybe by improving the conversion step on the pixel data.

edit: I am using the library from codelaboratories.com/nui/ - importing a DLL with Unity Pro. If you are using .NET, maybe that's the better way to go?

Update: at 512x512 for the depth texture, it is now running at 33 fps.
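
    The 512x512 version just resamples the 640x480 frame into a power-of-two texture - a sketch with placeholder names, not my exact code:
    Code (csharp):

        // Nearest-neighbour resample of the 640x480 depth frame into a
        // 512x512 power-of-two texture, drawn as grayscale.
        Texture2D depthTex = new Texture2D(512, 512, TextureFormat.ARGB32, false);
        Color32[] pixels = new Color32[512 * 512];

        void UpdateDepthTexture(ushort[] depths)
        {
            for (int y = 0; y < 512; y++)
            {
                int srcY = y * 480 / 512;
                for (int x = 0; x < 512; x++)
                {
                    int srcX = x * 640 / 512;
                    byte v = (byte)(depths[srcY * 640 + srcX] >> 3);
                    pixels[y * 512 + x] = new Color32(v, v, v, 255);
                }
            }
            depthTex.SetPixels32(pixels);
            depthTex.Apply();
        }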
     

    Attached Files:

    Last edited: Nov 25, 2010
  26. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
Here's a screen-cap video I made using CamStudio. The frame rate is working pretty well - a 255x255-vertex mesh.

     
  27. bliprob

    bliprob

    Joined:
    May 13, 2007
    Posts:
    901
Awesome! How are you building the mesh - marching cubes? One thing I've done is grab the initial depths and then subtract the current values from them, in effect removing the background. If you try that and step out of the frame when the initial depths are grabbed, you should see the mesh for just yourself.
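
    In code it is roughly this - 'baseline' and 'threshold' are made-up names, and the threshold needs tuning per scene:
    Code (csharp):

        // Snapshot the first depth frame as the background, then zero out any
        // pixel that hasn't moved meaningfully nearer than that baseline.
        ushort[] baseline;
        int threshold = 100;   // arbitrary; tune for your scene

        void RemoveBackground(ushort[] current)
        {
            if (baseline == null)
            {
                baseline = (ushort[])current.Clone();   // step out of frame first
                return;
            }
            for (int i = 0; i < current.Length; i++)
            {
                // smaller Kinect depth = nearer, so background pixels stay put
                if (baseline[i] - current[i] < threshold)
                    current[i] = 0;
            }
        }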
     
  28. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
I was using the heightmap generator code from the procedural examples project in the Unity resources, modified to update every frame. Thanks for your suggestion - I am looking at the marching squares code from the wiki site and see how it works now, so I will try that soon. Turning the depth into a 3D mesh is pretty cool, and I want to experiment with interacting with physics objects, but in the long run maybe the best approach is to construct a skeleton out of the depth map and use that for interaction, like is done on the Xbox. At my center, we've got a skeletonization method working with OpenCV and a pair of webcams, and we can try using that method in combination with the Kinect depth map.

One thing I noticed is that the full color cam image does not match up with the depth image (raw or color) if you simply take the outputs and put one on top of the other. I figure I can crop, scale, and tweak until they register together, but is there a known offset or scaling to do so already?
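
    For now I am just lining them up by eye with the material's tiling and offset - along these lines, where the numbers are guesses to adjust, not known calibration values:
    Code (csharp):

        // Shrink and shift the color image's UVs until it lines up with the
        // depth image. These values are placeholders to tweak by eye.
        colorMaterial.mainTextureScale  = new Vector2(0.9f, 0.9f);
        colorMaterial.mainTextureOffset = new Vector2(0.05f, 0.03f);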

     
  29. minevr

    minevr

    Joined:
    Mar 4, 2008
    Posts:
    1,018
Wow... Kinect... Kinect...
     
  30. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
    Hello -
Here is an update on my progress with the Kinect. I have the color image matched to the depth image, but for now it is just being done with the coverage and offset values for the texture on the material.

I tried out the "marching squares" code from the wiki, but as soon as I upped the mesh resolution the frame rate dropped to 2 fps and I could not do much to get it running faster, so I changed to a method that simply drops elements from the triangle array if they are beyond a given depth. The result runs much faster, but has more jagged edges than the marching squares method.
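
    The clipping method is roughly this - a sketch assuming the grid mesh's per-vertex depths are at hand (vertexDepth, clipDepth, and sourceTriangles are placeholder names):
    Code (csharp):

        using System.Collections.Generic;

        // Rebuild the triangle list, skipping any triangle that has a vertex
        // beyond the clipping depth. Fast, but leaves jagged edges.
        List<int> kept = new List<int>();
        int[] tris = sourceTriangles;   // the full grid's triangle indices

        for (int i = 0; i < tris.Length; i += 3)
        {
            if (vertexDepth[tris[i]]     < clipDepth &&
                vertexDepth[tris[i + 1]] < clipDepth &&
                vertexDepth[tris[i + 2]] < clipDepth)
            {
                kept.Add(tris[i]);
                kept.Add(tris[i + 1]);
                kept.Add(tris[i + 2]);
            }
        }
        mesh.triangles = kept.ToArray();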

In this video example, I adjust the clipping depth value once or twice, and also show an experiment where it copies the mesh when I hit the space bar. The frame rate is slower (about 12 fps) because I was running it on my laptop.

As in the previous note, the mesh heightmap is a fun effect, but in the long run I think the ability to capture the depth image and then use it for skeleton or blob tracking will be better for interaction - I have a test working now that can attach an object where the user is extending a hand towards the camera.

     
  31. ethermammoth

    ethermammoth

    Joined:
    Nov 28, 2010
    Posts:
    12
I think you are absolutely right about the skeleton tracking. Do you know if there are already some implementations with skeleton tracking?

    Btw, I am currently working on a similar project, though with a regular camera, trying to extract depth information.
    I am having similar problems with frame rate. With large meshes there is probably no way around using CUDA and doing it on the GPU.
     
  32. acme3D

    acme3D

    Joined:
    Feb 4, 2009
    Posts:
    203
Definitely cool!!!
     
  33. Jordii

    Jordii

    Joined:
    Sep 14, 2010
    Posts:
    135
Awesome, priceap!
     
  34. bliprob

    bliprob

    Joined:
    May 13, 2007
    Posts:
    901
Over on the OpenKinect mailing list there are a couple of teams talking about skeleton tracking (I believe they call it pose estimation). There's a version of OpenCV that has been updated for Kinect, which might help, since you could do feature tracking with OpenCV.

    According to a Sony engineer (http://www.blisteredthumbs.net/2010/11/move-engineer/):

I don't know if that is true. I haven't played Dance Central yet. (I recall that the pack-in game Kinect Adventures does distinctly draw your posed skeleton in real-ish time.) Priceap -- I think this explains why the color and depth cameras are offset: stereo vision. A quick Google of silhouette tracking leads to a number of papers that use stereo imaging techniques.
     
  35. ethermammoth

    ethermammoth

    Joined:
    Nov 28, 2010
    Posts:
    12
I think what the Sony engineer is talking about is actually using the contour as input, which is almost the same as what you have done now, so there is not that much tracking involved. Pose estimation goes a step further and tries to identify distinct features in the image, tracking them and estimating their positions when they are not visible (based on the information available), which, done right, can give you a skeleton. The hard part is identifying those features, as they can really vary with different lighting and camera conditions.

Here is a video of something like that being done (just with a few more cameras, which give more accurate data):
    http://www.youtube.com/watch?v=dTisU4dibSc
     
  36. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
I've been working with a PhD student, Paulo, for the past year; we have a 3D depth-imaging system using two webcams with OpenCV. Paulo developed the system for extracting a skeleton from the depth image, so we have a skeleton system ready to go, and we are looking at using the Kinect cam to replace the pair of webcams (although two webcams are half the price of the Kinect, they are a bit harder to set up and calibrate, plus the method the Kinect uses is so stable).

Here is a (sped-up) video of the 3D vision system. We had it in a poster session at SIGGRAPH last summer, and the system had been running pretty well for four months or so prior to that. Right now I expect we will keep it as an external app that sends the tracking data to Unity over OSC.

     
  37. ethermammoth

    ethermammoth

    Joined:
    Nov 28, 2010
    Posts:
    12
Wow, cool... Also nice that you use OSC. Will you release the tracking part? I'm no expert in computer vision, so I am looking for an implementation that I can use in a student project.
     
  38. kenshin

    kenshin

    Joined:
    Apr 21, 2010
    Posts:
    940
Great work, priceap!!!

    It would be great if you shared the tracking part or released a basic tutorial for this!
     
  39. the_gnoblin

    the_gnoblin

    Joined:
    Jan 10, 2009
    Posts:
    722
    What's OSC? :?
     
  40. Deleted User

    Deleted User

    Guest

  41. elbows

    elbows

    Joined:
    Nov 28, 2009
    Posts:
    2,502
OK, OpenNI, when used with the NITE middleware, has skeletal tracking :) I've managed to get this working on Linux, and then hacked around with the code for the skeletal tracking sample to get the joint data out over the network using OSC (Open Sound Control) :) Still at an early stage with this; I hope to have something to show later today or tomorrow.
     
  42. ant001

    ant001

    Joined:
    Dec 15, 2010
    Posts:
    116
Any news on sharing the plugin fun?
     
  43. hierro

    hierro

    Joined:
    Dec 22, 2009
    Posts:
    27
  44. trooper

    trooper

    Joined:
    Aug 21, 2009
    Posts:
    748
Wow. Just wow. Love it!
     
  45. hierro

    hierro

    Joined:
    Dec 22, 2009
    Posts:
    27
Actually facing the rotation problem in Unity; I hope it will be solved soon :D
     
  46. elbows

    elbows

    Joined:
    Nov 28, 2009
    Posts:
    2,502
I've been using OSCeleton to get joint positions into Unity. I'm on a Mac, and NITE is not available for Mac yet, so I just used a virtualized Windows on the Mac, sending OSC over the virtual network to my Mac running Unity, and it works a treat, with CPU use for OSCeleton being very low.

    OSCeleton:

    http://vimeo.com/17966780

    https://github.com/Sensebloom/OSCeleton

hierro, are you getting anywhere with joint rotations? I would like to help, but matrix stuff does my head in; I only half understand it. Plus, I was experiencing very high CPU use with your vvvv example - perhaps the NITE example's visual display of the skeleton has quite an overhead compared to just capturing the data without displaying anything and leaving the display to the other app (e.g. Unity)?
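
    On the Unity side it is basically just mapping each /joint message to a Transform. A sketch, assuming your OSC library hands you the parsed arguments - OSCeleton's documented message layout is /joint (string)name (int)userId (float)x y z, and the joints dictionary here is my own placeholder:
    Code (csharp):

        using System.Collections.Generic;
        using UnityEngine;

        // Hypothetical handler: 'args' come pre-parsed from an OSC receiver.
        Dictionary<string, Transform> joints;   // map "head", "l_hand", ... to scene objects

        void OnJointMessage(object[] args)
        {
            string name = (string)args[0];
            int user    = (int)args[1];
            Vector3 pos = new Vector3((float)args[2], (float)args[3], (float)args[4]);

            Transform t;
            if (joints.TryGetValue(name, out t))
                t.localPosition = pos;   // positions only -- no joint rotations yet
        }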
     
  47. hierro

    hierro

    Joined:
    Dec 22, 2009
    Posts:
    27
Hi elbows - the low CPU use is just Unity-related. In fact, for using this http://vvvv.org/contribution/kinect-multi-skeleton (we've got a new version), in Unity we had to compile another version with a smaller network buffer and fewer capabilities.

    We didn't go further with Unity because we actually can't get any help from the community about joint rotations, so we are developing on another platform.

    Anyway, soon I'll compile some stuff in C# so it can be integrated through a Mono script and will have no latency. We have all the data; we just don't know how to rotate the joints. I will take a look at it in the next few days with some people more skilled than me in Unity :D
     
  48. elbows

    elbows

    Joined:
    Nov 28, 2009
    Posts:
    2,502
Thanks for the reply - good luck with the joint rotations.

    Regarding CPU, what I am talking about has nothing to do with Unity, just how much CPU the tracking programs use. OSCeleton uses hardly any CPU on my machine, but your stuff uses a lot. I was wondering if that's because yours actually displays the NITE skeleton visually and OSCeleton does not (at least not on my machine), or whether there is something else in your code that uses a lot of CPU. The OSCeleton source code is available, if that helps!
     
  49. hierro

    hierro

    Joined:
    Dec 22, 2009
    Posts:
    27
:) Yes, maybe it's the GL stuff that uses the CPU, but I've noticed that, unlike with other software, in vvvv I need a smaller OSC buffer or everything is slow. Thanks for your code and for sharing :D
     
  50. elbows

    elbows

    Joined:
    Nov 28, 2009
    Posts:
    2,502
It's not my code - I didn't make OSCeleton. I did make my own OSC thing from one of the NITE samples a few weeks ago, but it was not very good, so I didn't release it. The only good thing mine did was send the joint positions in a different OSC format, e.g. lots of separate messages such as /joint/1/head/pos, which some apps (e.g. Quartz Composer) find easier to deal with, but for Unity this is not relevant.

    If nobody else gets anywhere with joint rotations in Unity in the coming days, then I shall have to try it myself, but I will be surprised if I succeed!