Unity Community



Thread: Kinect plugin


  1. Location
    ohio
    Posts
    274
    Hello -
    Well, I got back to it for a little while since I last wrote, and I am now able to get the color depth image into a texture with a good frame rate. I don't think I am getting the full color bit depth yet - not all the values are there - but I will keep at it until I have something to work with!


  2. Location
    ohio
    Posts
    274
    It was just an oversight casting a float - I now have the Kinect running in Unity!


  3. Posts
    313
    Cool, any demo to show how this works?
    Please keep updating your progress - really impressed!


  4. Location
    San Francisco, CA
    Posts
    901
    priceap: nice job! Yeah, I get the same seven-finger effect on my Kinect. The camera sees too-close objects as double. You're using the .NET library, right? How's the performance?


  5. Location
    ohio
    Posts
    274
    hi bliprob - thanks.

    Yes - I think the "seven finger" effect is a "shadow" that appears down the side of everything: the sensor that sees the IR pattern projected from the Kinect relies on the distance offset between the two lenses to find the depth, so that offset also introduces shadowed or unseen areas.

    Now I have the raw depth frames coming in and get a nice full-range grayscale. I paused the scene to capture these two angles of using the depth image as the height map for the mesh deformation. There is a plane in the scene with the depth texture updating on it, which I rotated in the right shot to see how it looks. Right now it is only sampling a block of pixels from the depth image.

    I can try to get some video, but I am not adept at screen video grabbing - I will try that soon.

    Updating the texture without the mesh deformation runs at about 18 fps. However, I think that can be sped up a lot by reading/writing the depth texture at a power-of-2 size instead of 640x480, and maybe by improving the conversion step for the pixel data.

    Edit: I am using the library from codelaboratories.com/nui/ - importing a DLL with Unity Pro. If you are using .NET, maybe that's the better way to go?

    Update: At 512x512 for the depth texture, it is now running at 33 fps.
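
    For anyone trying to reproduce this, here is a minimal sketch of the kind of per-frame depth-to-texture conversion described above. It assumes the Kinect wrapper already delivers each depth frame as a ushort array (the CL NUI calls themselves are not shown); the 512x512 target size and all names are illustrative, not the actual project code.

    Code:
    using UnityEngine;

    // Minimal sketch: copy a raw Kinect depth frame into a grayscale Texture2D
    // every frame. Assumes some wrapper (not shown) fills rawDepth with
    // 640x480 samples; the 512x512 target mirrors the power-of-2 speedup
    // mentioned in the post.
    public class DepthToTexture : MonoBehaviour
    {
        public int srcWidth = 640, srcHeight = 480;
        public int texSize = 512;

        Texture2D depthTex;
        Color32[] pixels;
        ushort[] rawDepth;          // filled by the Kinect wrapper elsewhere

        void Start()
        {
            depthTex = new Texture2D(texSize, texSize, TextureFormat.RGBA32, false);
            pixels = new Color32[texSize * texSize];
            rawDepth = new ushort[srcWidth * srcHeight];
            GetComponent<Renderer>().material.mainTexture = depthTex;
        }

        void Update()
        {
            // Nearest-neighbour resample from 640x480 to 512x512 and map an
            // 11-bit depth value (0..2047) to an 8-bit gray level.
            for (int y = 0; y < texSize; y++)
            {
                int sy = y * srcHeight / texSize;
                for (int x = 0; x < texSize; x++)
                {
                    int sx = x * srcWidth / texSize;
                    byte g = (byte)(rawDepth[sy * srcWidth + sx] >> 3);
                    pixels[y * texSize + x] = new Color32(g, g, g, 255);
                }
            }
            depthTex.SetPixels32(pixels);
            depthTex.Apply(false);
        }
    }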
    Last edited by priceap; 11-24-2010 at 04:36 PM.


  6. Location
    ohio
    Posts
    274
    Here's a screen capture video I made using CamStudio. The frame rate is working pretty well - 255x255 vertex mesh.



  7. Location
    San Francisco, CA
    Posts
    901
    Awesome! How are you building the mesh, marching cubes? One thing I've done is grab the initial depth values and subtract the current values from them, in effect removing the background. If you try that and step out of the frame when the initial depths are grabbed, you should see the mesh for just yourself.
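
    A small sketch of that background-subtraction idea, assuming depth frames arrive as ushort arrays; the field names and threshold value are illustrative assumptions.

    Code:
    using UnityEngine;

    // Sketch of background subtraction on depth data: capture one reference
    // frame, then keep only samples that differ from it by more than a
    // threshold. Array layout and threshold are assumptions, not from the thread.
    public class DepthBackgroundRemoval
    {
        ushort[] background;                 // captured once, e.g. on a key press
        const int threshold = 40;            // raw depth units; tune to taste

        public void CaptureBackground(ushort[] depthFrame)
        {
            background = (ushort[])depthFrame.Clone();
        }

        // Returns a mask: true where the current frame differs enough from the
        // stored background (i.e. where the user is standing).
        public bool[] Foreground(ushort[] depthFrame)
        {
            var mask = new bool[depthFrame.Length];
            if (background == null) return mask;
            for (int i = 0; i < depthFrame.Length; i++)
                mask[i] = Mathf.Abs(depthFrame[i] - background[i]) > threshold;
            return mask;
        }
    }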


  8. Location
    ohio
    Posts
    274
    I was using the heightmap generator code in the procedural examples project from the Unity resources, and I modified it to update every frame. Thanks for your suggestion - I am looking at the marching squares code from the wiki site and see how that works now, so I will try to use that soon.

    Turning the depth into a 3D mesh is pretty cool, and I want to experiment with interacting with physics objects, but in the long run maybe the best approach is to construct a skeleton out of the depth map and use that for interaction, like is done with the Xbox. At my center, we've got a skeletonization method working with OpenCV and a pair of webcams, and we can try using that method in combination with the Kinect depth map.

    One thing I noticed is that the full color cam image does not match up with the depth (raw or color) image if you simply take the outputs and put one on top of the other. I figure I can crop, scale, and tweak until they register together, but is there a known offset or scaling already worked out to do so?

    Quote Originally Posted by bliprob View Post
    Awesome! How are you building the mesh, marching cubes? One thing I've done is grab the initial depth values and subtract the current values from them, in effect removing the background. If you try that and step out of the frame when the initial depths are grabbed, you should see the mesh for just yourself.
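
    Here is a rough sketch of the per-frame heightmap update described above - this is not the actual procedural-examples code, just an illustration that samples a depth array and writes vertex heights onto a pre-built grid mesh. The grid size, vertex layout, and scale factor are assumptions.

    Code:
    using UnityEngine;

    // Sketch: displace an existing grid mesh from a depth frame every frame.
    // Assumes a (gridSize+1)x(gridSize+1) row-major vertex plane built
    // elsewhere and a depth frame delivered as a ushort array.
    public class DepthHeightmap : MonoBehaviour
    {
        public int gridSize = 255;           // 255x255 quads, as in the video
        public float heightScale = 0.005f;

        Mesh mesh;
        Vector3[] vertices;

        void Start()
        {
            mesh = GetComponent<MeshFilter>().mesh;
            vertices = mesh.vertices;
        }

        public void UpdateFromDepth(ushort[] depth, int width, int height)
        {
            int vertsPerRow = gridSize + 1;
            for (int y = 0; y <= gridSize; y++)
            {
                int sy = y * (height - 1) / gridSize;
                for (int x = 0; x <= gridSize; x++)
                {
                    int sx = x * (width - 1) / gridSize;
                    Vector3 v = vertices[y * vertsPerRow + x];
                    v.y = depth[sy * width + sx] * heightScale;
                    vertices[y * vertsPerRow + x] = v;
                }
            }
            mesh.vertices = vertices;
            mesh.RecalculateNormals();
        }
    }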


  9. Location
    China
    Posts
    1,009
    Wow...kinect..kinect...


  10. Location
    ohio
    Posts
    274
    Hello -
    Here is an update on my progress with the Kinect. I have the color image matched to the depth image, but for now it is just done with the coverage and offset values for the texture on the material.

    I tried out the "marching squares" code from the wiki, but as soon as I increased the mesh resolution the frame rate dropped to 2 fps and I could not do much to get it running faster, so I changed to a method that simply drops elements from the triangle array if they are below a given depth. The result runs much faster, but has more jagged edges than the marching squares method.

    In this video example, I adjust the clipping depth value once or twice, and I also have an experiment where it copies the mesh when the space bar is hit. The frame rate is slower (about 12 fps) because I was running it on my laptop.

    As in the previous note, getting the mesh heightmap is a fun effect, but in the long run I think the ability to capture the depth image and then use it for skeleton or blob tracking will be best for interaction - I have a test working now that can attach an object where the user is extending a hand towards the camera.
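
    Here is a small sketch of that triangle-dropping idea, assuming per-vertex depth values are already available; the mesh layout, depth lookup, and clip value are illustrative assumptions.

    Code:
    using UnityEngine;
    using System.Collections.Generic;

    // Sketch of the clipping-depth approach: rebuild the triangle index list
    // each frame, keeping only triangles whose three corners are nearer than
    // a clipping depth.
    public static class DepthClipMesh
    {
        public static void ApplyClip(Mesh mesh, int[] allTriangles,
                                     float[] vertexDepth, float clipDepth)
        {
            var kept = new List<int>(allTriangles.Length);
            for (int i = 0; i < allTriangles.Length; i += 3)
            {
                int a = allTriangles[i], b = allTriangles[i + 1], c = allTriangles[i + 2];
                // Drop the triangle if any corner lies beyond the clipping depth.
                if (vertexDepth[a] < clipDepth &&
                    vertexDepth[b] < clipDepth &&
                    vertexDepth[c] < clipDepth)
                {
                    kept.Add(a); kept.Add(b); kept.Add(c);
                }
            }
            mesh.triangles = kept.ToArray();
        }
    }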



  11. Location
    Copenhagen
    Posts
    9
    Quote Originally Posted by priceap View Post
    I tried out the "marching squares" code from the wiki, but as soon as I increased the mesh resolution the frame rate dropped to 2 fps and I could not do much to get it running faster, so I changed to a method that simply drops elements from the triangle array if they are below a given depth. The result runs much faster, but has more jagged edges than the marching squares method.

    In this video example, I adjust the clipping depth value once or twice, and I also have an experiment where it copies the mesh when the space bar is hit. The frame rate is slower (about 12 fps) because I was running it on my laptop.

    As in the previous note, getting the mesh heightmap is a fun effect, but in the long run I think the ability to capture the depth image and then use it for skeleton or blob tracking will be best for interaction - I have a test working now that can attach an object where the user is extending a hand towards the camera.
    I think you are absolutely right about the skeleton tracking. Do you know if there are already some implementations with skeleton tracking?

    Btw, I am currently working on a similar project, though with a regular camera, trying to extract depth information.
    I am having similar problems with frame rate. With large meshes there is probably no way around using CUDA and doing it on the GPU.


  12. Posts
    166
    Definitely cool!!!


  13. Location
    Amsterdam
    Posts
    96
    Awesome, priceap!


  14. Location
    San Francisco, CA
    Posts
    901
    Over on the OpenKinect mailing list there are a couple of teams talking about skeleton tracking (I believe they call it pose estimation). There's a version of OpenCV that has been updated for the Kinect, which might help, since you could do feature tracking with OpenCV.

    According to a Sony engineer (http://www.blisteredthumbs.net/2010/11/move-engineer/):

    He referred to Harmonix’s Dance Central as a particular title that demonstrated this point, assuming that instead of using skeletal tracking (which is a possibility with the Kinect hardware), the game uses silhouette tracking to match your moves. “Skeleton-tracking is the hard thing,” commented Mikhailov, “That’s the neat thing about Kinect, but it’s actually the hardest part. The reason they did silhouette-tracking is because the Harmonix guys worked with EyeToy and they know the best tech is the most reliable tech. So they take the z data from the camera and they just chop it out of depth and feed that into their game.”

    Mikhailov did agree that some games were making use of the skeletal-tracking capabilities of the Kinect, but argued that the actual tracking wasn’t reliable enough to be effective, and that Dance Central’s focus on the silhouette instead of a skeleton made it more reliable and workable as a result.
    I don't know if that is true. I haven't played Dance Central yet. (I recall that the pack-in game Kinect Adventures does distinctly draw your posed skeleton in real-ish time.) priceap - I think this explains why the color and depth cameras are offset: stereo vision. A quick Google search for silhouette tracking leads to a number of papers that use stereo imaging techniques.
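
    To make the "chop it out of depth" idea concrete, here is a small sketch of thresholding a depth frame into a binary silhouette mask; the clip values, sizes, and names are illustrative assumptions, not anything from the article.

    Code:
    using UnityEngine;

    // Sketch of silhouette extraction from depth: mark every sample that
    // falls between a near and far clip as foreground and write the result
    // into a mask texture. Assumes maskTex and buffer match the frame size.
    public static class DepthSilhouette
    {
        public static void FillMask(ushort[] depth,
                                    ushort nearClip, ushort farClip,
                                    Texture2D maskTex, Color32[] buffer)
        {
            for (int i = 0; i < depth.Length; i++)
            {
                bool inside = depth[i] > nearClip && depth[i] < farClip;
                byte v = inside ? (byte)255 : (byte)0;
                buffer[i] = new Color32(v, v, v, 255);
            }
            maskTex.SetPixels32(buffer);
            maskTex.Apply(false);
        }
    }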


  15. Location
    Copenhagen
    Posts
    9
    I think what the Sony engineer is talking about is actually using the contour as input, which is almost the same as what you have done now, so there is not that much tracking involved. Pose estimation goes a step further and tries to identify distinct features in the image, tracking them and estimating them when they are not visible (based on the information available), which, if done right, can give you a skeleton. The hard part is identifying these features, as they can really vary under different light and camera conditions.

    Here is a video doing something like that (just with a few more cameras, which give more accurate data):
    http://www.youtube.com/watch?v=dTisU4dibSc


  16. Location
    ohio
    Posts
    274
    I've been working with a PhD student, Paulo, for the past year; we have a 3D depth-imaging system using two webcams with OpenCV. Paulo developed the method for extracting a skeleton from the depth image, so we have a skeleton system ready to go, and we are looking at using the Kinect cam to replace the pair of webcams (although two webcams are half the price of the Kinect, they are a bit harder to set up and calibrate, plus the method the Kinect uses is so stable).

    Here is a (sped-up) video of the 3D vision system. We had it in a poster session at SIGGRAPH last summer, and the system had been running pretty well for 4 months or so prior to that. Right now I am expecting we will keep it as an external app that sends the tracking data to Unity over OSC.
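
    For the external-app-to-Unity link mentioned above, here is a minimal sketch of receiving tracking data over UDP in Unity. It is a simplified stand-in for a real OSC setup (a proper OSC library would encode and parse binary OSC messages); the port and message format are illustrative assumptions, not the actual protocol used here.

    Code:
    using UnityEngine;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    // Simplified stand-in for the OSC link: the external tracking app sends
    // "jointName x y z" as plain text over UDP, and this component moves a
    // target transform to the received position.
    public class TrackingReceiver : MonoBehaviour
    {
        public int port = 9000;
        public Transform target;

        UdpClient udp;

        void Start()
        {
            udp = new UdpClient(port);
        }

        void Update()
        {
            // Drain any packets that arrived since the last frame.
            while (udp.Available > 0)
            {
                IPEndPoint sender = new IPEndPoint(IPAddress.Any, 0);
                byte[] data = udp.Receive(ref sender);
                string[] parts = Encoding.ASCII.GetString(data).Split(' ');
                if (parts.Length == 4 && target != null)
                {
                    target.position = new Vector3(
                        float.Parse(parts[1]),
                        float.Parse(parts[2]),
                        float.Parse(parts[3]));
                }
            }
        }

        void OnDestroy()
        {
            udp.Close();
        }
    }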



  17. Location
    Copenhagen
    Posts
    9
    Wow, cool. ... Also nice to use OSC. Will you release the tracking part? I'm no expert in computer vision, so I am looking for an implementation I can use in a student project.


  18. Location
    Italy
    Posts
    648
    Great work priceap!!!

    It would be great if you shared the tracking part or released a basic tutorial for this!



