
Kinect plugin

Discussion in 'Made With Unity' started by bliprob, Nov 21, 2010.

  1. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
    From the joint transforms of the avatar/skeleton that are being driven by the original PrimeSense Nite.cs script.

    However, I suppose you could just get the rotation values directly from the Nite.cs functions, but I am doing it the above way. It seems easier.

    Here is a very stripped down example of capturing the rotations into an array. I've been doing this with eulerAngles and writing it out to files for importing into Maya as motion capture too.

    Code (csharp):
    var recordObjs : Transform[];
    var mocap = new Array();

    function Update() {
        for (var i = 0; i < recordObjs.Length; i++) {
            mocap.Add(recordObjs[i].eulerAngles);
        }
    }
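
    If it is of any use, here is roughly how the "write it out to a file" step could look in C#. This is only a sketch of my own: the MocapRecorder name, the file name and the column format are made up, and you would adapt them to whatever your Maya import script expects.

    Code (csharp):
    using System.IO;
    using System.Collections.Generic;
    using UnityEngine;

    public class MocapRecorder : MonoBehaviour
    {
        public Transform[] recordObjs;                        // joints to sample each frame
        private List<Vector3[]> frames = new List<Vector3[]>();

        void Update()
        {
            // capture one eulerAngles sample per joint per frame
            Vector3[] sample = new Vector3[recordObjs.Length];
            for (int i = 0; i < recordObjs.Length; i++)
                sample[i] = recordObjs[i].eulerAngles;
            frames.Add(sample);
        }

        void OnApplicationQuit()
        {
            // one line per frame: "x y z" per joint, tab separated
            using (StreamWriter w = new StreamWriter("mocap.txt"))
            {
                foreach (Vector3[] sample in frames)
                {
                    string[] cols = new string[sample.Length];
                    for (int i = 0; i < sample.Length; i++)
                        cols[i] = sample[i].x + " " + sample[i].y + " " + sample[i].z;
                    w.WriteLine(string.Join("\t", cols));
                }
            }
        }
    }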
     
  2. Daniro

    Daniro

    Joined:
    Jan 12, 2011
    Posts:
    3
    Hi Artknyazev,

    I'm happy that you're interested! I do have some positive news.
    We got noticed by many sites, among them New Scientist and Joystiq, and there will be an interview in the next edition of Xbox Magazine UK. We will soon have a web page with some more information: http://www.thirdsight.co/research/student-projects/kinect-superman/ (it was supposed to be online this weekend).

    We cleaned and documented our code and were about to publish it, but then we got an invitation to give a demo at the virtual environment conference at Laval in France. So we postponed it. If we go to the conference in April, we will publish the code afterwards. Otherwise we'll publish it sometime in the next two weeks (if our supervisors agree). I will let you know which it will be.
     
  3. ModesttreeMedia

    ModesttreeMedia

    Joined:
    Nov 16, 2010
    Posts:
    10
    I thought you were using the OSC package data.
    Thanks again!
     
  4. jarf1337

    jarf1337

    Joined:
    Sep 16, 2010
    Posts:
    1
    I'm trying to figure out how to use the plugin/wrapper with a different avatar, and I'm not having much luck. I tried changing the name of the GameObject to my avatar's name and the bone transforms to the skeleton of my avatar, but I keep getting a null reference exception on anything using the soldierAvatar or userID.

    I get a null exception on line 209 of the wrapper class unless I attach a mastermover object from the soldier avatars to my avatar. Even though that stops the errors, the script still moves the soldier instead of my avatar.
     
    Last edited: Mar 2, 2011
  5. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
    The original Nite.cs script has a function in it that traverses the hierarchy of the avatar to find the joint names. I found it easiest to swap out the avatar by creating a new variable exposed in the inspector. It was also made easier by importing the FBX of the soldier avatar into Maya, stripping off the geometry, and using that skeleton for new avatars. However, as long as you have the joints named properly, a completely new one should work too.

    Inside the "public class Nite : MonoBehaviour" statement, I added and changed the following. The commented-out lines are the original code, replaced by the new code:

    Code (csharp):
    public GameObject[] Avatars;

    // init our avatar controllers
    if (Avatars.Length > 1) soldiers = new SoldierAvatar[2];
    else soldiers = new SoldierAvatar[1];
    //soldiers[0] = new SoldierAvatar(GameObject.Find("Soldier1"));
    //soldiers[1] = new SoldierAvatar(GameObject.Find("Soldier2"));
    soldiers[0] = new SoldierAvatar(Avatars[0]);
    if (Avatars.Length > 1) soldiers[1] = new SoldierAvatar(Avatars[1]);
    The result is that an array then appears in the inspector. Set the array to a length of one and put your avatar in the entry, or set the length to two and add a second avatar, and the script will look for two avatars. This makes it pretty easy to swap out avatars and switch between using one or two.

    You can also make the smoothing value visible in the inspector like this: find the SetSkeletonSmoothing call and change the hard-coded value to the new variable. (This takes effect only at startup, not at runtime.)

    Code (csharp):
    public float skeletonSmoothing = 0.3f;

    // set default smoothing
    NiteWrapper.SetSkeletonSmoothing(skeletonSmoothing);
    And while I am at it, I also added a boolean for the display of the user map, which otherwise slows the frame rate down a lot:

    Code (csharp):
    private bool showUserMap = false;
    Then hunt around through the code and find all the right places where you would want to see the userMap texture or no longer need to see it - for example:

    Code (csharp):
    void OnCalibrationSuccess(uint UserId)
    {
        showUserMap = false;
        // code continues...
    }

    void OnCalibrationStarted(uint UserId)
    {
        showUserMap = true;
        // code continues...
    }
    and then in the Update() function, change the updateUserMap call to this:

    Code (csharp):
    // update the visual user map
    if (showUserMap) UpdateUserMap();
    Hope that helps some.
     
  6. Sizukhan

    Sizukhan

    Joined:
    Apr 21, 2009
    Posts:
    4
    So has anyone been able to actually get the depth values? It doesn't seem like there's been any success getting the depth to render as a texture, but what about the values themselves?

    The reason I ask is that I'm interested in creating a point cloud at the moment. I've seen the hand point cloud, but that seems like a very roundabout way of doing it... couldn't you just get the depth of each pixel and apply that to a group of objects (spheres, particles, etc.) in order to move them back and forth in space (forgive me if this is unbelievably ignorant)?

    Is this what bliprob/priceap were doing in the first few pages?
     
    Last edited: Mar 3, 2011
  7. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
    I was using the CLNUI plugin on those early pages to get the depthmap from the Kinect, and then deforming a mesh's height based on the depth values.

    Now, using the unitywrapper, you can still get the depth values - however, what the Nite.cs script does is very cool in showing the individual users that are recognized. It steps through the depth histogram and then applies those depths only where it sees a user, and additionally multiplies each user by a color. When you see the usermap display during the calibration routine - that is the depthmap, but separated into each isolated user.

    You can change this to see the full depthmap, however. I replied to someone earlier in the thread about this, saying where you can find it in the Nite.cs script, but I guess it was not clear how to get the full depth map.

    The following is a function you can add to the Nite.cs script. It is basically the UpdateUserMap() function with all the code that isolates and colors each user stripped out. Add this function to the script and then call it in the Update function (and comment out the UpdateUserMap() call so they don't write to the same GUITexture!). This is just so you can see the depthmap as an initial test:

    Code (csharp):
    void UpdateDepthMap()
    {
        Marshal.Copy(NiteWrapper.GetUsersDepthMap(), usersDepthMap, 0, usersMapSize);
        int flipIndex, i;

        for (i = 0; i < usersMapSize; i++)
        {
            flipIndex = usersMapSize - i - 1;
            Color c = new Color(1.0f - usersDepthMap[i] * 0.0001f,
                                1.0f - usersDepthMap[i] * 0.0001f,
                                1.0f - usersDepthMap[i] * 0.0001f, 1.0f);
            usersMapColors[flipIndex] = c;
        }

        usersLblTex.SetPixels(usersMapColors);
        usersLblTex.Apply();
    }
    The GetDepthMap function in the SDK returns an Int16, and I would like someone to tell me the correct way to convert that to a float suitable as a greyscale value, but as you can see here, multiplying by 0.0001 results in a viewable depthmap.

    Now that you have this, you can read from the pixel array and set your objects to the position based on the x y pixel coordinates, and then use the greyscale value to set the z coordinate of your 'point cloud'. That is how I was doing it in the early experiments at the start of this thread.

    Hope that helps some.
     
  8. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    Where are you getting that it's an Int16?

    From a quick browse through the openNI.chm and the code, it should be a UInt16 (unsigned 16-bit integer). It's UInt16 in OpenNI, UInt16 in the latest C# wrapper and in tinkerer's Unity wrapper (nite2.cs). It does appear to be Int16 (short) in the original Unity wrapper using UnityInterface.dll (though I haven't bothered to check what type UnityInterface.dll itself is using).

    Now, whilst looking into this I came across the MS .NET help stating that UInt16 is not CLS-compliant. Not sure if that has any real bearing on anything though.

    Anyway, if the value is UInt16 then the values should be in the range 0 to 65535, and you could simply multiply by 1.0/65535 to convert the depth value into the 0.0 to 1.0 range.

    However, that assumes OpenNI remaps whatever depth value it gets from the camera into that range. I suspect, though I have not found any documentation to support this, that since the max depth always appears to be defined as 10,000 units (in all the samples), which is well within UInt16 range, the value is simply used as-is. What I mean is that even though we have a UInt16 with a range of 0 to 65535, the actual range used by the Kinect camera is just 0 to 10,000. In that case your multiplying by 0.0001 would be correct.

    Of course when used with a histogram the actual values become irrelevant, since we are greyscaling by the histogram and not the actual depth.

    Edit:
    Just came across GetDeviceMaxDepth; it looks like this should tell you the range of values in the depthmap, and therefore the value to divide through by.

    Just outputted the GetDeviceMaxDepth() value and it is 10000, so it looks like you're right to multiply by 0.0001, assuming OpenNI doesn't remap the value to fill the UInt16 range.
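
    To put that in code, normalising by the reported max depth would look roughly like this. Sketch only: where GetDeviceMaxDepth() actually lives depends on which wrapper you use, and the array names simply follow the earlier snippets in this thread.

    Code (csharp):
    float maxDepth = depth.GetDeviceMaxDepth();        // reports 10000 for the Kinect
    float scale = 1.0f / maxDepth;

    for (int i = 0; i < usersMapSize; i++)
    {
        float grey = usersDepthMap[i] * scale;          // 0.0 (at the sensor) .. 1.0 (max range)
        usersMapColors[i] = new Color(grey, grey, grey, 1.0f);
    }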
     
    Last edited: Mar 5, 2011
  9. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    Out of interest, what are people's views on having context.WaitOneUpdateAll() called in Update()?

    I ask since the naming suggests that this function is synchronous, and since I believe Kinect cannot produce a depthmap at higher than 30 fps, it would surely reduce performance, in effect limiting everything to 30 fps max?

    So would it be better to simply call this function at 30 fps instead? The only trouble would be how to sync the calls to the camera's depth updates. Without syncing I could envisage variations in performance.

    I find it strange that the function isn't asynchronous; that would make more sense, I would have thought. Perhaps I'm missing something though?
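
    Just to illustrate what I mean by 'calling it at 30 fps yourself', something like the following. The timer fields are made up, and of course the call still blocks whenever it does run:

    Code (csharp):
    private float pollInterval = 1.0f / 30.0f;   // Kinect depth arrives at roughly 30 fps
    private float nextPoll = 0.0f;

    void Update()
    {
        // only pay the (possibly blocking) cost of the OpenNI update ~30 times a second
        if (Time.time >= nextPoll)
        {
            context.WaitOneUpdateAll();
            nextPoll = Time.time + pollInterval;
        }
    }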
     
  10. amir

    amir

    Joined:
    Oct 5, 2010
    Posts:
    75
    Try
    WaitNoneUpdateAll
     
  11. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    So been playing with the two openNI wrappers (unityInterface and openNI.net) and noticed that using the newer version halves the performance of displaying the depthmap as texture. From commenting out sections and doing some basic timing, the difference appears to be due to how the depthmap data is accessed through the wrapper.

    So using the following types, commands and a simple loop for building the texture directly from depth
    Code (csharp):
    MapData<ushort> DepthMap;
    DepthMap = this.depth.GetDepthMap();

    // The loop that is timed
    for (i = 0; i < usersMapSize; i++)
    {
        flipIndex = (int)usersMapSize - i - 1;
        Color c = new Color(DepthMap[i] * 0.0001f, DepthMap[i] * 0.0001f, DepthMap[i] * 0.0001f, 1.0f);
        usersMapColors[flipIndex] = c;
    }
    Results in taking 0.8 seconds to build a straightforward texture from the depthmap!
    Whilst

    Code (csharp):
    short[] usersDepthMap;
    Marshal.Copy(NiteWrapper.GetUsersDepthMap(), usersDepthMap, 0, usersMapSize);

    // The loop that is timed
    for (i = 0; i < usersMapSize; i++)
    {
        flipIndex = usersMapSize - i - 1;
        Color c = new Color(usersDepthMap[i] * 0.0001f, usersDepthMap[i] * 0.0001f, usersDepthMap[i] * 0.0001f, 1.0f);
        usersMapColors[flipIndex] = c;
    }
    Only takes 0.2 seconds!

    Can anyone else confirm this?
    As far as I can tell it's not the getting of the entire depthmap data itself that takes more time, but getting the actual depth values via the 'MapData' type. I've yet to check the definition of MapData to see what's going on there, but does anyone have any ideas?

    One clear difference is that MapData is probably using UInt16 whilst the UnityInterface.dll Unity code uses a short (Int16). I'm doubtful this should have any impact on performance, but I haven't tried testing it.

    Obviously doing this and SetPixels to build a texture is going to be slow anyway, and finding some way via a plugin to directly reference the texture pixels would improve performance, but I can't ignore the apparent fact that using the new C# wrapper instead of UnityInterface.dll causes a considerable drop in performance.


    Edit:
    Yeah, I guess that could work, though if you've got a number of nodes, might it not cause greater performance discrepancies if nodes update on different frames?
     
    Last edited: Mar 5, 2011
  12. amir

    amir

    Joined:
    Oct 5, 2010
    Posts:
    75
    @NoiseCrime:

    The issue is that you are moving data over to .NET user space when you access DepthMap. Marshal.Copy accelerates the process.

    You need to use a plugin to properly draw the images to a texture. I explain how to do it on Mac here:
    http://forum.unity3d.com/threads/77...-depth-data-into-a-Texture2D?highlight=kinect

    For creating the depth image (code not public) I supply a histogram to the plugin and use a for loop to transform the depth map to RGB or RGBA data. It is probably faster to load the 16bit data into the GPU and use the shader pipeline for the color mapping.

    I still have not made the plugin for Windows OpenGL or using DirectX, and I don't see people providing hints anywhere about the DirectX implementation for doing this. I believe we can find a way to support this as a feature in the OpenNI library so that we do not need to use a native plugin to do it. we could just provide several native texture IDs to the OpenNI.net.dll, and it would fill these textures with user maps, depth maps, and camera images. I'm not 100% certain how the data is cached when it is read from USB, but ideally, the sensor driver itself would load the images into GPU texture buffers and we can have fun processing it in the shader pipeline.

    OpenCV, SkeletonTracking and gesture recognition need to be implemented in OpenCL or CUDA.
    Edit (OpenCV in CUDA recently released): http://opencv.willowgarage.com/wiki/OpenCV_GPU
     
    Last edited: Mar 5, 2011
  13. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035


    Are you saying that in the Marshal.Copy version the data is being copied over in one go and then accessed, but the DepthMap version is maybe just providing a reference/pointer to the data, and each DepthMap access crosses into .NET user space, hence why it's 4 times slower?

    That would certainly make sense in terms of the performance drop, though I'm surprised if DepthMap = this.depth.GetDepthMap() isn't copying all the data just like Marshal.Copy?

    I'd like to get to the bottom of this or find an alternative method for passing/copying the depthmap data, since you might want to use the data (tracking) as opposed to simply turning it into a texture.

    I like the idea of doing the color mapping in a shader though, definitely something to look into.

    Edit:
    I managed to improve performance, and confirm that it's the DepthMap access that is causing the performance issue, by caching the value before applying it to a color instead of reading it three times for each color.
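
    For reference, the cached version of the timed loop looks roughly like this:

    Code (csharp):
    for (int i = 0; i < usersMapSize; i++)
    {
        int flipIndex = (int)usersMapSize - i - 1;
        float grey = DepthMap[i] * 0.0001f;              // read the slow MapData indexer only once
        usersMapColors[flipIndex] = new Color(grey, grey, grey, 1.0f);
    }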

    Edit2: Improved texture generation
    Been thinking about this over the weekend.

    I spent a few fruitless hours trying to convert the plugin method to work in straight C#. In theory I figured that if you can import functions from DLLs, you can import BindTexture and TexSubImage2D and essentially achieve the same thing you did in C. However, it doesn't appear to work. I get no errors reported, but the texture just remains black. Mind you, it was only after several hours that I remembered you have to force Unity on Windows to use OpenGL, but adding -force-opengl to a standalone project didn't make any difference.

    Unfortunately I'm unsure where the problem lies. It could be that I'm not forcing OpenGL correctly on Windows, it could be a problem with [DllImport] functions in a .NET DLL, or perhaps with calling those functions from Unity C# scripts. I've tried exposing the OpenGL functions in Unity, in OpenNI, and via the Tao OpenGL wrappers; none appeared to work.

    One aspect I'm confused about is the status of plugins, or more precisely what constitutes a plugin, since they are meant to be available only in Unity Pro. I would have assumed that openNI.net.dll is a plugin, but apparently not, as we can simply add a 'using' to a script to get full access to the functions within. With a plugin, though, we need to use [DllImport], and that only works in Pro. Yet that's exactly what is happening inside openNI.net.dll! So it seems one could circumvent the Pro restriction?

    Unfortunately, whilst the plugin approach provides considerable performance improvements, there is the massive problem that it simply fails to work at all in the Windows editor, as OpenGL mode cannot be forced. Granted, you can force OpenGL in a standalone project, but that's not conducive to development or testing.

    I don't believe it's possible to update a texture via DirectX due to the way it works, which is different from OpenGL and all down to how render contexts are managed. So for Windows developers we're stuck with using SetPixels and Apply ;(

    This is one area I'd really like to see Unity address: being able to update textures with blocks of data from external sources (e.g. webcams) is a major feature that needs better support, as I feel the lack of it is currently holding Unity back from being used in this way. I would have thought it should be simple to provide a function working directly on bytes to pass blocks of data between code and the engine to update a texture.
     
    Last edited: Mar 7, 2011
  14. Tripwire

    Tripwire

    Joined:
    Oct 12, 2010
    Posts:
    442
    Hey guys,

    Is it possible to use the Kinect to track multiple people without doing the whole pose thing?
     
  15. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
    This is a good question, and yes, it is possible. The plugin can detect multiple users as they enter the view and it will continue to lock on to each individual. This is how it colorizes each user in the usermap texture display. In the unityinterface.dll implementation, there is also a function called GetUserCenterOfMass(), which returns a Vector3. You can use this to track the center point of each user in view without having to display the texture (or incur the slowdown of doing so). If you wanted to detect more than the center of mass (like the top of the head), then you would have to seek through the histogram map of that user, I suppose.
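
    Something like this is roughly how I expect to call it. Sketch only - I have not verified the exact signature, so the NiteWrapper prefix, the user-id parameter and the tracked-user list are assumptions:

    Code (csharp):
    foreach (uint userId in trackedUserIds)
    {
        Vector3 center = NiteWrapper.GetUserCenterOfMass(userId);
        Debug.Log("User " + userId + " center of mass: " + center);
    }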

    I have not yet used the GetUserCenterOfMass function for this, but was planning to in the next few days - that's why I think it is a good question. :)

    If the question is about being able to skeletonize all users without doing a pose, that is another good question. I don't know what it would take, but it might be possible, with much less reliability in the initial matching of body parts.
     
  16. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
    Hi Noisecrime -
    It is exciting to read about your investigation into a method to bypass the SetPixels/Apply slowdown. I have really been wondering why this is an issue in Unity, because Processing uses this as a basic principle in many things it does with video processing, etc., and while it certainly has an impact, it is not enough to make the application run unacceptably slowly.

    My understanding of the difference with "using" openNI.net.dll is that it is a .NET (managed) plugin, and if you create any .NET plugin, it can be used in this way with the Indie version. Other types, such as those built from C++, need the import process available in Pro. Others can correct me if I am wrong, but I think that is the case.

    I really appreciate you sharing your inquiry into this. I was trying to learn how this could be accomplished in the ARToolkit thread, but communication was elusive and unhelpful while at the same time claiming success. After following referenced threads and now reading your post here, my conclusion is that they were doing it solely on the iPhone and/or OSX. But I may be wrong, since they would not say.

    If you discover anything new in this area, let us know, but another solution may be to get a poll going in the wish lists for improving the performance of SetPixels/Apply!
     
    Last edited: Mar 9, 2011
  17. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    Yeah, that's what I thought, but since a .NET plugin could itself import a C++ DLL, I was wondering what is to prevent devs from using this to get around Pro. However, since all my .NET tests accessing OpenGL have failed so far (using non-Pro), maybe Unity manages to block this workaround. More investigation is needed ;)

    I've made progress with a Windows plugin updating the OpenGL texture directly, though I was disappointed to still only get 30-40 fps on my laptop. I feel that is still rather low, though vastly better than using SetPixels. I want to do more testing of UnityInterface.dll vs openNI.net.dll and see if there is anything in the wrappers that might make passing RGB camera data around faster or slower.

    I actually added a thread to the wishlist a few days ago here. It's just a start, describing what I believe the issues are, potential solutions, and reasons why we need something better. Feel free to add any input to the discussion.


    Unfortunately, from what I've read, if you want to use the NITE skeleton features you must do the 'psi' pose. I believe there are methods in the library to set up your own pose, but they don't work, and PrimeSense have stated that the 'psi' pose is needed to gauge the relative skeleton structure of the person.
     
  18. the_gnoblin

    the_gnoblin

    Joined:
    Jan 10, 2009
    Posts:
    722
    Hello!

    Is it possible to have a walking character with kinect UnityWrapper?

    The hips\pelvis of the character are stuck in one position in space - so there's no walking and no jumping.

    I've read priceap's ideas on how to ground the character or make him jump - was very happy to find that piece of info - thanks!

    But isn't there any built-in support for walking and crouching\jumping recognition?

    thanks,
    Slav
     
  19. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    Following on from my previous post, I've continued to investigate methods for improving the updating of textures with the camera source from Kinect. Currently this is mainly focused on using OpenGL to update the texture, as opposed to Unity's native SetPixels.

    I've uploaded the source here.

    It's developed in Unity 3.2 (not sure if there is any reason why it wouldn't work with, say, 2.6) and provides about half a dozen variations on how to access and call the necessary OpenGL functions.

    It requires that you have OpenGL installed and that you force the editor/standalone to run in OpenGL mode. It also requires OpenNI and NITE (unstable builds) to have been installed. If this is your first test with Kinect, you should follow the guide posted earlier in this thread for installing the OpenNI/NITE/Kinect drivers and get their samples or the early Unity samples working first, before running this project.

    There are several readme files included that give further information regarding requirements, installation and usage. Please read these, as they contain some important information, such as how to force Unity into OpenGL mode.

    The methods available can be split into three groups:
    SetPixels - Unity's native method, and the slowest.
    C++ plugin - uses a C++ plugin to interface with OpenGL, called from Unity.
    C# - several methods, all using C# to interface directly with OpenGL. These include using the Tao OpenGL .NET framework, adding OpenGL access to the openNI.net.dll wrapper, and calling opengl32.dll directly from Unity.

    At present I'm still evaluating the methods to determine the most appropriate, with each variation having pros and cons, and I'm only testing 'how' to call the OpenGL functions, not looking at, say, the performance of the openNI.net.dll wrapper itself.

    All OpenGL methods will display the video flipped, but that's easy enough to fix via GUI.matrix or the texture matrix (see the sketch below).
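
    For example, flipping on a material is just a couple of lines (my own example, not part of the package):

    Code (csharp):
    // flip the texture vertically by negating the Y tiling and shifting the offset
    Material mat = renderer.material;
    mat.mainTextureScale  = new Vector2(1f, -1f);
    mat.mainTextureOffset = new Vector2(0f,  1f);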

    I'd be interested to hear from other developers who give this a test. Currently all methods are working perfectly on my system, but I'd be very interested to know if any work with Unity non-Pro.

    There are a few issues, the main one being that Unity fails to build the project as a standalone. It complains that it can't include 'XnVNite.net.dll' due to it referencing 'OpenNI.net.dll'. It's not entirely clear what the problem is; I welcome any advice on this.

    Edit:
    BTW, has anyone else who has viewed the RGB camera output from Kinect (either in Unity or using the OpenNI samples) been disappointed with the quality? It looks to me like some weird interlacing issue, with every other row having its pixels shifted left or right by one pixel. Maybe it just doesn't have anti-aliasing, but it still looks surprisingly bad.

    I don't think it's any problem with the code, as then the image would be garbled, but currently the quality is too poor to be useful for many applications, such as background removal/greenscreen.
     
    Last edited: Mar 11, 2011
  20. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
    In the Nite.cs script, look for this UpdateAvatar function, and notice where the last boolean in the TransformBone call is set to true - this means that bone will translate as well as rotate.

    Code (csharp):
    public void UpdateAvatar(uint userId)
    {
        TransformBone(userId, NiteWrapper.SkeletonJoint.TORSO_CENTER, spine, true);
        TransformBone(userId, NiteWrapper.SkeletonJoint.RIGHT_SHOULDER, rightArm, false);
        ...
     
    Last edited: Mar 12, 2011
  21. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
    It is very exciting to learn what you are up to! I have downloaded the file and will take a look. I hope I may be of some help. It will take a little while to assimilate, I am sure. This reply is just to let you know someone is looking.

    When I was using the CLNUI plugin, I was acquiring the color camera image and it looked pretty good. You can sort of see it in the youtube video I posted earlier in this thread. So the image can look good - not sure why it is not in this case.
     
  22. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    Thanks. Don't get too excited, as it's just looking at methods of updating a texture via OpenGL, but I will be interested to know if they all work for you. At some point I'll try to decide which method I favour. If the C# methods work in non-Pro Unity that will be very important; if they don't (which I suspect they won't, though I can't think why), then I guess a C++ plugin is best, especially as it could be enhanced to take, say, the depth data and calculate the histogram much faster than I believe C# can. Though I still want to drill into the new C# wrapper for OpenNI, as I'm not yet convinced that it isn't responsible for some of the performance drain.


    I've posted two screen grabs to illustrate the issue that I'm seeing.

    Image 1 (cropped and zoomed in 50%), Image 2 (cropped from 640x480)

    If you look at sharp edged objects such as the laptop screen you can see the 'stepping' quite clearly.
     
    Last edited: Mar 12, 2011
  23. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    Apparently it is, though it relies on saving the user calibration of someone in the psi pose (i.e. do it yourself), then loading that calibration file for any new person detected. Obviously it's not a 'perfect' solution, as for people whose dimensions differ greatly from yours the skeleton data is not going to be correct; however, in practice, for certain applications it's probably viable.

    For further details, check this thread on OpenNI.
     
  24. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    OK, making progress on image output. With some creative use of texture formats I've managed to use the same OpenGL method of updating textures to support direct depthmap data. That is, using the 16-bit (ushort) depth data directly in a texture (16-bit ARGB4444 Unity format), meaning the data can be passed around via pointers from OpenNI to OpenGL, maintaining the highest performance.

    Obviously a 16-bit depth value being partitioned into 4-bit components (r,g,b,a) requires a dedicated shader to convert it into a meaningful greyscale map, but that was partly the point of doing this: to avoid having to loop through the depthmap data one pixel at a time, converting each 16-bit depth value into an RGB value like the current wrapper samples do. I've not bothered converting it into a histogram, as I don't really see the point; instead I'm just mapping the camera's max distance into range (i.e. dividing the depth value by 10,000).

    Even better, it looks like I've found a way of doing the same thing with the user label map, though I'm currently unsure whether accessing this data in the openNI.net wrapper forces a copy of the data first (pointless and slow?), or whether it populates the data array. Initially it didn't appear to work, with the new code giving a Unity error of 'CurrentThreadIsMainThread', but that's disappeared now.

    If this is all working as expected, it should mean that we can get nice framerates when displaying the image/depth/label data. Use of shaders would allow mixing/colourisation of, say, RGB or depth camera data with the label map etc., with the benefit of avoiding doing all the work in C# on the CPU!

    There are a few issues/limitations though.

    Since displaying the depth/label map relies on using a shader, I don't think you can use GUI.DrawTexture anymore.

    The image sources from the camera are inverted vertically in opengl, so needs to be flipped.

    It appears Unity does not support GPU-native non-power-of-two textures, at least for programmatically created ones. Instead it will copy the image data into the next-largest POT texture (i.e. the image from the camera is 640x480, and Unity creates a 1024x512 texture). I've calculated the UV scaling to ensure the image fills a plane, but I get a slight border at the top and left, and if I negate the scaling to flip the texture I get a large black bar along the bottom! Unsure what the issue is here, but it should be fixable, even if it means making a dedicated mesh with specific UVs instead of using a plane primitive (a rough sketch of that idea follows).
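
    A rough sketch of the dedicated-mesh idea, using the 640x480-in-1024x512 numbers from above (my own example, not in the package):

    Code (csharp):
    // build a quad whose UVs only cover the 640x480 region of a 1024x512 texture
    Mesh quad = new Mesh();
    float u = 640f / 1024f;   // horizontal fraction of the texture actually used
    float v = 480f / 512f;    // vertical fraction of the texture actually used

    quad.vertices = new Vector3[] {
        new Vector3(-0.5f, -0.5f, 0f), new Vector3(0.5f, -0.5f, 0f),
        new Vector3(-0.5f,  0.5f, 0f), new Vector3(0.5f,  0.5f, 0f)
    };
    // flip vertically here as well by swapping the v coordinates
    quad.uv = new Vector2[] {
        new Vector2(0f, v), new Vector2(u, v),
        new Vector2(0f, 0f), new Vector2(u, 0f)
    };
    quad.triangles = new int[] { 0, 2, 1, 2, 3, 1 };
    quad.RecalculateNormals();
    GetComponent<MeshFilter>().mesh = quad;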

    I'll update the source demo in a day or so, once I've got everything working.
     
    Last edited: Mar 12, 2011
  25. KnifeFightBob

    KnifeFightBob

    Joined:
    Jan 22, 2009
    Posts:
    196
    I may be stupid now, so excuse me if that is the case, but I'm pretty sure only Pro supports plugins. C# should be no problem whatsoever with any Unity version, and speed should be comparable or the same. Go with a C# solution if you need to have the project readily accessible for everyone.
     
  26. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
    Hey Noisecrime -
    I see you are already progressing, but here are my results from your test -

    I got the OpenGL working - am re-editing this post...

    Sager laptop with Core i7 950, GeForce GTX 280M

    VBL turned off:
    native setpixel = 6fps
    import opengl32 = 70fps
    csharp interop = 70fps
    c plusplus = 70fps
    via openni = 70fps
    via taoopengl = 70fps

    This is very cool!
    I have been using the unityinterface.dll wrapper up until now, and one thing I noticed is that I was initially getting about 17 fps with the SetPixels method. I changed the Texture2D to 1024x512 and used SetPixels(0, 0, 640, 480, colorArray), and have since been getting about 25 fps. I was surprised to see this version only getting 6 fps, and when I tried the power-of-two change, it only increased to about 7 fps.
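
    In case it is useful to anyone, roughly what that change looks like (my own sketch - the class and variable names are just illustrative):

    Code (csharp):
    using UnityEngine;

    public class PotTextureExample : MonoBehaviour
    {
        private Texture2D tex;

        void Start()
        {
            // power-of-two texture; the camera image only occupies the lower 640x480 region
            tex = new Texture2D(1024, 512, TextureFormat.RGB24, false);
            renderer.material.mainTexture = tex;
            // scale the UVs so only the used region is shown on this object
            renderer.material.mainTextureScale = new Vector2(640f / 1024f, 480f / 512f);
        }

        public void UpdateFrame(Color[] cameraColors)   // expects 640*480 colors
        {
            tex.SetPixels(0, 0, 640, 480, cameraColors);
            tex.Apply(false);                            // false = don't rebuild mipmaps
        }
    }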
     
    Last edited: Mar 12, 2011
  27. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    I don't know you well enough to say if you are stupid or not ;)

    What you state about Pro only supporting plugins is true, but I'm not sure it's that straightforward, and that's why I'm questioning aspects of it.

    Specifically, if you place, say, a C++ DLL into the Unity Plugins folder, it will require Pro to work. However, it appears from using openNI.net.dll that .NET assemblies, which are essentially (to me at least) the same as a plugin, can be used without placing them into the Plugins folder, simply by having 'using xn' in your C# class.

    Now, where it gets interesting is that openNI.net.dll is simply a wrapper around openNI.dll (assumed installed at system level). You can check the source and see that openNI.net.dll is using InteropServices and [DllImport] to access the functions found in the C++ openNI.dll. To me this suggested that wrapping a C++ DLL/plugin in a C# wrapper can get around the Pro requirement. However, in initial, though flawed, testing I didn't find this to be the case. I say flawed because, for much of the testing of accessing OpenGL functions, I hadn't realised I had to force the editor into OpenGL mode (on Windows). Unfortunately I only became aware of this, and had a working plugin version, after I'd upgraded my temporary install of 3.2 to Pro.

    Having said that, even if using a C# wrapper can get around the Pro requirement, I'm unsure it automatically makes it the best method, as there is still the possibility of having to work on depth/image/label data, and for that I believe C++ can still be faster. I'm not saying C# is bad - actually it's very good - but it doesn't necessarily produce the most optimal machine code. However, I don't want to get into a debate about which is fastest; my overall point with much of this is that the different methods need explicit testing to determine performance as well as other positives/negatives.



    Thanks for taking the time to test. Do you mind repeating whatever the problems were before you edited your post? I like to ensure that this works for everyone, and if there was some important information I failed to document, or some other technical issue that you solved yourself, I'd like to be aware of it.

    As to your results, very impressive - pretty much double what I'm getting on my test machine; perhaps it's time to move back to my desktop ;) I'm glad that all the methods worked for you, and I'm not surprised by the actual results.


    This is very interesting stuff, especially the change to using a POT texture. Clearly the performance increase must be due to Unity making two copies of the texture (which I believe I read about in another thread related to similar methods). The first keeps a copy of the image at its NPOT dimensions, whilst the second, the one actually used for rendering, is placed onto the next POT texture.

    I am a little surprised at just how much extra performance you gain though. Have you considered disabling mipmap generation on Apply()? I would presume that should give a performance benefit too.

    As regards the poor performance of my image SetPixels() method, I'm not surprised, since unlike UnityInterface.dll it's an array of MapData objects, which from experience is far slower to access than an array of actual bytes. This is why I'm trying to use pointers to the data instead of the C# interface; of course this then relies on using OpenGL to update the texture via the pointer, but with almost 3 times the performance (i.e. your result of 25 fps vs 70 fps) it would appear to be worth it.
     
    Last edited: Mar 12, 2011
  28. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
    It was a dumb mistake. I was trying "-force -opengl" rather than "-force-opengl". I misinterpreted the way you wrote it.

    Your instructions and organization of the files are very clear. Everything worked well once I saw "<opengl>" in the main window title bar.

    I remember that being an option, but had not tried it recently. I just did, and it did not seem to make any improvement. I would guess that is because the texture being written into does not have any mip map levels to it.

    I might answer this by going through the source you included, but am I correct in understanding that in all cases (not just the "via openni" method) you are relying on your changes to openni.net.dll in order to do the OpenGL texture transfer via a pointer? In other words, once I saw how well the color camera image was transferring, I tried to see if the script could be adjusted to read the depthmap using the interop C# method, but it appears that cannot currently be done.
     
    Last edited: Mar 13, 2011
  29. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    Ah, right. Yeah, this is the one major drawback to using OpenGL on Windows. There really needs to be an alternative method of forcing OpenGL other than adding commands to a shortcut. Firstly because, as you discovered, it's easy to make a mistake, and frankly I wasn't even sure how to do it at first, as it's not something I normally do. The second issue is if you want to release a standalone project, as then you have to somehow ensure that your exe is always run from a shortcut. Should a user launch it directly, the whole project is not going to run.

    No, each method is completely independent, or at least should be; to be sure I'd need to test replacing my openNI.net.dll with the original and see if the other methods are somehow using it. However, as I said, they should be independent, though I'm unsure if any will work in Unity non-Pro, as I was unable to test that.

    Off the top of my head, the best C# method is likely to be UNITY_IMPORT_OPENGL32 or OPENGL_CSHARP_INTEROP. Though these are essentially the same, the code is subtly different, and I'm unsure if this difference has any effect, especially whether one might work in non-Pro mode or not. I'd rather avoid using the one that added OpenGL calls to OpenNI, since to me that pollutes the code space and will be a pain to update.

    I suspect that although I like the OPENGL_CSHARP_INTEROP method and exposing the OpenGL functions directly in the C# class, the best approach will be to produce a C# wrapper dedicated to updating textures. There are several options that have not yet been investigated, such as using OpenGL to automatically build mipmaps should you desire, etc. Therefore a nice little C# library may be useful.

    I'm hopeful I can post an update to my source demo later today that will support depth and label maps as well. The only problem is I've not found a method I'm happy with for coding all these variations. Compile-time defines aren't bad, but they become harder to manage as the script grows. I'm also kinda stuck between making a framework to allow for easy OpenGL texture updates and maintaining a 'testbed' for quick debugging or testing changes to Unity and what effect they might have on the code.
     
    Last edited: Mar 13, 2011
  30. amir

    amir

    Joined:
    Oct 5, 2010
    Posts:
    75
    Obviously I can't get your stuff to work on Mac, so I'll try it on PC later, but I want to make my Mac plugin with an API equivalent to your Windows one. I'd like to see the code you're using to get the 16-bit texture into the GPU.

    Of course someone needs to write some color-mapping shaders for it.

    Now I'm going to see if the same BindTexture / TexSubImage2D will work to get the camera data into Unity from my shiny new iPad 2
     
  31. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    Cool, hopefully later today I'll post a version with the depth and label code and shaders. It was all working yesterday, but I want to clean up the code a bit more first.

    One thing I'd be interested to know is whether any of the C# methods can be used to access OpenGL on the Mac - i.e. DllImport the equivalent of opengl32.dll on the Mac side? I wonder if a pure C# plugin/.NET lib could work cross-platform in some fashion?

    Regardless, it would actually be rather nice at some point to produce either a C++ plugin or a C# .NET assembly that is a fully featured OpenGL texture library with cross-platform support - something beyond our current 'expect an RGB array via pointer'. I'm surprised there doesn't seem to be a community-released library, forcing everyone to reinvent the wheel each time. As such, this is completely separate from the OpenNI stuff.

    Edit:
    BTW, I seem to remember you mentioning a pretty good framerate with SetPixels using the new openNI.net.dll, getting something like 14-18 fps for the RGB camera? I was wondering how you achieved that, since iterating through an array of MapData objects seems so slow here for me. Slightly more confused as I'm sure I used your project as a basis for the SetPixels version using the new C# wrapper. Can you post your code, or maybe rebuild your demo, as the one on git has lost all its references?

    I'm interested because although we now have a method of updating textures with OpenNI data very fast, if we need to access the data itself, say to develop our own motion-tracking system, going via MapData is going to be too slow to be useful. Obviously there are ways around this - UnityInterface.dll using Marshal, for example, feels much faster (haven't tested though) - but I'm not familiar enough with C# to know all the ins and outs.
     
    Last edited: Mar 13, 2011
  32. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    OK, the new version of the OpenGL texture test is up here.

    I've spent some time simplifying the code. I've removed one method (UNITY_IMPORT_OPENGL32) as it was almost exactly the same as OPENGL_CSHARP_INTEROP, and added const int defines for the OpenGL enumerations to make it more readable.

    The big change is support for both depth and label maps via OpenGL updating the textures. As both of these pixel values are ushort (16-bit), it means using an ARGB4444 texture format in Unity and slightly different values for some OpenGL variables (enums). However, viewing the texture directly in Unity will not be productive, as a 16-bit depth value is being split across the r,g,b,a components (it looks kinda like a plasma effect). Instead these textures MUST be rendered via the provided fragment shaders.

    This has the unfortunate side effect of preventing GUI.DrawTexture() from being used on these two maps, since we can't set a shader for it to use. For this reason much of the code has moved over to displaying the OpenNI data (camera) textures on planes in the scene. However, it's still possible to use DrawTexture() for the image texture, but you have to use GetNativeTextureID()+1 for the ID and uncomment the OnGUI function.

    Still a few issues to address. The main one is that since Unity in the background is converting the NPOT textures into POT ones, I have to scale the UVs across the planes, but despite having the correct values, as soon as I negate the Y scale to 'flip' the image I get a weird black bar across the bottom. I need to investigate this further, but I welcome any ideas. I'm also keen to try setting the textures as POT from the start, since it may provide some performance increase (for SetPixels at least) and also reduce RAM usage.

    The label shader is pretty barebones and not really tested. It's a bit hard to test user labels since I've only got one user - me! However, the shader code really isn't that hard to understand, so it should be easy to improve.

    I started to look into adding a shader that uses the label map to mask the RGB camera. The shader works well, but in default mode the depth/label map is not in the same 'space' as the RGB camera. There is an OpenNI function to change the perspective which does align all the images, but I've found it temperamental, with the user dropping in and out. This remapping of the depth space is currently commented out, but you can find it in the Start() function.

    Overall performance appears to have dropped slightly, but then it is now updating 3x640x480 textures each time. I've also noticed occasional one-second pauses, but I'm unsure of the cause.

    Anyway, I think that's it for the main stuff, for now at least.
     
    Last edited: Mar 13, 2011
  33. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
    Very fast and very productive work, Noisecrime!
    I tried out your project and it runs at about 40 fps with all four image displays running. Very impressive!

    I recall a similar issue with the color camera when using the CLNUI plugin. I wound up adjusting the texture offset so that it would match the depth image - for the color image mapped onto a mesh being deformed by the depth. Interesting to see that you found the OpenNI SDK can compensate for that - I tried uncommenting the GetAlternativeViewPointCap() call and it did indeed align the two, but I see how it flickers in and out - it seems like variations in the depth affect it more easily.

    I have been starting on a new project planning to use the depth image via the unityinterface.dll wrapper - not trying to map it to a texture, but only using the height array to determine the positions of people in a public space. Now you have me seriously contemplating moving to openni.net.dll, which I figured I would want to do soon, but I need to find the time to start figuring out all its other functionality, plus getting joint rotations rather than positions when coming back to using the skeleton stuff.

    Also, as you mentioned, the solutions you are working on are independent of OpenNI, so they are a great contribution for any need to get video streams into Unity. I have also been pursuing an AR project, and am interested in how your solution could be used to move video into a texture from other plugins such as ARToolkit or the webcam toolkit posted in other threads.
     
  34. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    Thanks. Though I'm somewhat disappointed at only getting 40 fps now, as that feels quite poor to me, especially as you were getting 70 fps previously. Granted, it's updating 3x640x480 pixels' worth of data each frame, though I still think it should be faster. Trouble is I'm not sure where the bottleneck is: OpenNI, the wrapper, or simply uploading so much data to the GPU each frame. The shaders being used should have negligible impact. So, something to investigate in the future.

    I'm not familiar with CL NUI, so I can't comment on that. It's disappointing that GetAlternativeViewPointCap doesn't appear to work well with the user label map. I've also read it performs poorly when used for mapping a skeleton onto the RGB camera, so perhaps there is a bug in OpenNI or something that might be fixed in the future.

    Yeah, there is a lot to get to grips with in the openNI.net.dll wrapper, though I'm pretty sure you could get the OpenGL texture code working with UnityInterface.dll if you need to. I'm hoping to work through the wrapper and OpenNI in general this week to get a better understanding.

    I've been making a few adjustments to my source demo. I've greatly improved the Unity_Native method by using Marshal.Copy to get the data before looping through it (i.e. like in the UnityInterface demo). I suspect there is more performance to be had if the copy can be avoided, but I'm now getting 10 fps for depth or colour and 6 fps for both, whereas with the old code it was more like 0.5 fps for just depth. I'm also messing around with using POT textures. I'll post an update when it's done, though I've attached the script as is. It should work fine as long as you stick with the Unity_Native method and hide the planes in the scene, since the code falls back to using GUI.DrawTexture().

    Yeah, I'll start a new project to investigate this. I'd like to explore how you can build, say, a byte array in C# and pass a pointer to it directly to the OpenGL texture functions. That way I can test the performance purely of updating textures and not the whole OpenNI process.
     

    Attached Files:

  35. Eiger2112

    Eiger2112

    Joined:
    Mar 15, 2011
    Posts:
    3
    Hi Guys,

    I'm interested in getting Unity working with Kinect also, and have tried to follow your discussion. Unsuccessfully :)

    I'm new to Unity, and am getting into it only now that Kinect is available. I'm pretty excited about the possibilities.

    Anyway, what it sounds like you're working on is a way to import the Kinect depth and video data into Unity. Is this required to be able to teach it gestures? Or do you actually want to be able to USE the data to texture or give depth to Unity objects at runtime? If it's gestures that you're going for, maybe consider my idea.

    My thought for programming gestures was just to have the skeleton of the player character's (PC's) avatar controlled by the Kinect, as you have already succeeded in doing with the Unity wrapper. Then I thought you could put collider boxes or spheres (otherwise invisible and non-interactive) in various places around the PC, and have their hand and feet bones trigger events. To get gestures, you could have each 'collision' enter a character in an array and then have an update function search for gestures/words in this array. If desired, you could also record the time of the different collisions to calculate the intensity/speed of the gesture, and to clear out old collisions. Anyway, I'm sure you get the picture; a rough sketch of what I mean is below.
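
    Something like this is what I have in mind for each trigger volume (a rough sketch only - the class name, tags and 'letters' are made up):

    Code (csharp):
    using System.Collections.Generic;
    using UnityEngine;

    // Attach to an invisible trigger box placed around the player character.
    // Each box contributes one 'letter' to a shared gesture buffer when a hand or foot enters it.
    public class GestureZone : MonoBehaviour
    {
        public string zoneLetter = "A";                        // the symbol this zone represents
        public static List<string> buffer = new List<string>();
        public static List<float> times = new List<float>();   // timestamps, for speed and expiry

        void OnTriggerEnter(Collider other)
        {
            if (other.CompareTag("Hand") || other.CompareTag("Foot"))
            {
                buffer.Add(zoneLetter);
                times.Add(Time.time);
            }
        }
    }

    A separate update function would then scan the buffer for gesture 'words' and clear out entries older than a second or two.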

    My vision is to create a game with more realistic magic. Instead of pressing a key to cast a spell, you have to do certain gestures. For example, one gesture creates a particle effect (mana-globe) in front of the PC, which they manipulate (with other gestures) in order to change its color and power. I'd like this to be like an entertaining mini-game of sorts. This mana-globe can then be saved (via a pulling gesture of some sort) for a spell later on. A fire attack would be more effective with a red mana-globe, while a cold spell would be more effective with a blue one, yada yada yada. Saved mana-globes would fill various spots in the GUI around the screen for later use in spells. Yet other gestures would start spells, and the player would choose from his pre-created mana-globes to power them.

    So, as you can see, this kind of a system would probably take 20-40 trigger locations, in order to cover both sides, hands, feet, center etc. Is that going to bring Unity to a crawl? Are there any other obvious problems that I'm overlooking as a Unity newbie? Anyway, I wanted to share my idea in case you are indeed searching for a way to detect gestures. And if this is a helpful idea or not, I'd appreciate feedback.

    But I suppose I need to start experimenting myself. I'd been waiting for Microsoft's SDK, thinking that it might end up being more streamlined and stable, but I'm getting impatient, and if this DOES work, then the only code I'd have to rewrite would be that connecting the PC's bones to the kinect data. All gesture and effect programming would be independent.


    Thanks guys!

    Eiger
     
  36. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    Welcome to the future ;)
    Kinect is so exciting at the moment because it's not yet fully defined in terms of what it can do and how it can be utilised.


    Speaking personally, I've focused on methods of increasing the performance of displaying video/depth/user data for two reasons. Firstly, video is likely to be quite important in many projects, and having the depth/label images available just makes debugging easier. Secondly, it gave me an opportunity to explore OpenNI/NITE, Unity, C#, and plugins/DLLs further.

    In fact none of this is actually 'required' for working with Kinect; skeleton/hand/gesture tracking is all built in (NITE libraries), and although they require video/depth data, there is no need to display that data. I did wonder from time to time if I'd focused on the wrong thing, since I've yet to make any really meaningful Kinect demo, but I like to really get to grips with a technology, to deconstruct what is going on, so I feel I've already learnt a great deal that will be useful in the future.


    It's not a bad idea, though I suspect it wouldn't work as well as you'd like. For complex gestures, such as those for casting spells, I think the common method is simply to store the movement as an array of points over time and check these (maybe as a series of normalised directions, to avoid working with actual co-ordinates). This is then somehow matched against a list of gestures, though I'm not clear on specifically what method is used - probably some form of fuzzy matching system. A rough sketch of what I mean follows.
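
    Something along these lines - very much a sketch, with made-up names and a crude resampling step:

    Code (csharp):
    using System.Collections.Generic;
    using UnityEngine;

    // Sample the hand position each frame, turn the path into unit direction vectors,
    // and score it against a stored template gesture.
    public class GestureMatcher : MonoBehaviour
    {
        public Transform hand;                              // the tracked hand joint
        private List<Vector3> samples = new List<Vector3>();

        void Update()
        {
            samples.Add(hand.position);
            if (samples.Count > 60) samples.RemoveAt(0);    // keep roughly the last two seconds
        }

        // 1 = directions agree everywhere, lower = worse match
        public static float Score(List<Vector3> points, Vector3[] template)
        {
            if (points.Count < 2 || template.Length == 0) return 0f;
            float total = 0f;
            for (int i = 0; i < template.Length; i++)
            {
                // pick a sample index along the recorded path (very crude resampling)
                int a = i * (points.Count - 1) / template.Length;
                Vector3 dir = (points[a + 1] - points[a]).normalized;
                total += Vector3.Dot(dir, template[i].normalized);
            }
            return total / template.Length;
        }
    }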

    At least that would be my starting point, and it has already been achieved in games, such as Black & White (though through use of a mouse). The algorithms used should be more or less the same once you are able to track a hand.

    I actually tried writing a mouse-gesture-based system years ago. It was based on dividing the screen into cells (say 20x20), then storing which cells the mouse moved through (keeping, say, the last N cells). I'd use this data to create a snapshot per frame, where each cell was either on or off. I'd then compare this snapshot against a list of 'gesture' snapshots (i.e. 1-bit images), adding up the number of times a cell in both snapshots was filled, and if >90% of the cells were the same, that was the gesture. I seem to remember it worked OK, though it was restricted to simple gestures such as letters, circles etc. However, it was very restrictive, as it depended on matching a scale-dependent snapshot. So, for example, if you drew a circle it had to be a circle that touched the edges of the screen; it couldn't be a little one in the middle. Hence why I think absolute positional checking isn't a good idea.

    Having said that, NITE comes with gesture recognition, at least for basic interactions (swipe, touch etc.). I'm unsure if you can add gestures though, and if you could, whether the system would be general enough to track a series of actions, which is what you'd need for casting magic.

    I wouldn't bother waiting; AFAIK there is no release date for the MS SDK, and we really have no idea exactly what it will be capable of. The biggest advantage will be having MS drivers, which I'm sure OpenNI will support. As for the other stuff, OpenNI and NITE come from the developer/manufacturer of the tech that is in Kinect, so it may be similar to the MS SDK. Then again, MS may make it completely different for the sake of it. Get started now, as there is much about the technology that you'll need to learn, not to mention programming gesture recognition. It will give you a head start, and hopefully moving to the MS SDK will be easy.
     
    Last edited: Mar 15, 2011
  37. bryanleister

    bryanleister

    Joined:
    Apr 28, 2009
    Posts:
    130
    This is for folks trying to get Kinect up and running. For me, it was a bit painful. Finally, I uninstalled everything possible from multiple failed efforts and then followed the instructions on this page:

    http://projects.ict.usc.edu/mxr/faast/

    Once that software was running (as admin), it was pretty easy to get the Unity wrapper going. Hope this helps someone. BTW, this is on a PC.

    Bryan
     
  38. Todilo

    Todilo

    Joined:
    Feb 1, 2011
    Posts:
    88
    Anyone know a way of using this skeleton tracker and getting the coordinates into screen space, independent of how far away in the depth a player is standing?
     
  39. Eiger2112

    Eiger2112

    Joined:
    Mar 15, 2011
    Posts:
    3
    Hi Noisecrime,

    Thanks for the in-depth reply. I think I'll go ahead and try my method, since I don't really understand yours :) Normalized directions and fuzzy matching systems, Oh My! If it does work, I'll get my directions just from detecting a hand entering one box collider followed by another. And the fuzzy part can be controlled by the size of my boxes - bigger boxes of course requiring less accuracy.

    In fact, it'd be kind of like your 2d example, where you mapped to cells, just that my cells would be 3d, variable size, and would not necessarily cover the entire space. I'd just create them where I needed them to detect the gestures I chose. Or maybe I should just make a 3d grid of cells so each hand and foot has a location at all times? I'll have to play around a bit, but it would be nice if I could develop some more generic system to be used by the community in general. Of course, it'd be surprising if I'm the one that came up with something like that, considering the limited amount of time I have for playing around with it.

    But you say that NITE comes with gesture recognition... can that data be accessed in Unity, or are we limited to skeletal mapping and depth data? It might not have all the gestures I need/want, but it'd save me from reinventing some parts of the wheel.


    Thanks,

    Eiger
     
  40. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    Right, I've uploaded a final version of my openNI tests for getting frame/depth/user maps into textures. The problem is that, as a project, it's getting over-complex as I keep adding new features. I'll leave the last version up, as that might be simpler for understanding the Unity native SetPixels vs OpenGL methods.

    Latest Version (mk5)

    Old Version (mk2)

    The new version has

    1. Greatly improved Unity_Native_SetPixels performance.

    This was achieved by defining textures as power-of-two (POT) instead of non-POT and drawing into a sub-area of the texture. More importantly (with the C# wrapper), I switched to using Marshal.Copy() to make a copy of the image/depth/user data via a pointer into managed code. This is far faster to loop through when building the Unity color array for the SetPixels function. On my machine, displaying the 640x480 RGB camera image has jumped from <1 fps to 12 fps, though in the demo I'm getting both image and depth data combined, so it only manages 6 fps. (There's a rough sketch of this copy step after this list.)

    2. Camera image masked by user or depth map on the GPU.
    This involves new code, a material and frag shaders. The masking is done in the frag shader, removing the load from the CPU. It also fixes the issue of losing a user whilst using alternativeViewPoint (needed to align the depth/user maps with the image map) by forcing the userGenerator to update (see code). It all works very well, but the accuracy in terms of removing a subject from the background is a bit underwhelming; I frequently get a bit of a halo around the subject. Still, it's a start and quite interesting to mess with.

    Currently the default is to output the image masked by a controllable (via shader values) depth map; to switch to using the user map, or to disable masking, you have to comment/uncomment some new defines in the script. More information can be found in the script.

    3. Support via OpenGL for image/depth/user map extraction, copying it into a texture and displaying it via dedicated shaders.

    4. Clicking on any of the 4 planes in the grid will bring that plane forward to fill up more of the screen, so you can see more detail. This does not work with UNITY_NATIVE_METHOD; the UNITY_NATIVE_METHOD itself now only displays a full-size version of either the image or the depth map. Simply press any key to toggle between them.

    5. There are a couple of booleans in the script;

    m_ForcePowerOfTwo (default true), which avoids Unity making two versions of any non-POT texture you create. This makes native SetPixels much faster and saves memory when using the OpenGL method.

    m_ForceXVGA (default false), which overrides the dimension/fps properties of the RGB camera image from 640x480 @ 30fps to 1280x1024 @ 15fps. Obviously this high quality mode is slower: it only updates 15 times a second and there is much more data being moved around.
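    To give an idea of what the Marshal.Copy step from point 1 looks like, here's a simplified, hypothetical sketch (not the actual project code; the IntPtr argument stands in for whatever pointer your wrapper returns for the current RGB24 frame):

    Code (csharp):
    using System;
    using System.Runtime.InteropServices;
    using UnityEngine;

    public static class KinectImageCopy
    {
        const int W = 640, H = 480;
        static readonly byte[] raw = new byte[W * H * 3];     // managed staging buffer for the RGB24 data
        static readonly Color[] pixels = new Color[W * H];

        // One Marshal.Copy pulls the whole frame into managed memory,
        // which is much faster to loop over than reading the pointer byte by byte.
        public static Color[] ReadPixels(IntPtr imagePtr)
        {
            Marshal.Copy(imagePtr, raw, 0, raw.Length);
            for (int i = 0, p = 0; i < pixels.Length; i++, p += 3)
                pixels[i] = new Color(raw[p] / 255f, raw[p + 1] / 255f, raw[p + 2] / 255f, 1f);
            return pixels;
        }
    }

    The resulting Color array can then be handed straight to SetPixels on the POT texture.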

    Hope this helps anyone interested in messing with the actual image data coming from Kinect/openNI.
     
    Last edited: Mar 17, 2011
  41. Noisecrime

    Noisecrime

    Joined:
    Apr 7, 2010
    Posts:
    2,035
    lol, didn't mean to scare you off. Here's the sort of stuff I'm talking about, and where I might make a starting point if I had to do gesture recognition - Mouse gesture system via Hidden Markov Models

    I don't doubt you'll get your method working, but it may just be a little restrictive or temperamental.

    For example, imagine you were 5 years old and traced out a circle in front of the Kinect; now imagine you are 25 and doing the same thing. There is no way the two circles would be the same size/scale, so using absolute positions (e.g. your boxes) can't account for this difference. It's better to ignore absolute positions and instead work with normalised velocity (i.e. the angles the hand moves through). However, this is where things start getting complex and beyond this thread ;)
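    Just to give a flavour of what I mean by normalised velocity, a hypothetical helper (not anything from the plugin) might turn a recorded hand path into unit directions and compare those, so the same shape matches whatever its size. A real recogniser would resample/align the sequences, or use something like an HMM, but the principle is this:

    Code (csharp):
    using System.Collections.Generic;
    using UnityEngine;

    public static class GesturePath
    {
        // Convert a recorded hand path into unit direction vectors (scale-independent).
        public static List<Vector2> ToDirections(IList<Vector2> points)
        {
            var dirs = new List<Vector2>();
            for (int i = 1; i < points.Count; i++)
            {
                Vector2 d = points[i] - points[i - 1];
                if (d.sqrMagnitude > 0.0001f)          // skip frames where the hand barely moved
                    dirs.Add(d.normalized);
            }
            return dirs;
        }

        // Crude similarity score: average dot product of corresponding directions.
        // 1 = identical direction sequence, -1 = opposite.
        public static float Similarity(List<Vector2> a, List<Vector2> b)
        {
            int n = Mathf.Min(a.Count, b.Count);
            if (n == 0) return 0f;
            float sum = 0f;
            for (int i = 0; i < n; i++) sum += Vector2.Dot(a[i], b[i]);
            return sum / n;
        }
    }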

    One thing I'd most definitely do, though, is switch from doing this in 3D to 2D; depending on the set-up, I could imagine body parts/hands missing your boxes if they are not at the same depth. Instead I'd convert the hand point into a 2D screen point and make your boxes 2D in screen space.
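    Assuming the hand joint is a Transform driven by the skeleton script, the conversion itself is tiny - something like the sketch below (which also loosely answers Todilo's earlier question about screen-space coordinates):

    Code (csharp):
    using UnityEngine;

    public class HandToScreen : MonoBehaviour
    {
        public Transform hand;    // assign the tracked hand joint/transform

        Vector2 HandScreenPosition()
        {
            Vector3 sp = Camera.main.WorldToScreenPoint(hand.position);
            return new Vector2(sp.x, sp.y);   // depth (sp.z) is discarded on purpose
        }
    }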

    But you know, whatever works for you is best. Start small, implement your cubes method and learn from the process. From there you can progress to more advanced methods. I'm sure you'll be able to make something usable from your method, so it will not be wasted effort.

    I've not drilled into NITE yet, but I believe you have access to a number of Natural Interaction gestures, such as push, swipe, track a circle etc. Several of these are demo'd in the NITE samples (check the PrimeSense install folder in Program Files). I'd spend some time reading the NITE documentation and examining the source of the samples it comes with.

    As far as I'm aware you should be able to access this via openNI.net.dll and nitewrapper.net.dll (though the nite wrapper may have a different name - can't remember off-hand).
     
    Last edited: Mar 17, 2011
  42. Eiger2112

    Eiger2112

    Joined:
    Mar 15, 2011
    Posts:
    3
    I have a soon-to-be-5-year-old girl, so it's pretty easy for me to imagine this :) She's one of the reasons I don't have much time to work on projects such as this, but also one of the reasons I want to. If I get it all working, I'd like to create a less-serious magical play space for my daughter. Before the unity wrapper came out, I was planning on having to adjust the trigger boxes based on user size. But then I thought I'd also have to (somehow) adjust the model size at runtime, to fit the user. Otherwise, I thought, what if I was trying to force the model's elbow to be too close to the shoulder?

    But that doesn't seem to be necessary in the Soldier Demo. So when matching skeletal joints to Kinect data, they must just scale it to fit the model, and they probably start at the more fundamental positions like waist and shoulder, and if things don't quite fit, they just get as close as they can without 'bending' the model. So if my daughter tried to use the soldier demo, I think it'd work. I'll have to enlist her help to try that early on.

    So then, once the soldier is dancing around like a little girl ( :) ), I'm no longer directly concerned with Kinect Data. My trigger boxes are placed around the soldier, added to his root game object so they move with him, and sized appropriately for him. For some trigger boxes, I'd essentially make it 2D by making them deep... extending far in front of the model, and behind him far enough that he just couldn't reach there. But with some triggers, I'll want to stack a series of thin boxes, so the user gets, say, a different color of mana ranging from yellow through orange to red depending on where exactly his hand ends up. I want it to be more of a feel thing, like learning to drive a stickshift, so that someone who has played a lot can get exactly the result they want, but a newbie will get some surprises. That's one of the beautiful things I see coming from the kinect: reintroducing PLAYER skill to games, rather than skills that improve on level-up.

    But I've given your 2D and grid ideas some thought... if I made a contiguous grid of boxes, either one thick layer, or several thinner ones, I could then record gestures. I'd have a simple project that just detects and records for later use. You know, stand in front of the computer and make the gesture I want a half dozen times and average them out. Then use the results in my 'real' project. When detecting gestures, there'd be a threshold for a successful gesture, but the closer it comes to the 'ideal' recorded gesture, the stronger/purer/better the result.

    Anyway, I think I've hijacked this thread long enough. I appreciate the feedback, but it's time that I stop theorizing and start experimenting. If I develop anything interesting I'll be sure to share it here. Congratulations btw! It sounds like you have your Kinect-to-Texture mapping working pretty well! I'll be sure to give it a spin.

    Cheers, Eiger
     
  43. the_gnoblin

    the_gnoblin

    Joined:
    Jan 10, 2009
    Posts:
    722
    There's no such flag in Nite.cs\TransformBone() method :(.
    Do you have some special edition of it?
     
  44. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
    Right - sorry - this is working from the changes that vtornik23 made and posted on page 8 of this thread.

    hopefully this properly links to that post:
    http://forum.unity3d.com/threads/67982-Kinect-plugin?p=465491&viewfull=1#post465491

    I then also posted some changes I've made to that script, to make changing the number of avatars easier and to turn off the user map display when not needed (which speeds things up a lot).
    http://forum.unity3d.com/threads/67982-Kinect-plugin?p=513924&viewfull=1#post513924

    Another significant improvement is to change the guiTexture's texture to POT and then use the SetPixels() options with blockWidth (640) and blockHeight (480), which can double the framerate when showing the depth map.
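    As a rough illustration (simplified and hypothetical, not the actual wrapper script), the POT texture plus block SetPixels approach looks something like this, drawing only the used 640x480 region on screen:

    Code (csharp):
    using UnityEngine;

    public class DepthMapDisplay : MonoBehaviour
    {
        const int W = 640, H = 480;
        Texture2D tex;

        void Start()
        {
            tex = new Texture2D(1024, 512, TextureFormat.RGB24, false);   // POT, no mipmaps
        }

        public void UpdateBlock(Color[] colors)     // 640*480 colors from the wrapper
        {
            tex.SetPixels(0, 0, W, H, colors);      // write only the used block
            tex.Apply(false);
        }

        void OnGUI()
        {
            if (tex == null) return;
            // Show just the 640x480 region of the POT texture.
            Rect uv = new Rect(0f, 0f, W / 1024f, H / 512f);
            GUI.DrawTextureWithTexCoords(new Rect(0, 0, W, H), tex, uv);
        }
    }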

    I guess I should also add that switching to using openni.net.dll will open up access to many other functions, but so far I have been getting a lot of mileage out of the unityinterface.dll.
     
  45. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
    Hi Eiger -
    Noisecrime provided a lot of good feedback and insight, but I thought I would chime in and tell you that I teach a class called "building virtual environments" and a few of the students used the Kinect with Unity this quarter. One student did a really fun project and used exactly the method you are talking about experimenting with.

    She created cubes and then parented them to appropriate joints of the avatar body (out in front of the chest, over the head, behind the neck). Parented to the skeleton, the cubes would always be above the head or in front of the chest, etc., no matter how the avatar moves in the scene.

    Because rigidbodies and collisions were needed for other things, she simply used a distance threshold to check whether both hands arrived at the box locations. She also used timers to make sure the hands stayed in the locations long enough, instead of "accidentally" passing through the locations and triggering events.
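    A minimal version of that distance-plus-timer check might look something like the following (names are illustrative - this isn't her actual script):

    Code (csharp):
    using UnityEngine;

    public class PoseTrigger : MonoBehaviour
    {
        public Transform leftHand, rightHand;      // joints driven by the skeleton
        public Transform leftTarget, rightTarget;  // the invisible 'boxes'
        public float threshold = 0.25f;            // distance threshold (tune per avatar)
        public float holdTime = 0.5f;              // seconds the pose must be held

        float timer;

        void Update()
        {
            bool inPose =
                Vector3.Distance(leftHand.position,  leftTarget.position)  < threshold &&
                Vector3.Distance(rightHand.position, rightTarget.position) < threshold;

            timer = inPose ? timer + Time.deltaTime : 0f;

            if (timer >= holdTime)
            {
                timer = 0f;
                Debug.Log("Pose triggered");   // fire whatever event you like here
            }
        }
    }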

    She did have to relocate the boxes after a few tests, once she learned which poses were easiest for a user to reach, and the method also suggested that adding many more poses would get finicky (she had four). But once the boxes were made invisible and people started using her project in the final review (today!), they very quickly learned how to make the right poses and trigger the different events.

    Good luck with your project!

    P.S. - I have also been experimenting with some methods more similar to what Noisecrime was describing - recording the rotation values of arms/elbows into arrays and then watching for movement through those motions within an allowable threshold. I also noticed there seems to be a pretty sophisticated implementation of gesture recognition as a Unity script on the wiki at: http://www.unifycommunity.com/wiki/index.php?title=Gesture_Recognizer
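    As a rough, hypothetical sketch of that idea (not the wiki script): step through a recorded list of joint rotations and only advance while the live joint stays within an allowable threshold of the next sample.

    Code (csharp):
    using System.Collections.Generic;
    using UnityEngine;

    public class MotionMatcher : MonoBehaviour
    {
        public Transform joint;                               // e.g. the forearm transform
        public List<Vector3> recorded = new List<Vector3>();  // eulerAngles captured earlier
        public float tolerance = 20f;                         // degrees

        int index;

        void Update()
        {
            if (recorded.Count == 0) return;

            float diff = Quaternion.Angle(joint.rotation, Quaternion.Euler(recorded[index]));

            if (diff < tolerance)
            {
                index++;                                      // on track, advance to the next sample
                if (index >= recorded.Count)
                {
                    Debug.Log("Motion recognised");
                    index = 0;
                }
            }
            else if (diff > tolerance * 3f)
            {
                index = 0;                                    // drifted too far off the recorded path
            }
        }
    }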
     
    Last edited: Mar 18, 2011
  46. the_gnoblin

    the_gnoblin

    Joined:
    Jan 10, 2009
    Posts:
    722
    Big thanks to priceap and vtornik23!

    Now I can walk around, crouch and jump ;).
     
  47. diabloroxx

    diabloroxx

    Joined:
    Jan 20, 2010
    Posts:
    71
    Is there a way to tilt the Kinect sensor using Unity, for detecting the head and the limbs? I know tilting depends on the Kinect motor, but I can't find a class anywhere that supports tilting - or am I missing something? In other words, I want to tilt the Kinect sensor up and down without doing it manually.
     
  48. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
    When I first started experimenting with the Kinect in Unity, I used the CLNUI dll from http://codelaboratories.com/nui/
    It worked well for capturing the color camera and the depth map, and it can read the built-in accelerometer and control the color LEDs and the motor. For it to work, you install the CLNUI device driver for the Kinect.

    Likewise, the Primesense OpenNI SDK also has its own drivers for the hardware, but it is intended for Primesense sensors rather than the Kinect, so using the modified Primesense sensor driver is necessary.

    As far as I can tell, there is no motor control functionality in the OpenNI SDK. I guess this may be because the Primesense technology doesn't include a motor - the motor and accelerometer are something Microsoft thought up.

    So the complicated answer is that it can be done, but maybe not with the driver and SDK that currently provide the skeletonization functionality. I have not tried it, but I suspect attempting to install and run both drivers would directly conflict with one another.

    If Microsoft does indeed release its own SDK this spring, there may be full skeletonization along with access to all the other Kinect hardware, or they may limit a lot of it in the "hobbyist" edition. There is no predicting, except that the commercial version should bring the promise of a legitimate path to developing and distributing projects that use the Kinect/Primesense technology. I keep hoping the Primesense/Asus camera will show up too.
     
  49. diabloroxx

    diabloroxx

    Joined:
    Jan 20, 2010
    Posts:
    71

    Thanks for the information. Guess I will have to wait until they release the SDK. Until then, I will just use my Xbox to tilt the angle and then reconnect with the PC :)
     
  50. priceap

    priceap

    Joined:
    Apr 18, 2009
    Posts:
    274
    Sorry Diabloroxx - I gave you a much more complex reply than necessary. I thought you wanted to have the camera tilt adjust automatically during run time for some reason in your project.

    I use the brute force method to tilt my Kinect: I just grab the top and the base and firmly tilt it until it clicks into a new angle. The click is kind of loud and you might think it is breaking, but I think it is just the way the servo is geared inside. I assume it can allow for this because they know little kids are going to handle it, but if you are concerned about it being too abusive then don't do it - just letting you know I use the little kid approach.