
Kinect v2 with MS-SDK

Discussion in 'Assets and Asset Store' started by roumenf, Aug 1, 2014.

  1. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Make sure that the 'Vertical Movement'-setting of the AvatarController-component of the avatar is enabled in the scene where you replay the recorded file.
     
  2. agramma

    agramma

    Joined:
    Jun 20, 2011
    Posts:
    13
Thanks for the answer! I did, but it didn't make much of a difference. The overall animation is pretty good. Here is an example:



In the video you can see both the animation and footage of the actual motion. As you can see, while the user is walking, the avatar appears to slide most of the time. Is there anything else I could try?
     
  3. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
It looks like the 'Smooth factor'-setting of the AvatarController is too low, causing the movement to appear slow. Try changing it to 10 or 20. This should make the movement faster and less smoothed. If you set it to 0, there will be no smoothing at all, and the movement should have minimal delay.
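
    For illustration, smoothing of this kind usually boils down to a frame-rate-scaled blend toward the target value - here is a minimal sketch of the idea (not the actual AvatarController source):

    ```
    using UnityEngine;

    // Minimal sketch of frame-rate-scaled smoothing (an illustration, not the
    // actual AvatarController code). A higher smoothFactor takes a larger step
    // per frame, so the object lags less behind its target; 0 disables smoothing.
    public class SmoothFollowSketch : MonoBehaviour
    {
        public Transform target;          // hypothetical tracked target
        public float smoothFactor = 10f;

        void Update()
        {
            if (smoothFactor > 0f)
            {
                transform.position = Vector3.Lerp(transform.position, target.position,
                                                  smoothFactor * Time.deltaTime);
            }
            else
            {
                // smoothFactor == 0: follow directly, with no smoothing delay
                transform.position = target.position;
            }
        }
    }
    ```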

    By the way, which asset and what sensor are you using? The avatar seems to lean backwards...
     
  4. Pyxis-Belgique

    Pyxis-Belgique

    Joined:
    Jul 1, 2016
    Posts:
    14
    Hi Rumen,

First of all, thanks a lot for this asset, which helped me put together a quick POC for our interactive wall project.

    I'm trying to achieve something similar to the KinectBackgroundRemoval5 example.
    I need a background-removed "3D sprite version" of the high-resolution users, with matching pivot points.

    Think of the KinectUserVisualizer example, but with the 2D high-resolution image in place of the 3D Mesh View.
    The 3D Mesh View in that example is awesome and matches the 3D pivot points, but its resolution isn't high enough for my needs.

    Unfortunately, I cannot use 'Pos Relative To Camera', because users won't be able to move more than ~3 meters away from the sensor (which is placed in front of them to capture a mirrored 1:1 image of them). Also, the screen is about 4.50 x 3.50 meters, so they would render too big on it.

    That's why I'm trying to scale down their silhouettes, but also place them in 3D space to allow all kinds of 3D layering (the MainCamera sticks to the front and never moves at runtime, so "sprite depth" isn't required).

    I've been trying for nearly 3 weeks with mixed, erroneous results, but since my deadline is approaching, I was wondering if you could help me figure out the best way to do this?

    Thanks
     
  5. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Hi,

First off, you can increase the resolution of the user visualizer - open UserMeshVisualizer.cs and change 'sampleSize' from 2 to 1. Be aware that in this case, if the user is near the sensor, the mesh can become too large and exceed Unity's 64K-vertex mesh limit. That's the reason for the sampleSize constant in the first place.
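
    A quick back-of-the-envelope check shows why: assuming the full 512x424 Kinect v2 depth map and one mesh vertex per sampled depth pixel (a simplification - the real UserMeshVisualizer only meshes the user's region), the counts look like this:

    ```
    using UnityEngine;

    public static class MeshBudgetSketch
    {
        // Rough vertex-count estimate per sampleSize, assuming a full 512x424
        // depth frame and one vertex per sampled pixel (for illustration only).
        public static void LogVertexBudget()
        {
            const int width = 512, height = 424;

            foreach (int sampleSize in new[] { 1, 2 })
            {
                int vertices = (width / sampleSize) * (height / sampleSize);
                // Unity meshes use 16-bit indices by default, i.e. at most 65535 vertices.
                Debug.Log("sampleSize " + sampleSize + ": " + vertices + " vertices (limit 65535)");
            }
            // sampleSize 1 -> 217,088 vertices: far over the limit if a full frame were meshed
            // sampleSize 2 -> 54,272 vertices: safely under it
        }
    }
    ```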

I read your post several times, but still cannot figure out what exactly is blocking you. If the problem is how to apply the BR-texture to a sprite instead of a renderer, look at the Update()-method of DepthSpriteViewer.cs. It is a component of the DepthColliderDemo2D-scene. Only the texture (and its size) would be different in your case.

    If the problem is something else, please describe it in more detail.
     
  6. agramma

    agramma

    Joined:
    Jun 20, 2011
    Posts:
    13

Thanks for the tip, the animation is much better now! I am using data recorded with the Kinect v2 sensor and some of the Kinect scripts from your package. Can you think of anything else I could change that would improve the animation? Does the move rate parameter have any major effect on the result?
     
  7. Tina00

    Tina00

    Joined:
    Feb 21, 2017
    Posts:
    1
Hi, I'm experimenting with the holographic viewer and a projection on a wall. The projection is like a window into another room. The virtual room is actually a 3D model, which is supposed to look real as the viewer moves around. The perspective of the camera should adapt to the user's position.
    My issue is that I don't want the 3D object to be at the centre of the Unity project, as it is in the example. But then the transformation of the model isn't what it would be in a real environment.
    Furthermore, I would like to place the Kinect behind the user. For now I've tried switching +/- on the x-axis. It worked well when the user was walking from side to side. An unwanted side effect was that the plane at point 0 started to tilt, too.
    I appreciate any help. Thanks in advance.
     

    Attached Files:

  8. Pyxis-Belgique

    Pyxis-Belgique

    Joined:
    Jul 1, 2016
    Posts:
    14
    Hi
    Thanks for the reply.
Allow me to rephrase my problem:

    I have various effects attached to each user.
    Some only need pivot points to work (particle emitters), others require a skinned mesh (shaders), and on top of that I also need the high-res texture of the users with the background removed.

    You can align those 3 different things perfectly when positions are relative to the camera and to the overlay color.
    But as soon as you need to reduce the size of the users, because of a giant screen or a lack of space in front of the sensor, a perspective offset appears between the skinned mesh and the high-resolution texture.

    I will have a look at the DepthSpriteViewer.

    For now, I've only managed to get the high-res texture and joint positions working together with the UserPlaneMover of KinectBackgroundRemoval5, thanks to this modified version of the Joint Overlayer, but I'm unable to match a skinned mesh with 'Avatar Scaled' on.

Code (csharp):
    ```
    using UnityEngine;
    using System;

    public class SilhouetteJoints : MonoBehaviour
    {
        [Tooltip("Index of the player, tracked by this component. 0 means the 1st player, 1 - the 2nd one, 2 - the 3rd one, etc.")]
        public int playerIndex = 0;

        [Tooltip("Camera that will be used to overlay the 3D-objects over the background.")]
        public Camera silhouetteCamera;

        [Tooltip("Smoothing factor used for joint rotation.")]
        public float smoothFactor = 20f;

        public Renderer silhouetteRenderer;

        [NonSerialized]
        public Quaternion initialRotation = Quaternion.identity;

        public Transform SpineBase;
        public Transform SpineMid;
        public Transform SpineShoulder;
        public Transform Neck;
        public Transform Head;
        public Transform ShoulderLeft;
        public Transform ElbowLeft;
        public Transform WristLeft;
        public Transform HandLeft;
        public Transform ShoulderRight;
        public Transform ElbowRight;
        public Transform WristRight;
        public Transform HandRight;
        public Transform HipLeft;
        public Transform KneeLeft;
        public Transform AnkleLeft;
        public Transform FootLeft;
        public Transform HipRight;
        public Transform KneeRight;
        public Transform AnkleRight;
        public Transform FootRight;
        public Transform HandTipLeft;
        public Transform ThumbLeft;
        public Transform HandTipRight;
        public Transform ThumbRight;

        private bool objFlipped = false;
        private KinectManager manager = null;

        private Rect silhouetteRect;

        public Rect SilhouetteRect
        {
            get { return silhouetteRect; }
        }

        public void Start()
        {
            if (!silhouetteCamera)
                silhouetteCamera = Camera.main;

            manager = KinectManager.Instance;

            InitializeObject(SpineBase);
            InitializeObject(SpineMid);
            InitializeObject(SpineShoulder);
            InitializeObject(Neck);
            InitializeObject(Head);

            InitializeObject(ShoulderLeft);
            InitializeObject(ElbowLeft);
            InitializeObject(WristLeft);
            InitializeObject(HandLeft);

            InitializeObject(ShoulderRight);
            InitializeObject(ElbowRight);
            InitializeObject(WristRight);
            InitializeObject(HandRight);

            InitializeObject(HipLeft);
            InitializeObject(KneeLeft);
            InitializeObject(AnkleLeft);
            InitializeObject(FootLeft);

            InitializeObject(HipRight);
            InitializeObject(KneeRight);
            InitializeObject(AnkleRight);
            InitializeObject(FootRight);

            InitializeObject(HandTipLeft);
            InitializeObject(ThumbLeft);

            InitializeObject(HandTipRight);
            InitializeObject(ThumbRight);
        }

        void InitializeObject(Transform t)
        {
            if (t)
            {
                // always mirrored
                initialRotation = t.rotation;

                Vector3 vForward = silhouetteCamera ? silhouetteCamera.transform.forward : Vector3.forward;
                objFlipped = (Vector3.Dot(t.forward, vForward) < 0);

                t.rotation = Quaternion.identity;
            }
        }

        void UpdateObjectPosition(long userId, Transform t, KinectInterop.JointType trackedJoint)
        {
            int iJointIndex = (int)trackedJoint;

            if (manager.IsJointTracked(userId, iJointIndex))
            {
                Vector3 posJoint = manager.GetJointPosColorOverlay(userId, iJointIndex, silhouetteCamera, silhouetteRect);

                if (posJoint != Vector3.zero)
                {
                    if (t)
                    {
                        t.gameObject.SetActive(true);
                        t.position = posJoint;

                        Quaternion rotJoint = manager.GetJointOrientation(userId, iJointIndex, !objFlipped);
                        rotJoint = initialRotation * rotJoint;

                        t.rotation = Quaternion.Slerp(t.rotation, rotJoint, smoothFactor * Time.deltaTime);
                    }
                }
            }
            else
            {
                // make the object inactive
                if (t) t.gameObject.SetActive(false);
            }
        }

        void Update()
        {
            KinectManager manager = KinectManager.Instance;

            if (manager && manager.IsInitialized() && silhouetteCamera)
            {
    //            if(portraitBack && portraitBack.enabled)
    //            {
    //                backgroundRect = portraitBack.GetBackgroundRect();
    //            }

                silhouetteRect = ScreenRectRenderer(silhouetteRenderer);

                long userId = manager.GetUserIdByIndex(playerIndex);

                // overlay the joints
                UpdateObjectPosition(userId, SpineBase, KinectInterop.JointType.SpineBase);
                UpdateObjectPosition(userId, SpineMid, KinectInterop.JointType.SpineMid);
                UpdateObjectPosition(userId, SpineShoulder, KinectInterop.JointType.SpineShoulder);
                UpdateObjectPosition(userId, Neck, KinectInterop.JointType.Neck);
                UpdateObjectPosition(userId, Head, KinectInterop.JointType.Head);

                UpdateObjectPosition(userId, ShoulderLeft, KinectInterop.JointType.ShoulderLeft);
                UpdateObjectPosition(userId, ElbowLeft, KinectInterop.JointType.ElbowLeft);
                UpdateObjectPosition(userId, WristLeft, KinectInterop.JointType.WristLeft);
                UpdateObjectPosition(userId, HandLeft, KinectInterop.JointType.HandLeft);

                UpdateObjectPosition(userId, ShoulderRight, KinectInterop.JointType.ShoulderRight);
                UpdateObjectPosition(userId, ElbowRight, KinectInterop.JointType.ElbowRight);
                UpdateObjectPosition(userId, WristRight, KinectInterop.JointType.WristRight);
                UpdateObjectPosition(userId, HandRight, KinectInterop.JointType.HandRight);

                UpdateObjectPosition(userId, HipLeft, KinectInterop.JointType.HipLeft);
                UpdateObjectPosition(userId, KneeLeft, KinectInterop.JointType.KneeLeft);
                UpdateObjectPosition(userId, AnkleLeft, KinectInterop.JointType.AnkleLeft);
                UpdateObjectPosition(userId, FootLeft, KinectInterop.JointType.FootLeft);

                UpdateObjectPosition(userId, HipRight, KinectInterop.JointType.HipRight);
                UpdateObjectPosition(userId, KneeRight, KinectInterop.JointType.KneeRight);
                UpdateObjectPosition(userId, AnkleRight, KinectInterop.JointType.AnkleRight);
                UpdateObjectPosition(userId, FootRight, KinectInterop.JointType.FootRight);

                UpdateObjectPosition(userId, HandTipLeft, KinectInterop.JointType.HandTipLeft);
                UpdateObjectPosition(userId, ThumbLeft, KinectInterop.JointType.ThumbLeft);
                UpdateObjectPosition(userId, HandTipRight, KinectInterop.JointType.HandTipRight);
                UpdateObjectPosition(userId, ThumbRight, KinectInterop.JointType.ThumbRight);
            }
        }

        // Projects the renderer's world-space bounds to screen space and returns
        // the enclosing screen rectangle.
        Rect ScreenRectRenderer(Renderer r)
        {
            Vector3 cen = r.bounds.center;
            Vector3 ext = r.bounds.extents;

            Vector2[] extentPoints = new Vector2[8]
            {
                silhouetteCamera.WorldToScreenPoint(new Vector3(cen.x - ext.x, cen.y - ext.y, cen.z - ext.z)),
                silhouetteCamera.WorldToScreenPoint(new Vector3(cen.x + ext.x, cen.y - ext.y, cen.z - ext.z)),
                silhouetteCamera.WorldToScreenPoint(new Vector3(cen.x - ext.x, cen.y - ext.y, cen.z + ext.z)),
                silhouetteCamera.WorldToScreenPoint(new Vector3(cen.x + ext.x, cen.y - ext.y, cen.z + ext.z)),
                silhouetteCamera.WorldToScreenPoint(new Vector3(cen.x - ext.x, cen.y + ext.y, cen.z - ext.z)),
                silhouetteCamera.WorldToScreenPoint(new Vector3(cen.x + ext.x, cen.y + ext.y, cen.z - ext.z)),
                silhouetteCamera.WorldToScreenPoint(new Vector3(cen.x - ext.x, cen.y + ext.y, cen.z + ext.z)),
                silhouetteCamera.WorldToScreenPoint(new Vector3(cen.x + ext.x, cen.y + ext.y, cen.z + ext.z))
            };

            Vector2 min = extentPoints[0];
            Vector2 max = extentPoints[0];

            foreach (Vector2 v in extentPoints)
            {
                min = Vector2.Min(min, v);
                max = Vector2.Max(max, v);
            }

            return new Rect(min.x, min.y, max.x - min.x, max.y - min.y);
        }
    }
    ```
PS: The uploaded archive only contains this script, as well as a setup scene (just in case).
     

    Attached Files:

  9. slrbatman

    slrbatman

    Joined:
    Aug 21, 2015
    Posts:
    2
    I didn't read all of the posts on this thread so this question may have already been answered.

I am looking through the code, and in Kinect Gestures the left and right HandCursor and Click gestures are commented out. Is this because the code does not work, or for some other reason? I need to simulate a mouse on screen and a button click.

If there is an easy way to do this that has already been created, I would really appreciate the guidance before I try to reinvent the wheel.
     
    unity_456rudwjd likes this.
  10. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
I saw your scene and I think I understand what you mean. Am I right that you only need to display background removal + overlays on part of the screen? If so, why don't you use a separate camera that renders into the given rectangle on screen, with the BR image as background and the needed overlays over it?
     
  11. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
The cursor control and user interactions (grip, release, click, press) have been moved to the InteractionManager-component. That's why they're removed from the gestures-component. See the interaction demo scenes in the package, and don't reinvent the wheel :) You can find more information about the IM-component here.
     
  12. davide445

    davide445

    Joined:
    Mar 25, 2015
    Posts:
    138
I'm interested in this package, since I want to create both recorded animations and real-time ones using Kinect v2.
    Some questions:
    - will it be possible to mocap and animate a Unity character in real time using this plugin?
    - will it be possible to correctly capture the 360-degree movement of a dancer?
    - I have an old i7-860 CPU, roughly equivalent to a modern i3; will it be enough?
    - when adding USB3 through a PCIe card, is it better to get a single-port card (so the whole controller is dedicated to just one port), or would a 2-4 port configuration be fine anyway?
     
    Last edited: Feb 27, 2017
  13. this-play

    this-play

    Joined:
    Sep 8, 2016
    Posts:
    5
    Hi,

I can't prevent people from staying in the tracking area after they stop using my installation, and then the Kinect won't track new arrivals. How do I reset the sensor, or re-initialize it?

    Thanks :)
     
  14. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Answered by e-mail.
     
  15. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Try invoking KinectManager.Instance.ClearKinectUsers() from time to time. Combine it with proper values for 'User detection order', 'Max tracked users' and the tracking distances, as well.
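
    For illustration, something like this could invoke it on a schedule (a minimal sketch, not part of the package - tune the interval to your installation):

    ```
    using UnityEngine;

    // Minimal sketch: clear the tracked-user list at a fixed interval, so people
    // who left the tracking area are dropped and new arrivals can be picked up.
    // The 30-second interval is just an illustrative value.
    public class PeriodicUserReset : MonoBehaviour
    {
        public float intervalSeconds = 30f;

        void Start()
        {
            InvokeRepeating("ClearUsers", intervalSeconds, intervalSeconds);
        }

        void ClearUsers()
        {
            KinectManager manager = KinectManager.Instance;

            if (manager && manager.IsInitialized())
            {
                manager.ClearKinectUsers();
            }
        }
    }
    ```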
     
  16. JCeja

    JCeja

    Joined:
    Jun 9, 2015
    Posts:
    1
    Hi,
is there a way to track an object like a baseball?
     
  17. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
There is a scene in VariousDemos called KinectHandObjectChecker. See https://ratemt.com/k2docs/DemoScenes.html
    You could fine-tune its HandObjectChecker-component to detect when the ball is in your hands, I think. But if you need to track a flying ball, you would also need a CV library to locate the object in one of the depth-related textures, or in the raw depth data. I'm also not quite sure the performance would be sufficient in that case.
     
  18. Pyxis-Belgique

    Pyxis-Belgique

    Joined:
    Jul 1, 2016
    Posts:
    14
Thanks for the reply!

    Indeed, I need to display background removal with overlaid joints, but also a skinned mesh (and eventually a mesh visualizer, for various "modes").

    I'm currently using a separate camera, but since background removal uses blitting, I can't render into a reduced rectangle. That's why I'm using the UserPlaneMover with a "foregroundToRenderer" script, so I can freely place the high-res texture independently of the main camera and bring real depth to it.

    Unfortunately, as you can see by adding a simple mesh visualizer, the UserPlaneMover (and, by design, the joint overlay) doesn't move the same way as a 3D skinned mesh (or the mesh visualizer).

    There's some kind of ratio I would need to apply because of the 3D perspective, but I can't figure it out. Something related to the Unity camera's field of view and the Kinect camera's optics?


    Simply put:

    1. Copy the UserMesh from the KinectUserVisualizer scene
    2. Paste it into the KinectBackgroundRemoval5 scene
    3. Try to scale down the UserImage to match the UserMesh (or, the other way around, scale up the UserMesh)

    You'll notice that even when both nearly fit, walking forward makes the UserImage bigger than the UserMesh, and walking backward makes the UserImage smaller than the UserMesh.

    It's natural optical behavior, but how should I scale and move the UserImage/UserMesh to overcome it?
     
  19. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Well, for sure the scale and offset could be estimated, based on the Kinect camera resolution and the target rectangle's resolution and position, and then you could recalculate all overlay points. Unfortunately, I'm quite overwhelmed with work at the moment and don't have time to go deeper into the issue and help you with the re-mapping of coordinate systems.

    But I still think the camera approach would be simpler and more robust. Let me show you what I mean: open the KinectBackgroundRemoval2 demo scene and set the ViewportRect-setting of all cameras (BackgroundCamera1, BackgroundCamera2 & Main Camera) to (X: 0.4, Y: 0.2, W: 0.6, H: 0.6). As you can see when you run the scene, all geometry (background, BR-image, 3D models behind and in front of the BR-image) stays aligned and takes up only part of the screen. In your case you would probably need one more camera to do what the MainCamera does here, leaving the MainCamera to render the full screen (X: 0, Y: 0, W: 1, H: 1).
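
    For illustration, the same viewport change could also be applied from code via the standard Camera.rect property (a quick sketch - the camera names are the ones from the KinectBackgroundRemoval2 scene):

    ```
    using UnityEngine;

    // Sketch: assign the normalized viewport rectangle from the text above to the
    // three demo cameras at startup. Camera.rect is the standard Unity API for this.
    public class ViewportSetup : MonoBehaviour
    {
        void Start()
        {
            Rect viewport = new Rect(0.4f, 0.2f, 0.6f, 0.6f);  // X, Y, W, H in 0..1 screen space

            foreach (string camName in new[] { "BackgroundCamera1", "BackgroundCamera2", "Main Camera" })
            {
                GameObject go = GameObject.Find(camName);
                Camera cam = go ? go.GetComponent<Camera>() : null;

                if (cam)
                    cam.rect = viewport;
            }
        }
    }
    ```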
     
  20. CHenYaTong

    CHenYaTong

    Joined:
    Dec 11, 2016
    Posts:
    2
Hello, I want to ask: does Kinect v2 support multiple Kinects, e.g. two Kinects connected to one PC? And if so, how is it done?
    Thank you. Sorry, my English is not that good.
     
  21. CHenYaTong

    CHenYaTong

    Joined:
    Dec 11, 2016
    Posts:
    2
I mean two Kinects attached to one PC.
     
  22. Voronoi

    Voronoi

    Joined:
    Jul 2, 2012
    Posts:
    584
I am using Kinect v2 with Unity 5.6.0f1 beta. When I run a build for the first time, a whole bunch of files are put outside the data folder, onto my desktop. The scripts are all related to Kinect, face tracking, etc. I see there is a bug in the release notes about builds pointing to Editor DLLs, and I think it may be this problem.

    Is this expected behavior or a bug? Any thoughts on how to prevent builds from propagating files like this?
     
  23. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Sorry for the delayed response. No, multiple Kinects are not allowed on the same PC, as far as I remember. This was a requirement of the Kinect SDK 2.0. The workaround was to connect multiple Kinect sensors to multiple PCs and sync their data over the network on one of them. But this setup is not yet supported by the K2-asset.
     
  24. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
I'm not sure which bug from the release notes you mean. Copying the needed libraries to the app folder is expected behavior. This way the needed libraries are determined at runtime, without any specific preparations. Of course, if your build targets the Kinect-v2 sensor only and you want to prevent this behavior, see this tip on how to include the libraries in the build: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t32
     
  25. finx

    finx

    Joined:
    Mar 31, 2017
    Posts:
    6
Hi! I'm working on a project and need to know whether this asset is suitable for it, and whether it includes a script that could help me, since I'm a noob with Kinect.
    The project is about detecting people and showing a message above their heads (max. 3 persons simultaneously).
    As mentioned, I'd like to know if I can use this asset for this; any information would be appreciated.
    I'll add reference images and a video.

     
  26. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Take a look at the 2nd face-tracking demo in the package. You could replace the hat with the bubbles, and modify or extend the controlling script, if needed. Anyway, if you later decide you are not satisfied with the package, just email me and I'll send you back the money.
     
  27. finx

    finx

    Joined:
    Mar 31, 2017
    Posts:
    6
Hi Roumenf,
    I bought the asset and managed to build what's in the video, but I've run into a problem:
    when I test with 1 person it's OK, but when there are 3 persons the screen starts lagging and the objects stay in the air.
    What is happening?
     
  28. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Make sure you don't have multiple objects in the scene containing the KinectManager-component. There should be only one KinectManager. Regarding the picture above, is it possible that a chair there is being wrongly detected as a human body? In that case, just remove the chairs, or limit the maximum user distance in meters (a setting of the KinectManager-component).

    If you can't locate the issue by yourself, feel free to zip and send me part of the project (or a demo scene showing the issue you are experiencing), so I can take a closer look.
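
    For illustration, a quick sanity check for the single-KinectManager rule could look like this (a hypothetical helper based on the standard FindObjectsOfType call, not part of the package):

    ```
    using UnityEngine;

    // Hypothetical sanity check: warn if the scene contains more than one
    // KinectManager component.
    public class KinectManagerCheck : MonoBehaviour
    {
        void Start()
        {
            KinectManager[] managers = FindObjectsOfType<KinectManager>();

            if (managers.Length > 1)
            {
                Debug.LogWarning("Found " + managers.Length +
                    " KinectManager components in the scene - there should be exactly one.");
            }
        }
    }
    ```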
     
  29. digitalfunfair

    digitalfunfair

    Joined:
    Oct 21, 2014
    Posts:
    10
    Hi Roumenf,
thanks for the great plug-in. It all seems to work fine, except for the BackgroundRemoval3 example, where my silhouette mask appears upside down at the top of the screen.
    Every other example I've tried works fine!
    I'm using Windows 10, Unity 5.6.0f3, K2-asset v2.12.2, and an Nvidia GeForce GTX 760.
    Any ideas?
     
  30. Ben-BearFish

    Ben-BearFish

    Joined:
    Sep 6, 2011
    Posts:
    1,204
    @roumenf Do you know if support was ever added to UWP?
     
  31. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Thank you for the feedback! I think there is a bug in the shader this demo uses. Please e-mail me, so I can send you the fixed shader as a replacement. At least I hope it is fixed... ;)
     
  32. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
If you mean the K2-asset, yes, initial UWP-10 support was added in v2.12.2. It will be extended in the upcoming v2.13, which should be published in the coming days. The only problem I see so far is a bug in the latest beta of the Windows 10 Creators Update, related to the body stream.
     
  33. Ben-BearFish

    Ben-BearFish

    Joined:
    Sep 6, 2011
    Posts:
    1,204
@roumenf So if I use your plugin, instead of the Unity/Kinect package Microsoft has on their page, then the Kinect will run in UWP?

    I'm unsure which DLL to use to get UWP working correctly. Do I use MultiK2.dll or the Metro DLLs? I'm also unsure which script to call. Do I call my usual KinectBodySensor?
     
    Last edited: May 11, 2017
    roumenf likes this.
  34. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Yep, it will run on UWP-10. Only the face-tracking part is (still) missing. I don't understand what is so special about this?!
    Here is what you need to do to build for UWP-10: https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t33
    Actually, if you like, you can make it work on UWP-8.1 too. There is another tip about that ;)
     
  35. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
FYI: The K2-asset relies more on the Unity component architecture. You don't need to call any special functions. Just add the KinectManager from the KinectScripts-folder as a component of an object in your scene, and the Kinect should start working - in Editor play mode, desktop apps, UWP 8.1 or UWP-10. Of course, you can call the KM-methods as well, like this: KinectManager.Instance.<method-name>(); Here is the online documentation of the components and API, although I still need to finish and update it a bit to match the latest version of the package: https://ratemt.com/k2docs/
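
    For example, a minimal polling component could look something like this (a sketch based on the KM-methods mentioned in this thread - IsInitialized, GetUserIdByIndex, IsJointTracked, GetJointPosition - so treat the exact signatures as approximate):

    ```
    using UnityEngine;

    // Minimal usage sketch: no special initialization, just a KinectManager
    // somewhere in the scene and a component that polls it every frame.
    public class HeadLogger : MonoBehaviour
    {
        public int playerIndex = 0;

        void Update()
        {
            KinectManager manager = KinectManager.Instance;

            if (manager && manager.IsInitialized())
            {
                long userId = manager.GetUserIdByIndex(playerIndex);
                int iHead = (int)KinectInterop.JointType.Head;

                if (manager.IsJointTracked(userId, iHead))
                {
                    Vector3 posHead = manager.GetJointPosition(userId, iHead);
                    Debug.Log("Head position: " + posHead);
                }
            }
        }
    }
    ```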
     
  36. Ben-BearFish

    Ben-BearFish

    Joined:
    Sep 6, 2011
    Posts:
    1,204
@roumenf My case may be more particular, and I was hoping to include as few libraries and dependencies as possible. I'm using the WSA namespace in Unity to do KeywordRecognition with any microphone. I want to build two separate UWP apps in Unity: one that uses the Kinect as a microphone, and one that uses another microphone. For the WSA KeywordRecognizer to detect the Kinect as a microphone, all the Kinect has to do is be activated. So I was hoping to find the minimum set of libraries and files to include that gets the Kinect activated in my scene in my UWP app.

    I've used your Kinect package before to great effect, but in this case all the files and DLLs in your package feel like overkill, and I was wondering if there are any libraries or a few files I can grab directly from your package that get the minimum Kinect functionality activated on UWP. Thank you.
     
  37. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    I see. As a minimum for UWP-10, you would only need the Plugins/UWP/MultiK2.dll library and KinectScripts/Interfaces/Kinect2UwpInterface.cs. In your case, you could probably use only the methods for opening and closing the sensor, and delete or comment out the rest of the code.
     
  38. Ben-BearFish

    Ben-BearFish

    Joined:
    Sep 6, 2011
    Posts:
    1,204
@roumenf I ran a test where I imported all your KinectScripts into an empty project. I added a KinectController object to the scene, and ran the scene both in the editor and as a UWP build. Both times I receive the error:
    I find this strange, because I have the Kinect v2 plugged in and all the drivers and SDKs properly installed, and when I run the official Microsoft Kinect verifier, everything looks OK. If I run the Kinect Unity plugin from MS, the Kinect works fine. Do you know what would cause this? I have the latest version of your plugin.
     
  39. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
If you copied only the KinectScripts-folder into your empty project, I'm surprised it works at all. If you are doing a standalone build, please read and (if possible) follow these tips:
    https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t2
    https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t20
    https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t32

    In this regard, citing my quote about the UWP10 builds above doesn't apply here, and neither does the comparison with the Kinect Unity plugin from MS, because that plugin doesn't work for UWP10 builds. Again, UWP10 builds require some special steps, and the product of such a build is a Visual Studio project:
    https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t33
     
    Last edited: May 17, 2017
  40. rexcheung

    rexcheung

    Joined:
    Feb 2, 2017
    Posts:
    35
@roumenf Thanks for your great support.
    I have a problem with face tracking.
    I based a photo booth on the "kinectOverlayDemo" scene; props are attached to the users' heads. Basically it works. I set 'Max tracked users' to three.
    However, I found that the users easily interfere with each other when they come closer; for example, the detected head positions shift once another person is standing nearby. I know there is another package called "KinectFaceTrackingDemo". I would like to know whether that package provides better face tracking, or whether they perform the same.
    Many thanks.
    Rex
     
  41. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Hi, I'm not sure which package you mean, or why the targeted users in a photo booth are three instead of one. As a matter of fact, there is another component in the K2-asset, called FaceTrackingManager. If your photo booth only puts masks over users' heads, you could try adding it to the KinectController in the scene and using its GetHeadPosition(userId) method instead of the GetJointPosition() of the KinectManager. Estimating the head overlay position may still be needed afterwards. Please e-mail me if you need help with the overlay estimation.
     
  42. paul_h

    paul_h

    Joined:
    Sep 13, 2014
    Posts:
    29
Hi Roumenf, I am working with your package - great job!
    I've got an issue with Kinect orientation; the same question has popped up a couple of times, but I did not find a clear answer: is it possible to track a user from behind? If a user enters the range facing away from the sensor, the Kinect believes he is facing it and behaves weirdly.
    I see there is a "getUserOrientation" parameter, but it's read-only.
    I hope there is a way to solve this issue, as in the case of interactive wall installations we need the gear to be far from the projection surface.
    Thanks!
     
    roumenf likes this.
  43. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
Hi Paul, unfortunately this is a "feature" of the Kinect SDK. It tracks correctly only users who are facing the sensor. Nevertheless, in my experience it also tracks those who turn their backs to the sensor; it only confuses their left and right joints, and things get tricky when they stand sideways, with a shoulder to the sensor. I think you can still use it for interactive wall installations by swapping the left and right joints programmatically (if this is of any importance). Anyway, please do some tests on site with back-facing people first, to make sure you get good tracking of the users. Feel free to e-mail me (only not at weekends, please) if you need more information.
     
  44. paul_h

    paul_h

    Joined:
    Sep 13, 2014
    Posts:
    29
Yep, the Kinect is not very flexible when dealing with non-standard setups. I remember trying to track a dancer lying with her back on the floor. A mirror and a fake pavement were not enough to fool the device :)
    Speaking of interactive walls, I found that the Kinect works fine only if the "actors" face the sensor first and then turn.
    But in this kind of installation people usually enter the sensor range walking parallel to the wall, so they present their side to the Kinect. I foresee some headaches.
    If anyone else has succeeded with a similar setup and wants to share their thoughts, they'll be welcome!
     
  45. snomura

    snomura

    Joined:
    Apr 28, 2013
    Posts:
    4
I purchased the Kinect v2 VR Examples asset. It is excellent.
    I have a question.
    In KinectDataServer.cs, the code below is used to detect whether the received data is compressed.
    Why does this check detect that the data is compressed?

    ```
    recBuffer[0] > 127 || recBuffer[0] < 32
    ```
     
  46. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
The data transferred between the data client and the data server can be LZ4-compressed, to fit the UDP packet size limitation. But because at the moment the client sends only very short keep-alive messages, client-to-server compression is generally not used. From server to client, it is almost always used.
     
  47. snomura

    snomura

    Joined:
    Apr 28, 2013
    Posts:
    4
Thanks for the reply.
    In other words, will the first byte of LZ4-compressed data always be 0-31 or 128-255?
     
  48. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    635
    Yep. Otherwise it is a text string.
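
    For illustration, the test boils down to something like this (a sketch of the idea, not the actual KinectDataServer code):

    ```
    public static class PacketSniff
    {
        // Sketch: uncompressed keep-alive/control messages are plain ASCII text
        // (printable bytes, roughly 32..127), so any first byte outside that
        // range is taken to mark an LZ4-compressed payload.
        public static bool IsCompressed(byte[] recBuffer)
        {
            return recBuffer != null && recBuffer.Length > 0 &&
                   (recBuffer[0] > 127 || recBuffer[0] < 32);
        }
    }
    ```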
     
  49. HarishDamodaran

    HarishDamodaran

    Joined:
    Feb 14, 2014
    Posts:
    3
Hello! I am using the package to detect users for a game I am designing. I use the cubeman to detect the user and have 3D objects attached to the person's ankles. The shoes attached to the ankles move as the user moves in the real world. The game itself is a simplified DDR game. The camera looks from the top, showing only the visible shoes and the tiles below. I am using the direct measurements from the program to control and play the game. It works fine as is.

    Problems I have:
    1. If I change the scene and come back to the same scene, a user standing at the same distance from the sensor is represented 4-5 feet away from the original position (either closer to the sensor or further from it)! I'm not sure why this is happening. If the scene is not changed, it works fine! But my game requires me to change scenes at specific times. The other scene is another game requiring the user to kick or march while standing in the same place.

    I have all my 3D objects at fixed real-world distances. The sensor is at 1 m height, with 0 degrees of tilt.

    2. If another user walks in, it picks the second user over the current user playing the game.

    Is there a way to ensure that, irrespective of where the user is actually detected within 3-9 ft from the sensor, they are always centered in my game?

    Thank you for the help!
     
  50. snomura

    snomura

    Joined:
    Apr 28, 2013
    Posts:
    4
When I run KinectDataServer and KinectAvaterDemos2 on the same PC, it works well.
    But when I run KinectDataServer on one PC and KinectAvaterDemos2 on another, network disconnections occur frequently.
    In this case, what should I investigate?