
Kinect v2 with MS-SDK

Discussion in 'Assets and Asset Store' started by roumenf, Aug 1, 2014.

  1. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    349
Hi, sorry for the delayed response. I'm on the road till the end of the week. You are experiencing interesting issues... I looked again at the code of CubemanController.cs, but could not find anything suspicious yet. The detected user is controlled by the 'Player index'-setting of the CubemanController and the 'User detection order'-setting of the KinectManager. If you leave it at 'Appearance', the detected user should keep its index until it gets lost, or until another user occludes it. Here is more info on the available detection orders (and you can add your own, too): https://rfilkov.com/2015/01/25/kinect-v2-tips-tricks-examples/#t23

Regarding the Cubeman issue: May I ask you to create a simplified project for me that illustrates the issue you have with the Cubeman's position, and then zip and send it over to me via WeTransfer.com? This would make locating the issue easier and faster. Otherwise I'll try to reproduce your issue myself, as soon as I get back home.
     
  2. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    349
Usually the disconnection tells the reason, too. So, what's the reason for the disconnection?
Also, make sure you use the same version of Unity on the server and client machines. Otherwise a MISMATCH error is possible, probably due to differences in the protocol implementations.
     
    snomura likes this.
  3. HarishDamodaran

    HarishDamodaran

    Joined:
    Feb 14, 2014
    Posts:
    3

Thank you, I have sent you the files with instructions on how to replicate the error.

On further troubleshooting I realized that when I walk away from the sensor and my feet get lost, around 2 meters are added to or subtracted from my feet position when I walk back into the detection range. This means I need to walk further back (towards the limits of the detection range) or towards the sensor to align my feet to the same spot in the game.

    harish
     
    roumenf likes this.
  4. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    349
    How is it now, with the updated CubemanController script?
     
  5. snomura

    snomura

    Joined:
    Apr 28, 2013
    Posts:
    4
Thanks for the reply.

The error code at disconnection was 0.
I found out that sending the keep-alive message from the client sometimes failed.
When I comment out the following code, the disconnection does not occur anymore.

In KinectDataClient.cs:
    ```
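// This block sends the client's periodic keep-alive messages to the server
// (commented out as a workaround):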
//if(connected && keepAliveIndex < keepAliveCount)
//{
//    if(sendKeepAlive[keepAliveIndex] && !string.IsNullOrEmpty(keepAliveData[keepAliveIndex]))
//    {
//        // send keep-alive to the server
//        sendKeepAlive[keepAliveIndex] = false;
//        byte[] btSendMessage = System.Text.Encoding.UTF8.GetBytes(keepAliveData[keepAliveIndex]);
//        int compSize = 0;
//        if(compressor != null && btSendMessage.Length >= 100)
//        {
//            compSize = compressor.Compress(btSendMessage, 0, btSendMessage.Length, compressBuffer, 0);
//        }
//        else
//        {
//            System.Buffer.BlockCopy(btSendMessage, 0, compressBuffer, 0, btSendMessage.Length);
//            compSize = btSendMessage.Length;
//        }
//        NetworkTransport.Send(clientHostId, clientConnId, clientChannelId, compressBuffer, compSize, out error);
//        //Debug.Log(clientConnId + "-keep: " + keepAliveData[keepAliveIndex]);
//        if(error != (byte)NetworkError.Ok)
//        {
//            throw new UnityException("Keep-alive: " + (NetworkError)error);
//        }
//        // make sure sr-message is sent just once
//        if(keepAliveIndex == 0 && keepAliveData[0].IndexOf(",sr") >= 0)
//        {
//            RemoveResponseMsg(",sr");
//        }
//    }
//    keepAliveIndex++;
//    if(keepAliveIndex >= keepAliveCount)
//        keepAliveIndex = 0;
//}
    ```

In KinectDataServer.cs:
    ```
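// This block handles incoming data on the server and marks the connection
// as alive when a keep-alive ('ka') message arrives (commented out as a workaround):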
//case NetworkEventType.DataEvent: //3
//    if(recHostId == serverHostId && recChannelId == serverChannelId &&
//        dictConnection.ContainsKey(connectionId))
//    {
//        HostConnection conn = dictConnection[connectionId];
//        int decompSize = 0;
//        if(decompressor != null && (recBuffer[0] > 127 || recBuffer[0] < 32))
//        {
//            decompSize = decompressor.Decompress(recBuffer, 0, compressBuffer, 0, dataSize);
//        }
//        else
//        {
//            System.Buffer.BlockCopy(recBuffer, 0, compressBuffer, 0, dataSize);
//            decompSize = dataSize;
//        }
//        string sRecvMessage = System.Text.Encoding.UTF8.GetString(compressBuffer, 0, decompSize);
//        if(sRecvMessage.StartsWith("ka"))
//        {
//            if(sRecvMessage == "ka") // vr-examples v1.0 keep-alive message
//                sRecvMessage = "ka,kb,km,kh";

//            conn.keepAlive = true;
//            conn.reqDataType = sRecvMessage;
//            dictConnection[connectionId] = conn;
//            //LogToConsole(connectionId + "-recv: " + conn.reqDataType);
//            // check for SR phrase-reset
//            int iIndexSR = sRecvMessage.IndexOf(",sr");
//            if(iIndexSR >= 0 && speechManager)
//            {
//                speechManager.ClearPhraseRecognized();
//                //LogToConsole("phrase cleared");
//            }
//        }
//    }
//    break;
    ```

I also commented out this line (at L623 in KinectDataServer.cs):
```
conn.keepAlive = false; // L623
```

For now it looks like it works properly, but please let me know if you have any concerns about commenting out this code.
     
  6. and_hor

    and_hor

    Joined:
    Jun 12, 2017
    Posts:
    1
Bind a 2D info plane to a vertex on the avatar mesh

Hi!

Sorry that my description is not very precise, but I find it hard to put into words. I made a sketch of the effect I want to achieve:

[image: sketch of a 2D info plane anchored to a point on the avatar]

So the point should always stay on the right vertex, even when the avatar is moving according to the Kinect-v2 avatar controller.
I tried picking a vertex, transforming it into world coordinates and using a separate render layer for the 2D elements, but it's problematic with skinned mesh renderers. I even tried SkinnedMeshRenderer.BakeMesh, but no luck so far. I just get the vertex position of the original T-pose, not of the Kinect-controlled model.

Any tips on how to proceed? Or is there an easier way to do this?

    Thanks!
     
  7. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    349
Sorry, but I don't understand why it should be so complicated. I would parent a small sphere object to the avatar's hand node in the Hierarchy. It will serve as the reference point you need. I would even use the hand/wrist node of the avatar itself as such a point. You can get the world position of the hand (or sphere) transform at any time. Then project this position onto the screen - the Camera.WorldToScreenPoint()-method helps with that. And finally, display the 2D elements on screen with respect to the projected point. Or maybe I'm missing something...
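A minimal sketch of that approach (not part of the asset - just generic Unity code; all names here are illustrative, and it assumes the UI element sits on a Screen Space - Overlay canvas):

```
using UnityEngine;

// Keeps a 2D UI element over a point on the avatar's body.
// Assign 'anchor' to the hand/wrist node (or a small sphere parented to it),
// 'viewCamera' to the camera rendering the avatar, and 'uiElement' to the
// info plane on a Screen Space - Overlay canvas.
public class BodyPointLabel : MonoBehaviour
{
    public Transform anchor;
    public Camera viewCamera;
    public RectTransform uiElement;

    void LateUpdate()
    {
        if (anchor == null || viewCamera == null || uiElement == null)
            return;

        // project the anchor's world position onto the screen
        Vector3 screenPos = viewCamera.WorldToScreenPoint(anchor.position);

        // hide the element when the anchor is behind the camera
        bool visible = screenPos.z > 0f;
        uiElement.gameObject.SetActive(visible);

        if (visible)
        {
            uiElement.position = screenPos;
        }
    }
}
```

LateUpdate is used so the projection happens after the avatar has been posed for the current frame.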
     
    Last edited: Jun 15, 2017
  8. SagarDabas1

    SagarDabas1

    Joined:
    Apr 27, 2013
    Posts:
    5
    Hi,
I am using the fitting demo of your package. There were some bugs when using the background removal and the fitting demo together, which I resolved.

But I couldn't get the portrait mode to work. It works only in the editor; when I build an exe, it does not work and the Kinect camera image takes the full screen width. Can you please suggest how to resolve this?
     
  9. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    349
Integration of the 1st fitting-room demo with the background-removal manager will be simplified in the next release.
The portrait mode has been utilized by many users so far, so I'm a little surprised. What exactly does not work? Could you please post a picture of the result you get, as well as of the configuration page shown when you start the exe? If you don't want to share them publicly, feel free to e-mail me.
     
  10. Shadeypete

    Shadeypete

    Joined:
    May 15, 2017
    Posts:
    2
Hmm, just realised I posted this in the wrong thread:

I'm trying to get the outline of the user's silhouette to generate and/or attract some particles.
I have managed to do this using an avateered humanoid, but I am hoping to achieve it using the actual camera image.
Has anyone got any ideas how I might achieve this?

I'm currently trying to get the pixels from the alpha texture, so I can write a routine which finds the 'edge pixels' and stores them somewhere.

Does anyone know if this is a good way to go?
     
    Last edited: Jun 21, 2017 at 4:35 PM
  11. Hertugweile

    Hertugweile

    Joined:
    Wednesday
    Posts:
    1
Hello everybody, and thank you to the author for the great and impressive Kinect plugin. It works excellently - but I do have one question, if I may... I need to limit certain axes while rotating, because I have this LEGO figure, which is somewhat limited in its movement.
The arms should only rotate around one axis, and the same goes for the hips and knees - but it seems Unity ignores the limitations set in 3D Studio MAX when exporting an FBX.
I tried to make some modifications to the Kinect2AvatarRot-method, but without success.
Can any of you give me a push in the right direction, please?
     
  12. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    349
If you utilize the BackgroundRemovalManager in your scene, sensorData.alphaBodyTexture would be the right texture to process. It is the same size as the color texture (1920x1080), and its pixels are non-transparent (alpha != 0) where the user pixels are. You can get a reference to it by invoking 'BackgroundRemovalManager.Instance.GetAlphaBodyTex()'. I would recommend using a shader to process this texture and create the other one - with the edges. Otherwise the processing will be slow. This is a classical CV task, so I suppose there may be ready-made shaders to do it.

If you don't use the background-removal functionality, sensorData.bodyIndexTexture is the texture to process. It is similar to the alphaBodyTexture above, but its dimensions are those of the depth image (512x424) instead. This means you would later need to map the pixels from this texture to the pixels of the color-camera texture, if you are looking for the color-camera pixels. Tell me if you need more info regarding this mapping.
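For illustration, here is a naive CPU-side sketch of the edge extraction on the raw body-index map (KinectManager's GetRawBodyIndexMap(), mentioned in the next post, returns that map as a byte array; the 255 = 'no body' value and the class/field names here are my assumptions - and as noted above, a shader would be the faster way):

```
using System.Collections.Generic;
using UnityEngine;

// Naive CPU edge extraction on the 512x424 body-index map.
// A body pixel counts as an edge pixel when at least one of its
// 4-neighbours is background.
public class SilhouetteEdges : MonoBehaviour
{
    const int Width = 512, Height = 424;
    public List<Vector2> edgePixels = new List<Vector2>();

    void Update()
    {
        KinectManager manager = KinectManager.Instance;
        if (manager == null || !manager.IsInitialized())
            return;

        byte[] bodyIndexMap = manager.GetRawBodyIndexMap();
        if (bodyIndexMap == null)
            return;

        edgePixels.Clear();
        for (int y = 1; y < Height - 1; y++)
        {
            for (int x = 1; x < Width - 1; x++)
            {
                int i = y * Width + x;
                if (bodyIndexMap[i] == 255)  // assumption: 255 = no body
                    continue;

                if (bodyIndexMap[i - 1] == 255 || bodyIndexMap[i + 1] == 255 ||
                    bodyIndexMap[i - Width] == 255 || bodyIndexMap[i + Width] == 255)
                {
                    edgePixels.Add(new Vector2(x, y));
                }
            }
        }
    }
}
```

The resulting coordinates are in depth space, so they would still need mapping to color-camera coordinates, as mentioned above, if the particles should follow the color image.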
     
  13. roumenf

    roumenf

    Joined:
    Dec 12, 2012
    Posts:
    349
Please try to set the limits in the 'Muscles & Settings'-part of the avatar definition. To do it, select your FBX in the Assets, go to the Rig-tab, select 'Humanoid' as animation type, then press Apply and then Configure. There you will see the Muscles & Settings-tab. To apply the muscle limitations to the Kinect-controlled avatar, select its game object in the scene and enable the 'Apply muscle limits'-setting of its AvatarController-component. This setting is quite new and still experimental, so issues are possible. That's why it is disabled by default.

If your idea was for the Lego figure to behave and rotate as in 2D, another option is to enable the 'Ignore Z-coordinates'-setting of the KinectManager. It is a component of the KinectController-game object in all demo scenes.
     
  14. digitalfunfair

    digitalfunfair

    Joined:
    Oct 21, 2014
    Posts:
    4
Hey, thanks! I actually found the GetRawBodyIndexMap() function yesterday and wrote a basic script to loop through the pixels and extract the edge ones. Not sure if this is the most efficient solution, but it seems hopeful at present!

If I write a shader, I will have to either get the pixels back off the GPU, which seems counterproductive, or write a GPU-based particle system, which seems daunting at present!
     
    roumenf likes this.