
Rigging and Skinning solutions

Discussion in 'Animation' started by metamorphist, Mar 30, 2016.

  1. metamorphist

    metamorphist

    Joined:
    Mar 15, 2014
    Posts:
    77
    I need several characters rigged and skinned,
    and although I know how to do it myself, I need it done faster.

    Does anyone have a solution?

    It'll be both quadrupeds and bipeds.
     
  2. McMayhem

    McMayhem

    Joined:
    Aug 24, 2011
    Posts:
    443
    There are several solutions you can try, but most of them are going to require money.

    I would stay away from the Jobs/Collaboration board on these forums; I've heard from many here that it tends to be over-saturated with scammers, or with people who just don't end up working out.

    Mixamo has an Auto-Rigger that can handle your biped models, not sure about quadrupeds. They also do custom rigging which you can get a quote for. I used them once and they did a pretty good job.

    Polycount is a great place to find artists of all varieties, including animators and riggers. I've done most of my contracting through those boards and it's worked out wonderfully. But again, the price range can vary significantly depending on your budget.

    Contracting is usually the best way to go when you need things done quickly with quality. And while Mixamo's auto-rigger works (I've tried it once or twice back when it was in beta) it isn't as good as a hand-rigged solution.
     
    metamorphist likes this.
  3. theANMATOR2b

    theANMATOR2b

    Joined:
    Jul 12, 2014
    Posts:
    7,790
    Late reply - just a workflow solution I thought about.
    One solution - if you're using 3ds Max, you could use the Skin Wrap modifier.
    Skin Wrap "wraps" one mesh around another and transfers skinning data from one mesh to the other (even between meshes of different density/resolution).

    The characters would need to be roughly the same size - and use the same animation rig.

    If the characters aren't the same size, but you're using Character Studio or CAT:
    1. Save the skin data from character1, then delete its skin modifier.
    2. Resize character1's rig (with the animation tools), and resize the character1 mesh to fit character2's size.
    3. Reset character1's xform and reapply the skin data.
    4. Skin wrap character2 to the character1 mesh.
    5. When the skin wrap is acceptable, convert the skin wrap to a skin modifier and delete the character1 mesh. Character2 is now skinned and rigged to character1's rig.
    6. Rename the bone hierarchy for character2. (If you're using Humanoid rigging in Unity, renaming is fine; if you're using Generic rigging and retargeting, renaming is not advisable.)
    7. Reopen the original character1 file to get back the original-sized character1 rig.

    This process can also be used for quadrupeds; however, resizing a bone-based rig (not CS or CAT) isn't as easy as using the built-in resizing tools of the CS or CAT animation systems.
     
    metamorphist likes this.
  4. metamorphist

    metamorphist

    Joined:
    Mar 15, 2014
    Posts:
    77
    Thank you for the responses! I'm currently using a preexisting prototype asset, as contracting people seems like such a risky business...

    Meanwhile, how do you split animations before you import?
    Apparently splitting in Unity is not so easy....
     
  5. theANMATOR2b

    theANMATOR2b

    Joined:
    Jul 12, 2014
    Posts:
    7,790
    Splitting in Unity isn't that difficult. Check the Learn > Animation section - there are some quick videos that walk you through the process pretty simply.

    Externally, there are a couple of ways to accomplish it. The first is exporting a specific animation range, but FBX export of a specific range is more hit-or-miss, less accurate than doing it manually in a couple of steps.
    The second (my personal preferred) method: for an animation range (say a walk on frames 1-30), just delete all the animation keyframes past frame 30, then export as desired. Back in 3ds Max, either undo the deletion or reopen the file, and repeat the process for the next animation range.

    I always bring the exported FBX back into Max just to make sure the export is correct, before moving into Unity.
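    Whichever way you split, the result is ultimately a table of named frame ranges (which is also what Unity's import-settings clip list stores). As a sketch, here's a small plain-C# check (my own helper, not a Unity API) that a range table is in order before you type it in - handy when there are thousands of baked frames:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical helper, not a Unity API: one clip as a named frame range.
public struct ClipRange
{
    public string Name;
    public int FirstFrame;
    public int LastFrame;

    public ClipRange(string name, int first, int last)
    {
        Name = name; FirstFrame = first; LastFrame = last;
    }
}

public static class ClipTable
{
    // Returns an error message for the first bad entry, or null if the table is sane.
    // Assumes clips are listed in ascending frame order.
    public static string Validate(IList<ClipRange> clips, int totalFrames)
    {
        int previousEnd = -1;
        foreach (var c in clips)
        {
            if (c.FirstFrame > c.LastFrame)
                return c.Name + ": first frame is after last frame";
            if (c.FirstFrame <= previousEnd)
                return c.Name + ": overlaps the previous clip";
            if (c.LastFrame >= totalFrames)
                return c.Name + ": runs past the end of the baked animation";
            previousEnd = c.LastFrame;
        }
        return null;
    }
}
```

    For example, with a walk on frames 0-30 and a run on 31-60 out of 3000 baked frames, `Validate` returns null; if the run started at frame 20 instead, it would flag the overlap.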
     
    metamorphist likes this.
  6. metamorphist

    metamorphist

    Joined:
    Mar 15, 2014
    Posts:
    77
    Hmmm, it's not difficult in Unity of course, but since Unity works in fractional (.0f) frame times, it gets particularly fiddly when separating the frames....
    And I have 3000+ baked frames from my collection to work with...

    And what about animation states?

    Like, how do I lay them out when I have different combo skills that require well-timed, if not frame-precise, button combinations?
    Do I split each into Attack A, B, and C, and let each execute on the right button, without linking from idle or moving?
    And what if the execution requirements differ?
    Like, midair attacks being different from ground attacks, etc.

    Is there a particular framework or thought process for designing these parameters and state machines?
    Can I use these state machines for behaviour control too?
     
  7. theANMATOR2b

    theANMATOR2b

    Joined:
    Jul 12, 2014
    Posts:
    7,790
    The best person I know to answer these specific questions is @TonyLi.
    I have a good guess at this, but I'm not the most experienced at it. If no other experienced mecanim designers reply, I'll give you some of my limited advice on animation states and layering.
     
    metamorphist likes this.
  8. TonyLi

    TonyLi

    Joined:
    Apr 10, 2012
    Posts:
    12,706
    Hi all - this is just my approach. Others may have very different approaches. I like to keep things as simple as possible. Instead of one monolithic machine, I prefer to build up complexity from several simpler layers that each have a single responsibility. I separate input from character state from animation state. My character control systems typically have these layers:

    [Player Input] or [AI Input]
    ^ v
    [Virtual Controller]
    ^ v
    [Body State Machine]
    ^ v
    [High-Level Animation State Machine]
    ^ v
    [Mecanim Animator]

    Here's what each layer does:
    • Player Input: Reads input from a controller and sets corresponding values on the Virtual Controller.
      • AI Input: Gets input from AI and sets the Virtual Controller.
    • Virtual Controller: Just holds inputs such as movement axis values, jump button, etc. Isolates the character from input methods so you can re-use the same character control logic for human players and different types of AI characters.
    • Body State Machine: Keeps track of what state the character is in, without regard to animation.
      • Input: Reads inputs from Virtual Controller and changes body state if allowed (e.g., change to Jump state when jump button is pressed).
      • Output: When the body state changes, tells the High-Level Animation State Machine to change state.
      • Input: Reads animation state changes from the High-Level Animation State Machine and changes body state if necessary, or handles animation events such as the point when a hadouken should spawn its fireball.
    • High-Level Animation State Machine: Translates high-level animation concepts to Mecanim parameter changes. Is only concerned with animation, nothing higher level such as spawning projectiles or detecting hits.
      • Input: Gets requests to change animation state from Body State Machine.
      • Output: Sets Mecanim parameters and/or calls Animator.CrossFade/Play.
      • Output: When receiving an animation event from Mecanim, pass it up to Body State Machine if applicable.
    • Mecanim Animator: Just handles the actual, low-level animation.
    You could also wrap the high-level animation stuff into state machine behaviours if you prefer.

    Looking at the list above, it might seem like this is more complicated than just throwing all your logic into your animator controller. But mixing animation logic with higher-level logic can get really hard to manage and debug. Although the approach above has more layers, each one is much simpler to write, test, and debug in isolation.

    Then again, if you don't want to go through all that, look into Opsive's Third Person Controller. It takes care of all that for you.
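    To make the layering concrete, here's a minimal sketch of the High-Level Animation State Machine layer. All names are mine, not an existing API; the `crossFade` delegate stands in for Mecanim's `Animator.CrossFade`, which is exactly what lets you exercise the layer without Unity:

```csharp
using System;
using System.Collections.Generic;

// Sketch of the "High-Level Animation State Machine" layer described above.
// It only translates high-level state requests into low-level animator calls;
// it knows nothing about input, hit detection, or projectiles.
public class HighLevelAnimationStateMachine
{
    private readonly Action<string, float> crossFade;          // stands in for Animator.CrossFade
    private readonly Dictionary<string, float> blendTimes;     // per-state transition durations
    public string CurrentState { get; private set; }

    public HighLevelAnimationStateMachine(Action<string, float> crossFade,
                                          Dictionary<string, float> blendTimes)
    {
        this.crossFade = crossFade;
        this.blendTimes = blendTimes;
        CurrentState = "Idle";
    }

    // Called by the Body State Machine when the body state changes.
    public void RequestState(string stateName)
    {
        if (stateName == CurrentState) return; // ignore redundant requests
        float blend;
        if (!blendTimes.TryGetValue(stateName, out blend)) blend = 0.1f;
        crossFade(stateName, blend);
        CurrentState = stateName;
    }
}
```

    In Unity you'd construct it with `(state, t) => animator.CrossFade(state, t)`; in a test you pass a lambda that just records the calls.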
     
  9. metamorphist

    metamorphist

    Joined:
    Mar 15, 2014
    Posts:
    77
    Thanks 2b, and TonyLi, hello!

    Yes, I want to go through all of this... I find the Third Person Controller rather crude and very hard to re-engineer...
    I guess I'm still pretty new to coding... I don't even know how to, say, overload things.
    Maybe I can drop the Mecanim Animator layer, since I'm using prebaked generic bone animation...

    What I'm curious about is this virtual controller, though...
    Is it like an input manager? Recording how sharply each button is pressed, the timing of each button, etc., right?
    And whether buttons are pressed in combination, and so on?

    This low-level engineering is almost like designing your own console's sensitivity...


    If, say, I just want to make something like Dark Souls to play - hack and slash, but with some button combinations -
    should I be worrying about this virtual controller?
    How rich does it need to be, and if it gets complicated, will it hurt the playability?
     
  10. TonyLi

    TonyLi

    Joined:
    Apr 10, 2012
    Posts:
    12,706
    A virtual controller is always a good idea. It separates the device input layer from the character logic layer.

    The virtual controller layer is actually the easiest one. At its simplest, it's just a class with a bunch of variables:
    Code (csharp):
    1. public class VirtualController : MonoBehaviour {
    2.     public float horizontalAxis;
    3.     public float verticalAxis;
    4.     public bool jump;
    5.     public bool attack;
    6. }
    You could make it a C# interface or use inheritance, but I think the simple class above gets the point across best.

    Then your input layer can set the variables. If you're using Unity's standard input:
    Code (csharp):
    1. public class StandardInputToVirtualController : MonoBehaviour {
    2.     public VirtualController virtualController;
    3.  
    4.     void Update() {
    5.         virtualController.horizontalAxis = Input.GetAxis("Horizontal");
    6.         ...
    7.         virtualController.attack = Input.GetButtonDown("Attack");
    8.     }
    9. }
    Or if you're using Rewired:
    Code (csharp):
    1. public class RewiredToVirtualController : MonoBehaviour {
    2.     public VirtualController virtualController;
    3.     public Rewired.Player player;
    4.  
    5.     void Update() {
    6.         virtualController.horizontalAxis = player.GetAxis("Horizontal");
    7.         ...
    8.         virtualController.attack = player.GetButtonDown("Attack");
    9.     }
    10. }
    Or if you're using an AI script:
    Code (csharp):
    1. public class AIToVirtualController : MonoBehaviour {
    2.     public VirtualController virtualController;
    3.  
    4.     void Update() {
    5.         virtualController.horizontalAxis = DesiredHorizontalDirection();
    6.         ...
    7.         virtualController.attack = ShouldAttackNow();
    8.     }
    9. }
    Then your character logic script can read the values of VirtualController without having to know where the input is coming from (Unity standard input, Rewired, InControl, AI script, Unity's new in-development input system, etc.).

    Code (csharp):
    1. public class CharacterStateMachine : MonoBehaviour {
    2.     public VirtualController virtualController;
    3.     private enum State { Move, Attack, Fall } // Hacky example of using enum for hard-coded state machine.
    4.     private State state = State.Move;
    5.  
    6.     void Update() {
    7.         switch (state) {
    8.             case State.Move: Move(); break;
    9.             case State.Attack: Attack(); break;
    10.             case State.Fall: Fall(); break;
    11.         }
    12.     }
    13.  
    14.     void Move() {
    15.         if (virtualController.attack) state = State.Attack;
    16.         ... //etc.
    17.     }
    18. }
    The CharacterStateMachine script above is a sketch of the character logic layer, using a finite state machine, just to get the idea across.

    Since button combos are dependent on your character's current state, it's probably a good idea to put that logic in the character logic layer. For example, if the player presses Jump twice to double-jump, you only need to check this if the character is already in the jump state.
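    For the button-combo question earlier in the thread, one common trick is a timestamped input buffer that the character logic layer checks against each combo's required sequence. A plain-C# sketch (my own design, not from any Unity API) that the state machine could consult:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical combo detector: feed it button presses with timestamps, then ask
// whether a given sequence was entered within the allowed time window.
public class ComboBuffer
{
    private readonly List<KeyValuePair<string, float>> presses =
        new List<KeyValuePair<string, float>>();
    private readonly float window; // max seconds between first and last press of a combo

    public ComboBuffer(float windowSeconds) { window = windowSeconds; }

    public void Press(string button, float time)
    {
        presses.Add(new KeyValuePair<string, float>(button, time));
        // Drop presses too old to matter for any combo.
        presses.RemoveAll(p => time - p.Value > window);
    }

    // True if the most recent presses match the sequence, within the window.
    public bool Matches(string[] sequence)
    {
        if (sequence.Length == 0 || presses.Count < sequence.Length) return false;
        int start = presses.Count - sequence.Length;
        for (int i = 0; i < sequence.Length; i++)
            if (presses[start + i].Key != sequence[i]) return false;
        float span = presses[presses.Count - 1].Value - presses[start].Value;
        return span <= window;
    }
}
```

    The Body State Machine would call `Press` when the Virtual Controller reports a button, then check `Matches` only in states where that combo is legal (e.g., midair versus grounded attacks get different sequences).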
     
    metamorphist likes this.