Lipsync Rigging and Driver

Discussion in 'Works In Progress - Archive' started by mzartler, Feb 16, 2012.

  1. mzartler

    mzartler

    Joined:
    May 15, 2010
    Posts:
    19
    Hello Everyone,
We've been working on some scripts for rigging and facial animation. We think they've come a long way, but we'd be very interested in your opinions.

We have a mature automatic lipsync technology that is used by game and film studios, but our Unity support was weak. We retooled it so that character setup is pretty straightforward. We ended up with something that works great for lipsync and facial animation, but it also turned into an extensible bone-based posing system. It seems pretty cool, even outside of driving it with our lipsync data.

    Demo Video

If you are interested, here is a video of it in use. Disregard the speech-challenged narrator (or not!)

    http://www.youtube.com/watch?v=15-TWaWnHh4

    [You may need to go full screen]

    If you are interested in the unity package used in this demo:

    http://www.annosoft.com/unity/unity_annosoft_demo.unitypackage

    More Details

    A "mouth rigger" script is used to setup the visemes. Each viseme is posed in unity. Either by manually moving around the bones, or by sample capture from animations your fbx (or maya file). The animation capture is pretty slick. The visemes (or general poses) are constructed in maya, blender, or wherever, as a linear sequence. For lipsync, each frame represents the a mouth position.

We provide a script, "AnimationSampler", that lets you cycle through the animations in Unity. I used the animation sampler and mouth rigger to set up the visemes in the demo. It took about 5 minutes.
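To give a rough idea of the kind of thing the sampler does (this is just an illustrative sketch with my own class and field names, not the shipped script), a minimal scrubber for a legacy animation clip might look like this:

using UnityEngine;

// Illustrative sketch only: scrub a legacy animation clip so each frame
// (viseme pose) can be inspected and captured. Names here are hypothetical.
[RequireComponent(typeof(Animation))]
public class SimpleAnimationScrubber : MonoBehaviour
{
    public string clipName;          // the viseme sequence clip
    [Range(0f, 1f)]
    public float normalizedTime;     // position within the clip (0..1)

    void Update()
    {
        Animation anim = GetComponent<Animation>();
        AnimationState state = anim[clipName];
        if (state == null) return;

        // No automatic playback: just evaluate the pose at the requested time.
        state.enabled = true;
        state.weight = 1f;
        state.normalizedTime = normalizedTime;
        anim.Sample();
        state.enabled = false;
    }
}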

The only downside to the whole thing is that the mouth rigger must be initialized with the bone list, the bones that will be manipulated, and this is a manual process. However, I think I can do this automatically by analyzing the animation bones and determining which bones change. (To do!)

We also include a generic "BonePoser" which can be used to record poses by name and then show them at playback (by name too). This seems pretty useful. We are currently using it to do automatic eye blinking and brow motion (from data from the lipsync tool).
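For the curious, the core idea is nothing more than a dictionary of captured bone transforms keyed by pose name. A stripped-down sketch (my own names, not the actual BonePoser code) looks roughly like this:

using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch: record the current local transforms of a set of bones
// under a name, then re-apply (and optionally blend) them later by name.
public class NamedPoseLibrary : MonoBehaviour
{
    public Transform[] bones;   // the bones the poses will drive

    class Pose
    {
        public Vector3[] localPositions;
        public Quaternion[] localRotations;
    }

    readonly Dictionary<string, Pose> poses = new Dictionary<string, Pose>();

    // Capture the current local transforms of every bone under a name.
    public void RecordPose(string poseName)
    {
        Pose pose = new Pose
        {
            localPositions = new Vector3[bones.Length],
            localRotations = new Quaternion[bones.Length]
        };
        for (int i = 0; i < bones.Length; i++)
        {
            pose.localPositions[i] = bones[i].localPosition;
            pose.localRotations[i] = bones[i].localRotation;
        }
        poses[poseName] = pose;
    }

    // Re-apply a previously recorded pose, blended in by weight (0..1).
    public void ApplyPose(string poseName, float weight)
    {
        Pose pose;
        if (!poses.TryGetValue(poseName, out pose)) return;
        for (int i = 0; i < bones.Length; i++)
        {
            bones[i].localPosition = Vector3.Lerp(bones[i].localPosition, pose.localPositions[i], weight);
            bones[i].localRotation = Quaternion.Slerp(bones[i].localRotation, pose.localRotations[i], weight);
        }
    }
}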

The lipsync data is sourced either from the Lipsync Tool, from our production lipsync SDKs, or from the Unity realtime lipsync plugin. The scripts and the Unity side are cross-platform and will run everywhere Unity runs, but unfortunately the Lipsync Tool only runs on win32. I'm going to put together a skinny UI for the Macintosh (very soon), and introduce Lipsync Tool 5.0 for Macintosh.


If you want to test out the Lipsync Tool, please send me an e-mail. We offer a 30-day evaluation at no charge. The Lipsync Tool download from annosoft.com is save-disabled, so you need to get the time-limited trial. I can send more material, such as recommended viseme poses, to help you get a high-quality result.

If you've got a project that needs lipsync, I do think this is worth a look. Our speech technology has been used in hundreds of games, and I think the Unity integration is looking very good too. It's probably overkill if you just have a couple of lines of audio, but for anything more than that, a truly automatic solution is going to save money.

    Thank you for listening!

    Cheers,

    Mark Zartler
    Annosoft
    mzartler@annosoft.com
    http://www.annosoft.com
     
  2. sybixsus2

    sybixsus2

    Joined:
    Feb 16, 2009
    Posts:
    943
    Hi Mark,

This looks very interesting. I haven't personally worked with lipsync before, but I did work on a project where other people were using your tools, and the results were indeed excellent. So as I say, I'm a bit of a newbie to your toolset personally, and I'm trying to get a handle on what I need in order to have lipsyncing in my games.

    Have I understood correctly?

    • The Unity scripts - These are freely available and while not strictly necessary, are extremely useful for Unity integration.
    • The Lipsync Tool - This is what I use to create data to use with the Unity scripts. I "pre-bake" all my text/sound files into xml files which describe how the visemes will be used by the Unity scripts. I could also pre-bake data for use in 3dsMax or Maya, if I so wished. This costs $500.
    • The Lipsync SDKs - These are useful for when I want to generate the lipsync data from within my app - either to save me from having to pre-bake all of the data manually or because I want to be able to integrate with voice samples which are not part of the app itself. This costs $3000 plus and would have to be wrapped for use in Unity as it's a C++ SDK.

    So if I'm happy to pre-bake all of the lipsync data and not have my app/game lipsync with any sounds which are not part of the game, I just need to buy the Lipsync tool and download the Unity integration scripts?

    Have I basically understood everything correctly? If not, could you please clarify the points I'm mistaken on? Thanks!
     
    Last edited: Feb 19, 2012
  3. mzartler

    mzartler

    Joined:
    May 15, 2010
    Posts:
    19
    Hi sybix!

Very close! The Unity scripts are indeed freeware. I think the whole bone-posing approach, using animations to capture the poses, is pretty cool. It seems like a generally useful thing. I hope so. It's changed how I think about bone-based integration. But maybe this is just a discovery that is already well known to everyone but me. [This happens!]

When dealing with pre-recorded audio, the Lipsync Tool (with the Unity scripts) provides everything. The production SDKs come into play when you need batching capability. For projects of 100 or fewer lines, where you can deal with the scope, the Lipsync Tool is a comparatively great value. The "batch" distinction is a bit capricious, but honestly, I don't have the volume to stay in business without it. It also allows a pseudo-tiered approach based on usage. If you've got thousands of audio files to lipsync, it's somewhat more expensive. Otherwise, it's the same speech technology, with the same language support, etc.

We do have microphone/realtime lipsync also. This SDK runs in-game, capturing the microphone data, analyzing it, and generating lipsync data on the fly. It works pretty well, but it is not as high quality as our production SDKs (or the Lipsync Tool), which have the benefit of analyzing the whole audio stream before making a decision on the lipsync.

    If you've got a project where it won't be brutal to manually open the files, generate the lipsync, and save them, the lipsync tool is definitely the way to go.

Your pricing is correct on the Lipsync Tool. It's 500 USD. The SDKs range from 3K to 5K for production/internal use (our typical configuration).

The downside is our Macintosh support. The SDK runs on MacOS. It even runs on iOS. But I have not done a Mac version of the Lipsync Tool. It is on my short list. Although I won't be able to port over the tool as-is in a short time frame, I should be able to boot up a skinny user interface in short order. I've been asleep at the wheel, and it's time to start the fix.

If you want to try out the Lipsync Tool, just send me an e-mail: mzartler@annosoft.com. I'll send a 30-day trial that will work with our Unity scripts, as shown. I don't have the save-enabled version downloadable from annosoft.com. Maybe I'm a bit of a tin-foil type about that, but it prevents cracks. The upside is that I get a chance to talk to customers.

    Also, I've made a couple changes since uploading the original video. I got tired of selecting bones, and can now extract them automatically based on transform changes in the animation. I've got a new video explaining the new (easier) process.

    http://www.youtube.com/watch?v=C-MhTN9vJD8
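In case anyone wants to roll their own version of that step, the detection boils down to sampling the clip a handful of times and keeping any transform that actually moves. Something along these lines (an illustrative sketch, not the shipped code; the names and thresholds are mine):

using System.Collections.Generic;
using UnityEngine;

// Sketch: find the bones a legacy clip actually animates by sampling it at
// several times and keeping any transform whose local position or rotation
// changes relative to the first sample.
public static class AnimatedBoneFinder
{
    // samples should be >= 2 so there is something to compare against.
    public static List<Transform> FindAnimatedBones(Animation anim, string clipName, int samples)
    {
        AnimationState state = anim[clipName];
        Transform[] bones = anim.GetComponentsInChildren<Transform>();
        Vector3[] startPos = new Vector3[bones.Length];
        Quaternion[] startRot = new Quaternion[bones.Length];
        bool[] changed = new bool[bones.Length];

        state.enabled = true;
        state.weight = 1f;

        for (int s = 0; s < samples; s++)
        {
            state.normalizedTime = s / (float)(samples - 1);
            anim.Sample();
            for (int i = 0; i < bones.Length; i++)
            {
                if (s == 0)
                {
                    startPos[i] = bones[i].localPosition;
                    startRot[i] = bones[i].localRotation;
                }
                else if (!changed[i])
                {
                    changed[i] = (bones[i].localPosition - startPos[i]).sqrMagnitude > 1e-8f
                              || Quaternion.Angle(bones[i].localRotation, startRot[i]) > 0.01f;
                }
            }
        }
        state.enabled = false;

        List<Transform> result = new List<Transform>();
        for (int i = 0; i < bones.Length; i++)
            if (changed[i]) result.Add(bones[i]);
        return result;
    }
}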

    Hopefully I've answered your questions coherently! If not, let me know. I'll make another stab at it.

    Cheers!

    Mark
    Annosoft
    http://www.annosoft.com
     
  4. sybixsus2

    sybixsus2

    Joined:
    Feb 16, 2009
    Posts:
    943
    Hi Mark,

    Thanks for the clarification, and pleased I was quite close.

    The batch distinction sounds pretty fair to me. I'm not sure yet whether the volume of audio will be such that I'll be happy to convert manually or want to batch. I'd prefer not to have to use the SDKs if possible (for time and effort reasons more than money) but do any of the SDK licenses provide access to a tool which will batch process my audio? Or is it just the SDK and I have to write the tool with it?

    I'm sure I will be in touch to ask about an evaluation copy at some point, but I don't yet have enough of my production artwork finished to make it worthwhile testing just yet. When I do, I will definitely sit down and take a good look at this. It could be ideal for my purposes, since it doesn't require the level of integration with modelling tools that some of the alternative solutions do.
     
  5. mzartler

    mzartler

    Joined:
    May 15, 2010
    Posts:
    19
    Hi Sybixsus,

When you license an SDK, you get 5 seats of the Lipsync Tool with batching capability. I can also send you a compiled version of the command line tool if that fits the situation better. So yeah, there are a couple of ways to batch a bunch of audio without having to write a program around the functionality.

Thanks. It's better to wait until you are ready to start testing. It'll give you full use of the free evaluation.

    Cheers,
    Mark
     
  6. sybixsus2

    sybixsus2

    Joined:
    Feb 16, 2009
    Posts:
    943
    Thanks for the info, Mark. It sounds as though I'll have to try the evaluation version when I have some assets together and then see how far my budget will stretch.
     
  7. Gregg-Patton

    Gregg-Patton

    Joined:
    Aug 31, 2009
    Posts:
    28
    Hi Mark,

    Do you have any advice or perhaps an example for blending the lip-sync with another animation? I'm having trouble with a full body animation interfering with the lip-sync. CrossFade and Blend produce similar results.

    This is an awesome package, thanks for releasing it to the community.
     
  8. mzartler

    mzartler

    Joined:
    May 15, 2010
    Posts:
    19
    Hi gopster,

We have a code update for this problem, if I understand correctly. A simple example would be an animation which changes the mouth to a frown, then back to a smile, etc. The existing code overwrites the shared bones.

We wrote a new deformer which *should* alleviate the trouble.

    http://www.annosoft.com/unity/unity_annosoft_code_only.unitypackage

If you are using the posing package, it will work "out of the box" with nothing to change. If you are snarfing code for a custom setup, I'll try to explain the necessary changes.

The change is in AnnoBoneDeformer.cs. This has been moved to Plugins/Annosoft/Helpers, so you may need to delete the version in Plugins/Annosoft. (Too many files in one spot.)

    The approach is pretty simple. Hopefully my explanation is comprehensible.

Keep track of previous deformations and don't absolutely reset the bone transforms.

    This is accomplished by recording changes to bones as they are transformed. The next frame, these values can be subtracted from the bone transform to get back to the base pose (sort of). The bones can change externally (via animation, etc) and those changes are sticky because everything is done with offsets instead of absolute values.

    Frame 0
Reset to base pose: Does nothing, because we have no previous bone data to unwind.
Deform: Deform the bones as usual, but record the bone change values. These get unwound in the next frame's "Reset".

    Frame 1
    Reset to base pose: Use the saved bone data to reverse the previous deformation.
    Deform the bones: Deform the bones as usual, but record the changes for the next "Reset".

    Frame 2
    Reset to base pose: Subtract the bone offsets accumulated during the previous frame deformation
    Deform the bones: Deform as usual, but record the changes for the next frame

    etc

    Gory Details:

    It starts in AnnoBoneDeformer.cs

    BoneDeformMemento is the object that carries around the old deformations and also accumulates new deformations.

AnnoBoneDeformer aggregates a BoneDeformMemento. IAnnoDeformer.Reset is called to start the new blend. It instructs the BoneDeformMemento to unwind its changes from the previous frame (instead of resetting absolutely to the base pose bones).

BonePose.Blend receives the BoneDeformMemento as a parameter. It deforms as usual, and passes the changes to the BoneDeformMemento so they can be unwound in the next frame.
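Boiled down, the bookkeeping looks something like this (a simplified sketch of the idea, not the actual BoneDeformMemento source; rotation-only for brevity):

using System.Collections.Generic;
using UnityEngine;

// Simplified illustration of the offset bookkeeping: remember the rotation
// offset applied to each bone this frame, and undo exactly that amount at the
// start of the next frame, so changes made by the animation system in between
// are preserved.
public class DeformMementoSketch
{
    readonly Dictionary<Transform, Quaternion> applied = new Dictionary<Transform, Quaternion>();

    // Called at the start of a frame: unwind only last frame's deformation.
    public void Unwind()
    {
        foreach (KeyValuePair<Transform, Quaternion> kv in applied)
            kv.Key.localRotation = kv.Key.localRotation * Quaternion.Inverse(kv.Value);
        applied.Clear();
    }

    // Called while deforming: apply an offset and record it for the next Unwind().
    public void ApplyOffset(Transform bone, Quaternion offset)
    {
        bone.localRotation = bone.localRotation * offset;
        Quaternion previous;
        if (applied.TryGetValue(bone, out previous))
            applied[bone] = previous * offset;
        else
            applied[bone] = offset;
    }
}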

    If you are using the pose system provided by the toolkit, it will run as is. If you have any trouble with it, you can disable it by changing one line of code:
    AnnoBoneDeformer.cs [Line 26]
    BoneDeformMemento DeformMemory = new BoneDeformMemento();
    change to:
    BoneDeformMemento DeformMemory = null;//new BoneDeformMemento();

If you are using the system and run into trouble with BoneDeformMemento that is fixed by commenting out that code, let me know. I'd like to fix it.


    Cheers,
    Mark
     
  9. omarzonex

    omarzonex

    Joined:
    Jan 16, 2012
    Posts:
    158
Very nice rigging.

Are you sure the rigging is dynamic (automatic)?

Please let me know.
     
  10. media3d

    media3d

    Joined:
    Apr 15, 2009
    Posts:
    32
    Mark,

I'm testing the Annosoft bone setup with our mesh now and trying to find the AnimationSampler script. I downloaded the different Unity packages but still can't find the script. Where can I get this?

    Thanks



    EDIT: Found the menu!
     
    Last edited: Apr 26, 2013
  11. Alex-3D

    Alex-3D

    Joined:
    May 21, 2013
    Posts:
    79
Is Annosoft still alive?
     
  12. media3d

    media3d

    Joined:
    Apr 15, 2009
    Posts:
    32
Just finished an early stage of a project using this setup and it's working very well.
We figured out that the Annosoft scripts are not limited by the parent/child relationship:
in the video tutorial the script is attached to the mesh, but we have had success applying the
scripts to the parent of the mesh we want to influence, and still calling another animation
on other bones!

Just make sure to assign ALL the visemes, including "x" :)

    Thanks

    Good job annosoft....
     
  13. manishamalik

    manishamalik

    Joined:
    Sep 29, 2016
    Posts:
    3
I am facing a problem where the script cannot be loaded ("please fix any compiler errors and assign a valid script"). unity.JPG

How should I resolve this?