
Klattersynth TTS - Support Thread

Discussion in 'Assets and Asset Store' started by tonic, Aug 10, 2017.

  1. tonic

    Joined:
    Oct 31, 2012
    Posts:
    309
    :eek: Klattersynth TTS
    Learn more on the official website of the asset: http://strobotnik.com/unity/klattersynth/

    Klattersynth TTS is the first asset of its kind for the Unity cross-platform engine:
    a small, fully embedded speech synthesizer.

    What features does Klattersynth TTS have?
    • It does not use the OS or browser speech synth, so it sounds the SAME on all platforms. :cool:
    • Dynamically speaks whatever text you ask of it.
    • Generates and plays streamed speech in real time.
    • In WebGL builds the AudioClips are quickly pre-generated and then played.
    • Contains an English text-to-speech algorithm (text-to-phoneme transformation).
    • Alternatively, you can enter documented phonemes directly, skipping the rules for English TTS conversion.
    • You can query the current loudness of the speech, for example to tie visual effects to the audio.
    • Uses normal AudioSource components: 3D spatialization, audio filters and reverb zones work as usual!
    • Contained in a single ~100 KB cross-platform DLL file.
    • When embedded with your game or app and compressed for distribution, it compresses down to less than 30 KB. o_O
    • Supports all Unity versions from 5.0.0 onward, and is available for practically all platforms targeted by Unity.
    Why is Klattersynth TTS different from many other speech-related assets for Unity?
    • No need for the underlying platform to offer speech features (OS or browser).
    • No need for a network connection for external generation of audio clips.
    • No need to pre-generate samples before creating a build of your app or game. The clips are either streamed in real time or generated on the fly while the app or game is running.
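To illustrate what "no pre-generation" means in practice, here is a minimal Unity sketch. Note that the `KlattersynthSpeech` component name and `Speak` method are hypothetical stand-ins for illustration, not the asset's documented API; see the official website for the real interface.

```csharp
using UnityEngine;

// Hypothetical usage sketch: the component type and method name below are
// assumptions, NOT the asset's documented API. The point illustrated is
// that speech is synthesized at runtime, with no clips baked at build time.
public class SpeakOnStart : MonoBehaviour
{
    public KlattersynthSpeech speech; // assumed type name

    void Start()
    {
        // Text is converted to phonemes and synthesized on the fly,
        // played through a normal AudioSource on the same GameObject.
        speech.Speak("Hello world");
    }
}
```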
    Visit the official website of the asset to try out a WebGL build yourself!
    http://strobotnik.com/unity/klattersynth/

    Demo videos of Klattersynth TTS are available on the official website:
    http://strobotnik.com/unity/klattersynth/
     
  2. Obsurveyor

    Joined:
    Nov 22, 2012
    Posts:
    222
    Is this considered done, or are you still working on the phonemes? The F's sound more like static, and the Th's are kind of just a pop. Also, in the WebGL demo, the base frequency doesn't seem to affect whisper very much. Are there more audio tweaks available?
     
  3. tonic

    Joined:
    Oct 31, 2012
    Posts:
    309
    Hi @Obsurveyor, I won't be actively working on the sounds of the phonemes. There's only a distant possibility that I'd add 1-2 more later, or try to adjust them. With this technique there aren't going to be huge improvements in that area; a synth this small is bound to have some limitations.

    The example voices in the "Text Entry" demo are made by adjusting the three available parameters: "Ms Per Speech Frame" (which effectively controls the speed), "Flutter", and "Flutter Speed" (which can add a bit of unsteady weirdness to the sound; normally the flutter is just a subtle, largely inaudible variance in the voice wave).

    Here's an image from the inspector:
    upload_2017-8-11_10-25-34.png
    (this is the "Slow and unsteady" voice of the text entry demo)
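For readers without the screenshot, the three parameters above could be grouped roughly like this. The field names and the example values are guesses based on the inspector labels mentioned in this post, not the asset's actual serialized fields or the demo's actual numbers.

```csharp
// Hypothetical sketch of a voice-settings group, based only on the three
// inspector labels described above. Field names and values are illustrative
// guesses, NOT the asset's real serialized fields or the demo's settings.
[System.Serializable]
public class VoiceSettings
{
    public int msPerSpeechFrame = 16; // larger values -> slower speech
    public float flutter = 0.5f;      // amount of variance in the voice wave
    public float flutterSpeed = 1.0f; // how quickly that variance changes
}
```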
     

  4. DbDibs

    Joined:
    May 23, 2015
    Posts:
    1
    Very interesting, a couple of questions though. Since it's generated in real time, is it possible to adjust the actual speed/pitch in real time as well? (E.g., in the WebGL demo, being able to adjust "Base Voice Frequency" and have it change in real time instead of having to prerender it, though I understand WebGL HAS to have it prerendered.) If so, this would be PERFECT for my needs! And as for my second question: I completely forgot what it was! haha.
     
  5. tonic

    Joined:
    Oct 31, 2012
    Posts:
    309
    Hi @DbDibs, you're correct: WebGL has to have the audio prerendered, so in WebGL builds Klattersynth needs to generate the whole clip just before playing it. It doesn't take long, but the clip is pre-generated before playback actually starts.

    However, you can of course adjust the pitch parameter of the AudioSource playing the generated clip, as with any AudioClip. Note that lowering the pitch both deepens the sound and slows it down at the same time (and vice versa).
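Since the clip plays through a standard AudioSource, the pitch trick above uses only stock Unity API (`AudioSource.pitch`), nothing specific to this asset:

```csharp
using UnityEngine;

// Standard Unity API, as described above: AudioSource.pitch scales playback
// rate, so it changes pitch and speed together.
public class PitchTweak : MonoBehaviour
{
    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        // 0.5f plays roughly an octave lower and at half speed;
        // 2.0f plays roughly an octave higher and at double speed.
        source.pitch = 0.5f;
    }
}
```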

    When used in streaming mode, the synth latches onto the parameters given at the time it starts speaking a particular line (msPerSpeechFrame is also locked in at initialization time, to minimize extra memory allocations later). Even real-time streamed audio is generated in batches, so fine-grained control of the parameters would need to be specified in advance (unless the batch size is very small). That's not a feature of the API now, but it's a possibility for a future version.

    However, the currently supported approach is to instruct the synth to speak e.g. just a single word at a time, and adjust the base frequency for each word once the previous one has finished. This works both with streamed and with pre-generated (and possibly cached) speech clips.
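The word-at-a-time approach could be sketched as a Unity coroutine like the one below. The speech component type, `Speak()` call, `IsSpeaking` flag and `baseVoiceFrequency` field are hypothetical stand-ins for the asset's real API, which is documented on the official website.

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the word-at-a-time technique described above. All
// Klattersynth-specific names here (type, Speak, IsSpeaking,
// baseVoiceFrequency) are assumptions, NOT the asset's documented API.
public class WordByWord : MonoBehaviour
{
    public KlattersynthSpeech speech; // assumed type name

    IEnumerator SpeakWords(string[] words, float[] frequencies)
    {
        for (int i = 0; i < words.Length; i++)
        {
            // Set the base frequency for this word, then speak it.
            speech.baseVoiceFrequency = frequencies[i];
            speech.Speak(words[i]);

            // Wait until the current word finishes before starting the next,
            // since streamed parameters latch when speaking begins.
            while (speech.IsSpeaking)
                yield return null;
        }
    }
}
```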