UnityCollider - A flexible, general purpose audio engine

Discussion in 'General Discussion' started by Tinus, Feb 15, 2010.

  1. Tazman

    Tazman

    Joined:
    Sep 27, 2011
    Posts:
    94
    Thanks Tinus,

    Initially it will be part of Fabric, integrated with its various systems (i.e. events, dynamic mixer, runtime parameter control), but it's written in such a way that it could be used standalone... I do plan to release a version on the asset store eventually... The beta version will be out next month, sometime after AES, and I will be more than happy to provide you, or anyone else, with access to the evaluation copy of Fabric.

    Your multi-audio-listener plugin looks really useful and it's something that Fabric doesn't support at the moment, so I will be very interested to discuss how we can make them compatible with each other... it will also be interesting to see if Unity will try to expose FMOD's multiple listener support or roll out their own.

    I will be more than happy to answer any questions you might have about audio... not that I claim to be an expert, but I will do my best to share the knowledge and experience I have gained over the years doing game audio.

    Cheers,
    Taz
     
  2. figment

    figment

    Joined:
    Feb 1, 2013
    Posts:
    13
    Hi, this is all very interesting - I am a musician, music producer and sound designer, as well as a programmer. I got into Unity helping a friend to realise some ideas, and found this thread while looking for ways to do real-time custom sound processing within a "game".

    Fabric and ModularSynth look really good. Will they work with iOS?

    Thanks
    Adam
     
  3. Zaikman

    Zaikman

    Joined:
    Aug 2, 2011
    Posts:
    17
    This topic is super interesting to me. I've been designing a game for the past five years or so that requires expressive, dynamic sound synthesis in real-time.

    I didn't know about Unity when I first started thinking about this game; even then, it's taken a while for Unity to catch up in the audio department. It wasn't until I started experimenting with SuperCollider that I realized how perfectly the two could be combined.
    For the past week I've been working on a presentation on tools development for a local Unity meetup. I got a little sidetracked and wound up implementing an editor extension that can be used to generate SynthDefs in SuperCollider and then use OSC messages to communicate with the SC server.

    [Image: synthGraph.png]

    As you can probably tell from the above image, so far I only have a client-side implementation of the SinOsc UGen. Adding new UGens is pretty trivial; the UGenSine class is about 60 lines of code, with most of that being boilerplate. I intend to implement other generators (LFSaw, PinkNoise, etc.) once I get a few more interface things finished. Right now you can create connections between UGens but you can't break them, so that's probably going to be the first thing on my plate. My todo list also includes adding support for envelopes and a keyboard controller node that responds to MIDI input.
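    To give a rough idea of the shape of such a node, here's an illustrative Python sketch (made-up names throughout - this is not the actual C# UGenSine class, just the "parameters plus input links" core idea):

```python
class UGenNode:
    """Minimal synth-graph node: a UGen name, default parameter values,
    and links to upstream nodes that override those parameters."""
    def __init__(self, ugen, **params):
        self.ugen = ugen            # server-side UGen class, e.g. "SinOsc"
        self.params = dict(params)  # argument name -> default value
        self.inputs = {}            # argument name -> upstream UGenNode

    def connect(self, param, upstream):
        """Patch another node's output into one of this node's arguments."""
        if param not in self.params:
            raise KeyError(f"{self.ugen} has no parameter {param!r}")
        self.inputs[param] = upstream

# a carrier sine with its amplitude driven by a slow LFO
carrier = UGenNode("SinOsc", freq=440, phase=0, mul=1, add=0)
lfo = UGenNode("SinOsc", freq=2, phase=0, mul=1, add=0)
carrier.connect("mul", lfo)
```

    A real editor node would obviously carry more (editor position, output links, UI state), hence the 60 lines.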

    It's not apparent from the screenshot, but all of the exposed parameters can be tweaked, from the Unity editor, while the synth is playing, which is so, so cool to play around with.

    Currently, all of the synthesized audio is playing back through the SuperCollider IDE. However, I'm going to be running some tests using SC buffers and the newer (as of Unity 3.5) write-to-audio-buffer features in Unity to see if audio can be streamed into the app in real time. If I can get that working, it opens up a whole new realm of possibilities.
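    For the curious, the write-to-audio-buffer idea boils down to filling an interleaved float array with samples on each callback (in Unity the hook is OnAudioFilterRead, which is C#; the Python below is just a sketch of the buffer math, with an assumed 48 kHz sample rate):

```python
import math

SAMPLE_RATE = 48000  # assumed; Unity reports the actual output rate

def fill_buffer(data, channels, freq, phase):
    """Fill an interleaved float buffer with a sine wave and return the
    updated phase so the next callback continues seamlessly."""
    step = 2 * math.pi * freq / SAMPLE_RATE
    for frame in range(len(data) // channels):
        sample = math.sin(phase)
        for ch in range(channels):
            data[frame * channels + ch] = sample  # same sample on every channel
        phase += step
    return phase % (2 * math.pi)

buf = [0.0] * 1024                      # 512 stereo frames
phase = fill_buffer(buf, 2, 440.0, 0.0)
```

    Carrying the phase across callbacks is the important part - resetting it each buffer produces audible clicks at the buffer boundaries.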

    Thoughts? Criticism? Suggestions for additional features?
     
  4. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    That's a really nice start, wonderful work!

    Any chance of making this an open source project? I'd love to contribute.

    I have about a book's worth of notes on creating game audio tools and workflows, gained from personal experience and from hosting several in-depth discussions with others in the game audio field. I could do a big blog post (should do that anyway, actually) on our more recent findings, but I also suggest getting in touch through Skype. I'll send you a PM.

    Going for a Max-style visual editor is a great feature, as long as the node-graph is also fully represented in code. This is certainly the way to go to get others interested in messing around with it, since I doubt many would be willing to make the jump to the SuperCollider language. The whole thing is very analogous to visual shader editors, actually.

    I've thought for a while that it would be better to suspend FMOD and just let SC handle the output itself. The main advantages of piping SC audio into Unity are that you avoid audio device access conflicts, you get FMOD's spatialization, and you keep the standard audio component workflow (which you would otherwise need to reimplement). However, you could just let SC do its own ambisonic 3D mixing and cut out FMOD altogether, which would give you more control over spatialization and latency. With a generic ambisonic mix you could also support different listening setups: mono, stereo, quadraphonic, multichannel, headphones, even binaural.


    During our meetings we found that there are two quite distinct directions you can take tools such as these in:

    - Improving and building upon existing tools such as FMOD or WWise.
    - Experimenting with radically new workflows and game ideas.

    Some features will overlap, others will be exclusive. Can you tell a little more about the game you are trying to make? That will doubtlessly influence the design of your tools.

    Sound design for a triple-A shooter could benefit from use of SuperCollider. You would hardly go full-synthesis though! Limited resource budgets for sound will mean you can only do synthesis sparingly, and sample-based approaches usually get much more bang-for-buck when you try to do realistic sound. SuperCollider's superior sound quality, flexible routing and large selection of UGens would still be really useful here.

    On the other hand there's entirely new game designs to be tried that you simply cannot do with traditional game audio tools. Game designs that use actual audio signals as game mechanics for example. Think of simulated creatures that produce and react to sound; games featuring playable instruments, and so on. These games would probably require a different set of design tools, even though the underlying audio engine would be the same.

    It's also worth thinking about which features of SCLang to implement in the Unity editor, and how. SCLang handles everything from chaining generators and filters, to setting up your mix/bus flow, to time and event-based sequencing. You've got the first aspect covered, and I'm curious to hear your thoughts for the others.

    Again, great stuff! And I urge you to open source this. :)
     
    Last edited: Apr 11, 2013
  5. Zaikman

    Zaikman

    Joined:
    Aug 2, 2011
    Posts:
    17
    Hey Tinus,
    Thank you for the feedback! You've given me a lot of great stuff to think on.

    I'm definitely open to the idea of open sourcing the project. I think this is a bigger undertaking than I can handle by myself and I'm also not entirely sure that the way I'm approaching it is the best way. I would love to get some input from people who have more experience working with SuperCollider. First I need to do some cleanup on the innards and the UI of the graph editor :)

    Yes, the node-graph is represented in code. In my implementation, a graph contains a list of UGen nodes, each of which contains some variable amount of input/output links that connect them to other nodes in the graph. To generate a SynthDef, I start at the output node and traverse backwards through its inputs, building up a string as I go. Each UGen node is responsible for declaring its own variables and function call.

    A typical SynthDef winds up looking something like this:
    Code (SuperCollider):
    SynthDef.new(\MySynth, {
        arg
        freq4 = 100, phase4 = 0, mul4 = 1, add4 = 0,
        freq7 = 2, phase7 = 0, mul7 = 1, add7 = 0,
        freq6 = 100, phase6 = 0, mul6 = 600, add6 = 0;

        var node6 = SinOsc.ar(freq6, phase6, mul6, add6);
        var node7 = SinOsc.ar(freq7, phase7, mul7, add7);
        var node4 = SinOsc.ar(node6, phase4, node7, add4);
        Out.ar(0, node4);
    }).send(s);
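    In case it helps anyone, that backward traversal can be sketched in miniature like so (Python, with a made-up dictionary node structure; unlike my real output above it inlines the default values instead of lifting them into args):

```python
def emit_synthdef(name, output_node):
    """Walk backwards from the output node, declaring each UGen's
    variable before it is used (a post-order traversal)."""
    lines, seen = [], set()

    def visit(node):
        if node["id"] in seen:
            return
        seen.add(node["id"])
        args = []
        for param, default in node["params"]:
            upstream = node["inputs"].get(param)
            if upstream is not None:
                visit(upstream)                  # declare the dependency first
                args.append(f"node{upstream['id']}")
            else:
                args.append(str(default))
        lines.append(f"var node{node['id']} = {node['ugen']}.ar({', '.join(args)});")

    visit(output_node)
    body = "\n".join("    " + line for line in lines)
    return (f"SynthDef.new(\\{name}, {{\n{body}\n"
            f"    Out.ar(0, node{output_node['id']});\n}}).send(s);")

# two sines, one modulating the other's frequency
lfo = {"id": 6, "ugen": "SinOsc",
       "params": [("freq", 100), ("phase", 0), ("mul", 600), ("add", 0)],
       "inputs": {}}
carrier = {"id": 4, "ugen": "SinOsc",
           "params": [("freq", 100), ("phase", 0), ("mul", 1), ("add", 0)],
           "inputs": {"freq": lfo}}
print(emit_synthdef("MySynth", carrier))
```

    The `seen` set also means a node feeding two downstream inputs is only declared once.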
    Obviously, I'm not attempting a full implementation of SCLang in Unity. That would be ideal but I'm afraid my familiarity with SuperCollider at this point is not sufficient for me to take that approach.

    This setup requires running an SCLang script that sets up a message handler. At the moment, I'm only responding to four messages: adding a SynthDef, playing a Synth, stopping a Synth and setting a Synth's arguments.
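    For reference, OSC messages are just UDP packets with a 4-byte-padded address, a type-tag string and the arguments, so the Unity side doesn't need much machinery. A minimal Python sketch of the encoding (the /unity/setArg address and its arguments are made up - whatever your SCLang handler registers goes there):

```python
import struct

def osc_message(address, *args):
    """Encode a minimal OSC 1.0 message (supports int32 'i', float32 'f',
    string 's' arguments)."""
    def pad(b):
        # strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)

    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags += "f"; payload += struct.pack(">f", a)
        elif isinstance(a, int):
            tags += "i"; payload += struct.pack(">i", a)
        elif isinstance(a, str):
            tags += "s"; payload += pad(a.encode())
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return pad(address.encode()) + pad(tags.encode()) + payload

msg = osc_message("/unity/setArg", "MySynth", "freq4", 220.0)
# in a real app: socket.socket(AF_INET, SOCK_DGRAM).sendto(msg, (host, port))
```

    In practice an existing OSC library saves you the encoding fiddliness, but it's nice to know there's no magic underneath.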


    I completely agree that this type of approach would not benefit most triple-A titles. In fact, I don't think it would benefit most games, but it's immensely beneficial to my particular project. I've written a post about the conceptual overview of the game I have in mind, which is called Beat Farmer. You can read more about it here, but the basic premise is that you plant seeds and 'grow' patches of music. In this case, a seed would correspond to a SynthDef - its inner workings are fixed at compile time but values on the UGen nodes could be tweaked at runtime.

    I did consider letting SC handle all of the audio, but there are still things I want to do with the raw audio data in Unity. Additionally, since Beat Farmer focuses on tending separate patches of sound, I can avoid running dozens of Synths at once. Garden patches that are not being actively tended by the player could be rendered to audio clips and played back in Unity, reducing the synthesis workload on SuperCollider. I don't anticipate having more than nine synths running at once, and even then I could probably perform a similar optimization within a garden patch, so that only one or two synths are active at a time.

    As for the extra features of SCLang...
    I haven't given any thought to setting up the mix/bus flow, aside from the minimal work I've done on the output node used in the graph.
    Generators are already being chained together but I haven't gotten to play around with any filters yet; I would imagine they work in the same manner?
    I anticipate handling the sequencing aspects separately in my game. I would like to add a 'MIDI Keyboard' node to the graph that would allow people to play back notes of a certain pitch. I don't think that will be too difficult and it's one of the higher-priority items on my task list.
     
  6. DarkArts-Studios

    DarkArts-Studios

    Joined:
    May 2, 2013
    Posts:
    389
    Hi Guys,

    I don't know how I didn't come across this thread before! I've been working on something similar for a while and I'm sorry if my post seems like a bit of a "plug", but I thought some of you might be interested since it's very much on topic. I too had been looking for synth software for Unity and eventually gave in and created my own.

    Perhaps of interest to some of you? It's an in-editor visual synthesis designer (or runtime programmatic composer) of what I call "compositions", which render to standard Unity AudioClips, reusable on any platform.

    http://forum.unity3d.com/threads/221974-Sound-Generator-Create-any-audio-for-your-game-RELEASED
     
    Last edited: Jan 14, 2014
  7. intboom

    intboom

    Joined:
    Jul 2, 2013
    Posts:
    2
    Hi, can anyone tell me what kind of legal ramifications there would be if I wanted to use SuperCollider as the dynamic audio synth for a game as you guys describe? I hear SuperCollider is under the GPL; doesn't that mean that any interface that is made would have to be under the GPL too and make all its source code available?

    Tl;dr: if I use a plugin that uses SuperCollider, would I have to give away the source code to my whole game (or a chunk of it)?
     
  8. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,303
    I'd say that you can call/exec a compiled GPL binary from another application regardless of its license - http://www.gnu.org/licenses/gpl-faq.html#NFUseGPLPlugins

    OTOH I guess even they haven't cleared up all the corner cases: http://www.gnu.org/licenses/gpl-faq.html#IfLibraryIsGPL
    It depends on whether SC counts as a plugin or a library, I guess.
     
  9. intboom

    intboom

    Joined:
    Jul 2, 2013
    Posts:
    2
    I watched this video (https://unity3d.com/learn/resources/real-time-audio-synthesis-supercollider) by Zach Aikman on the subject, and he keeps calling the UGens (the individual bits of SuperCollider that you call and tweak to make things happen on the SuperCollider server) plugins, so I'm guessing that they count as plugins, and that maybe running the server alongside your game isn't a problem if you don't mess with any source code.

    I still think the question of distributing a SuperCollider server executable or whatever with your game seems murky somehow, but I'd like it to not be a problem because SuperCollider is cool.

    Edit: I'm not sure if the dynamically generated code from Zach's video counts as the plugins-calling-other-plugins case or not.
     
  10. joeri_07

    joeri_07

    Joined:
    Dec 18, 2015
    Posts:
    45
    Hi Tinus, Hi everyone!

    I would love to see this project become realised. The possibilities you get when SC is flawlessly integrated in Unity are endless!

    Is there any progress? I know we can communicate from Unity to SC with OSC messages, but I want SC to be contained in Unity or routed back in some way so I can take advantage of OpenAL with SC as a source.

    Thnx and good luck everyone!
     
    Last edited: Feb 21, 2017
  11. joeri_07

    joeri_07

    Joined:
    Dec 18, 2015
    Posts:
    45
    BUMP.

    No more progress on this? :(

    We could use SuperCollider to do synthesis and generate samples, write those to a float buffer and put it in an AudioClip with AudioClip.SetData, I guess. Does anybody have experience with this or something similar? By doing this we don't need the SC server, only the language. Plus we can combine the source data with OpenAL, which I think is awesome!
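    If it helps, the SetData route is basically: get raw samples out of SC (e.g. a rendered 16-bit PCM file), convert them to the [-1.0, 1.0] floats that AudioClip.SetData expects, and hand them over. Here's the conversion step as a Python sketch (the AudioClip call itself is of course C#):

```python
import struct

def pcm16_to_floats(raw):
    """Convert little-endian 16-bit PCM bytes into the [-1.0, 1.0]
    floats that AudioClip.SetData expects."""
    count = len(raw) // 2
    ints = struct.unpack("<" + "h" * count, raw[:count * 2])
    return [s / 32768.0 for s in ints]

# two samples: full-scale positive and full-scale negative
raw = struct.pack("<hh", 32767, -32768)
floats = pcm16_to_floats(raw)
```

    On the Unity side it's then just `clip.SetData(floats, 0)` on a clip created with AudioClip.Create, sized to the sample count and channel layout of the rendered file.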

    Cheers!