
UnityCollider - A flexible, general purpose audio engine

Discussion in 'General Discussion' started by Tinus, Feb 15, 2010.

  1. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    --- Update ---

    There's a feature request up that some of you guys might be interested in:

    Feature: Write access to AudioClip buffers

    With the ability to write audio buffers to Unity's sound sources we could build a library of audio synthesis tools and start exploring some ideas. We could even take buffers generated in external audio applications (Max, SuperCollider) and stream them into audio clips. That way you can generate sound material with a super-sophisticated toolkit, and then run it through FMOD so you can hear it. With PD or SC then refactored and compiled as a plugin, the resulting game would be easily distributable.
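None of this exists in Unity yet, but the kind of code that write access would enable is easy to sketch. The snippet below (Python for brevity; function name and block size are my own, not any real Unity API) fills fixed-size blocks with a sine oscillator and carries phase across blocks, which is the basic contract any streaming buffer callback would have:

```python
import math

def fill_sine(buffer, frequency, sample_rate, phase=0.0):
    """Fill a pre-allocated block with a sine wave, returning the phase
    so consecutive blocks join without clicks."""
    step = 2.0 * math.pi * frequency / sample_rate
    for i in range(len(buffer)):
        buffer[i] = math.sin(phase)
        phase += step
    return phase % (2.0 * math.pi)

# Simulate streaming two consecutive 512-sample blocks at 44.1 kHz.
block = [0.0] * 512
phase = fill_sine(block, 440.0, 44100.0)
phase = fill_sine(block, 440.0, 44100.0, phase)
```

The same loop shape works for any oscillator or for copying in blocks received from an external synth over the network.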

    If you have some votes left, please consider spending one or two* on this request.

    * Or three!


    --- Original Post ---

    Here's an idea I've been playing with for a while now. I'm very curious what you guys think about it.

    The State of Game Audio

    The traditional sample-based approach to game audio is old and dated.

    Over the course of the last two decades, game graphics have evolved from bitmap sprites to near photo-realistic imagery running at a solid 60 frames per second. We have shaders, massively parallel calculations running on dedicated hardware, and much more. With today's and tomorrow's hardware you can literally trace a ray of light as it bounces from surface to surface (and even through them!) towards the camera, creating crystal clear pictures.

    Some of these developments are slowly starting to transfer to game audio, but not nearly enough! Games across the entire spectrum, from AAA to indie, still resort to ancient sample-based approaches for audio. Middleware packages such as Wwise or FMOD offer real-time effects processing, which is a step forward, but they don't let you build your own synthesis model and generate sound on the fly. Furthermore, these packages seem to be aimed mostly at AAA first-person-shooter titles, making it difficult to do something radically different with them.

    This inhibits development of game audio as a more integral part of game design. The result is that audio in games is still mostly an afterthought. In my opinion, game audio is at least ten years behind game graphics.

    Audio Design Process

    The huge gap in technology means that audio development is a parallel process.

    A typical game designer writes a design document with little attention to audio whatsoever. With tools such as Unity, a studio can start prototyping a game within weeks, allowing very agile development methodologies. Sound designers and composers typically get none of these benefits. They are called in late in the process, get their assignment, produce the end result, and that's it. Game designers embraced agile development, while audio designers are still stuck with the waterfall approach.

    Game audio is still mostly linear, applied in a non-linear context. Situations in games change continuously, all the time! Audio and music in games should be able to instantly adapt to this.

    All this means that audio is never central to a game design, while audio and music are actually a great area for innovation! There is a huge amount of creative potential here, completely untapped.

    Laptop Performance Live Coding

    Stepping outside of the world of game development for a moment, let's look at what musicians can do with technology.

    The world of electronic music has plenty of tools that enable rapid, on-the-fly development of audio. Think of Ableton Live and Reason. There are also plenty of packages that enable a programmer's approach to audio, such as Pure Data, Max/MSP, SuperCollider and ChucK. Performers can take these tools on stage, start from a blank slate, and entertain crowds within minutes!

    So I'm left thinking: Why on earth have we not integrated these tools into our game development workflows yet?

    Check some of these videos out for an idea of what this technology can do for you:

    http://www.youtube.com/watch?v=StV-NzmTZ1A&feature=related
    http://www.youtube.com/watch?v=jrWdy9plmm0
    http://www.youtube.com/watch?v=hFAh-pgxGyc
    http://www.youtube.com/watch?v=ZXuZpAYqmco
    http://www.youtube.com/watch?v=rGoA6g2dvzQ
    http://www.youtube.com/watch?v=4tiEfHaRszU
    http://impromptu.moso.com.au/gallery.html
    (Long vids, scroll through them if you're impatient ;) )

    UnityCollider - A flexible, real-time, general purpose audio engine

    I've been looking at my options for integrating that kind of technology directly into Unity.

    My first thought was Pure Data. PD, with its visual programming paradigm, is very easy to get into, and the software is quite mature. However, PD does not support object-oriented programming, which means that its architecture does not map well onto a game engine.

    Next I looked at ChucK. ChucK has great ideas about managing and playing with time, and its language contains very simple, yet very powerful semantics. ChucK's implementation, however, is still very immature, resulting in very unstable performance.

    It appears SuperCollider is probably the most suitable for integration with Unity. Its language is well-defined, its implementation seems very robust, and it is very feature-rich.

    Regardless of the eventual implementation: The idea is that you get full control over the in-game audio.

    Implementation

    SuperCollider is multiplatform, so builds for both Windows and Mac should be possible. It has a client-server architecture, meaning that the server could run in-game and in-editor, and Unity could provide an integrated front-end. Communication with the server is possible through Open Sound Control.

    My plan is to isolate the server code and compile it into a library. Then, I'll have to see if the existing client code can be utilised, or if an entirely new client will need to be provided (I hope not...)

    As a Unity Basic user this would mean you can freely experiment with SuperCollider running externally (you can do this right now, actually; just use an OSC library from C#). For Pro users, this would mean you can seamlessly integrate SuperCollider into your builds!

    Thoughts?

    If you're still with me, I'm very interested in hearing what you think! Already have an idea for a game using this technology? See some pitfalls in the implementation? Think all of this is nonsense? Do tell! :eek:

    Example Ideas

    Modeling wind and airflow: For my current game, I want to model the sound you hear when your head moves through air at very high speeds (say, sticking your head out of the window while driving at 80 mph). The thing is, the resulting sound is entirely dependent on your head's orientation relative to the airflow! This is not something you can do effectively with samples, as changing your orientation by even a couple of degrees causes dramatic changes in the character of the sound.

    Binaural mixing: Using filtering systems based on the human ear (HRTF), you can process sounds such that the signal contains spatial cues that the brain can understand. This results in highly realistic sound localisation which has to be heard to be believed: Virtual Barbershop Demo - Use Headphones
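Full HRTF processing convolves the signal with measured per-ear impulse responses, but the simplest of the spatial cues involved, the interaural time difference, can be sketched with a spherical-head approximation (Woodworth's formula; the head radius below is a common textbook value, not something from this thread):

```python
import math

# Simplified interaural time difference between the two ears for a
# distant source at a given azimuth. A real HRTF also encodes level and
# spectral cues; this captures only the arrival-time cue.
def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

# A source directly to one side (90 degrees) gives the maximum delay,
# roughly 0.65 ms -- tiny, but enough for the brain to localise it.
print(itd_seconds(90.0))
```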

    How about a music game where the music doesn't remain static but actually changes as the player plays? I've recently done a project in which a player could use a Guitar Hero controller to actually play guitar, and produce feedback-fuelled Jimi Hendrix solos.
     
    Last edited: May 23, 2011
  2. Jessy

    Jessy

    Joined:
    Jun 7, 2007
    Posts:
    7,325
    As a sound designer and musician, I'm intrigued. I have no idea what most of the stuff you mentioned is, however. (I've got a degree in Music Synthesis, but while I enjoy modular synthesis for fun, I don't believe that a generic interface such as that of Max, Reaktor, or Tassman allows for fast enough production workflow, and prefer dedicated, one-trick pony modules with beautiful, easy-to-use interfaces. Reason would be a good example of the kind of thing I'd like to see running in Unity.)

    In any case, I don't believe UnityCollider is a good name for this. The name Collider is already taken.
     
  3. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    Thanks, you make a good point. SuperCollider takes a step back and lets you design your own synths and interfaces, but not everyone is willing to do that.

    I think several standard tools/frameworks will need to be built in SC for easy integration with Unity (ironically, you will still want a good sample player in there, for example). A set of standard instruments and effects will also need to be developed to get people up and running quickly.

    The whole point, whether you like to use one-trick-pony modules or not, is that you can generate and change everything in real time, giving you full control over audio while the game is running. Either through traditional synth interfaces or through scripts.

    Also, good call on the name. Wouldn't want people to be confused. Does anybody have any suggestions? :)
     
  4. WillBellJr

    WillBellJr

    Joined:
    Apr 10, 2009
    Posts:
    394
    While I have most of the popular music creation suites (ACID Pro, Ableton Live, FruityLoops, Sonar, etc.), briefly looking over the SuperCollider page had me thinking: wow, that's complicated!


    What I'd like to see for Unity's audio system is perhaps something similar to Microsoft's DirectMusic, where you can set up motifs and other musical cues based on what's currently happening within your game, switching the music on the fly based on context.


    I had an idea a couple of months ago where it would have been handy to be able to trigger events at specified times from a playing audio clip; when the audio cursor hits 00:42, for example, trigger an explosion or open a door; trigger another event at 01:13, etc.

    You would need either an editor that lets you load the audio and place event points on a timeline, or perhaps a text/CSV file listing event times, linked to the audio clip by matching name or some other method.

    -Will
     
  5. Quietus2

    Quietus2

    Joined:
    Mar 28, 2008
    Posts:
    2,058
    Sort of like the audio equivalent of Animation Events? I can see something like that being incredibly useful.
     
  6. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    Great suggestion!

    This is very much possible in SuperCollider, and I'll probably make an easy-to-use framework for that. Since SuperCollider is not dependent on the visual framerate, it can monitor time or audio signals and trigger events in Unity with very high precision. :)
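The cue-list half of Will's idea is simple to sketch. Assuming a CSV of `time,event` rows (the file format and names here are my own invention, not an existing Unity or SC feature), the trick is to fire every cue that falls inside the interval covered since the last check, so nothing is missed between frames:

```python
import csv
import io

# Load "seconds,event_name" rows into a sorted cue list.
def load_cues(csv_text):
    return sorted((float(t), name) for t, name in csv.reader(io.StringIO(csv_text)))

# Return the events whose cue time was crossed since the previous check.
def fire_due_events(cues, previous_time, current_time):
    return [name for t, name in cues if previous_time < t <= current_time]

cues = load_cues("42.0,explosion\n73.0,open_door\n")
print(fire_due_events(cues, 41.9, 42.1))  # the explosion cue at 00:42
```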
     
  7. techmage

    techmage

    Joined:
    Oct 31, 2009
    Posts:
    2,133
    I always wanted to be able to control reason from Unity or any game engine.


    The only issue with this, though, is that when making songs in something like Ableton Live, using all the latest tech, VSTs, and all the proper methods of modern audio production, the CPU usage can get very high. So on current technology, most developers would probably still want to use pre-recorded audio to save CPU cycles.

    But that won't be an issue some day... so no point in not starting now.

    I've actually daydreamed about this a lot: being able to put MIDI notes on the timeline of a character, or map the f-curves of a character's movements to filter knobs on synthesizers. It would open a whole new level of sound design.

    Although, before being able to do that efficiently, you would need to get an animation package integrated into Unity as well.

    Actually, I think if Unity just integrated some more programmatic audio functions, you'd be able to do a lot. Like how you can control the pitch of an audio clip, but with functions that work like a filter knob, a reverb knob, and a distortion knob. With just those three added functions you could do a lot of audio effects.
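A "filter knob" in its simplest form is a one-pole low-pass filter whose cutoff can be changed every block. The sketch below (Python for brevity; this is the kind of per-sample processing buffer write access would allow, not any existing Unity function) shows why carrying the filter state between blocks matters:

```python
import math

# One-pole low-pass filter over a block of samples. `state` is the last
# output of the previous block, so sweeping the cutoff between blocks
# doesn't produce discontinuities.
def one_pole_lowpass(samples, cutoff_hz, sample_rate, state=0.0):
    coeff = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out = []
    for x in samples:
        state = (1.0 - coeff) * x + coeff * state
        out.append(state)
    return out, state

# Turning the knob: process two blocks of a step signal with different cutoffs.
block1, s = one_pole_lowpass([1.0] * 64, 500.0, 44100.0)
block2, s = one_pole_lowpass([1.0] * 64, 5000.0, 44100.0, s)
```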
     
  8. Jessy

    Jessy

    Joined:
    Jun 7, 2007
    Posts:
    7,325
    That's negative thinking. :( Personally, I'm a big fan of physical modeling and convolution reverb; I don't expect to be using either in a game for a little while, but we certainly can use subtractive, FM, sampling, and a wealth of effects. My assumption is that all this stuff could be done on an otherwise less-taxed CPU core, as well. Synthesis lends itself extremely well to parallelization.

    Perhaps you missed the entirely new Animation Editor in 2.6, which is perfectly suited to that? :wink:

    Since Unity uses FMOD now, we can be pretty certain that this stuff is already on the way, I think.
     
  9. dawvee

    dawvee

    Joined:
    Nov 12, 2008
    Posts:
    276
    This sounds really cool Tinus, and I look forward to seeing what comes out of this! I've been really getting into DSP stuff lately, since I'm in the process of building a synthesizer/toy for the iPhone (non-Unity), but I keep getting distracted seeing all the cool things that are possible with higher level libraries and utilities on the desktop.

    It would be really cool to have a nice bridge like this to smooth the way for cross-platform sound stuff on the desktop - most games seem to end up stuck with the sample-based libraries you mention, which often seem far lower level than they need to be for what they're capable of (OpenAL, I'm looking squarely at you).
     
  10. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    I know what you mean. A couple of months ago I was attempting to write an audio engine for Android. I got a basic framework set up through which I could play with basic oscillators and effects, but going back to SuperCollider I realised I still had years of work ahead of me to get anywhere near the functionality of that package. Add to that examples of people successfully porting PD and SC to the iPhone... and I basically gave up at that moment. ;)

    Yeah, audio DSP could potentially suck up way more processing power than you'd want in a typical game. For the moment I wouldn't mind sacrificing some fancy shaders for better audio, but it will not be like that for everyone. An idea I've had is that audio engines could offload some heavy duty calculations to the GPU through CUDA or something. But that would probably take ages to implement, and not everyone has a fancy GPU installed. Still, it's something to think about. :)
     
  11. Killergull_legacy

    Killergull_legacy

    Joined:
    Dec 16, 2008
    Posts:
    9
  12. mcroswell

    mcroswell

    Joined:
    Jan 6, 2010
    Posts:
    79
    Just a quick note to support this thread and the great ideas in it!

    I like the idea of some kind of DirectMusic approach combined with SuperCollider granularity. The idea of leveraging the animation editor to work with music (or possibly motifs) is a great idea too.

    A sophisticated system (like Wwise, FMOD or DirectMusic) would be a great thing.
     
    Last edited: Oct 23, 2010
  13. gl33mer

    gl33mer

    Joined:
    May 24, 2010
    Posts:
    281
    I've been playing around with the idea of implementing something like the "Real-Time Sound Synthesis for Rigid Bodies" technique (Game Programming Gems 8).

    But then, there's no way to synthesize with Unity, right?
     
  14. gorgonaut

    gorgonaut

    Joined:
    May 12, 2011
    Posts:
    6
    I have, perhaps, some very good news for you.

    For my Master's dissertation project, I will be creating a 3D-audio engine for Unity using Max/MSP.

    I already have Unity speaking to Max. You will be able to trigger any sound effect from the scripts. For my purposes, the sounds will be procedurally-generated sounds that attempt to be realistic, but one could easily add functionality to playback "normal" sound files.

    Of course there will be virtualization of the sounds within 3D-space as well.

    I plan on implementing it as a patch to one or more Unity demos (particularly "Bootcamp"), so people can see the changes needed.

    It will be ready by August 19th (when the project is due).
     
  15. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    Hey Gorgonaut, cool stuff! I've got a bunch of questions regarding the kind of architecture, workflow and deployability you're going for, but I've got to run right now.

    Are you the guy from the Utrecht School of the Arts, btw? :)
     
  16. starpaq

    starpaq

    Joined:
    Jan 17, 2011
    Posts:
    118
    I look forward to seeing the result of your project gorgonaut.

    But on the topic of sound, I have always been disappointed, even in so-called AAA games, by repetitive sound FX for things such as gunshots, character voices (attacks/yelling), and footsteps. Anytime I pay attention to the sounds, it's usually just one sound you constantly hear over and over. No one ever seems to think to add variation to those noises or effects.

    If those sounds could be procedurally generated, they could be similar but still unique in pitch, echo/reverb, and frequency, without pre-recording static audio bits.
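Even without full procedural synthesis, the cheapest version of this is randomising the playback rate per shot. A sketch (Python; the ±1 semitone range is an arbitrary choice of mine) of deriving a playback rate from a random pitch offset, so repeated gunshots share character but never sound identical:

```python
import random

# Map a random offset in semitones to a playback-rate multiplier:
# one semitone is a factor of 2^(1/12).
def varied_playback_rate(max_semitones=1.0, rng=random):
    offset = rng.uniform(-max_semitones, max_semitones)
    return 2.0 ** (offset / 12.0)

# Five consecutive "gunshots", each at a slightly different pitch.
rates = [varied_playback_rate() for _ in range(5)]
```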
     
  17. Skulltemp

    Skulltemp

    Joined:
    May 2, 2011
    Posts:
    78
    I believe that any progress towards higher quality audio is a huge step forward.

    Up until Windows Vista, there was a huge focus on audio, with Creative Labs introducing the EAX effects and games like Battlefield 2142 taking advantage of them.

    Sadly, I have all but seen awesome sound and special effects disappear in gaming, replaced by simplistic audio. Perhaps I just stopped paying attention, but Creative Labs also stopped producing. So did Razer, another company that made sound cards, headphones, etc.

    I checked recently, and the sound card I bought 5 years ago is still the "best" you can get today. However, with the introduction of Windows Vista, things went downhill fast, especially with Creative Labs stopping their updates and screwing their customers with horrible products.


    I really think sound is a HUGE thing in games. As important as every other aspect. However, it is ignored and often thrown in just for completeness, not as a feature.
     
  18. PaulTurowski

    PaulTurowski

    Joined:
    May 17, 2011
    Posts:
    3
    Hi All,
    I am extremely interested in finding robust and efficient ways to implement real-time sound synthesis in Unity. I'm relatively new to Unity, but I've done a lot of work in Max/MSP and Jitter over the past few years and have been successful in relaying parameter data to/from Unity via OSC. However, based on my limited experience with SuperCollider, I get the sense that SC would be superior to Max for a few reasons, some of which are mentioned in the first message of this thread by Tinus. In short, it just seems more efficient, and for precise timing Max is problematic since the scheduler is indeterminate and does not run at audio rate. Not to mention that SC is open source. Furthermore, it would be great to have the synthesis integrated fully, i.e. without having to separately start a Max app/collective. (Don't get me wrong though--I think Max is great for other things...) I'm starting to conduct some research on the subject now, but I wanted to see if anybody has already made any significant progress in testing both methods (or any other audio synthesis environments, for that matter).

    I definitely agree with a lot of what was said in the original message, and it's exciting that people like Tinus are starting to move forward in exploring possibilities in the audio domain. I wonder about the statement about Pd though, since Pd was used in the game Spore. As I understand it, timing is sample-accurate in Pd, so that might be another audio environment worth exploring.

    Anyway, for those of you that have been working on this stuff over the past year I would love to hear more about your discoveries. I'm happy to share my progress as I get deeper into this.
     
  19. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    @Starpaq: Agreed, a lot of developers don't go there. While super-advanced stuff like Max or SC would be great for this, you can get very diverse results with Unity's FMOD. Check this out: Interstellar Marines - Speed of Sound prototype. Those guys get great mileage out of Unity's sound system, simulating some form of sound propagation. I don't know how far they are willing to take that stuff, but it definitely made me tone down my slamming of Unity's audio capabilities a bit. :)

    @Skulltemp: There's been loads of progress in audio interface technology, just not in the consumer section of the market. You might be interested to check out what companies like M-Audio, Novation and Native Instruments are putting out for amateur and professional musicians and sound designers. In researching the history of digital audio hardware I found that all these interfaces have the same roots as current gaming audio hardware. Fascinating stuff, I think, and it's useful to think about why one branch kept evolving and the other all but died off.

    @PaulTurowski I think there are a bunch more people like us who want to explore this; we just need to get together and build a platform (easier said than done, I've figured out) :). As I said before, I too think SuperCollider has the brighter future, since it is lean, mean, open source, and any kind of graphical Max-like environment can be built on top of it if need be. But we need Max/PD as well; in the short term it's probably faster to get up and running with.

    In the long run: I found that Max and PD's flowchart programming doesn't map well to game engines. They don't support instancing, let alone something like object-oriented programming or inheritance. This means that for anything more than simple prototypes you're likely to find it very hard to manage complexity. It's next to impossible to cleanly express your game logic in Max/PD; their syntax simply doesn't allow it. SuperCollider's language does, and if that's not enough, there's nothing stopping you from implementing your own language on top of its synthesis server. :)

    I wrote a bunch of words about my research, in which I investigate these issues more in depth. You can download it in PDF form. As a master's thesis it's not great, but I guess it is a useful document.


    Finally, a bit of news: I got a chance to speak with Nicholas Francis two weeks ago, at Festival of Games. I asked him whether there's a chance that FMOD will get an 'off' switch, so SuperCollider could get full access to the audio hardware through ASIO drivers, but he said it won't happen. Maybe we can try again in the future, once we've gotten more proof of its merit. He did say write access to audio buffers should be happening soon, though. Actually, he thought it was already possible, so I hope he remembered to check. This would at least allow us to write some simple synthesizers and filters to play with. We could do something like Minim, which is a Java-based audio library with oscillators, filters, mixers, audio units and what have you. It wouldn't scale to AAA or anything, but we could sure do some quirky indie games and get filthy rich. ;)
     
    Last edited: May 19, 2011
  20. bryanleister

    bryanleister

    Joined:
    Apr 28, 2009
    Posts:
    130
    I've also been working with Max/MSP Jitter and was able to hook up an iOS OSC interface to a 3D object in Jitter and then use Rewire to run the controls in Reason.

    I think a good approach would be to focus on using Rewire in Unity. I agree, Max is not an elegant way to program and would be hard to connect to Unity. The advantage of just using Reason is the quality of the synths is already there, no need to recreate anything from scratch in Unity. I tried a primitive sound experiment in Unity 2.5 and the sound was definitely not all there.

    The other fun thing to do, and maybe more appropriate for games, would be a top-notch FMOD editor/system in Unity. The mod trackers I've played around with seem quite odd.

    Would love to see where this thread goes!
     
  21. J_P_

    J_P_

    Joined:
    Jan 9, 2010
    Posts:
    1,027
    I'm going to keep up with this :)

    Hope something pans out!
     
  22. Muzzn

    Muzzn

    Joined:
    Jan 23, 2011
    Posts:
    406
    As all the sound people are going to be reading this: in case you haven't noticed, check Kongregate. There's a competition there ($15,000 in prizes!) for the best music-based game. Anyone fancy a collab? PM me.
     
  23. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    Good to see so many interested people. :)

    @Muzz5: Ooh! I hadn't noticed that before! I might give that a go. Here's the link for anyone that wants a quick click: Kongregate - Experience the music.
     
  24. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    I just remembered that I have a feature request up that some of you guys might be interested in as well:

    Feature: Write access to AudioClip buffers

    Like I mentioned before, with the ability to write audio buffers to Unity's sound sources we could build a library of audio synthesis tools and start exploring some ideas. If you have some votes left, please consider spending one or two on this request. :)


    Edit: Oh, here's a crazy thought I just had. With write access, we could even take buffers generated in SC or PD and stream them into audio clips. That way you can generate sound material with a super-sophisticated toolkit, and then run it through FMOD so you can hear it. With PD or SC then refactored and compiled as a plugin, the resulting game would be easily distributable.

    An extra thing that you'd want to configure in FMOD would then be its buffer size, so you have some control over latency... Hmmm, must ponder this more.
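The latency side of that trade-off is just arithmetic: a larger buffer means fewer underruns but a longer wait before a generated sample is heard. A quick sketch (Python; the buffer sizes are typical values, not FMOD defaults I have verified):

```python
# Output latency contributed by one buffer of audio at a given sample rate.
def buffer_latency_ms(buffer_samples, sample_rate):
    return 1000.0 * buffer_samples / sample_rate

# Doubling the buffer doubles the latency floor.
for size in (256, 512, 1024, 2048):
    print(f"{size} samples -> {buffer_latency_ms(size, 44100.0):.1f} ms")
```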

    Doing things like this would definitely be on the todo list. Patchwork is a Max-like patcher with oscillators, filters, clocks, and envelopes, all done purely in Flash.



    It wouldn't make a great game, but it would work well as the start of a tool straight within the editor.
     
    Last edited: May 22, 2011
  25. PaulTurowski

    PaulTurowski

    Joined:
    May 17, 2011
    Posts:
    3
    Does Max suffer from the same ASIO driver problem on Windows computers as SC? I don't currently have access to a Windows version of Max so I can't test this.

    For those of you bent on pursuing Max integration: While dynamic instancing is not really possible in Max, it can be simulated in a sense using the poly~ object, which allows you to create a fixed number of instances and turn off the ones you aren't using via messages to thispoly~. Furthermore, the number of poly~ instances can be changed using scripting. Here's an example of using scripting to create a specific number of poly~ instances at the start of a game, based on the number of players.



    You can't really do this dynamically though since Max reinitializes all preexisting poly~ instances every time the number of players changes. And using scripting to create new MSP objects (e.g. cycle~) outside of poly~ causes a brief stop/start of the DSP.
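The fixed-size pool that poly~ enforces is the same pattern samplers call voice stealing: allocate N voices up front and recycle the oldest when the pool runs dry. Sketched outside Max (Python; class and method names are illustrative, not anything from Max or Unity):

```python
# Fixed pool of N voices; trigger() hands out a free voice, or steals
# the oldest active one when the pool is exhausted.
class VoicePool:
    def __init__(self, size):
        self.free = list(range(size))
        self.active = []  # oldest first

    def trigger(self):
        if self.free:
            voice = self.free.pop()
        else:
            voice = self.active.pop(0)  # steal the oldest voice
        self.active.append(voice)
        return voice

    def release(self, voice):
        self.active.remove(voice)
        self.free.append(voice)

pool = VoicePool(2)
a, b = pool.trigger(), pool.trigger()
c = pool.trigger()  # pool exhausted: the oldest voice is reused
```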

    Thanks for sharing this. :) Contextualizes the whole endeavor pretty well.

    This is a bummer. Let's hope write access gains more support. (I voted...) A Minim-like toolkit would be a big step in the right direction.

    I also wonder about latency/timing issues with this method, but nevertheless an interesting idea!

    @Muzz5 - That competition looks pretty interesting--Thanks!
     
    Last edited: May 23, 2011
  26. jasonkaler

    jasonkaler

    Joined:
    Feb 14, 2011
    Posts:
    242
    Most of this went way over my head, but I just thought I'd mention something here.
    I noticed that Warcraft 3 has a very dynamic score. When you're simply building stuff, the music is quite calm, but when you're fighting, the music changes, and it's totally seamless.
    Having a way to blend from one track into another based on what the character's up to would be great.
     
  27. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    @PaulTurowski: I'm glad you're with me on this. I find it somewhat hard to gather support for this stuff. :)

    I don't know about Max and ASIO on Windows, actually. I don't have a licence anymore, so I'll have to ask some friends about their experience.

    Yeah, it's worth dwelling on the instancing issue. Even something as trivial as bullet sounds becomes hard to do. Ideally you'd write a patch that produces a single bullet sound, and then instance it at runtime for each bullet fired. But because of Max's limited way of doing this it actually becomes far from trivial to pull off; you have to hack it.

    Also, consider the way Max chains things together. It is really hard to dynamically rearrange relationships between objects once you've created your patch. The reason almost any game engine these days uses a component-based object modeling paradigm is because you are constantly hotplugging different behaviors together. A classic example of this is Unreal Tournament's redeemer rocket. It is a normal rocket with a camera and player control scripts attached to it, resulting in a completely different kind of behavior. Doing this kind of wizardry in Max requires that you manually rewire everything, while in SC you can very much jumble sections of the node tree around with great ease. With a well designed protocol in place it could even be fully automatic.

    To state it briefly: SC's node tree structure is conceptually very similar to Unity's scene graph and component system. It's probably possible to achieve a one-to-one mapping of game object hierarchies and their SC counterparts.

    @JasonKaler: You can probably do something like that with FMOD right now. Set up an object with multiple sound sources on it, each with a different layer of the music. Then have a script fade the individual sources in and out to mix the layers together. This mixer can be driven by stuff happening in the game, whatever you want basically. :)

    An issue to be aware of is timing. It is hard (but not impossible) to get your musical layers to play exactly at the same time. The easiest way to do this is to start them all at once from the same script, from the same frame, and then leave them running. If you want to start and stop layers throughout the game you need a timing mechanism that ensures you can trigger playback in time with the beat.
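The timing mechanism described above boils down to quantising a layer's start to the next beat or bar boundary, measured from the moment the original layers began. A sketch (Python; names and the 4/4 assumption are mine):

```python
import math

# Given the clock time at which the music originally started and the
# tempo, return the next bar boundary at or after `now`. Starting a new
# layer exactly then keeps it in time with the layers already playing.
def next_bar_start(now, start_time, bpm, beats_per_bar=4):
    bar_length = beats_per_bar * 60.0 / bpm  # seconds per bar
    elapsed_bars = (now - start_time) / bar_length
    return start_time + math.ceil(elapsed_bars) * bar_length

# At 120 BPM a 4/4 bar lasts 2 s; asking at t=3.1 s schedules for t=4.0 s.
print(next_bar_start(3.1, 0.0, 120.0))
```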
     
    Last edited: May 23, 2011
  28. mikewest

    mikewest

    Joined:
    May 31, 2011
    Posts:
    2
    I have to agree with JasonKaler...a lot of software and terms I'm not familiar with, but I'm certainly interested in where this thread goes. I downloaded SuperCollider and started checking out the functionality. I'm working towards having some sort of 3D audio in Unity using HRTFs.

    @Tinus:
    You mention that Unity Basic users could integrate with SuperCollider using an "OSC library from C#." Forgive my ignorance, but how does that work? I'm new to Unity/SuperCollider/OSC, so I'm finding myself at least slightly over my head in figuring out how to get the three to play well together.
     
  29. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    Heya Mike, I missed your response earlier. Hope you're still around to read this. Indeed, this stuff can be quite complicated. :)

    Open Sound Control is a protocol that works through sockets, just like regular networking. It allows you to easily send signals used to control audio from one application to another; either on the same machine, over the network, or both. OSC is becoming quite popular, and pretty much all the software mentioned in this thread supports it in some way. SuperCollider has a dedicated server program for sound production that listens for OSC messages, so as long as your program can work with OSC it can talk to a SuperCollider server.

    There are a few implementations of the OSC protocol for .NET, such as Bespoke.Osc. These give you a fully functional API for constructing and sending OSC messages with relative ease. Combine that with a decent understanding of SuperCollider's Server Command Reference, and you've got the basic building blocks.
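To show how little magic is involved, here is a rough sketch (plain C#, not tied to any OSC library) of hand-encoding one SuperCollider server command, /s_new, following the OSC wire format: a NUL-padded address, a NUL-padded type-tag string, then big-endian arguments. The synthdef name "sine" and the node/group IDs are just example values:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Hand-rolls a minimal OSC message, enough to talk to scsynth.
static class MiniOsc {
    // OSC strings are NUL-terminated and padded to a 4-byte boundary.
    static byte[] PadString(string s) {
        var bytes = Encoding.ASCII.GetBytes(s);
        int padded = (bytes.Length / 4 + 1) * 4;   // always at least one NUL
        Array.Resize(ref bytes, padded);
        return bytes;
    }

    // OSC integers are 32-bit big-endian.
    static byte[] BigEndianInt(int v) {
        var b = BitConverter.GetBytes(v);
        if (BitConverter.IsLittleEndian) Array.Reverse(b);
        return b;
    }

    // Builds e.g. /s_new "sine" 1000 0 1  (start a synth in group 1).
    public static byte[] SNew(string synthDef, int nodeId) {
        var packet = new List<byte>();
        packet.AddRange(PadString("/s_new"));
        packet.AddRange(PadString(",siii"));   // one string, three int32 args
        packet.AddRange(PadString(synthDef));
        packet.AddRange(BigEndianInt(nodeId));
        packet.AddRange(BigEndianInt(0));      // addAction: add to head
        packet.AddRange(BigEndianInt(1));      // target group id
        return packet.ToArray();
    }
}

// Usage: send to a local scsynth listening on its default UDP port 57110.
// var udp = new System.Net.Sockets.UdpClient();
// var msg = MiniOsc.SNew("sine", 1000);
// udp.Send(msg, msg.Length, "127.0.0.1", 57110);
```

In practice a library like Bespoke.Osc does exactly this for you; the sketch is only meant to demystify what travels over the socket.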

    The only difference between Unity Basic and Unity Pro is that Pro users could eventually have a version of SuperCollider compiled and integrated into the game as a plugin. That way, any game built using it could be distributed and sold just like any other Unity game. Basic users can't use plugins, so they would have to settle for running a SuperCollider instance outside of Unity. As far as the OSC communication goes, there's no difference whatsoever. :)


    So, Unity 3.5 will bring low-level low-latency audio buffer access! Looks exactly like what's needed! I'm pretty excited, and I'll be mentally preparing myself for when it hits. :)
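For reference, that buffer access surfaced as the OnAudioFilterRead callback, which hands your script the raw interleaved sample buffer on the audio thread. A minimal procedural-synthesis sketch, assuming Unity 3.5+; the class name and frequency are arbitrary examples:

```csharp
using UnityEngine;

// Fills Unity's audio buffer procedurally via OnAudioFilterRead.
// Attach to a GameObject that has an AudioSource.
public class SineGenerator : MonoBehaviour {
    public float frequency = 440f;   // Hz, arbitrary example value
    private double phase;
    private double sampleRate;

    void Start() {
        sampleRate = AudioSettings.outputSampleRate;
    }

    // Called on the audio thread with an interleaved sample buffer.
    void OnAudioFilterRead(float[] data, int channels) {
        double step = 2.0 * System.Math.PI * frequency / sampleRate;
        for (int i = 0; i < data.Length; i += channels) {
            float sample = (float)System.Math.Sin(phase) * 0.25f;  // keep headroom
            for (int c = 0; c < channels; c++)
                data[i + c] = sample;            // same signal on every channel
            phase += step;
            if (phase > 2.0 * System.Math.PI)
                phase -= 2.0 * System.Math.PI;   // avoid precision drift
        }
    }
}
```

Note the callback runs off the main thread, so anything shared with Update() needs to be handled with care.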
     
  30. Muzzn

    Muzzn

    Joined:
    Jan 23, 2011
    Posts:
    406
    I'm gonna have to get Unity Pro just to write audio filters...
    Thanks for the explanation, it helped me too. Incidentally, did you know that two Unity games are leading the Kong contest? Who said its capabilities were limited?
     
  31. Meltdown

    Meltdown

    Joined:
    Oct 13, 2010
    Posts:
    5,816
    Agree with Jessy, UnityCollider doesn't make sense as a name for your audio engine :p
     
  32. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    @Muzz5: Yeah, Quickfingers is doing a great job! He's really taking it to places I had already given up on. I do think he'll be excited about more in-depth audio technology, though.

    Whether Unity's audio is limited depends on what you want to do with it, really. For a lot of things Unity's audio is fine. It does exactly what it says on the tin, and with some inventive hacking you can even make it do some things it wasn't intended for.

    But for some other concepts you might really want this:



    instead of this:



    Much like when you're working on this:



    you'd feel kind of constrained having to use this:



    I'm not saying you couldn't, I'm saying it might not be the best tool for the job. ;)

    @Meltdown: Certainly! The current working title is Singularity. I'd change the thread title, but I don't think that's possible. :)
     
    Last edited: Jun 22, 2011
  33. mikewest

    mikewest

    Joined:
    May 31, 2011
    Posts:
    2
    @Tinus:
    Thanks for the explanation! I took a break for a while, but I'll be jumping back into playing around with SuperCollider, so I'll definitely be referring back to this thread when I do.
     
  34. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    @mikewest: No problem!

    On a side note: I just got myself the SuperCollider Book. Even though I've been mucking about with SC's internals for a while, I've never really done a proper sound-wrangling project with it. That's at least in part because there weren't any comprehensive learning materials available; something they've definitely fixed with this book. :)
     
  35. gorgonaut

    gorgonaut

    Joined:
    May 12, 2011
    Posts:
    6
  36. gorgonaut

    gorgonaut

    Joined:
    May 12, 2011
    Posts:
    6
    Hey there, FMOD and Wwise both have an interactive music element that allows you to mix, switch, and transition to different clips based on current game situations. With FMOD, for instance, you can rearrange these elements and connect them visually. Left4Dead is a more recent game that does this to great effect (albeit with Miles sound system, I believe)
     
  37. gorgonaut

    gorgonaut

    Joined:
    May 12, 2011
    Posts:
    6
    exciting!!!
     
  38. paulidis

    paulidis

    Joined:
    Aug 11, 2010
    Posts:
    21
    Hi everyone. I've hired programmers from Russia to build me a custom audio library.

    The reason I need a custom library is that I need the audio to respond in real time, i.e. in under 8 ms (beyond that the brain starts to detect the delay).

    Unity audio in deployed iPhone apps is slow. I estimate around 50 ms of latency when I change the pitch of an audio source in real time.

    When the engine is complete I plan to put it on the Unity Asset Store for at least a thousand dollars, as I've spent close to ten grand and over a year developing it.

    It's built on Core Audio, which limits deployment to Apple hardware. That's a shame, because I want to deploy to Android as well. By the way, no current Android device has audio hardware capable of the latency I need.

    The interface of this custom engine matches Unity's: play, stop, pause, volume, cue position, pitch, effects, etc.

    In other words, Unity's built-in FMOD engine is excellent in my opinion in terms of features for audio samples. It's just too damn slow for music games! :(
     
    Last edited: Oct 4, 2011
  39. Muzzn

    Muzzn

    Joined:
    Jan 23, 2011
    Posts:
    406
    Too slow? Really? I got FMOD to compose original music in realtime.
     
  40. PaulTurowski

    PaulTurowski

    Joined:
    May 17, 2011
    Posts:
    3
  41. Pan_Athen_SoundFellas

    Pan_Athen_SoundFellas

    Joined:
    Mar 7, 2009
    Posts:
    39
    Hi all! I didn't notice the date at first, but this thread is very interesting. Thank you, Tinus, for the thought you've given the subject; it's interesting enough to me that I'd like to write some thoughts of my own.

    As far as I can see, there are some very interesting advancements in audio for interactive applications and physical modelling. Check out, for example, Wwise's wind synthesizer, convolution reverb, and McDSP plugin features. We'll also see the next FMOD version boasting algorithms from iZotope and a real studio feel, with a mixer and audio grouping like we audio people are used to from DAW consoles and the real consoles of the iconic recording studios.

    Also check out those projects:

    http://www.youtube.com/watch?v=cK4wx4pom_0&feature=player_embedded
    http://www.youtube.com/watch?v=hZC6ORUbLog&feature=player_embedded
    http://www.youtube.com/watch?v=nHH8N_lNZzI&feature=player_embedded
    http://www.youtube.com/watch?v=BjZ7CV6giII&feature=player_embedded#!
    http://www.youtube.com/watch?v=l95tZCl7YlQ&feature=player_embedded

    All of those are very interesting and will probably make it into game audio engines sooner or later; they are the future.

    I can also understand that real-time control of musical commands and control signals is one of the best things that could happen if you're a musician, experimenting indie developer, or artist, and features like MIDI or OSC input and output in game development middleware would give tremendous power and fuel creativity in new, exciting ways! But it's a long way off, because of the massive amount of technology that needs to be improved or developed for this dream to happen. See, as an example, the differences between FMOD and Wwise. FMOD is more basic and thus has broader compatibility across platforms. Wwise, on the other hand, was the first to add wind synthesizers and convolution reverbs, and that makes it harder to port everywhere quickly with complete features on every platform.

    In my opinion, the state of things right now calls for a more immediate approach. As a sound designer for most of my life (I'm talking almost 20 years: yes, I was lucky that I was allowed to take razors from my dad and cut up tape recordings of my sister's voice, which my mother showed me how to make; I'd cut them and join them to make strange noises :p Then I started assembly and made my own sequencer to use in a game I was making. Cool stuff, I miss those days.) I think that what we need at the moment is tools.

    I could propose some ideas like:

    - An implementation of the EBU R128 metering recommendations within Unity, so we can check our levels when making games - http://tech.ebu.ch/loudness.

    - The ability to stream audio (with the lowest latency possible) to various devices from the sequencer, in order to conduct quality assurance and audio mastering.

    - The ability to stream audio from a sequencer's channel directly into the environment editor of the game development middleware, to check how your sound is heard within the game world.

    - The ability for the game engine to route through ASIO and similar professional audio drivers, making it easy for developers to build applications that musicians can use in a music/audio creative environment.

    All I want to say is that the approach you propose is a wonderful idea and will surely happen in the very near future. But until then, why not focus on making our lives easier with tools that let us get started and get ourselves comfortable while this big gamedev/audiodev gap closes by the hour.

    My 2 cents.

    Great thread, great answers and questions, really enjoyed reading it!

    By the way - some advertising: you can find my products on the Asset Store if you search for "Panozk" :p

    Cheers!

    panozk.com
     
  42. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    Hey Panozk, thanks for the input! Your suggestions are all perfect additions to the growing list of ideal features for future game-audio technology. Checking out the links you provided now. :)
     
  43. Riddler

    Riddler

    Joined:
    May 17, 2012
    Posts:
    1
    Hi Tinus, I realise I'm a little late to the party but the ideas presented here look really interesting.

    I've just read your thesis (all 28 pages) and it certainly gives one something to think about. I'm very curious as to what impact the proposed approach towards dynamic sound generation would have both on sound designers and composers.

    I'm particularly interested in how composers would respond to the new toolsets and steep learning curves put in front of them. There are always composers who embrace new technologies but this would create a radically different way of working. The way I imagine it changing is that the thought process behind a piece of music would be what was recorded/produced, rather than the music itself. This would obviously require a very different workflow than current methods of game music production.

    Another point is that the work I've heard produced by Max, Supercollider, Pure Data et al does not represent the output of cross-genre contemporary music production. As a general trend I'd say visually programmed music creates an experimental/ambient feel, which effectively means that their application is limited. If the platform is presented as being truly adaptable and versatile it would be good to see some examples of dynamically generated music in multiple established genres that exhibit convincing results.

    From the sound designer's perspective: enjoy the jobs while they last! I'm only (half) joking, but when the game audio engine is dynamically generating sounds with convincing realism and well-timed accuracy, the role of the sound designer will become very different.

    Tim.
     
  44. keithsoulasa

    keithsoulasa

    Joined:
    Feb 15, 2012
    Posts:
    2,126
    I will look at this project as it develops , looks very very cool !
     
  45. Helgosam

    Helgosam

    Joined:
    Feb 2, 2012
    Posts:
    12
    Hi Tinus

    I'm trying to get an Android Unity app to receive OSC messages. It worked fine in the desktop game (using stuff from here: http://forum.unity3d.com/threads/16882-MIDI-or-OSC-for-Unity-Indie-users), but if I just compile and run it on Android it doesn't work (maybe not surprising).

    Have you had OSC messaging working on Android? Any tips?

    *EDIT
    So I got the OSC messages through to my android device - pretty sweet really:


    This image shows how I got the Wiimote connected to darwiinRemoteOSC (via Bluetooth) to send OSC messages straight to the Android tablet (at 10.0.2.12:8080 on my home-grown MBP wifi network), and the Android Unity app actually responds to the Wiimote controls... pretty nice!
     
    Last edited: Jun 5, 2012
  46. AndyHultberg

    AndyHultberg

    Joined:
    Jan 6, 2013
    Posts:
    1
    I just noticed this thread. What is current status on the Supercollider/Unity ideas?

    As a sound designer I would LOVE to be able to use a flexible tool like SC with Unity! It can do anything...

    /andy
     
  47. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    Hey guys!

    I haven't actually worked on this at all. My time is split between freelancing and creating Volo Airsport, and several other little things. I really want to do this, but I'm not sure when I'll get to it.

    Anyone feel like collaborating on this project? Any programmers or sound designers experienced with both Unity and SuperCollider, or willing to get into them? This would be a great project for after-hours and weekends, but I don't want to develop in a vacuum. :)

    @Riddler: You're hardly late! It's been more than half a year since your post, and still hardly anything is happening on this front globally. Thanks for reading my thesis! Wow, I had little hope anyone would read the thing. :)

    It'll be a big learning experience for anyone to start working with a new paradigm like this, but then, I think it's unavoidable. Games are dynamic, procedural, non-linear in nature. It makes sense to me then, that sound and music (if they're to be more than superficial parts of a game) should be too. We'll need new ideas, tools, aesthetics, values, tricks and patterns, and that will take time.

    You're right in that tools in the category of SuperCollider and Max steer the creative process and thus the artifact created in specific directions. Like any instrument, there's only so much you can do with them, and they will lend themselves to certain kinds of sound more than others. The choice for SuperCollider for me is not because it is the ultimate blank slate in terms of possibility space, but because it has the functionality of FMOD and WWise as a very small subset of its functionality, and can do so much more besides that. As an instrument, it's a thousand times more flexible and capable than those two toolkits. It's practical, bottom up design. I don't know what is best in the ultimate sense, but in this case I know which is better.

    Indeed, the role of sound designers will change. It can be frightening, but looked at another way: isn't it so very exciting? :)
     
  48. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    Right, I'm back on this.

    Again, if you want to help out in any way please let me know!
     
  49. Tazman

    Tazman

    Joined:
    Sep 27, 2011
    Posts:
    94
    Hi Tinus,

    This is a very cool and interesting project... I don't want to hijack the thread, but since it's relevant I thought I'd bring to your attention Fabric's latest extension, "ModularSynth", a framework that aims to provide functionality similar to what you get in Reaktor/PD/Max etc., but inside Unity... The whole thing is written in C# (with possibly a native C++ version coming out as well), so it can be used on all platforms that support the OnAudioFilterRead function.

    You can find more info about it on my website: http://www.tazman-audio.co.uk/?page_id=412

    If you or anyone else is attending AES audio for games next month I will be doing a hands on demo of Fabric so drop by and have a look.

    Cheers,
    Taz
     
  50. Tinus

    Tinus

    Joined:
    Apr 6, 2009
    Posts:
    437
    Hi Tazman,

    That looks fantastic. I was wondering whether anyone was working on C#-based synthesis frameworks, and it appears someone is. I'll certainly want to try it out. Is it a separately sold product, or part of your Fabric package?

    I stumbled across Fabric while implementing a multi-audio-listener plugin for Unity. There are quite a few features in Fabric that could overlap well with it. Perhaps we could discuss making the two compatible somehow?

    I can't make it to AES (different countries and all that), but I'd really appreciate a chance to get your thoughts on Unity and game audio in general.

    Thanks for posting this!
     
    Last edited: Jan 22, 2013