
[RELEASED] G-Audio : 2D Audio Framework

Discussion in 'Assets and Asset Store' started by gregzo, Jan 20, 2014.

  1. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795

G-Audio is an optimized audio framework for Unity which provides exceptional low-level control over the processing, filtering and playback of sounds. Its easy-to-use inspectors and straightforward API deliver an integrated set of powerful audio tools, from the procedural sequencing of samples to the mixing of realtime filtered tracks. With G-Audio's power and flexibility, you can create game audio and music that evolves dynamically according to code logic and user interactions.

    If having a dynamic and interactive audio experience is important to your game, or if you want to develop a music or sound app with Unity, then we encourage you to consider using G-Audio.

    "Break Me!" Procedural Music Demo
    G-Audio Website
    Asset Store Link

    Features

    Edit Mode Prototyping: Prototype procedural sequences or effects without entering play mode thanks to custom inspectors and windows.

Custom Mixer: Monitor levels and apply filters to multiple tracks through a friendly in-editor mixer. G-Audio enables fire-and-forget audio – just tell the engine to play a sample; it never cuts off playback of another.

    Realtime filters for Unity Free: Low pass, high pass, shelf, peak, notch, distortion, gain, delay and LFO. Filters can be applied to tracks in real-time, or to cached samples on the main thread. G-Audio is also compatible with Audial Manipulators: Audial filters show up in G-Audio’s mixer just like core filters. Audial filters include reverb, bit crusher, compressor and more…

Supports all Unity platforms: Including support for Web Player (no DLLs).

Robust Pulse System: Provides sub-frame-rate speeds, ideal for granular synthesis (tested at up to 150'000 BPM).

    Next to zero garbage collection: Creating and destroying AudioClips can lead to heavy garbage collection spikes and framerate drops. G-Audio does not rely on AudioClips and pre-allocates memory to practically eliminate audio related GC.

Smooth stops without adjusting volume: Stop any sound or track in less than a frame without clipping; G-Audio handles buffer-level fades for you.

    FFT Module for ready-to-use spectral analysis and graphing.

    Extensible API: Both higher level and lower level components and classes can be the basis of your own custom behaviors. Most high level components have low level, non-mono behavior classes that code oriented devs can interact with directly.

Flawless Support: We care about your projects! Whether you need help realizing an idea, or would like to know if G-Audio is the right tool for you, we look forward to hearing from you.
     
    Last edited: May 25, 2014
  2. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
  3. Play_Edu

    Play_Edu

    Joined:
    Jun 10, 2012
    Posts:
    722
  4. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    v1.1 is in the works, please let me know if you have specific feature requests.

    Planned for 1.1:

-Gain control per sample is simplified (no need to go through GATPanInfo anymore)
-Microphone classes
-Step-by-step HowTo guide

    Cheers,

    Gregzo
     
  5. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Sounds very interesting! Any idea how it performs on mobile?

    Is there a built in way to make some kind of step sequencer, that would allow you play specific samples at specific beats at specified pitches?

    Thanks!
    Jacob
     
  6. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Some additional questions:
    What kind of polyphony can we expect on desktop and mobile? (specific examples?)

    is it possible to use multiple Audio Sources, so that one audio source could have reverb, while the other does not, and still keep sync between them?

    Thanks!
    Jacob
     
  7. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    And sorry, one additional question:
    When it says "*G-Audio is not the right choice for playback of long tracks, but may coexist happily with Unity's audio components."
    how long are we talking about, and what are the symptoms of a problem in this respect?

    Thanks!
     
  8. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi uniphonic,

    At work right now, will take the time to reply in detail when I finish the day.

    Thank you for your patience,

    Gregzo
     
  9. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi uniphonic,

    I've taken the liberty to number your questions.

    Answers!

1) G-Audio comes with MasterPulseModule and SubPulseModule components, which enable simple step sequencer structures as well as more intricate designs. You can map an envelope to a pulse so that samples are automatically cut to the right length. If you organise your samples as consecutive midi codes, you can refer to them by index. G-Audio is quite low-level for now - if you need to build a virtual sample bank where only one out of three midi codes is a real sample (minor thirds), it's perfectly possible but requires a bit of work. There are so many use cases - G-Audio provides the building blocks, and I can help as much as my skills allow.
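    The pulse-driven step sequencer idea can be sketched roughly as below. Everything except the general shape is an assumption: the `onPulse` subscription and the playback call are illustrative placeholders, not G-Audio's actual API - check the package's docs for the real MasterPulseModule and sample bank methods.

    ```csharp
    using UnityEngine;

    // Hypothetical 16-step sequencer driven by a pulse callback.
    // onPulse and the playback call are assumed names, not G-Audio's real API.
    public class StepSequencer : MonoBehaviour
    {
        public MasterPulseModule pulse;                 // fires once per step
        public int[] midiCodesPerStep = new int[16];    // -1 = rest
        int _step;

        void OnEnable()  { pulse.onPulse += HandlePulse; }   // assumed delegate
        void OnDisable() { pulse.onPulse -= HandlePulse; }

        void HandlePulse(double pulseDspTime)
        {
            int midiCode = midiCodesPerStep[_step];
            if (midiCode >= 0)
            {
                // Assumed call: fetch the bank entry for this midi code and
                // trigger playback on track 0 at the pulse's dsp time.
                // sampleBank.GetSample(midiCode).Play(trackNb: 0);
            }
            _step = (_step + 1) % midiCodesPerStep.Length;
        }
    }
    ```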

2) Polyphony: 20 mixed 44.1 kHz mono samples on 4 tracks, with 1 effect per track and reverb on the master mix, results in about a 20% increase in CPU load on iPad 3.

Multiple AudioSources: in v1.0, only one player can exist per scene (1 player = 1 AudioSource). This is simply a design choice: the code is ready to accept multiple players, but the only reason to do so would be your exact use case - adding some of Unity's built-in filters to the master mix of one player, and other filters to another. I'm waiting to see if there is any demand for that functionality before exposing it.

    Also, G-Audio can work in parallel with Unity AudioSources, in sync.

3) G-Audio was built for realtime sample processing: generative music and on-screen instruments are great examples of what it can do well. If you just need to play a track, Unity's API should do fine. How long? Samples are typically 0.1-10 seconds long. The limit is really performance- and RAM-bound. Note that G-Audio 1.1 will include a mono wav streamer.

The main issue is memory: to process samples quickly, they need to be loaded, and audio can quickly get big. G-Audio does provide a memory management solution to that effect, which greatly minimises garbage collection.

In the end, I would have to know about your precise use case to let you know if G-Audio is an appropriate solution. Feel free to write more!

    Cheers,

    Gregzo
     
  10. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Thanks for your response! It sounds excellent.

    Did you use some version of this framework making uPhase or End?

    My use case is a little more towards the generative music side, with user interactions. I'd really like to have varying amounts of reverb on different things, as I want big reverb soaked pads, but I don't want a bunch of reverb on my kick drum for instance, or bass. Do you think it would add much overhead to the processing if you were to add multiple players?

    Would it be possible to sync waves played back from standard Unity Audio Sources, with G-Audio? Maybe I could run my drums and bass that way...

    Thanks very much for making this framework! I purchased it, and am looking forward to using it.

    Jacob

    P.S. By the way, I think that the "2D" in your framework name might be a little misleading, as at first I guessed it was a framework geared for 2D games, and almost overlooked it. I might suggest substituting another word in the title, such as "Advanced" or something like that. Just my thoughts from a marketing perspective, that you can take as a grain of salt if you like.
     
  11. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
uPhase was very much a learning project and only used Unity's default audio components. It made me realise the limits and garbage collection issues of these components where sample processing is concerned. End is so simple, it didn't need anything very sophisticated and took advantage of 3D audio.
My two multi-iPad installations, Music in the Room and Music in the Room (Sharjah), were a first step towards G-Audio, but mixed everything into a 1-second AudioClip on the main thread. Mixing directly on the audio thread is a bit more challenging because of threading, but makes much more sense.
    The overhead of having separate players should be quite minimal.
Another solution I'll hopefully provide in v1.1 is to add reverb to G-Audio's filters, so that you can apply it as you would any other G-Audio filter: per track, per sample, pre-processed or real-time. The downside is that the reverbs Unity proposes are FMOD reverbs, probably computed in C or C++ - I can't compete with that performance-wise. Plus, they are pretty good.
Yes, it would. G-Audio timing is based on AudioSettings.dspTime, so it's fully synched if you take proper care (dspTime is updated on the audio thread and should be cached before use). It's also the reason why G-Audio does not support 48 kHz output: a glitch in dspTime updates on Unity's side of things…
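    The caching advice above can be sketched in a few lines: read AudioSettings.dspTime once per frame and schedule a plain Unity AudioSource against that cached value, rather than re-reading it mid-frame.

    ```csharp
    using UnityEngine;

    // Minimal sketch: keep a standard AudioSource in sync with dspTime by
    // caching AudioSettings.dspTime once per frame (it's updated on the audio
    // thread) and scheduling against the cached value.
    public class SyncedClip : MonoBehaviour
    {
        public AudioSource source;
        double _dspNow;

        void Update()
        {
            _dspNow = AudioSettings.dspTime;   // cache once, use consistently this frame

            if (Input.GetKeyDown(KeyCode.Space))
            {
                // Schedule slightly ahead so the start is sample-accurate
                // rather than "as soon as possible".
                source.PlayScheduled(_dspNow + 0.1);
            }
        }
    }
    ```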
    You're welcome! Do post a review when you find the time!
Hadn't thought of that - I just wanted to make it clear that G-Audio doesn't spatialise audio. I see where you're coming from; maybe "Advanced 2D Audio"?

I hope you find the product useful, and please don't hesitate if you have questions / suggestions! I'll have some more time to give to the project from mid-February onwards.

    Cheers,

    Gregzo
     
  12. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi to all G-Audiophiles,

    v1.1 is progressing smoothly and adds:

-Per-sample gain control when playing back through tracks
-2 new track filters: PRCReverb and NReverb, adapted from keijiro's classes
-Mono wav streamer
-Microphone classes (in the works, should be in 1.1 if all goes well).

    @uniphonic: should do the trick for your use case!

    Do let me know if you have feature requests.

    Cheers,

    Gregzo
     
  13. neuromorph

    neuromorph

    Joined:
    Jul 5, 2013
    Posts:
    6
I'm unable to set AudioSettings.outputSampleRate on OSX. The problem has nothing directly to do with G-Audio, other than it's keeping me from going forward with testing it on my primary machine. On that machine, AudioSettings.outputSampleRate seems stuck at 48000. G-Audio's GATAudioInit is also unable to change it after putting a 44100 sample rate into the inspector, for both the OSX editor and player. Is there a trick to OSX and setting the outputSampleRate that you know of?
     
  14. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi neuromorph,

    Sample rate should be 44.1 khz by default on OSX. Did you install custom drivers? Do you use an external audio card?

Also, GATAudioInit needs to have its own scene with nothing else but that script in the hierarchy: the audio engine needs to reset for the output sample rate to be applied.

    Let me know if that doesn't help!

    Cheers,

    Gregzo
     
  15. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    @neuromorph

Ah, and I forgot the most straightforward possible cause:

Applications -> Utilities -> Audio MIDI Setup -> change the sample rate to 44.1 kHz there. If it's set to 48 kHz, Unity won't be able to override it - just tested.

Will include that in v1.1's readme.

    Cheers,

    Gregzo
     
  16. metaphysician

    metaphysician

    Joined:
    May 29, 2012
    Posts:
    190
    Hi Gregzo! Just discovered this - i have to say it looks pretty appetizing, but i don't have time or money yet to investigate this at the moment, plus a step by step guide would help as i'm still a very inexperienced coder. but this is definitely going in the Wish List for sure! at the moment, i do have some questions, though:

    first of all, this Multiple Audio Sources thing. i'm not entirely sure you meant Unity's Source/Listener combo or that you were creating your own audio system similar to FMOD Studio, with your own Listeners, etc. for sure +1 on being able to loop any voice through Unity's AudioSources, even 2D would be very desirable.

the filtering, sample reversal, and looping could be very useful on standard AudioSources within a 3D game world as well. at the very least it allows acousmatic approaches to sound design in a game or installation. also considering you had reversed audio in uPhase, i was curious if that's easily possible in Unity's own audio system as it is. and is the filtering done with OnAudioFilterRead(), with some math in between for the different filter types?

    this is very exciting for me. it's almost a valid low level synthesis/sampler engine. things that come to mind at the high end of this pulse setup as applied to 3D would be positional granular synthesis. a multitrack sequencer would probably be quite easy on the CPU/memory in comparison. the middleware use possibilities on this are also very enticing indeed...i will definitely be keeping my eye on developments here for sure!

    scott
     
  17. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    @metaphysician

    Hi Scott,

    straight to the points:

-One AudioListener component, one AudioSource component
    -GATAudioPlayer component mixes everything to the one AudioSource via OnAudioFilterRead
    -Processing can happen on the audio thread as audio is mixed, or on the main thread in advance. It's up to the programmer to find a suitable balance between performance, memory and latency.

In the demo scenes of the web player - the ones where you can adjust BPM - samples are chopped up and faded on the main thread before being routed to a track on which effects are applied. Reversing also happens on the main thread, pre-computed and cached.
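    The mixing mechanism described above - one AudioSource whose OnAudioFilterRead callback sums cached sample data into the output buffer - can be illustrated in miniature. G-Audio's GATAudioPlayer is of course far more complete; this sketch only shows the mechanism, with no thread-safe triggering or track routing.

    ```csharp
    using UnityEngine;

    // Toy version of mixing via OnAudioFilterRead: sum a cached mono sample
    // into the interleaved output buffer on the audio thread.
    [RequireComponent(typeof(AudioSource))]
    public class TinyMixer : MonoBehaviour
    {
        float[] _sample;     // mono sample data, loaded elsewhere
        int _playhead = -1;  // -1 = not playing; set to 0 to trigger (demo-grade, not thread-safe)

        public void Play() { _playhead = 0; }

        void OnAudioFilterRead(float[] data, int channels)
        {
            if (_playhead < 0 || _sample == null) return;
            for (int i = 0; i < data.Length; i += channels)
            {
                if (_playhead >= _sample.Length) { _playhead = -1; return; }
                float s = _sample[_playhead++];
                for (int c = 0; c < channels; c++)
                    data[i + c] += s;   // additive mix over whatever Unity already wrote
            }
        }
    }
    ```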

    -Granular synthesis may be possible with G-Audio: I've tested BPMs of up to 10'000 without a glitch.

    -Caching of pre-processed samples can be handled automatically. No garbage collection occurs, everything happens in one big pre-allocated array managed by GATDataAllocator.

-3D mixers are possible; it was my first approach: loop a 1s AudioClip and continuously feed data to it as it plays. This approach works but has severe drawbacks: increased latency and frame rate dependency. As AudioClip.SetData cannot be called from outside the main thread, this is unfortunately unavoidable. My 2 multi-iPad installations used this approach - in a controlled environment it's fine, but to propose it in a framework would be asking for trouble: a frame rate drop leads to immediate audio stutter.
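    For comparison only, the looped-AudioClip approach described above could look roughly like this: a short clip loops while the main thread keeps writing fresh data just ahead of the playhead. The constants are arbitrary and AudioClip.Create's signature varies slightly between Unity versions.

    ```csharp
    using UnityEngine;

    // Sketch of the looped-clip feed: workable, but latency- and frame-rate
    // dependent, which is why G-Audio mixes on the audio thread instead.
    [RequireComponent(typeof(AudioSource))]
    public class ClipFeeder : MonoBehaviour
    {
        const int SampleRate = 44100;
        const int ChunkSize = 4096;
        const int RingLength = ChunkSize * 10;  // multiple of ChunkSize so SetData never overruns

        AudioClip _clip;
        AudioSource _source;
        readonly float[] _chunk = new float[ChunkSize];
        int _writeHead;

        void Start()
        {
            _clip = AudioClip.Create("ring", RingLength, 1, SampleRate, false);
            _source = GetComponent<AudioSource>();
            _source.clip = _clip;
            _source.loop = true;
            _source.Play();
        }

        void Update()
        {
            // Keep the write head comfortably ahead of the playhead. If a
            // frame hiccups, playback reaches stale data and stutters - the
            // drawback noted above.
            while (AheadBy(_source.timeSamples, _writeHead) < RingLength / 4)
            {
                FillChunk(_chunk);                  // your synthesis / mixing here
                _clip.SetData(_chunk, _writeHead);
                _writeHead = (_writeHead + ChunkSize) % RingLength;
            }
        }

        static int AheadBy(int playhead, int writeHead)
        {
            int d = writeHead - playhead;
            return d < 0 ? d + RingLength : d;
        }

        static void FillChunk(float[] buffer) { /* fill with audio data */ }
    }
    ```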

    If you find the time, but are short on funds, PM me with your e-mail, I can send you a trial version.

    One important disclaimer: as G-Audio does everything in .NET, it cannot compete with lower level mixers / engines. It is multi-platform and Unity Free compatible, but at a certain performance cost. I'm happily mixing 30+ samples on iPad, but if you need every last drop of juice for fancy graphics, a native solution would be more appropriate.



    Cheers,

    Gregzo
     
  18. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi to all,

    v1.01 has just been submitted to the Asset Store for review, hopefully out by the end of the coming week.
I've had to submit earlier than I wished to fix a silly bug affecting import of stereo samples.

    Here are the fixes and additions:

-New: ReverbModule adds 2 reverbs to tracks (NReverb or PRCReverb)
-New: Added a gain parameter to all of GATPlayer, GATProcessedSample and GATRealTimeSample's Play methods.
-New: Stream mono wav files from disk with the GATWavStreamer and GATWavFile classes

    -Fix: Processed samples obtained with GATResamplingSampleBank.GetProcessedSample were updating their audio data needlessly
    -Fix: Stereo samples are now properly loaded by GATSampleBank classes

Do get in touch if you need the package early: PM me with your invoice number and e-mail and I'll send the package.

    Cheers,

    Gregzo
     
  19. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Short message to share some exciting news:

I've begun investigating the Accelerate framework (iOS and OSX only), and it looks like it's going to be quite easy to integrate into G-Audio.
Very substantial performance gains can be expected on these platforms. I'll keep you posted as to my progress.

    Cheers,

    Gregzo
     
  20. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Ooooh! That does sound exciting! Thanks for keeping us in the loop. :)
     
  21. metaphysician

    metaphysician

    Joined:
    May 29, 2012
    Posts:
    190
    thanks very much Gregzo! right now the main issue is time, really. but this definitely looks like something i'd be really interested in...

    one thing i might definitely suggest as it seems this is headed to real-time performance directions, is the inclusion of OSC and MIDI to control things. as your framework seems to support the building of sample banks, a convenient way to trigger and modify them in performance would seem to be useful.

    another i might suggest is allowing table based waveforms for LFOs rather than simple sine, square, or triangle waves. i used this approach using a 128 X 128 drawn table waveform in my Max/MSP live processing setup i used for years and found it surprisingly effective at times.

    lately i've been looking at wavetable synthesis through the iOS Waldorf Nave app and sampling apps like Samplr. i think both of these are quite good but seem to be totally stuck in the box - they offer very little in terms of external control. so when i regain spare time to think experimentally, i plan on creating a much more externally controllable version of the basic setup i found in these apps. i was thinking of using libPD and either Dan Wilcox's port of PDParty(originally Droid Party), or the MobMuPlat platform for the graphics, but as i'm more familiar these days with Unity, i was also considering toolkits like Kalimba as well. but perhaps doing it in there with G-Audio instead of (or perhaps in addition to?) libPD might be interesting to try.

    scott
     
  22. metaphysician

    metaphysician

    Joined:
    May 29, 2012
    Posts:
    190

hmm...i got curious about this framework, and also whether there was an equivalent framework solution for Android (i don't have it, but i figure more platform options are good...). it led me up some interesting alleys regarding optimizing NEON for ARM with FFT. since you're not knee-deep into implementing Accelerate just yet, you might want to check these links out (i have no idea how they do for audio, but they're definitely math- and DSP-optimized, just not sure how they stack up against Accelerate):

    http://stackoverflow.com/questions/...d-android-equivalent-to-ios-accelerate-veclib

    http://stackoverflow.com/questions/...stest-fft-library-for-ios-android-arm-devices

    http://anthonix.com/ffts/

    best,
    scott
     
  23. neuromorph

    neuromorph

    Joined:
    Jul 5, 2013
    Posts:
    6
Thanks Gregzo, your suggestion, along with uninstalling the OSX Soundflower plugin and resetting AirPlay, did the trick for me. Heads up, OSX users: as Gregzo points out in an email, it's currently not possible in OSX Unity to change AudioSettings.outputSampleRate! You can't change it in the OSX editor or the player. He is submitting a bug report to Unity, among the many other issues he's requesting the mothership to fix.
     
    Last edited: Feb 10, 2014
  24. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
Thanks! Which part were you saying is for my use case? The reverb? It sounds good to have both of those reverbs, but I'd still really like the ability to use the FMOD reverb too, with different amounts on different parts... it seems like a separate AudioSource is the only way to go about that. What's holding you back from adding the feature of being able to target multiple Audio Sources?

    Thanks again!

    Jacob
     
  25. neuromorph

    neuromorph

    Joined:
    Jul 5, 2013
    Posts:
    6
Wow, this might be very good news for me indeed, as my initial target platform is iOS. Optimization for Android can come later. Also, the fact that your initial tests indicate granular synthesis may be possible at 10,000 BPM is very encouraging. Does the length-time of your audio grain fit within the beat, and what about amplitude blending of the grains?
     
  26. Jimww

    Jimww

    Joined:
    May 14, 2013
    Posts:
    63
    2 Questions:

    If I have a 5 minute audio file, are there any existing ways that I can link up events to get fired at the appropriate timecodes within Unity3D? I want to be able to use an xml based list of timecodes to add annotations of various sorts. The only way I've thought that might work is using HTML5's audio embedded in Unity via webkit or something and using javascript to get the events via popcorn.js or something like it.

And maybe more related to this plugin as it currently exists: if I have an 8-second audio file with a bunch of different notes being played, is there a way that I can easily auto-identify the regions and reference each of these notes? Basically I want to auto-analyze a file, have it see that I have 10 distinct sounds, be provided an easy way to name these sounds, and then be able to call up the sounds by name or index. My alternative is to have to cut out each of the 10 notes and save each separately as its own AIFF, which is a pain if I have to do it for a bunch of different files. Or is there any combination of tools that makes something like this easier - separating out sounds within a file and being able to programmatically reference each of those sounds?

    Thanks
    Jim
     
    key2thacity87 likes this.
  27. metaphysician

    metaphysician

    Joined:
    May 29, 2012
    Posts:
    190
    hey Jim - i've been messing around with this a bit lately. if you're talking about a sound playing at a specific time in the future you could use PlayScheduled() and AudioSettings.dspTime in combination. but i think you're also talking about the equivalent of a TextureAtlas for sounds or what FMOD calls a bank, which would also be useful. i know even without using G-Audio you can start an AudioClip at a specific sample point, and audio editors like TwistedWave can export XML cue lists based on time or sample. they can also cut long files up by marker points and export each slice individually.

the tricky thing then, if playing from a single long file, is constantly running updates to see whether that AudioClip has reached the end of that region and resetting loops if so. still, i personally think a marker-based system is lacking in Unity's audio. i am curious whether G-Audio has the ability to put sounds into a bank.

    lastly - i think we should meet sometime. i'm in Berkeley, i use MasterAudio a lot in game projects, and have also done a basic test of using WebAudio in Unity Webplayer. i'll PM you with some details.

    scott
     
    key2thacity87 likes this.
  28. neuromorph

    neuromorph

    Joined:
    Jul 5, 2013
    Posts:
    6
    @Jimww

    @metaphysician

    I'm seeking to accomplish some of the functionality that Jimww is specifying. My current thoughts involve a 1/60th of a second main game loop that timestamps a master array of all required object events-properties. In the timestamps are audio events and any timing offsets for nearest AudioSettings.dspTime sample increment. Also an element in the array could store and pass a next dspTime delta for PlayScheduled().
     
  29. neuromorph

    neuromorph

    Joined:
    Jul 5, 2013
    Posts:
    6
    @ gregzo

    If it isn't too much to ask - I'd like zero garbage collection. I did see some foreach loops in your code. Getting rid of those might help shave off a bit. : )
     
  30. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi to all,

After this sudden burst of activity on the thread, I hope you'll forgive one long post organised by theme - hopefully everyone will find answers to their specific questions.

    A few things first:

    -I'm thrilled there's interest in the project, and really hope it can grow.

-G-Audio aims to be a "lowish" level API, with which you can build more specialised tools that suit your needs. I hope you understand that it would be counterproductive to focus on high-level stuff before the lower-level parts are fully optimised.

    So, here we go!

1) MIDI and OSC:
Why not, but later (see above). Also, you can already organise sample banks by midi code. Triggering playback of the right sample with the right parameters is easily achievable with a MIDI plugin.

2) Table-based waveforms:
Also a 'good idea, but later' suggestion: it's quite a specialised tool, and already possible by extending G-Audio. It's not too much work to write a filter which extends AGATFilter and feed another sample (from a wavetable) to it for LFO purposes. I'll gladly help if needed.
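    A hypothetical sketch of that extension: a filter deriving from AGATFilter that reads its LFO shape from a wavetable instead of a fixed sine, here as a simple tremolo. AGATFilter's real abstract members may differ - treat the override's name and signature below as assumptions.

    ```csharp
    // Hypothetical wavetable-LFO tremolo built on AGATFilter.
    // ProcessChunk is an assumed processing hook, not a confirmed G-Audio signature.
    public class WavetableTremolo : AGATFilter
    {
        public float[] table;        // one cycle of the drawn waveform, e.g. 128 samples
        public float rateHz = 2f;
        double _phase;               // 0..1 position in the table

        public override void ProcessChunk(float[] data, int fromIndex, int length, bool emptyData)
        {
            double phaseInc = rateHz / 44100.0;   // assumes 44.1 kHz output
            for (int i = fromIndex; i < fromIndex + length; i++)
            {
                // Use the wavetable value as an amplitude envelope.
                float gain = table[(int)(_phase * table.Length)];
                data[i] *= gain;
                _phase += phaseInc;
                if (_phase >= 1.0) _phase -= 1.0;
            }
        }
    }
    ```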

    3) NEON and optimising on more platforms:
    Many thanks for the links, @metaphysician. If G-Audio becomes popular, it'll be a top priority to optimise all platforms.

4) Sample rate issues: it seems OSX does not allow the output sample rate to be changed in any way other than through the Audio MIDI Setup utility (please correct me if I'm wrong). When targeting OSX, one should provide a dialog box prompting the user to change their sample rate, or bundle samples at every possible sample rate with the app (not so practical).

5) Multiple players (@uniphonic): I'll unlock the functionality in the next update. It's not much work; I'll do it as soon as I find the time (next week - still in Australia for work, and the weekend will be spent flying). I can send you a link to the package so that you won't have to wait for it to be published in the store. You're right, it should have been there from the start!

6) Granular synthesis (@neuromorph): I'm not super familiar with the subject, but G-Audio is already able to play more than 100 samples a second (grains of less than 10 ms) at arbitrary, sample-accurate points, overlapping or not, with envelopes per grain, which leads me to think it should be doable. I'll try to give it a shot - the subject is fascinating.

7) XML time codes / events: That's pretty high-level and specific. You could very well build your own system which triggers audio playback via G-Audio, but unless there are many similar requests, this feature is not planned.

8) Automatically separating sounds: are they separate samples in one file, or a continuous sound you're chopping up? If the sounds are distinct (i.e. separated by silence), it would of course be very easy to do. If they overlap, very hard and potentially messy (should the first detected sound be filtered out of the second one? What are the sounds? Do they have clean attacks? Is it an instrument playing different notes, or words being spoken? We quickly jump into massively complicated algorithms…).
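    The "easy" silence-separated case could be sketched like this: scan the buffer, open a region at the first loud sample, and close it once a long enough run of near-silence follows. The threshold and minimum-gap values are arbitrary and would need tuning per material.

    ```csharp
    using System.Collections.Generic;

    // Split a mono buffer into regions separated by silence.
    public struct SampleRegion
    {
        public int Start;
        public int Length;
        public SampleRegion(int start, int length) { Start = start; Length = length; }
    }

    public static class SampleSplitter
    {
        public static List<SampleRegion> Split(
            float[] samples, float silenceThreshold = 0.01f, int minSilentSamples = 2048)
        {
            var regions = new List<SampleRegion>();
            int regionStart = -1;   // -1 while inside silence
            int silentRun = 0;

            for (int i = 0; i < samples.Length; i++)
            {
                bool silent = samples[i] > -silenceThreshold && samples[i] < silenceThreshold;
                if (!silent)
                {
                    if (regionStart < 0) regionStart = i;   // region begins at first loud sample
                    silentRun = 0;
                }
                else if (regionStart >= 0 && ++silentRun >= minSilentSamples)
                {
                    // Enough silence: close the region at the last loud sample.
                    regions.Add(new SampleRegion(regionStart, i - silentRun + 1 - regionStart));
                    regionStart = -1;
                }
            }
            if (regionStart >= 0)
                regions.Add(new SampleRegion(regionStart, samples.Length - regionStart));
            return regions;
        }
    }
    ```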

9) foreach loops: all 7 of them replaced with for loops. They were not performance-critical, but @neuromorph is right - why not go all the way? I just left one in to tease.

    Back to work!

    Cheers to all,

    Gregzo
     
    Last edited: Feb 26, 2014
  31. Nifflas

    Nifflas

    Joined:
    Jun 13, 2013
    Posts:
    118
About #1: I personally don't like it when something on the Asset Store is all over the place and covers too much functionality outside what it set out to be, and I have a strong preference for completely separate plugins for MIDI IO, OSC, and audio, as I can handle the interaction between them (which is probably different for every project anyway). Small, powerful and focused plugins, like what G-Audio is right now, are the ones I like the most.

    In a project I'm working on, I hooked up Mike Heaver's minimal and elegant OSC Receiver to a G-Audio sample player. It was easy and worked super nicely.
     
    Last edited: Feb 10, 2014
  32. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi Jacob,

All done, I'll resubmit the asset right now. Multiple players are now allowed; all Play methods are overloaded to enable routing playback to a specified player, which you'll have to keep track of. Track numbers are per player (2 players each with tracks 0-3 is valid).

    Get in touch by pm with your e-mail and invoice number if you want the package early!

    Cheers,

    Gregzo
     
  33. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    G-Audio v1.03 is now live on the asset store.

    Reverb, gain per sample, mono wav streaming, and a few important fixes.

    Cheers,

    Gregzo
     
  34. neuromorph

    neuromorph

    Joined:
    Jul 5, 2013
    Posts:
    6
    Great! Really happy about the update including: "multiple GATPlayer instances are now allowed" - enabling separate Unity filters for multiple players.

    I've been exploring PulseModules. A lot of functionality there, almost a framework in their own right. [Edit: Removed more detailed questions/suggestions on PulseModules - will PM on those]

    Did you use the PulseModules in your granular synthesis test? I'm going forward with a granular synthesis system and am wondering if a simple co-routine loop, that calls core G-Audio scripts, might be a good place to start?
     
    Last edited: Feb 16, 2014
  35. thebarryman

    thebarryman

    Joined:
    Nov 22, 2012
    Posts:
    130
    Hi gregzo,

This seems very interesting! Many of your features appeal to me, but you don't address the primary reason I am looking for an alternative audio solution. I'm wondering if you might be able to shed light on my issue, and whether your plugin would help with it.

    I'm currently building a high-level framework for music-based applications within Unity, and have been wrestling with Unity's audio playback to get steady, rhythmic playback of short files going. Even though I am running my FixedUpdate loop at a high frequency, the rhythms are stuttering and inconsistent, particularly when multiple samples are played at the same time, so I'm assuming that the cause of my issue is with the AudioSource.Play latency, not the frequency of my timing loop.

One important component of the design of my framework is that it needs to be able to trigger an arbitrary number (within reason, of course) of short audio clips at the precise moment they are requested. Most of the workarounds to my issue that I've seen involve cueing up sounds to be played several frames before they are actually needed, which unfortunately just doesn't work with the design of my framework.

    Thanks for any insight you may be able to shed on this. Your plugin sounds very promising!
     
  36. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Good! As I wrote earlier, it was just a matter of adding a few overloads to allow specifying which GATPlayer should play.

Waiting for your PM! The PulseModules were quickly written to allow experimenting in the editor without coding. They use delegates and not events, because delegates are lighter and the only reason to use events here would be to protect the delegate list from straightforward assignments.

    I would suggest extending G-Audio's pulse classes: they can support irregular pulses too, and do not limit pulse frequency. A pulse is simply a signal to trigger precise playback of a sample.

    Cheers,

    Gregzo
     
  37. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Play will simply start playback of a sample as soon as possible. Calling Play from Update, FixedUpdate or LateUpdate won't allow precise timing, as all these callbacks are much too irregular for musical timing. PlayScheduled is the only way to play samples precisely, but playing immediately is simply impossible: latency cannot be less than the duration of your audio buffer, typically about 20ms at default settings and 44.1kHz (buffer size of 1024 samples per channel).
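
    To make the buffer-latency floor concrete, here's the arithmetic as a small Python sketch (illustrative only - Unity itself is C#, and actual buffer settings vary per platform):

    ```python
    # You cannot hear a newly triggered sample sooner than the duration
    # of one audio buffer: the buffer currently being filled must be
    # handed to the hardware before the new sample can be heard.

    def buffer_latency_ms(buffer_size_samples, sample_rate_hz):
        """Duration of one audio buffer, in milliseconds."""
        return 1000.0 * buffer_size_samples / sample_rate_hz

    # Common Unity default: 1024 samples per channel at 44.1 kHz
    print(round(buffer_latency_ms(1024, 44100), 1))  # 23.2 (ms)
    ```

    A smaller buffer lowers this floor (512 samples gives ~11.6ms) at the cost of more frequent audio callbacks.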

    Give G-Audio a try - I haven't had problems triggering playback of simultaneous samples. But as written above, zero latency just isn't achievable.

    If you plan on making an asset to sell on the Asset Store, I must warn you from experience that basing an asset on another can be quite frustrating: your asset may break every time the one it's based on updates. I've had to update NGUI Infinite Pickers many times in the past months due to NGUI's fast evolution - time spent on compatibility issues, not on new features.

    Cheers,

    Gregzo
     
  38. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi to all,

    Important notice: I've just found out that my implementation of reverb filters is not safe when fully panned to the left or right: it will feed back nastily, beware!

    The fix is very simple - let me know if you need it before the next update. In the meantime, be careful with G-Audio's reverbs: always clamp stereo pan between .001 and .999.
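
    In code, the workaround is just a clamp (Python sketch for illustration only; apply the same logic to whatever value you feed the track's pan):

    ```python
    def safe_reverb_pan(pan):
        """Clamp stereo pan away from the hard left/right extremes,
        so the reverb filter can't enter its feedback condition."""
        return min(max(pan, 0.001), 0.999)

    print(safe_reverb_pan(1.0))  # 0.999
    print(safe_reverb_pan(0.0))  # 0.001
    print(safe_reverb_pan(0.5))  # 0.5
    ```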

    Apologies for any inconvenience caused - I haven't received any feedback on the subject yet, so I guess no one got hurt.

    Cheers,

    Gregzo
     
  39. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi to all G-Audiophiles,

    Great news today: I've finished implementing Accelerate on G-Audio's mixer, and the results are excellent.

    Here's a little benchmark on iPad 3, mixing up to 120 44.1kHz samples continuously. Mixing is routed through 1 track, no effects.

    Scene base CPU load (Unity engine, some GUI, no audio): 25%
    Ticks averaged over 100 audio frames.


    1) Mixing 10 sources

    - .NET: 6'900 ticks
    - Accelerate: 2'300 ticks


    2) Mixing 25 sources

    - .NET: 13'500 ticks
    - Accelerate: 5'000 ticks


    3) Mixing 50 sources

    - .NET: 24'500 ticks
    - Accelerate: 9'200 ticks


    4) Mixing 120 sources

    - .NET: 51'000 ticks (+25% CPU load)
    - Accelerate: 20'000 ticks (+8% CPU load)

    8% extra CPU load on mobile to mix 120 samples, whoop!
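
    As a quick sanity check, those tick counts work out to roughly a 2.5x to 3x speedup (Python sketch of the arithmetic, numbers copied from the table above):

    ```python
    # (.NET ticks, Accelerate ticks) keyed by source count
    results = {10: (6900, 2300), 25: (13500, 5000),
               50: (24500, 9200), 120: (51000, 20000)}

    for sources, (dotnet, accel) in results.items():
        print(f"{sources} sources: {dotnet / accel:.2f}x faster")
    # 10 sources: 3.00x, 120 sources: 2.55x
    ```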

    Coming to an asset store near you very soon.

    Cheers,

    Gregzo
     
    Last edited: Feb 20, 2014
  40. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    That's awesome! Will you be putting other things into Accelerate, such as filters, ADSR envelopes, and effects?

    Also, I'm curious: how do you measure ticks? Thanks!
     
  41. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi uniphonic,

    I've already implemented Accelerated resampling; ADSR is easy too, but won't make much difference on top of Accelerated mixing. Biquad filters I'm looking into - they're apparently supported, but there's no documentation. Other effects (reverb, distortion, etc.) are tougher to vectorise than mixing. Simple vector operations are typically "multiply this array by this value and add it to that other array" - perfect for mixing. For more complex work, I think matrix maths could help; I'll look into it.
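
    That core multiply-and-add operation can be sketched like this (NumPy standing in for Accelerate's vector primitives - an illustration, not G-Audio code):

    ```python
    import numpy as np

    def mix_into(mix_buffer, sample_data, gain):
        """The core mixing op: mix_buffer += sample_data * gain,
        done as whole-array vector operations rather than a
        per-sample loop (which is what Accelerate speeds up)."""
        np.add(mix_buffer, sample_data * gain, out=mix_buffer)

    # Mix two one-buffer-long sources into an initially silent buffer
    mix = np.zeros(4, dtype=np.float32)
    mix_into(mix, np.array([0.5, -0.5, 1.0, 0.0], dtype=np.float32), 0.5)
    mix_into(mix, np.array([0.2,  0.2, 0.2, 0.2], dtype=np.float32), 1.0)
    print(mix)  # approximately [0.45, -0.05, 0.7, 0.2]
    ```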

    Ticks: System.Diagnostics.Stopwatch, of course!

    Cheers,

    Gregzo
     
  42. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Sounds great!

    On a separate note, do you think you could implement a new audio effect: a compressor/limiter? :)

    Also, is it possible to chain multiple effects?

    Thanks!
     
    Last edited: Feb 24, 2014
  43. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi uniphonic,

    The trouble with compressors is that they need lookahead - i.e., more latency. They're not hard to implement, just not a top priority right now: more often than not, game audio needs lower latency, not higher... It would be quite easy to implement a no-extra-latency limiter, but it would sound artificial when pushed too hard. The gist of it is to monitor the peak amplitude of every audio buffer, and to smooth gain from one buffer to the next when it exceeds the threshold.
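
    That gist, sketched in Python (illustrative only - all names are hypothetical, not G-Audio API; note how ramping within the buffer still lets early samples overshoot, which is exactly why it sounds artificial when pushed hard):

    ```python
    def limit_buffer(buffer, threshold, prev_gain):
        """Zero-lookahead limiter sketch: compute the gain needed to
        keep this buffer's peak under the threshold, then ramp
        linearly from the previous buffer's gain to avoid clicks."""
        peak = max(abs(s) for s in buffer)
        target_gain = threshold / peak if peak > threshold else 1.0
        n = len(buffer)
        out = []
        for i, s in enumerate(buffer):
            # Linear ramp from prev_gain to target_gain over the buffer.
            # Early samples get less attenuation, so they may still
            # exceed the threshold - the no-lookahead trade-off.
            g = prev_gain + (target_gain - prev_gain) * (i + 1) / n
            out.append(s * g)
        return out, target_gain  # carry target_gain into the next buffer

    loud = [0.0, 2.0, -2.0, 0.0]
    limited, gain = limit_buffer(loud, threshold=1.0, prev_gain=1.0)
    print(gain)  # 0.5
    ```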

    Would you need per channel limiting, or per player, or both?

    Chaining effects: no problem, just add them to a track, they'll all process the audio data, one after the other.

    Cheers,

    Gregzo
     
  44. thebarryman

    thebarryman

    Joined:
    Nov 22, 2012
    Posts:
    130
    Hey Gregzo,

    I've been trying to integrate G-Audio into my project and am hitting a snag -- there's no audio coming from my project at all. Following the FAQ's suggestion, I checked the VU meter on the GATPlayer, but no dice -- and the track settings are at their defaults (.5 pan, 1 gain).

    Any suggestions?

    Thanks!
    Julian
     
  45. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi Julian,

    What does the console say?
    Do the demo scenes work?
    Which platform are you on?

    I need more info to help. It's morning here, and a G-Audio day too - I'm fully available for support.

    Cheers,

    Gregzo
     
  46. thebarryman

    thebarryman

    Joined:
    Nov 22, 2012
    Posts:
    130
    I replied via PM. Thanks again for helping out and for making this great add-on!
     
  47. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Hi Gregzo,
    I tried to reply yesterday, but my browser glitched just as I was about to send it.

    I know that compressors can sound better with lookahead, but I didn't think it was a requirement. I've used software compressors that let you turn off the lookahead feature. I agree that it would be best not to introduce latency. Do you think you could implement a compressor without lookahead, maybe one that just wouldn't be capable of very low attack values?

    From my understanding, though, a limiter is basically a compressor with an infinite ratio and a short attack. Does that sound right to you?

    If it would be easier than a compressor, maybe you could make something closer to tape saturation? Sometimes tape saturation can sound like a cross between a compressor and some very light distortion. I know this is probably getting too deep, but I love the sound of Propellerhead Reason's tape saturation effect, if you've ever heard it.

    Anyway, to answer your question, I think it would probably be OK to just include a compressor at the channel level, since a channel can include one or multiple players, right? Am I getting the terminology right?

    Thanks!
    Jacob
     
  48. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi Jacob,

    It is simply impossible to limit a signal with zero lookahead without distorting the input. Zero lookahead has consequences for sound quality: to avoid discontinuities, non-linear limiting has to be applied. Some algorithms do this, with good results depending on the type of input signal. For G-Audio, I've decided to implement naive additive mixing, with an optional hard limiter (the clipMix field of GATPlayer turns the hard limiter on).

    In many cases, I find additive mixing perfectly OK provided the input samples are normalized at a low enough volume. For the record, I've tested mixing 100 piano samples normalized at a 0.3 float value with barely any clipping.

    Is mixing at max volume worth the extra processing power, latency and potential distortion? Maybe!

    Anyway, a very interesting problem. Have a read here to get an idea of how it doesn't have a straightforward solution (the comments are more important than the article). It's from Michael Tyson's blog - he's absolutely one of the star iOS audio devs (he's behind nothing less than AudioBus… and Loopy).

    I don't know Reason's tape saturation - from what you describe, it might clip signals? Perhaps a mix of hard limiting and Tyson's algorithm?

    Lots to do on G-Audio. Your suggestions are always welcome, and have been added to the list!

    Hard at work these days implementing I/O (writing to WAV files from the mic, tracks or the listener, loading/streaming WAV files, etc…)

    v1.1 will be full of surprises!

    Cheers,

    Gregzo
     
    Last edited: Feb 28, 2014
  49. uniphonic

    uniphonic

    Joined:
    Jun 24, 2012
    Posts:
    130
    Hi Gregzo,

    I see that there's a way to have an LFO control volume, but is there currently a way to have an LFO control pitch or a filter? In my sound design work, the most common use of an LFO is to control pitch or filter cutoff. It would be great if the LFO could be assigned to any parameter.

    Thanks!
     
    Last edited: Mar 2, 2014
  50. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi uniphonic,

    Currently, you'd have to implement that yourself. I've noted it down on the to-do list, and had a good think about it following your post: I believe I can sort out something pretty cool. The idea is to enable modulation of any parameter by any waveform, so you could even use a sample or white noise to modulate distortion, pitch or cutoff…

    Modulating biquad filters per sample is quite resource-hungry, I must warn. I might try out some resolution options: modulate the parameter every x samples…
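
    A sketch of that resolution idea (Python for illustration; all names here are hypothetical, nothing is G-Audio API): recompute the modulated parameter once every control_interval samples instead of once per sample.

    ```python
    import math

    def lfo_control_rate(num_samples, sample_rate, lfo_freq,
                         base_value, depth, control_interval):
        """Compute a modulated parameter at 'control rate': one value
        per control_interval samples instead of one per sample."""
        values = []
        for i in range(0, num_samples, control_interval):
            t = i / sample_rate
            value = base_value + depth * math.sin(2 * math.pi * lfo_freq * t)
            values.append(value)  # e.g. feed each to the filter's cutoff
        return values

    # Modulate a cutoff around 1000 Hz, +/-500 Hz, updating every 64 samples
    cutoffs = lfo_control_rate(1024, 44100, lfo_freq=2.0,
                               base_value=1000.0, depth=500.0,
                               control_interval=64)
    print(len(cutoffs))  # 16 filter updates instead of 1024
    ```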

    Can't promise anything time-wise - v1.1 or a bit later.

    Cheers,