Audio: Passing audio buffers between AudioSources

Discussion in 'Audio & Video' started by DanielRothmann, Jun 8, 2017.

  1. DanielRothmann

    DanielRothmann

    Joined:
    Feb 4, 2016
    Posts:
    10
    Hi all,

    My apologies if this question has already been answered, but in searching around I have not been able to find an answer.

    I'm working on a modular synthesizer in Unity and, in order to get audio-rate modulation and truly modular signal flow, I would like to find a way to pass audio between audio sources. Let's say I procedurally generate a sound in OnAudioFilterRead for one source - Is there any way for another AudioSource to access that sound in realtime?

    I'm already aware of AudioSource.GetOutputData, but this method may only be called from the main thread, and as such cannot be called in OnAudioFilterRead, which runs on the audio thread. I have tried storing a filter's audio in a preallocated buffer for others to access, but with no luck so far.

    The only solution for signal flow I've come up with so far is, once a sound source like an oscillator is set up, applying all effects/processing as additional filters on that audio source. This, however, does not allow for the audio-rate modulation (which requires passing audio between filters) I'd hoped for.

    Does anyone have some tips or tricks on how to achieve this? Thanks!
     
  2. aihodge

    aihodge

    Joined:
    Nov 23, 2014
    Posts:
    163
    Interesting problem! The use of a shared buffer is the approach that I would pursue further, personally. Based on your description, it sounds like you need to set up a circular buffer for each Audio Source that is generating audio data that you'd like to access elsewhere. Seems like the difficult part would be making a "buffer manager" script which could set flags indicating that there is sufficient data in each buffer to be consumed by another source. It would likely require the buffer be some multiple of the length of the data you are generating in OnAudioFilterRead, and there would be some latency (on the order of milliseconds, based on the size of these buffers and the sampling rate) so it wouldn't be realtime exactly, but you would be able to achieve audio rate modulation this way.
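
    Something like this, roughly - all names here are made up, and it assumes Unity services every OnAudioFilterRead callback on the same audio thread, so no locking is shown (you'd want to verify that, or add synchronization, in a real project):

    Code (CSharp):
    // Hypothetical shared ring buffer: the producing source writes a block
    // from its OnAudioFilterRead, a consuming source reads it from its own.
    public class SharedRingBuffer
    {
        readonly float[] data;
        int writePos, readPos;

        public SharedRingBuffer(int capacity) { data = new float[capacity]; }

        // Producer calls this from its OnAudioFilterRead.
        public void Write(float[] block)
        {
            for (int i = 0; i < block.Length; i++)
            {
                data[writePos] = block[i];
                writePos = (writePos + 1) % data.Length;
            }
        }

        // Consumer calls this from its own OnAudioFilterRead; it trails the
        // writer by at least one block, which is the latency I mentioned.
        public void Read(float[] block)
        {
            for (int i = 0; i < block.Length; i++)
            {
                block[i] = data[readPos];
                readPos = (readPos + 1) % data.Length;
            }
        }
    }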
     
  3. DanielRothmann

    DanielRothmann

    Joined:
    Feb 4, 2016
    Posts:
    10
    Hey aihodge, thanks for your insights! I'll look further into the shared buffer approach - although I did control for buffer length, it wasn't circular, so that would be interesting to examine further. One buffer's worth of modulation latency is fine as long as it's the same for all components.

    I looked into a popular Unity music making game and found their approach to a modular signal path interesting as well: instead of relying on OnAudioFilterRead for each component, each module has a "ProcessBuffer" method. A top-level component (which could be considered a "speaker" or "audio out" component) then calls ProcessBuffer on the connected sound source in OnAudioFilterRead. If anything is connected to that component, it in turn calls for it to process its buffer. This way you can call from the front of the chain to the back, mostly on a single shared buffer owned by the speaker. Thought that was quite a clever optimization.
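
    In sketch form, I imagine the pattern looks something like this (class names are my own invention, not theirs):

    Code (CSharp):
    using UnityEngine;

    // Each module can pull from the module connected upstream of it.
    public abstract class SynthModule : MonoBehaviour
    {
        public SynthModule input;  // upstream connection, if any

        // Fills or transforms the buffer in place.
        public abstract void ProcessBuffer(float[] buffer, int channels);
    }

    public class SineOscillator : SynthModule
    {
        public double frequency = 440.0;
        double phase, sampleRate;

        void Awake() { sampleRate = AudioSettings.outputSampleRate; }

        public override void ProcessBuffer(float[] buffer, int channels)
        {
            double step = 2.0 * System.Math.PI * frequency / sampleRate;
            for (int i = 0; i < buffer.Length; i += channels)
            {
                float s = (float)System.Math.Sin(phase);
                phase += step;
                if (phase > 2.0 * System.Math.PI) phase -= 2.0 * System.Math.PI;
                for (int c = 0; c < channels; c++) buffer[i + c] = s;
            }
        }
    }

    public class Gain : SynthModule
    {
        [Range(0f, 1f)] public float level = 0.5f;

        public override void ProcessBuffer(float[] buffer, int channels)
        {
            if (input != null) input.ProcessBuffer(buffer, channels);  // pull upstream first
            for (int i = 0; i < buffer.Length; i++) buffer[i] *= level;
        }
    }

    // The "speaker": the only component with an actual audio callback.
    // It pulls the whole chain, so everything runs on the audio thread
    // in a single pass over one shared buffer.
    [RequireComponent(typeof(AudioSource))]
    public class SpeakerOut : MonoBehaviour
    {
        public SynthModule source;

        void OnAudioFilterRead(float[] data, int channels)
        {
            if (source != null) source.ProcessBuffer(data, channels);
        }
    }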
     
  4. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,305
  5. DanielRothmann

    DanielRothmann

    Joined:
    Feb 4, 2016
    Posts:
    10
    Thanks for the suggestion @r618 - I did have a look at the native audio SDK, but as far as I could gather, it seems pretty constrained to the mixer system, making it difficult to set up and rewire modules at runtime. I'm looking forward to Unity opening up the native audio part further, as they have already teased a bit.

    I have done a little write-up on my solution so far, using Pure Data for designing synth modules and Heavy to compile them to native code. It might be useful to someone looking for similar answers here in the future:
    https://www.linkedin.com/pulse/building-modular-synthesizer-unity-using-pure-data-heavy-rothmann
     
    NikH likes this.
  6. NikH

    NikH

    Joined:
    Dec 24, 2013
    Posts:
    14
    It's 3 years later - I'm interested to know whether a solution for this was ever found; it would certainly be helpful.
     
  7. ReaktorDave

    ReaktorDave

    Joined:
    May 8, 2014
    Posts:
    139
  8. DanielRothmann

    DanielRothmann

    Joined:
    Feb 4, 2016
    Posts:
    10
    Looking back, my problems were caused by the architecture I was trying to implement.

    The issue has to do with threading - all audio work runs on a dedicated audio thread, and attempting to pass data between GameObjects often leads to doing that work on the main thread instead (which is not allowed).

    My solution was to organize the work into a modular audio processing graph which is executed from a single audio source. That means a single GameObject with an AudioSource orchestrates all the processing work from its OnAudioFilterRead method.

    To play back audio from certain processing paths, each node in the graph can save its work to a temporary buffer, which is public for other GameObjects and AudioSources to read.
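
    In sketch form (names made up, just to show the shape of it):

    Code (CSharp):
    using UnityEngine;

    // A node in the processing graph. The graph owner calls Process on
    // every node from its own OnAudioFilterRead; the node then copies its
    // result into a preallocated "tap" buffer that others can read.
    public class GraphNode
    {
        public readonly float[] Tap;  // last processed block

        public GraphNode(int blockSize) { Tap = new float[blockSize]; }

        public void Process(float[] block)
        {
            // ... this node's DSP runs here, in place on 'block' ...
            int n = Mathf.Min(block.Length, Tap.Length);
            System.Array.Copy(block, Tap, n);  // publish the result
        }
    }

    // A second AudioSource that simply plays back one node's tap.
    [RequireComponent(typeof(AudioSource))]
    public class TapPlayer : MonoBehaviour
    {
        public GraphNode node;  // assigned by the graph owner at setup time

        void OnAudioFilterRead(float[] data, int channels)
        {
            if (node == null) return;
            int n = Mathf.Min(data.Length, node.Tap.Length);
            System.Array.Copy(node.Tap, data, n);
        }
    }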

    Does that make sense?
     
  9. nzhangaudio

    nzhangaudio

    Joined:
    May 26, 2015
    Posts:
    20
    Does that mean you're doing all the processing and mixing in some other scripts and just passing the final result to one OnAudioFilterRead()? If so, that wouldn't be "audio-rate" but frame-rate processing, would it?
     
  10. DanielRothmann

    DanielRothmann

    Joined:
    Feb 4, 2016
    Posts:
    10
    No, it's the other way around - all signal processing work is orchestrated and executed from the OnAudioFilterRead callback within a single GameObject. That means the processing work is always done from within a central audio callback (at "audio rate").
     
    nzhangaudio likes this.