Audio step sequencer - and they said it couldn't be done!

Discussion in 'Made With Unity' started by tomnullpointer, Feb 14, 2011.

  1. tomnullpointer

    tomnullpointer

    Joined:
    Sep 20, 2010
    Posts:
    142
    OK, so it's a bit of a cheat, but it does work!

    Try the webplayer.

    I've tried what seems like a million ways to get real-time audio sequencing in Unity. Unity isn't very good for small-millisecond callbacks, it won't give access to FMOD's callbacks, and all manner of timers (threads, coroutines, system timers) have lag and variable results when you get down to the audio millisecond range. Probably only hardware-bound callbacks (i.e. buffer requests from audio cards) are accurate enough. However, with some sneaky tricks I've managed to get triggers to work down to an acceptable range.

    Here's the trick, just in case anyone else has struggled with this.

    Run one sample on loop as a metronome.
    Run a FixedUpdate (or any fast update loop) and use it to read the current PCM sample position of the metronome.
    Use this position to detect when the current loop of the metronome sample is nearing its end (the margin you can detect depends on the frequency of your FixedUpdate, etc.).
    If it's close enough to the next loop, trigger a new sound to play (and stop checking), BUT offset it by the remaining samples in the current loop (loop sample length - current sample position).

    This means it will play on the exact loop point that is coming up. This solution might not work for super slow machines, and it gets less accurate as your tempo increases (but you can adjust the margin for that), but it's the best, indeed the only, solution I've found so far.
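
    For anyone who wants to see the idea in code, here is a rough C# sketch of the trick rather than my exact script; the source names and the threshold value are placeholders you would tune to your own FixedUpdate rate and metronome clip length.

    Code (csharp):
    using UnityEngine;

    public class MetronomeTrigger : MonoBehaviour
    {
        public AudioSource metronome;  // looping one-beat clip, acts as the clock
        public AudioSource voice;      // sound to trigger on the next loop point
        public int threshold = 20000;  // "near the end of the loop" margin, in samples

        bool armed = true;

        void FixedUpdate()
        {
            int pos = metronome.timeSamples;

            if (pos < threshold)
            {
                armed = true; // early in the loop again, re-arm the trigger
            }
            else if (armed)
            {
                armed = false; // stop checking until the loop wraps
                // Offset = samples left in the current metronome loop, so the voice
                // starts exactly on the upcoming loop point. Play(ulong) is the
                // delay-in-samples overload available in this Unity version.
                ulong offset = (ulong)(metronome.clip.samples - metronome.timeSamples);
                voice.Play(offset);
            }
        }
    }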

    enjoy!
     
    Last edited: Feb 18, 2011
  2. RobbieDingo

    RobbieDingo

    Joined:
    Jun 2, 2008
    Posts:
    484
    @tomnullpointer,

    Very well done, man - a brilliant piece of creative lateral thinking there. The result seems pretty stable at that tempo.

    (Sent you a PM).
     
    Last edited: Feb 15, 2011
  3. DJAZLAR

    DJAZLAR

    Joined:
    Feb 18, 2011
    Posts:
    37
    @tomnullpointer,

    Awesome work on this project - it's great to see someone pursuing the possibilities of audio sequencing and the timing involved. I have been doing the same and trying to find a way to achieve perfect sync. It's an interesting way you worked out the timings, and it has given me new ideas. Unity is a great tool to add visuals too.
     
  4. DJAZLAR

    DJAZLAR

    Joined:
    Feb 18, 2011
    Posts:
    37
    Here is a test of what I have come up with so far.

    This was a simple project where we wanted to let someone with no timing knowledge make something that sounds good, without any music-making skills.

    http://www.2shared.com/file/X5c9oQDJ/jackdbquaver.html

    After watching your sequencing I also got to try the idea out: the blocks each represent an audio loop; I placed them in various places and used collisions to check for hits, and on a hit the relevant loops start playing. Still got a lot of work to do on it, but it gives a basic idea.

    http://djazlar.net46.net/WebPlayer/WebPlayer.html
     
    Last edited: Feb 19, 2011
  5. pat_sommer

    pat_sommer

    Joined:
    Jun 28, 2010
    Posts:
    586
    Looks great! I actually wanted to tinker with making one of these myself, but I had no idea how I would export it out as, say, an MP3 or something.

    Great first steps though!
     
  6. DJAZLAR

    DJAZLAR

    Joined:
    Feb 18, 2011
    Posts:
    37
    Just a quick tip:
    when you see a line on the left, it starts playing when you hover over it with the mouse pointer.
     
  7. FeloniousMonk

    FeloniousMonk

    Joined:
    Dec 18, 2010
    Posts:
    9
    Hey folks! Stumbled upon this thread after MUCH searching for information on implementing this sort of thing in Unity (meaning triggering things in nearly sample-accurate musical time). But, despite the info provided here and the example webplayers, my friend and I have been unable to get the technique that tomnullpointer described to really work correctly, and all our results still have a very audible inaccuracy about the timing that isn't present in any of the other projects in this thread. Could anyone who's had success with this maybe shed a little light on how to make it work, or what to avoid that might fudge it up? A browse through the forums tells me there are still a lot of people looking to crack this. The code I've come up with so far, based on tomnullpointer's original description, is as follows:

    Code (csharp):
    var checkingTime : boolean = false;
    var audioSample : AudioSource;
    var audioMetronome : AudioSource;
    var checkingSamples : boolean = true;
    var samplePos : int = 0; // cached metronome sample position

    function FixedUpdate () {
        if (checkingSamples == true) {
            samplePos = audio.timeSamples;
        }
        if (samplePos < 20000) {
            checkingTime = false;
            checkingSamples = true;
        }
        if (samplePos > 20000) {
            checkingTime = true;
        }
        if (checkingTime == true) {
            checkingSamples = false;
            audioSample.Play(audioMetronome.audio.clip.samples - audioMetronome.audio.timeSamples);
            var offset = audioMetronome.audio.clip.samples - audioMetronome.audio.timeSamples;
            print("Sample position is " + samplePos);
            print("Offset is " + offset);
        }
    }
    Note: The sample we're using as a metronome is one beat long and 22,050 samples; tests we logged showed that 20,000 seems to be an appropriate threshold that always gets caught. The result from this script is that our sample to be triggered plays near each quarter note at the metronome's tempo, but is always off; the amount of inaccuracy changes from trigger point to trigger point, so it doesn't seem to be a matter of a static offset for latency or something like that. The two print commands at the end have confirmed that the math is coming out right, and it should be triggering the sounds right at the start of the next 22,050-sample loop. Even if folks don't want to spell this out in their own source, any help would be appreciated about what direction to look in. Thanks!
     
  8. CharlieSamways

    CharlieSamways

    Joined:
    Feb 1, 2011
    Posts:
    3,424
    Was really fun to play with :) more sounds would be awesome
     
  9. starpaq

    starpaq

    Joined:
    Jan 17, 2011
    Posts:
    118
    I'm trying to understand what you are trying to accomplish. It appears that you are timing each consecutive note based on the amount of samples that have passed for the currently playing audio. If I'm right then, you are using the audio itself as a measurement for the tempo. Are you trying to keep the audio in sync or are you trying to play the audio clips from a given point within each audio?
     
  10. FeloniousMonk

    FeloniousMonk

    Joined:
    Dec 18, 2010
    Posts:
    9
    Sorry, typo in that first if statement I think; let me fix and clarify. There are two AudioSource variables that are assigned to the two audio sources that are part of the game object this is attached to. As per tomnullpointer's description, there's one looping sample (audioMetronome) that is one beat long at the desired tempo and is used as a master clock. audioSample is the sound effect that is meant to be triggered at the start of every loop of audioMetronome. Does that make a little more sense?

    Code (csharp):
    var checkingTime : boolean = false;
    var audioSample : AudioSource;
    var audioMetronome : AudioSource;
    var checkingSamples : boolean = true;
    var samplePos : int = 0; // cached metronome sample position

    function FixedUpdate () {
        if (checkingSamples == true) {
            samplePos = audioMetronome.audio.timeSamples;
        }
        if (samplePos < 20000) {
            checkingTime = false;
            checkingSamples = true;
        }
        if (samplePos > 20000) {
            checkingTime = true;
        }
        if (checkingTime == true) {
            checkingSamples = false;
            audioSample.Play(audioMetronome.audio.clip.samples - audioMetronome.audio.timeSamples);
            var offset = audioMetronome.audio.clip.samples - audioMetronome.audio.timeSamples;
            print("Sample position is " + samplePos);
            print("Offset is " + offset);
        }
    }
     
    Last edited: Jun 8, 2011
  11. starpaq

    starpaq

    Joined:
    Jan 17, 2011
    Posts:
    118
    OK, I will try this as best as I can. I actually learned something reading the entire forum post :). First off I tried modifying your code:

    Code (csharp):
    var checkingTime : boolean = false;
    var audioSample : AudioSource;
    var audioMetronome : AudioSource;
    var checkingSamples : boolean = true;
    var samplePos : int = 0; // cached metronome sample position

    function Update () { // why not use an Update function?
        if (checkingSamples == true) {
            samplePos = audioMetronome.audio.timeSamples;
        }
        if (samplePos < 20000) {
            checkingTime = false;
            checkingSamples = true;
        }
        if (samplePos >= 20000) { // greater than or equal
            checkingTime = true;
        }
        if (checkingTime == true) {
            checkingSamples = false;
            audioSample.Play(audioMetronome.audio.timeSamples); // offset?
            var offset = audioMetronome.audio.clip.samples - audioMetronome.audio.timeSamples;
            print("Sample position is " + samplePos);
            print("Offset is " + offset);
        }
    }
    I used an Update because, to my knowledge, FixedUpdate is usually for physics. Then I changed the offset code. Subtracting the metronome's current position from its total length would only give you the remainder of time left before the metronome finishes. So why not play the audioSample from the same point where the metronome is at? Perhaps that was the problem.

    Then I decided to modify my method a bit, because at the time I did not use any offsetting of audio samples. I use a function that invokes itself over and over at a desired rate of time. Then I tried using an offset with time instead of samples. Here it is:

    Code (csharp):
    var metronomeSpeed : float = 1.0; //seconds
    var audioSample : AudioSource;

    function Start(){
        metronome();
    }

    function metronome(){
        var timeLapse : float = Time.deltaTime;
        var offset : float = timeLapse - metronomeSpeed;
        audioSample.Play()
        if( offset > 0.0 ){
            audioSample.time = offset;
        }
        Invoke( "metronome", metronomeSpeed );
    }
    I have not tested my modified version, but those are my insights on how to keep things in sync. With my sequencer I did not use audio sample offsetting. The timing using Invoke appeared to keep sound timing accurate as long as the fps was constant.
     
  12. starpaq

    starpaq

    Joined:
    Jan 17, 2011
    Posts:
    118
    Code (csharp):
    var metronomeSpeed : float = 1.0; //seconds
    var audioSample : AudioSource;
    var timeLapse : float;

    function Start(){
        metronome();
    }

    function Update(){
        timeLapse += Time.deltaTime;
    }

    function metronome(){
        var offset : float = timeLapse - metronomeSpeed;
        audioSample.Play();
        if( offset > 0.0 ){
            audioSample.time = offset;
        }
        timeLapse = 0.0;
        Invoke( "metronome", metronomeSpeed );
    }
    updated
    EDIT: I just tried it out in Unity and it works! I also put a missing semicolon into the code.
    2nd Edit: Nevermind :( there is an extra piece of the audio playing. It probably needs an audio clean-up.
    LAST EDIT (for sure): It really does work! I just had "play on awake" selected when it should have been deselected.
     
    Last edited: Jun 8, 2011
  13. FeloniousMonk

    FeloniousMonk

    Joined:
    Dec 18, 2010
    Posts:
    9
    I tried your code out, starpaq, and it seems to have about the same result as a lot of our previous tests, in that there are still very noticeable inaccuracies in the timing of the triggered sample (i.e. it doesn't sound very musical). There still remains a very big difference between the results of my scripts/your suggestion, and the results of tomnullpointer's example at the top of the thread, which is what I'm mainly confused about.

    To answer your other questions:

    We cached the remaining samples in the current metronome loop because, while the goal was to trigger the sound effect at the start of every metronome loop, neither Update nor FixedUpdate (which is faster, to my knowledge) updates anywhere near as fast as the sample rate of an audio file (in this case, 44100 Hz), so it would be impossible to cache the metronome's sample position and simply trigger a sound whenever that position is 0; it simply wouldn't see 0 most of the time. So by checking once a threshold is hit, it ensures that a new sound will always be triggered once every loop, and then that sound is offset by the number of samples left in the metronome loop, so that the sound plays exactly at the start of the next loop. At least that's the theory.
     
  14. starpaq

    starpaq

    Joined:
    Jan 17, 2011
    Posts:
    118
    I think the accuracy you are looking for is too limited by using Unity, because the audio timing is based on update cycles, which are too variable. I was even considering simulating a buffer in code to play the entire beat on a delay in order to anticipate playing the sample on time. However, even that seems to be crippled by variable performance. It appears not to be possible given the current access or the resources being used in code. If I think of something I will post it.

    Perhaps someone else would add to this in order to steer you in a better direction.

    I'm not even sure if this level of accuracy is possible programming through middleware, or without using a fully accessible FMOD plugin.
     
  15. hippocoder

    hippocoder

    Digital Ape

    Joined:
    Apr 11, 2010
    Posts:
    29,723
    How does this differ from simply checking milliseconds? Millisecond-based tempo is rock solid on time; I don't understand how this would be any more accurate than firing events based on how many milliseconds have elapsed since the last metronome tick.
     
  16. FeloniousMonk

    FeloniousMonk

    Joined:
    Dec 18, 2010
    Posts:
    9
    It was my understanding, hippocoder, that actual millisecond-based checks aren't possible, or are difficult, because of the speed at which Update, FixedUpdate, and coroutines run in Unity. You can tell the script to look for each millisecond, but the engine isn't checking fast enough to actually see every millisecond (thus why tomnullpointer's solution at the top of the thread was necessary), as far as I know. As for whether or not that accuracy is possible, there are two step sequencers out (the one at the top of this thread, and the one on Kongregate made by quickfingers) that seem to indicate it is possible in Unity. Whether or not you need some sort of custom .dll, plugin, or middleware is another question, but the description at the top of the thread seems to indicate it was done without using millisecond counters and without middleware.

    Basically, we have yet to see a solution that is actually rock solid on musical timing, and we've tried to do a lot of time/second/ms based solutions; entirely possible I've missed something, but I'm just trying to get a discussion going on the matter and let ideas bounce around.
     
    Last edited: Jun 8, 2011
  17. starpaq

    starpaq

    Joined:
    Jan 17, 2011
    Posts:
    118
    Are you using anything specifically to measure the accuracy of timing? I would like to know, because I have not been able to determine the level of accuracy with my own sequencer. I have played my sequencer and personally haven't noticed (with my ears) any obvious poor timing, and I have not received feedback about its approximation of timing. I would appreciate any insight or opinion you have on my sequencer.
     
  18. FeloniousMonk

    FeloniousMonk

    Joined:
    Dec 18, 2010
    Posts:
    9
    I just checked yours out, starpaq. Coming from a musical background I can definitely hear some discrepancies in the timing of notes in your step seq., i.e. if I fill every bubble/button of a single row (so 16 notes should play in total), they sometimes sound equidistant from one another in terms of time, and other times they don't; they sound closer or farther apart in subtle (but still noticeable) ways. It's the same result that I've been getting with a lot of my tests. But, to answer your other question, we haven't been scientifically measuring anything; I've just been relying on my ears, and even though I trust them, feel free to take my feedback with a grain of salt.

    If you do want to test the timing, I suppose you could record the audio output of your webplayer (probably just by recording the output of your computer somehow), put the resulting file in some audio editing software and measure the milliseconds between the transients (or the start of each sound). Alternatively, you could always have a backing track or metronome start to play when the sequencer starts moving, at the tempo that the seq. is moving at, and see how the triggered sounds line up with the streaming audio. I think even the example at the top of this thread has some slipping of the time between samples, but it's very small. (Not trying to be down on your work, just giving some honest feedback).
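
    If you wanted to script that measurement instead of eyeballing it in an editor, a rough sketch might look like the following (made-up names and threshold; real onset detection is fancier than this): it scans the recorded samples, finds where each hit starts, and prints the gaps in milliseconds.

    Code (csharp):
    using UnityEngine;

    public static class OnsetTimer
    {
        // samples: the recorded output as a float array; sampleRate: its sample rate.
        public static void PrintGaps( float[] samples, int sampleRate, float threshold = 0.3f )
        {
            int lastOnset = -1;
            bool quiet = true;
            for( int i = 0; i < samples.Length; i++ )
            {
                float a = Mathf.Abs( samples[i] );
                if( quiet && a > threshold ) // transient: the signal jumped above the threshold
                {
                    if( lastOnset >= 0 )
                        Debug.Log( "Gap: " + ( ( i - lastOnset ) * 1000f / sampleRate ) + " ms" );
                    lastOnset = i;
                    quiet = false;
                }
                else if( !quiet && a < threshold * 0.1f ) // wait for the sound to die down before re-arming
                {
                    quiet = true;
                }
            }
        }
    }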
     
  19. starpaq

    starpaq

    Joined:
    Jan 17, 2011
    Posts:
    118
    Thank you. It is highly appreciated!
     
  20. Evil-Dog

    Evil-Dog

    Joined:
    Oct 4, 2011
    Posts:
    134
    Hey TomNullPointer, I don't know if you'll read this but you're a freaking genius, talk about thinking outside the box, this works perfectly, I just tried it. So thanks a bunch for this ingenious implementation, I've been looking around for a solution for seamless sound sequencing.
    Cheers
     
  21. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    @Evil-Dog: I disagree. Offsetting samples kind of works, but it still feels like a hack. I'm a musician with a passion for coding, not a coder, so maybe it's just my skills not being acute enough. My ears certainly are. Reasons for disagreement below; please prove me wrong, as this is driving me to despair!

    1- It's not rock steady. I have tried a looping .25 s click against the same click fired by Play(samplesOffset), and you can clearly hear random delays (minute, but perfectly audible).

    2- For sounds such as piano, repeated notes just don't work, as Play(samplesOffset) actually cuts the sound directly; since it's offsetting slightly different amounts of samples every time, the result is just plain not usable. I could jump through hoops and detect adjacent notes to automatically assign them to a different AudioSource.

    3- Millisecond-accurate firing of PlayOneShot would be really badass.
     
  22. soren

    soren

    Joined:
    Feb 18, 2008
    Posts:
    123
    You have to fire the sounds from the same frame to get it sample-precise. Like this (pseudo-code):

    Code (csharp):
    void Update()
    {
        if (timeToUpdateMySequence)
        {
            // piano
            pianoA.Play(0);
            pianoB.Play(1000); // play precisely 1000 samples after pianoA
            pianoC.Play(2000); // play precisely 2000 samples after pianoA
        }
    }
     
  23. reissgrant

    reissgrant

    Joined:
    Aug 20, 2009
    Posts:
    726
    Very interesting way of doing this, thanks for sharing!
     
  24. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Thanks, I had figured that out, but the downside is that once a few sounds are fired in the same frame, bye bye real-time. Example: every beat at 60 bpm, 8 notes are triggered with a different delay. If the user adds or moves a note just after, it will of course not sound. Mentioned this on another thread, repeating for the sake of clarity! It's not that bad, but it forces me to add quite a lot of code to avoid deleted notes still sounding in certain circumstances, trying to give the impression that my sequencer is more responsive than it actually is...

    P.S.: point 1 I made in the previous post still stands. Minute but audible random delays.

    Looking forward to 3.5, and enjoying 3.4 a bunch nevertheless!
     
  25. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    858
    I'm teaching a bunch of sound design students at Sonic College at the moment, and a couple of them need a stable step sequencer in Unity. I followed the guide provided by tomnullpointer at the top of this thread. On my machine I start hearing audible glitches when playing at bpm above 200, so don't push it too hard. The script could be greatly improved by dynamically creating multiple AudioSources per AudioClip in Start() and only calling Play() on sources that are not already playing. I can also recommend creating a struct to contain volume and pitch for each step.

    I hope it will be useful.

    Greetings from Haderslev
    Carl Emil



    Code (csharp):
    /*
        StepSequencer.cs
        Carl Emil Carlsen
        http://sixthsensor.dk
        January 2012

        THE DEAL:
        Use without restrictions. No warranty. Share improvements.
    */

    using UnityEngine;
    using System.Collections;

    public class StepSequencer : MonoBehaviour
    {
        public AudioClip syncLoop;

        public float bpm = 120; // set before Start()

        public AudioSource bdAudioSource;
        public AudioSource sdAudioSource;
        public AudioSource hhAudioSource;

        const int STEP_COUNT = 16;
        bool[] bdSteps = new bool[]{true,false,false,false,true,false,false,false,true,false,false,false,true,false,false,false};
        bool[] sdSteps = new bool[]{false,false,false,false,true,false,false,false,false,false,false,false,true,false,false,false};
        bool[] hhSteps = new bool[]{true,true,true,true,true,true,true,true,true,true,true,true,true,true,true,true};

        float cycleDuration; // seconds
        int currentStep;
        float normalizedCyclePosition;

        AudioSource syncAudioSource;


        void Start()
        {
            syncAudioSource = gameObject.AddComponent<AudioSource>();
            syncAudioSource.clip = syncLoop;
            syncAudioSource.loop = true;
            syncAudioSource.pitch = bpm / 120f;
            syncAudioSource.Play();

            // beats per minute to seconds per beat, times four (4/4)
            cycleDuration = (60 / bpm) * 4;
        }


        void Update()
        {
            normalizedCyclePosition = syncAudioSource.timeSamples / (float) syncLoop.samples;
            float currentFloatStep = normalizedCyclePosition * STEP_COUNT;
            float nextStep = currentStep + 1;

            if( currentFloatStep > nextStep - 0.5f && !( currentStep == 0 && currentFloatStep > STEP_COUNT-1 ) )
            {
                currentStep++;
                if( currentStep == STEP_COUNT ) currentStep = 0;

                float offset = ( ( nextStep - currentFloatStep ) / (float) STEP_COUNT ) * cycleDuration; // in seconds

                if( bdSteps[currentStep] ) bdAudioSource.Play( (ulong) ( offset * bdAudioSource.clip.frequency ) );
                if( sdSteps[currentStep] ) sdAudioSource.Play( (ulong) ( offset * sdAudioSource.clip.frequency ) );
                if( hhSteps[currentStep] ) hhAudioSource.Play( (ulong) ( offset * hhAudioSource.clip.frequency ) );
            }
        }


        void OnGUI()
        {
            for (int s = 0; s < STEP_COUNT; s++) {
                float normalisedIndex = s / (STEP_COUNT-1f);
                bdSteps[s] = GUI.Toggle( new Rect( 20 + 400 * normalisedIndex, 20, 20, 20 ), bdSteps[s], "" );
                sdSteps[s] = GUI.Toggle( new Rect( 20 + 400 * normalisedIndex, 40, 20, 20 ), sdSteps[s], "" );
                hhSteps[s] = GUI.Toggle( new Rect( 20 + 400 * normalisedIndex, 60, 20, 20 ), hhSteps[s], "" );
            }
            GUI.HorizontalSlider( new Rect( 20, 80, 400, 20 ), normalizedCyclePosition, 0, 1 );
        }
    }
     
    Last edited: Jan 31, 2012
  26. metaphysician

    metaphysician

    Joined:
    May 29, 2012
    Posts:
    190
    hey everyone! - this is quite an interesting thread.

    @Carl I did just now try to download and try out your example, and I definitely get strange behavior. I'm not sure what types or lengths of files should be present, but my current choices don't work too well, it seems. The metronome file should be a beat long, I believe, but I'm not sure how long the bass, snare and hi-hat sounds should be to get the best results.

    @everyone - I'm starting on an interesting project to create a transforming step sequencer (like an 8x8 Tenori-on) with 4 layers, but with a twist (literally). The principle that inspired me was one of stars in a constellation contained in a cube of 8x8x8 that can be rotated to rearrange the contents and reassign timbres in (hopefully) a novel way. Since actually trying to create or draw a 3D melody of points seems fairly difficult, I've decided to split the cube at that point into 4 8x8 layers, and then provide a way for the layers to be stacked up into a cube and rotated that way.

    After checking out Quickfingers' sequencer I can pretty much agree that I want 4 8x8 sequencers (1 rhythm, 1 bass, 2 instruments); his version has the tightest timing I've heard so far of that style, and it can sound 32 notes at once. I'd consider buying the project, but I'm poor and sort of intrigued by the challenge of doing it myself at the same time. At the moment my demo is very bad - it uses OneShot with 3D objects and no guide track - done entirely with visual collisions to generate sounds.

    So one difference I'm planning on is implementing phasing parts by allowing each layer's size to be set up independently anywhere from 1 to 8, but still using the same beat to drive all layers. I'd like to know if anyone has advice on how best to achieve this and the rest of the project, both visually and musically. I would assume I should keep the visual and audio parts separate.

    Any and all advice/help appreciated.

    scott
     
  27. EvansGreen

    EvansGreen

    Joined:
    Nov 9, 2012
    Posts:
    129
    It was great to play around with that tool. I agree with gregzo in that as a musician, it's still difficult to find a use for it, but coding-wise it's a nice work of thinking out of the box. Cheers!
     
  28. cecarlsen

    cecarlsen

    Joined:
    Jun 30, 2006
    Posts:
    858
    @metaphysician. The sync audio clip has to be four beats at 120 bpm. The length of the other sounds is up to you. You could also rewrite the script to just create a sync audio clip in Start using AudioClip.Create.
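
    If you go that route, something like this untested sketch should do it (the exact AudioClip.Create overload depends on your Unity version; this is the one that takes a 3D flag before the stream flag):

    Code (csharp):
    // Untested sketch: generate a silent sync loop at startup instead of importing one.
    // Length = four beats (one 4/4 bar) at the chosen bpm.
    AudioClip CreateSyncLoop( float bpm, int sampleRate )
    {
        int lengthSamples = (int) ( sampleRate * ( 60f / bpm ) * 4f );
        return AudioClip.Create( "SyncLoop", lengthSamples, 1, sampleRate, false, false );
    }

    Assign the result to syncAudioSource.clip in Start(), read syncAudioSource.clip.samples instead of syncLoop.samples in Update(), and since the generated clip already matches the bpm you can drop the pitch adjustment.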

    It sounds like a fun project. I bet you've already seen Amit Pitaru's work (http://vimeo.com/10514562).

    Drop me a message when there's something to see. I'm curious.

    ~ce
     
  29. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi Metaphysician,

    A lot of headaches, failed attempts, and one published app have passed since my last post in this thread.

    Using a reference clip is totally an option, albeit a somewhat clumsy one: what do you do if you need to change the tempo whilst playing? The code for that isn't pretty, I can assure you... Plus you waste a perfectly good AudioSource for nothing. And Play(offset) can be inaccurate on iOS with the best latency settings. By the way, an iOS tip: do everything at 24 kHz, as anything above will be downsampled by FMOD anyway.

    For my current project, I took a different route: treat AudioClips as ring buffers, and fill them continuously with fresh data. The clip can be any length, but at least twice as long as your longest sample (and a bit more). It's more complex to set up, but far more flexible. It also requires less RAM, but more processing, as data has to be continuously fed into the clip.

    The script below demonstrates this approach. It is adapted from my current project, where I need to change the tempo continuously (every beat). Feed the data for the first 2 beats in PrepareClip, call StartPlaying, and the script will itself ask for data as it needs it.

    Example :
    Code (csharp):
    using UnityEngine;
    using System.Collections;

    class RingAudioClip : MonoBehaviour
    {
        public float bpm;
        public int nbOfBeats;
        public int nbOfChannels;
        public bool is3dClip;
        public int longestSample;

        int _samplesPerBeat;
        const int _sampleRate = 24000;
        AudioClip _clip;
        int _clipLength; // in samples
        AudioSource _source;
        int _nextSetDataPoint = 0;
        int _nextOffsetInClip = 0;
        int _currentBeat;

        int _beatToFill;

        void Awake()
        {
            float beatTime = 60f/bpm; // seconds per beat
            _samplesPerBeat = (int)(beatTime*_sampleRate);
            _source = gameObject.AddComponent(typeof (AudioSource)) as AudioSource;
            _source.loop = true;

            if(longestSample<_samplesPerBeat)
            {
                longestSample = _samplesPerBeat;
            }
            _clipLength = longestSample*2 + _sampleRate; // extra second
            _clip = AudioClip.Create("RingClip", _clipLength, nbOfChannels, _sampleRate, is3dClip, false);
            _source.clip = _clip;
        }

        public void PrepareClip(float[] beat0Data, float[] beat1Data)
        {
            _clip.SetData(beat0Data, 0);
            _clip.SetData(beat1Data, _samplesPerBeat);

            _nextSetDataPoint = _samplesPerBeat; // could add a value here to set data later in the beat for less latency
            _nextOffsetInClip = 2*_samplesPerBeat;

            _beatToFill = 2;
        }

        public void StartPlaying()
        {
            _source.Play();

            StartCoroutine(MonitorClip());
        }

        bool _clipWillLoop;
        bool _keepSettingData;
        IEnumerator MonitorClip()
        {
            _keepSettingData = true;

            while(_keepSettingData)
            {
                if(_clipWillLoop)
                {
                    while(_source.timeSamples>10000) // wait for the clip to have looped
                    {
                        yield return null;
                    }
                    _clipWillLoop = false;
                }

                while(_source.timeSamples<_nextSetDataPoint) // wait for the next beat to set new data. So, when playback has just passed beat 2, set data for beat 3.
                {
                    yield return null;
                }

                _clip.SetData(GetDataForBeatNb(_beatToFill), _nextOffsetInClip); // set data at the correct offset in the clip

                _beatToFill = (_beatToFill+1)%nbOfBeats;

                _nextSetDataPoint = (_nextSetDataPoint+_samplesPerBeat)%_clipLength; // re-compute next set data point and offset

                _nextOffsetInClip = (_nextOffsetInClip+_samplesPerBeat)%_clipLength;

                if(_nextOffsetInClip<_source.timeSamples)
                {
                    _clipWillLoop = true; // take care of the special case when the clip's timeSamples are reset to 0
                }
            }
        }

        float[] GetDataForBeatNb(int beatNb)
        {
            float[] data;
            // data should be ready, preferably in another class, indexed by beat
            return data;
        }
    }
    Bonuses:
    - No timing problems, ever. Sample accurate.
    - Change tempo on any beat without resizing the clip. Resizing an AudioClip means destroying one and creating a new one, a great opportunity to feed the garbage collector! Here, no destroy, no new allocations, nothing.
    - Uses far less RAM. Not limited: a 256-step sequence doesn't eat up more RAM or need more processing than an 8-step one. With minimal adjustments, it can also accept irregular divisions of beats (heck, I'm changing the duration of every single beat by an unstable amount continuously without a hitch).

    Issues:
    Setting data every beat will tax the CPU more than looping a simple clip of the right size.
    I've tested 5 simultaneous sources in the editor, running on a 2010 iMac, without issues, but wouldn't be surprised to see framerate drops above that. SetData could be spread over multiple frames to avoid spikes. Ultimately, the right method depends on your needs and the platforms your software is aimed at.
     
    Last edited: Mar 3, 2013
  30. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi Carl Emil,

    Why should the sync clip be that long? A waste of useful RAM, especially on iOS, IMO.
    One beat is enough, and far more convenient for timing stuff:

    samplesUntilNextBeat = syncClip.samples - syncClipSource.timeSamples;
    mySourceToPlayInSyncWithNextBeat.Play(samplesUntilNextBeat);

    What was your rationale behind needing such a long sync clip?
     
  31. metaphysician

    metaphysician

    Joined:
    May 29, 2012
    Posts:
    190
    Thanks gregzo! A lot of great stuff to chew on there. I did not know that FMOD downsampled iOS audio in Unity. That's depressing, though probably not a big deal for me. I'm not sure if the ring buffer method will work for me. If each source could be instanced to be polyphonic it might work, but I wasn't originally planning on changing the tempo on the fly too much, though changing one layer's timing to be a multiplier or divisor was a thought.

    I'm planning on making my version one beat as well, so thanks for that bit of math on the sync clip method.

    @Carl - thanks - your method works perfectly.

    @everyone, here's a suggestion I was thinking of with regard to triggering audio. We all seem to know that Play() has the issue of cutting off the sound. PlayOneShot seems to be better but has bad timing, so how about creating and then destroying a standard AudioSource and using the sample-offset Play() on it? I came across this script when someone asked a question about PlayClipAtPoint:

    Code (csharp):
    function PlayClipAt(clip: AudioClip, pos: Vector3): AudioSource {
        var tempGO = GameObject("TempAudio"); // create the temp object
        tempGO.transform.position = pos; // set its position
        var aSource = tempGO.AddComponent(AudioSource); // add an audio source
        aSource.clip = clip; // define the clip
        // set other aSource properties here, if desired
        aSource.Play(); // start the sound
        Destroy(tempGO, clip.length); // destroy object after clip duration
        return aSource; // return the AudioSource reference
    }
    So I was thinking I could translate that to C#. So far I have this, but I'm getting errors. Carl, can you help me out here? Here's what I have:

    Code (csharp):
    void AudioSource PlayClipAt (AudioClip clip, Vector3 pos ) {
        // create the temp object
        public object tempGO = GameObject("TempAudio");
        // set its position
        tempGO.transform.position = pos;
        // add an audio source
        public AudioSource aSource = tempGO.AddComponent(AudioSource);
        // define the clip
        aSource.clip = clip;
        // set other aSource properties here, if desired
        // start the sound
        aSource.Play();
        // destroy object after clip duration
        Destroy(tempGO, clip.length);
        // return the AudioSource reference
        return aSource;
    }
    The main issue is that I'm unsure of how to declare a void whose return type is an AudioSource.

    At any rate, the logic I'm thinking of is that the AudioSource is created at collision (or whatever event) time. The AudioSource can use Play() and its more precise placement via samples to sync the sound. Pitch values can be passed, envelopes could potentially be added by running an array changing volume over time, etc. If there's an issue of conflict with AudioSources being destroyed and cutting off audio, maybe create an array of sufficient size so that each AudioSource is unique enough (maybe like a ring buffer?).

    Thoughts? Suggestions? Is it a ton of overhead on the CPU to do this? I've been told that the above method is virtually identical to PlayOneShot resource-wise, and this way it keeps track of each instance. Seems a better method.

    scott
     
  32. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    @metaphysician

    Hi!

    In C#, you need to specify a method's return type. void tells the compiler that the following method doesn't return anything. So this:
    Code (csharp):
    void AudioSource PlayClipAt (AudioClip clip, Vector3 pos )
    is telling the compiler that the method PlayClipAt is private (implicit), returns nothing, and also returns an AudioSource, which is a contradiction.
    You probably need this instead :
    Code (csharp):
    public AudioSource PlayClipAt (AudioClip clip, Vector3 pos )
    Syntax issues aside, I wouldn't advise this on iOS, as you'll be creating and destroying objects all the time and feeding the GC monster.

    As you've noticed, Play(offset) stops a source if it is already playing. One solution is to have pairs of AudioSources, and to play the clip on the free one as you fade out and stop the busy one (voice stealing). If all you need to do is play clips without digging into their data, it would be a viable option. Each track of your sequencer could have 2 (or more) AudioSources and apply a "steal oldest" behaviour.
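
    Something along these lines, just a sketch with made-up names; in practice you'd want a quick fade-out instead of a hard Stop() to avoid clicks:

    Code (csharp):
    using UnityEngine;

    // Sketch of the "pair of AudioSources per track" idea, with steal-oldest behaviour.
    public class TwoVoiceTrack : MonoBehaviour
    {
        public AudioClip clip;
        AudioSource[] voices;
        int next; // which voice the next trigger will use

        void Start()
        {
            voices = new AudioSource[2];
            for( int i = 0; i < voices.Length; i++ )
            {
                voices[i] = gameObject.AddComponent<AudioSource>();
                voices[i].clip = clip;
            }
        }

        // Trigger the clip offsetSamples from now, stealing the oldest voice if needed.
        public void Trigger( ulong offsetSamples )
        {
            AudioSource v = voices[next];
            if( v.isPlaying ) v.Stop(); // steal: ideally fade this out over a few ms instead
            v.Play( offsetSamples );    // Play(ulong) = the delay-in-samples overload
            next = ( next + 1 ) % voices.Length;
        }
    }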

    Another route would be to create empty clips (one per track), and to manage audio data yourself. This is guaranteed to be 100% precise. Read about GetData and SetData! Main issue, as mentioned in my previous post, is that resizing tracks becomes more cumbersome.

    Oh, and the proper C# translation of the JS code above would be:
    Code (csharp):
    AudioSource PlayClipAt (AudioClip clip, Vector3 pos ) {
        // create the temp object
        GameObject tempGO = new GameObject("TempAudio"); // new keyword in c#
        // set its position
        tempGO.transform.position = pos;
        // add an audio source
        AudioSource aSource = tempGO.AddComponent(typeof(AudioSource)) as AudioSource; // need to cast, as AddComponent returns a Component
        // define the clip
        aSource.clip = clip;
        // set other aSource properties here, if desired
        // start the sound
        aSource.Play();
        // destroy object after clip duration
        Destroy(tempGO, clip.length);
        // return the AudioSource reference
        return aSource;
    }
    But I wouldn't advise it anyway.
     
    Last edited: Mar 4, 2013
  33. nice sprites

    nice sprites

    Joined:
    Mar 20, 2013
    Posts:
    2
    I've been playing around with Carl Emil's code and I was wondering if there is any way whatsoever to change the bpm on the fly. I am also primarily a musician and not ultra good with programming.

    Any help would be appreciated.
     
  34. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    @nice sprites
    Read the page again, I've posted a script to that effect.
    It doesn't handle the sequencer part (tracking of beats and bars), but implementing that is not the hard part there!
     
  35. nice sprites

    nice sprites

    Joined:
    Mar 20, 2013
    Posts:
    2
    Thanks for your reply. I didn't really understand your code because I'm still quite new at this. I did find a method of changing it, but it doesn't change whilst it is playing. However, if you add a couple of buttons for stop and play, and the play button's if statement triggers the void Start() bit of code, the bpm adjusts when it restarts.
     
  36. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi nice sprites,

    There are 4 ways I've experimented with to change tempo on the fly.

    1) Use a looping AudioClip that is as long as your sequence requires. Fill it with data as the user fiddles with your sequencer's steps. To change tempo, build a new AudioClip of the required number of samples, assign it to a second AudioSource, set its timeSamples to the next beat, stop source 1 and start source 2. This is made much easier by the newly introduced audio functions of Unity 4.1. See my audio bugs thread, where I've posted scripts demonstrating similar stuff.

    Advantages: easy to implement. Will change tempo on the next beat, close enough to a real-time feeling.
    Disadvantages: RAM-heavy for long sequences. Creating a new clip and destroying the old one means the garbage collector will kick in, triggering a framerate drop, especially on mobile devices.

    2) Each step is a separate AudioSource. Simply adjust when they are fired according to tempo. Easy to implement with 4.1's new PlayScheduled function (a small sketch follows this list). No need to use SetData here, but clips might overlap. Will use lots of sources. Could be combined with a voice-stealing implementation (2-3 sources per track), which would complicate things further, making SetData seem like a much more elegant solution IMO.

    3) Use the ring AudioClip method described in a post above.

    4) Go directly through OnAudioFilterRead. Lowest-latency solution. Requires a similar implementation to the ring AudioClip method, except the buffer's size is very small, requiring you to keep track of which chunks of your samples' data you're feeding into the buffer. Beware, as OnAudioFilterRead runs on a separate thread.
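
    To illustrate option 2, here is a bare-bones sketch (made-up names, tick clips assumed to be shorter than one beat) that schedules beats against the DSP clock with two alternating sources:

    Code (csharp):
    using UnityEngine;

    // Bare-bones DSP-clock metronome using PlayScheduled (Unity 4.1+).
    public class ScheduledMetronome : MonoBehaviour
    {
        public AudioSource[] tickSources = new AudioSource[2]; // two sources, alternated
        public double bpm = 120;

        int flip;
        double nextBeatDspTime;

        void Start()
        {
            // schedule the first beat slightly in the future so it is never late
            nextBeatDspTime = AudioSettings.dspTime + 0.1;
            tickSources[flip].PlayScheduled( nextBeatDspTime );
        }

        void Update()
        {
            // once the queued beat is less than ~50 ms away, queue the following one
            // on the other source, so each source only ever has one pending Play
            if( AudioSettings.dspTime > nextBeatDspTime - 0.05 )
            {
                nextBeatDspTime += 60.0 / bpm; // changing bpm takes effect on the next beat
                flip = 1 - flip;
                tickSources[flip].PlayScheduled( nextBeatDspTime );
            }
        }
    }

    Because the next beat is always queued on the source that is currently idle, a bpm change simply alters when the following beat gets scheduled, with no clip resizing and no garbage.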

    Which platform do you have in mind, and what's your precise use case? I'll try to advise accordingly.

    Cheers,

    Gregzo
     
    Last edited: Mar 20, 2013
  37. willemsenzo

    willemsenzo

    Joined:
    Nov 15, 2012
    Posts:
    585
    I'd like to share my attempt at a step sequencer. It plays 100% in time without sync issues. I think being able to change bpm while the sequence is playing is just as important as accurate timing, so that's what I aimed for. Use start/space to stop the sequence.

    http://www.textureminator.com/WebSequencer/
     
    Last edited: Sep 18, 2014
  38. kinnik

    kinnik

    Joined:
    Sep 6, 2014
    Posts:
    17
    Awesome work willemsenzo! Best timing I've seen aside from the $100 one.. :) What method did you use to get this accurate timing? Are you willing to share any code? Thanks for sharing; you keep my hope alive for getting my own working!

    @gregzo: Your solution sounds lovely for my purposes but I'm too new to figure it out I think. "Data should be ready, prepared in another class"... I have no idea how to feed that and what to feed it. :( Without anything extra I get an error *Use of unassigned local variable `data'

    Anyone willing to give me some tips?
     
    Last edited: Sep 10, 2014
  39. willemsenzo

    willemsenzo

    Joined:
    Nov 15, 2012
    Posts:
    585
    Thanks kinnik. The timing is as perfect as possible because I actually write samples to a designated data buffer; there is no 'scheduling'. No AudioClips are used to play audio. If there is one thing I can suggest you do: read the entire Unity documentation about audio (it isn't too much material), because it's crucial that you have an idea of how digital audio works and how Unity goes about it. The concept itself is really easy; once you understand just a little bit more you can go and explore.

    To give a hint, I did what gregzo proposed as the 4th option. This requires that you understand the basics of digital audio, and you need to know some programming as well. There is also maths involved to calculate the bpm, how many samples to write and where to write them, but that's pretty straightforward once you're in the process (I'm not a math guru but figured it all out by just playing around and observing). A good practice could be to toy around with OnAudioFilterRead and AudioClip.GetData, because those are the functions you're most likely going to need.
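
    To give a feel for the approach, here is a stripped-down sketch (not my actual code, just the general shape of it): a synthesized click gets written straight into the output buffer on every beat, so the timing is sample accurate regardless of framerate.

    Code (csharp):
    using UnityEngine;

    // Stripped-down sketch of the OnAudioFilterRead approach: a click is mixed
    // straight into the output buffer on every beat.
    [RequireComponent( typeof(AudioSource) )]
    public class FilterClick : MonoBehaviour
    {
        public double bpm = 120;

        float[] clickData;            // mono samples of the click sound
        int samplesPerBeat;
        int samplesUntilNextBeat;
        int clickReadPos = -1;        // -1 = no click currently sounding

        void Start()
        {
            int sampleRate = AudioSettings.outputSampleRate;
            samplesPerBeat = (int)( sampleRate * 60.0 / bpm );

            // synthesize a short decaying blip instead of loading a clip
            clickData = new float[sampleRate / 20]; // 50 ms
            for( int i = 0; i < clickData.Length; i++ )
                clickData[i] = Mathf.Sin( i * 0.5f ) * ( 1f - i / (float)clickData.Length ) * 0.5f;
        }

        // Runs on the audio thread. data is interleaved: frames * channels samples.
        void OnAudioFilterRead( float[] data, int channels )
        {
            if( clickData == null ) return; // Start() hasn't run yet

            int frames = data.Length / channels;
            for( int frame = 0; frame < frames; frame++ )
            {
                if( samplesUntilNextBeat <= 0 )      // beat boundary: restart the click
                {
                    clickReadPos = 0;
                    samplesUntilNextBeat = samplesPerBeat;
                }
                samplesUntilNextBeat--;

                float s = 0f;
                if( clickReadPos >= 0 && clickReadPos < clickData.Length )
                    s = clickData[clickReadPos++];

                for( int c = 0; c < channels; c++ )
                    data[frame * channels + c] += s; // mix into whatever is already there
            }
        }
    }

    In the real thing you copy chunks of your drum samples (grabbed once with AudioClip.GetData) into the buffer instead of a synthesized blip, and you recompute samplesPerBeat whenever the bpm changes.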
     
    Last edited: Sep 18, 2014
  40. smokingbunny

    smokingbunny

    Joined:
    Jul 24, 2014
    Posts:
    98
    Just came across this post, so excuse the resurrection of it ;)

    I used to make tons of music software in good ol' Max/MSP/Jitter, which I loved doing, but I have been thinking for some time about making something with Unity. Plus the upside is proper code, much faster, and you can export to multiple platforms, which is great.
    Max/MSP/Jitter was brilliant, because you could see what connects with what, but it did become slow since it's visual-based coding.

    So cool, thanks for this. I'd like to make some of the things I made before, which you can see here:

    and here:


    Actual pro sound designers bought these as well [when I sold them], such as Richard Devine & Drasko V., which was awesome.
    I would also like to look at synths and then ReWire into a DAW.
     
  41. Nidre

    Nidre

    Joined:
    Dec 12, 2012
    Posts:
    22
    Hello Everyone,

    I wonder if you guys are still looking for an audio sequencer solution. If so, I have created one that works reliably up to 250-300 bpm on my computer. It is also available as an open-source project for anyone to use and contribute to!

    https://github.com/Nidre/Unity-Audio-Sequencer

    Basic feature list:
    • Seamless and stable audio sequencer.
    • Stable even at high metronome rates.
    • Ability to change the bpm or the sequence at runtime.
    • Works with Unity3D 5+ since it uses OnAudioFilterRead to access the audio buffer.
    Components
    • Sequencer: Basic and main component that actually plays the audio files.
    • Sequencer Group: Manages child sequencers.
    • Sequencer Driver: Manages any list of Sequencer Groups or Sequencers.
    • Sequencer Base: Base class for all of the classes above. Should not be used by itself.
     
  42. moshangmusic

    moshangmusic

    Joined:
    Jul 16, 2015
    Posts:
    9
    @Nidre This is awesome - thanks a lot!
     
  43. r618

    r618

    Joined:
    Jan 19, 2009
    Posts:
    1,302
    Just a remark: OnAudioFilterRead has been present in Unity for a very long time - probably since 2.x. If you're basing the 5+ version requirement on it, that is not a valid reason.