
Writing AudioListener.GetOutputData to wav - problem

Discussion in 'Developer Preview Archive' started by gregzo, Jan 13, 2012.

  1. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi! Getting to grips with audio in 3.5, making good progress in spite of my lack of experience. Next task: recording the mixer's total output to a wav file. I've tried using PCMReaderCallback, but get an error: GetOutputData can only be called from the main thread. I also tried, naive me, storing GetOutputData's output every FixedUpdate and building the data from that; of course I got very choppy results. Any help would be great!
     
  2. soren

    soren

    Joined:
    Feb 18, 2008
    Posts:
    123
    You shouldn't use GetOutputData from PCMReaderCallback(). Just save the float[] passed to the function to disk. Remember to prepend a WAV header. I'll add an example of this to the final documentation.
     
  3. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi Soren! Just a bit confused by your answer... GetOutputData fills a float array with previous audio data, but to record continuously I need to call it at precise intervals, hence my attempt at using PCMReaderCallback to do that. Could you give a slightly more precise hint? As for the wav header, that's fine, I fought my way through that one yesterday and wrote my first audio file with sheer joy! It was a 2s piano E5...
     
  4. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    To make my question clearer:
    GetOutputData returns previous audio data. I pass it an array of, let's say, 512 floats. How can I fill a new array of floats with the data precisely every 512 samples (i.e. every 512/44100 seconds)? Should I use the reference AudioClip trick here as well, calculate how many samples have been played since the last update, and then create a float array of the right size to get just the new data? Or is there, as I strongly suspect and violently hope, another, much simpler way of doing things?

    Cheers!
     
  5. soren

    soren

    Joined:
    Feb 18, 2008
    Posts:
    123
    No, you shouldn't use GetOutputData, it's impossible to sync correctly (as you're finding out yourself).
    Use OnAudioFilterRead, and place it either on the AudioSource you want to record or on the AudioListener if you want the entire mix.

    All samples are passed through OnAudioFilterRead():

    void OnAudioFilterRead(float[] data, int channels)
    {
        // just write data to disk here... after a WAV header.
    }
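    As an editorial aside: the "just write data to disk" step amounts to scaling each float in [-1.0, 1.0] to the Int16 range, which is exactly what the scripts later in this thread do in ConvertAndWrite. Here is a standalone Python sketch of that conversion (floats_to_pcm16 is a hypothetical helper for illustration, not a Unity API):

```python
import struct

def floats_to_pcm16(samples):
    """Convert floats in [-1.0, 1.0] to little-endian 16-bit PCM bytes."""
    out = bytearray()
    for s in samples:
        # Clamp to the valid range, then scale to the Int16 range.
        s = max(-1.0, min(1.0, s))
        out += struct.pack('<h', int(s * 32767))
    return bytes(out)
```

Clamping matters on the audio thread: filter chains can momentarily exceed full scale, and an unclamped value would overflow the 16-bit range.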
     
  6. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Thanks very much! Just finished implementing a start/stop writing-to-wav function, which outputs the listener audio. I'll post it if anyone asks!

    Happy at last,

    Gregzo
     
  7. AdbC99

    AdbC99

    Joined:
    Jan 18, 2012
    Posts:
    52
    Hi,

    I saw this thread and I am having the same problem, would you be able to post a solution? Do you know what the sample rate of the wav file is, 44kHz?

    Cheers,
    Alistair
     
  8. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
  9. soren

    soren

    Joined:
    Feb 18, 2008
    Posts:
    123
    The sample rate should match AudioSettings.outputSampleRate,
    unless you resample yourself.
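    An editorial note on "resample yourself": the simplest approach is linear interpolation between neighboring samples. A naive Python sketch of the idea (resample_linear is a hypothetical helper; real projects should use a proper resampling library with filtering to avoid aliasing):

```python
def resample_linear(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampling of a mono float signal."""
    if src_rate == dst_rate or not samples:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate      # fractional position in the source
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        # Blend the two neighboring source samples.
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```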
     
  10. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Hi!

    Sorry for the wait. The following script will, when attached to a listener, record a wav file in stereo.
    Press "r" to start recording, and "r" again to stop. Multiple recordings will overwrite the same file!

    My code is surely not the best; I started coding not long ago. More experienced people, feel free to make fun and enlighten!

    Cheers,

    Gregzo

    Code (csharp):
    #pragma strict

    import System.IO; // for FileStream
    import System; // for BitConverter and Byte type

    private var bufferSize : int;
    private var numBuffers : int;
    private var outputRate : int = 44100;
    private var fileName : String = "recTest.wav";
    private var headerSize : int = 44; // default for uncompressed wav

    private var recOutput : boolean;

    private var fileStream : FileStream;

    function Awake()
    {
        AudioSettings.outputSampleRate = outputRate;
    }

    function Start()
    {
        AudioSettings.GetDSPBufferSize(bufferSize, numBuffers);
    }

    function Update()
    {
        if (Input.GetKeyDown("r"))
        {
            print("rec");
            if (recOutput == false)
            {
                StartWriting(fileName);
                recOutput = true;
            }
            else
            {
                recOutput = false;
                WriteHeader();
                print("rec stop");
            }
        }
    }

    function StartWriting(name : String)
    {
        fileStream = new FileStream(name, FileMode.Create);
        var emptyByte : byte = new byte();

        for (var i : int = 0; i < headerSize; i++) // preparing the header
        {
            fileStream.WriteByte(emptyByte);
        }
    }

    function OnAudioFilterRead(data : float[], channels : int)
    {
        if (recOutput)
        {
            ConvertAndWrite(data); // audio data is interlaced
        }
    }

    function ConvertAndWrite(dataSource : float[])
    {
        var intData : Int16[] = new Int16[dataSource.length];
        // converting in 2 steps: float[] to Int16[], then Int16[] to Byte[]

        var bytesData : Byte[] = new Byte[dataSource.length * 2];
        // bytesData array is twice the size of the dataSource array
        // because a float converted to Int16 is 2 bytes.

        var rescaleFactor : int = 32767; // to convert float to Int16

        for (var i : int = 0; i < dataSource.length; i++)
        {
            intData[i] = dataSource[i] * rescaleFactor;
            var byteArr : Byte[] = BitConverter.GetBytes(intData[i]);
            byteArr.CopyTo(bytesData, i * 2);
        }

        fileStream.Write(bytesData, 0, bytesData.length);
    }

    function WriteHeader()
    {
        fileStream.Seek(0, SeekOrigin.Begin);

        var riff : Byte[] = System.Text.Encoding.UTF8.GetBytes("RIFF");
        fileStream.Write(riff, 0, 4);

        var chunkSize : Byte[] = BitConverter.GetBytes(fileStream.Length - 8);
        fileStream.Write(chunkSize, 0, 4);

        var wave : Byte[] = System.Text.Encoding.UTF8.GetBytes("WAVE");
        fileStream.Write(wave, 0, 4);

        var fmt : Byte[] = System.Text.Encoding.UTF8.GetBytes("fmt ");
        fileStream.Write(fmt, 0, 4);

        var subChunk1 : Byte[] = BitConverter.GetBytes(16);
        fileStream.Write(subChunk1, 0, 4);

        var two : UInt16 = 2;
        var one : UInt16 = 1;

        var audioFormat : Byte[] = BitConverter.GetBytes(one);
        fileStream.Write(audioFormat, 0, 2);

        var numChannels : Byte[] = BitConverter.GetBytes(two);
        fileStream.Write(numChannels, 0, 2);

        var sampleRate : Byte[] = BitConverter.GetBytes(outputRate);
        fileStream.Write(sampleRate, 0, 4);

        var byteRate : Byte[] = BitConverter.GetBytes(outputRate * 4);
        // sampleRate * bytesPerSample * number of channels, here 44100*2*2
        fileStream.Write(byteRate, 0, 4);

        var four : UInt16 = 4;
        var blockAlign : Byte[] = BitConverter.GetBytes(four);
        fileStream.Write(blockAlign, 0, 2);

        var sixteen : UInt16 = 16;
        var bitsPerSample : Byte[] = BitConverter.GetBytes(sixteen);
        fileStream.Write(bitsPerSample, 0, 2);

        var dataString : Byte[] = System.Text.Encoding.UTF8.GetBytes("data");
        fileStream.Write(dataString, 0, 4);

        var subChunk2 : Byte[] = BitConverter.GetBytes(fileStream.Length - headerSize);
        fileStream.Write(subChunk2, 0, 4);

        fileStream.Close();
    }
     
    Last edited: Jan 20, 2012
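    Editorial note for readers checking the header fields above: the 44 bytes that WriteHeader() assembles follow the canonical uncompressed PCM WAV layout, which can be expressed compactly in Python with struct.pack (wav_header is a hypothetical helper for illustration only):

```python
import struct

def wav_header(data_bytes, sample_rate=44100, channels=2, bits=16):
    """Build the 44-byte canonical WAV header for uncompressed PCM data."""
    block_align = channels * bits // 8          # bytes per sample frame
    byte_rate = sample_rate * block_align       # bytes per second
    return struct.pack(
        '<4sI4s4sIHHIIHH4sI',
        b'RIFF', 36 + data_bytes,   # RIFF chunk size = file length - 8
        b'WAVE', b'fmt ', 16,       # fmt subchunk is 16 bytes for PCM
        1, channels,                # audio format 1 = linear PCM
        sample_rate, byte_rate,
        block_align, bits,
        b'data', data_bytes)        # data subchunk size in bytes
```

Comparing these offsets against the script's field order is a quick way to sanity-check a hand-written header.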
  11. dyox

    dyox

    Joined:
    Aug 19, 2011
    Posts:
    483
    Hi, I see your code converts the OnAudioFilterRead data to a wav file, but do you know how to convert this data to AudioClip data?

    I would like to record the output and write it to another AudioClip.
    If you have any solution.
    Thx.
     
  12. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Use AudioClip.GetData and SetData; it's well documented if you search the forum! Very simple.
     
  13. toomas

    toomas

    Joined:
    Jun 19, 2009
    Posts:
    16
    Thanks for sharing the code. I am trying to convert it into C# which in itself is quite effortless, but I am having some trouble with other things.

    I get the following compile errors:
    Property or indexer `UnityEngine.AudioSettings.outputSampleRate' cannot be assigned to (it is read only)
    `UnityEngine.AudioSettings' does not contain a definition for `GetDSPBufferSize'

    According to the Unity Scripting Manual, the code is right. I have 3.4.2f3 Pro and can't upgrade to 3.5 yet, because of known problems with it. Any ideas what might be going on, or what version it runs on? I can share the resulting code if someone is interested :)
     
  14. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    @toomas
    Hi!

    You won't be able to get this to work if you don't upgrade to 3.5...

    Sorry!

    G
     
  15. Kolger

    Kolger

    Joined:
    Feb 25, 2012
    Posts:
    7
    Hey, gregzo!

    I tried using your code on Unity 3.5, and it worked just fine, apparently. However, the audio file created doesn't have the slightest sound, not even background noise. Is it possible that I'm forgetting something? I also tried to port your JS code to a C# GUI I had built, with 3 simple buttons (Rec, Stop, Play), but it didn't work as expected (with the JS code, the generated file has some hundred bytes of size, but with the C# version, it always has 0 bytes).

    On C#, since I don't need to store the audio into a file while speaking (I only need it when the user presses "Stop"), I didn't use this function:

    Code (csharp):
    function OnAudioFilterRead(data : float[], channels : int)
    {
        if(recOutput)
        {
            ConvertAndWrite(data); //audio data is interlaced
        }
    }
    Instead, I got the "audio.clip.GetData()" float array and passed it to the ConvertAndWrite() function.

    By the way, have you tried (or managed) to use the new Microphone.Start() class/method which comes on 3.5? I managed to record audio with the examples on the documentation and play the recorded audio during runtime, but I wasn't able to save it to a .wav file. Any tips on that?

    Thanks in advance!
     
  16. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    @Kolger

    Are you sure your script is attached to the listener object in your scene? And what are you trying to output exactly?

    The posted script outputs the playing audio data to a wav (hence the use of OnAudioFilterRead). To output a non playing clip, prepare the header with StartWriting(), then pass a float[] to ConvertAndWrite(), and finalize with WriteHeader();

    Hope it helps!

    I don't know C#, so can't help there, sorry...
     
    Last edited: Feb 25, 2012
  17. Kolger

    Kolger

    Joined:
    Feb 25, 2012
    Posts:
    7
    If by "listener object" you mean the camera, yes, it is :)
    If that's not the case, then I'm not sure. I'm fairly new to Unity3D, so I'm not used to the mechanics yet.

    I'm trying to record a user's brief speech into an audio file, but it doesn't need to be recorded as the user speaks. If I can manage to save it to a file when he clicks the "Stop" button, that's alright for me.

    I've followed the same sequence of methods you've described, but all I got was a 0-byte .wav file (while using C#). Copying your exact code into a JavaScript file, attaching it to the camera and running it (pressing 'r' to start recording and 'r' again to stop) got me a .wav file with some bytes in it, but I couldn't hear any sound at all.

    Perhaps anybody here has a clue on the problem? Or maybe soren has any examples of how to save audio.clip into a file? :)

    Thanks again!
     
  18. Kolger

    Kolger

    Joined:
    Feb 25, 2012
    Posts:
    7
    I've finally managed to write it to a file in C# (posting the code later, in case it helps anyone)!

    However, the file itself sounds weird, like when you speed up a recording and get that thin, helium-like voice :p I'm using Microphone.Start() to record and the audio.clip variable to store the audio clip. I think the problem might be in that WriteHeader() method you've used. Are you sure of all the values you put in there?

    Thanks!
     
  19. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Helium-pitched voice = sample rate discrepancy. My code is just an example recording 2 channels (stereo) at a 44.1 kHz sample rate with a bit depth of 16. Adjust the values accordingly if your sample rate is different!
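    The arithmetic behind the chipmunk effect, as an editorial aside: a WAV player paces playback by the header's sample rate, so a header rate above the true capture rate speeds everything up (and raises pitch) proportionally. A trivial Python illustration (playback_speed is a hypothetical helper; the 22050/44100 numbers are example values):

```python
def playback_speed(recorded_rate, header_rate):
    """Factor by which audio plays too fast when the header rate is wrong."""
    return header_rate / recorded_rate

# Mic captured at 22050 Hz but the header claims 44100 Hz:
# every sample is consumed twice as fast -> chipmunk voice.
speed = playback_speed(22050, 44100)
```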
     
  20. dansav

    dansav

    Joined:
    Sep 22, 2005
    Posts:
    406
    Hi. I'm using gregzo's code to try and record the microphone in 3.5 and save it, but I'm not getting anything. I also have an ogg sound playing; it doesn't record that either. I think my mic is working, but am not sure. The code has some meter things in the inspector. Are those level meters? If so, I'm not getting anything during the record.

    I have the script on the audio listener. It seems to record a .wav file but nothing is on it. Did anyone ever figure this out?
    Is gregzo's solution the only one out there?

    Why didn't Unity include this function in the API?

    Thanks,
    Dan
     
  21. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    The Microphone class is used to generate AudioClip objects. Easiest would be to wait until the recording is finished, use GetData to get the float array from the recorded clip, and feed that into my write function.

    Weird that your ogg is not being written to the wav, though... Do post some code!
     
  22. dansav

    dansav

    Joined:
    Sep 22, 2005
    Posts:
    406
    Hi. The code I'm using is yours. I have it attached to the audio listener (I also tried attaching it to the audio source). I start the microphone with a button (see below), then hit the "r" key to start/stop recording.

    Something records but it has no sound upon playback. I mean it records a new file with some data in it, but I can't hear anything.

    To start/stop the microphone I use this:

    if (GUI.Button (Rect (0,300,100,25), "record mic")) {
        audio.clip = Microphone.Start("Blue Snowball", true, 10, 44100);
        audio.Play();
    }

    if (GUI.Button (Rect (100,300,100,25), "stop record mic")) {
        Microphone.End("Blue Snowball");
    }

    To start the .ogg I use this:

    var mp3WWW = new WWW("file://" + writeablePath + "/testogg.ogg");
    yield mp3WWW;

    var myclip = mp3WWW.audioClip;
    audio.clip = myclip;
    audio.pitch = 1;
    audio.Play();

    I get the microphone name from this code:

    function getPathAndDevices () {
        for (var device in Microphone.devices) {
            Debug.Log("Name: " + device);
        }
    }

    Thanks for any ideas,

    Dan
     
  23. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    Could you first try out my code in the simplest of situations, to make sure you're implementing it correctly? No www, no mic input, just 2 gameObjects in the scene, one playing a clip and one recording?

    Let me know!

    P.S.: Naming an ogg file mp3WWW is a bit confusing...

    P.S.2: I don't see you using GetAudioClip, which according to the docs should be called to create an AudioClip from an audio file.
     
    Last edited: Mar 13, 2012
  24. sonicviz

    sonicviz

    Joined:
    May 19, 2009
    Posts:
    950
    @Soren

    I have some strange things happening I hope you could explain.

    I converted the code gregzo included before to C# and tested it.
    It works fine with OnAudioFilterRead, and the recorded wave file plays back fine.
    However, you need to unmute the AudioSource in order for it to be picked up, which is a pain when using the mic, as it feeds back due to the input being played through the output. In my app I need the input muted. As the microphone defaults to a 2D sound, you can't hack a mute by moving the listener beyond the falloff zone, so there's no way to mute.

    So I decided to let the mic record it into the clip buffer and just grab the data:
    float[] samples = new float[audio.clip.samples * audio.clip.channels];
    audio.clip.GetData (samples, 0);

    If I grab the clip and play it back in the Unity editor, it sounds fine.
    But when I write it using the same wave file routine as above, it plays back twice as fast.
    I don't want my users sounding like chipmunks!

    Is there a bit depth or sample rate issue here?
    What do I need to do to fix this?
    You mentioned before you would include a wave writer example in the docs.
    Could you do this for both use cases as outlined above please?

    ty!
     
  25. sonicviz

    sonicviz

    Joined:
    May 19, 2009
    Posts:
    950
    bumpty bump bumo...bump! *earth to soren...earth to soren....come in soren...please!*
     
  26. Dark Table

    Dark Table

    Joined:
    Nov 25, 2008
    Posts:
    228
    Thanks so much for this script. You saved me a ton of research.

    I made a C# script that saves an AudioClip as a .wav file based on your code.

    https://gist.github.com/2317063

    usage:
    Code (csharp):
    1. SavWav.Save("myfile", myAudioClip);
     
  27. Kolger

    Kolger

    Joined:
    Feb 25, 2012
    Posts:
    7
    Forgot to come back to this topic :p

    gregzo's code works like a charm, btw. I've ported it to C#, and after performing the due modifications (syntax, of course), the procedure is quite simple.

    Code (csharp):
    // Call this to start recording. 'null' in the first argument selects the default microphone. Add some mic checking later.
    audio.clip = Microphone.Start (null, false, microphoneMaxRecordingTime, outputRate);
    // Call his StartWriting method to write empty space in the file, later used for the wav header.
    StartWriting (fileName);
    // Create a float array big enough to store the recording time times the output rate, and copy the audio clip into it.
    float[] data = new float[(int)Math.Round (((Time.time - startTime) * outputRate), MidpointRounding.AwayFromZero)];
    audio.clip.GetData (data, 0);
    // Call his ConvertAndWrite method to convert the floats into bytes and write them into the file.
    ConvertAndWrite (data);
    // Finally, call his WriteHeader method to write the wav header into the blank space reserved earlier.
    WriteHeader ();
    Hope it helps :)
     
  28. Loren-Logic

    Loren-Logic

    Joined:
    Jun 29, 2009
    Posts:
    5
    I'm a long-time lurker and a devout Unity Pro user, but this, my first post, is to celebrate Gregzo's and Soren's solution to input via the new Microphone class and output to a .WAV file on a stand-alone hard drive. I deeply appreciate this solution, guys!

    I attached Gregzo's script to my main camera, which had an Audio Source and was the Audio Listener. In an audio control script I used Microphone.Start() to record an audio clip. When I clicked a STOP button made with GUI my script copied the clip to a global static AudioClip variable in a script named 's'. When a slightly modified version of Gregzo's Update() detected that the global static AudioClip was not null, it played the audio clip I'd just recorded, and it wrote out a perfect .WAV file.

    Microphone.Start() forces me to pre-determine how long the recording is to be; that's still an unsolved inconvenience, but Gregzo's code handled the newly-recorded audio clip as well as it handles any other, and makes a .WAV file out of it. Just perfect. Thanks, Gregzo, and thanks, Soren, for your input too. I don't think it is formatted correctly, and maybe my copy has garbled the syntax, but here's my slightly modified Gregzodian Solution.

    Code (csharp):
    #pragma strict

    import System.IO; // for FileStream
    import System; // for BitConverter and Byte type

    private var pintBufferSize : int;
    private var pintNumBuffers : int;
    private var pintOutputRate : int = 44100;
    private var pstrFilename : String = "recTest.wav";
    private var pintHeaderSize : int = 44; // default for uncompressed wav

    private var pbooOutput : boolean;

    private var fileStream : FileStream;

    // Audio Source
    private var pausAudio : AudioSource;

    function Awake(){
        AudioSettings.outputSampleRate = pintOutputRate;
    }

    function Start(){
        AudioSettings.GetDSPBufferSize(pintBufferSize, pintNumBuffers);

        // Make a reference to the attached audio source
        pausAudio = gameObject.GetComponent(AudioSource);
    }

    function Update(){
        if (s.clpMicrophone != null){
            // Copy over the clip
            pausAudio.clip = s.clpMicrophone;
            // Clear the global clip
            s.clpMicrophone = null;
            // Is the clip useful?
            if (pausAudio.clip != null){
                // Play the clip
                pausAudio.Play();
                // Transcribe the audio clip to become a .WAV file
                StartWriting(pstrFilename);
                // Flag for subsequent Update() visits
                pbooOutput = true;
            }
        }

        // Test the flag to see if an audio clip is being transcribed to a .WAV file
        if (pbooOutput == true){
            // Is the clip done?
            if (!pausAudio.isPlaying){
                // The output is done
                pbooOutput = false;
                // Write the .WAV file header
                WriteHeader();
            }
        }
    }

    function StartWriting(name : String){
        fileStream = new FileStream(name, FileMode.Create);
        var emptyByte : byte = new byte();

        for (var i : int = 0; i < pintHeaderSize; i++){
            // preparing the header
            fileStream.WriteByte(emptyByte);
        }
    }

    function OnAudioFilterRead(data : float[], channels : int){
        if (pbooOutput){
            ConvertAndWrite(data); // audio data is interlaced
        }
    }

    function ConvertAndWrite(dataSource : float[]){
        var intData : Int16[] = new Int16[dataSource.length];
        // converting in 2 steps: float[] to Int16[], then Int16[] to Byte[]

        var bytesData : Byte[] = new Byte[dataSource.length * 2];
        // bytesData array is twice the size of the dataSource array
        // because a float converted to Int16 is 2 bytes.

        var rescaleFactor : int = 32767; // to convert float to Int16

        for (var i : int = 0; i < dataSource.length; i++){
            intData[i] = dataSource[i] * rescaleFactor;
            var byteArr : Byte[] = BitConverter.GetBytes(intData[i]);
            byteArr.CopyTo(bytesData, i * 2);
        }

        fileStream.Write(bytesData, 0, bytesData.length);
    }

    function WriteHeader(){
        fileStream.Seek(0, SeekOrigin.Begin);

        var riff : Byte[] = System.Text.Encoding.UTF8.GetBytes("RIFF");
        fileStream.Write(riff, 0, 4);

        var chunkSize : Byte[] = BitConverter.GetBytes(fileStream.Length - 8);
        fileStream.Write(chunkSize, 0, 4);

        var wave : Byte[] = System.Text.Encoding.UTF8.GetBytes("WAVE");
        fileStream.Write(wave, 0, 4);

        var fmt : Byte[] = System.Text.Encoding.UTF8.GetBytes("fmt ");
        fileStream.Write(fmt, 0, 4);

        var subChunk1 : Byte[] = BitConverter.GetBytes(16);
        fileStream.Write(subChunk1, 0, 4);

        var two : UInt16 = 2;
        var one : UInt16 = 1;

        var audioFormat : Byte[] = BitConverter.GetBytes(one);
        fileStream.Write(audioFormat, 0, 2);

        var numChannels : Byte[] = BitConverter.GetBytes(two);
        fileStream.Write(numChannels, 0, 2);

        var sampleRate : Byte[] = BitConverter.GetBytes(pintOutputRate);
        fileStream.Write(sampleRate, 0, 4);

        var byteRate : Byte[] = BitConverter.GetBytes(pintOutputRate * 4);
        // sampleRate * bytesPerSample * number of channels, here 44100*2*2
        fileStream.Write(byteRate, 0, 4);

        var four : UInt16 = 4;
        var blockAlign : Byte[] = BitConverter.GetBytes(four);
        fileStream.Write(blockAlign, 0, 2);

        var sixteen : UInt16 = 16;
        var bitsPerSample : Byte[] = BitConverter.GetBytes(sixteen);
        fileStream.Write(bitsPerSample, 0, 2);

        var dataString : Byte[] = System.Text.Encoding.UTF8.GetBytes("data");
        fileStream.Write(dataString, 0, 4);

        var subChunk2 : Byte[] = BitConverter.GetBytes(fileStream.Length - pintHeaderSize);
        fileStream.Write(subChunk2, 0, 4);

        fileStream.Close();
    }
    Man, I LOVE Unity.

    Loren
     
  29. gregzo

    gregzo

    Joined:
    Dec 17, 2011
    Posts:
    795
    @LorenLogic Yippee!

    It warms my heart ever so deeply,
    To know my code helped a fan of Unity!

    We should really start a Wiki for audio in Unity. I've got plenty more scripts to share, such as extracting floats from wav's at runtime, or synching audio over a network... I'm building a kind of spatialized thingy, where everyone with my app installed can play in sync.

    3.5 really opened many doors, although I still come across quirks on the audio side of things now and then (see my dspbuffersize post).

    Yippeedee!

    Gregzo
     
  30. spartan

    spartan

    Joined:
    Mar 10, 2010
    Posts:
    173
    @LorenLogic Something is wrong with your code: "s" is not declared.
     
  31. jen_fmy

    jen_fmy

    Joined:
    Apr 20, 2012
    Posts:
    2
    I also saw that, then I went back and read the actual message:
    From LorenLogic:

    It always helps to read; I know I often forget that. :]

    PS - Thank you IMMENSELY to all the contributors in this thread!
     
    Last edited: May 4, 2012