Swift AVFoundation Recorder
Use AVFoundation to create an audio recording.
Introduction
AVFoundation makes audio recording much simpler than recording with Core Audio. Essentially, you configure and create an AVAudioRecorder instance, then tell it to record and stop from your action methods.
Creating a Recorder
The first thing you need to do when creating a recorder is to specify the audio format that the recorder will use. This is a Dictionary of settings. For the AVFormatIDKey there are several
predefined Core Audio data format identifiers such as kAudioFormatLinearPCM and kAudioFormatAC3. Here are a few settings to record in Apple Lossless format.
```swift
var recordSettings = [
    AVFormatIDKey: NSNumber(unsignedInt: kAudioFormatAppleLossless),
    AVEncoderAudioQualityKey: AVAudioQuality.Max.rawValue,
    AVEncoderBitRateKey: 320000,
    AVNumberOfChannelsKey: 2,
    AVSampleRateKey: 44100.0
]
```
Then you create the recorder with those settings and the URL of the output sound file. If the recorder is created successfully, you can call prepareToRecord(), which will create or overwrite the sound file at the specified URL. If you're going to draw a VU-meter style graph, you can tell the recorder to meter the recording; you'll have to install a timer to periodically ask the recorder for the values (see the GitHub project).
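The snippets below assume a soundFileURL for the output file. Here is a minimal sketch of building one (my own helper, not from the post; the timestamped naming scheme is an assumption):

```swift
import Foundation

// Sketch: put the output file in the app's Documents directory.
// The timestamped file name is my own convention, not from the post.
func makeSoundFileURL() -> NSURL {
    let formatter = NSDateFormatter()
    formatter.dateFormat = "yyyy-MM-dd-HH-mm-ss"
    let name = "recording-\(formatter.stringFromDate(NSDate())).m4a"
    let docsDir = NSSearchPathForDirectoriesInDomains(.DocumentDirectory,
        .UserDomainMask, true)[0]
    return NSURL(fileURLWithPath: docsDir).URLByAppendingPathComponent(name)
}
```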
Swift 1:
```swift
var error: NSError?
self.recorder = AVAudioRecorder(URL: soundFileURL, settings: recordSettings, error: &error)
if let e = error {
    println(e.localizedDescription)
} else {
    recorder.delegate = self
    recorder.meteringEnabled = true
    recorder.prepareToRecord() // creates/overwrites the file at soundFileURL
}
```
Or in Swift 2:
```swift
do {
    recorder = try AVAudioRecorder(URL: soundFileURL, settings: recordSettings)
    recorder.delegate = self
    recorder.meteringEnabled = true
    recorder.prepareToRecord() // creates/overwrites the file at soundFileURL
} catch let error as NSError {
    recorder = nil
    print(error.localizedDescription)
}
```
Recorder Delegate
I set the recorder’s delegate in order to be notified that the recorder has stopped recording. At this point you can update the UI (e.g. enable a disabled play button) and/or prompt the user to keep or discard the recording. In this example I use the new iOS 8 UIAlertController class. If the user says “delete the recording”, simply call deleteRecording() on the recorder instance.
```swift
extension RecorderViewController : AVAudioRecorderDelegate {

    func audioRecorderDidFinishRecording(recorder: AVAudioRecorder!, successfully flag: Bool) {
        println("finished recording \(flag)")
        stopButton.enabled = false
        playButton.enabled = true
        recordButton.setTitle("Record", forState: .Normal)

        // iOS 8 and later
        let alert = UIAlertController(title: "Recorder",
            message: "Finished Recording",
            preferredStyle: .Alert)
        alert.addAction(UIAlertAction(title: "Keep", style: .Default, handler: { action in
            println("keep was tapped")
        }))
        alert.addAction(UIAlertAction(title: "Delete", style: .Default, handler: { action in
            self.recorder.deleteRecording()
        }))
        self.presentViewController(alert, animated: true, completion: nil)
    }

    func audioRecorderEncodeErrorDidOccur(recorder: AVAudioRecorder!, error: NSError!) {
        println("\(error.localizedDescription)")
    }
}
```
Recording
In order to record, you need to ask the user for permission to record first. The AVAudioSession class has a requestRecordPermission() function to which you provide a closure. If granted, you set the session’s category to AVAudioSessionCategoryPlayAndRecord, set up the recorder as described above, and install a timer if you want to check the metering levels.
```swift
AVAudioSession.sharedInstance().requestRecordPermission({ (granted: Bool) -> Void in
    if granted {
        self.setSessionPlayAndRecord()
        self.setupRecorder()
        self.recorder.record()
        self.meterTimer = NSTimer.scheduledTimerWithTimeInterval(0.1,
            target: self,
            selector: "updateAudioMeter:",
            userInfo: nil,
            repeats: true)
    } else {
        println("Permission to record not granted")
    }
})
```
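setSessionPlayAndRecord() and setupRecorder() are helpers from the GitHub project. A minimal sketch of the session helper might look like this (my own reconstruction, not the project's exact code):

```swift
import AVFoundation

// Sketch: put the shared audio session into a record-capable state.
func setSessionPlayAndRecord() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try session.setActive(true)
    } catch let error as NSError {
        print("could not set session category: \(error.localizedDescription)")
    }
}
```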
Here is a very simple function that displays the current recording time in a label and writes the metering levels to stdout. Yes, string formatting is awkward in Swift. Have a better way? Let me know.
```swift
func updateAudioMeter(timer: NSTimer) {
    if recorder.recording {
        let dFormat = "%02d"
        let min: Int = Int(recorder.currentTime / 60)
        let sec: Int = Int(recorder.currentTime % 60)
        statusLabel.text = "\(String(format: dFormat, min)):\(String(format: dFormat, sec))"

        recorder.updateMeters()
        let apc0 = recorder.averagePowerForChannel(0)
        let peak0 = recorder.peakPowerForChannel(0)
        println("average: \(apc0) peak: \(peak0)")
    }
}
```
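averagePowerForChannel returns decibels full scale, where 0 dB is maximum power and silence is far negative. If you want a 0 to 1 value for a level bar, a small mapping helper can do it (my own helper, not part of the post's code):

```swift
// Map a dBFS reading onto 0...1 for a level view.
// The -60 dB floor is an arbitrary choice; tune it to taste.
func meterLevel(dB: Float) -> Float {
    let floor: Float = -60.0
    if dB < floor { return 0.0 }
    if dB >= 0.0 { return 1.0 }
    return (dB - floor) / -floor
}
```

In updateAudioMeter you could feed recorder.averagePowerForChannel(0) through this before updating, say, a UIProgressView.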
Summary
That’s it. You now have an audio recording that you can play back using an AVAudioPlayer instance.
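A minimal playback sketch in the same style (assuming the same soundFileURL; the error handling mirrors the recorder code above):

```swift
import AVFoundation

// Sketch: minimal playback of the recorded file.
var player: AVAudioPlayer?

func play(soundFileURL: NSURL) {
    do {
        let p = try AVAudioPlayer(contentsOfURL: soundFileURL)
        p.prepareToPlay()
        p.play()
        player = p // keep a strong reference, or playback stops immediately
    } catch let error as NSError {
        print(error.localizedDescription)
    }
}
```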
Hi Gene,
Thanks for the awesome resource on Swift audio.
I’m (very much) a beginner in iOS development, and I would love to learn how to create an app that would have some basic audio functionalities.
I was hoping to learn a bit by taking a look at your example project which I downloaded from GitHub.
Unfortunately I cannot build/run it, I get a bunch of errors http://cl.ly/image/2V2P1b0b0q2V
Could you help me out by pointing out to anything I might be missing?
Thanks!
Ah, thanks.
I looked at your image and saw that it was just that the code was for a Swift beta.
I’ve updated the code to Swift 1.0 and pushed it to the github repo.
Do a pull and you should be good to go.
Thanks so much for the quick response!
Now, I’m still getting a few errors, namely http://cl.ly/image/3q343u241y12
I’m guessing that could be resolved by joining the Apple Dev program. Would there be a way to run the example without that, for the moment?
(Changed the ‘Bundle Identifier’ and everything works perfectly; thanks again.)
Hi, I noticed that this code only works on the simulator, and not the device. Any advice on how to get it to work on the iPhone 6? I believe the line “if !session.setActive(true, error: &error)” is causing the compiler error. Please let me know if you figure this out, thank you.
I just tried it on my iPhone 4S and iPad 2 and it works on both devices.
Do a git pull to make sure you have the latest.
I don’t have an iPhone 6. Could you send me a stack dump? I really hope there isn’t a version thing to check.
Very good work
It works only on the simulator, not on my iPhone. I have this error message:
No provisioning profiles with a valid signing identity (i.e. certificate and private key pair) matching the bundle identifier “com.rockhoppertech.AVFoundation-Recorder” were found.
Xcode can attempt to fix this issue. This will reset your code signing and provisioning settings to recommended values and resolve issues with signing identities and provisioning profiles.
Could you help me correct it, please?
Another question: I want to save data to a SQL Server database. How can I do that, please?
thanks a lot
That’s my signing identity. Say OK to that dialog or go to project settings and change it to yours. There’s a button there to do this.
How can I get the playback to play on the speaker not just the ear speaker?
I don’t understand. Are you starting with the headphone plugged in and then remove it? If so, you need to handle a route change notification.
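For the speaker question above: with the PlayAndRecord category, iPhone playback defaults to the receiver (ear speaker). Two possible fixes, sketched in the same Swift style (check the AVAudioSession docs for your iOS version):

```swift
import AVFoundation

// Sketch: route PlayAndRecord output to the loud speaker.
let session = AVAudioSession.sharedInstance()
do {
    // Option 1: make the speaker the default output for this category
    try session.setCategory(AVAudioSessionCategoryPlayAndRecord,
        withOptions: .DefaultToSpeaker)
    // Option 2: override the current route explicitly
    try session.overrideOutputAudioPort(.Speaker)
} catch let error as NSError {
    print(error.localizedDescription)
}
```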
Hi,
I am wondering if you can help me with this question:
How can I use an audio file (like sample.mp3) as one input and the microphone as another input, and record the voice on top of the music into a new file (like sample2.mp3), without overwriting the original sample.mp3?
Hello,
Thank you, that’s very useful information. Could you do us a favor and explain how we can record video along with audio?
Fayez
Hi,
Actually the code works fine for me. One question: how can I add an option to email the voice recording?
Great Work.!! Nice to share this interesting useful info. I really appreciate you! Keep it up.!!
Thanks for the recorder resource; I just started making iOS programs by myself and this is very helpful.
I made a recorder successfully by studying your code, but I believe my recorder doesn’t work properly: when I run the program, the memory consumption is too high. It starts at 50 MB, then increases to 100 MB as I record more. Can you tell me what is wrong with it?
Hi, great tutorial!
Is there a way to cut the recorded file, something like:
recordedFile.cut(time interval to cut)
I’m searching for this on the net but I haven’t found anything.
Thank you very much.
I wrote a blog post just for you on how to do this.
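That post covers trimming in detail; the usual approach is an AVAssetExportSession with a timeRange. A sketch (soundFileURL and trimmedFileURL are assumed names, not from the post):

```swift
import AVFoundation

// Sketch: export seconds 1.0 ..< 4.0 of the recording to a new file.
func trim(soundFileURL: NSURL, trimmedFileURL: NSURL) {
    let asset = AVURLAsset(URL: soundFileURL)
    guard let exporter = AVAssetExportSession(asset: asset,
        presetName: AVAssetExportPresetAppleM4A) else { return }
    exporter.outputURL = trimmedFileURL
    exporter.outputFileType = AVFileTypeAppleM4A
    let start = CMTimeMakeWithSeconds(1.0, 600)
    let duration = CMTimeMakeWithSeconds(3.0, 600)
    exporter.timeRange = CMTimeRangeMake(start, duration)
    exporter.exportAsynchronouslyWithCompletionHandler {
        print("export status: \(exporter.status.rawValue)")
    }
}
```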
Hey there, I’m so happy to have found your blog; this is exactly what I was looking for. (Great job!!)
I’m trying to write a small challenge-game app, where two people have to blow into the mic. There should be a paper plane on the screen, and each player has to virtually “blow” this plane as far as they can.
Is it possible to measure the meters to determine the winner?
Thank you in advance 🙂
Maybe look at the updateAudioMeter function I provided. That may be a way to do what you want.
Apparently with Swift 2, the kAudioFormatAppleLossless is now an Int32. So for the recordSettings, we need to explicitly convert it to Int as I found from this StackOverflow answer: http://stackoverflow.com/a/32509799/4139760.
I had updated the Github project to Swift 2, but forgot about updating this post.
Thanks for the reminder.
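For completeness, the Swift 2 form of the settings dictionary with that explicit conversion (following the linked StackOverflow answer):

```swift
import AVFoundation

// Swift 2: the format constant needs an explicit Int conversion here.
let recordSettings: [String: AnyObject] = [
    AVFormatIDKey: Int(kAudioFormatAppleLossless),
    AVEncoderAudioQualityKey: AVAudioQuality.Max.rawValue,
    AVEncoderBitRateKey: 320000,
    AVNumberOfChannelsKey: 2,
    AVSampleRateKey: 44100.0
]
```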
Hi Gene,
I found an issue in the code: in viewDidLoad you call setSessionPlayback(), but when you press the record button you call setSessionPlayAndRecord(), overwriting the playback session. So when you press play, the recorded audio is very low in volume. To fix this, call setSessionPlayback() when the play button is pressed.
Hi, this is a fantastic tutorial! I’m interested in processing the samples as they are recorded in real-time but I have not had much success with finding a way to do this. Is there maybe some kind of call-back function for this purpose? I’d like to eventually stream the recording to a server for further processing, but the final missing link is accessing the samples as they are recorded.
you can just get the full code from this website…
http://www.swiftsupport…badsite.foo
hope that helped.
Astonishing comment from “John” here. “you can just get the full code from this crappy Wix website for $25”.
My code is on Github for free and this clown thinks it’s ok to repackage it and sell it for $25 AND THEN advertise it here.
He is my nomination for jerk of the year.
Send him an email sebastianjuan1994@gmail.com
Hi Gene,
Thanks for posting this tutorial, your example code was the best I found on internet (and I’ve seen quite a few sites already). I finally understood the right way to create and close sessions for audio processing.
The only minor thing that I couldn’t make work on a real device is setMode:
try session.setMode("AVAudioSessionModeVoiceChat")
For some reason it is never able to set mode successfully, other than that everything works perfectly.
Thanks a lot. You saved my day.
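One thing to check for the setMode issue above (an educated guess, not verified on that device): pass the AVAudioSessionModeVoiceChat constant rather than a quoted string, and set the mode only after a record-capable category is in place, since voice chat mode needs PlayAndRecord:

```swift
import AVFoundation

// Sketch: set category first, then mode, then activate the session.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSessionCategoryPlayAndRecord)
    try session.setMode(AVAudioSessionModeVoiceChat) // the constant, not a string literal
    try session.setActive(true)
} catch let error as NSError {
    print(error.localizedDescription)
}
```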
Hi Gene,
Fantastic tutorial. Any tips on doing a visual representation of the recording, like a frequency meter?
Thank you for the great post.