The Great AVAudioUnitSampler workout
How to use AVAudioUnitSampler in Swift.
I’ve not updated the examples in the post yet.
Table of Contents
Sampler from SoundFont
Sampler from aupreset
Sampler from sound files
Multiple voices
Summary
Resources
Introduction
Little by little, AVFoundation audio classes are taking over Core Audio. Unfortunately, the pace is glacial so Core Audio is going to be around for another eon or so.
The AVAudioUnitSampler is the AVFoundation version of the Core Audio kAudioUnitSubType_Sampler AUNode. It is a mono-timbral polyphonic sampler – it plays audio.
With AVFoundation, we create an AVAudioEngine instead of the Core Audio AUGraph. Where the AUGraph had AUNodes attached, the AVAudioEngine has AVAudioNodes attached.
The class hierarchy is:
AVAudioNode -> AVAudioUnit -> AVAudioUnitMIDIInstrument -> AVAudioUnitSampler
The engine already has a mixer node and an output node attached to it out of the box. We simply need to create the sampler, attach it to the engine, and connect the sampler to the mixer.
class Sampler1 : NSObject {
    var engine: AVAudioEngine!
    var sampler: AVAudioUnitSampler!

    override init() {
        super.init()

        engine = AVAudioEngine()
        sampler = AVAudioUnitSampler()
        engine.attachNode(sampler)
        engine.connect(sampler, to: engine.mainMixerNode, format: nil)

        loadSF2PresetIntoSampler(0)
        addObservers()
        startEngine()
        setSessionPlayback()
    }
    ...
Since we’re playing audio, the AVAudioSession needs to be configured for that and activated.
func setSessionPlayback() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryPlayback,
            withOptions: AVAudioSessionCategoryOptions.MixWithOthers)
    } catch {
        print("couldn't set category \(error)")
        return
    }

    do {
        try audioSession.setActive(true)
    } catch {
        print("couldn't make the session active \(error)")
        return
    }
}
Starting the engine is straightforward.
func startEngine() {
    if engine.running {
        print("audio engine already started")
        return
    }

    do {
        try engine.start()
        print("audio engine started")
    } catch {
        print("oops \(error)")
        print("could not start audio engine")
    }
}
We probably want to ask for notifications when the engine or session changes. Here is how I do that.
func addObservers() {
    NSNotificationCenter.defaultCenter().addObserver(self,
        selector: "engineConfigurationChange:",
        name: AVAudioEngineConfigurationChangeNotification,
        object: engine)

    // the session notifications are posted by the shared session, not the engine
    NSNotificationCenter.defaultCenter().addObserver(self,
        selector: "sessionInterrupted:",
        name: AVAudioSessionInterruptionNotification,
        object: AVAudioSession.sharedInstance())

    NSNotificationCenter.defaultCenter().addObserver(self,
        selector: "sessionRouteChange:",
        name: AVAudioSessionRouteChangeNotification,
        object: AVAudioSession.sharedInstance())
}

func removeObservers() {
    NSNotificationCenter.defaultCenter().removeObserver(self,
        name: AVAudioEngineConfigurationChangeNotification, object: nil)

    NSNotificationCenter.defaultCenter().removeObserver(self,
        name: AVAudioSessionInterruptionNotification, object: nil)

    NSNotificationCenter.defaultCenter().removeObserver(self,
        name: AVAudioSessionRouteChangeNotification, object: nil)
}

func engineConfigurationChange(notification: NSNotification) {
    // etc.
Once the engine has been started, you can send MIDI messages to it. For note on/note off messages, perhaps you added actions on a button for touch down and touch up.
func play() {
    sampler.startNote(60, withVelocity: 64, onChannel: 0)
}

func stop() {
    sampler.stopNote(60, onChannel: 0)
}
MIDI messages are useful, but for actually playing music, use the new AVAudioSequencer.
Its init method connects it to your engine. Then you load a standard MIDI file into the sequencer. The start and stop functions work as expected. But if you’ve already played the sequence and then call start again, you will hear nothing, because the current position is no longer at the beginning of the sequence. Simply reset it to 0.
var sequencer: AVAudioSequencer!

func setupSequencer() {
    self.sequencer = AVAudioSequencer(audioEngine: self.engine)

    let options = AVMusicSequenceLoadOptions.SMF_PreserveTracks
    if let fileURL = NSBundle.mainBundle().URLForResource("chromatic", withExtension: "mid") {
        do {
            try sequencer.loadFromURL(fileURL, options: options)
            print("loaded \(fileURL)")
        } catch {
            print("something screwed up \(error)")
            return
        }
    }
    sequencer.prepareToPlay()
}

func play() {
    if sequencer.playing {
        stop()
    }

    sequencer.currentPositionInBeats = NSTimeInterval(0)

    do {
        try sequencer.start()
    } catch {
        print("cannot start \(error)")
    }
}

func stop() {
    sequencer.stop()
}
Sampler from SoundFont
We need to give the sampler some waveforms to play. We have several options. Let’s start with SoundFonts.
The sampler function loadSoundBankInstrumentAtURL will load a SoundFont.
I use a SoundFont from Musescore. There are many SoundFonts available for download online.
You need to specify a General MIDI patch number or program change number. (See resources.) You also need to specify which bank to use within the SoundFont. I use two Core Audio constants to do this.
func loadSF2PresetIntoSampler(preset: UInt8) {
    guard let bankURL = NSBundle.mainBundle().URLForResource("GeneralUser GS MuseScore v1.442", withExtension: "sf2") else {
        print("could not load sound font")
        return
    }

    do {
        try self.sampler.loadSoundBankInstrumentAtURL(bankURL,
            program: preset,
            bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
            bankLSB: UInt8(kAUSampler_DefaultBankLSB))
    } catch {
        print("error loading sound bank instrument")
    }
}
Sampler from aupreset
If you don’t have an aupreset file, read my blog post on how to create one.
func loadPreset() {
    guard let preset = NSBundle.mainBundle().URLForResource("Drums", withExtension: "aupreset") else {
        print("could not load aupreset")
        return
    }

    do {
        try sampler.loadInstrumentAtURL(preset)
    } catch {
        print("error loading preset")
    }
}
Sampler from sound files
You can have the sampler load a directory of audio files. If the files are in Core Audio Format (caf), you can embed range metadata in each file. A simpler method is to name each file with the root pitch at the end of the basename. So violinC4.wav would map to C4, or middle C.
func loadSamples() {
    if let urls = NSBundle.mainBundle().URLsForResourcesWithExtension("wav", subdirectory: "wavs") {
        do {
            try sampler.loadAudioFilesAtURLs(urls)
        } catch let error as NSError {
            print("\(error.localizedDescription)")
        }
    }
}
Multiple voices
Currently, the sampler is the only subclass of AVAudioUnitMIDIInstrument. There is no equivalent to the multitimbral kAudioUnitSubType_DLSSynth or kAudioUnitSubType_MIDISynth audio units.
What you can do is attach multiple AVAudioUnitSampler instances to the engine.
Something like this:
engine = AVAudioEngine()

sampler = AVAudioUnitSampler()
engine.attachNode(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)

sampler2 = AVAudioUnitSampler()
engine.attachNode(sampler2)
engine.connect(sampler2, to: engine.mainMixerNode, format: nil)
But what if you want to use a sequencer with that and have the individual tracks use different timbres?
You’d have to create a custom subclass of AVAudioUnitMIDIInstrument, perhaps configured as a kAudioUnitSubType_DLSSynth or kAudioUnitSubType_MIDISynth, which are multi-timbral audio units.
What a coincidence! My next blog post is about creating a multi-timbral AVAudioUnitMIDIInstrument using kAudioUnitSubType_MIDISynth.
Or you can just use AVMIDIPlayer, which reads a standard MIDI file and plays it through a SoundFont.
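That can be just a few lines. Here is a minimal sketch (not from the original post) that reuses the MIDI file and SoundFont names from the earlier examples; note that AVMIDIPlayer manages its own playback, so none of the AVAudioEngine setup above is needed:

```swift
var midiPlayer: AVMIDIPlayer!

func setupMIDIPlayer() {
    // reusing the bundle resources from the earlier examples
    guard let midiURL = NSBundle.mainBundle().URLForResource("chromatic", withExtension: "mid"),
        bankURL = NSBundle.mainBundle().URLForResource("GeneralUser GS MuseScore v1.442", withExtension: "sf2") else {
            print("could not find the MIDI file or the sound font")
            return
    }

    do {
        midiPlayer = try AVMIDIPlayer(contentsOfURL: midiURL, soundBankURL: bankURL)
        midiPlayer.prepareToPlay()
    } catch {
        print("could not create the MIDI player \(error)")
    }
}

func playMIDI() {
    midiPlayer.play {
        print("finished playing")
    }
}
```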
Summary
The AVAudioUnitSampler is useful, but needs improvement – especially when used with AVAudioSequencer.
Looks like Xcode 7.3 is breaking the project. “Terminating app due to uncaught exception ‘com.apple.coreaudio.avfaudio’, reason: ‘error -10851’”
Any ideas what is causing it?
Hmmm… on another machine with Xcode 7.3 it works fine. I guess you can ignore the message above.
I’m trying to figure out how to adjust the envelopes for an AVAudioUnitSampler, or to be more specific I have made a sampler out of CAFs that are drum hits, and I want each hit to play to the end, no matter how long the midi note is held. Do you have any idea how to achieve this?
I tried hacky approach of only sending the startNote, and no stopNote but I get loads of clicks and pops as the sample stops by itself about a second in.
I am actually using an AKMidiSampler, but it is essentially just a wrapper for the AVAudioUnitSampler, as far as I know.
Have you had any success attaching multiple samplers to one engine? When I tried it, only the last one would play using the assigned instrument.
I narrowed the problem down. The multiple samplers *are* being created and added. The reason I only hear one playing is because I’m loading multiple MIDI files, but multiple tracks are not being created by AVAudioSequencer. Any idea how to force that, or is it simply something we don’t have control over?
I can’t tell without looking.
Email me a link to your repo or zip the code and email it.
I’ll take a look.
Here it is. https://github.com/jadar/SwiftMIDI
The aim is to load all the tracks from the 4 midi files and play them simultaneously, being able to adjust each track’s volume independently. I hope the comments are sufficient. I can provide more information if needed.
How can I build my own music sequencer for iOS? Is there code for MIDI, samplers, synths, drums, bass, and a mixer for an all-in-one app? Where do I search to find more on this?
Do you do this type of work? I have great ideas for a simple workflow app.
Do you do paid consulting?
Would it be possible to adapt the duet sampler example to be played live using buttons – two sets of six buttons – two octaves apart using strings.exs?
Sure. Email me the details.
Might be easy to draw what the UI looks like and take a photo with your phone.
OK
What is your email?
gene at rockhoppertech.com
(don’t want bots to grab it via the @)
Hello Gene.
I found out that one of the biggest flaws with *AVAudioSequencer* (even with the latest iOS releases) is that it obviously is NOT multi-timbral/multi-channel in connection with *AVAudioUnitSampler*, regardless of how many instances you successfully create. These are connected correctly to the mixer chain, but they are kind of useless if you want to play them via the sequencer.
*AVAudioSequencer* seems to play multi-timbrally as expected with an inherited *AVAudioUnitMIDIInstrument* that loads a sound bank. But as soon as one connects *AVAudioUnitSampler* instances to it, *AVAudioSequencer* will play only on channel 0. All other channels seem to be ignored!
I guess the behaviour has to do with the missing possibility to explicitly assign a MIDI receive channel to an *AVAudioUnitSampler* instance at creation time. So there would be no control over that in the case of multiple instances.
If one directly plays notes via *AVAudioUnitSampler.startNote()/stopNote()*, the instances of *AVAudioUnitSampler* play their sounds correctly, of course, but I think the channel number is ignored completely in that case (obviously it does not make any sense with the mono-timbral *AVAudioUnitSampler*).
So finally it seems to remain a problem if one wants to build a fully featured MIDI audio sequencer the AVFoundation way with multiple samplers. The *AVAudioUnitMIDIInstrument* may be a way to go for simple General MIDI sequencers… but it has the flaw that one cannot process the timbres independently, because everything is just bounced onto a stereo output channel with very limited control over additional sound processing. ^^
So *AVAudioSequencer* generally seems to remain a very questionable class with very limited use. I guess one has to build her/his own MIDI sequencer to overcome that problem. *AVAudioSequencer* urgently needs a replacement!
Hi Gene, I am using some code based on this to control sound effects played in my iOS game. I am getting crash reports of occasional crashes on the line of code:
try sampler.loadInstrument(at: presetURL)
. Under the hood it seems to be crashing on the AudioToolbox call CreateInterstitialPathString(_CFURL const*). I’ve had 177 crashes on this line and there are over 20,000 installs, so it’s not super common, but it would be good to work out what is happening and fix it if possible. Any idea what it might be? Will also post to Stack Overflow but thought I would also comment directly here! Thanks!
So, it usually works but barfs once in a while?
Are you loading from the Documents directory or the bundle? Those are the “blessed” locations.
Thanks for the reply. Yes that is correct, it normally works fine but occasionally crashes.
I am loading from the bundle – this is the whole loadPreset() function:
func loadPreset() {
guard let presetURL = Bundle.main.url(forResource: "Sounds/zound", withExtension: "aupreset") else {
fatalError("Failed to load preset.")
}
print("loaded preset \(presetURL)")
do {
try sampler.loadInstrument(at: presetURL)
} catch {
print("error loading preset \(error)")
}
}
it is crashing on sampler.loadInstrument() rather than the fatalError()
I’m also seeing this exact problem, down to the crash occurring in CreateInterstitialPathString. I’ve just switched an app that’s in development from using SF2 / loadSoundBankInstrumentAtURL to using AUPreset / loadInstrumentAtURL and I’m now getting occasional crashes at startup as the initial sound set loads. The change has literally been to swap the file formats and associated load methods. As in Sam’s case the .aupreset and associated .aiff files are in the main bundle.
Huh.
Could one of you guys send me either a link to a repo with your project or send me a zip of it?
If you give me an email address I can send you a zip
I have an issue with AVAudioUnitSampler. I use it with the Fluid_R3_GM soundfont. After an audio session interruption, like an incoming phone call, AVAudioUnitSampler produces corrupted sound. The only workaround for this issue I have found so far is to reload the soundfont in the sampler. It works, but it can take a noticeable amount of time.
Have you encountered similar issues? Maybe you know a better solution for this problem?
I don’t have a better solution than restart when the interrupt notification comes in.
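Something like this sketch (my guess at a handler, not code from the post; it assumes the sessionInterrupted: observer and the loadSF2PresetIntoSampler function shown earlier):

```swift
func sessionInterrupted(notification: NSNotification) {
    // pull the interruption type out of the notification's userInfo
    guard let info = notification.userInfo,
        typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
        type = AVAudioSessionInterruptionType(rawValue: typeValue) else {
            return
    }

    if type == .Ended {
        // restart the engine and reload the sound font to clear the corrupted state
        startEngine()
        loadSF2PresetIntoSampler(0)
    }
}
```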
I filed a Radar bug report years ago on the slowness of loading some sound fonts.
No progress on that from them.
I usually use the Fluid GM sound font also. It’s not a bad one.