Swift 2 AVAudioSequencer
There’s a brand new MIDI sequencer class in Swift 2 beta! It’s the AVAudioSequencer.
Introduction
At WWDC15 there was a presentation entitled “What’s New in Core Audio“. If you were able to get past the first 29 minutes of a poorly structured presentation delivered by a robotic mumbling developer who just reads the lines in the slides (just like 90% of other WWDC presentations), you heard about this. But then, just like every other WWDC presentation, there were incomplete code snippets.
So can we get this to work?
Sequencer setup
You can play a MIDI file with the old AVMIDIPlayer class. I published a post on this back in the stone age of Swift.
Here is the old Swift 1.2 code to create one. (Swift 2 replaces the old NSError cha-cha with throwing initializers.)
var mp = AVMIDIPlayer(contentsOfURL: contents, soundBankURL: soundbank, error: &error)
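In Swift 2 that initializer throws instead, so the equivalent would be something like this (a sketch; contents and soundbank are the same NSURLs as above):

do {
    let mp = try AVMIDIPlayer(contentsOfURL: contents, soundBankURL: soundbank)
    mp.prepareToPlay()
} catch {
    print("could not create the AVMIDIPlayer: \(error)")
}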
Swift 2 now has a new AVAudioSequencer class.
Woo Hoo!
Ok, let’s make an AVAudioSequencer!
I’ll talk about the AVAudioEngine set up next.
self.sequencer = AVAudioSequencer(audioEngine: self.engine)

guard let fileURL = NSBundle.mainBundle().URLForResource("sibeliusGMajor", withExtension: "mid") else {
    fatalError("\"sibeliusGMajor.mid\" file not found.")
}

do {
    try sequencer.loadFromURL(fileURL, options: .SMF_PreserveTracks)
    print("loaded \(fileURL)")
} catch {
    fatalError("something screwed up while loading midi file.")
}

sequencer.prepareToPlay()

do {
    try sequencer.start()
} catch {
    print("cannot start")
}
So, I load a standard MIDI file that I created in Sibelius, tell the sequencer to read it, then start the sequencer. The API doesn’t look too bad at this point.
AVAudioEngine setup
Let’s create the engine. According to the presentation, there doesn’t need to be much more than a sampler in the engine and the totally groovy new AVAudioSequencer will find it.
(self.engine, self.sampler) = engineSetup()

...

func engineSetup() -> (AVAudioEngine, AVAudioUnitSampler) {
    let engine = AVAudioEngine()
    let output = engine.outputNode
    let outputHWFormat = output.outputFormatForBus(0)
    let mainMixer = engine.mainMixerNode
    engine.connect(mainMixer, to: output, format: outputHWFormat)

    let sampler = AVAudioUnitSampler()
    engine.attachNode(sampler)
    engine.connect(sampler, to: mainMixer, format: outputHWFormat)

    print(engine)

    return (engine, sampler)
}
That’s all – according to the ‘AudioEngine’er’s presentation.
Good to go, right?
Wrong.
That should be it. But it’s not.
What do you get?
GraphDescription ________
AVAudioEngineGraph 0x7fdc0d01ff00: initialized = 0, running = 0, number of nodes = 3

  ******** output chain ********

  node 0x7fdc0d1a2630 {'auou' 'rioc' 'appl'}, 'U'
    inputs = 1
      (bus0) <- (bus0) 0x7fdc0d0338e0, {'aumx' 'mcmx' 'appl'},
        [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]

  node 0x7fdc0d0338e0 {'aumx' 'mcmx' 'appl'}, 'U'
    inputs = 1
      (bus0) <- (bus0) 0x7fdc0d035670, {'aumu' 'samp' 'appl'},
        [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
    outputs = 1
      (bus0) -> (bus0) 0x7fdc0d1a2630, {'auou' 'rioc' 'appl'},
        [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]

  node 0x7fdc0d035670 {'aumu' 'samp' 'appl'}, 'U'
    outputs = 1
      (bus0) -> (bus0) 0x7fdc0d0338e0, {'aumx' 'mcmx' 'appl'},
        [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
______________________________________
The ‘rioc’ is the outputNode. See it?
See that the mixer, ‘mcmx’, is an input to it?
See that the sampler, ‘samp’, is connected to the mixer?
See that the formats are all the same?
The processing graph looks ok. Right?
But then….
2015-06-12 15:16:26.396 Swift2AVFoundFrobs[32200:683571] 15:16:26.396 ERROR: AVAudioEngineGraph.mm:3649: GetDefaultMusicDevice: required condition is false: outputNode
2015-06-12 15:16:26.400 Swift2AVFoundFrobs[32200:683571] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: outputNode'
*** First throw call stack:
(
  0   CoreFoundation        0x000000010ad4c885 __exceptionPreprocess + 165
  1   libobjc.A.dylib       0x000000010c8d5df1 objc_exception_throw + 48
  2   CoreFoundation        0x000000010ad4c6ea +[NSException raise:format:arguments:] + 106
  3   libAVFAudio.dylib     0x000000010d1c4efe libAVFAudio.dylib + 98046
  4   libAVFAudio.dylib     0x000000010d1d3119 libAVFAudio.dylib + 155929
  5   AudioToolbox          0x000000010a988ab3 _ZN8Sequence27HandleAudioUnitStatusChangeEP28OpaqueAudioComponentInstancei + 499
  6   libAVFAudio.dylib     0x000000010d1d2607 libAVFAudio.dylib + 153095
  7   libAVFAudio.dylib     0x000000010d1cac04 libAVFAudio.dylib + 121860
  8   libAVFAudio.dylib     0x000000010d1cd5ff libAVFAudio.dylib + 132607
  9   libAVFAudio.dylib     0x000000010d20c5f2 libAVFAudio.dylib + 390642
  10  libAVFAudio.dylib     0x000000010d1fda35 libAVFAudio.dylib + 330293
  11  libAVFAudio.dylib     0x000000010d20a2f8 libAVFAudio.dylib + 381688
  12  libAVFAudio.dylib     0x000000010d20c1a0 libAVFAudio.dylib + 389536
  13  libAVFAudio.dylib     0x000000010d209e8c libAVFAudio.dylib + 380556
  14  libobjc.A.dylib       0x000000010c8e9ade _ZN11objc_object17sidetable_releaseEb + 232
  15  Swift2AVFoundFrobs    0x000000010a324650 _TFC18Swift2AVFoundFrobs9Sequencerd + 48
  16  Swift2AVFoundFrobs    0x000000010a324601 _TFC18Swift2AVFoundFrobs9SequencerD + 17
  ...
So, “required condition is false: outputNode”.
BTW, not to be a grammar nazi, but where is the predicate in that sentence? outputNode what? It’s nil? It’s not there? It’s drunk? outputNode what?
I see the node in the graph. So, what’s the problem?
I have no idea. There is no place to look either.
I’ve tried loading the soundbank – or not.
func loadSF2PresetIntoSampler(preset: UInt8) {
    guard let bankURL = NSBundle.mainBundle().URLForResource(self.soundFontName, withExtension: "sf2") else {
        fatalError("\(self.soundFontName).sf2 file not found.")
    }

    do {
        try self.sampler.loadSoundBankInstrumentAtURL(bankURL,
            program: preset,
            bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
            bankLSB: UInt8(kAUSampler_DefaultBankLSB))
    } catch {
        print("error loading sound bank instrument")
    }
}
I’ve set up the session for playback – or not.
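(By "set up the session" I mean the usual AVAudioSession dance, roughly this sketch:)

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSessionCategoryPlayback)
    try session.setActive(true)
} catch {
    print("could not configure the audio session: \(error)")
}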
I’ve tried with the engine running – or not.
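("Running" meaning starting the engine before the sequencer, again roughly:)

engine.prepare()
do {
    try engine.start()
} catch {
    print("could not start the engine: \(error)")
}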
I’ve tried with different MIDI files.
I’ve tried just connecting the sampler to the outputNode. No luck. Shouldn’t have to do that anyway.
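(That attempt was a single extra connection, a sketch reusing outputHWFormat from the engine setup above:)

// bypass the mixer and wire the sampler straight to the output node
engine.connect(sampler, to: engine.outputNode, format: outputHWFormat)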
Bleah.
AVMusicTrack
Ok, let’s try the spiffy new AVMusicTrack which is so full of grooviosity that we can retire the old worn out MusicTrack from the AudioToolbox.
for track in sequencer.tracks {
    // do something with the track
    track.destinationAudioUnit = self.sampler
}
Right. No such luck.
Undefined symbols for architecture x86_64:
  "_OBJC_CLASS_$_AVMusicTrack", referenced from:
      type metadata accessor for ObjectiveC.AVMusicTrack in Sequencer.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
I see it defined right there in AVAudioSequencer.h.
Yes, the frameworks are in the project.
Show stopper.
I see that AVAudioSequencer’s definition is preceded by the availability macro, but AVMusicTrack doesn’t have that.
Is that the problem?
I’m guessing and do not have the access to try it.
NS_CLASS_AVAILABLE(10_11, 9_0)
@interface AVAudioSequencer : NSObject {
So, once that’s fixed, is there an easy API to add/remove all kinds of MIDI events? You know, channel messages, sysex, controllers, metatext etc.?
Nope.
Nothing like that.
Mute, solo, length, set a destination, looping.
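Once AVMusicTrack actually links, that whole surface amounts to roughly this (a sketch using the property names from AVAudioSequencer.h; self.sampler is the sampler from the engine setup):

if let track = sequencer.tracks.first {
    track.muted = false
    track.soloed = false
    track.destinationAudioUnit = self.sampler
    track.loopingEnabled = true
    track.loopRange = AVBeatRange(start: 0, length: track.lengthInBeats)
    print("length in beats \(track.lengthInBeats), in seconds \(track.lengthInSeconds)")
}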
Sigh.
Update
In beta 3 it finally stopped crashing. The GitHub project runs.
The API for AVMusicTrack is still a waste of time though.
Summary
So now we can do away with the AudioToolbox MusicSequence?
Nope.
So now we can do away with the AudioToolbox MusicTrack?
Nope.
So now we can connect to an AVAudioEngine without messing around with AudioUnits?
Nope.
So can we code a simple hello world with this API?
Nope.
Now that’s what I call progress.
Hi Gene, a few months ago I wrapped the four AudioToolbox classes necessary for MIDI sequencing in Swift. Here’s a link to it on GitHub.
https://github.com/thomjordan/MidiToolbox
It doesn’t yet include a way to connect to MIDI devices and endpoints. For this I currently use a version of the VVMIDI framework which I made some minor changes to. It may still work directly with the current version of VVMIDI from vvopensource:
https://github.com/mrRay/vvopensource
If it turns out you or anyone tries it and it doesn’t, let me know by replying here and I can publish a fork of VVMIDI with my changes. I basically just added a timestamp field to the VVMIDIMessage class, and modified a few calls that use the class.
I’ve considered additionally wrapping a few classes from CoreMIDI that should provide the MIDI connections functionality, and adding it to MidiToolbox. It’s been low on my to-do list, since I’m currently using an approach that works. I might end up adding it if it looks like Apple is not going to do something like it first. I thought this was the case when I found this blog post, but it doesn’t surprise me really that it doesn’t yet work right.
BTW I just checked the CoreMIDI docs again now, and there’s some minor Swift info there, but seemingly not as complete as what’s usually included for the majority of the Cocoa API. There may not be a need to truly “wrap” any CoreMIDI functionality in Swift, although a more straightforward approach using supplemental code could be useful, especially as part of “MidiToolbox”. It remains to be seen... hopefully soon none of this will be needed.
Hi,
I’m in the process of writing a MIDI-based music game, and your blog entries helped me quite a few times when scouring the net for tips on dealing with the CoreAudio/CoreMIDI and AudioToolbox frameworks and their related errors (ugh).
So as you did, when I discovered the new AVAudioSequencer/Engine and AVMusicTrack promises I decided to try them out… And it didn’t work.
I first got the same error as you got:
AVAudioEngineGraph.mm:3649: GetDefaultMusicDevice: required condition is false: outputNode
Then, while rearranging my class, I got a second one when trying to attach an AVAudioUnitSampler to the engine, which confused me even more:
AVAudioEngine.mm:275: AttachNode: required condition is false: !nodeimpl->HasEngineImpl()
I finally managed to make the whole thing work and even play MIDI files with a SoundBank. But only by declaring the engine, sequencer, and any related functions at global level.
If you have any insights on why this works, I’d be delighted to know!
Thanks!
The problem I had was fixed in a later beta. Does updating Xcode fix your problem?
If not, let me see your code. Maybe I can spot the problem.
Howdy Gene
Really appreciate your sample code and head-first dive into this new technology that doesn’t seem so well documented yet.
I found (and fixed) a crash you were having.
When you do…
track.destinationAudioUnit = self.sampler //this crashes
…that’s only because you started the sequence already. If you do the sequence start AFTER the track setup/diagnosis loop, it works fine.
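In other words, roughly this ordering (a sketch reusing the names from your post):

// configure the tracks first...
for track in sequencer.tracks {
    track.destinationAudioUnit = self.sampler
}

// ...and only then start playback
do {
    try sequencer.start()
} catch {
    print("cannot start")
}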
Also, I found this is a helpful thing to set on the tracks:
track.loopRange = AVBeatRange(start: 0, length: 4)
…or whatever length you want. Right now you’ve got some interesting polyrhythms going on ;]
Thanks again.
Also, this crash:
// this crashes
//print("track timeResolution \(track.timeResolution)")
…seems to be because you can only do this on the tempo track.
let tempo = sequencer.tempoTrack
print("tempo lengthInBeats \(tempo.lengthInBeats)")
print("tempo lengthInBeats \(tempo.timeResolution)")
Sorry for the delay in approving your comment. I was cleaning up damage (and hardening the site) due to a crack attack. I don’t know why people think this is funny – it just wastes the time of small-time guys like me.
Thanks for finding the destinationAudioUnit problem. Most of the time it’s an Apple (lack of) documentation problem, and sometimes it’s a PBCAC 🙂
Have you seen any more improvements in AVAudioSequencer? Is there any way to control playback tempo?
Thanks.
You can set the rate property.
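Something like this (1.0 plays at the tempo written into the file):

sequencer.rate = 0.5   // play back at half speed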
Is there a way to clear the sequence and load a different MIDI file with AVAudioSequencer? For instance, having a user play different MIDI files with AVAudioSequencer?
Thanks.
I was able to find another solution. Thanks.
Thank you for this blog! It’s been really helpful!
Hi Gene,
Thanks so much for your blog and for sharing – it really has been so useful when trying to work with Core Audio. I have bought you some well earned pet food!
One question for you if you have time – I think I know the answer. Is there any way yet to manipulate a MIDI sequence running in an AVAudioSequencer once it is playing? Say I have a four-bar looped MIDI track running, which I can create, load, and loop fine; once I start the sequencer, is that it? Or is there any method by which I can delete or add a note while it is playing?
Thanks!
AVAudioSequencer really doesn’t have this. I’ve filed a Radar and have spoken to the guy responsible (we’ve been friends for 30 years). If someone else files a Radar, it will be moved “up the queue”.
I’ve pretty much just done this via the old AudioToolbox MusicSequence. Clunky, but it works.
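Adding a note there boils down to the C calls, something like this sketch (musicTrack and beat are placeholders for an existing MusicTrack and a MusicTimeStamp):

import AudioToolbox

var note = MIDINoteMessage(channel: 0,
                           note: 60,
                           velocity: 64,
                           releaseVelocity: 0,
                           duration: 1.0)
let status = MusicTrackNewMIDINoteEvent(musicTrack, beat, &note)
if status != 0 {
    print("error adding note: \(status)")
}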
Hmm – well I can see looking at the source that the AudioKit guys seem to have worked out a way, so feel free not to release my comments!
AudioKit has a lot of my MIDI code – but not with AVAudioSequencer. Unless someone added that? What code are you using?
Yes, you are right – it’s actually MusicSequence at work; I was mistaken. I’ve got what I needed going with that now, thank you, and will file a Radar. It’s quite a mess, isn’t it?!
Hi Gene. Thanks for these posts – they are still a huge help, even several years later.
I’m also experiencing the “required condition is false: outputNode” error when trying to produce something similar; however, mine is happening when performing a segue back to any other view controller, apparently during the deinit process. Is this something you’ve encountered before?
Thank you.
Show me how you set things up. Have a public repo? Email me if not.