Commission Music
Bespoke Noise!!

Sunday, 31 May 2009

A learning experience

This weekend is full of BEAST concerts at the CBSO centre in Birmingham. These events always involve copious consumption of alcohol, so I'm writing this while not entirely sober, but I think drinking was called for.

Friday night was the student pieces and I actually played something, my first gig with a huge speaker array. There were more than 80 speakers around the room. There's special software people can use to diffuse their tape pieces; it works best with stereo or 8-channel pieces, but can handle more inputs. I've never used this software and I'm inexperienced at diffusing. I had no idea how one would map a stereo file to 80 speakers, so I thought it would be easier to address the hardware outputs directly and skip the software. I wrote a piece that sounded ok in stereo, went in to uni last week to work it out in 8 channels, got a rough map of what I wanted, and went home and implemented it. When I got the list of which speakers were going where this weekend, I changed my code to use the ones that seemed reasonable.

I had half an hour allocated for the dress rehearsal. I was pretty sure my code was going to work with only minor changes needed for amplitudes. It wouldn't run at all. I made a change. The first minute of it ran. I made another change, I got 2 minutes out. I was out of time and the rest was not working at all. Only a few sections even made noise. The rest were silent or missing crucial bits. And I had no more time on the computer.

I went to hammer away at it on my laptop. Scott, my supervisor, gave me some advice and showed me how to get a bunch of level meters, so I could see whether channels had audio even when I couldn't hear it from my laptop. I made changes, trying arbitrary approaches to see what helped, and then just tried to hold it together, like it was made of gaffer tape and Blu-Tack. Scott said I could have a few minutes to check it during the interval between concerts, so it was a last-second test.

I got on the system during the interval and ran my piece and it sounded like hell. The first few bits were ok, but then it just degraded. It was playing the wrong samples and the sound quality was crap, as if the frequency for the filter had been set to zero. It would have been hypothetically ok that it wasn't what I wrote, if it had worked musically at all. I turned to Scott and said it sounded like the Synth library wasn't loading correctly. I had 5 minutes before the concert was scheduled to start, and they wanted to start on time. I was trying to decide if it was worth just playing the first 3 minutes and then fading out. He squinted at my code and said, "oh, it's not loading correctly, because you can't do it that way" and told me what to change. I did the change and then ran about 30 seconds of that part and it worked.

They started letting people in just then. I realized that his fix meant that a lot of the other missing stuff was also going to come back, along with all the crap I had added while trying desperately to get the thing to work. I didn't know what it was going to sound like, just that the end might be really, really loud.

So, I had a piece that I had cobbled together from wreckage over the course of the afternoon, that I had never heard in full, and that I was about to play in front of a live audience. I was ready to fade it out whenever it started wheezing towards death.

So I started playing it, with control over only the master volume, so when levels turned out to be wildly wrong, I could only turn the whole thing up or down. Some stuff came out of speakers that weren't the ones I would have picked. And the very last section was extraordinarily loud, because the desperate repairs I had made all suddenly started working at once. Every time something started playing, I breathed a sigh of relief. A few bits were missing (some glitches also strangely vanished), but it was 90% of what I wrote. Again: with 3 minutes to spare, we had found the bug and barely been able to test, and it played ok, through to the end.

It was not the most stressful performance experience I've ever had, but it was close. It's the most stressful one I've ever been able to pull off.

I learned some stuff from this. Incidentally, my horoscope last week said I shouldn't just write how-to documents, I should share stories of how I came to want or need such a howto and I wondered how this astrological advice might apply to me. I am not asking the stars for a demonstration again!


SuperCollider, by default, allocates only a limited number of "wire buffers", the interconnect buffers that carry audio between UGens. When you have a huge speaker array, you have to raise a ServerOption: s.options.numWireBufs. I set it to 1024 and that fixed the problem. Incidentally, this shortage of wire buffers gave me no error messages until I tried moving the master fader up and down. Then I got a bunch of "node not found" errors, plus one that seemed more topical. This one issue explained a large portion of why my piece wasn't working.
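In code, the fix is just this (a sketch; 1024 is the value that worked for me, and the option has to be set before the server boots):

```supercollider
// Raise the wire buffer allocation. Options only take effect on a
// (re)boot, so set this first, then reboot the server.
s.options.numWireBufs = 1024;
s.reboot;
```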

You can get a bunch of graphical meters (one for every defined output channel) by clicking on the server window to make it active and then typing the letter 'l'. This is apparently undocumented, but it is really very helpful.
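For what it's worth, more recent versions of SuperCollider also expose meters from the language side (I'm assuming your build includes this; mine at the time only had the keyboard shortcut):

```supercollider
// Opens a window of level meters, one per input and output channel,
// without needing to click the server window and type 'l'.
s.meter;
```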

If you want to change amplitudes with a fader or whatever in a live situation, the best way to do it is with busses. Normally I would change the value in a Pbind, so that the next event would be louder or softer, but if you're playing longer grains, that feedback isn't fast enough to respond to issues that come up in real time, and slow changes are hard to hear, so you can't tell whether your change is even working. Live control busses are good for this.
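A minimal sketch of what I mean, with made-up names: the synth reads its amplitude from a control bus, so anything writing to that bus takes effect immediately, mid-note, rather than waiting for the next event:

```supercollider
// Allocate a one-channel control bus to act as a live fader.
~ampBus = Bus.control(s, 1);
~ampBus.set(0.5);  // starting level

SynthDef(\busAmp, { |out = 0, freq = 440, ampBus|
    // Read the current fader value every control period.
    var amp = In.kr(ampBus, 1);
    Out.ar(out, SinOsc.ar(freq) * amp ! 2);
}).memStore;  // .add in current SuperCollider

x = Synth(\busAmp, [\ampBus, ~ampBus]);
~ampBus.set(0.1);  // changes the level immediately, no new event needed
```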

Don't write any amplitude changes that cause peaking. I happen to think that digital peaking sounds good with floating-point numbers. However, the BEAST system is cranked, and so are many other systems. If you want to peak, you've got to either turn all the speakers down (yeah, right) or fake it. Without the peaking, what was supposed to be a modest crescendo became stupidly huge.
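One way to fake it (a sketch, not what was in the piece; the names and levels are made up): clip the signal inside the synth at unity so you get the distorted timbre, then scale the result down so the actual output never slams a cranked system:

```supercollider
SynthDef(\fakeClip, { |out = 0, freq = 220, drive = 4, level = 0.3|
    var sig = SinOsc.ar(freq) * drive;  // deliberately overdriven
    sig = sig.clip2(1.0);               // the "peaking" sound, made in-synth
    sig = Limiter.ar(sig * level, 0.9); // keep the real output polite
    Out.ar(out, sig ! 2);
}).memStore;  // .add in current SuperCollider
```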


When I was working on the piece, I started to suspect that I was having issues with writing my SynthDefs to disk. I thought my older versions were persisting, so I decided not to write them at all, but just load them into memory and send them to the server. So I did SynthDef(\blahblah, { ... }).send(s); You cannot do that. You must memStore it. If you just send it to the server, your Pbinds don't know what the SynthDef's arguments are, and instead of sending the event with the values you provide, it sends crap.
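Concretely, this is the difference (with \blahblah standing in for the real def; the arguments here are invented for illustration):

```supercollider
// BROKEN for Pbind use: the server gets the def, but the language keeps
// no description of its arguments, so Pbind can't match \freq and \amp:
// SynthDef(\blahblah, { ... }).send(s);

// WORKS: memStore (or .add in current SuperCollider) also registers the
// argument names in the SynthDescLib that Pbind consults.
SynthDef(\blahblah, { |out = 0, freq = 440, amp = 0.1|
    Out.ar(out, SinOsc.ar(freq, 0, amp) ! 2);
}).memStore;

Pbind(
    \instrument, \blahblah,
    \freq, Pseq([440, 550, 660], 1),
    \amp, 0.2
).play;
```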

This is a bug in SuperCollider. Yeah, it's documented, but just because a bug appears in the help files doesn't mean it's not a bug. There are too many different ways to save SynthDefs. There should be two: one that writes it to disk and does all the right things, and one that keeps it in memory and does all the right things. I doubt there is a compelling argument for being able to send a SynthDef to the server but not be able to use it with a Pbind. Any argument in favor of that has to be pretty fucking compelling, because it really makes no sense. It's confusing and nonsensical. Frankly, sending a SynthDef that you can't use in every case should be the exception and not the norm, so perhaps it could be an obscure option rather than the method shown most often in the help files. The send/Pbind combination is broken and needs to be fixed.

And while we're at it, Pbinds are not innocent in this. If I set a bunch of environment variables as if they were arguments to a synth, what good reason is there for not sending them as arguments?

I'm fine with there being more than one way to do things, and every language has its obscurities, but SuperCollider is overdoing it a bit. These common things that you do all the time trip up students, not just because they're inexperienced, but because these idiosyncrasies are errors in the language. This isn't French! It doesn't help noobs or old-timers to have weird crap like this floating around. Flush it! My whole piece was almost sunk by this, and I have trouble believing that whatever advantage it provides is worth the exposure to accidental error it causes. The need is obscure, so make the invocation obscure. Otherwise it's like setting traps.

But hey, it all kind of worked. I might do a 5.1 mix next, especially if I can find such a system to work with. Otherwise, look out for stereo.


Daniel Wolf said...

In the words of Jerry Hunt: "Uh, sound system? Mono, up." (See this interview:

Dave Seidel said...

Whew! Glad you pulled it off. Talk about trial by fire.

The closest I came to this situation was when I had a (pre-rendered) piece played at a concert in Montreal, at Concordia. It was an all-eight-channel concert. I had no way to try it out beforehand except in stereo, and I didn't get a chance to do a sound check. They had a really excellent sound system that included a big subwoofer in addition to the eight powered speakers. When my piece played, I found that although the multi-channel aspect worked just fine, one of the tracks had a low-frequency component that I had never been able to hear before, but that came out quite strongly through the subwoofer as a regular "beat" that continued throughout the piece.

The lesson I learned was to be much more diligent about mixing/mastering my sound files -- headphones are not enough! I have since acquired some decent close-field powered monitors for that purpose, as well as a lot more knowledge.