
Friday 24 September 2010

Nick Collins: Acousmatic

Continuing live blogging the SC symposium.

He's written anti-aliasing oscillators: BlitB3Saw, a BLIT-derived sawtooth that's twice as efficient as the current band-limited sawtooth. There are a bunch of UGens in the pack. The delay lines are good, apparently.
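A quick sketch of what it looks like in use (assuming the SLUGens pack from sc3-plugins is installed, and that BlitB3Saw takes freq and leak arguments; check its help file for the exact signature):

{ BlitB3Saw.ar(220, 0.99) * 0.2 }.play // BLIT-derived band-limited sawtooth
{ Saw.ar(220) * 0.2 }.play // the standard band-limited Saw, for comparison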

Auditory Modelling plugin pack: the Meddis UGen models cochlear implants. (!)
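I haven't tried this pack, but usage is presumably along these lines (assuming the Meddis UGen just takes an audio-rate input; check the help file for the real arguments):

{ Meddis.ar(SoundIn.ar(0)) }.scope // auditory-model response to the mic input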

Try out something called Impromptu, which is a good programming environment for audio-visual programming. You can rewrite UGens on the fly. (!)

Kling Klang

(If Nick Collins ever decided to be an evil genius, the world would be in trouble)

{ SinOsc.ar * ClangUgen.ar(SoundIn.ar) }.play

The ClangUgen is undefined. He's got a thing that opens a C editor window. He can write the UGen and then run it. Maybe, I think. His demo has just crashed.

OK, so you can edit a C file and load it into SC without recompiling everything. Useful for livecoding gigs, if you're scarily smart, or for debugging.

Auto Acousmatic

Automatic generation of electroacoustic works. Integrate machine listening into the composition process. Algorithmic processes are already used by electroacoustic composers, so take that as far as possible. Also involves studying the design cycle of pieces.

The setup requires knowing the number of output channels, the duration, and some input samples.

In bottom-up construction, source files are analysed to find interesting bits; those parts are processed and then used again as input. The output files are scattered across the work. It uses onset detection, dominant-frequency finding, silence exclusion, and other machine listening UGens.
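His actual code isn't shown, but here's a rough sketch of that kind of analysis pass using stock machine listening UGens (Onsets for onset detection, Pitch for dominant frequency), reporting segment times back to the language:

// first load any source file
b = Buffer.read(s, "sounds/a11wlk01.wav");

// then run the analysis over it, posting onset times and pitches
(
OSCdef(\seg, { |msg| [\time, msg[3], \freq, msg[4]].postln }, '/segment');
{
    var sig, onsets, freq, hasFreq;
    sig = PlayBuf.ar(1, b, BufRateScale.kr(b), doneAction: 2);
    onsets = Onsets.kr(FFT(LocalBuf(512), sig), 0.3);
    # freq, hasFreq = Pitch.kr(sig);
    SendReply.kr(onsets, '/segment', [Sweep.kr(0), freq]);
    Silent.ar
}.play;
)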

Generative effect processing, like granulation.
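Again, not his code, just the generic idea, granulating the buffer loaded above with GrainBuf:

(
{
    var trig = Dust.kr(20); // about 20 grains per second, at random times
    GrainBuf.ar(2, trig, 0.1, b,
        rate: LFNoise1.kr(1).range(0.8, 1.2), // slight pitch wobble
        pos: LFNoise1.kr(0.3).range(0.0, 1.0), // wander through the file
        pan: LFNoise1.kr(2)
    ) * 0.3
}.play;
)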

Top-down construction imposes musical form. There are cross-synthesis options for this.

This needs to run in non-real time, since it takes a lot of processing. There's a lot of server-to-language communication, currently done with Logger.
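For reference, the usual way to render offline in SC is via Score and scsynth's non-real-time mode, something like this (generic usage, not his Logger-based pipeline):

(
var score = Score([
    [0.0, [\s_new, \default, 1000, 0, 0, \freq, 440]],
    [2.0, [\n_free, 1000]],
    [3.0, [\c_set, 0, 0]] // dummy message to mark the total duration
]);
score.recordNRT("/tmp/nrt.osc", "/tmp/out.aiff",
    options: ServerOptions.new.numOutputBusChannels_(2));
)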

How to evaluate the output: tell people that it's not machine composed, play it for them, and then ask how they like it. He's also been entering the pieces in electroacoustic competitions; for that you need to know the normal probability of rejection. He normally gets rejected 36% of the time (he's doing better than me).

He's sending things he hasn't listened to, to avoid cherry picking.

Example work: fibbermegibbet20

A self-analysing critic is a hard problem for machine listening.

This is only a prototype. The real evil plan to put us all out of business is coming soon.

The example work is 55 seconds long, in ABA form. The program has rules for section overlap to create a sense of drama. It has a database of gestures. The rules are contained in a bunch of SC classes, based on his personal preferences. Will there be presets, i.e., "sound like Birmingham"? Maybe.

Scott Wilson is hoping this forces people to stop writing electroacoustic works, or, as he phrased it, "forces people to think about other things." He sees it as intelligent batch processing.

The version he rendered during the talk is 60 seconds long, completely different from the other one, and certainly adequate as an acousmatic work.

Will this be the end of acousmatic composing? We can only hope.

