Friday 4 March 2005

a "brief" statement

Please write a brief background statement regarding your interests/experiences in electronic & computer music. Include information on compositional styles in which you have worked, software/hardware with which you're familiar, publications (if any) and the names and institutional affiliations of the electronic music/composition teachers with whom you've worked.

My interest in computer music began as soon as my family got a computer in the 1980s. I wrote programs in BASIC that were early attempts at algorithmic composition. I did not return to electronic music until I attended Mills College in Oakland, California, from 1994 to 1998. I went to Mills College to study Computer Science, but I quickly found myself gravitating toward the Center for Contemporary Music. I studied electronic music and composition with Maggi Payne, who taught synthesis techniques on a large Moog modular synthesizer. I fell in love with the Moog and the intuitive approach to sound creation that went with it. I graduated with a double major in Computer Science and Electronic Music.

While at Mills, I also studied MAX with John Bischoff. I no longer use MAX, but Bischoff’s primary lasting influence on me was his emphasis that the weaknesses of any musical system were also its strengths. While I disagree in the case of MAX, I’ve found this to be a very useful way to explore any tool.

After I graduated from Mills in 1998, I went to work in the computer industry and found time for music on evenings and weekends. Music was a hobby for me at the time. At home, I used an MOTM analog modular synthesizer with Pro Tools on my Macintosh to create experimental electronic and noise pieces. All of my work at the time was for tape. I recorded source sounds, such as field recordings or interesting synthesizer patches, and mixed them together, so that composing was as much about finding or creating the source sounds as about mixing them. Often there was a metaphor or idea that tied all of the source sounds together, but sometimes I just recorded interesting patches until I had "enough" of them and looked for interesting ways to mix them together. I loved doing this because of the focus on pure sound and also because of its tactility. I had tapes played at a few festivals and released a noise / experimental electronic CD that is now out of print.

At my job, I was a web programmer and a release engineer. However, when the American economy began to slow, the downturn hit the technology sector first, and I was laid off in 2001. I took a few months to travel and then decided to do freelance programming and dedicate as much of my time as possible to music. I played bass guitar in several art rock bands, including one that was entirely improvisational, and I created more tape music. I learned some new methods, like data bending (playing a piece of computer data as if it were a sound file), and built a few electronic music kits, including a theremin.
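Since "data bending" may be unfamiliar, here is a minimal sketch of the idea in Python. This is an illustration rather than the actual tool used at the time; the filenames and sample rate are hypothetical examples. The raw bytes of any file are reinterpreted as 16-bit mono PCM samples and written out as a playable WAV.

```python
import wave

def databend(in_path: str, out_path: str, rate: int = 44100) -> None:
    """Reinterpret the raw bytes of any file as 16-bit mono PCM
    samples and write them out as a playable WAV file."""
    with open(in_path, "rb") as f:
        raw = f.read()
    if len(raw) % 2:        # 16-bit samples need an even byte count
        raw = raw[:-1]
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(raw)  # bytes passed through unchanged
```

Executables, images, spreadsheets — anything with internal structure — produce surprisingly varied textures when "played" this way.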

I began to see my intuitive approach to composition as a limitation, so I sought a theoretical underpinning. I experimented with some algorithmic compositional ideas in MAX, but I was frustrated by its inability to easily manipulate lists. Some friends and I collaborated on a few installations, including two that were installed for an evening at the Exploratorium in San Francisco, California. One used long delay lines to create feedback loops and find resonances; the other used contact mics attached to moving parts of the museum's exhibits.

Meanwhile, I was reading books on composers' theories of composition, such as Philip Glass’ Music by Philip Glass, John Cage’s Silence and Pauline Oliveros’ Software for People. I also became interested in tuning theory. I found that the Java version of the Just Intonation Calculator had been open-sourced and then abandoned, so I adopted the project. I fixed some bugs and added new features, and one day I hope to have time to clean up the graphics.

I decided that I would benefit most from a structured, school environment. I applied to Wesleyan University based on my admiration for the faculty. I was accepted and started in the fall of 2003.

Since arriving here, I have taken classes with and received composition lessons from the entire composition faculty: Alvin Lucier, Anthony Braxton, Ron Kuivila and Neely Bruce. Ron Kuivila is my advisor and my primary composition teacher. I took a class in SuperCollider from Kuivila my first semester and have used it as my primary compositional tool ever since. Kuivila teaches a conceptual and theoretical approach to composition and time structures, based largely on the theories of John Cage.

I study free improvisation and composition with Anthony Braxton. I am working with him to develop SuperCollider programs that respond to his improvisations in real time, so that he can play along with the computer.

I studied Just Intonation and tuning informally with composer Ellen Fullman, who is not affiliated with a school.
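For readers unfamiliar with Just Intonation: intervals are tuned as whole-number frequency ratios rather than equal-tempered steps. A small illustrative sketch in Python (the 1/1 base frequency of 440 Hz is an arbitrary example, not a claim about any particular piece):

```python
from fractions import Fraction
from math import log2

def ratio_to_cents(ratio: Fraction) -> float:
    """Size of a just interval in cents (1200 cents per octave)."""
    return 1200.0 * log2(ratio)

def ratio_to_hz(ratio: Fraction, base_hz: float = 440.0) -> float:
    """Frequency of the interval above an arbitrary 1/1 of base_hz."""
    return base_hz * float(ratio)
```

For example, the just perfect fifth 3/2 comes out to roughly 702 cents, about 2 cents wider than the equal-tempered fifth of 700 cents; this kind of calculation is what tools like the Just Intonation Calculator automate.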

My recent work has been in SuperCollider. About half of my pieces use just-intoned computer-generated sounds. The other half use granular synthesis techniques to create text-sound poetry. Charles Amirkhanian, Paul de Marinis and the recently released OU archives have influenced me in this genre. I use the recorded voices of right-wing pundits as source material. My programs analyze the audio files to find the breaks between phrases and re-order the pundits' words, or find the pitches of their voices. I use repetition in these pieces because the subtexts of violence and fascism become clearer after hearing the same phrase several times.
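The phrase-finding step can be sketched roughly as silence detection. This is a simplified Python illustration rather than the actual SuperCollider code, and the threshold and gap values are made-up examples: a run of consecutive low-amplitude samples is treated as a break, and the segments between breaks are returned so they can be re-ordered.

```python
from typing import List

def split_on_silence(samples: List[float], threshold: float = 0.05,
                     min_gap: int = 3) -> List[List[float]]:
    """Split a sample list into phrases wherever a run of at least
    min_gap consecutive samples stays below the amplitude threshold."""
    segments: List[List[float]] = []
    current: List[float] = []
    quiet_run = 0
    for s in samples:
        current.append(s)
        if abs(s) < threshold:
            quiet_run += 1
            # a long enough silence closes the current phrase
            if quiet_run >= min_gap and any(abs(x) >= threshold for x in current):
                segments.append(current[:-quiet_run])
                current, quiet_run = [], 0
        else:
            quiet_run = 0
    # keep a trailing phrase only if it actually contains sound
    if any(abs(x) >= threshold for x in current):
        segments.append(current)
    return segments
```

In practice the real analysis works on whole audio files and uses windowed amplitude rather than single samples, but the principle is the same: once the phrases are isolated, shuffling or repeating the resulting segments produces the re-ordered speech.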

Aside from SuperCollider, MAX and Pro Tools, I also use SoundHack and Audacity. I have used OSC to communicate between a Perl script I wrote and a SuperCollider patch. My primary non-musical computer languages are Perl, Bash and Java. I have avoided doing sound programming in Java, except for a small part of the jJICalc, as most Java sound libraries are not cross-platform. I programmed in C and C++ as an undergrad and have dabbled in many other languages. I have Linux/Unix experience, including some system administration. My main system now is OS X. I can navigate in Windows, but I am not a power user. I have experience with PC hardware, including assembling systems and doing repairs. I know how to read a schematic and use a soldering iron and a volt meter.

I feel that SuperCollider offers a vast ocean of possibilities and that I have just been wading close to the beach. Studying Xenakis’ theories will help me engage more fully with computer music, give me a deeper understanding and help me form my own compositional style. I hope that you will accept me into your program.


1 comment:

Anonymous said...

i'm not a good essay writer so take what i say w/ a grain of salt.

I'd say, too many "I's" in some of your paragraphs.

is your statement supposed to have a point? Like, are you supposed to summarize what they asked you to write about in the beginning and end of the essay? it seemed like you were just listing all your experiences throughout, and then at the last paragraph, tried to tie thing together. but it was rather abrupt, esp the last sentence....

that's all i can comment on, cuz i don't know what of what you were talking about since i don't know music...


Commission Music
Bespoke Noise!!