James is a composer and artist based in nearby New Cross, whose talent and prolific output I never cease to marvel at. He's got his digital act together with a long-overdue web portfolio at jamesjbulley.com, including his recent piece "Equal Temperance", a composition based on recordings from the 29 pianos that made up Luke Jerram's fine Play Me, I'm Yours. I'm currently working on a remix based on these half-environmental/half-performative recordings, which we're discussing transforming into a somewhat larger project...
Attached is the audio recording of my segment of Songs for Dynamical Systems, the workshop-plus-gig I recently mentioned, part of the Live Algorithms for Music network. The workshop itself was enormously stimulating, bringing up many interesting issues that only emerge when a software system is forced to improvise entirely autonomously with a human performer -- mostly the class of judgments which are entirely intuitive to a musician, and broadly without generalisable rules. Without any symbolic input, for example, how can an autonomous system gauge when to cease playing?
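One plausible answer to the stopping question, sketched purely for illustration (this is not the method used in the workshop), is to track the performer's energy envelope and cease playing only after a sustained stretch of quiet. The function name, threshold, and patience values below are all hypothetical:

```python
import numpy as np

def should_stop(signal, sr=44100, frame=1024, threshold=0.01, patience_s=2.0):
    """Decide whether the human performer has fallen silent for long
    enough that the machine should also cease playing.

    A hypothetical heuristic: compute per-frame RMS energy and stop
    once it stays below `threshold` for `patience_s` seconds.
    """
    n_frames = len(signal) // frame
    rms = np.array([np.sqrt(np.mean(signal[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n_frames)])
    patience_frames = int(patience_s * sr / frame)
    quiet = rms < threshold
    # Count trailing consecutive quiet frames.
    run = 0
    for q in quiet[::-1]:
        if not q:
            break
        run += 1
    return run >= patience_frames

# One second of loud playing followed by three seconds of silence:
sr = 44100
loud = 0.5 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
silence = np.zeros(3 * sr)
print(should_stop(np.concatenate([loud, silence]), sr=sr))  # stop
print(should_stop(loud, sr=sr))                             # keep playing
```

Of course, a musician's sense of an ending involves far more than silence detection, which is precisely the point the workshop kept circling back to.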
The gig itself, which was the culmination of a feverish few days' collaborative coding (using the P/f/Q = machine-listening/algorithmic-sequencing/machine-synthesis trio that constitutes the core Live Algorithm philosophy of modularity), showcased an impressively wide range of approaches and sonic palettes. It was also great to see collaborative performances pulled off with such panache, with OSC used for communication between participants over a wireless network.
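For the curious, the wire format that makes this inter-performer messaging so easy is tiny: an OSC message is just a null-padded address pattern, a type-tag string, and big-endian 32-bit arguments. Here's a minimal encoder following the OSC 1.0 spec (the address `/agent/loudness` is an invented example, not one of the actual messages we used):

```python
import struct

def osc_message(address, *args):
    """Encode a minimal OSC 1.0 message (ints and floats only),
    ready to be sent as a single UDP datagram."""
    def pad(b):
        # OSC strings are null-terminated, then padded to a 4-byte boundary.
        b += b"\x00"
        return b + b"\x00" * (-len(b) % 4)

    msg = pad(address.encode())
    tags = "," + "".join("i" if isinstance(a, int) else "f" for a in args)
    msg += pad(tags.encode())
    for a in args:
        msg += struct.pack(">i" if isinstance(a, int) else ">f", a)
    return msg

# e.g. broadcasting one agent's current loudness estimate to the group
packet = osc_message("/agent/loudness", 0.73)
print(len(packet), packet[:16])
```

In practice everyone just used an existing OSC library, but it's reassuring to know the whole protocol fits in a dozen lines.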
My own contribution, audible in the recording above, was to develop a minimal responder that would roughly mimic Eddie's playing along a number of axes, each of which corresponds to a high-level musical feature (density of onsets, loudness, roughness/smoothness, frequency centroid, etc). Due to the noisiness of these measurements, and a certain degree of chaos inherent within the sequencing system, this is never remotely as linear as it sounds on paper! I also adopted a vaguely agent-based approach to positioning the sounds in stereo space, with nine notional agents responding autonomously to Eddie's playing.
Eddie himself was applying extended techniques to his tam-tam, with bowing and scraping supplemented by a motorised set of wire-beaters that he has affixed to it. It was an enthralling performance to watch, despite the nerves inherent in watching a bunch of piecemeal code attempting to play along with such a marvellous performer.
It's been a wild couple of months. Despite the first nice British summer in recent memory, I've barely had a chance to enjoy a single idle afternoon in Brockwell Park, though I may be experiencing some tanning-like effects from CPU glare.
After a hectic few days at NIME, with whirlwind 15-minute talks interspersed with demos from the likes of Madrona Labs' stunning SoundPlane A (which is the future of digital instruments -- believe!), Roger Dannenberg's robot-bagpiping McBlare and a superb performance by the Princeton Laptop Orchestra, I was fortunate enough to make it to the 5-day symposium on "Computation and Creativity" at the Dagstuhl centre for informatics, nestled in the German countryside. This meant exchanging ideas and banter with the likes of Jon McCormack, Philip Galanter, Peter Cariani, Maggie Boden, Frieder Nake and countless other luminaries of the evolutionary art/science/theory/AI world. Insane, but wonderful. My brain is still somewhat aching.
The one concrete outcome that I was involved with was the development of a play world for multi-participant visual creation, a continuous shape-manipulation environment for nine participants. It's intended to exemplify, in the simplest manner possible, an emergent visual language that appears through the interplay of a handful of forms.
We'll hopefully be developing some of these ideas further in the near future, incorporating some of the ideas of François Pachet's beautifully elegant Continuator.
Virtually straight off the back of this, I'm now hurriedly working on some code for the 2009 Live Algorithms workshop taking place here at Goldsmiths. The premise is to create a series of completely autonomous software agents which can respond to a purely auditory input from a human performer -- in this case, the brilliant percussionist Eddie Prévost, who is leading the performative sections of the workshops. This is culminating in a gig tomorrow night (6 August) at Cafe Oto, where each of the algorithms will be appearing in a duet with Eddie. It's somewhat unusual to feel nervous about a gig in which I won't be playing a single note.
Recordings and code will be up shortly. In the meantime, here's a visual scrapbook of other things I've been working towards over this manic summer.