
Variable 4 Portland Bill

The fourth edition of Variable 4 will be appearing this autumn, taking meteorology and generative music to the Jurassic Coast of the South-West: Variable 4 Portland Bill.

We have also finished a comprehensive overhaul of the Variable 4 site, including documentation archives of previous locations and an improved news archive featuring extended miscellanea on weather, art and sound.

Berlin sound art gallery Singuhr ceases regular operations

Sad to belatedly discover that Singuhr, the venerable sound art gallery sited in an underground former reservoir in Prenzlauer Berg, has wound up its operations due to a shortage of funding. The internal architecture of the Wasserturm -- a network of concrete alcoves, with a large circular tunnel around the perimeter -- made for a unique and powerful acoustic environment, exploited by many of the artists who exhibited there.

Hearing Gordon Monahan's Resonant Platinum Records there, lightly resonating through the vaults, was one of the most singular listening experiences I can remember. Its quietness in the cavernous space gave a unique and unexpected kind of intimacy.

Gordon Monahan, Resonant Platinum Records (photo: Gordon Monahan)

It sounds like Singuhr intend to continue programming events elsewhere on a project-by-project basis. Let's hope that upping sticks will give rise to new locations and new sonic possibilities.

Living Symphonies

A new year always seems like an appropriate time to push new projects out into the bright lights of the world. So, after almost a year of R&D, I'm very happy to be able to announce a major new work that will be occupying much of my 2014.

Living Symphonies is the latest collaborative work by Jones/Bulley. It is a sound installation based on the dynamics of a forest ecosystem, growing, adapting and flourishing in the same way as a real forest's flora and fauna. Modelling the real-world behaviours of over 50 different species, it will be installed in a series of English forests over the course of summer 2014, adapting to the inhabitants and live atmospheric conditions of each site.

In the lineage of Variable 4, it will be heard as a multi-channel musical composition of indefinite duration, with precomposed and generative elements intertwined through a web of algorithmic processes. Here, however, the dynamic model underlying the composition is quite beyond anything we've done before. It is based upon a simulation developed in conjunction with Forestry Commission ecologists, extending models produced as part of my evolutionary dynamics PhD work. And because each forest has a drastically different ecological makeup, the resultant composition will sound completely unique at each location — site-specific by its very nature.

We are in the process of mapping out the precise ecological makeup of a bounded (30x20m) area of each forest, charting its wildlife inhabitants at a 1m² resolution. This map is then used to seed an agent-based simulation, which links each species to behavioural and musical properties, spatialised across a network of weatherproof speakers embedded throughout the canopy and forest floor.
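
As a concrete illustration of this seeding step, here is a minimal Python sketch. The survey data, species names and behavioural rule are all invented for the example; none of it is drawn from the actual Living Symphonies codebase.

import random

# Purely illustrative: the surveyed 30x20m plot becomes a grid of 1m-squared
# cells, each listing the species charted there. Every observation seeds one
# agent in the simulation.
survey = {
    (4, 12): ["oak", "wood ant"],
    (17, 3): ["bracken", "muntjac deer"],
    # ... one entry per occupied cell
}

class Agent:
    def __init__(self, species, cell):
        self.species = species
        self.cell = cell
        self.activity = 0.5  # later drives which musical fragments play

    def step(self, weather):
        # Toy behavioural rule: activity drifts with temperature plus noise.
        drift = 0.01 * (weather["temperature"] - 10) + random.uniform(-0.05, 0.05)
        self.activity = min(1.0, max(0.0, self.activity + drift))

agents = [Agent(species, cell)
          for cell, species_list in survey.items()
          for species in species_list]

for tick in range(100):
    weather = {"temperature": 14.0}  # in the piece, live atmospheric readings
    for agent in agents:
        agent.step(weather)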

We'll next be dedicating a great deal of studio time to recording thousands of musical fragments, with orchestral musicians playing short motifs corresponding to particular kinds of ecological processes. These will then be processed by the compositional system and linked to the ecological model's current state, supported by further generative processes to create live interactions between each musical element.
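
The link from model state to recorded audio can likewise be imagined as a simple lookup, something like the following sketch (the process names and filenames are invented for illustration):

import random

# Hypothetical mapping from (species, ecological process) to a pool of
# pre-recorded orchestral fragments; one is chosen whenever an agent of that
# species enters that process.
fragments = {
    ("oak", "growth"): ["oak_growth_01.wav", "oak_growth_02.wav"],
    ("wood ant", "foraging"): ["ant_foraging_01.wav"],
}

def fragment_for(species, process):
    pool = fragments.get((species, process))
    return random.choice(pool) if pool else None

print(fragment_for("oak", "growth"))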

Thetford Forest

In September, we carried out a successful outdoor prototype of the project in East Anglia's Thetford Forest. Though still in its embryonic stages, it was pretty enthralling to hear these sonic organisms roving amongst the undergrowth.

Supported by Sound and Music and Forestry Commission England, with the aid of an Arts Council Strategic Touring grant, Living Symphonies will be touring four different forests between May and September 2014:

Thetford Forest (Norfolk/Suffolk), 24 — 30 May 2014
Fineshade Wood (Northamptonshire), 20 — 26 June 2014
Cannock Chase (Staffordshire), 26 July — 1 August 2014
Bedgebury Pinetum (Kent/Sussex), 25 — 31 August 2014

Much more news will be available on the forthcoming Living Symphonies website, launching imminently.

Archived Sounds

I have been digging through my sound archives and posting a few old works to my SoundCloud profile, some dating back to 2009.

Here is a new piece: an edit of Jürgen Müller's Sauerstoff Blasen (Oxygen Bubbles) with vocals from Miley Cyrus' Wrecking Ball.

The Markup Melodium

I was recently invited by Mozilla to be a fellow on their Webmaker program, an excellent initiative to foster web literacy. As part of the fellowship, I was asked to create something which exploited the affordances of their maker tools.

I was drawn to the immediacy of Thimble, a browser-based interface to write web code and see the results instantly. I began pondering its potential as a kind of live coding environment: could an HTML document be translated into a piece of music and edited on the fly, with an immediate reflection of its structure and contents?

The outcome is this: The Markup Melodium. Using jQuery and Web Audio, it traverses the DOM tree of an HTML page and renders each type of element in sound. In parallel, it does likewise for the text content of the page, developing the phoneme-to-tone technique we used in The Listening Machine.
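
The real Melodium is JavaScript, but the traversal idea fits in a few lines of Python using only the standard library; the tag-to-pitch mapping below is invented for illustration.

from html.parser import HTMLParser

# Invented mapping from element type to MIDI pitch; the actual Melodium
# assigns each element its own sound via Web Audio.
TAG_PITCHES = {"h1": 72, "p": 60, "em": 69, "a": 67, "li": 64}

class MelodiumSketch(HTMLParser):
    """Walk the document in source order, emitting one note per element."""
    def __init__(self):
        super().__init__()
        self.notes = []

    def handle_starttag(self, tag, attrs):
        if tag in TAG_PITCHES:
            self.notes.append(TAG_PITCHES[tag])

parser = MelodiumSketch()
parser.feed("<h1>Jabberwocky</h1><p>'Twas <em>brillig</em>...</p>")
print(parser.notes)  # [72, 60, 69]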

By way of example, hear Lewis Carroll's Jabberwocky as rendered by the Melodium. To explore the basic elements, here is a short composition for the Melodium. And the really exciting part: using Thimble's Remix feature, you can clone this basic composition and immediately begin developing your own remix of it in the browser, before publishing it to the world.

As the Markup Melodium is implemented in pure JavaScript, it's also available as a bookmarklet so that you can sonify arbitrary web pages.

Drag the following link to your browser's bookmark toolbar: Markup Melodium.

And, of course, all of the code is available on GitHub.

The name is in tribute to The Melodium, a 1938 musical instrument created by German physicist Harald Bode, whose pioneering modular designs anticipated today's synthesizers by many decades.

Jones/Bulley portfolio

James and I have just finished a radical overhaul of the portfolio for our collaborative practice.

JONES-BULLEY.COM

It now features full documentation of our existing works, plus some previews of our schedule heading into 2014.

Chronovisor: Prologue

I have been working with the excellent South Kiosk to develop a new exhibition at FoodFace, Peckham's artist-led studio space.

The concept is the creation of a Chronovisor, a device which allows the viewer to see into different parts of time and space. For Chronovisor: Prologue, the concept is realised as a cybernetic light and sound response system, with artists curating audio and visual elements which collectively create a reactive, nonlinear whole.

I am curating a series of abstract films which will be projected on a split R/G/B display, prototyped in the image below. The selection begins with Viking Eggeling's abstract Constructivist cinema ('Symphonie Diagonale', 1924), moves through the rise of the analog video synthesizer ('The Rutt-Etra Video Synthesizer', 1970), and ends with the silver-screen computer graphics work of Abel Image Research ('Cans', 1984).

This video selection, alongside the selections of 9 other artists, will be processed by the lightness-ordering algorithm I developed for Sans/Soleil (2011), fracturing each film's linear timeline into a series of splintered fragments.

The source code for this processing app is available on GitHub: LuminOrder, developed using Cinder.
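
LuminOrder itself is a Cinder (C++) app, but the core idea is compact enough to sketch in Python, assuming OpenCV is available:

import cv2  # OpenCV; purely a sketch -- LuminOrder itself is built in Cinder

def lightness_order(path):
    """Return the frame indices of a video, sorted from darkest to lightest."""
    capture = cv2.VideoCapture(path)
    brightness = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        brightness.append(gray.mean())
    capture.release()
    # Re-ordering frames by mean lightness fractures the linear timeline.
    return sorted(range(len(brightness)), key=lambda i: brightness[i])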

Chronovisor: Prologue runs from 6.30pm to 10pm at FoodFace.

A-Z.js

Quell the ceaseless anguish of unordered sequences with A-Z.js, a JavaScript bookmarklet to immediately sort any webpage into alphabetical order.

ubuweb alphabetized

The source code is also available on GitHub: A-Z.js

Generative Notation and Hacking The Quartet

The last few years have seen a proliferation of hack days, in which participants spend a day or two sketching and building prototype ideas in code. For me, the most appealing are those that deal with a specific concept, with participants given free rein to explore a small zone of creative ideas -- often a more inspiring starting point than a series of data sets.

Thus, it was impossible to resist the allure of Hack The Quartet: a two-day event hosted by Bristol's iShed, offering guests the rare opportunity to work closely with a world-class string quartet. The event brief sums up part of the appeal really nicely:

A quartet is like a game of chess; simple in its make up and infinite in its possibility. So how can new technologies be used to augment performance of and engagement with chamber music?

In my mind, there's a perfect balance in the relative constraints of this ensemble size, coupled with the opportunity to link the richness of virtuoso musicianship with the possibilities for algorithmic augmentation. I've been thinking a lot about these ideas since writing The Extended Composer but it's rare to be able to put them into practice in a live environment, particularly with players of the calibre of the Sacconi Quartet.

Generative Notation

I went into Hack The Quartet with an unusually well-formed idea: to create a tablet-based system to render musical notation in real-time, based on note sequences received over a wireless network. Though there are plenty of iPad score display apps out there, the objective here was to begin with a set of empty staves, onto which notes materialise throughout the performance.
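
By way of a sketch, each note might travel as a small JSON message like the one below; this schema is hypothetical, not the format we actually used.

import json
import time

# Hypothetical note-event schema: each message tells one iPad which note to
# draw next on its otherwise-empty staves.
event = {
    "part": "violin-1",
    "pitch": 62,          # MIDI note number
    "duration": 1.5,      # in beats
    "annotation": "pp, sul tasto",
    "timestamp": time.time(),
}
payload = json.dumps(event)  # dispatched to the tablet, e.g. via Socket.io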

The potential uses for this are manifold. Imagine the situation in which a dancer's movements are tracked by camera, with a piece of software translating their motions into musical shapes. These could be rendered as a set of four scores -- one for each musician -- and performed by the quartet in real-time, creating a musical accompaniment which directly mirrors the dancers' actions.
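
To make that example concrete, here is a toy version of the mapping in Python; the scale, the position-to-note rules and the round-robin part assignment are all invented for illustration.

import random

SCALE = [60, 62, 64, 67, 69, 72, 74, 76]  # arbitrary illustrative scale

def motion_to_parts(positions, num_parts=4):
    """Map a stream of normalised (x, y) positions to four note lists."""
    parts = [[] for _ in range(num_parts)]
    for index, (x, y) in enumerate(positions):
        pitch = SCALE[int(y * (len(SCALE) - 1))]  # height selects pitch
        duration = 0.25 + x * 0.75                # stage position selects length
        parts[index % num_parts].append((pitch, duration))
    return parts

# Stand-in for live camera tracking: sixteen random positions.
positions = [(random.random(), random.random()) for _ in range(16)]
for number, notes in enumerate(motion_to_parts(positions), 1):
    print("Part %d: %s" % (number, notes))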

Of course, we can substitute dancers' movements for sensor inputs, web-based data streams, or any kind of real-time information source. The original motivation for the project came out of discussions surrounding The Listening Machine, which translated online conversations into musical sequences based on an archive of around 50,000 short samples, each representing a single word or syllable. Creating a sonification system based on fragments of pre-recorded audio was all very well, but imagine the fluidity and richness of interpretation if The Listening Machine's sentence-derived scores were performed live by skilled musicians: almost as if the instrument itself were speaking a sentence.

For Hack The Quartet, I worked closely with the all-round sonic extraordinaire Nick Ryan to devise a set of compositional processes that we could realise over the two days, which we continued to discuss in depth with the Sacconi players. Given the boldness and risk inherent in playing a score that is being written at the moment it is played, the quartet's confidence and capability in performing these generative sequences was quite remarkable. The resultant piece included complex, shifting polyrhythms, with algorithmically-generated relationships between note timings, which didn't faze the players in the slightest.

Visually, the notated outcome is surprisingly crisp and satisfactory. With Socket.io for real-time communications, isobar for algorithmic pattern generation, plus a quartet of Retina iPads, we now have a framework that is sufficiently stable and flexible to explore the next steps of live score generation.


from isobar import *

# notes, output and bar_number are defined earlier in the piece: notes is the
# list of candidate pitches, and output routes each event to the iPad scores.
for n in range(16):
    # Weighted choice over the note list: the first note's weight rises from
    # 0 to 15 across the sixteen bars, while every other note's weight falls.
    p = PWChoice(notes, [n] + ([15 - n] * (len(notes) - 1)))
    events = PDict({"note": p, "dur": 6, "amp": 64})
    event = next(events)

    if n == 0:
        event["annotation"] = "Arco pp, slow trill"
    else:
        event["annotation"] = "%d" % bar_number

    output.event(event)
    bar_number += 1

And the sense of hearing a nuanced rendition of a living score, never before heard, was simply something else. Having only just got my breath back from last-minute technical challenges (never, ever use Bluetooth in a demo setting. Ever.), it was just gripping to hear our code and structures materialise as fluttering, breathy bowed notes, resonating through the bodies of the Quartet's antique instruments. Despite the mathematical precision of the underlying processes, the results were brought to life by the collective ebb and flow of the performers' pacing and dynamics.

With so many elements open to exploration, it is an approach that could branch into a seemingly endless number of further directions. It feels like the start of a new chapter in working with sound, data and performance.

Thanks to Peter Gregson for his invaluable advice on score engraving, and Bruno Zamborlin, Goldsmiths' EAVI group and the iShed for iPad loans. Special thanks to all at the Watershed for hosting Hack The Quartet, and to the Sacconi Quartet for their exemplary patience and musicianship.

Cube Interpolations

Draw a cube object (A) in Adobe Illustrator CS6
Duplicate the cube object (B)
Using the Blend tool, interpolate between one of A's outer points and each of B's outer points

cube interpolations

See also:
Sol LeWitt, Variations of Incomplete Open Cubes (1974)
Manfred Mohr, Cubic Limit I (1973-1975)