_Web.log

tag: generative

The Weather Cafe

I have recently been working with the brilliant Leeds-based artist and director David Shearing on The Weather Café, an immersive café-based installation whose interior shifts gradually to reflect the real-time weather conditions outside. As the wind, rain, light and humidity fluctuate over the day, so too do the conditions indoors. The overhead lights brighten and dim, mirroring the brightness of the sky; an array of fans blows gusts across the space as the wind picks up outside. On hazy days, mist billows beneath the tables.

This strangely permeable building is the backdrop for a rich and diverse set of oral stories that David has recorded with Leeds locals, which themselves shift in response to the weather conditions, grouped by atmospherics: still, light, fragile, unsettled. Backed by a generative score by my long-time friend and collaborator James Bulley, the different strands drift amongst one another in unexpected ways, coalescing in moments of real beauty.

Photo: Leanne Buchan

I built the software infrastructure that ties together the different elements of the piece, reading data from an Ultimeter weather station installed outside via a Python-based serial interface. This is aggregated in real time with further readings from the fantastic Met Office API, and relayed to a bank of DMX controllers - for lighting and internal effects - and to Ableton Live via pylive, which interfaces with the responsive score.
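
At its core, the sensor-to-output mapping comes down to scaling raw weather readings into DMX channel values. Here's a minimal sketch of that idea - the ranges, channel names and the `weather_to_dmx` helper are illustrative, not the installation's actual code:

```python
def scale(value, in_min, in_max, out_min=0, out_max=255):
    """Linearly map a sensor reading into the 0-255 DMX range, clipped."""
    if in_max == in_min:
        return out_min
    norm = (value - in_min) / float(in_max - in_min)
    norm = max(0.0, min(1.0, norm))
    return int(round(out_min + norm * (out_max - out_min)))

def weather_to_dmx(weather):
    """Translate a dict of weather readings into per-channel DMX levels.
    All ranges below are invented for illustration."""
    return {
        "lights": scale(weather["light"], 0, 1000),     # sky brightness -> overhead lights
        "fans":   scale(weather["wind_speed"], 0, 25),  # wind speed (m/s) -> fan intensity
        "mist":   scale(weather["humidity"], 60, 100),  # mist only appears on humid days
    }

print(weather_to_dmx({"light": 500, "wind_speed": 5, "humidity": 80}))
```

The clipping matters in practice: weather readings routinely exceed whatever calibration range you pick, and a wrapped-around DMX value makes for some alarming lighting glitches.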

The Weather Café can be found opposite Leeds Art Gallery, closing on 20th March.

More info: The Weather Café

The Markup Melodium

I was recently invited by Mozilla to be a fellow on their Webmaker program, an excellent initiative to foster web literacy. As part of the fellowship, I was asked to create something which exploited the affordances of their maker tools.

I was drawn to the immediacy of Thimble, a browser-based interface to write web code and immediately see the results. I began pondering the potential for using Thimble as a kind of live coding environment: could an HTML document be translated into a piece of music which could be edited on-the-fly, hearing an immediate reflection of its structure and contents?

The outcome is this: The Markup Melodium. Using jQuery and Web Audio, it traverses the DOM tree of an HTML page and renders each type of element in sound. In parallel, it does likewise for the text content of the page, developing the phoneme-to-tone technique we used in The Listening Machine.
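
The traversal itself is conceptually simple: walk the tree, look up each tag in a mapping table, and emit a sound event. The actual Melodium is JavaScript, but the principle can be sketched in a few lines of Python - the tag-to-pitch table here is invented for illustration:

```python
from html.parser import HTMLParser

# Hypothetical tag-to-pitch table (MIDI note numbers); the real Melodium's
# mappings live in its JavaScript source.
TAG_PITCHES = {"h1": 72, "p": 60, "a": 67, "li": 64, "img": 55}

class MelodiumParser(HTMLParser):
    """Walk an HTML document, emitting a (pitch, depth) event per known tag."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.depth += 1
        if tag in TAG_PITCHES:
            # nesting depth could modulate octave or volume
            self.events.append((TAG_PITCHES[tag], self.depth))

    def handle_endtag(self, tag):
        self.depth -= 1

parser = MelodiumParser()
parser.feed("<p>Beware the <a href='#'>Jabberwock</a>, my son!</p>")
print(parser.events)  # [(60, 1), (67, 2)]
```

In the browser version, each emitted event is scheduled against a Web Audio clock rather than printed, and the text nodes feed the separate phoneme-to-tone pipeline.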

By way of example, hear Lewis Carroll's Jabberwocky as rendered by the Melodium. To explore the basic elements, here is a short composition for the Melodium. And the really exciting part: using Thimble's Remix feature, you can clone this basic composition and immediately begin developing your own remix in the browser, before publishing it to the world.

As the Markup Melodium is implemented through pure JavaScript, it's also available as a bookmarklet so that you can sonify arbitrary web pages.

Drag the following link to your browser's bookmark toolbar: Markup Melodium.

And, of course, all of the code is available on GitHub.

The name is in tribute to The Melodium, a 1938 musical instrument created by German physicist Harald Bode, whose pioneering modular designs anticipated today's synthesizers by many decades.

Generative Notation and Hacking The Quartet

The last few years have seen a proliferation of hack days, in which participants spend a day or two sketching and building prototype ideas in code. For me, the most appealing are those that deal with a specific concept, with participants given free rein to explore a small zone of creative ideas -- often a more inspiring starting point than a series of data sets.

Thus, it was impossible to resist the allure of Hack The Quartet: a two-day event hosted by Bristol's iShed, which gave guests the rare opportunity to work closely with a world-class string quartet. The event brief sums up part of the appeal really nicely:

A quartet is like a game of chess; simple in its make up and infinite in its possibility. So how can new technologies be used to augment performance of and engagement with chamber music?

In my mind, there's a perfect balance in the relative constraints of this ensemble size, coupled with the opportunity to link the richness of virtuoso musicianship with the possibilities for algorithmic augmentation. I've been thinking a lot about these ideas since writing The Extended Composer but it's rare to be able to put them into practice in a live environment, particularly with players of the calibre of the Sacconi Quartet.

Generative Notation

I went into Hack The Quartet with an unusually well-formed idea: to create a tablet-based system to render musical notation in real-time, based on note sequences received over a wireless network. Though there are plenty of iPad score display apps out there, the objective here was to begin with a set of empty staves, onto which notes materialise throughout the performance.

The potential uses for this are manifold. Imagine the situation in which a dancer's movements are tracked by camera, with a piece of software translating their motions into musical shapes. These could be rendered as a set of 4 scores - one for each musician - and performed by the quartet in real-time, creating a musical accompaniment which directly mirrors the dancers' actions.

Of course, we can substitute dancers' movements for sensor inputs, web-based data streams, or any kind of real-time information source. The original motivation for the project came out of discussions surrounding The Listening Machine, which translated online conversations into musical sequences based on an archive of around 50,000 short samples, each representing a single word or syllable. Creating a sonification system based on fragments of pre-recorded audio was all very well, but imagine the fluidity and richness of interpretation if The Listening Machine's sentence-derived scores were performed live by skilled musicians: almost as if the instrument itself were speaking a sentence.
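
In practice, driving a set of remote notation renderers comes down to serialising note events and pushing them over the network. A sketch of what one bar's message to a single tablet might look like - the field names and the `make_bar_message` helper are hypothetical, not the actual protocol:

```python
import json

def make_bar_message(part, bar_number, notes):
    """Bundle one bar of (pitch, duration) pairs for a single player's score.
    The schema here is illustrative only."""
    return json.dumps({
        "part": part,          # e.g. "violin_1"
        "bar": bar_number,
        "notes": [{"pitch": pitch, "duration": duration}
                  for pitch, duration in notes],
    })

# One bar for the first violin: G4, then A4 and B4 as quavers
msg = make_bar_message("violin_1", 12, [(67, 1.0), (69, 0.5), (71, 0.5)])
print(msg)
```

Each player's device subscribes to its own part, so the generator can weave four interdependent lines while each musician sees only their own staves.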

For Hack The Quartet, I worked closely with the all-round sonic extraordinaire Nick Ryan to devise a set of compositional processes that we could realise over the two days, which we continued to discuss in depth with the Sacconi players. Given the boldness and risk inherent in playing a score that is being written at the moment of performance, the quartet's confidence and capability in performing these generative sequences was quite remarkable. The resultant piece included complex, shifting polyrhythms, with algorithmically-generated relationships between note timings, which didn't faze the players in the slightest.

Visually, the notated outcome is surprisingly crisp and satisfactory. With Socket.io for real-time communications, isobar for algorithmic pattern generation, plus a quartet of Retina iPads, we now have a framework that is sufficiently stable and flexible to explore the next steps of live score generation.

from isobar import *

notes = [ 60, 62, 63, 65, 67 ]  # example note set (MIDI pitches)
bar_number = 1

for n in range(16):
    # weights shift over 16 bars: away from the first note early on,
    # converging on it alone by the final bar
    p = PWChoice(notes, [ n ] + ([ (15 - n) ] * (len(notes) - 1)))
    events = PDict({ "note": p, "dur": 6, "amp": 64 })
    event = events.next()

    event["annotation"] = "%d" % bar_number
    if n == 0:
        event["annotation"] += ", Arco pp, slow trill"

    bar_number += 1

And the sense of hearing a nuanced rendition of a living score, never before heard, was simply something else. Having only just got my breath back from last-minute technical challenges (never, ever use Bluetooth in a demo setting. Ever.), it was gripping to hear our code and structures materialise as fluttering, breathy bowed notes, resonating through the bodies of the Quartet's antique instruments. Despite the mathematical precision of the underlying processes, the results were brought to life by the collective ebb and flow of the performers' pacing and dynamics.

With so many elements open to exploration, it is an approach that could bear a seemingly endless number of further directions. It feels like the start of a new chapter in working with sound, data and performance.

Thanks to Peter Gregson for his invaluable advice on score engraving, and Bruno Zamborlin, Goldsmiths' EAVI group and the iShed for iPad loans. Special thanks to all at the Watershed for hosting Hack The Quartet, and to the Sacconi Quartet for their exemplary patience and musicianship.

Cube Interpolations

Draw a cube object (A) in Adobe Illustrator CS6
Duplicate the cube object (B)
Using Blend tool, interpolate between one of A's outer points and each of B's outer points

cube interpolations

See also:
Sol LeWitt, Variations of Incomplete Open Cubes (1974)
Manfred Mohr, Cubic Limit I (1973-1975)

The Listening Machine

The Listening Machine is an orchestral sonification of the online activity of several hundred (unwitting) UK-based Twitter users. Created with cellist Peter Gregson and Britten Sinfonia, it has been a vast adventure combining studio recordings with a chamber ensemble, countless hours of coding towards a growing generative compositional toolkit, and delving into the mechanics of linguistics, prosody, and natural language processing.

Key to the compositional process is a system that translates the flow and rhythm of a text passage into a musical score, based on ordering the formant frequencies of the human voice, which characterise the quality of each vowel sound. We determine the piece's musical mode via sentiment mapping, then generate individual note-wise patterns by translating syllables into notes of the current scale. As several Twitter users are typically active at the same time, the result is multiple, intertwining melody lines, tonally related but structurally distinct.
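
A toy version of the vowel-to-pitch idea might look like the following - assuming a simplified formant ordering and a fixed scale; the real system's vowel tables and sentiment-driven mode selection are considerably more involved:

```python
# Vowels ranked by (approximate) first-formant frequency, low to high.
# This ordering is illustrative, not the project's actual table.
VOWEL_ORDER = ["i", "u", "e", "o", "a"]
C_MINOR = [0, 2, 3, 5, 7, 8, 10]  # semitone offsets of the current scale

def syllable_to_note(syllable, scale=C_MINOR, root=60):
    """Map a syllable's first vowel to a degree of the current scale,
    returning a MIDI note number (or None for a rest)."""
    for char in syllable:
        if char in VOWEL_ORDER:
            degree = VOWEL_ORDER.index(char)
            return root + scale[degree % len(scale)]
    return None  # no vowel found: treat as a rest

print([syllable_to_note(s) for s in ["lis", "ten", "ing"]])  # [60, 63, 60]
```

With several users tweeting at once, each active stream runs a mapping like this against its own voice and register, which is where the intertwining melody lines come from.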

The Listening Machine launched at the start of last month as part of The Space, a great new BBC/Arts Council initiative encouraging National Portfolio organisations into the realm of online content. Working with a team of BBC broadcast technology ninjas, we contributed a piece of music which lasts six months and is quintessentially digital: using data sourced from internet discussions, and streamed solely over the web.

But maybe the most exciting part has been the combination of algorithmic processes with thousands of fragments of orchestrally-recorded refrains. The objective was always to create a piece of music which sounded organic, and -- in spite of its metronomic pulse -- the results aren't too far from what we envisaged. See the website for information about the compositional process.

The other integral part of the project is the graphic design, created by the excellent Joe Hales. Joe is more typically found creating design for print, and we wanted to translate this page-based aesthetic to the screen, presenting the project almost as if it were a textbook.

With some judicious JSON and HTML5 <canvas> voodoo, we animated his cog-and-dial visualisation to present a continuous representation of The Listening Machine's state at any point. The collective's mood, activity rate and topics of conversation are displayed live on thelisteningmachine.org, and are similarly reflected in the musical output.

The Python code behind the algorithmic composition parts is available on github.com/ideoforms/isobar; the text analysis framework will be released in due course.

The Listening Machine can also be found on Twitter @listenmachine and facebook.com/thelisteningmachine.

Untitled (Digital Photographs, 2002-2011) at Cats and Kittens, N16

Yes, another one. For those around East London over the coming weeks, I have a digital print in the exhibition Cats and Kittens, opening at Barden's (N16) on 14th July.

Untitled (Digital Photographs, 2002-2011) compresses 11,000 digital photos taken over 10 years into a single poster-sized image, incorporating a line of pixels from each in sequence.
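
The compositing process can be sketched in a few lines: given the photos in chronological order, sample one row of pixels from each and stack the rows vertically. (Here "images" are plain 2-D lists rather than real files, and the choice of which row to sample from each photo is my own assumption.)

```python
def composite(images):
    """Build an output image with one pixel row sampled from each input image.
    The sampled row sweeps downward through the sequence, so the scan moves
    through the photos over time (an assumption, not the original's method)."""
    rows = []
    for index, image in enumerate(images):
        row = image[index * len(image) // len(images)]
        rows.append(row)
    return rows

# Three tiny two-row "photos", each 4 pixels wide
photos = [[[n] * 4, [n + 100] * 4] for n in range(3)]
print(composite(photos))
```

Scaled up to 11,000 photos, the same loop yields an image 11,000 pixels tall: one horizontal slice of each decade-spanning moment.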

It's not reproducible online due to its large scale and detail, but can be seen from late next week (details on poster below).

Procedural HTML5 drawing with Harmony

http://mrdoob.com/lab/javascript/harmony/

This online procedural drawing interface (click + drag in the big white space beneath the bar) simultaneously inspires awe and unease in its instant transformation of scribbles into intricate drawings.

It's made possible by HTML5's new <canvas> element, which allows images to be dynamically created and modified within a web browser -- here via the JavaScript port of Processing. Exciting (and occasionally unnerving) times ahead.

Variable 4

In an abominable act of oversight, one of the major projects keeping me occupied in 2010 has yet to receive an official announcement here. So, I'm belatedly pleased to herald Variable 4, an environmental installation taking place on the other-worldly shingle plains of Dungeness in May 2010.

In partnership with James Bulley, and with kind support from the PRSF and Campbell Scientific, we're building a system which will be embedded into the desolate landscape and equipped with an array of meteorological sensors. Using algorithmic compositional techniques, it will then respond sonically to the real-time weather conditions, transforming and recombining a bank of precomposed movements and recordings via a multi-channel all-weather soundsystem.

It is taking place over a single 24-hour period, from noon till noon on 22-23 May, and so encompasses one complete daily cycle of solar and environmental conditions. For those not living in the Romney Marsh area, there will be a couple of coaches operating from London - booking info coming soon.

It's been a bit of a baptism of fire as far as project administration goes; who'd have thought that licensing and insurance concerns could occupy so much time? Current top of the anxiety checklist is ensuring that local fishermen aren't somehow entangled in wiring as they begin their 3am working days. Anyhow, we're finally well into the composition phase - leveraging Max for Live and the endless generative musical possibilities that it offers.

We'll be documenting the compositional and technical development on the Variable 4 blog and on Twitter @variable4, releasing relevant source code and patches wherever possible.

Prime Composition

A short sound study in the structure of the number sequence. Each positive integer is broken down into its prime factors, with each factor corresponding to a harmonic partial. We then proceed to count upwards, for each integer only playing those harmonics which correspond to its factors.
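
The mapping can be sketched directly: factorise each integer, then sound one harmonic partial per distinct prime factor. (Taking the factor itself as the partial number, and the choice of fundamental, follow the description above; the original Processing/SuperCollider source may differ in detail.)

```python
def prime_factors(n):
    """Return the set of distinct prime factors of n (empty for n = 1)."""
    factors = set()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def partials_for(n, fundamental=55.0):
    """Frequencies of the partials sounded for integer n: one partial per
    distinct prime factor, with partial k at k times the fundamental."""
    return sorted(fundamental * factor for factor in prime_factors(n))

print(partials_for(12))  # factors {2, 3} -> partials at 110.0 and 165.0 Hz
```

Counting upwards, primes arrive as lone pure tones at ever-higher partials, while highly composite numbers sound as dense chords of low harmonics - which is exactly the structure the piece makes audible.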

More info and full source code (Processing, SuperCollider) available on the Prime Composition project page (coming very shortly).

Lovelace on creativity: An addendum

Just been reading through parts of the PhD thesis of Rob Saunders, one of the previous members of the stem cell modelling research group, on "Curious Design Agents and Artificial Creativity". Lots of interesting ideas, which follow on nicely from a talk I recently saw by Alex McLean on mapping creative exploration to geometric spaces (cf Peter Gärdenfors).

The introduction aptly reins in something that I overstated in my recent piece on Jane Prophet: Ada Lovelace's views on computational creativity. She in fact stated that:

“The Analytical Engine has no pretensions whatever to originate anything. It can do [only] whatever we know how to order it to perform” (emphasis added by Boden, 1990)

As Rob comments, the credit for the creative products of the machine should therefore remain with its engineer, rather than construing the machine itself as creative. His thesis goes on to investigate such notions of synthetic creativity. It also brings to the fore Turing's observation, in the paper that introduced the Turing test, that machines can exhibit "surprising" behaviour, anticipating Cariani's emergence-relative-to-a-model.

Interesting, and relevant after a morning spent encountering some highly surprising behaviour from some swarms driven by Perlin noise (below).

Incidentally, Leafcutter John -- who we are off to see play tonight as part of Polar Bear -- has also been doing some brilliant things with Processing and particle systems. On the "unexpected" tip, check out his awesome moth wings...