The Markup Melodium

I was recently invited by Mozilla to be a fellow on their Webmaker program, an excellent initiative to foster web literacy. As part of the fellowship, I was asked to create something which exploited the affordances of their maker tools.

I was drawn to the immediacy of Thimble, a browser-based interface to write web code and immediately see the results. I began pondering the potential for using Thimble as a kind of live coding environment: could an HTML document be translated into a piece of music which could be edited on the fly, so that you immediately hear a reflection of its structure and contents?

The outcome is this: The Markup Melodium. Using jQuery and Web Audio, it traverses the DOM tree of an HTML page and renders each type of element in sound. In parallel, it does likewise for the text content of the page, developing the phoneme-to-tone technique we used in The Listening Machine.
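The general approach is something like the sketch below. This is a simplified illustration rather than the Melodium's actual source, and the tag-to-pitch mapping is invented for the example: walk the DOM with jQuery, and schedule a short Web Audio oscillator for each element encountered.

    // Simplified illustration only -- not the Melodium's actual code.
    // Walk the DOM and schedule a short tone for each element, with the
    // pitch chosen by element type (the mapping below is invented).
    var ctx = new (window.AudioContext || window.webkitAudioContext)();
    var pitches = { h1: 220, h2: 261.6, p: 329.6, a: 440, li: 523.3 };
    var t = ctx.currentTime;
    $("body *").each(function () {
        var freq = pitches[this.tagName.toLowerCase()];
        if (!freq) return;                    // ignore unmapped element types
        var osc = ctx.createOscillator();
        osc.frequency.value = freq;
        osc.connect(ctx.destination);
        osc.start(t);
        osc.stop(t + 0.2);
        t += 0.25;                            // step through the document in time
    });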

By way of example, hear Lewis Carroll's Jabberwocky as rendered by the Melodium. To explore the basic elements, here is a short composition for the Melodium. And the really exciting part: using Thimble's Remix feature, you can clone this basic composition and immediately begin developing your own remix of it in the browser, before publishing it to the world.

As the Markup Melodium is implemented in pure JavaScript, it's also available as a bookmarklet, so you can sonify arbitrary web pages.

Drag the following link to your browser's bookmark toolbar: Markup Melodium.
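If you're curious how a bookmarklet like this works, it's simply a javascript: URL that injects a script into the current page, along these lines (the URL below is a placeholder, not the real script location):

    javascript:(function () {
        /* Placeholder URL -- the real bookmarklet points at the Melodium script. */
        var s = document.createElement("script");
        s.src = "https://example.com/melodium.js";
        document.body.appendChild(s);
    })();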

And, of course, all of the code is available on GitHub.

The name is in tribute to The Melodium, a 1938 musical instrument created by German physicist Harald Bode, whose pioneering modular designs anticipated today's synthesizers by many decades.

xtet

Next month at the Barbican, James Bulley and I are debuting a new piece of work which harnesses the tiny sound-systems that 6 billion people carry around with them each day.

xtet uses mobile phone handsets to create an ephemeral multichannel sound system, which only exists for as long as the event itself:


By broadcasting real-time audio to the audience's wireless mobile devices (smartphones, tablets, mp3 players, etc), the audience itself becomes a temporary speaker system comprised of countless distributed sound sources, forming a uniquely spatial and participatory experience. The movements of listeners cause the music's spatial formation to shift and grow, akin to the reactive motions of a shoal of fish.

It is both a platform (as a method for streaming multiple unique audio streams over HTTP, with HTML5 display) and a series of works; we are composing a number of pieces of music for xtet as a medium, considering the unusual set of constraints that it imposes. These include not knowing ahead of time how many speakers will be present, and writing for highly treble-weighted playback.
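In client terms the idea is very simple. A hypothetical sketch (not xtet's actual code; the endpoint name and channel count are invented) might look like this, with each device fetching and playing its own audio channel over HTTP:

    // Hypothetical client sketch -- not xtet's actual implementation.
    // Each device requests its own channel, so the audience collectively
    // becomes a distributed multichannel speaker system.
    var channel = Math.floor(Math.random() * 6);          // illustrative channel assignment
    var audio = new Audio("/stream?channel=" + channel);  // invented endpoint
    audio.play();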

xtet I (α, β, γ, δ, θ, μ) is the first in the series, commissioned by the Barbican and Wellcome Trust for Wonder: Art and Science on the Brain. It is modelled on the patterns and characteristics of neural activity, taking the relative lengths of key types of neural oscillation (alpha waves, beta waves, delta waves...) and using them to determine the structure and timings of musical events.
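To give a flavour of the kind of mapping involved (purely illustrative: the band frequencies below are nominal textbook values and the scaling is invented, not the piece's actual scheme):

    // Purely illustrative: derive relative section durations from nominal
    // EEG band frequencies (Hz). Slower oscillations yield longer sections.
    var bands = { delta: 2, theta: 6, alpha: 10, mu: 10, beta: 20, gamma: 40 };
    var durations = {};
    for (var name in bands) {
        durations[name] = 60 / bands[name];   // e.g. delta -> 30s, gamma -> 1.5s
    }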

It's a much looser, higher-level interpretation of cognitive patterns than something as rigorous as the neural nets of The Fragmented Orchestra, but basing the piece on the emergent properties of thought seems like an apt way to start writing for an installation which is itself wholly dependent on collective activity.

We're also excited to be incorporating xtet into the Marcus du Sautoy performance lecture on consciousness, using it to diffuse James Holden's trance-inducing musical material across the audience.

We prototyped it for the first time yesterday, with much assistance from a generous throng of Barbican volunteers, and it was quite magical to hear James's analogue sounds splinter and shimmer across the auditorium.

More information: xtet