Web.log

Archives: 2012-06


Commissioning in the Age of Digital Distribution, Third Ear Symposium

Digital sound pieces like The Listening Machine and Maelstrom raise lots of interesting questions about rights, access and commissioning. Authorship and the constituent materials of an artwork are suddenly distributed: should we be crediting the Twitter users, whose behaviours serve to organise the piece, as joint conductors? When access is no longer a geographical issue, but one of cross-platform compatibility and usability, should commissions be sought to target deprived sectors such as Flash-deficient Linux users? How can work be owned and collected when it is fundamentally immaterial? What attitude should we take to support, when a piece of work is subject to the same maintenance needs as a piece of software engineering? And should traditional artworks and organisations be uncritically diving into the digital realm, or does curatorial care need to be taken over the appropriate presentation for each kind of media?


The Third Ear Symposium (13 July 2012, Southbank Centre) seeks to address these questions, with a day-long schedule of talks on "Commissioning and Patronage in a Digital Age". Peter Gregson and I will be talking about The Listening Machine, on a panel with Matthew Herbert and The Space's Susanna Simons.

There are some great-sounding panels later in the day, including a session on "Commissioning & Collecting Sound and Performance in the Visual Arts" and another on the creative role of the commissioner.

Tickets here.

Chirp: A platform for audible data

Over the past few months, I've had my head down working at Animal Systems on a tremendously exciting new platform by the name of Chirp. In a nutshell, Chirp is a way to treat sound as data, enabling devices to communicate with each other using short packets of audio. A sender emits a series of tones; a receiver hears and decodes them, translating them into a code which can point to a picture, text, URL, or even another piece of sound.
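To make the idea concrete, here is a toy sketch of the sender side: each symbol of a shortcode is mapped to a pitch and rendered as a brief sine tone. The alphabet, frequencies and durations below are illustrative assumptions, not Chirp's actual encoding scheme.

    # Toy sketch of sending data as a sequence of tones. The alphabet,
    # base frequency and tone length are illustrative assumptions only.
    import numpy as np
    from scipy.io import wavfile

    ALPHABET = "0123456789abcdefghijklmnopqrstuv"  # 32 symbols (assumed)
    BASE_FREQ = 1760.0                             # starting pitch in Hz (assumed)
    SEMITONE = 2 ** (1 / 12.0)
    TONE_LENGTH = 0.087                            # seconds per symbol (assumed)
    SAMPLE_RATE = 44100

    def encode(shortcode):
        """Render each symbol of a shortcode as a short sine tone."""
        tones = []
        for char in shortcode:
            freq = BASE_FREQ * SEMITONE ** ALPHABET.index(char)
            t = np.arange(int(SAMPLE_RATE * TONE_LENGTH)) / SAMPLE_RATE
            tones.append(0.5 * np.sin(2 * np.pi * freq * t))
        return np.concatenate(tones).astype(np.float32)

    wavfile.write("chirp.wav", SAMPLE_RATE, encode("parrotfish"))

A receiver would run the reverse process: estimating each tone's frequency (for example with an FFT over each symbol window), mapping it back to the alphabet, and applying error correction to survive a noisy room.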

Chester

Chester, the bird-robot-hybrid avatar of Chirp

My work has been focused on developing an iOS app which will very shortly be seeing the light of day, App Store approval pending. The experience is simple: Alice wants to send a picture to Bob, so she imports it into Chirp, hits a button, and the device chirps it (a sound like this). Bob's phone, and any other devices within earshot, can then decode the chirp and display the image. No painful Bluetooth pairing, no typing of email addresses, no USB-stick fiddling.

Of course, the system isn't breaking the laws of entropy and cramming a large JPEG into a second of audio: behind the scenes, the data itself is transferred to a cloud infrastructure and translated into a "shortcode", which is then sent over sound, decoded and resolved. There's an inherently low bitrate in a noisy sonic environment. But then, the bitrate of human speech is estimated at less than 100bps, and spoken language has turned out to be quite a useful feature.
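A rough back-of-envelope calculation shows why the shortcode indirection matters. The figures below are purely illustrative assumptions, not Chirp's actual parameters:

    # Back-of-envelope throughput for an audio shortcode channel.
    # All figures here are illustrative assumptions, not measured values.
    import math

    symbols_per_chirp = 10     # assumed shortcode length
    alphabet_size = 32         # assumed symbol alphabet
    chirp_duration = 2.0       # assumed airtime in seconds

    bits_per_symbol = math.log2(alphabet_size)            # 5 bits
    bits_per_chirp = symbols_per_chirp * bits_per_symbol  # 50 bits
    throughput = bits_per_chirp / chirp_duration          # 25 bits/second

    print(f"{bits_per_chirp:.0f} bits per chirp, ~{throughput:.0f} bps")

A few dozen bits per second will never carry a JPEG over the air, but it is ample for a key that resolves to one in the cloud.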

One of the big lessons for me has been the sheer amount of engineering required for a magically simple transaction. Developed from conversations about the information-theoretic properties of avian linguistics, Chirp's audio system has been honed over countless months by a team of DSP gurus based in Barcelona, with an array of simulations run on UCL's Legion supercomputing cluster, rendering it resilient to hostile reverberant and noisy conditions. The underlying network consists of an infinitely-scalable REST API that we have designed over many iterations, developed by a team of inveterate network architects and now residing in the cloud. The inverse correlation between intuitive simplicity and actual complexity, in the tech domain at least, couldn't be clearer here.

The app is an exploratory first step, and there are almost too many next steps to contemplate. Anything that can transmit sound can send a chirp, so we've been experimenting with all sorts of lo-fi devices: the joy of sending a YouTube video link via a dictaphone is pretty much unrivalled. Throw an Arduino into the equation and suddenly there's an explosion of possibilities of conversing machines.

And there's an equal amount of philosophical potential in this research. Suddenly, the dumb alert tones produced by phones, lorries and fire alarms seem absurd. Why aren't these designed for machine as well as human ears, conveying valuable information about the state of the world? Why is the visual given default primacy as an information medium? And what happens when the usual silence of network communications is suddenly made tangible, embodied, and broadcast?

Chirp will be free on the Apple iOS App Store.

The Listening Machine

The Listening Machine is an orchestral sonification of the online activity of several hundred (unwitting) UK-based Twitter users. Created with cellist Peter Gregson and Britten Sinfonia, it has been a vast adventure: combining studio recordings with a chamber ensemble, spending countless hours of coding on a growing generative compositional toolkit, and delving into the mechanics of linguistics, prosody, and natural language processing.

Key to the compositional process is a system to translate the flow and rhythm of a text passage into a musical score, based on ordering the formant frequencies of the human voice, which characterise the qualities of each vowel sound. We determine the piece's musical mode via sentiment mapping, and then generate individual note-wise patterns by translating syllables into notes in the current scale. As several Twitter users are typically active at the same time, the result is multiple, intertwining melody lines, tonally related but structurally distinct.
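As a hypothetical illustration of the pipeline described above (not the actual Listening Machine code), the sketch below picks a mode from a sentiment score and maps each syllable's leading vowel, crudely ordered as a stand-in for formant frequency, onto a degree of that scale:

    # Hypothetical sketch of the text-to-melody mapping: sentiment selects
    # a mode, syllables become scale degrees. All heuristics are assumptions.
    import re

    MODES = {
        "positive": [0, 2, 4, 5, 7, 9, 11],   # major
        "negative": [0, 2, 3, 5, 7, 8, 10],   # natural minor
    }
    VOWELS = "ieaou"   # crude vowel ordering, standing in for formant frequencies

    def syllables(text):
        """Very rough syllable proxy: runs of vowels in the text."""
        return re.findall(r"[aeiou]+", text.lower())

    def melody(text, sentiment):
        mode = MODES["positive" if sentiment >= 0 else "negative"]
        notes = []
        for syl in syllables(text):
            degree = VOWELS.index(syl[0]) % len(mode)
            notes.append(60 + mode[degree])    # MIDI notes around middle C
        return notes

    print(melody("The orchestra is listening to the internet", sentiment=0.4))

With several users active at once, each would get its own melody() stream against the same shared mode, which is roughly where the intertwining, tonally related but structurally distinct lines come from.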

The Listening Machine launched at the start of last month as part of The Space, a great new BBC/Arts Council initiative encouraging National Portfolio organisations into the realm of online content. Working with a team of BBC broadcast technology ninjas, we have contributed a piece of music which lasts six months and is quintessentially digital: using data sourced from internet discussions, and streamed solely over the web.

But maybe the most exciting part has been the combination of algorithmic processes with thousands of fragments of orchestrally-recorded refrains. The objective was always to create a piece of music which sounded organic, and -- in spite of its metronomic pulse -- the results aren't too far from what we envisaged. See the website for information about the compositional process.

The other integral part of the project is the graphic design, created by the excellent Joe Hales. Joe is more typically found creating design for print, and we wanted to translate this page-based aesthetic to the screen, presenting the project almost as if it were a textbook.

With some judicious JSON and HTML5 <canvas> voodoo, we animated his cog-and-dial visualisation to present a continuous representation of The Listening Machine's state at any point. The collective's mood, activity rate and topics of conversation are displayed live on thelisteningmachine.org, and similarly reflected in the musical output.
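One straightforward way to feed a live visualisation like this (an assumption about the mechanism, not a description of the actual site) is for the composition engine to periodically publish its state as JSON for the page to read and redraw from. A minimal sketch of the publishing side, with assumed field names:

    # Minimal sketch of publishing machine state for the front-end to poll.
    # Field names and values are assumptions for illustration.
    import json
    import time

    state = {
        "timestamp": int(time.time()),
        "mood": 0.62,                    # aggregate sentiment, -1..1
        "activity_rate": 14,             # tweets per minute
        "topics": ["weather", "football", "breakfast"],
    }

    with open("state.json", "w") as f:
        json.dump(state, f)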

The Python code behind the algorithmic composition parts is available on github.com/ideoforms/isobar; the text analysis framework will be released in due course.

The Listening Machine can also be found on Twitter @listenmachine and facebook.com/thelisteningmachine.