Web.log

tag: music


Archived Sounds

I have been digging through my sound archives and posting a few old works to my SoundCloud profile, some dating back to 2009.

Here is a new piece: an edit of Jürgen Müller's Sauerstoff Blasen (Oxygen Bubbles) with vocals from Miley Cyrus' Wrecking Ball.

Generative Notation and Hacking The Quartet

The last few years have seen a proliferation of hack days, in which participants spend a day or two sketching and building prototype ideas with code. For me, the most appealing are those that deal with a specific concept, with participants given free rein to explore a small zone of creative ideas -- often a more inspiring starting point than a series of data sets.

Thus, it was impossible to resist the allure of Hack The Quartet: a two-day event hosted by Bristol's iShed, which gave guests the rare opportunity to work closely with a world-class string quartet. The event brief sums up part of the appeal really nicely:

A quartet is like a game of chess; simple in its make up and infinite in its possibility. So how can new technologies be used to augment performance of and engagement with chamber music?

In my mind, there's a perfect balance in the relative constraints of this ensemble size, coupled with the opportunity to link the richness of virtuoso musicianship with the possibilities for algorithmic augmentation. I've been thinking a lot about these ideas since writing The Extended Composer, but it's rare to be able to put them into practice in a live environment, particularly with players of the calibre of the Sacconi Quartet.

Generative Notation

I went into Hack The Quartet with an unusually well-formed idea: to create a tablet-based system to render musical notation in real-time, based on note sequences received over a wireless network. Though there are plenty of iPad score display apps out there, the objective here was to begin with a set of empty staves, onto which notes materialise throughout the performance.

The potential uses for this are manifold. Imagine the situation in which a dancer's movements are tracked by camera, with a piece of software translating their motions into musical shapes. These could be rendered as a set of 4 scores - one for each musician - and performed by the quartet in real-time, creating a musical accompaniment which directly mirrors the dancers' actions.

Of course, we can substitute dancers' movements for sensor inputs, web-based data streams, or any kind of real-time information source. The original motivation for the project came out of discussions surrounding The Listening Machine, which translated online conversations into musical sequences based on an archive of around 50,000 short samples, each representing a single word or syllable. Creating a sonification system based on fragments of pre-recorded audio was all very well, but imagine the fluidity and richness of interpretation if The Listening Machine's sentence-derived scores were performed live by skilled musicians: almost as if the instrument itself were speaking a sentence.

For Hack The Quartet, I worked closely with the all-round sonic extraordinaire Nick Ryan to devise a set of compositional processes that we could realise over the two days, which we continued to discuss in depth with the Sacconi players. Given the boldness and risk inherent in performing a score that is being written at the moment it is played, the quartet's confidence and capability in performing these generative sequences was quite remarkable. The resultant piece included complex, shifting polyrhythms, with algorithmically-generated relationships between note timings, which didn't faze the players in the slightest.




Visually, the notated outcome is surprisingly crisp and satisfying. With Socket.io for real-time communications, isobar for algorithmic pattern generation, plus a quartet of Retina iPads, we now have a framework that is sufficiently stable and flexible to explore the next steps of live score generation.
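The pattern generation itself is driven by isobar; a simplified fragment is shown below (notes, output and bar_number are defined elsewhere in the script). Each pass through the loop emits roughly one bar, with a weighted choice that gradually shifts towards the first note of the set.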


for n in range(16):
    # Weighted choice over the note set: the first note's weight rises with n,
    # while every other note's weight falls, so the pattern gradually converges.
    p = PWChoice(notes, [ n ] + ([ (15 - n) ] * (len(notes) - 1)))
    events = PDict({ "note": p, "dur": 6, "amp": 64 })
    event = events.next()

    # Attach a text annotation to be engraved above the bar.
    if n == 0:
        event["annotation"] = "Arco pp, slow trill"
    else:
        event["annotation"] = "%d" % bar_number

    output.event(event)
    bar_number += 1
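The transport to the iPads isn't shown above. Purely as an illustration of the idea -- using the python-socketio client, with an invented "note" event name and server address rather than the actual Hack The Quartet protocol -- pushing each generated event out to one player's score might look something like this:

import socketio

# Hypothetical address of the score server that the four iPads connect to.
SERVER_URL = "http://192.168.0.10:8080"

sio = socketio.Client()
sio.connect(SERVER_URL)

def send_event(part, event):
    # Serialise one note event and emit it to the named part's score.
    sio.emit("note", {
        "part": part,                              # 0-3: vln I, vln II, vla, vc
        "note": event["note"],                     # MIDI note number
        "dur": event["dur"],                       # duration in beats
        "amp": event["amp"],                       # dynamic, 0-127
        "annotation": event.get("annotation", ""), # text engraved above the bar
    })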

And the sense of hearing a nuanced rendition of a living score, never before heard, was simply something else. Having only just got my breath back from last-minute technical challenges (never, ever use Bluetooth in a demo setting. Ever.), I found it gripping to hear our code and structures materialise as fluttering, breathy bowed notes, resonating through the bodies of the Quartet's antique instruments. Despite the mathematical precision of the underlying processes, the results were brought to life by the collective ebb and flow of the performers' pacing and dynamics.

With so many elements open to exploration, it is an approach that could develop in a seemingly endless number of further directions. It feels like the start of a new chapter in working with sound, data and performance.

Thanks to Peter Gregson for his invaluable advice on score engraving, and Bruno Zamborlin, Goldsmiths' EAVI group and the iShed for iPad loans. Special thanks to all at the Watershed for hosting Hack The Quartet, and to the Sacconi Quartet for their exemplary patience and musicianship.

ABSOLUTE ABSOLUTE

ABSOLUTE ABSOLUTE is a live radio station playing every Absolute Radio broadcast simultaneously (Absolute 60s, Absolute 70s, Absolute 80s, Absolute 90s, Absolute 00s, Absolute Classic Rock).

Described by critics as "So ontologically terrifying it reminds you how incredible life is" (Paul Bennun), "Radio for those who understand the horror of existence" (Tulta Behm), and "The end of history" (Hestia Peppé), it will be broadcasting for a limited time only. Don't miss it.

ABSOLUTE ABSOLUTE

(To play in iTunes, right-click and Copy Link Location; in iTunes, hit cmd-U and paste the URL.)

Below is a 5-minute excerpt (recorded 2013-03-08, 17:14).

Radio Reconstructions (2013) at LimeWharf

An extended version of Radio Reconstructions is installed at the new art/science space LimeWharf for the next six weeks. It's the first in a series of temporary residencies hosted there, and resonates nicely with their general ethos:

LimeWharf is an evolving project that aspires to immerse guests and practitioners alike in thematic journeys. The core values of our programming are centred around building a positive relationship to the future, connecting the old and the new, meshing crafts with technology all in a non-market driven process-led series of experiments...

It's been a good opportunity to reflect on how the piece links together the history and nostalgia of analogue radio with the futurist technology of digitally-controlled tuners and algorithmic analysis. I expect that when the Mac Mini controlling the installation has gasped its last bits, the venerable radios distributing the audio will still be going strong.

Radio Reconstructions at LimeWharf

It has also given us the opportunity to think about the separation between the physical apparatus of the installation, and the sound that is heard through it.

We have started considering the installation itself to be akin to a semi-autonomous instrument, which has a particular space of timbres and behaviours associated with it -- in this case, the space of locally-receivable radio broadcasts, and the capability to record, arrange and analyse those broadcasts into pitched fragments.

We can then compose scores for the piece which determine the dynamics of these behaviours over time. Here, we are scoring for grain amplitude, duration and diffusion, and two EQ parameters.
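As a rough sketch of how such a score can be represented in code (the parameter names follow the list above, but the breakpoint times and values are invented for the example), a score is simply a list of timed breakpoints, with each parameter interpolated between them:

# A score as a list of (time_in_seconds, parameters) breakpoints.
score = [
    (0,    { "amplitude": 0.0, "duration": 0.2, "diffusion": 0.1, "eq_low": 0.5, "eq_high": 0.2 }),
    (600,  { "amplitude": 0.8, "duration": 1.5, "diffusion": 0.6, "eq_low": 0.3, "eq_high": 0.7 }),
    (1800, { "amplitude": 0.2, "duration": 3.0, "diffusion": 0.9, "eq_low": 0.6, "eq_high": 0.4 }),
]

def value_at(score, param, t):
    # Linearly interpolate a single parameter's value at time t (seconds).
    for (t0, p0), (t1, p1) in zip(score, score[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / float(t1 - t0)
            return p0[param] + frac * (p1[param] - p0[param])
    return score[-1][1][param]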

Separating score from instrument means that we can write multiple distinct scores for the installation, exploring different capacities and approaches. We have composed two new 30-minute scores for Radio Reconstructions, which are designed to be played at specific times and capitalise on the fact that we know in advance what is scheduled on major FM stations -- so, we can navigate between programmes with an awareness of the kind of content that will be played, juxtaposing talk radio chatter with distant shortwave broadcasts with local Citizens Band static...

Both of these scores will be broadcast on art radio station Resonance FM. Listen in on their website at 8pm on Tuesday 12 March and Tuesday 19 March.

xtet

Next month at the Barbican, James Bulley and I are debuting a new piece of work which harnesses the tiny sound-systems that 6 billion people carry around with them each day.

xtet uses mobile phone handsets to create an ephemeral multichannel sound system, which only exists for as long as the event itself:


By broadcasting real-time audio to the audience's wireless mobile devices (smartphones, tablets, mp3 players, etc), the audience itself becomes a temporary speaker system comprised of countless distributed sound sources, forming a uniquely spatial and participatory experience. The movements of listeners cause the music's spatial formation to shift and grow, akin to the reactive motions of a shoal of fish.

It is both a platform (as a method for streaming multiple unique audio streams over HTTP, with HTML5 display) and a series of works; we are composing a number of pieces of music for xtet as a medium, considering the unusual set of constraints that it imposes. These include not knowing ahead of time how many speakers will be present, and writing for highly treble-weighted playback.
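To picture the platform side, here is a minimal sketch -- not the actual implementation; the stream URLs and the round-robin policy are illustrative -- of how an unknown number of arriving devices can be shared across a fixed set of audio channels:

from itertools import cycle

# Hypothetical channel streams; in reality these would be live HTTP audio streams.
CHANNELS = ["/streams/xtet-channel-%d.mp3" % n for n in range(1, 9)]

channel_cycle = cycle(CHANNELS)
assignments = {}

def assign_stream(device_id):
    # Each new device is handed the next channel in turn, so every channel
    # stays roughly equally populated however many listeners arrive.
    if device_id not in assignments:
        assignments[device_id] = next(channel_cycle)
    return assignments[device_id]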

xtet I (α, β, γ, δ, θ, μ) is the first in the series, commissioned by the Barbican and Wellcome Trust for Wonder: Art and Science on the Brain. It is modelled on the patterns and characteristics of neural activity, taking the relative lengths of key types of neural oscillation (alpha waves, beta waves, delta waves...) and using them to determine the structure and timings of musical events.

It's a much looser, higher-level interpretation of cognitive patterns than something as rigorous as the neural nets of The Fragmented Orchestra, but basing the piece on the emergent properties of thought seems like an apt way to start writing for an installation which is itself wholly dependent on collective activity.
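A loose sketch of that mapping between oscillation bands and musical timings (the band frequencies are standard approximate values; the scaling into musical durations is invented purely for illustration):

# Approximate centre frequencies (Hz) of the oscillation bands named in the title.
BANDS = {
    "delta": 2.0,
    "theta": 6.0,
    "alpha": 10.0,
    "mu": 10.0,
    "beta": 20.0,
    "gamma": 40.0,
}

def band_period(band):
    # Length of one oscillation cycle, in seconds.
    return 1.0 / BANDS[band]

# Scale each band's period up into the musical domain, so that slower
# oscillations give rise to proportionally longer musical events.
SCALE = 16.0
event_durations = dict((band, band_period(band) * SCALE) for band in BANDS)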

We're also excited to be incorporating xtet into the Marcus du Sautoy performance lecture on consciousness, using it to diffuse James Holden's trance-inducing musical material across the audience.

We prototyped it for the first time yesterday, with much assistance from a generous throng of Barbican volunteers, and it was quite magical to hear James's analogue sounds splinter and shimmer across the auditorium.

More information: xtet

The Listening Machine

The Listening Machine is an orchestral sonification of the online activity of several hundred (unwitting) UK-based Twitter users. Created with cellist Peter Gregson and Britten Sinfonia, it has been a vast adventure combining studio recordings with a chamber ensemble, countless hours of coding towards a growing generative compositional toolkit, and delving into the mechanics of linguistics, prosody, and natural language processing.

Key to the compositional process is a system to translate the flow and rhythm of a text passage into a musical score, based on ordering the formant frequencies of the human voice, which characterise the qualities of each vowel sound. We determine the piece's musical mode via sentiment mapping, and then generate individual note-wise patterns by translating syllables into notes in the current scale. As several Twitter users are typically active at the same time, the result is multiple, intertwining melody lines, tonally related but structurally distinct.
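A toy version of the vowel-to-note step might look like the following; the formant values are rough textbook figures for English vowels, and the fixed dorian mode stands in for the sentiment-derived mode of the real system:

# Approximate first-formant frequencies (Hz) for a handful of English vowels.
# Ordering vowels by formant frequency gives each one a rank, which is then
# mapped onto a degree of the current scale.
FORMANT_F1 = {
    "i": 280,   # "beet"
    "u": 310,   # "boot"
    "e": 400,   # "bait"
    "o": 450,   # "boat"
    "a": 730,   # "father"
}

VOWEL_RANK = dict(
    (v, i) for i, (v, f) in enumerate(sorted(FORMANT_F1.items(), key=lambda item: item[1]))
)

# Mode chosen (in the real system) by sentiment analysis; fixed to dorian here.
DORIAN = [0, 2, 3, 5, 7, 9, 10]

def syllable_to_note(vowel, root=60):
    # Map a syllable's vowel to a MIDI note in the current scale.
    # e.g. syllable_to_note("a") -> 67: the highest formant maps to the highest degree.
    degree = VOWEL_RANK[vowel] % len(DORIAN)
    return root + DORIAN[degree]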

The Listening Machine launched at the start of last month as part of The Space, a great new BBC/Arts Council initiative encouraging National Portfolio organisations into the realm of online content. With a team of BBC broadcast technology ninjas, our contribution is a piece of music which lasts 6 months and is quintessentially digital: using data sourced from internet discussions, and streamed solely over the web.

But maybe the most exciting part has been the combination of algorithmic processes with thousands of fragments of orchestrally-recorded refrains. The objective was always to create a piece of music which sounded organic, and -- in spite of its metronomic pulse -- the results aren't too far from what we envisaged. See the website for information about the compositional process.

The other integral part of the project is the graphic design, created by the excellent Joe Hales. Joe is more typically found creating design for print, and we wanted to translate this page-based aesthetic to the screen, presenting the project almost as if it were a textbook.

With some judicious JSON and HTML5 <canvas> voodoo, we animated his cog-and-dial visualisation to present a continuous representation of The Listening Machine's state at any point. The collective's mood, activity rate and topics of conversation are displayed live on thelisteningmachine.org, and similarly reflected in the musical output.

The Python code behind the algorithmic composition parts is available on github.com/ideoforms/isobar; the text analysis framework will be released in due course.

The Listening Machine can also be found on Twitter @listenmachine and facebook.com/thelisteningmachine.

Variable 4

In an abominable act of oversight, one of the major projects keeping me occupied in 2010 has yet to receive an official announcement here. So, I'm belatedly pleased to herald Variable 4, an environmental installation taking place on the other-worldly shingle plains of Dungeness in May 2010.

In partnership with James Bulley, and with kind support from the PRSF and Campbell Scientific, we're building a system which will be embedded into the desolate landscape and equipped with an array of meteorological sensors. Using algorithmic compositional techniques, it will then respond sonically to the real-time weather conditions, transforming and recombining a bank of precomposed movements and recordings via a multi-channel all-weather soundsystem.
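In crude outline -- the sensor names, movement names and thresholds below are placeholders rather than the actual mapping -- the weather-to-music layer amounts to weighting the choice of the next precomposed movement by the current readings:

import random

def choose_movement(reading):
    # Pick the next precomposed movement from the bank, weighted by the
    # current sensor readings, e.g.
    #   { "wind_speed": 6.2, "rainfall": 0.0, "temperature": 11.5 }
    weights = {
        "stillness": max(0.5, 10.0 - reading["wind_speed"]),
        "squall": reading["wind_speed"] + 20.0 * reading["rainfall"],
        "heat_haze": max(0.0, reading["temperature"] - 15.0),
    }

    # Weighted random selection over the movement bank.
    total = sum(weights.values())
    threshold = random.uniform(0, total)
    cumulative = 0.0
    for movement, weight in weights.items():
        cumulative += weight
        if cumulative >= threshold:
            return movement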

It is taking place over a single 24-hour period, from noon till noon on 22-23 May, and so encompasses one complete daily cycle of solar and environmental conditions. For those not living in the Romney Marsh area, there will be a couple of coaches operating from London - booking info coming soon.

It's been a bit of a baptism of fire as far as project administration goes; who'd have thought that licensing and insurance concerns could occupy so much time? Currently top of the anxiety checklist is ensuring that local fishermen aren't somehow entangled in wiring as they begin their 3am working days. Anyhow, we're finally well into the composition phase - leveraging Max for Live and the endless generative musical possibilities that it offers.

We'll be documenting the compositional and technical development on the Variable 4 blog and on Twitter @variable4, releasing relevant source code and patches wherever possible.

Complexity and Networks meeting on music, beauty and neuroscience

Prog_19_5_10.pdf

Imperial's Complexity and Networks group are hosting a day-long meeting on music, beauty perception and neuroscience this coming May (Wednesday 19th). With a focus on the neural correlates of creative and aesthetic processes, and the complex dynamics thereof, it's one not to miss for art-and-emergence junkies.

See the attached list of talks (PDF) for more info.

Hackpact 2009/09/#25: Dagstuhl creativity writeup

multi.1024.jpg

Final push on getting together this belated report on the Dagstuhl seminar Computational Creativity: An Interdisciplinary Approach I attended a couple of months ago. Features such diagrammatic gems as the attached, depicting ongoing interactions in group improvisation.

Hackpact 2009/09/#21: Analogue tomfoolery with EMS VCS3: 'The Putney'

hackpact_21.mp3

The first (and, most likely, last) analogue hackpact entry: a gleeful hour spent playing with "The Putney", more commonly known as the Electronic Music Studios VCS3 (courtesy of James). The unit's matrix-based routing is quite unlike any modular or integrated interface I've seen before, more akin to a strange, solitary game of Battleships.

It's also pretty well-equipped for an antiquated monosynth, with a sexy radiophonic spring reverb, ring modulation and input channels to modulate arbitrary audio inputs. Capable of some screaming 60s tones, and very, very addictive.

(audio available under a CC BY-NC licence)