A strange truism of television and cinema is that the most realistic of scenes are often deceptively elaborate fictions, constructed from countless layers of careful staging, props, computer graphics and post-production effects (see the insights into visual artifice of Framestore’s “Gravity” installation at Digital Revolution).
The same applies equally to sound. The crunching steps of an actor’s coastal walk are most likely not recorded on set, but are the work of a foley artist, pacing on the spot through a tray of gravel, underlaid with pre-recorded environmental atmos. An auditory scene may be made up of dozens of interwoven elements, often drawn from sound libraries designed to evoke a particular setting.
Having long been interested in the narrative power of non-spoken sound, I made it a natural focal point for a couple of days that I recently spent working at the BBC R&D labs on behalf of the New Radiophonic Workshop, thinking about new ways in which the BBC’s vast media archives could be reimagined and repurposed. Housing several petabytes of media in racks of dedicated servers, the archive is a virtually unrivalled record of broadcast history, and has an increasingly sophisticated range of APIs to query and access it.
One of the strengths of this archive is that it is highly multifaceted, cross-linking the various elements of a programme and making subtitle transcripts available alongside audio and video recordings transcoded to different formats. This means that it is possible to search the archives for a given word or sentence, and watch every instance of that sentence being spoken over years of BBC broadcasts. A corollary is that we can do the same for sound effects: sound effects are subtitled just as speech utterances are, meaning that searching the archive for every instance of [PHONE RINGS] lets us go back and listen to every telephone ringing heard on the BBC.
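The search itself amounts to scanning subtitle captions for bracketed effect tags. A minimal sketch of the idea, using a made-up in-memory subtitle list rather than the real BBC archive API:

```python
import re

# Hypothetical subtitle entries: each pairs a caption with its timestamp
# (seconds from the start of the programme). The real archive would be
# queried over its API; this stands in for one programme's transcript.
SUBTITLES = [
    {"time": 12.0, "text": "Hello, who's there?"},
    {"time": 45.5, "text": "[PHONE RINGS]"},
    {"time": 47.0, "text": "I'd better get that."},
    {"time": 301.2, "text": "[PHONE RINGS]"},
]

def find_effect(subtitles, effect):
    """Return the timestamps of captions matching a bracketed effect tag."""
    pattern = re.compile(re.escape(f"[{effect}]"), re.IGNORECASE)
    return [s["time"] for s in subtitles if pattern.search(s["text"])]

print(find_effect(SUBTITLES, "PHONE RINGS"))  # → [45.5, 301.2]
```

Each returned timestamp points at the moment the effect was heard, ready for the relevant media to be fetched and trimmed.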
Some rapid prototyping later, alongside the expert developers from the R&D team, I had arrived at the system below: an autonomous process capable of delving into the BBC’s media archive in search of certain foley effects, deconstructing the artifice of television back into its constituent parts. Pre-loaded with a particular search term, it spiders the archive, iterating backwards through time for instances of a particular kind of sound effect, downloading the relevant media, and extracting the specific timestamp referenced by the subtitle. It then re-composites these fragments into a generative collage, structured by chance based on when a particular kind of sound has appeared on-screen.
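The pipeline above can be sketched in a few lines. The archive structure and field names here are illustrative stand-ins, not the real BBC R&D API: a list of programmes in reverse broadcast order, each carrying the subtitle timestamps at which the target effect was found.

```python
import random

# Mock archive: programmes in reverse broadcast order, each with subtitle
# hits (in seconds) for the target effect. All names are illustrative.
ARCHIVE = [
    {"programme": "p03", "broadcast": "2014-06-01", "hits": [12.0, 88.5]},
    {"programme": "p02", "broadcast": "2013-11-20", "hits": [301.2]},
    {"programme": "p01", "broadcast": "2012-02-09", "hits": [45.5, 130.0]},
]

def spider(archive, pad=1.5):
    """Walk backwards through broadcasts, yielding (programme, start, end)
    clip bounds around each subtitled effect, padded by `pad` seconds."""
    for prog in archive:
        for t in prog["hits"]:
            yield prog["programme"], max(0.0, t - pad), t + pad

def collage(clips, seed=None):
    """Shuffle the extracted clips into a chance-ordered playlist."""
    playlist = list(clips)
    random.Random(seed).shuffle(playlist)
    return playlist

playlist = collage(spider(ARCHIVE), seed=0)
```

In the real system, each `(programme, start, end)` triple would drive a media download and trim; here the chance structure is reduced to a seeded shuffle for clarity.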
The impact of this process is that it amplifies the connotations of a particular kind of sound effect. Footsteps are used as a byword for suspense, movement, tension, fear, following or being followed. This footsteps auto-montage is thus a sort of distillation of how the trope is deployed as a signifier within TV, joining together suspenseful moments from disparate broadcasts to create an endless stream of anticipation.
Many thanks to Andrew Nicolaou, Dan Nuttall, Pete Warren and BBC R&D for their generous support in the development of this work.