Over the past few months, I've had my head down working at Animal Systems on a tremendously exciting new platform by the name of Chirp. In a nutshell, Chirp is a way to treat sound as data, enabling devices to communicate with each other using short packets of audio. A sender emits a series of tones; a receiver hears and decodes them, translating them into a code which can point to a picture, text, URL, or even another piece of sound.
Chester, the bird-robot-hybrid avatar of Chirp
My work has been focused on developing an iOS app which will very shortly be seeing the light of day, App Store pending. The experience is simple: Alice wants to send a picture to Bob, so she imports it into Chirp, hits a button, and the device chirps it (a sound like this). Bob's phone, and any nearby devices within earshot, can then decode the chirp and display the image. No painful Bluetooth pairing, no typing of email addresses, no USB-stick fiddling.
Of course, the system isn't breaking the laws of entropy by cramming a large JPEG into a second of audio: behind the scenes, the data itself is transferred to a cloud infrastructure and translated into a "shortcode", which is then sent over sound, decoded and resolved. A noisy sonic environment imposes an inherently low bitrate. But then, the bitrate of human speech is estimated at less than 100bps, and spoken language has turned out to be quite a useful feature.
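To make the shortcode-over-sound idea concrete, here is a minimal sketch of how a short alphanumeric code might be rendered as a sequence of pure tones, one per character. The alphabet, base frequency, tone length, and semitone spacing below are all my own illustrative assumptions, not Chirp's actual protocol.

```python
import math

# Hypothetical parameters -- illustrative only, not Chirp's real encoding.
ALPHABET = "0123456789abcdefghijklmnopqrstuv"  # 32 symbols
BASE_HZ = 1760.0           # frequency assigned to symbol "0"
SEMITONE = 2 ** (1 / 12)   # one semitone step per symbol index
TONE_SEC = 0.087           # duration of each tone
RATE = 44100               # audio sample rate

def symbol_freq(ch):
    """Map a shortcode character to its tone frequency."""
    return BASE_HZ * SEMITONE ** ALPHABET.index(ch)

def encode(shortcode):
    """Render a shortcode as PCM samples: one pure sine tone per character."""
    samples = []
    n = int(TONE_SEC * RATE)
    for ch in shortcode:
        f = symbol_freq(ch)
        samples.extend(math.sin(2 * math.pi * f * i / RATE) for i in range(n))
    return samples
```

A receiver would run the inverse: slice the incoming audio into tone-length windows, estimate the dominant frequency in each, and map it back to the nearest symbol. The real engineering challenge, as described below, is making that inverse step survive reverberation and background noise.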
One of the big lessons for me has been the sheer amount of engineering required for a magically simple transaction. Developed from conversations about the information-theoretic properties of avian linguistics, Chirp's audio system has been honed over countless months by a team of DSP gurus based in Barcelona, with an array of simulations run on UCL's Legion supercomputing cluster, rendering it resilient to hostile reverberant and noisy conditions. The underlying network consists of a massively scalable REST API, designed over many iterations, developed by a team of inveterate network architects, and now residing in the cloud. The inverse correlation between intuitive simplicity and actual complexity, in the tech domain at least, couldn't be clearer here.
The app is an exploratory first step, and there are almost too many next steps to contemplate. Anything that can transmit sound can send a chirp, so we've been experimenting with all sorts of lo-fi devices: the joy of sending a YouTube video link via a dictaphone is pretty much unrivalled. Throw an Arduino into the equation and suddenly there's an explosion of possibilities of conversing machines.
And there's an equal amount of philosophical potential in this research. Suddenly, the dumb alert tones produced by phones, lorries and fire alarms seem absurd. Why aren't these designed for machine as well as human ears, conveying valuable information about the state of the world? Why is the visual given default primacy as an information medium? And what happens when the typical silence of network communications is suddenly tangible, embodied, and broadcast?
Chirp will be free on the Apple iOS App Store.