A short sound study of the structure of the number sequence. Each positive integer is broken down into its prime factors, with each factor corresponding to a harmonic partial. We then count upwards, for each integer playing only those harmonics that correspond to its factors.
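A quick sketch of the idea in Python, assuming (my reading; the piece's actual mapping may differ) that each distinct prime factor p of the current integer sounds the p-th harmonic partial above some base frequency:

```python
def prime_factors(n):
    """Return the prime factors of n with multiplicity, e.g. 12 -> [2, 2, 3]."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def harmonics_for(n, base_freq=55.0):
    """Map each distinct prime factor p of n to the frequency of the
    p-th harmonic partial of base_freq (an assumed mapping)."""
    return [base_freq * p for p in sorted(set(prime_factors(n)))]

# Counting upwards, each integer sounds only its factor-partials:
for n in range(2, 13):
    print(n, prime_factors(n), harmonics_for(n))
```

So 12 = 2 × 2 × 3 would sound the 2nd and 3rd partials together, while a prime such as 13 sounds only its own, increasingly high, lone partial.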
Four mystifying glitches gleaned from this fine Processing/OpenGL applet (warning: applet). It turns out that, on my machine at least (Firefox 3.5/OS X 10.6.1/JVM 1.6), using the mouse scroll wheel whilst such an applet is playing causes a neverending stream of such oddities, pulling texture data from all over the shop (spot the eBay logo and frames from a Michael Jackson promo that I was watching earlier today). Eerie, anomalous manifestations of a half-remembered visual history.
Finally caught Between The Folds, a crossover art-science documentary which I was sad to miss at the Brooklyn Film Festival whilst I was there. It's a charming film; predictably full of breathless connectionism and unrepentant renegades, but with sufficient wow-shots and geometry porn to let all of this slide. Interesting points, too, about folding being simply transformative (where painting is additive and sculpture is subtractive), and about hitting (and subsequently rebounding from) the complexity ceiling.
As regular visitors will be aware, I am strongly in favour of a nonlinear approach to musical composition: organising sounds in space via models of nondeterministic systems and processes, such as AtomSwarm's flocking ecosystem, The Fragmented Orchestra's soundscape neural-net, and some more recent experiments in self-organised sound (more to be published very soon).
I'm co-ordinating a workshop on such topics next Thursday (15th October) at London's Space Studios, part of OpenLab Workshops' Fall 2009 series. It presupposes a basic working knowledge of Processing or Java; some experience of SuperCollider would also be useful, but is inessential.
Here's the blurb:
Simulating and Sonifying Natural Systems
An increasingly popular practice in digital arts is creating sonic representations of dynamical systems -- simulating natural phenomena such as insect swarms, tree growth, wind turbulence and neural networks, and translating such phenomena into sound, to create organic, dynamic audio-visual works.
Working from a basic knowledge of the Java-based Processing environment, Daniel Jones explains how to create such a simulation and subsequently connect it to the open-source SuperCollider synthesis engine, providing a valuable addition to a digital artist's toolkit.
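By way of illustration (in Python rather than Processing, and with a hypothetical synth name), here is roughly what "connecting to SuperCollider" involves at the wire level: the scsynth server listens for OSC messages over UDP, by default on port 57110. In the workshop itself, a Processing library such as oscP5 would handle this encoding for you.

```python
import struct

def osc_string(s):
    """Encode an OSC string: UTF-8, null-terminated, padded to a 4-byte boundary."""
    b = s.encode("utf-8") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *args):
    """Encode a simple OSC message with int, float and string arguments."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, bool) or not isinstance(a, (int, float, str)):
            raise TypeError("unsupported OSC argument: %r" % (a,))
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # 32-bit big-endian int
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # 32-bit big-endian float
        else:
            tags += "s"
            payload += osc_string(a)
    return osc_string(address) + osc_string(tags) + payload

# e.g. ask scsynth to start a (hypothetical) "sine" SynthDef as node 1000:
msg = osc_message("/s_new", "sine", 1000, 0, 0)
# To send it:
#   import socket
#   socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("127.0.0.1", 57110))
```

The nice property of this arrangement is that the simulation and the synthesis engine are decoupled: the Processing sketch only ever emits small, timestamped-ish control messages, leaving all DSP to SuperCollider.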
6pm, £10 entry (or £15 also including the following Arduino Basics workshop), at:
Space Studios, 129 – 131 Mare St, Hackney E8 3RH.
Nearest transport: Bethnal Green (tube), London Fields (Overground), Buses – see http://www.spacestudios.org.uk/contact/SPACE/
For a digital artist working with motion graphics, it's vital to have some method of recording high-quality videos of work for posterity: both as a primary form of documentation, and as an engaging way to disseminate work for feedback on Vimeo feeds and the like. Processing has recently incorporated the MovieMaker frame-by-frame video recording library into its core, and OS X Snow Leopard has introduced full-screen movie recording via QuickTime X. The shareware Snapz Pro has also provided OS X users with flexible movie recording since the dawn of time.
So, a solved problem? Not quite. For artists and filmmakers working with CPU-heavy real-time interactive A/V work, each of these approaches has a critical flaw. Screencast tools such as Snapz Pro and QuickTime X have CPU and GPU requirements such that they can drop frames under heavy strain; moreover, QuickTime X's capture seems to be capped at around 10 frames per second, insufficient to demonstrate the whizziness of graphical fireworks. MovieMaker and other internal frame-by-frame grabbers, conversely, won't ever miss a frame, but their encoding process can slow the framerate of the sketch itself beyond acceptable levels, which is lethal when dealing with real-time interactivity and synchronised audio/video streams.
Up until recently, I've been combating this by connecting a video camera via a Firewire/DV connection, taping the video, then capturing it back to the computer (in real time) before overdubbing the audio and compressing. Functional, but too much hassle to do regularly.
However, there is a better approach. The bad news? It's OS X only, and it requires a second Mac for the recording...
A Solution: Virtual DV over Firewire
So, here it is: DV screencasting through Firewire. By rigging up some freely-available software on two Macs, connected by Firewire, it's possible to simulate the DV camera method and record the video output straight into QuickTime X (or Final Cut Pro, etc). Minimal overheads, no framerate or quality loss, straight into a digital video file ready for upload.
I gave it a shot with my personal laptop wired up to an office Mac Mini (running Snow Leopard and Leopard respectively), shooting out 1024x768px video from a Processing sketch that completely saturated the host's CPU and GPU - and lo, out came a 30FPS .mov.
Notes and caveats:
The Firewire data transmission is video-only, so you'll either want to use a 3.5mm jack lead to send the host's audio output to the recording computer's input, or overdub your audio afterwards (using Soundflower or suchlike to record the host computer's output).
Note that the QuartzComposerLiveDV process should be running on the host computer (i.e., not the one doing the recording). I initially ran it on the wrong machine, and encountered much confusion. Also be aware that the VirtualDV instances should be left in their paused state, and not switched to "play".
Here's the video in its re-recorded form; compare it to the original, created using Snapz Pro and suffering from a low framerate. Sadly, Vimeo's encoding has not been favourable towards it (compare with the original .mov); next time, I will see how an .mp4 works out.