My Kinect, Quartz Composer, and PureData Recipes

Finally, I’ve self-published a modest paper on my approach to using Kinect in live performance. Read or download it here.

The Lavender Spray: Performance with Kinect

It happened. Some people got hurt, bad. Some may have even wandered home afterwards and hanged themselves from silk stockings or rough rope they bought at Home Depot, despondent over the current state of the arts . . . .

(of course, I jest . . . . )

meme™—media experimental ensemble™—provided a momentary distraction from that state at the FAU Faculty Biennial exhibition last Friday (September 20, 2013) on the Boca campus. It was a well-attended event, and much curiosity was aroused by a certain crazed mad scientist in black conducting his torso-controlled remix of Strauss’s Also Sprach Zarathustra (seen in the video above, after the final scrolling credits).

Additional Quartz compositions, videojam sets, and film remixes of the work of Ariel Baron-Robbins were provided by Anna Torlen, James Ko, David Wichinsky, Rosena Francois, and August DeWinkler.

Video Feedback Jam, Digital Folk Music, and Studio Nostalgia

Submitted for your perusal, in the eye candy/ear candy category, is this latest meme™ jam, a meditation on video feedback set to an unpolished, unsequenced electronic jam from 1991.

Visuals: two channels of the Rutt-Etra video synthesizer set against a blurred video background. The input camera is pointed at the screen to create feedback. Created in Quartz Composer.
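If you’re curious what that loop is actually doing, here’s a toy sketch of camera-at-the-screen feedback in Python (numpy + scipy), under my own assumptions; the frame size, zoom factor, and decay are all arbitrary, and it’s nothing like the actual Quartz patch:

```python
# Toy simulation of camera-at-screen video feedback (not the Quartz patch itself):
# each iteration re-captures the "screen", zooms in slightly, blurs, and blends
# in a fresh input frame. Assumes numpy and scipy are installed.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

H = W = 128                # frame size (hypothetical)
frame = np.zeros((H, W))
rng = np.random.default_rng(0)

def capture_input():
    """Stand-in for the live camera: a faint random pattern."""
    return rng.random((H, W)) * 0.1

for _ in range(60):                            # ~2 seconds at 30 fps
    zoomed = zoom(frame, 1.05)                 # camera sees the screen slightly magnified
    y0 = (zoomed.shape[0] - H) // 2
    x0 = (zoomed.shape[1] - W) // 2
    frame = zoomed[y0:y0 + H, x0:x0 + W]       # crop back to screen size
    frame = gaussian_filter(frame, sigma=1.0)  # optical softness -> hazy trails
    frame = 0.9 * frame + capture_input()      # decay plus fresh input keeps it alive
    frame = np.clip(frame, 0.0, 1.0)
```

The haze comes from the blur accumulating over iterations; the slight zoom is what makes feedback tunnel and swirl instead of just echoing.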

Music: Bad Mind Time™ Digital Symphonies, first few minutes of Volume I, part 1. The entire collection—over 12 hours of electronic and acoustic sketches and improvisations—was created between 1991 and 2005, and can be streamed, downloaded, or remixed here. BMTDS is one big, online musical sketchbook: lots of thumbnails, doodles, and scribbles in sound. Some sketches are very rough, some are more developed. There are multiple experiments with samplers, sequencers, keyboards, live and extended instruments, and voices.

You may hear echoes of your own electronic studio: Ensoniq ESQ-1 synth/sequencer, Roland S-10 sampler, Roland R-8 drum machine, the Casio VL-Tone, the Kawai G-Mega and E-mu Proteus Orchestral MIDI modules, Alesis Microverb, Mackie 1202 mixer, Pioneer RT-2044 2/4-track open-reel deck, Tascam DAT deck, Tascam 788 digital 8-track; Opcode Studio Vision Pro software run on a Mac IIci (25 MHz processor!) and/or a PowerBook 5300cs (100 MHz!). Ah, good times! And P.S., I’ve sold or given away almost all that vintage equipment (as well as all but eight or ten LPs from a collection that once numbered maybe a couple thousand records), because, well, I’ve moved around a lot.

This is the sixth in a series of meme™ jams, short videojams to document how I’ve approached the form. The other five can be downloaded on iTunes.

Kinect + Quartz + Oscar Wilde

For my first in-class presentation this semester, I performed “My Profundis”, a little visual-sonic-textual-movement essay where my alter ego SkyRon™ channels Oscar Wilde while generating trippy visuals via Kinect. The video cube’s location and rotation are controlled by the left hand; its size, along with the position of the fuzzy halo-balls (a particle system), is controlled by the right hand. A live camera feeds visuals to the background and the cube, and is pointed at the screen or monitor, creating a very hazy style of video feedback. There’s also a patch I found online that motion-blurs the video background according to the position of your body.

This is my first project using the Kinect, which drives a patch I created in Quartz Composer (with a big assist from the Synapse middleware and a patch made in Max/MSP). Currently I can’t find much documentation on Kinect-Quartz beyond the Synapse site. If you’re interested in a more detailed exploration of the Kinect (with Processing, Arduino, MakerBot, etc.), I suggest the O’Reilly text Making Things See by Greg Borenstein, which uses the OpenNI library to create interaction with Processing.
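For the terminally curious, here’s roughly what the Synapse side of the pipeline looks like from outside Quartz Composer: a minimal Python sketch using the python-osc package, assuming Synapse’s OSC scheme as I understand it from its site (joint positions such as /righthand arrive as three floats on UDP port 12345, and tracking has to be re-requested on port 12346 every couple of seconds). The mapping in the handler is hypothetical:

```python
# Minimal sketch of listening to Synapse's skeleton data, assuming its documented
# OSC scheme: joint positions broadcast on UDP port 12345, tracking kept alive by
# re-requesting it on port 12346 every ~2 s. Uses python-osc (pip install python-osc).
import threading
import time
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import ThreadingOSCUDPServer
from pythonosc.udp_client import SimpleUDPClient

control = SimpleUDPClient("127.0.0.1", 12346)   # Synapse listens for requests here

def keep_alive():
    # Ask Synapse for body-relative joint positions (mode 1), repeatedly.
    while True:
        for joint in ("righthand", "lefthand"):
            control.send_message(f"/{joint}_trackjointpos", 1)
        time.sleep(2)

def on_joint(address, x, y, z):
    # Hypothetical mapping: left hand -> cube position, right hand -> cube size.
    print(f"{address}: x={x:.1f} y={y:.1f} z={z:.1f}")

dispatcher = Dispatcher()
dispatcher.map("/righthand", on_joint)
dispatcher.map("/lefthand", on_joint)

threading.Thread(target=keep_alive, daemon=True).start()
ThreadingOSCUDPServer(("127.0.0.1", 12345), dispatcher).serve_forever()
```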

“My Profundis” is at a conceptual growth-point and may take a number of different paths as it develops. I will have a better idea of how to incorporate Kinect into performance once I connect the sensor to other software, especially PureData (sound design/composition) and Unity, and possibly Processing, once I overcome my Terminal (Unix) Anxiety!
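When I do get to PureData, the plumbing itself should be painless, since everything speaks OSC over UDP. A hypothetical bridge in Python could just re-send joint data to a Pd patch listening with [udpreceive 9001] -> [unpackOSC] (from the mrpeach externals); the port number is an arbitrary choice that has to match on both ends:

```python
# Hypothetical bridge: forward a joint position to a PureData patch listening with
# [udpreceive 9001] -> [unpackOSC]. Port 9001 is arbitrary; match it in the patch.
from pythonosc.udp_client import SimpleUDPClient

pd = SimpleUDPClient("127.0.0.1", 9001)
pd.send_message("/righthand", [0.0, 0.5, 1.2])  # a joint position as three floats
```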

What you need to recreate this (besides a MacBook Pro running Lion, a separate video camera, and a Kinect sensor) is Synapse (downloads here) and my Quartz composition. Enjoy!

Channeling Cage, Cunningham, and Paik in Quartz Composer

So this work—Trialog and Interludes—began as a formal idea for a visualist presentation by meme™ (media experimental ensemble): audiovisual compositions by individual meme-bers of the group would function as interludes between three “takes” of a larger, 17-minute work for visualists, dancers, and electronic music, the Trialog.

The Trialog is expressed in two ways. First, as a set of three-voice chantings of micronarratives (done in Flash; the score is below), played as bookends to the entire performance (in an ideal performance, that is; it didn’t happen this time). Second, as an imaginary three-way dialog between the works of John Cage, Merce Cunningham, and Nam June Paik, representing their progressive approaches to music, dance, and electronic visual media. That trio gives the work its aesthetic frame of reference, re-imagined through contemporary means. Are we good with that?

This was meme™’s first performance exclusively using Apple Quartz Composer to manage all the visuals and interactions. The fantastic Rutt-Etra Video Synth plug-in by Vade was a key visual element: it enfolded the dancers in a 3D mesh that responded in real-time to their movement.
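If you haven’t met the Rutt-Etra effect before: it treats the image as a stack of horizontal scan lines and displaces each line by the brightness underneath it, which is what wraps the dancers in that responsive mesh. Here’s a toy sketch of the core idea in Python (numpy + matplotlib); it’s nothing like Vade’s GPU implementation, and the “frame” is just a synthetic blob standing in for a dancer:

```python
# Toy illustration of the Rutt-Etra idea (not Vade's plug-in): treat the image as
# horizontal scan lines and displace each line vertically by local brightness.
import numpy as np
import matplotlib.pyplot as plt

H, W = 120, 160
# Synthetic "frame": a bright blob, standing in for a dancer against black.
y, x = np.mgrid[0:H, 0:W]
frame = np.exp(-(((x - 80) / 30.0) ** 2 + ((y - 60) / 25.0) ** 2))

gain = 12.0                          # displacement strength, in plot units
fig, ax = plt.subplots(figsize=(6, 4), facecolor="black")
ax.set_facecolor("black")
for row in range(0, H, 4):           # draw every 4th scan line
    ax.plot(x[row], row - gain * frame[row], color="white", linewidth=0.5)
ax.invert_yaxis()                    # image coordinates: y grows downward
ax.axis("off")
plt.show()
```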

Each dancer was asked to develop a vocabulary of 12 events, and each was given an instruction track to listen to on a clip-on iPod mini.

Special thanks to dancers Stacee Lanz and Kori Epps, and to my graduate class, Creating Interactive Culture: Cynthia Gutierrez, Michelle Hipps, Chandra Maldonado, Miguel Oubina, Steven Wang, and Xuan Zhang.

And a word on the sound component: this was also the first time meme™ performed without a pre-produced soundtrack (I know, high time). Compositionally, each of the visual sketches had a set of samples associated with it, and those samples were mixed live via Kevin Holland’s wonderful Sapling software (free, for those of you running Mac OS). The dance video above used a bunch of samples I created from the waterphone. Other sections remixed Tallis (Spem in alium—see earlier post—with editorial voices provided live by Bebot, the Singing Robot), as well as SkyRon’s Bister Badgent, my instrumental track Echo Mic (appearing on the American Sock™ soundtrack), and music I wrote for an interactive presentation of humanitarian work in Cambodia (based on indigenous melodies and instruments). Samples to be available soon!