Herakles: Gestural Surround Mixing for Real-Time Applications

In Orange Theatre’s unique multimedia reimagining of the play, live video and sound effects help tell the story of Euripides' Herakles, the famous hero who returns home from his adventures to save his family from a tyrant, only to be stricken by a madness that causes him to murder his wife and children. This new work incorporates contemporary texts, live music, and the company’s personal material into the original story, offering a striking, modern take on the age-old tragedy.

The design for Herakles was born out of the Orange ensemble's need to break from the end-stage style of their last few works, which meant I had to redefine the auditory world that Orange productions live in.

When an audience moves around a set, perspective shifts depending on where each person is standing. The sound and media systems were designed to handle this problem of perspective. Eighteen screens and 32 individually addressed speakers lined the outer edge of the set, giving us the ability to send specific streams of audio and video to different groups of the audience around the venue.

I mixed the show in multiple mono streams that were processed through a custom Ambisonics patch in Max/MSP. This let me address the speakers either as point sources or as a multi-channel array, where any of the mono mixes could be moved through the space as 3D sound objects.
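To give a sense of how a mono mix becomes a movable 3D sound object, here is a minimal sketch of standard first-order Ambisonic (FuMa B-format) encoding in Python. The actual Max/MSP patch isn't reproduced here, and the function and signal names are illustrative only.

```python
import numpy as np

def encode_first_order(mono, azimuth, elevation):
    """Encode a mono signal into first-order B-format (FuMa W, X, Y, Z).

    mono: 1-D array of samples; azimuth/elevation: source direction in radians.
    """
    w = mono * (1.0 / np.sqrt(2.0))                 # omnidirectional component
    x = mono * np.cos(azimuth) * np.cos(elevation)  # front/back
    y = mono * np.sin(azimuth) * np.cos(elevation)  # left/right
    z = mono * np.sin(elevation)                    # up/down
    return np.stack([w, x, y, z])

# Example: place one of the mono mixes 45 degrees to the left, slightly raised
signal = np.random.randn(48000)                      # one second of stand-in audio
bformat = encode_first_order(signal, np.radians(45), np.radians(10))
```

The encoded B-format channels would then be decoded to whatever speaker layout is in the room, which is what lets the same mix address a point source or a full array.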

The content of the show was driven by our interest in using live foley to focus on the emotional center of each character in the moment. I would improvise and perform live foley along with a single performer, recording samples into Ableton Live to loop and layer as the action continued. In this way, I was able to create a rich, subtle, and human soundscape that helped push and pull the action as the show progressed.

A Graphical Approach to Mixing Engines

This Graphical Mixer is a collaboration between Matthew Ragan, Dr. Lance Gharavi, and myself. It is a system of TouchDesigner and Max/MSP networks that uses video streams to mix and apply effects to audio streams for multi-channel speaker arrays. It was commissioned as part of arsRobotica at ASU's Emerge Festival in 2015.

The Wonder Dome project led me to ask in what other ways we might manipulate signals mixed across large arrays of speakers. One approach that caught our eye was using a combination of visual and tactile input to generate patterns across mixing matrices.

This system is inherently different from automated mixing or ambisonic processes, as it places new responsibilities on the live engineer and designer. Matthew Ragan and I designed the mixer to function much like an analog A/B video switcher: the operator applies video effects to both pre-recorded and live video before the result is down-scaled to a low-resolution, 96-pixel output. Color and intensity values are sent over a network to another machine running a Max/MSP patch that converts the information into audio effects and mixing gain control.
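As a rough illustration of that pixel-to-gain mapping, the sketch below reduces a video frame to 96 intensity values and sends them across the network. OSC is assumed as the transport, and the grid layout, address, and port are placeholders rather than details of the production patch.

```python
import numpy as np
from pythonosc.udp_client import SimpleUDPClient   # assumes an OSC transport

client = SimpleUDPClient("192.168.1.20", 9000)      # receiving machine is a placeholder

def frame_to_gains(frame):
    """Reduce an RGB frame (H x W x 3, floats 0-1) to 96 intensity values."""
    h, w, _ = frame.shape
    rows, cols = 12, 8                               # 12 x 8 grid -> 96 cells, one per channel
    cells = frame[: (h // rows) * rows, : (w // cols) * cols].reshape(
        rows, h // rows, cols, w // cols, 3)
    intensity = cells.mean(axis=(1, 3, 4))           # average brightness per cell
    return intensity.flatten().tolist()              # 96 gain values, 0.0-1.0

frame = np.random.rand(480, 640, 3)                  # stand-in for a video frame
client.send_message("/mixer/gains", frame_to_gains(frame))
```

On the receiving end, a patch listening for that message would apply the 96 values as channel gains, so brighter regions of the image push more level to the corresponding speakers.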

I designed and built a 96-channel speaker array to hang overhead, blanketing the 100' x 50' space with high-quality audio.

During the installation I created a multi-channel mix of music and sound effects, while Matthew manipulated video that controlled the mix in real time. We found ourselves asking how the mixing environment would change if a live video input were used instead of standard playback.

Gestural Mixing in Live Performance

After working on a number of projects with large channel-count sound systems, I decided to focus on the designer/live-engineer relationship, specifically the way in which typically mono or stereo content is translated to a multi-point sound system. While this is not novel in itself, the standard ways of doing it are extremely time consuming, and the accepted technology that allows for flexible mixing control in real time typically does not lend itself to the collaborative environment of devised and experimental performance. The ability to make quick, expressive choices that allow the creative conversation to continue is paramount.

I wanted a director, choreographer, or conductor to be able to express an idea for the mix of an instrument or sound through gesture or another physical means.
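One way to picture that goal in code: the sketch below maps a tracked hand position to per-speaker gains with a simple distance-based falloff. The speaker layout, tracking source, and rolloff curve are assumptions for illustration, not the system built for the production.

```python
import math

# Speaker positions (x, y) in meters around the room; this layout is illustrative
SPEAKERS = [(0, 0), (4, 0), (8, 0), (8, 6), (4, 6), (0, 6)]

def gesture_to_gains(hand_x, hand_y, rolloff=1.5):
    """Map a tracked hand position to per-speaker gains.

    Speakers near the gesture get more level; distant ones fall off smoothly.
    """
    gains = []
    for sx, sy in SPEAKERS:
        d = math.hypot(hand_x - sx, hand_y - sy)
        gains.append(1.0 / (1.0 + d) ** rolloff)
    total = math.sqrt(sum(g * g for g in gains))     # normalize to constant power
    return [g / total for g in gains]

# A conductor sweeping a hand toward one side of the room pushes the mix that way
print(gesture_to_gains(1.0, 3.0))
```

The point is that a single physical input can re-weight an entire mixing matrix at once, fast enough to keep up with a rehearsal-room conversation.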

I developed and used the first iteration of this idea for the piece romeoandjuliet/VOID at Arizona State University, directed by Stephen Wrentmore.

Wonder Dome: Live Surround Mixing with Digital Puppets

Leo the Geodesic Dome and his friend The Storyteller are eager to spin some tales. Together they team up to help Pinky, one of the Three Little Pigs, get back to his brick house. BUT they end up in all the wrong stories. Will they find their way back to Pinky’s home?

Mixing hand puppets, digital puppets, live actors, movies, video games, and interactive visuals, lights and sound, this show gets the kids out of their seats and moving in order to help set the story straight.

Wonder Dome premiered at The SPARK! Festival of Creativity at Mesa Center for the Arts. It was commissioned for twenty-five performances, five shows a day, over the five-day festival.

In 2014, Daniel Fine and Adam Vachon asked if I would sound design their master's thesis project, Wonder Dome. On track to receive their MFAs in Interdisciplinary Digital Media and Performance, and in Performance Design, from Arizona State University, they were looking to create a project that was large in scope and would push their limits as designers. They were both interested in working in immersive environments and, after some research, chose to pursue working with a geodesic dome structure.

The project partnered with Vortex Immersion, a company with a history in immersive technology, specifically developing projection software for geodesic domes. The team of Daniel, Adam, Matthew Ragan, Alex Oliszewski, and I traveled to Los Angeles to meet with Vortex at their headquarters and get introduced to how they handle media in 360 degrees.

While the partnership with Vortex was extremely helpful in developing Wonder Dome's understanding of how to map and blend on curved surfaces, it did not give us any leads on how to work with the acoustics of a geodesic space or how to mix for this kind of immersive environment.

The mixing engine's control was heavily influenced by an ambisonics control system, while the mixing itself was a rudimentary multichannel gain-control system. The media team had decided on digital puppets controlled remotely via Wii remotes by actors in a tent next door to the dome. The actors could move their characters freely around the dome, with their position data streamed in real time to the audio mixing engine, effectively moving the actors' live audio to match the position of each character. We used this same technique to control the location of the majority of the sound design in the piece, with QLab as our playback software and Max/MSP as our mixing engine.
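As a hedged sketch of that rudimentary gain-control idea, the Python below weights a ring of speakers by how close each one sits to a puppet's streamed azimuth around the dome. The speaker count, spread, and cosine falloff are assumptions for illustration; the production version lived in Max/MSP and was driven by the Wii remote position data.

```python
import math

NUM_SPEAKERS = 8                                    # ring size is illustrative
SPEAKER_ANGLES = [i * 2 * math.pi / NUM_SPEAKERS for i in range(NUM_SPEAKERS)]

def position_to_gains(azimuth, spread=math.pi / 3):
    """Weight each speaker by its angular distance to the puppet's azimuth,
    with a cosine falloff inside the spread and silence outside it."""
    gains = []
    for angle in SPEAKER_ANGLES:
        diff = abs((azimuth - angle + math.pi) % (2 * math.pi) - math.pi)
        if diff < spread:
            gains.append(math.cos(diff / spread * math.pi / 2))
        else:
            gains.append(0.0)
    total = math.sqrt(sum(g * g for g in gains)) or 1.0   # constant-power normalize
    return [g / total for g in gains]

# As the actor moves the puppet, its streamed azimuth re-weights the channels
print(position_to_gains(math.radians(90)))
```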