This Graphical Mixer is a collaboration between Matthew Ragan, Dr. Lance Gharavi, and myself. It is a system of TouchDesigner and Max/MSP networks that uses video streams to mix and apply effects to audio streams for multi-channel speaker arrays. It was commissioned as part of arsRobotica at ASU's Emerge Festival, 2015.
The Wonder Dome project led me to ask what other ways we might manipulate signals mixed across large arrays of speakers. One approach that caught our eye was using a combination of visual and tactile input to generate patterns across mixing matrices.
This system is inherently different from any automated mixing or ambisonic process, as it assumes new responsibilities for the live engineer and designer. Matthew Ragan and I designed the mixer to function much like an analog A/B video switcher: the operator applies video effects to both pre-recorded and live video before it is down-scaled to a low-resolution, 96-pixel output. Color and intensity values are sent over a network to another machine running a Max/MSP patch that converts the information into audio effects and mixing gain control.
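The core idea, one pixel driving one speaker channel's gain, can be sketched in a few lines. The luminance formula and the pixel-to-channel mapping below are illustrative assumptions, not the actual TouchDesigner/Max patch:

```python
# Hedged sketch: map a downscaled 96-pixel frame to per-speaker gains.
# Assumes one pixel per speaker channel and Rec. 601 luma weights;
# the real system also sends color data for audio effects control.

def luminance(r, g, b):
    """Rec. 601 luma from 8-bit RGB, normalized to the range 0..1."""
    return (0.299 * r + 0.587 * g + 0.114 * b) / 255.0

def frame_to_gains(pixels):
    """pixels: list of 96 (r, g, b) tuples, one per speaker channel.
    Returns a list of 96 gain values in the range 0..1."""
    assert len(pixels) == 96, "expected one pixel per speaker channel"
    return [luminance(r, g, b) for (r, g, b) in pixels]

# Example: a uniform half-bright gray frame drives every channel
# at roughly half gain.
frame = [(128, 128, 128)] * 96
gains = frame_to_gains(frame)
```

In practice the gain values would be streamed continuously (for example over OSC or UDP, a common choice for TouchDesigner-to-Max communication) so that the Max/MSP patch can smooth them before applying them to the mixing matrix.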
I designed and built a 96-channel speaker array to hang overhead, blanketing the 100' x 50' space with high-quality audio.
During the installation I created a multi-channel mix of music and sound effects, while Matthew manipulated video that controlled the mix in real time. We found ourselves asking how the mixing environment would change if a live video input were used instead of standard playback.