I’d like to share my recent experiences of performing live with surround sound. The system in question is a diamond-shaped surround field of eight full-range speakers. I will not go into 5.1 or other Dolby-based formats targeting DVD home theatre systems; they are supported by several DAWs but are not suitable tools for preparing a partly playable, full-range octaphonic live setup. Therefore I decided to roll my own, patching away from scratch. Here’s the story.
I had been dreaming about live surround sound for decades but never had a chance to try it out in the real world, as few venues see a point in multiplying their PA system rental cost to put on just one “experimental concert”. The opportunity finally came up when my duo with Erdem Helvacıoğlu was booked for Présences Electronique 2011 in Paris by the French radio station and software developer ina-GRM. GRM wanted us to play our album Sub City 2064, and since we are only two musicians the concert would have to be performed as an interaction between Erdem, me and pre-prepared sounds remixed from the album. I immediately contacted the GRM sound engineers and learned that a diamond-shaped octaphonic system would be provided on location. The speakers were to be addressed as four stereo pairs fed by eight mono channels with a circular numbering: speakers 1/2 on the stage, 3/8 acting as side fills for the part of the audience sitting close to the stage, 4/7 a little further back in the venue, and finally 5/6 behind the audience and a little closer to each other.
Picking a strategy: Taste in music and public presentation
Then began the process of deciding which instruments to play live and which parts of the album to prepare for playback or as interactive electronic elements. I had access to all my files from mixing the stereo album, so I didn’t have to worry about anything not being technically possible to implement. Instead I focused on imagining the surround concert just as you would plan your playing or a composition: by taste in music and public presentation.
Generally I tried to put myself in the audience’s position and come up with ideas for what would sound really cool, things I’d like to experience myself at a surround concert. In the academic world of electronic music it is common to present a piece of octaphonic surround music as a plain playback of eight recorded channels, but I wanted to stay away from that and put the focus on the two live musicians playing on stage. So the decision of which instruments to play live was fundamental to the rest of the project; we both played many different instruments on the album, and that is not an option on stage, and definitely not when flying in to Paris from Stockholm and Istanbul. The combination of Erdem playing the Guitarviol and me playing the Stick seemed optimal. The Stick can also play electronics over MIDI, and by choosing the smaller Stick Guitar I could make room for also bringing an alto flute for live playing.
Selecting the most exciting parts to be played live on stage
The next task was to identify parts in the music, in the album mix or in specific album effect treatments that would make an interesting experience for the audience if performed with the live instruments. So I made a list of all that and filled it up with some extra things that can only be done in surround; exciting things that won’t work in stereo, like having two or four reverbs surrounding the audience and simulating a larger room by sending more or less from certain parts to these reverbs. Another example is making sounds appear to fly out from the stage over the heads of the audience by using a stereo reverb in speakers 1/2 and a time-delayed stereo reverb in the rear 5/6 (plus lots of delicate tweaks in the diffusion and frequency response areas).
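To make the fly-over idea concrete, here is a minimal offline sketch of the timing part alone, assuming a hypothetical venue where the rear pair 5/6 sits about 12 meters behind the stage pair 1/2; the reverbs themselves and the diffusion and EQ tweaks mentioned above are left out.

```python
# Sketch of the "fly-over" trick: the rear reverb send is simply the front
# send delayed by the time sound would need to travel from the stage pair
# to the rear pair, so the event seems to start at the stage and land
# behind the audience. Distance and sample rate are assumptions.
import numpy as np

SR = 48000              # sample rate
DISTANCE_M = 12.0       # assumed front-to-rear speaker distance
SPEED_OF_SOUND = 343.0  # m/s

def rear_send(front_send: np.ndarray) -> np.ndarray:
    """Delay the front reverb send for the rear speaker pair 5/6."""
    delay_samples = int(round(DISTANCE_M / SPEED_OF_SOUND * SR))
    return np.concatenate([np.zeros(delay_samples), front_send])
```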
Utilizing specific surround expression
In the past I have done some surround mixing for recordings to be finalized on DVD video, and from that I learned that compared to normal stereo you can have a lot more frequency-intensive material in a surround mix, since you are not forced to carve out resolution by timbre the way the crowded stereo format demands. Surround opens up a much wider canvas of 360 degrees of circular directional resolution, and you can combine fat sound layers that would simply not fit within the physical restrictions of stereo transmission. Because of this, my work from mixing the album could not be directly applied to the preparatory phase of this surround concert project.
In general I strived to keep the perceived room ambiences from the album, but for the surround implementation I spread them out into the three physical dimensions rather than trying to fool the listener’s mind into “hearing 3-D sound from two speakers”. I also created some new live effects specifically for the surround field, to play with as a performance. One example is a three-dimensional tap delay using eight delay units, one “in each speaker”, all set to 100% wet, each sending one delay tap to the next delay unit in line. This way, when sending signal into the delay effect, every played note would bounce one full circle around the audience. On my station I kept an expression pedal assigned to the “freeze loop” function in all eight delays. In Paris we used Logic on my laptop for all this, and the delay was Logic’s Tape Delay plugin. I set the eight Tape Delay instances to quite heavy tape flutter to cause a minimal pitch discrepancy in each delay bounce and a degradation of the signal as it jumped around one full circle.
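For readers who want to experiment with the ring idea outside Logic, here is a rough offline sketch of the circular delay; the tap time, hop gain and number of hops are my own assumptions for illustration, and the tape flutter and freeze-loop behaviour of the actual Tape Delay instances is not modelled.

```python
# Circular tap delay: eight delay lines, one per speaker, each passing its
# delayed signal on to the next one in the ring. A note injected at speaker 1
# reappears at 2, 3, ... 8 and back to 1, fading a little on every hop.
import numpy as np

SR = 48000
TAP_SECONDS = 0.375   # assumed delay time per hop
HOP_GAIN = 0.8        # decay applied on every hop around the circle
N_SPEAKERS = 8

def circular_delay(dry: np.ndarray, n_hops: int = 24) -> np.ndarray:
    """Return an (8, n_samples) array with one wet channel per speaker."""
    tap = int(TAP_SECONDS * SR)
    out = np.zeros((N_SPEAKERS, len(dry) + (n_hops + 1) * tap))
    signal = dry.copy()
    for hop in range(n_hops):
        speaker = hop % N_SPEAKERS               # 0..7, circling the audience
        start = (hop + 1) * tap                  # each hop arrives one tap later
        out[speaker, start:start + len(signal)] += signal
        signal = signal * HOP_GAIN               # degrade a little on every hop
    return out
```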
Finally hitting the stage in Paris
When we arrived in Paris for the soundcheck we found that there was also an inner circle of smaller speakers surrounding the center core of the big surround field, placed like a fence around the live sound engineer’s booth. These small speakers were aimed outwards towards the audience, so the audience was actually sitting inside two circles of speakers. As the artists had not been informed about this in advance, and because it isn’t traditional “surround comme il faut”, we were asked if we wanted the inner circle turned off, but we decided to keep it on. People in the audience later told us the inner circle of speakers added an exciting dimension to the show, and we also trusted the engineers at ina-GRM to collaborate with an interesting on-the-fly use of anything at hand.
Choosing a software platform – the need for a Graphical Visual Conductor
Another important decision was which platform to use for surround file playback. Since I also play live electronics hosted on a laptop, it would be comfortable to use an application that can handle both of these tasks. After having created the general surround concept and the actual eight mono speaker sound files, I tried it all out in Apple Logic, in Ableton Live, in Apple Mainstage and in Plogue Bidule. There was also a second aspect to this: the need for visual cues on stage, “an on-screen graphical conductor”. Some pieces contain key- and scale-breaking chord changes where there is no rhythm, and we wanted to improvise rather freely over these melodic structures with the Guitarviol and Stick/Flute. Mainstage would have been the platform best equipped to provide a good “visual screen conductor” function, but unfortunately it could not handle the setup in a stable way (back in 2011). Bidule would tax the CPU too much, since I would have to cable up a lot of “hungry” third-party plugins to realize the setup. Live was not stable enough in general back in 2011, so that left me with Logic. Being the most CPU-efficient DAW, Logic let me implement both my own playable live electronics and the eight surround channels prepared as four stereo files. But I had to think a little extra about avoiding latency, because Logic is designed for producing recordings, not, like Ableton Live, as a good compromise between sound design accuracy and live performance playability. The solution was to use direct input monitoring in the RME Fireface 400 audio interface for the Stick and Flute inputs, and to stay away from any live instrument treatments that produce sharp attack transients which would interfere with the natural instrument attack. The same goes for software synth sounds: all slow-attack sounds, leaving room for the RME direct monitoring of “flute air spit” or string tap attacks.
The eight outputs from my RME Fireface 400 were patched into the PA stage box, targeting the eight surround speakers. Erdem, on his side, had brought a suitcase with an Eventide Eclipse, an AxeFx Ultra, a Kaoss Pad and similar gear, cabled up with a borrowed sixteen-channel mixer on a table. From his on-stage mixer, bus groups went into the stage box for the surround speaker channels.
Building an Octaphonic Surround Channel Mixer in Logic
For the duo’s second surround concert, at Borusan Music House in Istanbul, I had done a little more preparation. One piece uses elements of guitar-based metal and has a hysterical synth line throbbing around, and I had taken that part and mixed it to sway around rapidly in a full circle. I did this through signal routing in Logic’s mixer, using an environment object called “X/Y Vector”. The X/Y Vector pad routing I created for this was a simple crossfader of four stereo channels. On one axis I set up arithmetic rules (in a Transformer object) for morphing between the four stereo channels, and on the other axis I already had Left and Right stereo as the two crossfade poles. The Vector Pad object’s data is cabled through a number of Transformer objects, where the data stream is transformed to control the four send knobs of Aux channel 11. Each of the four send knobs represents a stereo channel matching one pair of surround speakers in the diamond-shaped setup. As you can see in the image, I set Aux 11 to “no output”, so the send knobs are the only active audio outputs. I used a joystick on my Faderfox LV3 hand mixer to play the surround field movements of the audio passing through this Aux 11 channel strip, recording automation and tweaking it to perfection while preparing the general playback files. As a result the source audio was dynamically distributed over the eight speaker channels to imply a sound source circling around the listener.
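If you want to prototype the same movement outside Logic’s Environment, here is a hedged sketch of the underlying math: one axis morphs between the four sends (one per speaker pair, front to rear) and the other is an ordinary left/right pan inside each pair. The piecewise-linear crossfade and the constant-level normalization are my assumptions; the arithmetic in the actual Transformer objects may well have been tuned differently.

```python
# Sketch of an X/Y panner over four stereo pairs (1/2, 3/8, 4/7, 5/6).
import numpy as np

def pair_send_gains(morph: float) -> np.ndarray:
    """Map morph in [0, 1] to gains for the four sends, front to rear."""
    positions = np.linspace(0.0, 1.0, 4)                  # where each pair "sits"
    gains = np.clip(1.0 - 3.0 * np.abs(morph - positions), 0.0, 1.0)
    return gains / max(gains.sum(), 1e-9)                 # keep overall level constant

def lr_pan(x: float) -> tuple[float, float]:
    """Equal-power left/right pan from x in [0, 1], used inside each pair."""
    angle = x * np.pi / 2
    return float(np.cos(angle)), float(np.sin(angle))
```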
An important piece of Logic-specific information here is which MIDI CC numbers are hardwired in Logic to specific channel send knobs. As you can see in the image, the incoming CC#2 is transformed into an outgoing CC#28, which matches the channel strip’s first send knob. The second send knob listens to CC#29, and so on.
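Continuing the sketch above, the send gains can be turned into the CC messages those knobs listen to. CC#28 and CC#29 for sends 1 and 2 come from the setup described here; that sends 3 and 4 continue at CC#30 and CC#31 is an assumption you should verify against your own Logic version.

```python
# Convert the four send gains from pair_send_gains() into (CC number, value)
# pairs for Logic's channel strip send knobs. CC#30/31 are assumed.
SEND_CCS = [28, 29, 30, 31]  # send knobs 1-4 (3 and 4 assumed)

def gains_to_cc(gains) -> list[tuple[int, int]]:
    """Return (cc_number, value 0-127) pairs for the four send knobs."""
    return [(cc, min(127, int(round(g * 127)))) for cc, g in zip(SEND_CCS, gains)]
```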
When we arrived at the venue in Istanbul it turned out the stage was in the center, surrounded by the audience, and I must say it was really great to play and hear the complete surround field the way the audience was hearing it. Paris had only offered flat stage monitors in mono, because the stage was outside the actual surround field. One issue turned up in Istanbul, though: the eight surround channels were not all surrounding us directly; only four speakers were, while the other four were placed in a similar circle four meters up in the air, where a round balcony surrounded the stage on the ground floor. Luckily I had kept the reverb channels rather free of untreated dry parts (following the approach of using reverb as an “answer” to indicate space), so at the soundcheck we could redirect the reverb channels to come “from above”. This was not planned but turned out to fit very well with the scenario of an instrumental under-water opera suggesting a soundtrack for life in a submarine city, as the room ambience was now experienced “above”, just as you experience the surface of the sea when diving.
In Istanbul we used Ableton Live on my laptop, but that did not work as well as Logic due to the lack of a stable “visual graphical conductor” function in Live. Erdem got an external monitor on his side of the table to be able to follow the arrangements, but as you might know Live only shows the audio waveform of the selected track, and as I was goofing around processing things live in Live, this display kept disappearing and reappearing on both my MBP screen and Erdem’s externally added 17″ screen.
Mainstage at North Sea Jazz – the superior Visual Conductor Screen
The third concert we did with the Sub City 2064 album material was booked by the North Sea Jazz festival in Rotterdam. This is a very big annual festival with no room for surround performance, but I just want to mention it briefly here because by that time, July 2012, Mainstage had been updated and we could benefit from the awesome visual conducting cues it can provide. Doing surround in Mainstage is simply a matter of directing the live processing and the eight surround speaker files, handled by the Playback plugin, to separate outputs – but for this stereo gig I routed them all to one stereo output.
As for the visual conductor aspect, Mainstage is totally configurable, so I could pick the waveform that best shows where the crescendi are coming up, and I was also able to name text objects with chord names and short reminders of how to play. On the Mainstage screen I put two counters and text objects: one that displays the name of the next cue and counts down the beats (eighth notes) to it, and another that displays the name of the current cue. This worked much better than in Ableton Live and Logic. Before that gig I captured screen videos of the Mainstage display and uploaded them to YouTube with permission only for Erdem to watch, so that he could rehearse in his Istanbul studio and prepare his live effects setup. We were not given any rehearsal or soundcheck time in Rotterdam.
I think that was about everything I learned in the process, and the typical stuff I was wondering about myself three years ago and wished someone would have spelled out for me :-)
Addendum – Octaphonic surround preparation tools for your DAW
This article was about creating your own tools as you go, through basic traditional signal addressing. But there are indeed specialized software tools available. The good guys at ina-GRM in Paris offer a nice option as part of their GRM-Tools plugin suite. Delays, Doppler, Reson and Shuffling are the specific GRM-Tools plugins supporting this non-standard use of 7.1. For an AU DAW the channels correspond as shown in the image. You need to switch your DAW to 7.1 surround support, and then the plugins will output audio for octaphonics through the DAW’s 7.1 channels. This means that the sub bass channel (LFE) becomes one of the eight full-range speaker channels, so you need to make sure your DAW doesn’t apply any low-pass filtering to that channel by default. Another fairly recent option for Ableton Live users is to seek out Max for Live patches for octaphonic surround processing.