“PLATES” — My Approach to Immersive Music Recording & Mixing for Cinema & Home Entertainment
When looking at a single shot in a film, what appears to be a single, striking image is often constructed from multiple plates (e.g. background, midground, foreground). It’s a loose analogy, but it reflects how I approach mixing for immersive formats.
I keep all sources and microphones organised into a small number of distinct “plates,” and I generally avoid placing elements between plates unless there is a very specific musical or narrative reason to do so.
As humans, we have remarkable stereophonic acuity, especially in the frontal plane. Outside of that plane, however, our ability to localise sound becomes significantly less precise. Phantom imaging relies on having a loudspeaker on either side of the head; attempting to position a sound between, for example, the left channel and the left side channel often produces a vague or unstable result unless there is a speaker exactly at that position.
Even then, localisation away from the frontal plane is imprecise without turning to face the source - which is obviously not something we want audiences doing while watching a film. Add to this the enormous variability between cinema layouts and home playback systems, and the potential for unintended surprises increases rapidly.
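The frontal phantom imaging described above is usually built with a constant-power pan law, which keeps perceived loudness steady as an image moves between a pair of speakers. A minimal sketch, purely illustrative (the function name and 0-to-1 pan convention are my own, not from the post):

```python
import math

def constant_power_pan(theta: float) -> tuple[float, float]:
    """Constant-power (sin/cos) pan law for one stereo pair.

    theta: pan position in [0, 1], where 0 = hard left, 1 = hard right.
    Returns (left_gain, right_gain). L^2 + R^2 == 1 at every position,
    so the phantom image moves without a loudness dip or bump.
    """
    angle = theta * math.pi / 2
    return math.cos(angle), math.sin(angle)

# Centre position: both speakers at -3 dB, forming the phantom centre.
left, right = constant_power_pan(0.5)
```

This works reliably between a frontal pair precisely because a listener faces both speakers; applied between, say, a screen channel and a side surround, the same maths produces the unstable imaging the post warns about.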
My approach, derived from three decades of working in multichannel formats, is designed to minimise those surprises while still delivering a large, impactful soundstage. This applies across cinema, home entertainment, and also stereo playback. While I’m an advocate for immersive formats, stereo remains the dominant listening format for most end users outside of theatrical presentation.
For film soundtracks (excluding objects for the moment - I’ll return to those in a future post), I work primarily in 7.1.2, which I treat as three distinct stereo fields, or plates:
• Front (LCR)
• Side / Top (Lss, Rss, Ltc/Rtc) - another LCR
• Rear (Lsr, Rsr)
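The grouping above can be written down as a simple channel map. This is only a sketch of the bookkeeping (the dict structure and function name are mine; the channel labels follow the post, and the LFE is left out as it sits outside the plates):

```python
# Illustrative mapping of a 7.1.2 bed into the three "plates".
# The grouping is the only thing this sketch asserts; LFE (".1")
# is omitted because it is not part of any plate.
PLATES = {
    "front": ["L", "C", "R"],                  # anchored to the screen
    "side_top": ["Lss", "Rss", "Ltc", "Rtc"],  # width and height
    "rear": ["Lsr", "Rsr"],                    # diffuse energy
}

def plate_of(channel: str) -> str:
    """Return the plate a given bed channel belongs to."""
    for plate, channels in PLATES.items():
        if channel in channels:
            return plate
    raise ValueError(f"unknown channel: {channel}")
```

Keeping this map explicit is one way to enforce the rule that follows: a source is assigned to exactly one plate rather than panned to a position between two of them.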
Within this framework, I avoid panning material to intermediate positions between plates. I also tend to avoid the “wides”. If, for example, the screen width is only half the room width and the proscenium speakers / wides are a significant distance from the screen edge, image focus can easily be destabilised in ways that are highly room-dependent.
My primary concern is that each plate functions as its own cohesive stereo image, while correlation between plates is kept to a minimum (correlation here being meant in a broad musical and perceptual sense, encompassing phase relationships, timbre, tonal balance, colour, and content). This is for two reasons.
First, when the three plates are collapsed into a single stereo image, width and a strong sense of depth are retained with minimal colouration. Second, this approach produces an expansive soundfield while avoiding the sensation that everything is sitting between the loudspeakers and the listener - an effect that can be powerful when used intentionally, but when unintentional can feel somewhat claustrophobic.
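The "minimal correlation between plates" goal can be checked numerically. One common measure is the Pearson correlation between two signals: near 1 means near-identical content (which combs and colours on fold-down), near 0 means decorrelated content (which sums cleanly). A minimal sketch, not the author's tooling:

```python
import math

def correlation(a: list[float], b: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length signals."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    norm_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    norm_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (norm_a * norm_b)

# Two decorrelated plate signals fold to stereo without the comb-filter
# colouration that highly correlated copies of one signal would produce.
```

In this broad sense, phase is only one axis; the post also treats timbre, tonal balance, colour, and content as dimensions along which the plates should stay distinct.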
These principles are also the foundation of my anamorphic microphone array, which is listed on the Echo Project site under P3H Arrays. The array translates into three functional plates:
• A front plate, providing precision and definition, keeping focus anchored to the screen
• A mid plate, expanding width and height while maintaining a high proportion of direct sound: the goal is scale, not reverberation
• A rear plate, introducing highly diffuse energy that enlarges the soundfield without creating the impression that specific instruments or sources are located behind the audience
There are, of course, narrative moments where placing an element behind the listener is appropriate. When that’s required, I’ll typically address it using objects - which is a separate discussion.
A question that often arises is why I favour 7.1.2 rather than 7.1.4. The primary reason is theatrical translation. In cinema playback systems, bed channels (arrays) and objects behave quite differently, and array delays are applied as part of the room calibration process - dependent on room size and geometry. I want the height information to remain part of the same spatial architecture as the side and rear arrays, rather than becoming detached from them. In practice, moving height information entirely into objects can change perceived scale and width in theatres from what one might expect when working in a music mix room.
It’s also worth noting that this approach translates very well to consumer 7.1.4 environments. While the production format is 7.1.2, the spatial relationships remain coherent and scale effectively when rendered into typical home immersive layouts.
For me, the plates approach is ultimately about preserving narrative focus, musical intent, and spatial scale - not just in one type of playback environment, but across the full range of ways audiences experience film music.