Conceived for the unique acoustic of San Francisco's Davies Symphony Hall, Brant's virtually unclassifiable work places two conductors and countless musicians in multiple locations within the hall. SFS Media's binaural issue is the first time I've been able to close my eyes, focus on the music, and begin to make sense of Ice Field's divine chaos.

— Stereophile

Mark Willsher

Authorship in Immersive Music

A reflection on authorship in immersive music, exploring the differences between authored and exploratory approaches, and why informed choice matters.

When it comes to immersive music production, much of the current discussion centres on whether object-based or channel-based approaches should form the basis of master deliverables. Proponents exist for each, but in practice there is rarely a single “best” option. The appropriate approach depends on the project at hand, both technically and artistically, but also on something less frequently discussed: authorship. In many cases, the most effective solution is a considered combination of the two approaches.

For clarity, I’m drawing a distinction between mixes where the experience is intentionally authored - with instruments and vocals combined with their effects, balanced against other elements, and presented within a deliberate spatial framework - and approaches where instruments and effects are delivered separately, allowing space, balance, and perspective to shift dynamically based on listener position. In the latter case, moving closer to a source may change early reflections, alter the direct-to-reverberant ratio, and rebalance elements relative to one another.
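To put a number on why listener position matters: in a simple diffuse-field model, direct sound falls about 6 dB per doubling of distance while the reverberant field stays roughly constant, so the direct-to-reverberant ratio swings substantially as a listener approaches a source. A minimal sketch - the critical-distance relationship is standard room acoustics, but the figures and function name here are purely illustrative:

```python
import math

def direct_to_reverberant_db(distance_m: float, critical_distance_m: float) -> float:
    """Direct-to-reverberant ratio (dB) in a simple diffuse-field model.

    Direct level falls 6 dB per doubling of distance; the reverberant
    field is treated as constant. At the critical distance the two are equal.
    """
    return 20.0 * math.log10(critical_distance_m / distance_m)

# Illustrative only: a room with a critical distance of 2 m.
for d in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"{d:>4} m  ->  D/R = {direct_to_reverberant_db(d, 2.0):+5.1f} dB")
```

An interactive renderer has to recompute this (along with early reflections) continuously as the listener moves; an authored mix bakes one chosen perspective in.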

As immersive delivery expands across platforms, particularly in virtual and augmented reality, there has been increasing advocacy for fully object-based production and delivery. Much of this enthusiasm reflects very real platform and technology needs, especially in contexts where the listener is expected to move freely through a virtual space. Those priorities are valid. They do not, however, always align perfectly with the priorities of artists creating authored musical works, and it’s important that artists understand the implications of choices made during production.

Dolby Atmos, while adaptable to many uses, was originally developed for cinema, for narrative storytelling, with a key goal of maintaining the creator’s intent across a wide range of playback environments. By contrast, many newer immersive technologies are conceived from the outset for virtual or interactive experiences. A loose comparison might be this: traditionally, music is presented much as one might hear an ensemble perform. The listener sits back, and placement, balance, and timbre are shaped by the performers and the space. In virtual environments, the goal is often the opposite. The listener may walk into the ensemble, move between sections, or place their ear next to a single instrument. Achieving this convincingly requires not only significant processing, but also a high degree of control. There is nothing inherently wrong with this, provided it aligns with the intent of the work and is understood by everyone involved.

Object-based masters are essential for exploratory experiences: environments that allow audiences to navigate freely and encounter a work from multiple perspectives. That does not mean they should be the default choice for all projects.

When an artist delivers a true object-based master as the primary representation of a work, they are implicitly granting permission for that work to be reassembled, rebalanced, and re-presented in contexts far removed from the original intention.

That may be desirable, but it should be a conscious choice.

Choosing an exploratory format as the primary master isn’t just a technical decision; it’s a decision about authorship. It affects how much control an artist retains over how their work is experienced, both now and in the future.

It’s understandable that different practitioners emphasise the approaches they specialise in. What matters is that artists are given a clear picture of the implications of those approaches, rather than being led to believe that one method is universally “best” or inherently future-proof.

There is room for both authored and exploratory experiences. Production methods and deliverables can, and should, adapt to the type of experience being created, with the creator fully aware of what those choices entail. And while advances in stem-splitting and re-rendering technologies may eventually blur some of these boundaries, that doesn’t mean we should unknowingly deliver a de facto multitrack master by default.

Mark Willsher

“PLATES” — My Approach to Immersive Music Recording & Mixing for Cinema & Home Entertainment

An outline of my approach to immersive music recording and mixing for film, using a “plates” framework to think about spatial intent, translation, and collaboration with composers across cinema, home, and stereo playback.

When looking at a single shot in a film, what appears to be a single, striking image is often constructed from multiple plates (e.g. background, midground, foreground). It’s a loose analogy, but it reflects how I approach mixing for immersive formats.

I keep all sources and microphones organised into a small number of distinct “plates,” and I generally avoid placing elements between plates unless there is a very specific musical or narrative reason to do so.

As humans, we have remarkable stereophonic acuity, especially in the frontal plane. Outside that plane, however, our ability to localise sound becomes significantly less precise. Stable phantom imaging relies on a loudspeaker pair placed either side of the head; attempting to position a sound between, for example, the left channel and the left side channel often produces a vague or unstable image unless a speaker sits exactly at that position.

Even then, localisation away from the frontal plane is imprecise without turning to face the source - which is obviously not something we want audiences doing while watching a film. Add to this the enormous variability between cinema layouts and home playback systems, and the potential for unintended surprises increases rapidly.
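For reference, this is what a pair-wise panner is doing underneath: a constant-power law splits a signal between the two loudspeakers of a pair so that total radiated power stays constant as the phantom image moves. A minimal sketch - the sine/cosine law is standard, the naming is mine, and it stands in for no particular console or renderer:

```python
import math

def constant_power_pan(position: float) -> tuple[float, float]:
    """Constant-power gains for one loudspeaker pair.

    position: 0.0 = fully in the first speaker, 1.0 = fully in the second.
    gain_a**2 + gain_b**2 == 1.0 for every position.
    """
    theta = position * math.pi / 2.0
    return math.cos(theta), math.sin(theta)

g_a, g_b = constant_power_pan(0.5)      # centred phantom image
print(f"a={g_a:.3f}, b={g_b:.3f}")      # both ~0.707, i.e. -3 dB each
```

Between a matched frontal pair this yields a stable image; apply the same gains across, say, the front left and left side speakers and you get the vague, room-dependent result described above.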

My approach, derived from three decades of working in multichannel formats, is designed to minimise those surprises while still delivering a large, impactful soundstage. This applies across cinema, home entertainment, and also stereo playback. While I’m an advocate for immersive formats, stereo remains the dominant listening format for most end users outside of theatrical presentation.

For film soundtracks (excluding objects for the moment - I’ll return to those in a future post), I work primarily in 7.1.2, which I treat as three distinct stereo fields, or plates:

Front (LCR)
Side / Top (Lss Rss Ltc/Rtc) - another LCR
Rear (Lsr Rsr)
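Restated as data, the grouping is simply this (channel labels as in the list above):

```python
# The 7.1.2 bed grouped into three stereo "plates" (LFE handled separately).
PLATES = {
    "front":    ["L", "C", "R"],
    "side_top": ["Lss", "Rss", "Ltc", "Rtc"],  # functions as another LCR-style field
    "rear":     ["Lsr", "Rsr"],
}
```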

Within this framework, I avoid panning material to intermediate positions between plates. I also tend to avoid the “wides”. If, for example, the screen width is only half the room width and the proscenium speakers / wides are a significant distance from the screen edge, image focus can easily be destabilised in ways that are highly room-dependent.

My primary concern is that each plate functions as its own cohesive stereo image, while correlation between plates is kept to a minimum (correlation here being meant in a broad musical and perceptual sense, encompassing phase relationships, timbre, tonal balance, colour, and content). This is for two reasons.

First, when the three plates are collapsed into a single stereo image, width and a strong sense of depth are retained with minimal colouration. Second, this approach produces an expansive soundfield while avoiding the sensation that everything is sitting between the loudspeakers and the listener - an effect that can be powerful when used intentionally, but when unintentional can feel somewhat claustrophobic.
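To see why low inter-plate correlation matters for that collapse: a fold-down sums every channel into the same left/right pair, so content duplicated across plates sums coherently and colours the result, while decorrelated plates combine on a power basis and keep their width and depth. A minimal sketch - the coefficients are illustrative, not taken from any Dolby or SMPTE downmix specification:

```python
import numpy as np

# Illustrative fold-down gains (not from any published spec).
FOLD_L = {"L": 1.0, "C": 0.707, "Lss": 0.707, "Ltc": 0.5, "Lsr": 0.5}
FOLD_R = {"R": 1.0, "C": 0.707, "Rss": 0.707, "Rtc": 0.5, "Rsr": 0.5}

def fold_to_stereo(channels: dict[str, np.ndarray]) -> tuple[np.ndarray, np.ndarray]:
    """Collapse a 7.1.2 bed (LFE omitted) into stereo.

    Channels absent from the gain tables fold down at zero gain.
    """
    left = sum(FOLD_L.get(name, 0.0) * sig for name, sig in channels.items())
    right = sum(FOLD_R.get(name, 0.0) * sig for name, sig in channels.items())
    return left, right
```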

These principles are also the foundation of my anamorphic microphone array, which is listed on the Echo Project site under P3H Arrays. The array translates into three functional plates:
A front plate, providing precision and definition, keeping focus anchored to the screen
A mid plate, expanding width and height while maintaining a high proportion of direct sound: the goal is scale, not reverberation
A rear plate, introducing highly diffuse energy that enlarges the soundfield without creating the impression that specific instruments or sources are located behind the audience

There are, of course, narrative moments where placing an element behind the listener is appropriate. When that’s required, I’ll typically address it using objects - which is a separate discussion.

A question that often arises is why I favour 7.1.2 rather than 7.1.4. The primary reason is theatrical translation. In cinema playback systems, bed channels (arrays) and objects behave quite differently, and array delays are applied as part of the room calibration process, dependent on room size and geometry. I want the height information to remain part of the same spatial architecture as the side and rear arrays, rather than becoming detached from them. In practice, moving height information entirely into objects can change perceived scale and width in theatres from what one might expect when working in a music mix room.

It’s also worth noting that this approach translates very well to consumer 7.1.4 environments. While the production format is 7.1.2, the spatial relationships remain coherent and scale effectively when rendered into typical home immersive layouts.

For me, the plates approach is ultimately about preserving narrative focus, musical intent, and spatial scale - not just in one type of playback environment, but across the full range of ways audiences experience film music.

Mark Willsher

Immersive Audio for Film Scores: What Actually Matters?

A surprising number of film scores—especially outside major studio features—are still mixed or premixed in 5.1. For composers, this is no longer ideal, and there are important reasons why: both for the film and for the soundtrack album.

There are many ways to approach this topic, but I want to focus on what actually matters for composers, since in most cases, the composer is my client.

A surprisingly large number of film scores - especially outside major studio features - are still mixed or premixed in 5.1. In my view, this is no longer ideal for composers, and there are meaningful reasons why: both for the film and for the soundtrack album.

1. Why Immersive Mixing Matters for the Film (Even If the Deliverables Say 5.1)

A question I get often is:

“Why bother mixing beyond 5.1 if the film’s delivery spec is only 5.1?”

For most projects, my preference is to deliver a set of 7.1.2 stems (more on that in a future post).
To clarify a few points that often come up in this discussion:

  • I always discuss formats with the re-recording mixer first.

  • I am not advocating for casually sending object-based mixes to a dub stage.

When 5.1 Becomes a Liability

A few years ago, on two different films mixed at two different facilities (in different countries), the deliverables were explicitly 5.1.
So I mixed the score in 5.1 and delivered 5.1 stems.

Later, when I heard the 5.1 printmasters, I noticed - in both cases - strange phasing artifacts in the music. After some investigation, I discovered:

  • the films had actually been mixed in Atmos,

  • the final 5.1 deliverables had been generated as re-renders,

  • and my 5.1 stems had been upmixed to 7.1.2 using an upmix plugin during the film mix.

Suddenly the phase anomalies made perfect sense.
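One practical way to catch this kind of problem before sign-off is a polarity-inverted null test: time-align the delivered stem against the corresponding material in the printmaster, subtract one from the other, and listen to or measure the residual. A transparent chain nulls to near silence; an upmix/re-render round trip does not. A minimal sketch, assuming two already time-aligned mono arrays - the function name is my own:

```python
import numpy as np

def null_test_residual_db(original: np.ndarray, processed: np.ndarray) -> float:
    """RMS level (dB) of the difference between two time-aligned signals.

    A residual far below programme level suggests a near-transparent chain;
    a strong residual flags phase or spectral changes worth investigating.
    """
    n = min(len(original), len(processed))
    residual = original[:n] - processed[:n]
    rms = float(np.sqrt(np.mean(residual ** 2)))
    return 20.0 * np.log10(max(rms, 1e-12))
```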

The deeper discovery was this:

I found that quite a few post facilities now run their entire workflow through the Atmos Renderer for all projects, even if the project isn’t officially an Atmos deliverable.

I’m not suggesting that every facility works this way, but I’ve encountered it often enough to consider it a fairly common workflow.

Why?

  • It simplifies multi-format deliverables via the Dolby Renderer.

  • It future-proofs the mix if the film later receives an Atmos release.

  • It allows them to “upsell” an Atmos version without redoing the entire mix.

Is this bad practice?

Not at all, but it is something composers and score mixers should be aware of, as it affects how their work translates downstream.

Why 5.1 Music Is a Missed Opportunity

If the final stage is working in Atmos, strict 5.1 stems are a limitation:

  • reduced spatial clarity,

  • less stable imaging,

  • the score may blend less elegantly with dialogue and FX,

  • extra work required on the dub stage (and rarely enough time for it).

A re-recording mixer once told me this:

“If you’re not going to deliver an immersive premix, just send stereo stems - it’s easier to work with in Atmos than 5.1.”

That tells you everything. How significant this is will vary with the style of the score, but the underlying point remains.

2. Why Immersive Mixing Matters for the Soundtrack Album

Regardless of anyone’s personal feelings about Atmos for music:

Dolby Atmos matters for soundtrack albums in 2025.

Not in a hype-driven way—in a practical, business-driven way.

Here’s why:

  • Apple playlist placement

More likely to be added to Apple editorial playlists if a Dolby Atmos/Apple Spatial version is available.

  • Higher royalties for the artist/composer

Apple pays higher per-stream royalties to releases with an Atmos version, even for plays in stereo.

  • Labels prefer (or require) immersive

Many labels now strongly prefer Atmos deliverables, and some will only take a release if an Atmos version exists.
And if you are already mixing immersively for the film, then creating an Atmos album version is a zero-friction value add.

“But theatrical Atmos and Apple Spatial aren’t the same.”

Correct - there are significant differences.
But you’re already making small adjustments when creating the stereo master, and the incremental work for an album-ready immersive master is minimal.

3. The Practical Reality for Composers

From a time and workflow perspective:

Mixing immersively for the film and creating both stereo and immersive album masters generally takes no longer than mixing 5.1 for the film and delivering a stereo master—aside from the need to QC additional versions.

But the results are meaningfully better:

  • Greater clarity and impact in the cinema

  • A more emotionally engaging spatial mix

  • More attractive soundtrack deliverables for labels

  • Better discoverability and visibility on Apple Music

In short:
Immersive mixing future-proofs your score - creatively, technically, and commercially.
