How Orbit Renders ADM Audio
A high-level overview of the signal path from ADM file to the listener's ears.
What is ADM?
Orbit works with ADM BWF files — but ADM is often conflated with Dolby Atmos, so it’s worth a quick clarification.
The Audio Definition Model (ADM) is an open international standard (ITU-R BS.2076) for describing spatial audio scenes. ADM BWF is the file format: a standard Broadcast Wave file (EBU Tech 3285; long-form files use the 64-bit BW64 variant, ITU-R BS.2088) with embedded ADM metadata. These are open standards maintained by the ITU and EBU — not proprietary to any commercial platform.
Dolby Atmos adopted ADM BWF as its interchange and delivery format, so the master file you export from a Dolby Atmos session is a standard ADM BWF. But ADM exists independently and is used across broadcast, film, and music production well beyond Dolby’s ecosystem. Orbit works with the open standard directly — any ADM BWF can be opened, rendered, and inspected regardless of where it was created.
QC, Not Mixing
Orbit is a playback and quality control tool — not a mastering suite. It’s designed for what happens after the mix is done: verifying, monitoring, and reviewing ADM masters.
In a typical immersive audio workflow, the mix engineer authors the spatial mix inside a DAW and the final deliverable is an ADM BWF file — containing the audio alongside embedded metadata describing every bed, object, and automation move in the spatial scene.
The challenge is what happens next. Too often, the only way to share or review an immersive mix outside the mixing room is to bounce it to an MP4 or a binaural render. Even when the MP4 carries multichannel audio, the spatial metadata is gone — there’s no way to inspect individual objects, verify spatial positioning, check loudness on the actual master, or confirm that the ADM metadata is correct. The person reviewing the mix is listening to a baked-down render, not the real thing.
Orbit exists to solve this. It opens the ADM file directly, renders it in real time, and gives you the tools to actually inspect the spatial scene — object-by-object isolation, binaural monitoring with head tracking, loudness metering on the master itself, and a 3D visualiser showing exactly where everything is. No DAW, no renderer licence, no MP4 compromise.
The rest of this post walks through how that rendering works under the hood.
Why Does Orbit Sound Different to the Dolby Renderer?
If you compare Orbit’s output to the Dolby Atmos Renderer side by side, you may notice subtle differences. This is expected — and it’s not a problem.
The ADM standard (ITU-R BS.2076) defines what the spatial scene is — which objects exist, where they are, how they move — but it doesn’t mandate exactly how a renderer should turn that into speaker feeds. Different renderers can interpret the same ADM metadata using different panning algorithms, gain laws, and signal processing, and still be faithful to the spatial intent of the mix.
Dolby’s renderer uses proprietary rendering algorithms that aren’t publicly documented. Orbit uses its own rendering based on established, published techniques — primarily VBAP for object panning. Both read the same metadata; both aim to reproduce the same spatial scene. The differences are in the rendering detail, not in the spatial intent.
In practice, beds will sound identical — they’re routed directly to fixed speakers with no position-based processing. Object rendering is where subtle differences can appear, depending on how each renderer calculates speaker gains for a given position.
For QC purposes, this is the right trade-off. Orbit’s job is to let you verify that objects are where they should be, automation behaves correctly, loudness is within spec, and the spatial scene reads as intended. Small differences in panning gain curves don’t affect your ability to do that — just as a mix engineer can QC a mix on different speaker systems and still identify the same issues, even though the monitors themselves sound different.
The Engine — Designed for ADM Workflows
Orbit is built on a custom C++ audio engine designed specifically for professional ADM QC and monitoring. Rather than rely on a general-purpose spatial audio SDK, the engine handles all spatial rendering, binaural processing, loudness measurement, and real-time scheduling internally — giving tight control over accuracy, performance, and behaviour under load.
Why Not Use an Existing Renderer?
There are established ADM rendering tools — notably Dolby’s Atmos Renderer and the EBU ADM Renderer (EAR) — but they’re designed for different workflows. Dolby’s renderer is a content creation tool built for mixing and mastering inside a DAW. The EBU ADM Renderer is an open-source reference implementation of ITU-R BS.2127 — primarily an offline, file-based renderer written in Python, with related projects like BEAR and the EAR Production Suite offering real-time and DAW-integrated rendering.
These are valuable tools in their respective roles, but QC workflows have distinct requirements: fast transport control, instant switching between monitoring modes, per-object isolation and metering, head-tracked binaural monitoring, and integrated spatial visualisation — all working together in real time. Building a purpose-built engine means these features are tightly coupled by design rather than layered onto a renderer built for a different context.
The JUCE framework provides a thin platform abstraction layer for audio device access, file decoding, and basic system services. Everything above that — the ADM-aware rendering pipeline, spatial panning, binaural renderer, loudness meter, and all DSP — is Orbit’s own.
There’s also a forward-looking reason: we have plans for Orbit that require full control over the rendering engine. Owning the rendering and DSP layer gives us the freedom to take the product in directions that wouldn’t be possible on top of someone else’s renderer.
Real-Time Architecture
The engine is designed around a multi-stage, non-blocking pipeline to ensure stable, glitch-free audio output. Audio is prepared ahead of playback where possible, with rendering work separated from the system audio callback. Pre-rendered audio blocks are passed between stages using lock-free mechanisms, so the output path never waits on blocking operations or dynamic allocation.
This keeps latency low and output reliable, even when working with large ADM masters containing many simultaneous objects and complex automation.
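The handoff between stages can be sketched as a single-producer / single-consumer ring buffer. Orbit's engine is C++ with atomic indices; the Python model below is purely illustrative (class and method names are invented for the sketch), but it shows why no locks are needed: the render thread only ever advances `head`, and the audio callback only ever advances `tail`.

```python
# Illustrative single-producer / single-consumer ring buffer, the kind of
# lock-free structure used to hand pre-rendered audio blocks to the output
# callback. In C++ the indices would be std::atomic with release/acquire
# ordering; the logic is the same.

class SpscRing:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0   # written only by the producer (render thread)
        self.tail = 0   # written only by the consumer (audio callback)

    def push(self, block):
        nxt = (self.head + 1) % len(self.buf)
        if nxt == self.tail:          # full: drop rather than block
            return False
        self.buf[self.head] = block
        self.head = nxt               # publish only after the write
        return True

    def pop(self):
        if self.tail == self.head:    # empty: the callback outputs silence
            return None
        block = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        return block
```

Because neither side ever waits on the other, a slow disk read or a burst of rendering work can never stall the audio callback; the worst case is a dropped or silent block.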
Reading the ADM
Orbit works directly with ADM BWF files (Broadcast Wave Format with embedded Audio Definition Model metadata, per ITU-R BS.2076 and EBU Tech 3285). These are standard .wav files containing embedded axml and chna chunks that describe the spatial audio scene.
On load, the engine parses the ADM metadata to reconstruct the spatial scene:
- Programme structure — title, duration, sample rate, bit depth, loudness metadata
- Bed channels — fixed-position audio tied to specific speakers in the 7.1.4 layout
- Object channels — audio sources with time-varying position, gain, and width automation
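The first step, finding the metadata inside the container, can be sketched as a plain RIFF chunk walk. This Python sketch is illustrative only: a production reader must also resolve 64-bit chunk sizes from the ds64 chunk in RF64/BW64 files, which is omitted here.

```python
import io
import struct

def find_adm_chunks(f):
    """Walk the RIFF chunk list of a BWF file and return the raw bytes of
    the axml (ADM XML) and chna (channel allocation) chunks, if present.
    Sketch only: 64-bit RF64/BW64 chunk sizes are not handled."""
    riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
    assert riff in (b"RIFF", b"RF64", b"BW64") and wave == b"WAVE"
    found = {}
    while True:
        header = f.read(8)
        if len(header) < 8:
            break
        cid, csize = struct.unpack("<4sI", header)
        data = f.read(csize + (csize & 1))   # chunks are word-aligned
        if cid in (b"axml", b"chna"):
            found[cid.decode()] = data[:csize]
    return found
```

The axml chunk holds the XML scene description; the chna chunk maps audio tracks in the file to the ADM elements that describe them.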
Object automation is stored sparsely in ADM files as keyframes at irregular intervals. During playback, Orbit interpolates this data to produce smooth, continuous motion, avoiding audible artefacts even when keyframes are widely spaced.
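The interpolation step can be sketched in a few lines. This is an illustrative Python version of the idea (linear interpolation between sparse keyframes, holding the endpoints), not Orbit's production code, which evaluates positions per audio block and handles ADM details such as jump-position flags.

```python
import bisect

def position_at(keyframes, t):
    """Linearly interpolate an object's (x, y, z) position at time t from
    sparse keyframes, given as a time-sorted list of (time, (x, y, z))."""
    times = [k[0] for k in keyframes]
    i = bisect.bisect_right(times, t)
    if i == 0:
        return keyframes[0][1]      # before the first keyframe: hold
    if i == len(keyframes):
        return keyframes[-1][1]     # after the last keyframe: hold
    t0, p0 = keyframes[i - 1]
    t1, p1 = keyframes[i]
    a = (t - t0) / (t1 - t0)
    return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))
```

Evaluating this continuously during playback turns a handful of widely spaced keyframes into smooth motion rather than a series of audible jumps.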
Streaming Playback
By default, audio is read incrementally from disk using background read-ahead rather than loading entire files into memory. Only a sliding window around the playback position is buffered at any given time, keeping memory usage predictable even for long-form or high-channel-count ADM masters.
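The sliding window can be sketched as a function from playback position to the set of file blocks that should be resident in memory. The window sizes below are illustrative parameters, not Orbit's actual tuning; the point is that read-ahead is larger than history, and memory use is bounded regardless of file length.

```python
def resident_blocks(playhead_block, total_blocks, behind=4, ahead=32):
    """Return the range of file blocks to keep buffered around the
    playhead: a little history for scrubbing, more read-ahead for
    glitch-free playback. Blocks outside this range can be evicted."""
    start = max(0, playhead_block - behind)
    end = min(total_blocks, playhead_block + ahead + 1)
    return range(start, end)
```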
Beds — Fixed Speaker Content
Bed channels represent audio that is pre-mixed to fixed speaker positions in the 7.1.4 layout — typically ambience, music stems, or dialogue locked to a screen position.
Orbit recognises a range of common speaker labelling conventions (ADM standard labels, Dolby room-centric labels, and human-readable names) and maps each bed channel to the correct output in the 7.1.4 array. In the default routing mode, beds are sent directly to their assigned speaker — a one-to-one mapping. LFE content is always routed directly to the subwoofer channel.
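Label normalisation amounts to a lookup table from every known alias to a canonical channel. The alias strings below are illustrative examples of the conventions involved (ITU/ADM "M+030"-style labels, Dolby-style short names, human-readable names), not Orbit's actual tables.

```python
# Illustrative alias tables for a few 7.1.4 channels; the remaining
# channels follow the same pattern.
ALIASES = {
    "L":   {"M+030", "L", "Left"},
    "R":   {"M-030", "R", "Right"},
    "C":   {"M+000", "C", "Centre", "Center"},
    "LFE": {"LFE1", "LFE", "Low Frequency Effects"},
}

def canonical_channel(label):
    """Map any recognised speaker label to a canonical channel name."""
    for canon, names in ALIASES.items():
        if label in names:
            return canon
    raise KeyError(f"unrecognised speaker label: {label!r}")
```

Once every bed channel resolves to a canonical name, routing is a one-to-one assignment of that name to an output in the 7.1.4 array.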
Objects — Dynamic Spatial Rendering
Audio objects are the dynamic elements of an immersive mix — dialogue, sound effects, Foley — that can move freely in 3D space. Orbit renders objects using Vector Base Amplitude Panning (VBAP), a well-established spatial audio technique (Pulkki, 1997).
Each object’s position is converted into a set of speaker gains based on its direction in 3D space. Speakers closer to the object receive more energy, while distant speakers receive less, with gains normalised to preserve overall energy.
Elevation is handled by smoothly blending between the ear-level speaker layer and the overhead layer. Objects at intermediate heights are distributed across both layers to maintain a coherent vertical image. The LFE channel is excluded from position-based panning and only receives content explicitly authored for it.
To ensure perceptually smooth motion, gain changes are continuously interpolated so that objects can move freely without clicks or abrupt shifts.
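The three ideas above — position-to-gain panning, layer blending, and gain smoothing — can each be sketched in a few lines. This is an illustrative Python version of the published techniques named in this section (2-D VBAP for a single speaker pair, and energy-preserving crossfades), not Orbit's production code, which solves the same system for 3-D speaker triplets; the 45-degree top-layer elevation is an assumption for the sketch.

```python
import math

def vbap_pair_gains(src_az, spk1_az, spk2_az):
    """2-D VBAP for one speaker pair (Pulkki, 1997): solve
    g1*l1 + g2*l2 = p for the gains, then normalise so g1^2 + g2^2 = 1
    (constant energy). Angles in degrees."""
    def unit(az_deg):
        r = math.radians(az_deg)
        return (math.cos(r), math.sin(r))
    p, l1, l2 = unit(src_az), unit(spk1_az), unit(spk2_az)
    det = l1[0] * l2[1] - l1[1] * l2[0]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm

def layer_weights(elevation_deg, top_elevation_deg=45.0):
    """Energy-preserving crossfade between the ear-level and overhead
    layers for an object at an intermediate height (sketch)."""
    t = max(0.0, min(1.0, elevation_deg / top_elevation_deg))
    theta = t * math.pi / 2.0
    return math.cos(theta), math.sin(theta)  # (main, top): cos^2 + sin^2 = 1

def ramp_gains(block, g_start, g_end):
    """Linearly interpolate a gain change across one audio block so a
    moving object never produces a stepped discontinuity (len >= 2)."""
    n = len(block) - 1
    return [s * (g_start + (g_end - g_start) * i / n)
            for i, s in enumerate(block)]
```

A source dead centre between a symmetric pair lands at equal gains of 1/√2 in each speaker, which is exactly the constant-energy behaviour described above.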
A Speaker-First Signal Flow
All rendering in Orbit follows a speaker-first architecture. Regardless of the final monitoring format, every bed and object is first mixed into a virtual 12-channel 7.1.4 speaker layout:
ADM File
  │
  ├─ Metadata ─► ADM Parser ─► Position / Gain Automation
  │                                       │
  │                           ┌───────────┴───────────┐
  │                           ▼                       ▼
  ├─ Beds ────► Direct speaker routing ──┐   3D Spatial Visualiser
  │                                      ├──► 7.1.4 Speaker Mix
  └─ Objects ─► VBAP spatial panning ────┘            │
                                                      │
                  ┌───────────────────────────────────┼─────────────────┐
                  ▼                                   ▼                 ▼
           7.1.4 Speakers                     Binaural (HRTF)    Stereo Fold-Down
            (direct out)                       (headphones)        (2-channel)
                                                      ▲
                                                      │
                                                Head Tracking
                                                 (optional)
This mirrors how immersive content is typically authored, and ensures that spatial intent is preserved regardless of how the mix is monitored. The spatial rendering is done once, and all monitoring modes derive from the same speaker feeds.
Monitoring Modes
7.1.4 Speakers
The internal speaker mix is routed directly to physical outputs, with support for standard 7.1.4 channel orders. Per-group gain, mute, and solo controls allow focused QC of specific speaker groups (Main L/R, Centre, LFE, Surrounds, Heights).
Binaural (Headphones)
For headphone monitoring, the virtual speaker feeds are rendered to binaural stereo using Head-Related Transfer Functions (HRTFs) — direction-specific filters that recreate the spatial cues of a real speaker setup over headphones.
Each virtual speaker’s audio is convolved with an HRTF filter pair (one for each ear) and summed to produce the final stereo output. Orbit uses a dense HRTF grid with interpolation to ensure smooth spatial transitions across all directions.
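The convolve-and-sum step can be sketched directly. The Python below is illustrative (direct-form convolution on short lists; real engines use partitioned FFT convolution for long HRIRs, and the HRIR data here stands in for measured HRTFs).

```python
def convolve(signal, kernel):
    """Direct-form FIR convolution (sketch)."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

def binauralise(speaker_feeds, hrirs):
    """Render virtual speaker feeds to binaural stereo: convolve each
    feed with that speaker's (left_ir, right_ir) HRIR pair and sum.
    Assumes hrirs[i] corresponds to speaker_feeds[i]."""
    length = max(len(f) for f in speaker_feeds) \
        + max(len(h[0]) for h in hrirs) - 1
    left = [0.0] * length
    right = [0.0] * length
    for feed, (h_l, h_r) in zip(speaker_feeds, hrirs):
        for i, v in enumerate(convolve(feed, h_l)):
            left[i] += v
        for i, v in enumerate(convolve(feed, h_r)):
            right[i] += v
    return left, right
```

Each virtual speaker contributes to both ears, so the summed stereo output carries the interaural time and level differences the brain uses to localise sound.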
Custom HRTFs can be loaded using SOFA files (AES69), the open standard for HRTF data — allowing a more accurate spatial impression tailored to individual ear geometry.
When head tracking is available, speaker directions are adjusted relative to the listener’s orientation so the virtual soundstage remains stable as the listener moves — significantly improving externalisation and spatial accuracy.
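At its simplest, the adjustment is a counter-rotation of the virtual speaker directions by the listener's orientation. A yaw-only Python sketch (full tracking also applies pitch and roll; the degree convention here, positive to the left, is an assumption for the sketch):

```python
def rotate_azimuths(speaker_azimuths, head_yaw):
    """Counter-rotate virtual speaker azimuths (degrees) by the head yaw
    so the soundstage stays fixed in the room as the head turns.
    Results are wrapped into the range [-180, 180)."""
    return [((az - head_yaw + 180.0) % 360.0) - 180.0
            for az in speaker_azimuths]
```

Turning the head 30 degrees towards the left speaker moves that speaker to dead ahead in the binaural render, which is exactly what keeps the virtual room stationary.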
Stereo Fold-Down
For stereo monitoring, the speaker feeds are folded down into a two-channel image based on each speaker’s spatial position, with attenuation applied to prevent clipping from the summation of multiple channels.
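A position-based fold-down can be sketched as one constant-power pan per speaker plus a fixed attenuation. The coefficients below are illustrative, not Orbit's exact downmix law.

```python
import math

def fold_down_gains(speaker_azimuths, att=1 / math.sqrt(2)):
    """Per-speaker (left, right) stereo gains from each speaker's
    azimuth (degrees, positive to the left). Each speaker is panned by
    its horizontal angle with a constant-power law; `att` attenuates
    every contribution to leave summation headroom."""
    gains = []
    for az in speaker_azimuths:
        pan = max(-1.0, min(1.0, az / 90.0))         # -1 right .. +1 left
        theta = (pan + 1.0) * math.pi / 4.0          # 0 .. pi/2
        g_r, g_l = math.cos(theta), math.sin(theta)  # constant power
        gains.append((att * g_l, att * g_r))
    return gains
```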
Loudness and Metering
Orbit includes loudness measurement compliant with ITU-R BS.1770-4, the international broadcast standard. The meter has been cross-validated against ffmpeg/libebur128: integrated loudness agrees within 0.05 dB across all test signals, with differences of 0.03–0.17 dB on real Dolby Atmos production content. The meter provides:
- Momentary loudness (M) — 400ms sliding window
- Short-term loudness (S) — 3-second sliding window
- Integrated loudness (I) — gated measurement per the standard
- True Peak (TP) — 4x oversampled polyphase FIR for per-channel true peak detection
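The gating behind the integrated measurement can be sketched from the standard. This Python sketch assumes the K-weighting filters and 75%-overlap blocking have already been applied, and takes the channel-weighted mean-square power of each 400 ms block as input.

```python
import math

def block_loudness(power):
    """Loudness of one K-weighted block, per BS.1770: -0.691 + 10*log10."""
    return -0.691 + 10.0 * math.log10(power)

def integrated_loudness(block_powers):
    """Gated integrated loudness per ITU-R BS.1770-4 (sketch). Blocks
    below the absolute gate (-70 LUFS) are discarded; a relative gate is
    then set 10 LU below the loudness of what remains, and the final
    value averages the powers of the blocks that pass both gates."""
    abs_gated = [p for p in block_powers if block_loudness(p) > -70.0]
    if not abs_gated:
        return float("-inf")
    rel_threshold = block_loudness(sum(abs_gated) / len(abs_gated)) - 10.0
    final = [p for p in abs_gated if block_loudness(p) > rel_threshold]
    if not final:
        return float("-inf")
    return block_loudness(sum(final) / len(final))
```

The gating is why long silences or quiet passages don't drag the integrated figure down: they simply never enter the average.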
Additional meters provide per-speaker peak levels and per-object activity, giving clear insight into mix balance and object behaviour during playback.
3D Spatial Visualiser
Orbit includes a real-time 3D visualiser that shows audio objects moving through a virtual room during playback, giving engineers an immediate visual read on the spatial intent of the mix.
Each audio object appears as a numbered sphere positioned in 3D space according to its ADM coordinates. Objects are audio-reactive — colour intensity and size respond to audio level, making it easy to see which elements are active in the mix. Objects with an ADM width parameter display a translucent outer sphere showing their spatial extent.
The visual environment includes orientation cues, configurable room proportions, and an interactive camera to inspect the spatial layout from any angle. The visualiser is driven by the same spatial metadata as the audio engine, providing an accurate visual reference without influencing the audio path.
Built on Open Standards
- ITU-R BS.2076 (ADM) — Audio Definition Model, the spatial metadata format at the heart of every ADM file
- EBU Tech 3285 (BWF) — Broadcast Wave Format container for ADM audio and metadata
- VBAP (Pulkki, 1997) — Vector Base Amplitude Panning for object spatialisation
- HRTF / Binaural — Head-Related Transfer Function rendering for headphone monitoring
- AES69 (SOFA) — Spatially Oriented Format for Acoustics, for loading custom HRTFs
- ITU-R BS.1770-4 — International broadcast standard for loudness measurement
Orbit is built for professional QC of ADM masters, designed to faithfully reproduce the spatial intent of the mix across speaker, headphone, and stereo monitoring environments.
Want to hear it for yourself? Orbit is available as a free 14-day trial — no credit card required. Open your own ADM masters, explore the 3D visualiser, and hear the binaural renderer on your headphones. Start your free trial.