Protocols & Specifications: Glossary of Terms
HDMI eARC (enhanced Audio Return Channel) is an upgraded audio return path defined in the HDMI 2.1 specification, designed to transport high-bitrate audio from a display back to an audio system.
From a technical perspective, eARC replaces the bandwidth-limited ARC channel with a dedicated data path capable of carrying multichannel LPCM and lossless bitstreams such as Dolby TrueHD, DTS-HD Master Audio, and object-based audio like Dolby Atmos, without lossy re-encoding. It also introduces mandatory lip-sync correction and improved device discovery.
In AVR and sound system architectures, eARC fundamentally changes signal flow. The display can become the primary source hub while still delivering full-quality audio downstream to the receiver or amplifier.
For high-end home theaters, eARC is critical. Without it, modern streaming apps built into TVs cannot deliver reference-grade immersive audio to external audio systems.
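As a rough illustration, the arithmetic below estimates the payload rate of high-resolution multichannel LPCM, which is approximately what eARC was sized to carry; the channel count and sample rate are illustrative assumptions, and the quoted link capacities are approximate.

```python
# Rough data-rate check: why lossless multichannel audio fits eARC but not ARC.
# Legacy ARC is limited to roughly S/PDIF-class bandwidth (a few Mbps), while
# eARC provides on the order of 37 Mbps.

def lpcm_rate_mbps(channels: int, sample_rate_hz: int, bit_depth: int) -> float:
    """Raw LPCM payload rate in megabits per second."""
    return channels * sample_rate_hz * bit_depth / 1e6

rate = lpcm_rate_mbps(channels=8, sample_rate_hz=192_000, bit_depth=24)
print(f"8ch / 192 kHz / 24-bit LPCM ~= {rate:.1f} Mbps")  # ~36.9 Mbps
# This comfortably exceeds what ARC can carry, which is why ARC is restricted
# to lossy formats such as Dolby Digital.
```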
EDID (Extended Display Identification Data) is a data structure used by displays to communicate their capabilities to source devices over digital interfaces such as HDMI and DisplayPort.
Technically, EDID defines supported resolutions, refresh rates, color formats, HDR capabilities, and audio formats. It is exchanged during the HDMI handshake and directly influences how sources output audio and video signals.
In AV receivers and signal chains, EDID management is one of the most common sources of system instability. Mismatched or poorly propagated EDID data can lead to incorrect resolutions, disabled HDR, or loss of immersive audio formats.
In advanced AV system design, EDID is not a passive detail—it is an active control point. Proper EDID handling ensures predictable interoperability across complex source–receiver–display chains.
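A minimal sketch of how a source might sanity-check an EDID base block, following the published EDID 1.x layout (8-byte header, packed three-letter manufacturer ID, and a checksum byte that zeroes the 128-byte sum); the function name is ours.

```python
# Validate an EDID base block and decode its manufacturer ID.

EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def parse_edid_base(block: bytes) -> str:
    """Return the three-letter manufacturer ID of a valid 128-byte EDID block."""
    if len(block) != 128:
        raise ValueError("EDID base block must be 128 bytes")
    if block[:8] != EDID_HEADER:
        raise ValueError("bad EDID header")
    if sum(block) % 256 != 0:
        raise ValueError("EDID checksum failed")
    # Bytes 8-9: manufacturer ID, three 5-bit letters packed big-endian (1 = 'A').
    word = (block[8] << 8) | block[9]
    letters = [(word >> shift) & 0x1F for shift in (10, 5, 0)]
    return "".join(chr(ord("A") - 1 + code) for code in letters)
```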
CEC (Consumer Electronics Control) is an HDMI feature that allows connected devices to control each other over a shared control bus.
From a technical standpoint, CEC operates as a low-speed, single-wire communication channel embedded within HDMI connections. It enables commands such as power on/off, input switching, and volume control across devices from different manufacturers.
In AV receivers and smart home systems, CEC can simplify user interaction by reducing the need for multiple remotes. However, inconsistent implementations often lead to unpredictable behavior.
For premium AV installations, CEC is a double-edged sword. When carefully managed, it enhances usability; when left uncontrolled, it can undermine system stability and user confidence.
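For intuition, the sketch below assembles a CEC frame by hand: a header byte packing the 4-bit initiator and destination logical addresses, followed by an opcode. The address and opcode constants follow the CEC tables (TV = 0x0, Audio System = 0x5, broadcast = 0xF, <Standby> = 0x36), but this is an illustrative sketch, not a driver implementation.

```python
# Build a minimal two-byte CEC frame: header block + opcode.

TV, AUDIO_SYSTEM, BROADCAST = 0x0, 0x5, 0xF
OPCODE_STANDBY = 0x36

def cec_frame(initiator: int, destination: int, opcode: int) -> bytes:
    # High nibble = initiator address, low nibble = destination address.
    header = ((initiator & 0xF) << 4) | (destination & 0xF)
    return bytes([header, opcode])

# TV telling every device on the bus to enter standby:
print(cec_frame(TV, BROADCAST, OPCODE_STANDBY).hex())  # "0f36"
```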
Dolby TrueHD is a lossless audio codec developed by Dolby Laboratories for high-fidelity, multi-channel audio reproduction.
Technically, TrueHD preserves the full resolution of studio master recordings using advanced lossless compression. It supports high channel counts and serves as the core carrier for Dolby Atmos metadata in disc-based media.
In AV receivers, Dolby TrueHD decoding requires sufficient processing power, bandwidth, and correct HDMI configuration. Bitstream integrity is essential to preserve lossless playback.
For high-end home theaters, Dolby TrueHD represents the benchmark for uncompromised audio quality. It delivers maximum dynamic range, clarity, and spatial accuracy when paired with properly configured systems.
eAC-3 (Enhanced AC-3), commonly known as Dolby Digital Plus, is a lossy audio codec optimized for streaming and broadcast environments.
From a technical perspective, eAC-3 improves upon legacy AC-3 by supporting higher bitrates, more channels, and optional Dolby Atmos metadata. It balances efficiency with perceptual quality.
In streaming platforms and smart TVs, eAC-3 is the dominant format for delivering surround sound and Atmos over limited bandwidth connections.
For modern AV systems, understanding eAC-3 is essential. While not lossless, it enables immersive audio delivery at scale and explains why streaming Atmos behaves differently from disc-based Atmos.
HDR (High Dynamic Range) is a display and signal framework designed to represent a wider range of luminance and color than traditional SDR systems.
Technically, HDR expands peak brightness, shadow detail, and color depth by combining higher bit depth, wider color primaries, and perceptual transfer functions (PQ or HLG). HDR is not a single format but a family of standards that share these goals.
In AV receivers and signal chains, HDR requires end-to-end compatibility. Sources, cables, processors, and displays must all support the required signaling, metadata, and color spaces to preserve HDR intent.
For high-end home theaters, HDR is foundational. Without correct HDR handling, even the best displays cannot reproduce modern cinematic content accurately.
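For concreteness, the PQ (SMPTE ST 2084) transfer function mentioned above can be written out directly; the constants come from the published specification, and the function maps absolute luminance onto the 0-1 signal range.

```python
# PQ inverse EOTF: absolute luminance (0-10000 nits) -> nonlinear signal [0, 1].

M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    y = max(nits, 0.0) / 10000.0
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

for level in (0.1, 100, 1000, 10000):
    print(f"{level:>7} nits -> {pq_encode(level):.3f}")
# Note how 100 nits (SDR reference white) already lands near signal level 0.5:
# PQ allocates code values perceptually, not linearly.
```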
HDR10 is an open HDR standard widely adopted across UHD Blu-ray, streaming services, and consumer displays.
From a technical perspective, HDR10 uses static metadata (MaxCLL and MaxFALL) to describe content brightness characteristics. It relies on the PQ (Perceptual Quantizer) transfer function and BT.2020 color container with 10-bit depth.
In AV receivers, HDR10 is the baseline HDR format that must be correctly passed through without alteration. Static metadata means tone mapping decisions are primarily handled by the display.
For premium AV systems, HDR10 provides broad compatibility. However, its static nature limits scene-by-scene optimization compared to dynamic HDR formats.
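The sketch below illustrates the static metadata set an HDR10 title carries once for the whole stream: the content light levels (MaxCLL/MaxFALL) plus the mastering display description defined by SMPTE ST 2086. The field names and example values here are illustrative, not the API of any particular library.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HDR10StaticMetadata:
    max_cll_nits: int         # brightest single pixel anywhere in the content
    max_fall_nits: int        # brightest frame-average light level
    mastering_peak_nits: int  # peak luminance of the mastering display
    mastering_min_nits: float # black level of the mastering display

# Plausible values for a title graded on a 1000-nit monitor (illustrative):
meta = HDR10StaticMetadata(max_cll_nits=1000, max_fall_nits=400,
                           mastering_peak_nits=1000, mastering_min_nits=0.005)
print(meta)
```

Because these values never change during playback, a display must pick one tone-mapping strategy for the entire title, which is exactly the limitation dynamic formats address.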
HDR10+ is an extension of HDR10 that introduces dynamic metadata to improve tone mapping accuracy on a per-scene or per-frame basis.
Technically, HDR10+ transmits dynamic metadata alongside the video signal, allowing displays to adjust brightness and contrast dynamically as content changes. It retains the open, royalty-free model while enhancing performance.
In AV signal chains, HDR10+ requires correct metadata passthrough and display support. Receivers must avoid stripping or misinterpreting the dynamic metadata.
For high-end displays that support HDR10+, the format delivers more consistent brightness and detail across varied scenes, narrowing the gap with proprietary dynamic HDR solutions.
Dolby Vision is a proprietary HDR format that uses dynamic metadata and Dolby-managed content pipelines to optimize visual reproduction.
From a technical standpoint, Dolby Vision supports higher bit depths, scene-level metadata, and sophisticated tone mapping algorithms. Dolby Vision content is mastered with detailed instructions for display rendering.
In AV receivers and playback devices, Dolby Vision requires licensed decoding and precise signal handling. Incorrect implementation can result in fallback to HDR10 or visual artifacts.
For premium home theaters, Dolby Vision represents the highest level of HDR fidelity. It closely preserves creative intent from studio mastering to final display output.
HLG (Hybrid Log-Gamma) is an HDR standard developed for broadcast environments by the BBC and NHK.
Technically, HLG combines SDR and HDR information into a single signal without metadata. It uses a hybrid transfer curve (gamma-like at low levels, logarithmic in the highlights) rather than PQ, making it backward-compatible with SDR displays.
In AV receivers and TVs, HLG is commonly used for live broadcasts and streaming events. It simplifies distribution but offers less precise control than metadata-driven HDR formats.
For modern AV systems, HLG is essential for broadcast HDR compatibility. It ensures that live HDR content remains accessible across diverse display capabilities.
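The hybrid curve that gives HLG its name can be written out directly from ITU-R BT.2100: a square-root segment for the lower range and a logarithmic segment for highlights, with constants from the standard.

```python
import math

# HLG OETF: normalized scene light [0, 1] -> nonlinear signal [0, 1].
A = 0.17883277
B = 1 - 4 * A
C = 0.5 - A * math.log(4 * A)

def hlg_oetf(e: float) -> float:
    if e <= 1 / 12:
        return math.sqrt(3 * e)          # gamma-like (square-root) segment
    return A * math.log(12 * e - B) + C  # logarithmic highlight segment

for scene in (0.0, 1 / 12, 0.5, 1.0):
    print(f"{scene:.4f} -> {hlg_oetf(scene):.4f}")
# The lower segment matches conventional gamma closely enough that an SDR
# display can show the signal directly, which is the backward-compatibility
# trick described above.
```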
VRR (Variable Refresh Rate) is a display protocol that allows the screen’s refresh rate to dynamically match the frame rate output of a source device.
From a technical perspective, VRR eliminates frame tearing and reduces stutter by synchronizing display refresh cycles with real-time rendering output. HDMI VRR is part of the HDMI 2.1 feature set and operates over the high-bandwidth link.
In AVR signal chains, VRR introduces strict timing and pass-through requirements. Any device in the chain that mishandles VRR signaling can break synchronization, forcing the system to fall back to fixed refresh rates.
For gaming-focused home theaters, VRR is essential for smooth motion and responsiveness. Proper AVR support ensures that VRR signals pass transparently from console or PC to the display.
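As a quick illustration, the check below converts frame rates to frame times and tests them against a display's VRR operating range; the 48 to 120 Hz window is a common panel specification used here purely as an assumption.

```python
# Does a given frame rate fall inside an assumed 48-120 Hz VRR window?

def in_vrr_window(fps: float, lo_hz: float = 48.0, hi_hz: float = 120.0) -> bool:
    return lo_hz <= fps <= hi_hz

for fps in (30, 48, 57.3, 120, 144):
    frame_ms = 1000.0 / fps
    status = "inside" if in_vrr_window(fps) else "outside"
    print(f"{fps:>5} fps ({frame_ms:5.2f} ms/frame): {status} the VRR window")
```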
ALLM (Auto Low Latency Mode) is an HDMI feature that automatically switches a display into its lowest-latency mode when a compatible source is detected.
Technically, ALLM uses HDMI signaling to notify the display that latency-sensitive content—such as gaming—is active. The display then disables non-essential processing like motion interpolation.
In AV receivers, ALLM must be correctly forwarded without interference. Improper handling can prevent displays from entering game mode, increasing input lag.
For premium gaming setups, ALLM removes the need for manual mode switching. It ensures that latency-sensitive experiences remain fast and responsive without user intervention.
QMS (Quick Media Switching) is an HDMI 2.1 feature designed to eliminate black-screen delays when switching between content with different frame rates.
From a technical standpoint, QMS leverages VRR capabilities to adjust refresh rates seamlessly without reinitializing the HDMI link. This avoids HDMI resync interruptions.
In AVR-based systems, QMS requires end-to-end support. Both the source and display must support VRR, and the AVR must pass the signaling transparently.
For high-end media systems, QMS improves usability rather than picture quality. It creates a smoother, interruption-free viewing experience when navigating modern streaming content.
FRL (Fixed Rate Link) is the high-bandwidth transmission mode introduced with HDMI 2.1 to support resolutions and frame rates beyond HDMI 2.0 capabilities.
Technically, FRL replaces legacy TMDS signaling whenever the required bandwidth exceeds TMDS limits; lower rates still fall back to TMDS. It enables the higher data rates required for 4K/120 Hz, 8K, HDR, and advanced color formats by using multiple data lanes and forward error correction.
In AV receivers, FRL support is a hardware-dependent capability. Chipset limitations, PCB design, and firmware stability all influence real-world performance.
For advanced home theaters and gaming systems, FRL is foundational. Without stable FRL operation, next-generation video formats cannot be reliably delivered.
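For reference, the sketch below tabulates the six FRL lane/rate combinations from the HDMI 2.1 specification and estimates usable capacity after 16b/18b line coding; FEC and packetization overhead are ignored for simplicity.

```python
# FRL link capacity per negotiated mode: lanes x per-lane rate,
# reduced by 16b/18b line coding.

FRL_MODES = {  # mode: (lanes, Gbps per lane)
    1: (3, 3), 2: (3, 6), 3: (4, 6), 4: (4, 8), 5: (4, 10), 6: (4, 12),
}

for mode, (lanes, rate) in FRL_MODES.items():
    raw = lanes * rate
    payload = raw * 16 / 18  # 16b/18b coding efficiency
    print(f"FRL{mode}: {lanes} lanes x {rate} Gbps = {raw} Gbps raw, "
          f"~{payload:.1f} Gbps after line coding")
```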
TMDS (Transition-Minimized Differential Signaling) is the legacy signaling method used by HDMI up to version 2.0.
From a technical perspective, TMDS transmits video data using three differential data channels and a clock channel. While reliable, it has strict bandwidth limits that cap resolution and frame rate combinations.
In AVR signal chains, understanding TMDS is essential when diagnosing compatibility issues. Many HDMI problems arise from mismatched expectations between TMDS and FRL capabilities.
For modern AV systems, TMDS represents backward compatibility rather than future scalability. Recognizing its limitations helps explain why HDMI 2.1 hardware upgrades are often unavoidable.
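A worked example shows why 4K/60 sits at the very edge of TMDS: the standard CTA-861 timing for 3840x2160 at 60 Hz uses a 594 MHz pixel clock against TMDS's 600 MHz ceiling.

```python
# TMDS bandwidth check for 4K/60 with 8-bit RGB.

H_TOTAL, V_TOTAL, REFRESH = 4400, 2250, 60    # 4K/60 timing incl. blanking
pixel_clock_mhz = H_TOTAL * V_TOTAL * REFRESH / 1e6
data_gbps = pixel_clock_mhz * 1e6 * 24 / 1e9  # 24 bits per pixel payload
line_gbps = data_gbps * 10 / 8                # after 8b/10b coding

print(f"pixel clock: {pixel_clock_mhz:.0f} MHz")  # 594 MHz of a 600 MHz limit
print(f"payload: {data_gbps:.2f} Gbps, on-wire: {line_gbps:.2f} Gbps")
# Anything beyond this (10-bit color, 4:4:4 HDR, 120 Hz) exceeds TMDS
# capacity and requires FRL.
```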
BT.709, also known as Rec. 709, is the color standard developed by ITU-R for high-definition television (HDTV).
From a technical perspective, BT.709 defines the color primaries, white point, and transfer characteristics used for SDR HD content. It was designed around the capabilities of CRT displays and remains the baseline for broadcast television and most legacy HD material.
In AV receivers and displays, BT.709 content must be mapped accurately without unnecessary expansion or saturation. Improper color conversion can lead to unnatural skin tones and color clipping.
For modern AV systems, BT.709 represents the reference point. Understanding it is essential for correct upscaling, tone mapping, and color management when mixing SDR and HDR content.
BT.2020, or Rec. 2020, is the color space standard developed for UHD and HDR video systems.
Technically, BT.2020 defines extremely wide color primaries that exceed the capabilities of most consumer displays. It serves as a container rather than a guarantee that all colors are physically reproducible.
In AV receivers and signal chains, BT.2020 signaling indicates that content may contain wide-gamut color information. Displays must perform accurate gamut mapping to render this content correctly.
For premium home theaters, BT.2020 is foundational to HDR and future display technologies. Correct handling ensures vivid color reproduction without distortion or oversaturation.
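As a rough comparison of the two standards, the shoelace formula below measures the area each set of published primaries encloses in the CIE 1931 xy plane; xy area is only a crude proxy for real-world coverage.

```python
# Compare BT.709 and BT.2020 gamut triangles in the CIE 1931 xy plane.

BT709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
BT2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

def triangle_area(p):
    (x1, y1), (x2, y2), (x3, y3) = p
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

ratio = triangle_area(BT2020) / triangle_area(BT709)
print(f"BT.2020 / BT.709 xy area ratio: {ratio:.2f}")  # ~1.9x
# Caveat: xy area ignores luminance (see color volume below) and the
# perceptual non-uniformity of the xy plane.
```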
Color volume is the three-dimensional range of color a system can reproduce, spanning hue, saturation, and luminance.
From an engineering standpoint, color volume goes beyond color gamut by accounting for how brightness affects color reproduction. A display may cover a wide color gamut but still lose saturation at high brightness levels.
In HDR systems, color volume is a critical performance metric. Displays with higher peak brightness and better color stability can reproduce more of the intended visual information.
For high-end AV displays, color volume explains real-world differences that simple gamut percentages cannot. It directly impacts perceived realism and depth.
Chroma subsampling is a compression technique that reduces color resolution relative to luminance to save bandwidth.
Technically, the human visual system is more sensitive to brightness detail than color detail. Chroma subsampling exploits this by transmitting fewer color samples while preserving perceived image quality.
In AV receivers and HDMI signal chains, chroma subsampling determines bandwidth requirements and affects text clarity, fine edges, and UI sharpness.
For advanced AV systems, understanding chroma subsampling is essential when configuring resolutions, frame rates, and HDR settings—especially for gaming and PC use.
4:4:4, 4:2:2, and 4:2:0 are chroma subsampling formats that describe how color information is distributed across pixels.
From a technical perspective:
- 4:4:4 preserves full color information for every pixel.
- 4:2:2 halves horizontal color resolution.
- 4:2:0 halves both horizontal and vertical color resolution.
In AV receivers and displays, support for these formats affects compatibility with HDR, high frame rates, and gaming modes. Bandwidth limitations often force trade-offs between resolution, refresh rate, and chroma fidelity.
For premium AV setups, choosing the correct chroma format is critical. It balances sharpness, color accuracy, and signal stability based on use case.
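The bandwidth implications follow from simple arithmetic: each format's average samples per pixel, multiplied by bit depth.

```python
# Average bits per pixel for each chroma format. 4:4:4 carries 3 samples per
# pixel, 4:2:2 averages 2 (Y every pixel, Cb/Cr every other pixel), and
# 4:2:0 averages 1.5, the source of its roughly 50% bandwidth saving.

SAMPLES_PER_PIXEL = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}

def bits_per_pixel(fmt: str, bit_depth: int = 10) -> float:
    return SAMPLES_PER_PIXEL[fmt] * bit_depth

for fmt in SAMPLES_PER_PIXEL:
    bpp = bits_per_pixel(fmt)
    share = bpp / bits_per_pixel("4:4:4")
    print(f"{fmt}: {bpp:.0f} bpp at 10-bit ({share:.0%} of full chroma)")
```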
LPCM (Linear Pulse-Code Modulation) is an uncompressed digital audio format that represents sound as linear samples of amplitude over time.
From a technical perspective, LPCM preserves the original audio waveform without perceptual compression. It supports flexible channel counts, sample rates, and bit depths, making it the reference transport format inside most digital audio pipelines.
In AV receivers and HDMI signal chains, LPCM is often the final decoded format before digital-to-analog conversion. When a source outputs LPCM, decoding has already occurred upstream.
For high-end home theaters, LPCM represents transparency. It delivers predictable, lossless audio quality, but shifts responsibility for decoding and format handling to the source device.
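The essence of LPCM fits in a few lines: sample a waveform at fixed intervals and store each amplitude as a linear integer. The signal and parameters below are illustrative.

```python
import math

# Quantize a 1 kHz sine into 16-bit LPCM samples at 48 kHz.
SAMPLE_RATE, BIT_DEPTH, FREQ = 48_000, 16, 1_000
FULL_SCALE = 2 ** (BIT_DEPTH - 1) - 1  # 32767 for signed 16-bit

samples = [
    round(FULL_SCALE * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(8)
]
print(samples)  # first eight linear sample values
# Data rate scales linearly: channels x sample rate x bit depth,
# e.g. 2 x 48000 x 16 = 1.536 Mbps for this stream in stereo.
```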
Bitstream audio refers to the transmission of compressed audio data—such as Dolby or DTS formats—without decoding it at the source.
Technically, bitstreaming preserves encoded data so that decoding occurs inside the AV receiver. This allows the receiver to apply object-based rendering, speaker mapping, and proprietary processing.
In AVR systems, bitstream mode is essential for advanced surround formats like Dolby Atmos and DTS:X. It ensures that metadata reaches the decoder intact.
For premium AV setups, choosing between LPCM and bitstream determines where processing authority resides. Bitstreaming maximizes AVR control and immersive audio fidelity.
An audio clock is the timing reference that determines when digital audio samples are processed and converted.
From an engineering standpoint, clock accuracy and stability are critical. Even small deviations can cause timing errors that manifest as distortion, noise, or loss of synchronization.
In AV receivers, multiple clocks may coexist—source clocks, HDMI clocks, internal DSP clocks. Proper clock recovery and management are essential for stable playback.
For high-end audio systems, clock design separates reference-grade performance from average implementations. Precise timing underpins all digital audio quality.
Jitter is the variation in timing accuracy of a digital audio clock.
Technically, jitter introduces uncertainty in when samples are converted from digital to analog. Excessive jitter can degrade sound quality by introducing noise or subtle distortion.
In HDMI and digital audio systems, jitter can originate from source devices, cables, or clock recovery circuits. High-quality receivers minimize its impact through buffering and reclocking.
For premium audio systems, jitter control is a critical engineering discipline. Effective mitigation ensures clean, stable, and transparent sound reproduction.
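A standard worst-case bound makes the stakes concrete: for a full-scale sine at frequency f converted with RMS clock jitter t_j, the jitter-limited SNR is -20 * log10(2 * pi * f * t_j).

```python
import math

# Jitter-limited SNR ceiling for a full-scale sine.
def jitter_limited_snr_db(freq_hz: float, jitter_s: float) -> float:
    return -20 * math.log10(2 * math.pi * freq_hz * jitter_s)

for tj in (1e-9, 100e-12, 10e-12):  # 1 ns, 100 ps, 10 ps
    print(f"{tj * 1e12:>6.0f} ps jitter @ 20 kHz -> "
          f"{jitter_limited_snr_db(20_000, tj):.0f} dB SNR ceiling")
# Even 1 ns of jitter caps a 20 kHz tone near 78 dB, well short of the
# ~96 dB range of 16-bit audio, which is why reclocking matters.
```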
Lip-sync compensation is the process of aligning audio playback with corresponding video to ensure perceived synchronization.
From a technical perspective, video processing often introduces latency through scaling, tone mapping, and frame buffering. Audio must be delayed appropriately to maintain alignment.
In AV receivers, lip-sync compensation may be manual or automatic (as mandated by HDMI eARC). Accurate compensation requires consistent latency reporting across devices.
For advanced home theaters, reliable lip-sync is essential. Even small misalignments can break immersion and cause viewer fatigue.
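The core calculation is simple: the audio delay the receiver must insert equals the reported video latency multiplied by the audio sample rate. The latency figure below is illustrative.

```python
# Convert a video pipeline latency into an audio delay in samples.
def lipsync_delay_samples(video_latency_ms: float,
                          sample_rate_hz: int = 48_000) -> int:
    return round(video_latency_ms / 1000 * sample_rate_hz)

# A display reporting 83 ms of processing latency (e.g., heavy tone mapping):
print(lipsync_delay_samples(83.0))  # 3984 samples of audio delay at 48 kHz
# eARC's mandatory lip-sync mechanism lets devices report latency over the
# link, so this delay can track mode changes instead of being set by hand.
```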