An assessment of reference levels
Research Paper / Apr 2019

The movie and broadcast industries are gaining experience with high-dynamic-range (HDR) video technologies and are starting to produce HDR content at scale. This is accompanied by a learning process, particularly around how to use the extra dynamic range these technologies afford. To provide further insight into HDR content, this paper analyzes a variety of content types. Manual annotation is used to determine the range of luminance values that, for example, diffuse white objects or white overlay graphics have in this content. We found that, for each type of object analyzed, the mean luminance value is reasonably constant across different types of content, but the spread of luminance values is remarkably large and strongly content dependent. Our results may form a basis for understanding HDR content and contribute toward forming opinions on defining reference levels in reports and standards, including International Telecommunication Union Radiocommunication Sector (ITU-R) Report BT.2408.
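The abstract's central finding concerns per-category means and spreads of annotated luminance values. As a rough illustration only, the sketch below aggregates such statistics from manual annotations; the category names and luminance values are invented placeholders, not data or methodology from the paper.

```python
# Minimal sketch (hypothetical data): per-category luminance statistics of the
# kind the abstract describes, aggregated over manually annotated objects.
from statistics import mean, stdev

# Each entry: (object category, content item, annotated luminance in cd/m^2).
# Values are invented placeholders, not measurements from the paper.
annotations = [
    ("diffuse_white", "drama_clip", 180.0),
    ("diffuse_white", "sports_clip", 240.0),
    ("diffuse_white", "nature_clip", 150.0),
    ("graphics_white", "drama_clip", 300.0),
    ("graphics_white", "sports_clip", 260.0),
    ("graphics_white", "nature_clip", 340.0),
]

# Group annotated luminance values by object category.
by_category = {}
for category, _content, luminance in annotations:
    by_category.setdefault(category, []).append(luminance)

# Report the mean and two simple measures of spread per category.
for category, values in by_category.items():
    span = max(values) - min(values)
    print(f"{category}: mean={mean(values):.0f} cd/m^2, "
          f"stdev={stdev(values):.0f}, min-max spread={span:.0f}")
```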