AfterDawn.com

Understanding how color is represented, or encoded, in digital video requires at least a rudimentary understanding of human perception and digital sampling. I'm going to gloss over the more technical details, which matter if you're writing a program and need to understand how things really work, but are irrelevant for someone who just needs to understand what that program does. In some cases technically correct terms are used, and in others technically incorrect terms are used to reflect common usage in the hobbyist community. In general these are terms used interchangeably for analog and digital video, but which are really only correct for one or the other.

Color is essentially how the brain interprets different frequencies (wavelengths) of light that reach the eye. That light can either come directly from a light source, such as the sun or a light bulb, or as a reflection off of some other object. Light that contains equal amounts of every visible wavelength is white. When white light reaches an object, whether it's a painting, a car, or your skin, some frequencies will be absorbed and others will be reflected. The color of the object is equivalent to white light minus the absorbed frequencies. If all wavelengths of visible light are absorbed, no light is reflected and the object appears black. This is called subtractive color, and if you're like me you learned about this as a small child. Reflected light can be divided into three primary colors (colors that can be combined to make any other color) - Red, Yellow, and Blue.

When dealing with light coming directly from the source, or reflected from a white screen (meaning no visible wavelengths are absorbed) color isn't determined by subtracting, but rather by adding different frequencies together. Not surprisingly, this is called additive color. Like subtractive color, it can be divided into three primary colors, but instead of Red, Yellow, and Blue the primary additive colors are Red, Green, and Blue (RGB). This is how televisions, computer monitors, and film projectors all work.

Representing Color Digitally

Video that's stored in the standard uncompressed digital format must have a value for each primary additive color at each pixel. That is, each pixel is defined by three numbers: one representing Red, one representing Green, and another representing Blue. Like all representations of analog data in the digital domain, each of these numbers is a sample. The number of possible values for each sample is determined by the size of the number used to represent it, also known as bit depth (bits per sample). When applied to colors this is called color depth. Technically this means the video isn't truly uncompressed, since the real world contains a nearly infinite range of colors and only a finite number of them can be stored, but given the limitations of the human eye it's possible to narrow them down to those the eye and brain can actually distinguish from one another.
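The arithmetic behind bit depth and color depth is simple enough to sketch. This short example (the function names are my own, not from any standard) shows how the number of bits per sample determines the number of levels per channel, and how three channels combine into the total color depth:

```python
# Number of distinct values a sample can take at a given bit depth,
# and the total number of colors when three samples (R, G, B) combine.
def levels_per_sample(bits):
    return 2 ** bits

def total_colors(bits_per_sample, samples=3):
    return levels_per_sample(bits_per_sample) ** samples

print(levels_per_sample(8))   # 256 levels per channel at 8 bits
print(total_colors(8))        # 16777216 colors, i.e. 24-bit color
```

So common 8-bit-per-sample video gives 256 levels each for Red, Green, and Blue, or roughly 16.7 million colors in total, which is why it's often called 24-bit color.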

Bitrate vs. Bandwidth

If you know a little bit about digital video you should recognize the term bitrate, meaning how many bits are read per second. This is the correct way to measure a digital signal, but it's completely meaningless in the analog domain. That doesn't mean there isn't a way to measure the "size" of an analog signal, but instead of a stream of bits it's a range of frequencies. This is the bandwidth of the signal. Every wave, whether it's made up of light, sound, electricity, or radio, has a certain frequency (how many times a complete wave repeats per second), and one cycle per second equates to 1Hz (Hertz). Just as lowering the bitrate of a digital bitstream allows more streams to fit on the same wire, reducing the bandwidth of analog video allows more signals to be included on the same cable or in the same satellite transmission.
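The tradeoff in that last sentence can be put in concrete terms. This is a sketch with made-up numbers (the 38 Mbps channel capacity and the per-stream bitrates are illustrative, not taken from any real broadcast standard), showing how lowering per-stream bitrate lets more digital streams share one channel:

```python
# Illustrative only: how many digital streams of a given bitrate
# fit in a fixed channel capacity. All figures are hypothetical.
def streams_that_fit(channel_capacity_bps, stream_bitrate_bps):
    return channel_capacity_bps // stream_bitrate_bps

capacity = 38_000_000  # hypothetical 38 Mbps channel

print(streams_that_fit(capacity, 8_000_000))  # 4 streams at 8 Mbps
print(streams_that_fit(capacity, 4_000_000))  # 9 streams at 4 Mbps
```

Halving the bitrate of each stream roughly doubles how many fit, which is the digital counterpart of squeezing more analog channels into a cable by reducing each one's bandwidth.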

Gamma Correction

Analog video uses a linear scale to represent Red, Green, and Blue values. This means that the difference between a value of 1 and 2 is the same as the difference between 128 and 129, which is also the same as the difference between 254 and 255. This is fine if you're simply measuring the light instead of representing a picture with it. Unfortunately, since the human eye can distinguish smaller variations in brightness at some intensity levels (particularly in dark areas) than at others, this representation either carries extra information that isn't perceptible to humans, or eliminates perceptible information to avoid wasted storage or excessive transmission bandwidth. When this system was designed it wasn't much of an issue because any bandwidth concerns were overshadowed by the quality of the technology available to reproduce the signal. In the digital world that extra bandwidth translates to bits, meaning we can either choose to waste bits on details we can't see or avoid wasted bits by sacrificing detail in the most important tonal ranges. In reality, digital video uses a third option called Gamma Correction.

Gamma correction fixes the problem by using a non-linear scale for each value. By devoting more code values to the intensity levels the eye perceives the most detail in, equal perceived quality across the whole range can be achieved without wasting precious storage space. In general, when you see what looks like an analog notation, such as RGB, with a single quote (or apostrophe) added afterward, it refers to gamma corrected digital video. For example, gamma corrected RGB uses the notation R'G'B'. This is normally only an important distinction if you work with both analog and digital signals, but it's also important to understand if you're trying to follow a technically accurate text. For the purposes of this guide, unless analog video is specifically mentioned, you should assume that all values refer to gamma corrected digital video. Most of the information available on the internet intended for hobbyists uses analog notation to refer to digital video.
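The non-linear scale is usually a power law. Here's a minimal sketch: real video standards such as Rec. 709 actually use a piecewise curve with a short linear segment near black, so the plain exponent of 2.2 below is a common approximation rather than an exact specification:

```python
# Simple power-law gamma, an approximation of real transfer curves.
# Values are linear light in the range 0.0-1.0.
GAMMA = 2.2

def encode(linear):
    """Linear light -> gamma-corrected (R'G'B') value."""
    return linear ** (1 / GAMMA)

def decode(corrected):
    """Gamma-corrected value -> linear light."""
    return corrected ** GAMMA

# Encoding pushes dark values up the scale, so more of the available
# code values end up describing shadows, where the eye sees the most
# detail. The encode/decode pair round-trips back to the original.
print(encode(0.5))                 # well above 0.5 after encoding
print(decode(encode(0.25)))        # recovers (approximately) 0.25
```

Notice that a linear value of 0.5 encodes to roughly 0.73: half the scene brightness lands nearly three quarters of the way up the code scale, leaving far more code values for the dark end than a linear scale would.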


Table of Contents

  1. Introduction
  2. Basic Facts
  3. Color Encoding
  4. Color Compression
  5. Common Color Spaces
Written by: Rich Fiscus