Display Artifacts and Image Quality

If displays were able to drive each of their pixels to exactly the right color and luminance at exactly the right time, we would not be troubled by image artifacts – but we are.

by Raymond M. Soneira

In a previous article ["Understanding Display Image Characteristics and Gamma," Information Display 21, 34–42 (March/April, 2005)], we discussed the photometry and colorimetry for CRT, LCD, plasma, and DLP display technologies. In this article, we will discuss display artifacts and image quality.

If all of the pixels of a display performed exactly as described by the photometry and colorimetry, there would be nothing more to discuss because images are just made up of pixels. Unfortunately, it is not quite that simple because a considerable amount of processing is required between the image or signal source and the actual display device. All of this processing affects and modifies what each pixel finally displays, often adversely affecting image quality and accuracy and frequently introducing artifacts – visible features in an image that should not be there. The artifacts that affect all displays in general will be discussed; however, the artifacts that affect one particular technology more than others have recently received a great deal of attention.

Analog versus Digital

Three fundamental issues for any display are the way in which it generates pixels, produces intensities, and processes input signals, each one of which can be performed in either an analog or digital domain. The terms analog and digital are used in a wide variety of contexts. Generally, analog refers to anything that can take on a continuously variable value, while digital refers to anything that can only take on a discrete set of values. Analog is smooth but imprecise, while digital has jumps but is precise.

Digital is assumed to be better than analog, but that is not always the case. Each has its own particular advantages and disadvantages, can produce excellent image and picture quality, and can be superior to the other. It is the details of the implementation that determine the quality of the end result and the artifacts that are produced.

Intensity

The intensity or luminance of a display pixel can be controlled by either an analog or digital process. Although any display can accept both analog and digital signals, the signals must be converted into the native mode of the particular display by the time the signals are received by the display device.

Both CRTs and LCDs produce their intensity scale through analog voltage control of the device. The range of brightness that is produced is both perfectly smooth and infinitesimally graduated. Some signal processing may be necessary in order to obtain the desired gamma or gray scale, as discussed in the previous article. If all the processing is performed via analog circuitry, then the display retains its pure analog nature, which means that it is free of intensity artifacts (as will be explained shortly) but is susceptible to signal and image-quality degradation.

On the other hand, both plasma and DLP displays have only digital on and off pixel states, so they must produce their intensity scale digitally; i.e., by rapidly switching between the two states and varying the percentage of time that is spent in each state. If the switching frequency is high enough, the human eye responds to the time-averaged intensity of the pixel. For example, for an intensity of 25%, the pixel is turned on for a quarter of the time and off for the remaining three quarters.

In principle, in this way it is possible to produce an infinitesimally graduated intensity scale. But in practice, the switching frequency is fixed and the states are digitally controlled so that only a discrete set of intensity levels can be produced. As a result, the intensity scale is no longer smooth but increases as a series of steps. The resulting jumps in intensity produce what is known as quantization error because all intensities are forced up or down to the nearest available digital value.

If these steps are very small, the human eye will not detect the lack of intermediate values and smoothness. If not, artificial intensity and color contours, which are artifacts, may become visible in the image, particularly at lower intensities. Because most signals are now digitally generated, including almost all analog signals, there is already a quantization error built into most images. At low intensities the on-time can be so brief that a viewer may observe it as image noise.
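As a minimal sketch of this quantization, consider a hypothetical display that divides each frame into 256 equal on/off time slices (a deliberately simplified assumption; real plasma and DLP drive schemes use far more elaborate sub-field timing). Any requested intensity must be rounded to a whole number of slices:

TIME_SLICES = 256  # assumed number of on/off time slices per frame

def displayed_intensity(requested):
    """Time-averaged intensity actually produced for a requested value in 0.0-1.0."""
    # Only a whole number of slices can be switched on, so the requested
    # intensity is forced to the nearest available step (quantization error).
    on_slices = round(requested * TIME_SLICES)
    return on_slices / TIME_SLICES

for requested in (0.25, 0.1037, 0.5021):
    shown = displayed_intensity(requested)
    print(f"requested {requested:.4f} -> displayed {shown:.4f} "
          f"(error {shown - requested:+.4f})")

The rounding step is the quantization error: every requested intensity is pulled to the nearest of the available values, and the finer the time slicing, the smaller the error.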

In most consumer devices, the goal is to produce 256 intensity levels – 8 bits per primary color or 24 bits total. Signal processing generally reduces the total number of levels that are actually available, so fewer than 256 levels are provided. This introduces additional gray-scale artifacts, which will be discussed later.
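The level loss is easy to see in a small sketch. Assume, purely for illustration, that a display applies a 90% contrast scaling and must round every result back to an 8-bit integer:

levels_in = range(256)   # the 256 possible 8-bit input levels
contrast = 0.90          # hypothetical contrast scaling, chosen for illustration

# Each processed value must be rounded back to an 8-bit integer, so several
# different inputs can collapse onto the same output code.
levels_out = {min(255, round(v * contrast)) for v in levels_in}

print(f"input levels: 256, distinct output levels: {len(levels_out)}")

In this example only 231 of the 256 input codes survive as distinct output levels; a longer processing chain discards correspondingly more.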

Pixels

The pixel structure of a display can also be either analog or digital, which is determined by either the inherent nature of the display technology or the implementation chosen by the manufacturer.

DLP and plasma displays have an intrinsic, discrete pixel-matrix format that is fixed at the time of manufacture and cannot be changed. Each DLP pixel is made up of a micromirror and each plasma pixel is made up of three gas cells containing red, green, and blue phosphors. This discrete image structure results in digital pixels (or discrete or fixed pixels). Each digital pixel has a unique digital address on the display.

There are many advantages and disadvantages to this approach. The repeating pixel structure can become quite noticeable in some images, which is called pixelation. Gaps between the pixels, known as the screen-door effect, make pixelation more apparent. The gaps are noticeable in plasma and LCD technologies, but are almost invisible in DLP and LCOS. The higher the display resolution, the less apparent pixelation becomes. For a 1920 x 1080-pixel display, pixelation is generally invisible at typical viewing distances.
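A rough calculation supports that claim. Assuming, for illustration, a 50-in. 1920 x 1080 screen viewed from 3 m and the roughly 1-arcminute resolution limit of normal visual acuity:

import math

diagonal_in = 50.0                 # assumed screen diagonal, inches
h_pixels = 1920
viewing_distance_m = 3.0           # assumed viewing distance

# Screen width from the diagonal and the 16:9 aspect ratio.
width_m = diagonal_in * 0.0254 * 16 / math.hypot(16, 9)
pixel_pitch_m = width_m / h_pixels

# Angle subtended by a single pixel, in arcminutes.
pixel_arcmin = math.degrees(math.atan2(pixel_pitch_m, viewing_distance_m)) * 60

print(f"pixel pitch: {pixel_pitch_m * 1000:.2f} mm")
print(f"angle per pixel: {pixel_arcmin:.2f} arcmin (normal acuity is about 1 arcmin)")

In this example each pixel subtends only about 0.66 arcminute, below the acuity limit, so the pixel structure itself cannot be resolved.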

A digital image that has a pixel format identical to that of the display can be reproduced exactly and appear perfectly sharp. If the image has a different pixel format, it must be digitally rescaled to match the pixel format of the display, which introduces artifacts into the image. If the image is analog and not digital, it must be digitized for digital displays, which introduces a different set of artifacts.
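A one-line nearest-neighbor resampler, applied to a single row of pixels, shows where the rescaling artifacts come from (real scalers use more sophisticated interpolation filters, but face the same underlying problem):

def rescale_row(row, new_width):
    """Resample one row of pixel values to new_width by nearest neighbor."""
    old_width = len(row)
    return [row[min(old_width - 1, round(x * old_width / new_width))]
            for x in range(new_width)]

source = [0, 255] * 4                  # fine one-pixel black/white line pattern
print(rescale_row(source, 10))         # upscaled: some lines are doubled
print(rescale_row(source, 6))          # downscaled: some lines vanish

Upscaling the fine black-and-white line pattern doubles some lines while leaving others single, and downscaling makes some lines disappear entirely; interpolating filters trade these errors for a loss of sharpness instead.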

Because CRTs are pure analog devices, they do not have any preordained pixel structure built in, so they are free to support a very wide range of image formats. The phosphor dots or stripes in direct-view color CRT monitors introduce a grain into the image, which can produce wispy moiré interference patterns, but they do not impart any particular pixel structure. CRT projectors use CRTs with smooth phosphor coatings so that moiré artifacts are not produced.

CRTs do not have an innate pixel structure, but they are still considered to have analog pixels (or continuous pixels) because pixels can be thought of as being formed and stretched in whatever way is necessary. If a CRT monitor is connected to a computer, a wide variety of image formats can be selected that will be accurately reproduced without any pixel or rescaling artifacts.

This is not the whole story because virtually all CRTs are operated so that images are made up of a specified number of continuous horizontal lines, which is called a raster. The raster is actually a property of the image structure rather than of the CRT itself, which in principle will accept whatever line structure is delivered to it. So images on a CRT are actually analog horizontally but digital vertically because of the image line structure. When the CRT's beam size is properly adjusted, the raster is virtually invisible, but that is not always the case, especially with CRTs that support multiple resolutions. Although LCD panels require an analog signal internally, the signal processing in LCDs is generally digital.

Signals

The signal is the means by which an image is delivered to a display and is also the object of all of the processing that takes place. Most displays now accept both analog and digital signals. Note that the signals are doubly digital or analog (a digital signal has digital pixels with digital intensities and an analog signal has analog pixels with analog intensities).

Conversion of the two signal types always involves some image degradation and artifacts. An analog signal is transmitted as a voltage, typically ranging between 0 and 0.7 V, while a digital signal is transmitted as a number, generally ranging between 0 and 255.
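A minimal sketch of one such conversion digitizes an analog level in the 0–0.7-V range to an 8-bit code and converts it back (the specific voltages are arbitrary examples):

FULL_SCALE_V = 0.7   # analog peak level
MAX_CODE = 255       # 8-bit digital peak level

def to_digital(volts):
    return max(0, min(MAX_CODE, round(volts / FULL_SCALE_V * MAX_CODE)))

def to_analog(code):
    return code / MAX_CODE * FULL_SCALE_V

for volts in (0.1234, 0.35, 0.6789):   # arbitrary example levels
    code = to_digital(volts)
    print(f"{volts:.4f} V -> code {code:3d} -> {to_analog(code):.4f} V")

Even a single round trip does not return exactly the original voltage, and most signal paths involve more than one conversion.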

Analog signals. An analog signal has the advantage of not possessing any built-in structure, which makes it artifact-free. It can carry more subtle image information than a digital signal, but it is also much more susceptible to degradation and interference. High-quality analog signal processing is now more difficult and expensive to implement than digital signal processing.

Analog signals typically come in the form of RGB – separate red, green, and blue primary color channels – for computers (via DB15 VGA or BNC connectors) and component YPbPr (separate luminance and encoded blue and red channels) for video (via BNC or RCA connectors). Because analog signals are easily degraded, it is important to use high-quality signal sources. The biggest performance differentiator is often the quality of the analog electronics within the display itself because of the considerable amount of processing electronics that is required.

High-quality cables are also important, particularly for runs longer than a few meters. Long runs of more than 25 m typically require repeaters with equalization and signal processing in order to compensate for the transmission losses, but distances of 100 m or more are possible with little overall signal degradation.

Digital signals. A digital signal has the advantage of absolute precision when it is generated, transmitted, or processed. Certain limitations and artifacts result from the discrete nature of these signals, but there are many advantages in using them. If both the signal source and display work with digital signals, then it is usually better to use a digital rather than an analog connection. Even so, there are two reasons for picking analog over digital: (1) digital often produces stronger and more visible artifacts and (2) analog often provides a larger set of controls for adjusting the image.

Digital signals come most commonly in the DVI (Digital Visual Interface) format for computers and DVI with HDCP (High-Bandwidth Digital Content Protection) for video. DVI has red, green, and blue channels with 8 bits of intensity information per channel. There is a crucial difference in the signal levels of these two different types of DVI. Computer signals cover the complete 8-bit (0–255) range, having 0 for black and 255 for peak intensity; 8-bit video signals operate over a somewhat smaller range, having 16 for black and 235 for peak white. The remaining levels are reserved to accommodate signals that overshoot this nominal range, and also for synchronization. To obtain accurate image reproduction it is important that the display provide automatic or manual adjustments to accommodate these range differences.
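A sketch of the adjustment a display must make when it receives video-range levels over a full-range connection (the hard clipping of overshoot values shown here is a simplification; a display may instead preserve some of that headroom):

def video_to_full_range(level):
    """Map an 8-bit video level (black = 16, peak white = 235) to 0-255."""
    expanded = round((level - 16) * 255 / (235 - 16))
    return max(0, min(255, expanded))   # clamp overshoot and undershoot

for level in (16, 125, 235, 240):
    print(f"video level {level:3d} -> full range {video_to_full_range(level):3d}")

A mismatch in either direction produces either crushed blacks and clipped whites or a washed-out, low-contrast picture, which is why the range adjustment mentioned above matters.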

Another issue with DVI signals is that the transmission distance is typically limited to just a few meters. Beyond that, some form of repeater is required. Beyond the maximum recommended distance the image will start to degrade, first in the form of intermittent digital-noise artifacts appearing in the image and then by a total loss of signal.

However, much greater distances are possible. Silicon Image, one of the principal developers of DVI, has demonstrated DVI transmissions of 20 m with UXGA (1600 x 1200 pixels) images using chipsets that are specially designed for long cable runs. Longer runs of 100 m or more are possible using fiber-optic DVI cables.

HDMI (High-Definition Multimedia Interface) is the next generation of digital interconnect, and it is beginning to appear in the marketplace. It is backwards compatible with DVI. HDMI has two major advantages: support for up to 12 bits of intensity (4096 levels) and typical transmission distances of up to 15 m. These characteristics will significantly reduce many of the digital artifacts discussed here. Other advantages of HDMI include a smaller connector and the inclusion of audio and control signals together with the video signal, thus only a single cable is needed.

Signal Quality and Accuracy

Both analog and digital signals are subject to various types of errors that keep them from delivering perfect image quality and accuracy, although for different reasons. Video and photographic images always begin in the analog world and almost always need to be converted into digital signals for transmission or storage.

Computer-generated images that are displayed on DLP or plasma displays with digital connections are the only images that can remain purely digital end-to-end. Most computer and video hardware is still connected via analog signals, so a reverse digital-to-analog conversion is generally necessary.

Most display devices are still CRT- or LCD-based and therefore require an internal analog signal. These displays often require some form of internal digital signal processing, so almost all images spend time in both the analog and digital domains and need to be converted at least once and quite often more than once.

Digital Granularity

Because digital signals and intensities are absolutely precise, they always introduce some type of brightness artifact to each pixel of an image due to quantization error. And since the intensity steps are applied individually to each of the red, green, and blue primary-color channels, they also introduce quantization errors to the hue and saturation for all resulting color mixtures. If the steps are very fine, the human eye will not detect the jumps and lack of smoothness, but if the steps are not fine, the granularity of the digital steps will introduce false intensity and color contours into an image. These are most noticeable when an image contains fine graduations in intensity or color.
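The color side of the effect can be illustrated with a small sketch that quantizes each channel of a color to 5 bits (an arbitrary depth chosen to exaggerate the effect) and compares the hue and saturation before and after:

import colorsys

def quantize_channel(value, bits):
    """Quantize a 0.0-1.0 channel value to 2**bits levels."""
    levels = (1 << bits) - 1
    return round(value * levels) / levels

original = (0.301, 0.452, 0.213)        # an arbitrary dim green mixture
quantized = tuple(quantize_channel(c, 5) for c in original)

h0, s0, _ = colorsys.rgb_to_hsv(*original)
h1, s1, _ = colorsys.rgb_to_hsv(*quantized)
print(f"hue {h0:.4f} -> {h1:.4f}, saturation {s0:.4f} -> {s1:.4f}")

Because each channel is rounded independently, the mixture's hue and saturation shift slightly, not just its brightness; with coarse enough steps, these shifts become the visible color contours described above.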

How fine do the steps need to be to make these granularity effects invisible? It is the luminance ratio of two values that determines what the human eye sees, so the question becomes, How small a variation in the brightness ratio can the eye detect? It turns out to be roughly 1% over a wide range of luminance, so if the change in luminance between two digital intensity steps is more than about 1%, the granularity can be detected.

The number of intensity steps available is determined by the number of bits used to specify the intensity levels. For example, 8 bits allows 256 levels, which is what is used in most computer and video signals. A greater number of bits will provide a finer intensity scale. HDMI will eventually allow up to 12-bit signals, i.e., 4096 levels.
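Putting the 1% criterion together with an 8-bit, gamma-2.2 intensity scale gives a feel for where the steps become visible (the pure power-law gamma and the fixed 1% threshold are simplifying assumptions; a real display's nonzero black level and ambient light make small steps at the dark end harder to see):

GAMMA = 2.2          # assumed simple power-law gamma
MAX_LEVEL = 255      # 8-bit scale

def luminance(level):
    """Relative luminance of a digital level on a power-law gamma scale."""
    return (level / MAX_LEVEL) ** GAMMA

for n in (16, 64, 128, 220, 240):
    step = 100 * (luminance(n + 1) / luminance(n) - 1)
    print(f"step {n:3d} -> {n + 1:3d}: luminance change {step:.2f}%")

On this idealized scale the steps near black are many percent apart while the steps near white are well under 1%, which is why granularity and contouring show up mainly at lower intensities and why the extra bits offered by HDMI help.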