
Guides to Quality in Visual Resource Imaging

3. Imaging Systems: the Range of Factors Affecting Image Quality
Donald D'Amato
© 2000 Council on Library and Information Resources


1.0 Introduction
1.1 Image Quality Specification and Measurement
1.2 Factors that Affect Quality
2.0 Basic Terminology
3.0 Components of an Imaging System
3.1 Acquisition: Image Capture Devices
3.2 Image Processing
3.3 Compression
3.4 File Formatting and Storage
3.5 Displays
3.6 Printing
4.0 Image Quality Specification and Measurement
4.1 Spatial Sampling Frequency and Geometric Distortion
4.2 Quantization and Tonal Response
4.3 Spatial Resolution
4.4 Spatial and Temporal Uniformity
4.5 Color Accuracy
5.0 Color Management and ICC Profiles
5.1 International Color Consortium
5.2 Profile Connection Space
5.3 ICC Profile Format
5.4 Creation and Editing of Device Profiles
6.0 Managing an Imaging System
6.1 Determination of Requirements
6.2 Preparation of Specifications
6.3 Comparative Evaluation of Systems
6.4 Quality Control During Image Production
7.0 The User's Perspective
7.1 Measurement of Image Quality for Displays
7.2 Measurement of Image Quality for Printers
8.0 Appendix A

1.0 Introduction

Many museums, libraries, and other archiving institutions see digital imaging as a solution to the dilemma of providing unrestricted access to their high-quality document collections while simultaneously preserving those collections within secure, environmentally controlled storage. Digital images may be reproduced indefinitely without degradation and distributed electronically worldwide. Conversion to digital image form may also provide a type of permanence for otherwise deteriorating collections. However, an essential determinant of the value of surrogate digital images, particularly for researchers, historians, and conservators, is their quality.

Speaking of the importance of image quality, Charles S. Rhyne, Professor Emeritus of Art History at Reed College, has stated that "No potential of digital imagery is more unrecognized and undervalued than fidelity to the appearance of the original works of art, yet none carries more promise for transforming the study of art." He states further that "with each jump in quality new uses become possible" and that "What has been overlooked in most discussions of digital imagery is the immense potential of high quality images" (Rhyne, 1996).

Unfortunately, for many researchers and other users, many of the digital images currently available from museums and libraries could not be termed high quality and are probably suitable only for purposes of document identification.

This guide describes some of the technical issues associated with planning, acquiring, configuring, and operating an imaging system. Final users of the images may also find this guide valuable because it provides suggestions for testing and configuring viewing or printing systems and may help explain decisions made by content originators concerning quality trade-offs.

This guide does not recommend any specific scanner, camera, operating system, or image-processing software, nor does it suggest the use of a specific acquisition technique, sampling frequency, spatial resolution, number of bits per pixel, color space, compression algorithm, or storage format. Rather, the guide provides background information on digital imaging, describes generally applicable techniques for ensuring high-quality imagery, and suggests procedures that, to the maximum extent possible, are not limited by the capabilities of currently available hardware or software. The selection of specific components is left to the system users and should be based upon their assessments of their own image quality requirements, their document sizes and quantities, and the acquisition speed needed.

This guide explains how image quality can be measured and maintained, both during the conversion effort and during subsequent processing. Wherever possible, the various trade-offs that must be made among factors such as image quality, compression ratio, and storage or transmission time are described in terms that are not system-dependent.

The remainder of this section explains the approach to image quality measurement employed by this guide. Section 2 provides some basic terminology. Section 3 describes the main components and processing steps of an imaging system, emphasizing those that are crucial to the maintenance of quality. Section 4 describes how image quality can be specified and measured. Section 5 provides information on the International Color Consortium's approach to color management and the use of ICC profiles. Section 6 discusses some of the issues associated with the management of an imaging system, particularly for the conversion of a large number of objects into digital images. Section 7 describes the end user's perspective on quality management.

1.1 Image Quality Specification and Measurement

When an untrained observer describes the quality of a digital image, he or she generally uses nonquantitative, subjective terms, such as "lifelike," "well-focused," "very sharp," "nicely toned," "good colors," or "faithful to the original." Such terms are, of course, open to misinterpretation. Moreover, the means of presentation and the viewing environment, as well as the content of the images, can greatly affect any visual assessment of image quality, even for a trained observer.

Methods that are more objective have been developed to characterize the performance of an imaging system and the quality of the images it produces. Some of these methods measure image quality using images of actual documents or objects in a collection. As the characteristics of the objects themselves are usually unknown, such approaches depend upon the content of the images and require considerable expertise to interpret. A preferable alternative, and the one employed in this guide, is to measure images of test patterns whose characteristics are known a priori.

The measurement of image quality using known patterns can often be automated or performed infrequently; it need not be a major effort. Most high-volume conversion projects can readily include test patterns within portions of the object images or occasionally capture test pattern images along with object images. If test patterns are periodically interspersed and substandard image quality is detected, all images generated since the last test pattern that met the standard must be considered substandard. Thus, the frequency with which test patterns are interspersed must be balanced against the cost of rescanning image batches that might be of substandard quality.

Standards-making bodies have recently made efforts to develop international standards for the specification and measurement of digital image quality. Notably, the Photographic and Imaging Manufacturers Association (PIMA) Technical Committee on Electronic Still Picture Imaging (IT10) is working to establish standards for electronic still picture imaging. The American National Standards Institute (ANSI) accredits PIMA as a standards-making organization. Among PIMA's activities relevant to image quality is the development of standards for the International Organization for Standardization (ISO) Technical Committee 42, Working Group 18. Table 1 provides the numbers and titles of the ISO/TC42/WG18 image quality standards being developed, along with their status as of early 2000. Further information may be found at the PIMA Web site's IT10 page.

Table 1. ISO standards documents being developed for image quality

  • ISO 16067, Photography - Electronic scanners for photographic images - Spatial resolution measurements, Part I: Scanners for reflective media (ISO/TC42/WG18); status: Working Draft 3.1
  • ISO 14524, Photography - Electronic still picture cameras - Methods for measuring opto-electronic conversion functions (ISO/TC42/WG18); status: Draft International Standard (DIS)
  • ISO 15739, Photography - Electronic still picture cameras - Noise measurements (ISO/TC42/WG18); status: Working Draft 5.2
  • ISO 17321, Graphic technology and photography - Colour characterization of digital still cameras using colour targets and spectral illumination (ISO/TC42/WG18 and ISO/TC130/WG3); status: Working Draft 3.1
  • ISO 12233:1999(E), Photography - Electronic still picture cameras - Resolution measurements (ISO/TC42/WG18); status: Final Draft International Standard (FDIS)

1.2 Factors that Affect Quality

The perceived quality of a digital image, and hence its utility for research purposes, depends upon many interrelated factors, including:

  • quality of the original objects
  • quality of intermediary photographic reproduction, if any
  • means of digitization and the conversion parameters selected
  • image processing and compression algorithms
  • use of reformatting, resampling, or reduction in number of quantization levels
  • viewer's hardware and application software
  • quality and configuration of the viewer's monitor or printer
  • viewing environment

Some of these factors are under the control of the originator of the digital image and others are under the control of the end user. Providing a high-quality image to the end user requires that quality be maintained in each of these processing steps, and that both the originator and the end user appreciate the consequences of their choices of many parameters and procedures. Clearly, the degree of control over image quality that each may exercise is asymmetrically distributed. If the content originator does not produce adequate quality, an end user can usually do little to improve the images.

2.0 Basic Terminology

Digital images are composed of discrete picture elements, or pixels, that are usually arranged in a rectangular matrix or array. Each pixel represents a sample of the intensity of light reflected or transmitted by a small region of the original object. The location of each pixel is described by a rectangular coordinate system in which the origin is normally chosen as the upper left corner of the array and the pixels are numbered left-to-right and top-to-bottom, with the upper left pixel numbered (0,0).

It is convenient to think of each pixel as being rectangular and as representing an average value of the original object's reflected or transmitted light intensity within that rectangle. In actuality, the sensors in most digital image capture devices do not "see" small rectangular regions of an object, but rather convert light from overlapping nonrectangular regions to create an output image.

A document or another object is converted into a digital image through a periodic sampling process. The pixel dimensions of the image are its width and height in pixels. The density of the pixels, i.e., the number of pixels per unit length on the document, is the spatial sampling frequency, and it may differ for each axis.

The value of each pixel represents the brightness or color of the original object, and the number of values that a pixel may assume is the number of quantization levels. If the illumination is uniform, the values of the pixels in a gray-scale image correspond to reflectance or transmittance values of the original. The values of the pixels in a color image correspond to the relative values of reflectance or transmittance in differing regions of the spectrum, normally in the red, green, and blue regions. A gray-scale image may be thought of as occupying a single plane, while a color image may be thought of as occupying three or more parallel planes.

A bitonal image is an image with a single bit devoted to each pixel and, therefore, only two levels: black and white. A gray-scale image may be converted to a bitonal image through a thresholding process in which all gray levels at or below a threshold value are converted to 0 (black) and all levels above the threshold are converted to 1 (white). The threshold value may be chosen to be uniform throughout the image (global thresholding), or it may be adapted regionally on the basis of local features (adaptive thresholding). Although many high-contrast documents may be converted to bitonal image form and remain useful for general reading, most other objects of value to historians and researchers should probably not be: too much information is lost during thresholding.
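
As a minimal sketch of global thresholding (assuming an 8-bit gray-scale image held in a NumPy array; the threshold value of 128 is an arbitrary example):

```python
import numpy as np

def global_threshold(gray: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Convert an 8-bit gray-scale image to a bitonal image:
    levels at or below the threshold become 0 (black); levels
    above it become 1 (white)."""
    return (gray > threshold).astype(np.uint8)

# Example: a 2 x 3 gray-scale fragment
fragment = np.array([[10, 130, 200],
                     [128, 129, 40]], dtype=np.uint8)
print(global_threshold(fragment))
# [[0 1 1]
#  [0 1 0]]
```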

These concepts are illustrated in fig. 1, which contains a gray-scale image of the printed word "Gray" and a color image of the printed word "Color." A portion of the gray-scale image has been enlarged to display the individual pixels. A bitonal image has been created from the enlarged section to illustrate the loss of information caused by thresholding. As may be observed, the darker "r" remains recognizable, but the lighter "a" is rendered poorly. The color image is also displayed as three gray-scale images corresponding to its red, green, and blue planes. Note that the pixels corresponding to the color red are lighter in the red plane image, and similarly for the green and blue plane images.

Figure 1. A gray-scale image with a portion enlarged, a bitonal image of the same portion, and a color image with its three color planes shown separately.

3.0 Components of an Imaging System

The components of a digital imaging system may be either separate devices or processing steps within a computer system. They are discussed in their conventional order of use and include the following:

  1. Acquisition
  2. Image processing
  3. Compression
  4. File formatting and storage
  5. Display
  6. Printing

3.1 Acquisition: Image Capture Devices

Digital image capture devices may be categorized as scanners or cameras, depending upon how the periodic sampling process is performed. Although there is not a one-to-one correspondence between sensor design and types of capture devices, a scanner usually employs a line-array sensor and captures a single line of pixels at a time. Therefore, the document must be moved across the object plane or the sensor must be moved across the image plane. A digital camera, on the other hand, typically uses an area-array sensor, which captures the values of all pixels within the image during a single exposure. Also, a scanner usually has a fixed object-to-sensor distance, while a digital camera provides a focusing mechanism to accommodate a large range of object-to-sensor distances.

The selection of either a scanner or digital camera for a digital conversion effort involves trade-offs among several factors, many of which are changing rapidly as the relevant technologies, particularly for cameras, evolve. Guide 2 in this series, Selecting a Scanner, provides further information on these trade-offs.

3.2 Image Processing

The term image processing refers to the many digital techniques available for altering the appearance of an image. Image processing operations may be used to enhance original "master" images, to convert master images to derivative images (reproductions of reduced quality), or to prepare master or derivative images for display or printing. Image processing operations include the following:

  • resampling: changing the image's pixel dimensions
  • pixel-level modification
    • adjustment of brightness and contrast
    • gamma correction
    • histogram modification
    • color correction
  • enhancement through sharpening, smoothing, or spatial filtering
  • thresholding: converting a gray-scale or color image to a bitonal image
  • rotation: either by multiples of 90 degrees or through an arbitrary angle

Because this guide is not intended to include an extensive discussion of image processing, only resampling and enhancement will be discussed here. These two techniques were chosen because they are often used in high-quality imaging. Resampling alters the sampling frequency and, hence, the pixel dimensions of an image, for purposes such as down-sampling for more efficient display, printing, or transmission. The image enhancement technique of "unsharp masking" is used to compensate for image blurring or smoothing that may occur as a consequence of scanning or printing. The interested reader is referred to the many textbooks available on image processing for further descriptions of these and other image-processing techniques.

3.2.1 Image Resampling

The conversion of a digital image having a particular number of pixels in each axis into another image with a different number of pixels in each axis may be done through a resampling process. Resampling should be distinguished from resizing, in which the number of pixels in each axis remains the same but the stored values (usually in the image file header) representing the number of samples per unit distance along each axis are changed.

Resampling is most often associated with the process of creating derivative images from a master digital image. However, scanning systems themselves often perform resampling to convert from their actual ("true optical") sampling frequency to a selected output sampling frequency. The algorithm selected for such processing can have significant consequences for the quality of the output images. Indeed, resampling to create images of a higher sampling frequency than the optical sampling frequency (a process often referred to as interpolated resolution) should be avoided, since it increases the size and processing time of the image without enhancing its quality.

The three most commonly used algorithms for resampling involve assigning values to the pixels in the new (resampled) image based upon

  1. the value of the closest pixel among the four surrounding pixels in the original image ("nearest neighbor")
  2. a weighted combination of the values of the four surrounding pixels in the original image (bilinear interpolation)
  3. a weighted combination of the values of the sixteen surrounding pixels in the original image (bicubic interpolation).

The nearest neighbor approach provides the fastest processing but results in the poorest quality. Bicubic interpolation usually provides the best quality but requires considerably more processing time, especially for larger images. Bilinear interpolation is intermediate in quality and processing speed.

Although bilinear interpolation is generally standardized, the algorithm and the parameters used for bicubic interpolation are implementation-dependent. Results of varying quality are observed among commercially available image-processing systems.
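
To make the bilinear case concrete, the following sketch (an illustrative Python/NumPy implementation, not the algorithm of any particular package) assigns each output pixel a distance-weighted combination of the four surrounding input pixels:

```python
import numpy as np

def resample_bilinear(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Resample a gray-scale image to new pixel dimensions by
    bilinear interpolation: each output pixel is a weighted
    combination of the four surrounding input pixels."""
    in_h, in_w = img.shape
    # Map each output coordinate back onto the input grid.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :]   # horizontal interpolation weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bottom = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bottom * wy  # float values; quantize as needed
```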

See also, Guide 2, Spatial Sampling Rate.

3.2.2 Unsharp Masking

A commonly used image enhancement technique for sharpening an image is known, somewhat enigmatically, as unsharp masking, a term derived from conventional film photography. It refers to the sequence of operations in which a purposely blurred (unsharpened) negative copy of a photograph is used as a subtractive mask, in combination with the photograph itself, to produce a sharpened copy of the photograph.

In the digital domain, unsharp masking is performed by generating a smoothed copy of an image, multiplying the smoothed image by a fractional value, and subtracting it from the original image. All three operations can be performed in a single spatial filtering operation. Digital unsharp masking can be very effective at reducing the inevitable image degradation in scanners that results from document motion during exposure, charge spreading in the sensor, and poor focus and in printers from ink or toner spreading.

The selection of the parameters to be used for the unsharp masking of images before display or printing is, unfortunately, largely a trial-and-error process, and each image-processing package provides a slightly different set of parameters for the user to adjust. Nonetheless, when a set of parameters has been found that enhances the images without introducing excessive noise, those parameters can be used for similarly captured and printed images.
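
Following the description above, digital unsharp masking might be sketched as follows (assuming an 8-bit gray-scale NumPy array and SciPy's uniform_filter for smoothing; the 3 x 3 smoothing window and the fraction k = 0.4 are arbitrary example choices):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def unsharp_mask(img: np.ndarray, k: float = 0.4, size: int = 3) -> np.ndarray:
    """Sharpen an 8-bit gray-scale image by subtracting a scaled,
    smoothed copy from the original; dividing by (1 - k) preserves
    the overall brightness of uniform regions."""
    smoothed = uniform_filter(img.astype(float), size=size)
    sharpened = (img - k * smoothed) / (1.0 - k)
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```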

3.3 Compression

Because images may consume large amounts of storage, various types of compression algorithms are used to reduce their size. Most images compress quite well because the values of the pixels of an image are usually locally correlated. Compression algorithms use this correlation to reduce the number of bits that must be stored or transmitted.

Image-compression algorithms may be either information-preserving (also known as reversible or lossless) or non-information-preserving (also known as irreversible or lossy). Lossless compression can be reversed to generate the original image exactly; lossy compression sacrifices information and cannot be reversed without some degradation. For most images, lossy compression achieves substantially higher compression ratios than does lossless compression, often without sacrificing much in fidelity to the original.
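
As a toy illustration of how local correlation is exploited, the following sketch losslessly encodes a row of bitonal pixels as (value, run length) pairs; practical image-compression algorithms are far more sophisticated:

```python
def run_length_encode(row):
    """Losslessly encode a sequence of pixel values as
    [value, run length] pairs; long runs of identical pixels,
    common in bitonal images, collapse to a few pairs."""
    runs = []
    for value in row:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return runs

print(run_length_encode([0, 0, 0, 0, 1, 1, 0, 0, 0]))
# [[0, 4], [1, 2], [0, 3]]
```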

Guide 5 in this series, File Formats for Digital Masters, covers compression more extensively.

3.4 File Formatting and Storage

An image file format should be flexible, powerful, able to accommodate a wide range of image formats and compression techniques, nonproprietary, and officially published by an international standards-making organization. In addition, it should be widely supported by computer software applications.

If a company-developed product or specification becomes widely used, it is a de facto standard, and other vendors may support it to stay competitive. In the imaging field, there are many de facto standards. The disadvantage of any de facto standard is that the authoring organization can modify it at will, without consulting the user community. Guide 5, File Formats for Digital Masters, covers file formats more extensively.

3.5 Displays

Currently, two technologies dominate the display of color images: shadow-mask cathode ray tubes (CRTs) and liquid crystal displays (LCDs). The quality of the displayed images depends not only on the monitor's capabilities but also on the computer's display adapter board (i.e., the graphics card or video display adapter) and its setup, the configuration of the display driver software, appropriate characterization of the monitor, and the viewing environment.

3.5.1 CRT Displays

CRT displays are the means through which most high quality "soft copy" images are provided to users. Color CRTs use an array of red-green-blue phosphor triads, combined with a precisely aligned metal shadow mask and electron guns, to produce color images. A diagram of these components and their physical relationships with one another is provided in fig. 2. (Note that the diagram incorrectly implies that the electron beams pass through one hole at a time. In actuality, CRTs have electron beams that at the 5 percent intensity level encompass several holes.)

Even if the CRT's electron beams could be precisely focused, the minimum size of displayed pixels is constrained by the triad spacing, since each pixel should encompass at least one triad. High-quality displays currently available have a triad pitch of about 0.25 mm (0.01"), thereby limiting their useful pixel density to about 100 per inch.

Figure 2. Diagram of shadow-mask color CRT technology

The dynamic range of a CRT is the ratio of its brightest light intensity to its darkest, as measured under ambient lighting. Maximum brightness is limited by the electron beam current, the fraction of the electron beam current that passes through the shadow mask, and the efficiency of the phosphors. The darkest value is limited by the reflection of ambient illumination from the phosphor matrix and the glass faceplate.

3.5.2 Liquid Crystal Displays

LCD technology has improved substantially in recent years, and LCDs now almost compete with CRTs in certain high-end image display applications.

The term liquid crystal refers to a state of a substance that is neither truly solid nor liquid. In 1963, it was discovered that the way in which light passes through a liquid crystal could be affected by an applied electric field. LCDs are formed in a sandwich-like arrangement in which the liquid crystal layer affects the rotation of polarized light. By placing polarizers where the light enters and leaves, and transparent electrodes on either side of the liquid crystal layer, the device can be made to pass light selectively through the sandwich when and where a voltage is applied. Color LCDs are formed by adding a color filter array for the exiting light.

While LCDs are perfectly flat, dimensionally stable, and require no focusing, they do not yet match the spatial resolution of CRTs. Other challenges facing designers of color LCDs for imaging applications include obtaining sufficient brightness, contrast, number of levels per color, and range of viewing angles.

3.5.3 Display Adapters

The computer's display adapter converts the digital images stored in the computer to the analog electronic signals required by the monitor. The adapter determines the maximum number of pixels addressable in each axis (addressability), the refresh rate, and the number of colors per pixel. (Of course, the monitor must be equally capable: the adapter's addressability does not mean that all of the pixels are distinguishable, or resolvable, on the monitor.)

Display adapters usually contain their own memory to store the images during refresh cycles. As an example of the memory required, displaying 1,280 (horizontal) by 1,024 (vertical) pixels with 24 bits of color information per pixel requires nearly 4 megabytes of video memory (1,280 x 1,024 x 24 / 8 = 3,932,160 bytes).

3.5.4 Display Characterization

The colors produced by a CRT display are dependent not only upon the amplitude of the input signals provided by the display adapter but also upon the phosphors and electron gun currents and voltages (which are affected by the display's brightness, contrast, and color temperature controls) and by the ambient illumination. For the same input signal, displays from different manufacturers or with differing model numbers can produce quite different colors. Moreover, as monitors age, their electron guns produce less current and the conversion efficiency of their phosphors may diminish. The consistent display of colors requires, therefore, that a display device be characterized periodically to reassess its behavior for various input values.

Characterization is the process of determining the way in which a device renders known colors. It should be distinguished from calibration, which is the process of making sure that a device is performing according to a priori specifications, usually provided by the manufacturer. A display may be characterized by measuring its output colors with a colorimeter or spectrophotometer for various digital input values. Characterization determines a device's color space and gamut (i.e., its range of displayable colors). Alternatively, a display may be characterized more coarsely through an interactive manual process in which displayed colors are compared with one another and with the colors on a printed sheet of paper.

The characterization of a device and the development of a computer file, or profile, that contains a precise description of the device's responses to known colors is often termed profiling. Section 5 of this guide provides further information on the profiling process.

3.5.5 The Viewing Environment

The viewing environment can have a surprisingly large effect on perceived colors. For high-quality imaging, it is essential that a consistent viewing environment be maintained. Ambient illumination can affect the colors from a display because a portion of incident light is reflected from the surface of the CRT and from its phosphor matrix. Our perception of color is also affected by the colors of materials surrounding the display. It can be very challenging to compare the colors from a self-luminous device, such as a CRT, with those from a reflective surface, such as a painting. Viewing environments having standardized illumination and neutral surroundings are essential for such comparisons.

3.6 Printing

Unlike self-luminous displays that render colors through an additive process, the printing of color uses a subtractive mixture of dyes or pigments. The three subtractive primaries (cyan, magenta, and yellow) are commonly used. However, if each primary could only be fully present or absent, only eight colors could be produced. Extending the range of printable shades requires that the printing system modulate the amount of ink deposited. Printing systems can be categorized into two types: (1) those that can control the density of ink, or the size of the dot, deposited at each location; and (2) those that produce ink dots of only a consistent size and density but can vary the frequency of occurrence of the dots. The first category can be termed continuous-tone printers, the second halftoning printers.

The determination of whether to use a continuous tone or a halftoning printer for an application is not as obvious as it once may have been. Commercially available printers in both categories have improved substantially. Notably, for many applications ink-jet printers, which use sophisticated halftoning methods, now compete with dye-sublimation and photographic printers, which use continuous-tone methods.

3.6.1 Continuous-Tone Printers

Continuous-tone printers are able to control the density of ink or the resulting dot size through a modulation of the mechanism for ink deposition. For example, in dye-sublimation printers, the temperature of each of the pixel-sized elements in a heating element array controls the amount of dye transferred from a donor film to the paper. When the donor film is changed and the paper repositioned, successive deposition of cyan, magenta, and yellow dyes produces a full-color print. Precise color rendition requires that the temperature of each of the elements be carefully controlled.

3.6.2 Halftoning Printers

Conventional halftone printing uses a photographic screen with holes that have a radially varying density. A print is created when a conventional photographic negative is masked with the screen during the printing of a high-contrast positive. The resulting image contains the pattern of the screen with the size of each of the dots varying in proportion to the amount of light transmitted by the negative. The halftoned print may then be used to create a printing plate.

In digital halftoning, the dot size remains constant while the frequency of occurrence of the dots is varied within many small halftone cells. Fig. 3 displays a simple halftoning scheme with halftone cells consisting of 3 x 3 dots, thereby providing 10 levels of density per cell. In this case, each pixel is represented by one halftone cell. Commercial devices use considerably more sophisticated algorithms to ensure that their dot patterns are not prominent and that the transitions between regions with differing levels are not obvious.

Figure 3. Representation of a simple printing algorithm using 3 x 3 halftone cells
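
The scheme of fig. 3 can be sketched as follows (an illustrative rendering only, assuming an 8-bit gray-scale NumPy array; the dot fill order is an arbitrary example, whereas commercial devices use carefully designed patterns, as noted above):

```python
import numpy as np

# Order in which dots are switched on within a 3 x 3 cell (an
# arbitrary example; commercial devices use carefully designed
# orderings to avoid visible artifacts).
FILL_ORDER = [(1, 1), (0, 0), (2, 2), (0, 2), (2, 0),
              (0, 1), (2, 1), (1, 0), (1, 2)]

def halftone(gray: np.ndarray) -> np.ndarray:
    """Render an 8-bit gray-scale image with 3 x 3 halftone cells,
    giving 10 density levels (0 to 9 dots) per pixel."""
    h, w = gray.shape
    out = np.zeros((3 * h, 3 * w), dtype=np.uint8)
    # Quantize 0..255 to 0..9 dots (darker pixels get more dots).
    n_dots = ((255 - gray.astype(int)) * 9 + 127) // 255
    for y in range(h):
        for x in range(w):
            for dy, dx in FILL_ORDER[:n_dots[y, x]]:
                out[3 * y + dy, 3 * x + dx] = 1
    return out
```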

3.6.3 Confusing Terminology: ppi, dpi, lpi

There is considerable confusion in the field of digital imaging about terms relating to halftone screen frequency, printable dot frequency, and spatial sampling frequency. Too often, the term dots per inch (dpi) is used in lieu of pixels per inch (ppi). The term dpi should probably be reserved for the dot frequency in a halftoning printer, while the term ppi should be used when referring to the sampling frequency of a scanner or camera. A less frequently used term is lines per inch (lpi), which should be used when referring to the line or halftone cell frequency of a halftoning printer or to the frequency of occurrence of lines in a bar pattern.

3.6.4 Printer Characterization

A continuous tone or halftoning printer may be characterized, or profiled, by printing a digital test pattern containing a series of known color patches. The colors of the patches on the output print are measured using a colorimeter or spectrophotometer. Profile-building software, which reads each of the patches and compares their colors with those expected, is used to build the profile that describes the printer's characteristics precisely.

4.0 Image Quality Specification and Measurement

The design of an imaging system should begin with an analysis of the physical characteristics of the originals and the means through which the images may be generated. For example, one might examine a representative sample of the originals and determine the level of detail that must be preserved, the depth of field that must be captured, whether they can be placed on a glass platen or require a custom book-edge scanner, whether they can tolerate exposure to high light intensity, and whether specular reflections must be captured or minimized. A detailed examination of some of the originals, perhaps with a magnifier or microscope, may be necessary to determine the level of detail within the original that might be meaningful for a researcher or scholar. For example, in drawings or paintings it may be important to preserve stippling or other techniques characteristic of the artist.

The analysis should also include an assessment of the quality required by the applications for which the images will be used. The result of that assessment will guide the selection of the algorithms and components used in the scanning, compression, storage, display, and printing subsystems. Judgments of quality can be very subjective. Inevitably, trade-offs must be made among many parameters and costs of the various components.

While there does not seem to be any single, content-independent metric that relates closely to our perception of quality, there are many metrics that, in combination, can be used to specify a desired level of image quality, at least if images of test targets may be captured and analyzed. Thus, if the original documents or objects to be digitized are first characterized through measurements of the range of their reflectances, colors, and levels of detail, it is then possible to select image quality test targets and testing procedures to ensure that these characteristics are faithfully captured in the images.

There is considerable confusion in the field of digital imaging, perhaps caused by commercial competition, about the specification of image quality. For example, scanner manufacturers often emphasize the "true optical resolution" of their systems when referring to the maximum number of pixels that can be acquired (without interpolation) per unit length along each axis. This number is usually expressed in dpi or ppi. Scanner manufacturers also often emphasize "bit-depth," which is the number of bits per pixel (bpp) that their systems are capable of capturing. However, most buyers do not realize that scanners having identical "true optical resolution" and "bit-depth" may capture images of quite different quality. As another example, digital camera manufacturers often describe the "resolution" of their devices in terms of the total number of pixels in the image sensor. Again, the quality of the images produced by digital cameras having equal numbers of total pixels can vary substantially, even if the number of color levels produced is identical.

The following subsections define many of the terms associated with image quality and describe how quality can be specified and measured using test patterns and associated metrics.

4.1 Spatial Sampling Frequency and Geometric Distortion

The spatial sampling frequency, or sampling rate, is the number of pixels captured per unit length, measured in the plane of the document or other object. Manufacturers often refer (incorrectly) to a scanner's maximum spatial sampling frequency as its "true optical resolution." Sampling frequency can be easily and precisely measured with a test pattern containing horizontal and vertical rulings. For a flat field of view, the sampling frequency should be uniform throughout. Any variation in the sampling frequency would result in geometric distortion.

Other factors being equal, the storage required for an uncompressed image is proportional to the product of the sampling frequencies for each axis. There is, therefore, considerable motivation to use the minimum sampling frequency that will produce images with a level of quality appropriate for the applications in mind.

4.2 Quantization and Tonal Response

The storage required for an uncompressed image is proportional to the logarithm of the number of quantization levels. A gray-scale image using 256 (i.e., 2^8) levels per pixel would require one-half of the storage required for the same image if 65,536 (i.e., 2^16) levels per pixel were used. The number of quantization levels per pixel is usually chosen to be the maximum number of values that can be represented in an integer number of bytes.

Although the human eye does not have a linear response to light intensity and most computer displays do not produce light with an intensity that is linear with input, the quantization levels of input devices are most often chosen to be spaced uniformly with respect to input light amplitude. That is, the tonal response is selected to be linear with reflectance or transmittance.

Measurement of the values from gray-scale step patterns (often called step wedges) enables the determination of the tonal response curves of a scanner or camera. Fig. 4 displays a 20-step gray-scale wedge pattern and the values of optical density for each step.

Figure 4. 20-step gray-scale wedge with the corresponding values of (absolute) optical density

The optical density of a reflective medium is usually provided relative to that of a perfect diffuse reflector. (A perfect diffusely reflecting surface is defined to have an optical density of zero.) Absolute reflectance is equal to 10 raised to the negative of the optical density,

    R = 10^(-D),

and is usually expressed as a percentage. (A perfect diffusely reflecting surface would have an absolute reflectance of 100 percent.) The relative reflectance of one of the steps in the pattern (relative to that of the paper itself) may be found by subtracting the optical density of the paper from the optical density of the step and raising 10 to the negative of that value:

    R_step / R_paper = 10^(-(D_step - D_paper))

The measurement of the color response curves for a low-cost color scanner is illustrated in fig. 5. As may be seen, the response curves for this particular unit are quite nonlinear, with a definite convex upward shape. This is often referred to as having a gamma, or exponent, of less than one. Note that the straight black line is the least squares fit to the red response. The linear equation and the correlation coefficient for this fit are also shown.

Figure 5. Tonal response curves for a low-cost tabletop scanner
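
If the response is modeled as output = input^gamma, gamma can be estimated from step-wedge measurements by a least-squares fit in log-log coordinates. A minimal sketch (the reflectance and response values below are invented purely for illustration):

```python
import numpy as np

# Hypothetical measurements: step reflectances (0..1) and the
# scanner's normalized responses (values invented for illustration).
reflectance = np.array([0.05, 0.10, 0.20, 0.40, 0.60, 0.80, 1.00])
response    = np.array([0.11, 0.19, 0.32, 0.52, 0.68, 0.83, 1.00])

# Fit response = reflectance ** gamma, i.e., a straight line in
# log-log coordinates; the slope of the fit estimates gamma.
gamma = np.polyfit(np.log(reflectance), np.log(response), 1)[0]
print(f"estimated gamma: {gamma:.2f}")  # below 1 for a convex-upward curve
```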

4.3 Spatial Resolution

Spatial resolution is a measure of the ability to discern fine detail in an image. An image with high resolution will appear to be sharp and in focus. Although the spatial resolution of a scanning system is often considered equivalent to its sampling frequency, it is a distinct metric and should be measured with a suitable test pattern.

Scanning systems having the same sampling frequency and quantization can exhibit quite different spatial resolutions, depending upon focus accuracy, contamination of the sensor or optical elements, vibration in the document transport mechanism, electronic noise introduced before analog to digital conversion, and other factors.

4.3.1 Visual Assessment of Spatial Resolution

For many systems, spatial resolution can be judged visually using simple legibility patterns such as those shown in fig. 6.

Figure 6. Three legibility test patterns

In these test patterns, either a series of parallel black-white line pairs of decreasing spacing (and, hence, increasing spatial frequency) or of converging black-white lines is printed. The numbers printed next to the various pattern elements are the spatial frequencies, expressed in line pairs per millimeter or per inch. The star pattern's circular breaks are at spatial frequencies of 50, 100, and 200 line pairs per inch. Using these test patterns, one can make a rough assessment of spatial resolution by determining that point at which the black lines appear to merge or become virtually indistinguishable from one another.

Several cautionary statements must be issued concerning the use of such patterns. First, it is easy to misinterpret the resulting images because of a phenomenon known as aliasing, in which misleading patterns are caused by interference between the sampling grid and the test pattern. Second, the shape of the tonal response function affects the appearance of the bar patterns. Specifically, high contrast will cause the black-and-white bar pairs to appear sharper than they really are and the image to appear to have a higher resolution. Before attempting a visual comparison between two systems, one should ensure that they have the same tonal range and response function. A third caution is that bar patterns whose frequencies are above the Nyquist limit, i.e., one-half of the sampling frequency, should not be used.

4.3.2 Techniques for the Measurement of Spatial Resolution

One metric of spatial resolution is the spatial frequency response, also known as the modulation transfer function (MTF). The MTF is the amplitude of a linear system's output in response to a sinusoidally varying input signal of unit amplitude. Equivalently, it is the magnitude of the Fourier transform of the system's response to an input signal that is a perfectly sharp, single point of light: the point-spread function of the system.

The MTF describes the response of a linear system to all frequencies (up to the Nyquist limit of one-half the sampling frequency). It can be measured directly, using sine wave modulated patterns, or with a step function image (i.e., a "knife edge" transition), through the Fourier transform of the difference function.
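
The knife-edge approach can be sketched as follows (a simplified illustration assuming a single averaged row of pixel values across a sharp edge; standardized measurements such as ISO 12233 use slanted edges and super-sampling):

```python
import numpy as np

def mtf_from_edge(edge_profile: np.ndarray) -> np.ndarray:
    """Estimate the MTF from a sampled edge spread function (ESF):
    differentiate to obtain the line spread function (LSF), then
    take the magnitude of its Fourier transform, normalized so
    that the zero-frequency response is 1."""
    lsf = np.diff(edge_profile.astype(float))  # ESF -> LSF
    lsf *= np.hanning(lsf.size)                # reduce truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]  # frequencies run from 0 up to the Nyquist limit
```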

Because MTF is a function of spatial frequency, attempts have been made to reduce MTF curves to a single number, such as the modulation transfer function area (MTFA), which is the area under the MTF curve, compensated for the characteristics of the human visual system.

An alternative metric of resolution, and one that is usually simpler to measure because it requires only direct measurement with a high-contrast bar chart, is the contrast transfer function (CTF). The CTF is deemed more susceptible to aliasing errors and other misinterpretations than is the MTF, although methods of detecting such aliasing and determining the MTF by using the CTF have been developed.

See also, Guide 2, Resolution or Modulation Transfer Function.

4.4 Spatial and Temporal Uniformity

Ideally, the response of an image acquisition system to an object having uniform reflectance will be uniform throughout the system's field of view (spatially) and over time (temporally). However, the response of a real imaging system varies over its field of view because of uneven illumination, optical aberrations, and nonuniform levels among the image sensor's elements for the same input light intensity. A system's response may vary over time because of varying illumination levels, electronic noise during digitization, or statistical variations in the numbers of charge carriers (electrons or holes) collected.

Spatial and temporal uniformity may be measured using test patterns of uniform reflectance. Suitable averaging over multiple images can separate the temporally and spatially varying components.

A useful technique to assess the response and illumination variability over the field of view is to acquire an image of a uniform target and to perform histogram equalization on the image. In histogram equalization, the original image's pixel levels are redistributed to achieve a uniform distribution in the output image. The histogram of an image is a graph of the frequency of occurrence of its pixel levels. Spatial variations that are not apparent in the original image often become apparent in the histogram-equalized image. This technique is illustrated in fig. 7. In this case, although the unequalized image appears uniform, unevenness in the illumination is quite apparent in the histogram-equalized image. The top is noticeably darker, there is evidence of smudges or fingerprints on the glass, and there may be a slight darkening of several columns of pixels (i.e., in the slow scan direction) on the right side.

Figure 7. Example of the use of histogram equalization to locate nonuniformities in a scanned image
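
A minimal sketch of histogram equalization for an 8-bit gray-scale image (illustrative only; image-processing packages provide equivalent built-in operations):

```python
import numpy as np

def equalize(gray: np.ndarray) -> np.ndarray:
    """Redistribute the levels of an 8-bit gray-scale image so that
    the output histogram is approximately uniform, making subtle
    spatial nonuniformities easier to see."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to 0..1
    lookup = np.round(cdf * 255).astype(np.uint8)
    return lookup[gray]  # map each pixel through the equalizing curve
```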

4.5 Color Accuracy

Humans perceive color when combinations of wavelengths of visible light strike the retina of their eyes. Many people do not realize, however, that differing combinations of wavelengths can produce the same color sensation.

There are three types of color receptor in the human eye; consequently, we can describe the sensation of color by three values. A color model enables a unique code or set of coordinates, usually three, to be assigned to each perceivable color. We can imagine that each of these coordinates is an axis in a three-dimensional space, and that the range of all perceivable colors fills this space. For example, we can envision a red, green, and blue (RGB) space, such as is used by television displays, to be a cube with sides of unit length, with the origin (0,0,0) representing black and the opposite vertex (1,1,1) representing white.

Displays that produce color by the emission of light are based upon an additive color model, usually with red, green, and blue primaries, although the actual combinations of wavelengths emitted by the RGB primaries are system-dependent. Systems that produce color through the absorption of light (e.g., printing pigments) are based upon a subtractive color model, usually including black and the three colors cyan, magenta, and yellow (CMYK). Again, the particular combinations of wavelengths absorbed are system-dependent.

Color for both emissive and absorptive systems can be measured using a device known as a colorimeter, which mimics the human color response. A colorimeter is a spectrophotometer (a device that measures light intensity as a function of wavelength) with spectral weighting functions that simulate the sensitivity of the eye's color receptors.

4.5.1 Device-Dependent Color

The commonly used additive color RGB coordinate systems of monitors and scanners are device-dependent; that is, the color produced or sensed by a particular combination of RGB coordinates will vary from one system to another. Additive RGB systems cannot encompass all perceivable colors. Similarly, subtractive CMYK coordinate systems, such as those used in most color printing devices, are device-dependent and can render only a limited range of colors. The range of colors that a device is capable of rendering is known as its gamut.

4.5.2 Device-Independent Color

In 1931, the Commission Internationale de l'Éclairage (CIE), or International Commission on Illumination, produced a standard response function (known as the Standard Observer) for color matching based on experimentation with normal subjects viewing colored light sources under carefully controlled conditions. The CIE developed an artificial coordinate system in which the tristimulus values required to match all perceivable colors are made positive, and designated these coordinates X, Y, and Z, which are often normalized to x, y, and z.

    x = X / (X + Y + Z),   y = Y / (X + Y + Z),   z = Z / (X + Y + Z)

Since then, there have been refinements to the Standard Observer, although the x, y, and z values remain the basis for many device-independent representations of color.

Fig. 8 displays three curves in 1931 CIE xy space. (The third dimension, z, may be omitted because x + y + z = 1.) Such a plot is known as a chromaticity diagram. Any color, without its luminance component, is represented by a point in this space.

Figure 8. The color gamuts of a computer monitor (black triangle) and a dye sublimation printer (red irregular hexagon), superimposed upon the Standard Observer curve (blue) in CIE 1931 xy space

The outermost curve in this plot is the locus of all monochromatic wavelengths of visible light (known as the CIE Standard Observer Curve); all perceivable colors lie within it.

The vertices of the triangle represent the coordinates of the primary colors of an additive color device, specifically the colors of the three phosphors of an RGB monitor. The region within the triangle represents the monitor's gamut (i.e., the range of all possible colors that may be displayed by the monitor.) Clearly, many perceivable colors are outside of the triangle and cannot be rendered by this device.

The irregular hexagon represents the gamut of a subtractive color device, specifically a dye-sublimation printer. All colors printable by this device lie within the hexagon. Again, a large range of perceivable colors cannot be rendered by the device. The area of intersection of the triangle and the hexagon represents the range of colors that are viewable on both the monitor and the printer.

4.5.3 Perceptually Uniform Color Space

While the CIEXYZ color space is device-independent and can represent all perceivable colors, it is highly nonuniform in terms of perceptible color shifts. A slight change in the values in one portion of the space may represent only a slight color shift. That same numerical change in another portion of the space may represent a considerably larger color shift. This is a problematic situation if one desires to specify the tolerances with which colors may be rendered.

A color space has been specified by the CIE that includes all of the physically realizable colors and is close to being perceptually uniform; that is, a just-perceptible variation in color is of approximately the same size throughout the space. Its color coordinates, designated L*, a*, and b*, are described in terms of the CIE X, Y, and Z coordinates. The color space is often termed CIELAB. L* is the lightness component, and a* (green to magenta) and b* (blue to yellow) are the chromatic components. A definition of CIELAB color space is provided in Appendix A.

One measure of color difference in CIELAB, known as Delta E, is the simple Euclidean distance between the colors. A Delta E value of 1.0 is usually considered to represent a just-perceptible change in color.
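
In symbols, with component-wise differences between two colors (L_1^*, a_1^*, b_1^*) and (L_2^*, a_2^*, b_2^*):

    \Delta E = \sqrt{(\Delta L^*)^2 + (\Delta a^*)^2 + (\Delta b^*)^2}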

See also, Guide 2, Color Reproduction.

4.5.4 Surface Texture and Specular Reflection

When we see uniformly colored objects under normal room lighting or daylight conditions, we perceive them to be uniformly illuminated and reflecting a uniform amount of light. In actuality, few objects in our surroundings are either uniformly illuminated or have surfaces that only reflect incoming light diffusely. Most surfaces reflect light both diffusely and specularly. Specular (or mirror-like) reflection occurs on smooth surfaces when the angle of incident illumination is close to the angle of reflection.

Surface characteristics such as texture and gloss produce an appearance that is dependent upon the angle of view. A scanner or camera having uniform illumination and a single point of view cannot capture angle-of-view-dependent features. Thus, if only two-dimensional images are considered, there must necessarily be differences between the original and the reproduction for many artworks, and these differences will probably be difficult to quantify. Some of these differences may be of interest to researchers; for example, an art historian may wish to examine brush strokes on an oil painting. To characterize at least some of the surface effects, it may be possible to examine differences between an image obtained with carefully placed, multiple point sources and an image with uniform illumination.

5.0 Color Management and ICC Profiles

The end-to-end management of color in the printing industry has traditionally been as much art as technology. It has required coordination between the designer and the printer of a color document and a feedback loop that allows a designer to inspect and alter, if necessary, the colors in the final product. Over time, a designer would become more familiar with the characteristics of particular printing systems and learn to alter the colors on his or her display to accommodate those characteristics. As computer networks evolved and the display or printing of a document image became more removed from its production, the need arose for device-independent color management in which colors are specified absolutely.

5.1 International Color Consortium

The ICC was formed to develop a specification for a color profile format. The assumption underlying the specification is that any input or output device could be profiled (characterized) to describe the transformations required to convert any image from a device-independent color space to that of the device itself and from the device's color space to the device-independent space.

The ICC was established for the purpose of "creating, promoting and encouraging the standardization and evolution of an open, vendor-neutral, cross-platform color management system architecture and components" ( ICC 1999). The ICC now consists of a group of more than 50 companies that include both manufacturers and users of color imaging devices.

5.2 Profile Connection Space

The conversion (in both directions) between I input color spaces and O output color spaces would seem to require 2 x I x O different conversion functions. However, by using an intermediary color space, in which all perceivable colors can be represented, only 2 x (I + O) different conversion functions are required: a substantial reduction in the number of functions if conversion among many spaces must be performed. This intermediary space can be thought of as a common language, with interpreters required only to translate the common language to and from the languages of each of the input and output spaces.
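
For example, with I = 10 input and O = 10 output color spaces, 2 x (10 + 10) = 40 conversion functions replace the 2 x 10 x 10 = 200 that pairwise conversion would require. The toy sketch below (hypothetical device names and made-up matrices standing in for real profiles) shows the structure: each device contributes one conversion into the intermediary space and one out of it, and any pairing is the composition of two functions.

```python
import numpy as np

# Made-up 3 x 3 linear transforms standing in for real device
# characterizations (hypothetical values, for structure only).
SCANNER_TO_XYZ = np.array([[0.4, 0.3, 0.2],
                           [0.2, 0.6, 0.1],
                           [0.0, 0.1, 0.9]])
MONITOR_TO_XYZ = np.array([[0.5, 0.3, 0.2],
                           [0.3, 0.6, 0.1],
                           [0.0, 0.1, 0.8]])

# One conversion into the intermediary space per input device...
to_pcs = {"scanner_rgb": lambda c: SCANNER_TO_XYZ @ c}
# ...and one out of it per output device.
from_pcs = {"monitor_rgb": lambda c: np.linalg.solve(MONITOR_TO_XYZ, c)}

def convert(color, source, destination):
    """Convert between any input and output color space by way of
    the device-independent intermediary space; each of the I x O
    pairings is the composition of just two functions."""
    return from_pcs[destination](to_pcs[source](color))

print(convert(np.array([0.2, 0.5, 0.3]), "scanner_rgb", "monitor_rgb"))
```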

Using such an intermediary device-independent color space, known as the profile connection space (PCS), the ICC developed a specification for the unambiguous conversion among the many device-dependent color spaces (ICC 1998). The ICC has chosen two PCSs, namely CIEXYZ and CIELAB.

Fig. 9 illustrates this concept for two input and two output devices. A color management system uses information within the profiles, which contain explicit information on the color response characteristics of each of the devices, to convert between the native color spaces for any combination of the input and output devices.

Figure 9. Diagram of the use of a profile connection space to convert between various device-dependent color spaces

Although fig. 9 shows input only via direct digitization (scanner or camera), the scheme is also applicable for input using an intermediary photographic process. If either photographic prints or transparencies are used before digitization, a profile could be prepared that includes such photographic processing. In that case, the color test pattern for the preparation of the profile should be photographed and processed in a manner identical to that used for the objects in the collection.

5.3 ICC Profile Format

Device profiles are explicitly defined data structures that describe the color response functions and other characteristics of an input or output device and provide color management systems with the information required to convert color data between a device's native (i.e., device-dependent) color space and the PCS. The ICC specification divides devices into three broad classifications: input devices, display devices, and output devices. The ICC also defines four additional color processing profile classes: device link, color space conversion, abstract, and named color profiles.

ICC-compliant color profiles are combined ASCII and binary data files that contain a fixed length header, followed by a tag table, followed by a series of tagged elements. The header provides information such as the profile's size, the date and time it was created, the version number, the device's manufacturer and model number, the primary platform on which the profile was created, the profile connection space selected, the input or output data color space, and the rendering intent. The tag table is a table of contents for the tags and the tag element data in the profiles. The tags within the table may be in any order, as may the tagged elements. Both matrix multiplication and look-up table calculation elements may be used for the conversion between native color spaces and the PCS.
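
As an illustration of the fixed-length header, the following sketch (a simplified reader assuming a well-formed profile file; field offsets follow the ICC specification) extracts a few of the fields listed above:

```python
import struct

def read_icc_header(path: str) -> dict:
    """Read selected fields from the fixed 128-byte header of an
    ICC profile file (all values big-endian per the specification)."""
    with open(path, "rb") as f:
        header = f.read(128)
    profile_size, = struct.unpack(">I", header[0:4])
    major, minor = header[8], header[9]
    return {
        "profile_size": profile_size,
        "version": f"{major}.{minor >> 4}",
        "device_class": header[12:16].decode("ascii"),  # e.g., 'scnr', 'mntr', 'prtr'
        "color_space": header[16:20].decode("ascii"),   # device color space
        "pcs": header[20:24].decode("ascii"),           # 'XYZ ' or 'Lab '
        "signature_ok": header[36:40] == b"acsp",       # profile file signature
    }
```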

Color profiles can exist as separate files and may be invoked as needed by a color management system, in which case they are usually placed within specified system-dependent folders. They can also be embedded within several types of image files, notably the Tag(ged) Image File Format (TIFF), the JPEG File Interchange Format (JFIF), and Encapsulated PostScript (EPS). Embedding a profile allows a user to display or print a file's color data without having the profile of the system that created the image stored on the destination system.

5.4 Creation and Editing of Device Profiles

Device profiles may be obtained from a device's manufacturer or, with the use of profile-building software, they may be created by the device's user. The profiles from the manufacturer are usually generic for a specific model and do not account for unit-to-unit variability. Precision color management requires that a system's user create a custom profile and check its accuracy periodically, since a system's color response may differ from the manufacturer's nominal response and may change over time as a consequence of lamp aging, amplifier gain change, phosphor aging, and similar factors. Several commercially available software packages enable users to create and edit profiles for scanners, digital cameras, monitors, and printers.

5.4.1 Scanner Profiling

For scanners, the creation of a profile requires that a calibrated color test target be scanned. The test target most often used is the IT8.7, which is available as a reflective 5" x 7" print (IT8.7/2), a 35-mm slide transparency (IT8.7/1), and a 3" x 4" transparency (IT8.7/3). The IT8.7 contains approximately 250 color swatches. Each physical target should be accompanied by the set of its calibrated color values in a computer file known as a target description file. The profile-generation software compares the values specified for each of the swatches with the input values from the scanner to create the scanner's profile. Fig. 10 displays a reduced-size image of Kodak's version of the IT8.7/2, known as the Q-60R1.

Figure 10. A reduced-size image of Kodak's version of the IT8.7/2
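
In its simplest form, the comparison performed by the profile-generation software amounts to fitting a transform from the scanner's values to the target's calibrated values. The Python sketch below fits a 3 x 3 matrix by least squares, assuming the swatch readings have already been averaged into arrays; a real profiling package would also model the scanner's nonlinear tone response and typically builds look-up tables rather than a single matrix.

    import numpy as np

    def fit_scanner_matrix(rgb, xyz):
        # rgb: N x 3 scanner values for the N swatches (linearized).
        # xyz: N x 3 calibrated CIEXYZ values from the target description file.
        # Solve rgb @ m = xyz in the least-squares sense.
        m, _, _, _ = np.linalg.lstsq(rgb, xyz, rcond=None)
        return m.T   # so that xyz_estimate = matrix @ rgb_vector

    # Synthetic check: recover a known matrix from noiseless data.
    rgb = np.random.rand(250, 3)   # stand-in for ~250 IT8.7 swatch readings
    true_m = np.array([[0.4, 0.3, 0.2],
                       [0.2, 0.7, 0.1],
                       [0.0, 0.1, 0.9]])
    xyz = rgb @ true_m.T
    assert np.allclose(fit_scanner_matrix(rgb, xyz), true_m)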

5.4.2 Printer Profiling

For printers, profile creation requires that a known digital image, containing swatches of various colors, be printed. The resulting output print must then be measured with a colorimeter. The profile-generation software compares the readings from the colorimeter with the values in the digital image to create the printer's profile. The accuracy of the generated profile increases as the number of swatches used increases. Fig. 11 displays an image of a set of 226 color swatches generated by printer profiling software.

Figure 11. A reduced-size image of a set of color swatches printed using printer profiling software

5.4.3 Monitor Profiling

Manufacturers of monitors usually provide either a generic profile or a model-specific profile. Such profiles, which generally provide an adequate level of color correction, may be available on the manufacturer's Web site.

Alternatively, simple profiles for monitors can be custom generated using one of several commercially available programs that measure a monitor's characteristics. Users of these programs must interactively match test swatches with carefully chosen patterns. A printed color card template may be provided that enables a user to compare the monitor's color side-by-side with that seen on the card under ambient illumination. Such programs are often supplied with high-end graphics monitors.

The creation of a more accurate profile for a monitor requires that colors generated by the monitor be measured using a clamp-on colorimeter or spectrophotometer. Profile-building software changes the digital values of the color displayed in a portion of the display, measures the color of that portion with the colorimeter, and creates the profile by comparing the values measured with the values sent.
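
The heart of such a measurement loop is the estimation of the display's transfer function. The following Python sketch estimates the gamma of one channel from hypothetical (digital value, measured luminance) pairs; commercial profile builders also measure black level and the chromaticities of the primaries.

    import math

    def estimate_gamma(samples):
        # samples: (digital value, measured luminance) pairs, with luminance
        # normalized so that the value 255 yields 1.0.  Fits
        # luminance = (value / 255) ** gamma by regression in log-log space.
        pts = [(math.log(v / 255.0), math.log(y)) for v, y in samples if 0 < v < 255]
        mx = sum(x for x, _ in pts) / len(pts)
        my = sum(y for _, y in pts) / len(pts)
        return sum((x - mx) * (y - my) for x, y in pts) / \
               sum((x - mx) ** 2 for x, _ in pts)

    # Simulated readings from a display with gamma 2.2:
    samples = [(v, (v / 255.0) ** 2.2) for v in (32, 64, 96, 128, 160, 192, 224)]
    print(round(estimate_gamma(samples), 2))   # 2.2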

6.0 Managing an Imaging System

This section discusses some of the issues associated with the specification, evaluation, and management of an imaging system, with an emphasis on those issues concerning the description and maintenance of image quality.

6.1 Determination of Requirements

Preparation for the development of an imaging system for any sizable conversion effort should begin with an assessment of the characteristics of the original objects and a determination of the fidelity with which they must be preserved. Curators, preservationists, imaging experts, and potential users of the images might be consulted concerning what they consider an adequate level of detail and color fidelity for their purposes.

See also Guide 1, Developing Appropriate Capture Specifications and Processes.

6.1.1 Preservation of Fine Detail

A detailed visual examination of representative objects should be conducted, aided by magnification devices as appropriate. If the objects are drawn or painted, the investigation might include measurements of the widths of the finest lines. If the objects are silver halide photographs or negatives, the investigation might include measurements of film grain size. (Presumably, film grain detail would not need to be preserved.)

A spatial sampling frequency should be selected that will adequately preserve all relevant details in the master images; that frequency would normally be at least twice the inverse of the width of the smallest detail. However, since the storage volume of the uncompressed images and, most likely, the acquisition time will be proportional to the square of the sampling frequency, a trade-off point must be selected between the level of detail preserved and storage, transmission, and acquisition costs.
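
The trade-off can be made concrete with a little arithmetic. The following Python sketch, using assumed example values, computes the sampling frequency implied by the smallest detail width and the resulting uncompressed file size.

    def capture_estimate(detail_mm, width_mm, height_mm, bytes_per_pixel=3):
        # Two samples across the finest detail (twice the inverse of its width).
        samples_per_mm = 2.0 / detail_mm
        dpi = samples_per_mm * 25.4
        pixels = (width_mm * samples_per_mm) * (height_mm * samples_per_mm)
        return dpi, pixels * bytes_per_pixel / 1e6   # uncompressed size in megabytes

    # A 0.1-mm line on a 200 x 250 mm original, 24-bit color:
    dpi, mb = capture_estimate(0.1, 200, 250)
    print(round(dpi), round(mb))   # 508 dpi, 60 MB; halving detail_mm quadruples the size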

6.1.2 Selection of Number of Levels of Gray or Color

The examination should include using a densitometer and a colorimeter to measure the range of the objects' optical densities and colors. Precision conversion requires that the scanner or camera have a dynamic range and a color gamut that will preserve the full range of the objects' densities and colors. The dynamic range of a scanner or camera will be limited not only by the number of quantization levels but also by internal electronic noise, dark current, and, for shorter exposure times, statistical fluctuations in collected charge. For example, 8 bits per pixel would seem to provide a dynamic range of 256:1. In actuality, the dynamic range is often substantially less because levels near pure black (at 0) and pure white (at 255) are unavailable. Dark current and noise may prevent any level less than about 5 from being meaningful. Saturation at a level of 255 means that pure white should be set somewhat lower (i.e., at about 250). Thus, the dynamic range would be about 50:1 (250÷5) for this example.

The color gamut of a three-color scanner or camera is inherently triangular (in CIE xy space), and most scanners cannot encompass the full range of colors that can be created by diverse pigments and dyes. Plotting the measured colors of the objects and the gamut of a scanner under consideration in CIE xy space will provide an indication of which colors will be preserved and which will not. The distance from the edge of the gamut to colors outside the gamut provides an indication of the degree of color loss. The distance from the gamut could be calculated using Delta E in CIELAB space to provide a more intuitive distance metric.
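
Because the gamut is a triangle in CIE xy space, determining whether a measured chromaticity falls inside it is a simple point-in-triangle test, as the following Python sketch illustrates (the primaries shown are illustrative, sRGB-like values, not those of any particular scanner).

    def in_gamut(p, r, g, b):
        # True if chromaticity p = (x, y) lies inside the triangle whose
        # vertices are the device primaries r, g, and b (each an (x, y) pair).
        def cross(o, a, q):
            return (a[0] - o[0]) * (q[1] - o[1]) - (a[1] - o[1]) * (q[0] - o[0])
        d = (cross(r, g, p), cross(g, b, p), cross(b, r, p))
        # p is inside if it lies on the same side of all three edges.
        return not (min(d) < 0 and max(d) > 0)

    # sRGB-like primaries as an illustrative gamut:
    r, g, b = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)
    print(in_gamut((0.31, 0.33), r, g, b))   # True: near the white point
    print(in_gamut((0.70, 0.25), r, g, b))   # False: a red beyond the gamut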

The number of bits allocated per color will determine the fineness with which color changes can be preserved. If extended regions of slowly varying color are present in the images, it may be advantageous to use a greater than normal number of bits per color. In most cases, 8 bits per color (24 bits per pixel) provides adequately smooth transitions. If not, 12 bits per color (36 bits per pixel) may be required.

6.2 Preparation of Specifications

If a procurement of digital imaging equipment is to be conducted competitively, it is important that detailed, unambiguous specifications be prepared. To the extent possible, the specifications should reference accepted and open standards, such as those prepared by ISO Technical Committees. One or more sets of test patterns should be prepared. Those responsible for acquisition should consider disclosing to competing vendors the designs of the test patterns that will be used for system analysis. If suitably sized test charts appropriate for the conversion effort are not available off-the-shelf, one may have custom charts prepared. Some charts combine all of the required patterns on a single chart.

6.3 Comparative Evaluation of Systems

To the extent possible, image acquisition systems under consideration should be tested before procurement and, preferably, before the development of specifications, using both the test patterns described earlier and representative objects from the collection.

In preparation for a competitive procurement, a set of evaluation factors, along with relative weights for the factors, is typically prepared. The weighting should consider image quality, although it is quite difficult to determine precise relative weights for the contributions of factors such as spatial resolution, color fidelity, and dynamic range to overall image quality. The weights would undoubtedly be content-dependent, and only detailed psychophysical experiments with subjects typical of the final users could provide a quantitative basis for their determination. Nonetheless, a set of weights might be prepared, based upon the opinions of potential users and experts in imaging.
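
Mechanically, the scoring itself is a weighted sum, as the brief Python sketch below illustrates; the factor names, weights, and ratings are invented for illustration only.

    # Hypothetical evaluation factors; weights sum to 1.0, ratings are 0-10.
    weights = {"spatial_resolution": 0.40, "color_fidelity": 0.35, "dynamic_range": 0.25}

    def overall_score(ratings):
        return sum(w * ratings[f] for f, w in weights.items())

    system_a = {"spatial_resolution": 8, "color_fidelity": 6, "dynamic_range": 9}
    system_b = {"spatial_resolution": 6, "color_fidelity": 9, "dynamic_range": 7}
    print(round(overall_score(system_a), 2))   # 7.55
    print(round(overall_score(system_b), 2))   # 7.3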

6.4 Quality Control During Image Production

Ongoing quality control should be part of any large conversion effort. The characteristics of most image acquisition systems change over time, and periodic testing is required to ensure that the characteristics remain within specifications. Such a quality-control process should be distinct from the process of updating color profiles, through which the normal aging of illumination lamps and slowly changing amplifier gain may be accommodated. Characteristics of scanners and cameras that may change over time and that cannot be accommodated by the profiling process include spatial resolution, spatial uniformity, and gray-scale range. As dirt and foreign matter accumulate on a glass platen and on the optical elements and sensor, image quality is degraded. If multiple illumination sources are used, the balance between them may change in a manner for which profiling cannot compensate. Additionally, many factors under operator control may change as a result of shifting priorities or inattention to detail.

Therefore, an imaging system should be tested on a schedule determined by experience: frequently at the beginning, with the interval lengthening as long as the system's characteristics remain stable. It is often possible to scan test patterns during shift changes for the operators or during periodic preventive maintenance. To the extent possible, the analysis of the test patterns should be conducted automatically through software. The analysis software should produce a report that displays the current values for the various image quality factors, along with their specified tolerance limits.
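
Such a report can be as simple as comparing each measured value with its tolerance band. The following Python sketch illustrates the idea; the quality factors and limits shown are invented for illustration.

    # Hypothetical quality factors: measured value and (low, high) tolerance limits.
    measurements = {
        "mtf_at_half_nyquist": (0.42, (0.40, 1.00)),
        "gray_patch_error":    (3.10, (0.00, 2.50)),
        "corner_uniformity":   (0.95, (0.90, 1.00)),
    }

    def qc_report(measurements):
        for name, (value, (low, high)) in measurements.items():
            status = "PASS" if low <= value <= high else "FAIL"
            print(f"{name:22s} {value:6.2f}  limits [{low:.2f}, {high:.2f}]  {status}")

    qc_report(measurements)   # flags gray_patch_error as out of tolerance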

7.0 The User's Perspective

For the final user of the images, an important issue is whether the digital images retrieved are represented as faithfully as possible when displayed or printed. To ensure that they are, the user's display or printing system must be properly calibrated and profiled and the driver software must transmit the images to the display or printer in the format appropriate to the profile.

7.1 Measurement of Image Quality for Displays

The image quality of a computer display may be assessed visually using digitally generated test patterns similar to those described earlier for scanners and cameras. Test patterns for measuring spatial resolution, number of discernible levels of gray and color, and geometric accuracy may be easily devised using graphics or image-editing software. While scanned images of printed test patterns can be used in lieu of computer-generated test patterns, it should be remembered that the degradation associated with the scanning process may be difficult to separate from that associated with the display.
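
For example, a gray-step wedge for counting discernible levels can be generated in a few lines. The following Python sketch assumes the Pillow imaging library is available; any graphics package could be used instead.

    from PIL import Image

    def gray_step_wedge(steps=32, step_width=40, height=200):
        # A horizontal wedge of `steps` gray bands running from black to white.
        # Adjacent bands that cannot be distinguished indicate lost levels.
        img = Image.new("L", (steps * step_width, height))
        for i in range(steps):
            level = round(i * 255 / (steps - 1))
            img.paste(level, (i * step_width, 0, (i + 1) * step_width, height))
        return img

    gray_step_wedge().save("step_wedge.png")   # view at 100% zoom on the display under test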

Particularly for CRTs, image quality varies as a function of position on the display. Achieving a uniformly high-resolution image for all regions on the display requires precise control of the CRT's electron beams. Advanced electron beam forming-and-deflection techniques are used in high-end color monitors to ensure uniform spot size and intensity.

The display adapter card should be configured to take maximum advantage of the monitor's capabilities. Even when the best monitor available is obtained, the images are in the correct format, and the profile is optimal, the settings for the graphics adapter often are not selected to take best advantage of the monitor's capabilities. Many users seem unaware of the capabilities of their display adapter and have chosen a default value for the active area (addressability) or the color palette (number of bits per pixel), thereby limiting the quality of displayed images.

Many users are unaware that the color temperature (that is, the balance among the red, green, and blue outputs) of a monitor may be changed. Most monitors are set up for a color temperature of 9300 K, resulting in a very bright, but quite blue, output. Instead, a color temperature of 6500 K (or 5000 K) might be chosen. That setting should provide output images with colors closer to those seen under daylight or ambient illumination.

If possible, the monitor and display adapter combination should be profiled. This can be done with the aid of commercially available profile-building software and a clamp-on colorimeter. It can also be done, albeit with somewhat less accuracy, with any one of several simple, inexpensive software packages that generate red, green, and blue bar patterns and require the viewer to select the best match to the pattern from a set of uniform colors. Such profiling software thereby effectively determines the gamma of the display for each of the colors. In combination with the known color coordinates for the display's phosphors, a simple display profile can then be generated. Alternatively, generic profiles suitable for most noncritical applications are usually available from the manufacturers of the more commonly used displays.

7.2 Measurement of Image Quality for Printers

The quality of printed images may be measured in a manner similar to that described for scanners and cameras, except that the input test patterns will be precisely generated digital images rather than hard-copy prints or transparencies, and the measurements will be performed visually or with optical instruments. The test patterns can be easily generated using graphics or image-editing packages. The user will probably wish to design patterns for spatial resolution, gray-scale and color levels and range, spatial uniformity, and color fidelity.

The format in which an image is transmitted to a printer and the capabilities of the printer driver software can have a great effect on the quality of the printed images. Users should ensure that the best, most up-to-date driver (usually from the printer's manufacturer) is being used, rather than a driver that was designed to accommodate several varieties of printers. Users should also try to ensure that the driver is using the correct ICC-compliant color profile, that the images are being transmitted from the application to the printer driver in a color space appropriate to the profile, and that all other parameters selected during the image printing are the same as those used during the color profiling.

8.0 Appendix A: Definition of CIELAB Color Space

The following equations define the color space known as CIELAB:

$$L^* = 116\,f(Y/Y_n) - 16$$
$$a^* = 500\,\left[f(X/X_n) - f(Y/Y_n)\right]$$
$$b^* = 200\,\left[f(Y/Y_n) - f(Z/Z_n)\right]$$

where

$$f(t) = \begin{cases} t^{1/3} & t > 0.008856 \\ 7.787\,t + 16/116 & t \le 0.008856 \end{cases}$$

and $X_n$, $Y_n$, and $Z_n$ are the values for the reference white.

One often-used measure of color difference in CIELAB is known as Delta E and is defined by the following formula:

$$\Delta E = \sqrt{(\Delta L^*)^2 + (\Delta a^*)^2 + (\Delta b^*)^2}$$
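
These formulas translate directly into code. The following Python sketch converts CIEXYZ values to CIELAB and computes Delta E, following the equations above (a D65 reference white is assumed for illustration).

    def xyz_to_lab(x, y, z, xn=95.047, yn=100.0, zn=108.883):
        # Convert CIEXYZ to CIELAB; the defaults assume a D65 reference white.
        def f(t):
            return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
        fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
        return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

    def delta_e(lab1, lab2):
        # Euclidean distance in CIELAB.
        return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5

    white = xyz_to_lab(95.047, 100.0, 108.883)   # (100.0, 0.0, 0.0)
    gray = xyz_to_lab(20.0, 21.0, 22.0)
    print(round(delta_e(white, gray), 1))        # a large, clearly visible difference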

