To understand graphics file formats, you need some background in computer graphics. Of course, computer graphics is an enormous subject, and we can't hope to do it justice here. In general, we assume in this book that you are not a novice in this area. However, for those who do not have an extensive background in computer graphics, this chapter should be helpful in explaining the terminology you'll need to understand the discussions of the individual graphics file formats.
In this chapter:
- Pixels and Coordinates
- Pixel Data and Palettes
- Overlays and Transparency
- For Further Information
If you're interested in exploring any of these topics further, far and away the best overall text is Computer Graphics: Principles and Practice by James D. Foley, Andries van Dam, S.K. Feiner, and J.F. Hughes. This is the second edition of the book formerly known throughout the industry as "Foley and van Dam." You'll find additional references in the "For Further Information" section at the end of this chapter.
Locations in computer graphics are stored as mathematical coordinates, but the display surface of an output device is an actual physical object. Thus, it's important to keep in mind the distinction between physical pixels and logical pixels.
Physical pixels are the actual dots displayed on an output device. Each one takes up a small amount of space on the surface of the device. Physical pixels are manipulated directly by the display hardware and form the smallest independently programmable physical elements on the display surface. That's the ideal, anyway. In practice, however, the display hardware may juxtapose or overlay several smaller dots to form an individual pixel. This is true in the case of most analog color CRT devices, which use several differently colored dots to display what the eye, at a normal viewing distance, perceives as a single, uniformly colored pixel.
Because physical pixels cover a fixed area on the display surface, there are practical limits to how close together two adjacent pixels can be. Asking a piece of display hardware to provide too high a resolution--too many pixels on a given display surface--will create blurring and other deterioration of image quality if adjacent pixels overlap or collide.
In contrast to physical pixels, logical pixels are like mathematical points: they specify a location, but are assumed to occupy no area. Thus, the mapping between logical pixel values in the bitmap data and physical pixels on the screen must take into account the actual size and arrangement of the physical pixels. A dense and brightly colored bitmap, for example, may lose its vibrancy when displayed on too large a monitor, because the pixels must be spread out to cover the surface.
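The mapping described above can be sketched in a few lines. This is a simplified illustration, not taken from any particular rendering library: it scales a logical pixel coordinate in a bitmap of one resolution onto the nearest physical pixel of a display with a different resolution, rounding because physical pixels occupy discrete positions.

```python
def logical_to_physical(x, y, src_size, dst_size):
    """Map a logical pixel coordinate (a dimensionless point in a
    src_size bitmap) onto the nearest physical pixel of a dst_size
    display.  Sizes are (width, height) tuples."""
    src_w, src_h = src_size
    dst_w, dst_h = dst_size
    # Scale each axis independently, then round to a whole pixel index,
    # clamping so we never fall off the edge of the display surface.
    px = min(dst_w - 1, round(x * dst_w / src_w))
    py = min(dst_h - 1, round(y * dst_h / src_h))
    return px, py
```

Note that when `dst_size` is much larger than `src_size`, adjacent logical pixels land on physical pixels that are far apart, which is exactly the "spreading out" that costs a dense bitmap its vibrancy on a large monitor.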
Figure 2-1 illustrates the difference between physical and logical pixels.
The number of bits in a value used to represent a pixel governs the number of colors the pixel can exhibit. The more bits per pixel, the greater the number of possible colors. More bits per pixel also means that more space is needed to store the pixel values representing a bitmap covering a given area on the surface of a display device. As technology has evolved, display devices handling more colors have become available at lower cost, which has fueled an increased demand for storage space.
Most modern output devices can display anywhere from two to more than 16 million colors simultaneously, corresponding to 1 to 24 bits of storage per pixel, respectively. Bi-level, or 1-bit, displays use one bit of pixel-value information to represent each pixel, which then can have one of two color states. The most common 1-bit devices are, of course, monochrome monitors and black-and-white printers. Things that reproduce well in black and white--line drawings, text, and some types of clip art--are usually stored as 1-bit data.
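The relationship between bits per pixel, color count, and storage space is simple arithmetic. The following sketch computes both figures for raw, uncompressed pixel data; real files add headers, padding, and often compression on top of this:

```python
def color_count(bits_per_pixel):
    """Number of distinct colors a pixel value of this depth can encode."""
    return 2 ** bits_per_pixel

def bitmap_bytes(width, height, bits_per_pixel):
    """Storage for the raw pixel values alone, ignoring file-format
    headers, scan-line padding, and compression.  Rounds up to a
    whole byte for depths below 8 bits per pixel."""
    return (width * height * bits_per_pixel + 7) // 8
```

For example, a 640x480 image costs 38,400 bytes at 1 bit per pixel but 921,600 bytes at 24 bits per pixel, which illustrates why deeper displays fueled the demand for storage.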
A Little Bit About Truecolor
People sometimes say that the human eye can discriminate among 2^24 (16,777,216) colors, although many fewer colors can be perceived simultaneously. Naturally enough, there is much disagreement about this figure, and the actual number certainly varies from person to person and under different conditions of illumination, health, genetics, and attention. In any case, we each can discriminate among a large number of colors, certainly more than a few thousand. A device capable of matching or exceeding the color-resolving power of the human eye under most conditions is said to display truecolor. In practice, this means 24 bits per pixel, but for historical reasons, output devices capable of displaying 2^15 (32,768) or 2^16 (65,536) colors have also incorrectly been called truecolor.
More recently, the term hicolor has come to be used for displays capable of handling up to 2^15 or 2^16 colors. Fullcolor is a term used primarily in marketing; its meaning is much less clear. (If you find out what it means, exactly, please let us know!)
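Hicolor values are typically packed with five or six bits per channel into a 16-bit word. The two layouts sketched below (often called RGB555 and RGB565) are common conventions rather than a single fixed standard; individual display hardware and file formats vary:

```python
def pack_rgb565(r, g, b):
    """Pack 8-bit-per-channel RGB into a 16-bit hicolor value:
    5 bits of red, 6 of green, 5 of blue (2^16 = 65,536 colors).
    Green gets the extra bit because the eye is most sensitive to it."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def pack_rgb555(r, g, b):
    """15-bit variant: 5 bits per channel (2^15 = 32,768 colors),
    with the top bit of the 16-bit word unused."""
    return ((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3)
```

Packing discards the low bits of each 8-bit channel, which is itself a small quantization step.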
It is frequently the case that the number or actual set of colors defined by the pixel values stored in a file differs from those that can be displayed on the surface of an output device. It is then up to the rendering application to translate between the colors defined in the file and those expected by the output device. There is generally no problem when the number of colors defined by the pixel values found in the file (source) is much less than the number that can be displayed on the output device (destination). The rendering application in this case is able to choose among the destination colors to provide a match for each source color. But a problem occurs when the number of colors defined by the pixel values exceeds the number that can be displayed on an output device. Consider the following examples.
In the first case, 4-bit-per-pixel data (corresponding to 16 colors) is being displayed on a device capable of supporting 24-bit data (corresponding to more than 16 million colors). The output device is capable of displaying substantially more colors than are needed to reproduce the image defined in the file. Thus, colors in the bitmap data will likely be represented by a close match on the output device, provided that the colors in the source bitmap and on the destination device are evenly distributed among all possible colors. Figure 2-2 illustrates this case.
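One common way to promote a 4-bit channel value to the 8 bits expected by a 24-bit device is bit replication, so that the minimum and maximum values map exactly to the destination's extremes. This is a sketch of one widely used convention, not the only possible mapping:

```python
def expand_4bit_to_8bit(value):
    """Promote a 4-bit channel value (0-15) to 8 bits (0-255) by
    replicating the four bits into the low half of the byte.
    This maps 0 -> 0 and 15 -> 255 exactly, so the full destination
    range is used."""
    return (value << 4) | value
```

Because the destination has far more colors than the source defines, every source color finds a close (here, exact) match, and no information is lost.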
This process, called quantization, results in a loss of data. For source images containing many colors, quantization can cause unacceptable changes in appearance, which are said to be the result of quantization artifacts. Examples of common quantization artifacts are banding, moiré patterns, and the introduction of new colors into the destination image that were not present in the source data. Quantization artifacts have their uses, however; one type of quantization process, called convolution, can be used to remove spurious noise from an image and to actually improve its appearance. On the other hand, it can also change the color balance of a destination image from that defined by the source data.
In the next case, the output device can display fewer colors than are defined by the source data. A common example is the display of 8-bit-per-pixel data (corresponding to 256 colors) on a device capable of displaying 4-bit data (corresponding to 16 colors). In this case, there may be colors defined in the bitmap which cannot be represented on the 4-bit display. Thus, the rendering application must work to match the colors in the source and destination. At some point in the color conversion process, the number of colors defined in the source data must be reduced to match the number available on the destination device. This reduction, or quantization, step is illustrated in Figure 2-3.
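The simplest form of the matching step described above picks, for each source color, the destination palette entry at the smallest RGB distance. This is a bare-bones sketch; production renderers typically combine nearest-color matching with dithering to mask the resulting artifacts:

```python
def nearest_palette_index(color, palette):
    """Return the index of the palette entry closest to `color`,
    measured by squared distance in RGB space.  Colors are
    (r, g, b) tuples of 8-bit values."""
    def dist2(c1, c2):
        return sum((a - b) ** 2 for a, b in zip(c1, c2))
    return min(range(len(palette)), key=lambda i: dist2(color, palette[i]))

def quantize(pixels, palette):
    """Reduce a list of source colors to indices into a smaller
    destination palette.  This step loses data: distinct source
    colors may collapse onto the same palette entry."""
    return [nearest_palette_index(p, palette) for p in pixels]
```

RGB distance is a crude stand-in for perceptual difference; more careful implementations measure distance in a perceptually uniform color space instead.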
Copyright © 1996, 1994 O'Reilly & Associates, Inc. All Rights Reserved.