Computer images are merely a collection of coloured pixels on the screen. But in the binary language of computers, labels such as "red" or "purple" have no meaning. So how do computers identify colours? The answer is that every piece of visual hardware or software uses some kind of "colour space" - a model for representing colours...
The most common colour space you'll encounter is RGB, because it's the way your colour monitor works. Your monitor projects various intensities of red, green and blue light onto a screen - thus the term RGB - to produce the full range of hues and tones. RGB identifies every instance of colour by three numbers, called "channels". These specify the intensity of red, green and blue as a number from 0 (dark) to 255 (full intensity).
You can combine these channels to make new colours in the same way you would mix paints. Red and green light together make yellow; green and blue make cyan; and blue and red make magenta. Pairing unequal values creates the intermediate colours in between (e.g. orange is red with a bit of green).
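As a quick sketch, additive mixing can be modelled by simply adding channel values - a simplified illustration, not how any particular graphics library works:

```python
# Additive RGB mixing: each channel ranges from 0 (dark) to 255 (full intensity).
RED   = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE  = (0, 0, 255)

def mix(a, b):
    """Combine two lights by adding channel intensities (clamped at 255)."""
    return tuple(min(255, x + y) for x, y in zip(a, b))

yellow  = mix(RED, GREEN)    # (255, 255, 0)
cyan    = mix(GREEN, BLUE)   # (0, 255, 255)
magenta = mix(BLUE, RED)     # (255, 0, 255)
orange  = (255, 128, 0)      # red with a bit of green
```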
Colour combinations like the ones above produce pure, bright hues. Using equal values in all three channels produces neutrals ranging from black (all channels at 0) to white (all channels at 255). So colour neutralises as the RGB values approach equivalence: raising the weaker channels towards the strongest adds "white", creating a pale tint; reducing the strongest channels adds "black", creating a dark shade. As you become accustomed to working with RGB, you'll develop an intuitive sense of the values needed for a given colour.
Most graphics tools also let you use HSB (hue-saturation-brightness), which follows the paint-mixing metaphor more directly. Hue is a position on a 360-degree colour wheel, with red at 0, green at 120 and blue at 240. Saturation and brightness are both percentages: 100 per cent gives a pure hue, while adding white (which lowers saturation) or black (which lowers brightness) reduces them towards 0.
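Python's standard colorsys module implements this model (under the name HSV); a small sketch converting the degree-and-percentage values described above into 8-bit RGB channels:

```python
import colorsys

def hsb_to_rgb(h, s, b):
    """Convert hue (0-360 degrees), saturation and brightness (0-100%)
    into 8-bit RGB channel values."""
    r, g, bl = colorsys.hsv_to_rgb(h / 360.0, s / 100.0, b / 100.0)
    return tuple(round(c * 255) for c in (r, g, bl))

hsb_to_rgb(0, 100, 100)    # pure red   -> (255, 0, 0)
hsb_to_rgb(120, 100, 100)  # pure green -> (0, 255, 0)
hsb_to_rgb(240, 100, 100)  # pure blue  -> (0, 0, 255)
```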
CMYK defines a colour by the amount of cyan, magenta, yellow and black pigment needed to produce it on paper. The four CMYK channels describe colours more precisely than most monitors can display, so it's used for high-quality print work.
RGB measures each channel from 0 to 255 because that's the range you get from eight bits of data, and eight bits make a byte. The amount of data used to represent a colour is called "colour depth".
Colour depth is important in two respects when working with graphics for the Web: the colour depth of your monitor and the colour depth of the files you use to store your images. Monitor colour depth depends on the capacity your display hardware supports and how the software drivers are configured. Your operating system usually provides some sort of control panel to set the display colour depth. File colour depth depends on the file format you use to store your graphics.
Since typical RGB uses three eight-bit channels, the total is a 24-bit colour depth. When available, full 24-bit colour is called "true colour". A true-colour monitor displays every pixel's colour exactly. The option often appears as Millions of Colours in monitor settings, since three eight-bit channels yield 16,777,216 RGB combinations. Likewise, a true-colour image file records the full range of colours precisely.
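A minimal sketch of how three eight-bit channels combine into one 24-bit value:

```python
def pack_rgb(r, g, b):
    """Pack three 8-bit channels into one 24-bit integer."""
    return (r << 16) | (g << 8) | b

# Three 8-bit channels give 2**24 distinct values:
total = 2 ** 24                    # 16,777,216 -> "Millions of Colours"
white = pack_rgb(255, 255, 255)    # 0xFFFFFF
```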
True colour allows more hues than the eye can distinguish, so most operating systems offer the option of 16-bit high colour (Thousands of Colours on Macintosh). In high colour, the monitor actually displays only 32 distinct levels of red, 64 of green and 32 of blue. The visual difference is almost unnoticeable, but reducing the colour depth to 16 bits per pixel boosts video performance. And running your computer system in high colour won't affect your image data: most applications, such as Photoshop or a Web browser, still use the full 24-bit values. The data gets rounded off only when displayed on the monitor. That's why there are very few high-colour image files.
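The rounding involved can be sketched as follows, assuming the common "5-6-5" layout for a 16-bit pixel (five bits of red, six of green, five of blue):

```python
def to_high_colour(r, g, b):
    """Round 8-bit channels down to a 16-bit '5-6-5' high-colour pixel:
    32 levels of red, 64 of green, 32 of blue."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def from_high_colour(pixel):
    """Expand a 5-6-5 pixel back to approximate 8-bit channels."""
    r = (pixel >> 11) & 0x1F
    g = (pixel >> 5) & 0x3F
    b = pixel & 0x1F
    return (r << 3, g << 2, b << 3)

# The round trip shows the rounding loss: pure white comes back slightly off.
from_high_colour(to_high_colour(255, 255, 255))  # (248, 252, 248)
```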
Older, less powerful computer hardware and certain file formats can handle only eight bits per pixel. An eight-bit colour range is rather small for three channels, so eight-bit environments use indexed colour. With indexed colour images, the system or image file maintains a colour table or "palette" of up to 256 colours. The eight-bit value for each pixel identifies which of those colours to use - the computer equivalent of painting by numbers. Indexed colour lets eight-bit displays and images simulate true colour, since the palette colours themselves are 24 bits deep.
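The painting-by-numbers idea can be sketched in a few lines (the palette here is hypothetical; real palettes hold up to 256 entries):

```python
# A hypothetical four-colour palette; each entry is a full 24-bit colour.
palette = [
    (0, 0, 0),        # index 0: black
    (255, 255, 255),  # index 1: white
    (255, 0, 0),      # index 2: red
    (0, 128, 255),    # index 3: sky blue
]

# An indexed image stores one small index per pixel, not three channels.
pixels = [0, 1, 2, 3, 2, 1]

# Displaying the image means looking each index up in the palette.
decoded = [palette[i] for i in pixels]
```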
Dithering and antialiasing
Applications that create or display graphics often bump up against the limits of the hardware they run on. Images may have more colours than the monitor can show or details too small for the pixels to render. That's where dithering and antialiasing come in.
Monitors and image files limited to 256 colours can create the illusion of more colours by dithering the available colours in a diffuse pattern of pixels, approximating the desired colour. Dithering is used by operating systems and display applications, such as Web browsers, running on eight-bit monitors. Image editors use dithering to convert true-colour images to indexed colours. Because it can look bad in some situations, most image editors make dithering an option. The alternative to dithering is colour substitution, which uses the closest colour on the palette.
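Both approaches can be sketched on a single greyscale scanline; the error-diffusion function below is a simplified, one-dimensional cousin of the Floyd-Steinberg dithering real image editors use:

```python
def substitute(row, levels=(0, 255)):
    """Colour substitution: snap each pixel to the closest palette level."""
    return [min(levels, key=lambda v: abs(v - p)) for p in row]

def dither(row, levels=(0, 255)):
    """One-dimensional error diffusion: carry each pixel's rounding error
    over to the next pixel, so mid-greys become a mix of black and white."""
    out, error = [], 0.0
    for p in row:
        target = p + error
        nearest = min(levels, key=lambda v: abs(v - target))
        error = target - nearest
        out.append(nearest)
    return out

substitute([100, 100, 100, 100])  # [0, 0, 0, 0] -- flat, and too dark
dither([100, 100, 100, 100])      # mixes 0s and 255s to approximate grey
```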
All computers and printers, regardless of colour depth, render pixels in a grid. This creates problems for images that aren't grid-shaped. The jagged, stair-stepped effect of these strict pixel boundaries is called "aliasing", so many applications use antialiasing to smooth out the image. Antialiasing interpolates colours where two regions meet, creating the illusion of smooth nonhorizontal or nonvertical boundaries.
Antialiased type appears smoother and more legible than pixelated aliased type; antialiased images typically look less blocky and more professional. Image editors usually offer an antialias option for most operations. Just bear in mind that antialiased images tend to require more colours to create the interpolated regions.
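The interpolation at a boundary can be sketched as a coverage-weighted blend - a simplified model of what image editors do for each edge pixel:

```python
def blend(fg, bg, coverage):
    """Antialias an edge pixel: interpolate between the shape's colour and
    the background according to how much of the pixel the shape covers."""
    return tuple(round(f * coverage + b * (1 - coverage))
                 for f, b in zip(fg, bg))

black, white = (0, 0, 0), (255, 255, 255)
blend(black, white, 1.0)   # fully covered -> (0, 0, 0)
blend(black, white, 0.5)   # half covered  -> mid-grey (128, 128, 128)
```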
Colour matching and gamma correction
One problem with the RGB colour model is that it measures colour relative to the hardware being used at the time. A common complaint among designers - and their clients - is that graphics developed on one platform don't look the same on another. For example, an image that looks great on a PC may appear pale or washed out on a Macintosh.
The problem is that not all monitors are alike, and it goes deeper than ambient light or the brightness knob. The relation between RGB values and the actual colour displayed on the screen is almost never linear. For example, a red channel set to 200 should theoretically be twice as bright as a red channel set to 100, but it usually isn't. And the actual relation, called gamma, varies from computer to computer; so even if one colour matches, most of the rest won't.
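A sketch of the effect, using illustrative gamma values (PC displays were typically around gamma 2.2, Macintosh displays around 1.8):

```python
def apply_gamma(channel, gamma):
    """Simulate how a monitor with the given gamma displays an 8-bit channel:
    the displayed brightness is proportional to (value/255) ** gamma."""
    return round(255 * (channel / 255) ** gamma)

apply_gamma(128, 1.0)  # an ideal linear display: 128 stays 128
apply_gamma(128, 2.2)  # a typical PC: mid-grey displays much darker
```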
The images below simulate the differing gamma effects of the PC and the Macintosh:
Raster vs. vector
No, it's not some ancient Greek family tragedy. When you start working directly with image files, the way the image data is recorded determines your options for changing it.
On a computer monitor images are nothing more than variously coloured pixels. Certain kinds of image-file formats record images literally in terms of the pixels to display. These are raster images and you can edit them only by altering the pixels directly with a bitmap editor. Photoshop and Paint Shop Pro are two of the most popular bitmap editors.
Vector image files record images descriptively in terms of geometric shapes. These shapes are converted to bitmaps for display on the monitor. Vector images are easier to modify because the components can be moved, resized, rotated, or deleted independently. PostScript is a popular vector format for printing, but, so far, Macromedia's Flash is the closest thing to a standard vector format on the Web. In an attempt to make it an industry-wide standard, Macromedia opened its Flash file format in April 1998, making it freely available to content and tools developers. The only W3C-supported vector format still under development is Scalable Vector Graphics (SVG).
This distinction can loom large - for example, when clients or co-workers ask you to alter the text on an image. Chances are the image is stored in a raster-format file, so the change won't be as easy as they think: you'll have to alter the wording by changing the individual pixels themselves. Bear this in mind when creating images you might have to modify later.
True vs. Web image formats
Any file that is stored on a computer or sent over the Internet is in a specific format. Images are no exception and there are a wide variety of image formats in use today. Your choice of image format is based on a variety of factors such as whether you plan on editing the image in the future, whether you want the smallest possible image for downloading over the Web, or what image editing tools you have at your disposal.
When you want to save or keep a copy of an image for further editing, you need to pick a format that records the image correctly, without losing any details. These are typically called true image formats . As long as you store your original images in a true image format, you can reedit them later without losing any quality.
However, true image formats tend to have large file sizes making them unsuitable for sending over the Internet. For Web images, you want to pick a format that will result in the smallest possible file size. The two most common today are the Graphics Interchange Format (GIF) and the Joint Photographic Experts Group (JPEG). The key is that both of these formats compromise the image for the sake of compression, so you shouldn't use them for original artwork you may want to modify later. (The exception to this is an image with no more than 256 colours, which can be safely stored as a GIF.) Most image editors offer a Save As or Export command to let you safely create separate GIF or JPEG versions for posting on the Web, saving the original in a true image format.
True image formats
A true image format accurately stores an image for future editing. There are dozens, if not hundreds, of existing true image formats and picking the right one depends on which editing tools you plan on using, as well as whether you need to share the files with others who might use a different set of tools.
Every major computer operating system has its own native image format. Applications written for a given operating system are almost guaranteed to support that format, so you can play it safe if someone needs the image and you know the platform they use. Windows and OS/2 use the BMP format, while Macintosh prefers the PICT format. Unix has less of a standard but X Windows and similar interfaces favour XWD files. All of these formats support full 24-bit colour but can also compress images with sufficiently few colours into eight-bit, four-bit, or even one-bit indexed colour images.
TIFF (Tagged Image File Format) is a loss-free, 24-bit colour format intended for cross-platform use and tends to be accepted by most image editors on most systems. The only drawback is that TIFF has evolved into several incompatible versions, so different image editors may not be able to read each other's TIFF files. But recent versions of popular applications such as Photoshop and CorelDraw should have no problem.
By far the most promising loss-free format is PNG, the Portable Network Graphic. It accurately compresses 24- or even 32-bit colour images - the latter of which are 24-bit images with an added eight-bit alpha, or transparency, channel. It also indexes images with 256 or fewer colours for further compression and supports gamma correction. Best of all, it's intended to be a Web format. Although only the most recent applications properly read or create PNGs, the 4.0 browsers already support the format, albeit incompletely.
Web image format: GIF
CompuServe's GIF (Graphics Interchange Format) compresses images in two ways: first, it uses Lempel-Ziv-Welch (LZW) encoding, which records repeated patterns of like-coloured pixels as single units. Second, it limits itself to indexed colour. This means that a GIF can have no more than 256 colours, so you may have to reduce the colours in your images to use it. That's why GIF doesn't work well for photographic or high-colour images.
GIFs with sufficiently few colours realize greater compression: 128 or fewer colours can be referenced with seven-bit data; 64 or fewer with six-bit data; and so on, down to a one-bit, two-colour GIF. This makes GIF an optimal format for simple line art, and it means that adding or removing even a few colours can change the file size.
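The relationship between palette size and bits per pixel can be sketched as:

```python
import math

def gif_bits_per_pixel(n_colours):
    """Bits per pixel needed to index a palette of n_colours (1-256)."""
    return max(1, math.ceil(math.log2(n_colours)))

gif_bits_per_pixel(256)  # 8
gif_bits_per_pixel(128)  # 7
gif_bits_per_pixel(2)    # 1
```

Note the thresholds: a 65th colour pushes the image from six-bit to seven-bit data, which is why trimming a palette just below a power of two can pay off.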
GIF has a few unique features. A GIF file can contain several images and a duration value for each one to produce animations. It also has limited transparency: one colour in an image's palette can be designated as such. This is an either/or arrangement; pixels with colours close to the transparent one will not be partially transparent.
Web image format: JPEG
The JPEG (Joint Photographic Experts Group) format supports full 24-bit colour. It compresses images by accurately recording the brightness of each pixel but averaging out the hues, which our eyes distinguish less accurately. In effect, it records a description of an image, not the literal composition of that image. The viewer's Web browser or graphics application decodes this description into a bitmap that looks more or less like the original image.
The accuracy of the reconstructed image depends on how much compression is applied - a value you can choose in most JPEG-savvy image-editing tools. The decoded hues are rendered in sample blocks with diffused shapes. Since these blocks tend to overlap, it's very difficult - and takes a lot of data - to produce a distinct boundary between colours. But this technique works very well for photographic images with gradual colour changes and no sharp edges. Tropical birds, for example, are particularly well suited to the JPEG format. On the down side, JPEGs are notoriously difficult to edit. If you open a JPEG and modify it, you're modifying the interpreted bitmap rather than the JPEG data itself. Resaving as a JPEG will put the interpreted bitmap, defects and all, back through the encoding process, and the resulting image will be further degraded. Never resave a JPEG if you don't have to.
One more caveat: for high-quality printing, the JPEG format supports pixel resolutions beyond 72 dots per inch (dpi). On the Web, anything over 72 dpi is a waste - there's no benefit to higher resolutions as there is when printing onto paper. When saving an image as a JPEG, be sure to double-check the resolution of the image.