The reflected light from your fig bar, when seen by your eye, is interpreted as color. Billions of photons enter your eye and are focused onto the back of your eye, where the retina acts as a sort of photographic plate. The retina’s millions of cone cells are excited when struck by photons, and this causes neural signals to travel to your brain, which interprets the information as light and color. The more photons that strike the cone cells, the more excited they get. This level of excitation is interpreted by your brain as the brightness of the light, which makes sense: the brighter the light, the more photons there are to strike the cone cells.
The eye has three kinds of cone cells. All of them respond to photons, but each kind responds most strongly to a particular range of wavelengths. One is most excited by photons with reddish wavelengths, one by green wavelengths, and one by blue wavelengths. Thus light that is composed mostly of red wavelengths will excite the red-sensitive cone cells more than the others, and your brain receives the signal that the light you are seeing is mostly reddish. A combination of different wavelengths at various intensities will, of course, yield a mix of colors. Light in which all wavelengths are equally represented is thus perceived as white, and the absence of light at any wavelength is perceived as black.
You can see that any “color” your eye perceives is actually made up of light from all over the visible spectrum. The “hardware” in your eye detects what it sees in terms of the relative concentrations and intensities of red, green, and blue light. Figure 8-4 shows how brown is composed of a mix of 60% red, 40% green, and 10% blue photons.
It makes sense that when we wish to generate a color with a computer, we do so by specifying separate intensities for red, green, and blue components of the light. It so happens that color computer monitors are designed to produce three kinds of light (can you guess which three?), each with varying degrees of intensity. In the back of your computer monitor is an electron gun that shoots electrons at the back of the screen you view. This screen contains phosphors that emit red, green, and blue light when struck by the electrons. The intensity of the light emitted varies with the intensity of the electron beam. These three color phosphors are then packed closely together to make up a single physical dot on the screen. See Figure 8-5.
You may recall from Chapter 3 that OpenGL defines a color exactly as intensities of red, green, and blue with the glColor command. Here we cover the two color modes supported by OpenGL more thoroughly.
There once was a time when state-of-the-art PC graphics hardware meant the Hercules graphics card. This card could produce bitmapped images with a resolution of 720 × 348. The drawback was that each pixel had only two states: on and off. At that time, bitmapped graphics of any kind on a PC was a big deal, and you could produce some great monochrome graphics. Your author even did some 3D graphics on a Hercules card back in college.
Actually predating the Hercules card was the CGA card, the Color Graphics Adapter. Introduced with the first IBM PC, this card could support resolutions of 320 × 200 pixels and could place any four of 16 colors on the screen at once. A higher resolution (640 × 200) with two colors was also possible, but wasn’t as effective or cost-conscious as the Hercules card (color monitors = $$$). CGA was puny by today’s standards; it was even outmatched then by the graphics capabilities of a $200 Commodore 64 or Atari home computer. Lacking adequate resolution for business graphics or even modest modeling, CGA was used primarily for simple PC games or business applications that could benefit from colored text. Generally, though, it was hard to make a good business justification for this more expensive hardware.
The next big breakthrough for PC graphics came when IBM introduced the Enhanced Graphics Adapter (EGA) card. This one could do more than 25 lines of colored text in new text modes, and for graphics could support 640 × 350-pixel bitmapped graphics in 16 colors! Other technical improvements eliminated some flickering problems of the CGA ancestor and provided for better and smoother animation. Now arcade-style games, real business graphics, and even 3D graphics became not only possible but even reasonable on the PC. This advance was a giant move beyond CGA, but still PC graphics were in their infancy.
The last mainstream PC graphics standard set by IBM was the VGA card (which stood for Video Graphics Array rather than the commonly held Video Graphics Adapter). This card was significantly faster than the EGA, could support 16 colors at a higher resolution (640 × 480), and could display 256 colors at a lower resolution of 320 × 200. These 256 colors were selected from a palette of over 16 million possible colors. That’s when the floodgates opened for PC graphics. Near photo-realistic graphics became possible on PCs. Ray tracers, 3D games, and photo-editing software began to pop up in the PC market.
IBM also had a high-end graphics card, the 8514, for its “workstations.” This card could display 1024 × 768 graphics at 256 colors. IBM thought this card would be used only by CAD and scientific applications! But one thing is certain about the consumer market: it always wants more. It was this shortsightedness that cost IBM its role as standard-setter in the PC graphics market. Other vendors began to ship “Super-VGA” cards that could display higher and higher resolutions with more and more colors: first 800 × 600, then 1024 × 768 and even higher, first with 256 colors, then 32,000, then 65,000. Today 24-bit color cards can display 16 million colors at resolutions up to 1024 × 768. Inexpensive PC hardware can support full color at VGA resolutions, or at 800 × 600 Super-VGA resolutions. Most Windows PCs sold today can support at least 65,000 colors at resolutions of 1024 × 768.