An important concept throughout this chapter is that of eye coordinates. Eye coordinates are from the viewpoint of the observer, regardless of any transformations that may occur—think of them as “absolute” screen coordinates. Thus, eye coordinates are not real coordinates, but rather represent a virtual fixed coordinate system that is used as a common frame of reference. All of the transformations discussed in this chapter are described in terms of their effects relative to the eye coordinate system.
Figure 7-1 shows the eye coordinate system from two viewpoints. On the left (a), the eye coordinates are represented as seen by the observer of the scene (that is, perpendicular to the monitor). On the right (b), the eye coordinate system is rotated slightly so you can better see the relation of the z-axis. Positive x and y are pointed right and up, respectively, from the viewer’s perspective. Positive z travels away from the origin toward the user, and negative z values travel farther away from the viewpoint into the screen.
When you draw in 3D with OpenGL, you use the Cartesian coordinate system. In the absence of any transformations, the system in use would be identical to the eye coordinate system. All of the various transformations change the current coordinate system with respect to the eye coordinates. This, in essence, is how you move and rotate objects in your scene—by moving and rotating the coordinate system with respect to eye coordinates. Figure 7-2 gives a two-dimensional example of the coordinate system rotated 45° clockwise with respect to eye coordinates. A square plotted on this rotated coordinate system would also appear rotated.
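The mapping from the rotated coordinate system back to eye coordinates follows the standard 2D rotation formulas. The sketch below (plain Python rather than OpenGL calls, with a hypothetical `rotate_cw` helper) shows where a square's corner at (1, 0) in the 45°-clockwise-rotated system lands in eye coordinates:

```python
import math

def rotate_cw(x, y, degrees):
    """Rotate a 2D point clockwise about the origin."""
    t = math.radians(degrees)
    return (x * math.cos(t) + y * math.sin(t),
            -x * math.sin(t) + y * math.cos(t))

# A corner at (1, 0) in the 45-degree-rotated system, expressed in eye coords:
ex, ey = rotate_cw(1.0, 0.0, 45.0)
print(ex, ey)  # approximately (0.7071, -0.7071)
```

Every corner of the square maps the same way, which is why the whole square appears rotated.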
In this chapter you’ll study the methods by which you modify the current coordinate system before drawing your objects. You can even save the state of the current system, do some transformations and drawing, and then restore the state and start over again. By chaining these events, you will be able to place objects all about the scene and in various orientations.
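In the fixed-function pipeline, this save-and-restore mechanism is exposed through the glPushMatrix and glPopMatrix calls, which stack full 4×4 matrices. The pattern itself can be sketched with a toy stack in which the "current coordinate system" is reduced to a simple 2D offset (the `push`, `pop`, and `translate` helpers here are illustrative stand-ins, not OpenGL calls):

```python
stack = []              # saved coordinate systems
current = [0.0, 0.0]    # the "current" system, reduced to an offset

def push():
    # Save a copy of the current coordinate system.
    stack.append(list(current))

def pop():
    # Restore the most recently saved coordinate system.
    current[:] = stack.pop()

def translate(dx, dy):
    # Modeling transformation: move the current system.
    current[0] += dx
    current[1] += dy

translate(10.0, 0.0)    # place a parent object
push()                  # remember this coordinate system
translate(0.0, 5.0)     # move again to place a child object
child = tuple(current)  # (10.0, 5.0)
pop()                   # restore the parent's system and start over
parent = tuple(current) # (10.0, 0.0)
```

Chaining pushes and pops this way is exactly how objects are placed "all about the scene" from independent starting points.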
The viewing transformation is the first to be applied to your scene. It is used to determine the vantage point of the scene. By default, the point of observation is at the origin (0,0,0), looking down the negative z-axis (“into” the monitor screen). This point of observation is moved relative to the eye coordinate system to provide a specific vantage point. When the point of observation is located at the origin, objects drawn with positive z values are behind the observer.
The viewing transformation allows you to place the point of observation anywhere you want, looking in any direction. Determining the viewing transformation is like placing and pointing a camera at the scene.
In the scheme of things, the viewing transformation must be specified before any other transformations. This is because it moves the current working coordinate system with respect to the eye coordinate system. All subsequent transformations then occur based on the newly modified coordinate system. You’ll see more easily how this works later, when we actually start making these transformations.
Modeling transformations are used to manipulate your model and the particular objects within it. This transformation moves objects into place, rotates them, and scales them. Figure 7-3 illustrates three modeling transformations that you will apply to your objects. Figure 7-3a shows translation, where an object is moved along a given axis. Figure 7-3b shows a rotation, where an object is rotated about one of the axes. Finally, Figure 7-3c shows the effects of scaling, where the dimensions of the object are increased or decreased by a specified amount. Scaling can occur nonuniformly (the various dimensions can be scaled by different amounts), and this can be used to stretch and shrink objects.
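In the fixed-function pipeline these three transformations correspond to the glTranslatef, glRotatef, and glScalef calls. Their effects can be illustrated on a single point with plain Python helpers (not OpenGL calls):

```python
import math

def translate(p, dx, dy, dz):
    # Move a point along the axes (Figure 7-3a).
    x, y, z = p
    return (x + dx, y + dy, z + dz)

def rotate_z(p, degrees):
    # Rotate a point about the z-axis (Figure 7-3b).
    t = math.radians(degrees)
    x, y, z = p
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t),
            z)

def scale(p, sx, sy, sz):
    # Scale each dimension independently; unequal factors give
    # the nonuniform stretching described above (Figure 7-3c).
    x, y, z = p
    return (x * sx, y * sy, z * sz)

p = (1.0, 0.0, 0.0)
print(translate(p, 0.0, 2.0, 0.0))  # moved 2 units along y
print(rotate_z(p, 90.0))            # approximately (0, 1, 0)
print(scale(p, 2.0, 1.0, 1.0))      # stretched along x only
```

Applying the same function to every vertex of an object transforms the object as a whole.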
The final appearance of your scene or object can depend greatly on the order in which the modeling transformations are applied. This is particularly true of translation and rotation. Figure 7-4a shows the progression of a square rotated first about the z-axis and then translated down the newly transformed x-axis. In Figure 7-4b, the same square is first translated down the x-axis and then rotated around the z-axis. The difference in the final dispositions of the square occurs because each transformation is performed with respect to the last transformation performed. In Figure 7-4a, the square is rotated with respect to the origin first. In 7-4b, after the square is translated, the rotation is then performed around the newly translated origin.
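The effect of ordering can be checked numerically. Using a hypothetical 2D rotation helper (not an OpenGL call), track where the square's local origin ends up under each ordering from Figure 7-4:

```python
import math

def rotate_z(x, y, degrees):
    # Counterclockwise rotation about the origin.
    t = math.radians(degrees)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

# (a) Rotate the coordinate system 90 degrees first, then translate 5 units
#     along its x-axis: the translation happens along the *rotated* axis,
#     so the square's origin lands at the rotation of (5, 0).
ax, ay = rotate_z(5.0, 0.0, 90.0)   # approximately (0, 5)

# (b) Translate 5 units along x first, then rotate: the rotation spins the
#     square in place about the already-translated origin, so its origin
#     stays at (5, 0), although the square itself appears rotated.
bx, by = rotate_z(0.0, 0.0, 90.0)   # (0, 0) relative to the translated origin
bx, by = bx + 5.0, by + 0.0         # plus the translation: (5, 0)
```

The two orderings leave the square in clearly different places, matching the figure.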
The viewing and the modeling transformations are, in fact, the same in terms of their internal effects as well as the final appearance of the scene. The distinction between the two is made purely as a convenience for the programmer. There is no real difference between moving an object backward, and moving the reference system forward—as shown in Figure 7-5, the net effect is the same. (You experience this firsthand when you’re sitting in your car at an intersection and you see the car next to you roll forward; it may seem to you that your own car is rolling backward.) The term “modelview” is used here to indicate that you can think of this transformation either as the modeling transformation, or the viewing transformation, but in fact there is no distinction—thus, it is the modelview transformation.
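The equivalence can be made concrete: all that matters is the position of the object relative to the viewer. A minimal sketch (the `to_eye_space` helper is hypothetical, and the eye-space computation is reduced to a plain subtraction, ignoring rotation):

```python
def to_eye_space(point, viewer):
    # Relative position of the object with respect to the viewer.
    return tuple(p - v for p, v in zip(point, viewer))

# Move the object 5 units backward (into the screen), viewer fixed:
case_a = to_eye_space((0.0, 0.0, -5.0), (0.0, 0.0, 0.0))

# Leave the object in place and move the viewer 5 units the other way:
case_b = to_eye_space((0.0, 0.0, 0.0), (0.0, 0.0, 5.0))

# Both produce the same relative position, so the rendered scene is identical.
```

Either way, the object sits 5 units in front of the viewer, which is why OpenGL can fold both transformations into a single modelview matrix.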
The viewing transformation, therefore, is essentially nothing but a modeling transformation that you apply to a virtual object (the viewer) before drawing objects. As you will soon see, new transformations are repeatedly specified as you place more and more objects in the scene. The initial transformation provides a reference from which all other transformations are based.