
OpenGL For Mac Users - Part 2

Volume Number: 15 (1999)
Issue Number: 1
Column Tag: Power Graphics

OpenGL For Mac Users - Part 2

by Ed Angel, University of New Mexico

Advanced capabilities: architectural features and exploiting hardware

Introduction

In Part 1, we developed the basics of OpenGL and argued that the OpenGL API provides an efficient and easy-to-use interface for developing three-dimensional graphics applications. However, if an API is to be used for developing serious applications, it must be able to exploit modern graphics hardware. In this article, we shall examine a few of the advanced capabilities of OpenGL.

We shall be concerned with three areas. First, we shall examine how to introduce realistic shading by defining material properties for our objects and adding light sources to the scene. Then we shall consider the mixing of geometric and digital techniques afforded by texture mapping. We shall demonstrate these capabilities through the color cube example that we developed in our first article. We will then survey some of the advanced features of OpenGL, concentrating on three areas: writing client-server programs for networked applications, the use of OpenGL buffers, and the ability to tune performance to available hardware.

Figure 1 is a more detailed view of the pipeline model that we introduced in Part 1. Geometric objects such as polygons are defined by vertices that travel down the geometric pipeline, while discrete entities such as bits and picture elements (pixels) travel down a parallel pipeline. The two pipelines converge during rasterization (or scan conversion). Consider what happens to a polygon during rasterization. First, the rasterizer must compute the interior points of the polygon from the vertices. The visibility of each point must be determined using the z or depth buffer that we discussed in Part 1. If a point is visible, then a color must be determined for it. In the simple model that we used in Part 1, a color either was assigned to an entire polygon or was interpolated across the polygon using the colors at the vertices. Here we shall consider two other possibilities that can be used either alone or together. First, we can assign colors based on light sources and on material properties that we assign to the polygon. Second, we can use the pixels from the discrete pipeline to determine or alter the color, a process called texture mapping. Once a color has been determined, we can place the point in the frame buffer, place it in one of the other buffers, or use the other buffers and tables to modify this color.


Figure 1. Pipeline Model.

In Part 1, we developed a sequence of programs that displayed a cube in various ways. These programs demonstrated the structure of most OpenGL programs. We divided our programs into three parts. The main function sets up the OpenGL interface with the operating system and defines the callback functions for interaction. It will not change in our examples here. The myinit function defines user parameters. The display callback typically contains the graphical objects. Our examples here will modify these two functions.

Lights and Materials

In simple graphics applications, we assign colors to lines and polygons that are used to color or shade the entire object. In the real world, objects do not appear in constant colors. Rather, colors change gradually over surfaces due to the interplay between the light illuminating the surface and the absorption and scattering properties of the surface. In addition, if the material is shiny, such as a metallic surface, the location of the viewer will affect what shade she sees. Although physically-based models of these phenomena can be complex, there are simple approximate models that work well in most graphical applications.


Figure 2. The Phong Shading Model.

OpenGL uses the Phong shading model, which is based on the four vectors shown in Figure 2. Light is assumed to arrive from either a point source or a source located infinitely far from the surface. At a point on the surface, the vector L is the direction from the point to the source. The orientation of the surface is determined by the normal vector N. Finally, the model uses R, the direction of a perfect reflection, and V, the vector from the point to the viewer. The Phong model contains diffuse, specular, and ambient terms. Diffuse light is scattered equally in all directions. Specular light reflects in a range of angles close to the angle of a perfect reflection, while ambient light models the contribution from a variety of sources and reflections too complex to calculate individually. The Phong model can be computed at any point where we have the required vectors and the local absorption coefficients. For polygons, OpenGL applies the model at the vertices and computes vertex colors. To color (or shade) a vertex, we need the normal at the vertex, a set of material properties, and the light sources that illuminate that vertex. With this information, OpenGL can compute a color for the entire polygon. For a flat polygon, we can simply assign a normal to the first vertex and let OpenGL use the computed vertex color for the entire face, a technique called flat shading. If we want the polygon to appear curved, we can assign different normals to each vertex, and OpenGL will then interpolate the computed vertex colors across the polygon. This latter method is called smooth or interpolative shading. For objects composed of flat polygons, flat shading is more appropriate.
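Written out, the model computes, for each color component, an intensity of roughly the following form (our notation, since the model is given here only pictorially):

I = ka*La + kd*Ld*max(L · N, 0) + ks*Ls*max(R · V, 0)^n

where ka, kd and ks are the material's ambient, diffuse and specular coefficients, La, Ld and Ls are the corresponding source intensities, and the shininess exponent n controls how tightly the specular highlight is concentrated about the perfect-reflection direction.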

Let's again use the cube with the vertex numbering in Figure 3. We use the function quad to describe the faces in terms of the vertices:

Listing 1: quad.c

GLfloat vertices[8][3] = {{-1.0,-1.0,-1.0}, {1.0,-1.0,-1.0},
	{-1.0,1.0,-1.0}, {1.0,1.0,-1.0}, {-1.0,-1.0,1.0},
	{1.0,-1.0,1.0}, {-1.0,1.0,1.0}, {1.0,1.0,1.0}};

void quad(int a, int b, int c, int d)
{
	glBegin(GL_QUADS);
		glVertex3fv(vertices[a]);
		glVertex3fv(vertices[b]);
		glVertex3fv(vertices[c]);
		glVertex3fv(vertices[d]);
	glEnd();
}


Figure 3. Cube Vertex Labeling.

To flat shade our cube, we make use of the six normal vectors, each of which points outward from one of the faces. Here is the modified cube function:

Listing 2: Revised cube.c with normals for shading

GLfloat face_normals[6][3] = {{-1.0,0.0,0.0}, {0.0,-1.0,0.0},
	{0.0,0.0,-1.0}, {1.0,0.0,0.0}, {0.0,1.0,0.0}, {0.0,0.0,1.0}};

void cube()
{
	glNormal3fv(face_normals[2]);
	quad(0, 2, 3, 1);
	glNormal3fv(face_normals[4]);
	quad(2, 6, 7, 3);
	glNormal3fv(face_normals[0]);
	quad(0, 4, 6, 2);
	glNormal3fv(face_normals[3]);
	quad(1, 3, 7, 5);
	glNormal3fv(face_normals[5]);
	quad(4, 5, 7, 6);
	glNormal3fv(face_normals[1]);
	quad(0, 1, 5, 4);
}

Now that we have specified the orientation of each face, we must describe the light source(s) and the material properties of our polygons. We must also enable lighting and the individual light sources. Suppose that we require just one light source. We can both describe it and enable it within myinit. OpenGL allows each light source to have separate red, green and blue components, and each light source consists of independent ambient, diffuse and specular sources. Each of these sources is configured in a similar manner. For our example, we will assume our cube consists of purely diffuse surfaces, so we need only worry about the diffuse components of the light source. Here is a myinit for a white light and a red surface:

Listing 3: Revised myinit.c with lights and materials

void myinit()
{
	GLfloat mat_diffuse[]={1.0, 0.0, 0.0, 1.0};
	GLfloat light_diffuse[]={1.0, 1.0, 1.0, 1.0};
	GLfloat light0_pos[4] = { 0.5, 1.5, 2.25, 0.0 };

	glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
	glLightfv(GL_LIGHT0, GL_POSITION, light0_pos);

	/* define material properties for front face of all polygons */

	glMaterialfv(GL_FRONT, GL_DIFFUSE, mat_diffuse);

	glEnable(GL_LIGHTING);		/* enable lighting */
	glEnable(GL_LIGHT0);		/* enable light 0 */
	glEnable(GL_DEPTH_TEST);	/* enable hidden-surface removal */
	glClearColor(1.0, 1.0, 1.0, 1.0);

}

Both the light source and the material have RGBA components. The light source has a position in four-dimensional homogeneous coordinates. If the last component is one, then the source is a point source located at the position given by the first three components. If the fourth component is zero, the source is a distant parallel source and the first three components give its direction. This location is subject to the same transformations as are the vertices of geometric objects. Figure 4 shows the resulting image.


Figure 4. Red Cube with Diffuse Reflections.
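Returning to the light-position convention above, the following fragment illustrates the two cases; the array names are ours and not part of the example program. A fourth component of 1 gives a local point source, while 0 gives a distant parallel source:

GLfloat point_pos[4]    = {1.0, 1.0, 1.0, 1.0};  /* point source at (1,1,1) */
GLfloat parallel_pos[4] = {0.0, 0.0, 1.0, 0.0};  /* parallel source from +z */

glLightfv(GL_LIGHT0, GL_POSITION, point_pos);    /* or parallel_pos */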

Texture Mapping

While the capabilities of graphics systems are measured in the millions of shaded polygons per second that can be rendered, the detail needed in animations can require much higher rates. As an alternative, we can "paint" the detail on a smaller number of polygons, much like a detailed label is wrapped around a featureless cylindrical soup can. Thus, the complex surface details that we see are contained in two-dimensional images, rather than in a three-dimensional collection of polygons. This technique is called texture mapping and has proven to be a powerful way of creating realistic images in applications ranging from games to movies to scientific visualization. It is so important that the required texture memory and mapping hardware are a significant part of graphics hardware boards.

OpenGL supports texture mapping through a separate pixel pipeline that processes the required maps. Texture images (arrays of texture elements or texels) can be generated either from a program or read in from a file. Although OpenGL supports one- through four-dimensional texture coordinates, to understand the basics of texture mapping we shall consider only two-dimensional maps applied to three-dimensional polygons, as in Figure 5.


Figure 5. Texture Mapping a Pattern to a Surface.

We can regard the texture image as continuous with two-dimensional coordinates s and t. Normally, these coordinates range over (0,1) with the origin at the bottom-left corner of the image. If we wish to map a texture image to a three-dimensional polygon, then the rasterizer must match a point on the polygon with both a point in the frame buffer and a point on the texture map. The first map is defined by the various transformations that we discussed in Part 1. We determine the second map by assigning texture coordinates to vertices and allowing OpenGL to interpolate intermediate values during rasterization. We assign texture coordinates via the function glTexCoord, which sets the current texture coordinate as part of the graphics state.

Consider the example of a quadrilateral. If we want to map the entire texture to this polygon, we can assign the four corners of the texture to the vertices:

Listing 4: Assigning texture coordinates

glBegin(GL_QUADS);
   glTexCoord2f(0.0, 0.0);
   glVertex3fv(a);
   glTexCoord2f(1.0, 0.0);
   glVertex3fv(b);
   glTexCoord2f(1.0, 1.0);
   glVertex3fv(c);
   glTexCoord2f(0.0, 1.0);
   glVertex3fv(d);
glEnd();

Figure 6 shows a checkerboard texture mapped to our cube. If we assign the texture coordinates over a smaller range, we will map only part of the texture to the polygon, and if we change the order of the texture coordinates we can rotate the texture map relative to the polygon. For polygons with more vertices, the application program must decide on the appropriate mapping between vertices and texture coordinates, which may not be easy for complex three-dimensional objects. Although OpenGL will interpolate the given texture map, the results can appear odd if the texture coordinates are not assigned carefully. The task of mapping a single texture to an object composed of multiple polygons in a seamless manner can be very difficult, not unlike the real-world difficulty of wallpapering curved surfaces with patterned rolls of paper.
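For instance, here is a variation on Listing 4 (a sketch reusing the same vertex names) that ranges the coordinates over (0,0.5) and thus maps only the lower-left quarter of the texture to the polygon:

glBegin(GL_QUADS);
   glTexCoord2f(0.0, 0.0);
   glVertex3fv(a);
   glTexCoord2f(0.5, 0.0);
   glVertex3fv(b);
   glTexCoord2f(0.5, 0.5);
   glVertex3fv(c);
   glTexCoord2f(0.0, 0.5);
   glVertex3fv(d);
glEnd();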

Like other OpenGL features, texture mapping first must be enabled (glEnable(GL_TEXTURE_2D) for two-dimensional maps). Although texture mapping is a conceptually simple idea, we must also specify a set of parameters that control the mapping process. The major practical problems with texture mapping arise because the texture map is really a discrete array of pixels that often come from images. How these images are stored can be hardware and application dependent. Usually, we must specify explicitly how the texture image is stored (bytes/pixel, byte ordering, memory alignment, color components). Next we must specify how the mapping in Figure 5 is to be carried out. The basic problem is that we want to color a point on the screen, but this point, when mapped back to texture coordinates, normally does not map to an s and t corresponding to the center of a texel. One simple technique is to have OpenGL use the closest texel. However, this strategy can lead to a lot of jaggedness (aliasing) in the resulting image. A slower alternative is to have OpenGL average a group of the closest texels to obtain a smoother result. These options are specified through the function glTexParameter. Another issue is what to do if the value of s or t is outside the interval (0,1). Again using glTexParameter, we can either clamp the values at 0 and 1 or use the range (0,1) periodically. The most difficult issue is one of scaling. A texel, when projected onto the screen, can be either much larger than a pixel or much smaller. If the texel is much smaller, then many texels may contribute to a pixel but must be averaged to a single value. This calculation can be very time consuming, and it determines the color of only a single pixel. OpenGL supports a technique called mipmapping that allows a program to start with a single texture array and form a set of smaller texture arrays that are stored. When texture mapping takes place, the appropriate array (the one that best matches texel size to pixel size) is used. The following code sets up a minimal set of options for a texture map and defines a checkerboard texture.

Listing 5: Minimum texture map setup in myinit.c

void myinit()
{
	GLubyte image[64][64][3];
	int i, j, c;

	/* Create an 8 x 8 checkerboard image of black and white texels */
	for(i=0;i<64;i++) for(j=0;j<64;j++)
	{
		c = ((((i&0x8)==0)^((j&0x8)==0)))*255;
		image[i][j][0] = (GLubyte) c;
		image[i][j][1] = (GLubyte) c;
		image[i][j][2] = (GLubyte) c;
	}

	glEnable(GL_DEPTH_TEST);	/* enable hidden-surface removal */
	glClearColor(1.0, 1.0, 1.0, 1.0);

	glEnable(GL_TEXTURE_2D);	/* enable texture mapping */
	glTexImage2D(GL_TEXTURE_2D, 0, 3, 64, 64, 0, GL_RGB,
		GL_UNSIGNED_BYTE, image);	/* assign image to texture */

	/* required texture parameters */
	glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
	glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
	glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
}

First we create a 64 x 64 image to be used for our texture. We enable texture mapping and then pick image as the texture map. The other parameters in glTexImage2D give the size of the texture map and specify how it is stored and that it will be applied to the red, green and blue components. The four calls to glTexParameterf are required to specify how values of s and t outside (0,1) are to be handled (clamped) and that we are willing to use the nearest texel (for speed) rather than a filtered value. Figure 6 shows the cube with both texture mapping and shading. Note that in this mode texture mapping modifies the color determined by the shading calculation. Alternatively, we can have the texture completely determine the color (decaling).
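Had we wanted the mipmapping strategy described earlier rather than a single texture array, one common route (a sketch, not part of the listing above) is the GLU helper that builds the entire set of reduced arrays from our 64 x 64 image, together with a minification filter that interpolates between levels:

gluBuild2DMipmaps(GL_TEXTURE_2D, 3, 64, 64, GL_RGB,
      GL_UNSIGNED_BYTE, image);   /* build all the smaller arrays */
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
      GL_NEAREST_MIPMAP_LINEAR);  /* blend between mipmap levels */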


Figure 6. Texture Mapped Cube.

Clients and Servers

In many applications we must convey three-dimensional graphical information over a network, either to show the graphics remotely or to make use of hardware available on a remote machine. Web applications often fall into this category. Although we could send two-dimensional images across a network, the volume of data can be huge as compared to the compact description provided by three-dimensional geometry. Furthermore, in distributed interactive applications that involve manipulating large graphical databases, we would prefer not to have to send the database over the network repeatedly in response to small changes, such as a change in viewing parameters.

In OpenGL, we regard the hardware with the display and the rendering engine as a graphics server and the program that defines and controls the graphics as a client. In what is called immediate-mode graphics, entities are sent to the graphics server as soon as they are defined in a program and there is no memory of these entities in the system. We used this mode in our sample cube programs. To redisplay the cube, we had to reexecute the code defining it. Thus, when we rotated the cube, we had to both alter the model-view matrix and reexecute the cube code defining its surfaces. In retained-mode graphics, graphical entities are defined and placed in structures called display lists, which are kept in the graphics server. For example, if we want to define a quadrilateral, store it on the server, and display it, we wrap its description between a glNewList and a glEndList:

Listing 6: Display list for a quadrilateral

glNewList(myQuad, GL_COMPILE_AND_EXECUTE);
  glBegin(GL_QUADS);
    glVertex3fv(a);
    glVertex3fv(b);
    glVertex3fv(c);
    glVertex3fv(d);
  glEnd();
glEndList();

In this example myQuad is an integer identifier for our retained object, and the flag GL_COMPILE_AND_EXECUTE indicates that we want to define the list and render it immediately on the server. If we only want to place a display list on the server without rendering it, we use the flag GL_COMPILE. Most OpenGL functions can appear inside a display list, as can other code. Similarly, to put a cube on the server, we need only surround any of our previous cube code with a glNewList and a glEndList. We might also use display lists to put a character set on the server. In general, any objects we intend to redisplay are good candidates for display lists.

Once a display list is on the server, we can have it rendered with a command such as

glCallList(myCube);

Suppose that we wish to rotate the object and then redisplay it, as in our rotating cube example from Part 1. Within the display callback we would see code such as

glRotatef(theta, axis_x, axis_y, axis_z);
glCallList(myCube);

In terms of the traffic between the client program and the graphics server, we would be sending only the rotation matrix and function calls but not the object, as it is already stored on the server. If we did the same example using a complex object with thousands of vertices, once the object was placed on the server, further manipulation would require no more network traffic than the manipulation of a single quadrilateral or our cube.

OpenGL plays a vital, but often invisible, role in three-dimensional Web applications using VRML (Virtual Reality Modeling Language). VRML is based on the Open Inventor database model. Open Inventor is an object-oriented graphics system built on top of OpenGL's rendering capabilities. VRML applications are client-server based, with the rendering done on the client end. Thus, a VRML browser must be able to render databases that contain geometry and attributes that look like OpenGL entities. Consequently, an obvious way to build a VRML browser is as an OpenGL application where the VRML server places display lists on the graphics server.

Buffers

OpenGL provides access to a variety of buffers. These include

  • Color buffers (including the frame buffer).
  • Depth buffer.
  • Accumulation buffer.
  • Stencil buffer.

Many of these buffers have conventional uses, such as the frame buffer and the depth buffer, but all can be read from and written into by user programs, so their potential uses are open-ended. In addition, the OpenGL architecture contains a variety of tables associated with these buffers and a variety of tests that can be performed on data as it is read from or written into buffers. Note that because these buffers are part of the graphics system, in an implementation in which they and the associated tables are in separate hardware, once data have been moved into these buffers we can achieve extremely high data rates, as the system processor and bus are no longer involved.

For each type of buffer, we shall consider a few possible uses without going into coding details. We have seen that we can use multiple color buffers for double buffering. We can also use them for stereo viewing by rendering the left-eye and right-eye views into LEFT and RIGHT buffers and synchronizing their display with special glasses that alternate presenting the images to the left and right eyes. For stereo animations, we can use four color buffers: FRONT_LEFT, FRONT_RIGHT, BACK_LEFT and BACK_RIGHT. More generally, we can use color buffers that are not being displayed to enhance the power of the geometric pipeline. For example, shadows are projections of objects from the perspective of the light source. We can do an off-screen rendering with the camera at the light source into one of the buffers and then carefully composite this image with the standard rendering. Another application is to render each object in a different color into an off-screen buffer. We can use the location of the mouse to index into this buffer, and the color read at that location is an identifier for the object, a simple way of doing interactive object selection or picking.
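A minimal sketch of the picking idea follows; draw_objects_with_id_colors, mouse_x and mouse_y are hypothetical application-supplied pieces:

GLubyte pixel[3];

glDrawBuffer(GL_BACK);             /* render into the undisplayed buffer */
draw_objects_with_id_colors();     /* one unique flat color per object */
glReadBuffer(GL_BACK);
glReadPixels(mouse_x, mouse_y, 1, 1, GL_RGB,
      GL_UNSIGNED_BYTE, pixel);    /* pixel[] now identifies the object */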

The depth buffer is used in conjunction with many of the applications of the color buffers. One interesting use is for combining translucent and opaque polygons in a single scene. In our example of blending in Part 1, we turned off hidden-surface removal because all the polygons were translucent. If some of the polygons are opaque, unless the user program sends the polygons down the pipeline in the correct order, no combination of enabling hidden-surface removal and blending will produce a reasonable image. Consider what happens if we render opaque polygons first, and then make the depth buffer read-only for translucent polygons. Translucent polygons behind opaque polygons will be hidden, while those in front will be blended.
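In code, the idea looks roughly like this (the two draw routines are hypothetical placeholders for application rendering code):

glEnable(GL_DEPTH_TEST);
draw_opaque_polygons();       /* fill the depth buffer normally */

glDepthMask(GL_FALSE);        /* make the depth buffer read-only */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
draw_translucent_polygons();  /* hidden ones still fail the depth test */
glDepthMask(GL_TRUE);         /* restore depth writes */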

Color buffers typically have limited resolution, such as one byte per color component. Consequently, arithmetic calculations with color buffers can suffer a loss of color resolution. The accumulation buffer has sufficient depth that we can add multiple images into it without losing resolution. Obvious uses of such a buffer include image compositing and blending. To composite n images, we can add them individually into the accumulation buffer and then read out the result, scaling each color value by 1/n. If we had tried to add these images into the frame buffer, we would have risked overflowing the color values, which are typically stored as 8 bits per component. If we had tried to scale the colors before adding them into the frame buffer, we would have lost most, if not all, of our color resolution. For example, if n=8, we could lose three bits per color component. With the accumulation buffer, we preserve resolution at the cost of increased compositing time.
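Sketched in code, compositing n images might look like the following, where draw_image is a hypothetical routine that renders the ith image into the color buffer; here we scale each image as we add it, which keeps the running sum in range:

glClear(GL_ACCUM_BUFFER_BIT);
for (i = 0; i < n; i++) {
   draw_image(i);               /* render into the color buffer */
   glAccum(GL_ACCUM, 1.0 / n);  /* add a scaled copy to the accumulator */
}
glAccum(GL_RETURN, 1.0);        /* move the composite back for display */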

Less obvious, but easy to implement, applications of the accumulation buffer include digital filtering of images, scene antialiasing, depth-of-field effects, and motion blur. Consider, for example, the antialiasing problem for polygons. As polygons are rasterized, the rasterizer computes small elements called fragments, which are at most the size of a pixel. Each fragment is assigned a color that can determine the color of the corresponding pixel in the frame buffer. Generally, if a fragment is small or fragments from multiple polygons lie on the same pixel, we will see jagged images if we make binary decisions as to whether or not a given fragment completely determines the color of a pixel. One solution is to use the alpha channel we discussed in Part 1 to allow small amounts of color from multiple fragments to blend together. Unfortunately, this method can be very slow. An alternative is to render the scene multiple times into the accumulation buffer, each time with the viewer shifted very slightly. Each image will contain slightly different aliasing artifacts that will be averaged out by the accumulation process.

The stencil buffer allows us to draw pixels based on the corresponding values in the stencil buffer. Thus, we can create masks in the stencil buffer that we can use to do things such as creating windows into scenes or for placing multiple images in different parts of the frame buffer. What makes this buffer more interesting is that we can change its values as we render. This capability allows us to write programs that can determine if an object is in shadow or color objects differently if they are sliced by a plane.
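A sketch of the masking pattern, with draw_mask_shape and draw_scene as hypothetical placeholders:

glEnable(GL_STENCIL_TEST);
glClear(GL_STENCIL_BUFFER_BIT);

glStencilFunc(GL_ALWAYS, 1, 1);            /* always pass while masking */
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
draw_mask_shape();                         /* writes 1s into the stencil */

glStencilFunc(GL_EQUAL, 1, 1);             /* draw only where stencil == 1 */
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
draw_scene();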

Performance Tuning

OpenGL supports a well-defined architecture and as such can be implemented in hardware, software or a combination of the two. Because there are both geometric and discrete pipelines, not only is there a wide range of implementation strategies, but where bottlenecks arise depends on both the application and how the programmer chooses to use the OpenGL architecture. Consequently, it is difficult to offer "one size fits all" solutions to performance issues. We can, however, survey a few possibilities.

Defining geometric objects with polygons is simple but can lead to many function calls. For example, our cube with vertex colors, normals and texture coordinates required 108 OpenGL function calls: six faces, each requiring a glBegin and a glEnd, and four vertices per face, each requiring a glVertex, glColor, glNormal and glTexCoord. One way to avoid this overhead, if the object is to be drawn multiple times, is to use display lists. Another is to use vertex arrays, a feature added in OpenGL 1.1. In myinit, we can now set up and enable arrays that contain all the required information (colors, normals, vertices, texture coordinates). A single OpenGL call

glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, cubeIndices);

will render the six quadrilateral faces whose indices are stored as unsigned bytes in the array cubeIndices, which lists the 24 vertices in the order they appear in our cube function above.
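The corresponding setup in myinit might look roughly like this; vertex_data and color_data are hypothetical per-vertex arrays, since vertex arrays store one entry per vertex rather than per face:

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertex_data);  /* 3 floats per vertex */
glColorPointer(3, GL_FLOAT, 0, color_data);    /* 3 floats per color */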

In the geometric pipeline, lighting calculations can be very expensive, especially if there are multiple sources, as we must do an independent calculation for each source. A large part of the computation is that of the vectors in the Phong model. If a polygon is small relative to its distance from the viewer, the view vector will not change significantly over the face of the polygon. If we use

glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_FALSE);

we allow the implementation to take advantage of this situation. We can also tell OpenGL whether we wish lighting calculations to be done on only one side of a surface through

glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_FALSE);

We can also automatically eliminate (or cull) all polygons that face away from the viewer by enabling culling:

glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);

Texture calculations can also be very time consuming and are subject to aliasing problems. OpenGL allows us to decide whether we want to filter a texture to get a smoother image or just use the closest texel, which is faster. Perspective projections can be a problem for texture mapping because the interpolation is more complex. In many situations, either the error is small enough that we do not care about it or the image is changing so rapidly that we cannot notice the error. In these situations, we can tell OpenGL not to correct for perspective.
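One way to make that request is through OpenGL's hint mechanism:

glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);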

More generally, we have to worry about both polygons and pixels, which pass through two very different pipelines. Depending on both the implementation and the application, either pipeline can be the bottleneck. Often, performance tuning involves deciding which algorithm best matches the hardware and making creative use of the many features available in OpenGL. With OpenGL supported directly in the hardware of many new graphics cards, many graphics applications programmers are rethinking how to create images. For example, with the large amounts of texture memory included on these cards, in many situations we can generate detail through textures rather than through geometry. Often, we can also avoid lighting calculations by storing some carefully chosen textures.

Conclusion

With over 200 functions in the API, we have only scratched the surface of what we can do with OpenGL. The major omission in these two articles is how to define various types of curves and surfaces. Nevertheless, you should have a fair idea of the range of functionality supported by the OpenGL architecture. In the future, the CAD and animation communities will not only use OpenGL as their standard API but will also start making use of features particular to OpenGL, such as the accumulation and stencil buffers.

The advantages of OpenGL are many. It is close to the hardware but still easy to use for writing application programs. It is portable and supports a wide variety of features. It is the only graphics API that I have seen in my 15 years in the field that is used by animators, game developers, CAD engineers and researchers on supercomputers. Personally, I routinely use OpenGL on a PowerMac 6100, a PowerBook, an SGI Infinite Reality Engine and a variety of PCs, rarely having to change my code when moving among these systems. There is not much more that I can ask of an API.

Sources and URLs

OpenGL is administered through an Architectural Review Board. The two major sources for on-line information on OpenGL are the OpenGL organization http://www.opengl.org and Silicon Graphics Inc http://www.sgi.com/Technology/OpenGL. You can find pointers to code, FAQ, standards documents and literature at these sites. I keep the sample code from my book at ftp://ftp.cs.unm.edu under pub/angel/BOOK.

OpenGL is available for most systems. For Mac users, there is an implementation from Conix Enterprises http://www.conix3d.com that includes support for GLUT and for hardware accelerators. There is a free OpenGL-like API called Mesa http://www.ssec.wisc.edu/~brianp/Mesa.html that can be compiled for most systems, including Linux, and will run almost all OpenGL applications. There is a Linux version available from Metro Link http://www.metrolink.com. You can obtain the code for GLUT and many examples at http://reality.sgi.com/opengl/glut3/glut3.html.

Bibliography and References

  • Angel, Edward. Interactive Computer Graphics: A top-down approach with OpenGL. Addison-Wesley, Reading, MA, 1997.
  • Architectural Review Board. OpenGL Reference Manual, Second Edition. Addison-Wesley, Reading, MA, 1997.
  • Kilgard, Mark. OpenGL Programming for the X Window System. Addison-Wesley, Reading, MA, 1996.
  • Neider, Jackie, Tom Davis and Mason Woo. The OpenGL Programming Guide, Second Edition. Addison-Wesley, Reading, MA, 1997.
  • Wright, Richard Jr. and Michael Sweet. The OpenGL Superbible. Waite Group Press, Corte Madera, CA, 1997.

Acknowledgements

I would like to thank Apple Computer, Silicon Graphics, and Conix Enterprises for the hardware and software support that enabled me to write my OpenGL textbook.


Ed Angel is a Professor of Computer Science and Electrical and Computer Engineering at the University of New Mexico. He is the author of Interactive Computer Graphics: A top-down approach with OpenGL (Addison-Wesley, 1997). You can find out more about him at http://www.cs.unm.edu/~angel or write him at angel@cs.unm.edu.

 
