
Volume Number: 15 (1999)
Issue Number: 6
Column Tag: Games Programming

# 3D Graphics Engine Essentials

by Eric Lengyel

## A crash course on topics that every 3D engine designer needs to master

### Overview

This year marks the beginning of an era in which all 3D computer games require hardware acceleration. Using a 3D accelerator removes the burden of writing a software-based triangle rasterizer, but designing a high-quality game engine still requires a lot of work. This article discusses how to perform and efficiently implement the calculations that today's hardware does not handle, such as coordinate transformations, triangle clipping, and visibility determination. I am going to assume that the reader is familiar with the fundamentals of vector and matrix mathematics, which I use heavily in this article. The bibliography lists a couple of sources, including a recent MacTech article, that can provide a good introduction to this material.

### Coordinate Transformations

At the lowest level, the 3D hardware receives a list of vertices and a list of triangles to draw on the screen. Each vertex carries (x, y) screen coordinates, a z depth value, a color, an alpha value, and (u, v) texture map coordinates. Each triangle simply indicates which three members of the vertex list serve as its corners. (See the Vertex and Triangle structures shown in Listing 1.) The problem at hand is how to calculate where on the screen a given point in 3D space should appear for a given camera position and orientation.

Every 3D engine needs to deal with three different coordinate systems, which I will introduce here and discuss in more detail shortly. The first is called world space or global space. This is the coordinate system in which everything is absolute, and it's the system in which we specify our camera position and the position of every object in the world. The second coordinate system is called object space or local space. Each object has its own space in which the origin corresponds to the position of the object in world space. The third coordinate system is called camera space. In camera space, the camera resides at the origin, the x and y coordinate axes are aligned to the coordinate system of the screen (x points to the right, and y points downward), and the z axis points in the direction that the camera is facing (and thus the z coordinate in camera space represents depth). It should be noted here that many 3D graphics systems have the y axis pointing upward in camera space, but this results in evil things such as left-handed coordinate systems unless the z axis is also reversed (as is the case with OpenGL). In any event, it makes life more complicated than it needs to be, so I will avoid these variants and stick to what I believe to be the more intuitive y-down system.

In order to obtain screen coordinates for a given 3D point, we must do two things. First we have to transform the point into camera space, and then we have to project it onto the viewing plane which represents our screen. Clipping will take place between these two operations so that we never project points that will not actually participate in the final rendering of a scene. The transformation and projection of points is handled quite nicely in theory by using four-dimensional homogeneous coordinates, although in practice we will not use these coordinates to their fullest extent for efficiency reasons.

Let's begin with a transformation in normal 3D coordinates. Suppose that we had a scene that contained a single cube. In the cube's object space, it is convenient to place the origin at one of the corners and orient the coordinate axes to be parallel with the cube's sides. We want to be able to move the cube anywhere in the scene and to rotate it arbitrarily. Thus, we have to maintain world space coordinates for the position of the cube's origin, and we need to maintain a rotation matrix which represents the orientation of the cube's local axes in world space. Let us call the cube's world space position P and its rotation matrix M. Then to transform a point A from the cube's object space to its global position Aworld in world space, we calculate

Aworld = MA + P   (1)

This is the object to world or local to global transformation. We can go the other way and transform from world space to object space by solving equation (1) for A, which gives us

A = M⁻¹(Aworld - P)   (2)

Note that the columns of the matrix M are simply the vectors which correspond to the directions that the object's axes point in world space. This is useful since we are going to calculate the camera to world transform just like we would for any other object in the scene. The camera's local x, y, and z axes correspond respectively to the world space right direction R, down direction D, and view direction V of our camera's configuration. This means that we transform a point Acamera from camera space to world space with the equation

Aworld = [ R D V ] Acamera + Pcamera   (3)

where Pcamera is the camera's world space position. However, we will normally want to transform points from world space to camera space, so the inverse of this transform is much more useful. If R, D, and V are normalized to unit length then the inverse of the rotation matrix is just its transpose, which we will call C, and the world to camera transformation is given by

Acamera = C(Aworld - Pcamera)   (4)
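
Equation (4) boils down to three dot products against the offset from the camera position. The following sketch (with illustrative names Vec3 and WorldToCamera, not taken from the article's listings) shows the transform under the assumption that R, D, and V are unit length and mutually perpendicular:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Transform a world-space point into camera space, per equation (4):
// Acamera = C (Aworld - Pcamera), where the rows of C are the camera's
// world-space axes R (right), D (down), and V (view).
Vec3 WorldToCamera(const Vec3& a, const Vec3& camPos,
                   const Vec3& r, const Vec3& d, const Vec3& v)
{
    Vec3 rel = { a.x - camPos.x, a.y - camPos.y, a.z - camPos.z };
    return Vec3{
        r.x * rel.x + r.y * rel.y + r.z * rel.z,    // camera-space x
        d.x * rel.x + d.y * rel.y + d.z * rel.z,    // camera-space y
        v.x * rel.x + v.y * rel.y + v.z * rel.z };  // camera-space z (depth)
}
```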

When we are projecting vertices from a triangle mesh onto the screen, we never need to know the actual world space coordinates of each vertex. We can transform directly from object space to camera space by composing the object to world and world to camera transformations as follows.

Acamera = C(MA + P - Pcamera)   (5)

Performing transformations in this manner is tedious since we need to use both of the matrices C and M, and we have to perform two matrix-vector multiplies. The problem is further compounded if a hierarchical scene is being utilized in which objects may have sub-objects which have their own local coordinate spaces, thus requiring additional steps to transform from sub-object space to camera space.

Fortunately, there is a more compact and elegant method for handling these transformations. It turns out that we can use the previously mentioned homogeneous coordinates, and in the process we can combine the 3x3 rotation matrix and translation vector needed in the transformations that we have been working with so far into a single 4x4 matrix. This simplifies the composition operation since all we have to do is multiply the matrices corresponding to each transformation together.

First, the definition of homogeneous coordinates is necessary. A homogeneous point P is represented by four coordinates:

P = (x, y, z, w)   (6)

The corresponding three-dimensional point P3D is obtained by dividing each coordinate by w and discarding the fourth coordinate (which always becomes 1):

P3D = (x/w, y/w, z/w)   (7)

A 3D point with coordinates (x, y, z) can be represented in homogeneous coordinates by (x, y, z, 1). This gives us a four-dimensional vector which can be operated on by four-dimensional matrices. In fact, as will be demonstrated later, the projection of a point onto the view plane can be performed by using a 4x4 matrix that modifies the fourth coordinate of a vector.
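
As a quick illustration of equations (6) and (7), here is a minimal homogeneous-to-3D conversion. Vector4D and Homogenize are illustrative names only, since the engine's listings never store w explicitly:

```cpp
#include <cassert>

struct Vector4D { float x, y, z, w; };
struct Point3D { float x, y, z; };

// Convert a homogeneous point to 3D by dividing through by w, as in
// equation (7): one divide, three multiplies, and the w coordinate
// is discarded (it always becomes 1).
Point3D Homogenize(const Vector4D& p)
{
    float invW = 1.0f / p.w;
    return Point3D{ p.x * invW, p.y * invW, p.z * invW };
}
```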

Referring back to the 3x3 rotation matrix M and world space position P from the object to world transformation in equation (1), we see that we can produce the same transformation with a 4x4 matrix of the form

Tworld = [ M  P ; 0  1 ]   (8)

where M occupies the upper-left 3x3 block and P fills the first three entries of the fourth column.

Multiplying this matrix by a point A = (Ax, Ay, Az, 1) yields the same result as multiplying M by A3D and then adding P. An important fact is that the fourth coordinate of A remains 1 when multiplied by a matrix of the form shown in equation (8). Additionally, when two matrices of this form are multiplied together, the fourth row of the product is always (0, 0, 0, 1). This information will allow us to make some optimizations when we implement the calculations.

Using the notation from equation (4), the world to camera transform can be accomplished in homogeneous coordinates by using the matrix

Tcamera = [ C  -CPcamera ; 0  1 ]   (9)

The product TcameraTworld gives us a single 4x4 matrix which transforms points from object space directly into camera space. This product only has to be calculated once for each object, and it replaces the relatively messy equation (5).

Most 3D acceleration APIs require that the z-coordinate of a vertex in camera space be in the range 0.0 to 1.0. (The Glide API is an exception to this convention.) This means that we have to scale everything in the z direction so that 1.0 represents the farthest distance that anything in the scene will be from the camera. Calling this distance zmax, we can include the scale in the world to camera transformation by using the matrix

Tcamera = [ C'  -C'Pcamera ; 0  1 ]   (10)

where C' is the matrix C with its third row multiplied by 1/zmax.

This is the final world to camera transformation that needs to be calculated whenever the camera position or orientation changes (which may be for every frame).

Now that we are able to transform our vertices into camera space, we need to discuss the method used to project them onto the view plane. We first need to choose the distance from the camera to the view plane. Shorter view plane distances result in wider fields of view. As shown in Figure 1, the screen is represented by a rectangle on the view plane whose x-coordinate ranges from -1.0 to 1.0 and whose y-coordinate ranges from -h to h, where h is the height to width ratio (h = 0.75 for a 640x480 display). For a desired horizontal field of view angle θ, the view plane distance d is given by

d = 1 / tan(θ/2)   (11)
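
Equation (11) can be sketched in one line; ViewPlaneDistance is an illustrative helper, not part of the article's listings:

```cpp
#include <cassert>
#include <cmath>

// Equation (11): the screen spans x in [-1, 1] on the view plane, so for
// a horizontal field of view angle theta, tan(theta / 2) = 1 / d.
float ViewPlaneDistance(float fovRadians)
{
    return 1.0f / std::tan(fovRadians * 0.5f);
}
```

For example, a 90-degree horizontal field of view places the view plane at d = 1.0.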

A point which has already been transformed into camera space can be projected onto the view plane through multiplication by the matrix

[ 1 0 0 0 ; 0 1 0 0 ; 0 0 1 0 ; 0 0 1/d 0 ]   (12)

written here row by row.

This matrix transforms a point Q = (x, y, z, 1) into the point Q' = (x, y, z, z/d). When we divide by the w component, w = z/d, we obtain the 3D point

Q'3D = (dx/z, dy/z, d)   (13)

which lies in the view plane. Of course, in the real world we don't need to perform the matrix multiplication, but this does demonstrate that all we need to do in order to project a point onto the view plane is calculate 1/w = d/z.

Figure 1. Camera Space and View Plane Configuration.

Once we have multiplied the x and y components of a point by 1/w, we have resolution-independent view plane coordinates. To obtain actual screen coordinates, we need to scale the x value from the range [-1.0, 1.0] to the range [0, swidth] and the y value from the range [-h, h] to the range [0, sheight], where swidth and sheight are the dimensions of the display in pixels. This is accomplished through the equations

xscreen = (xview + 1) swidth / 2   (14)

yscreen = (yview + h) sheight / (2h)   (15)
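
Putting the perspective divide together with equations (14) and (15), a minimal projection sketch (illustrative names, not from the article's listings) might look like this:

```cpp
#include <cassert>

struct ScreenPoint { float x, y; };

// Project a camera-space point to pixel coordinates: compute 1/w = d/z,
// scale x and y onto the view plane, then remap with equations (14)
// and (15). d is the view plane distance, h the height to width ratio.
ScreenPoint ProjectToScreen(float x, float y, float z, float d, float h,
                            float screenWidth, float screenHeight)
{
    float invW = d / z;        // 1/w, also handed to the rasterizer
    float xview = x * invW;    // view plane x, in [-1, 1]
    float yview = y * invW;    // view plane y, in [-h, h]
    return ScreenPoint{
        (xview + 1.0f) * 0.5f * screenWidth,        // equation (14)
        (yview + h) / (2.0f * h) * screenHeight };  // equation (15)
}
```

A point on the view direction axis lands, as expected, in the center of the screen.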

The implementation details for the transformation and projection of vertices are summarized by the steps below. The TransformVertices function shown in Listing 3 transforms an array of vertices from object space to camera space and temporarily stores them in a private array of the MyEngine class where they are used during the clipping phase.

1. Calculate the world to camera transformation matrix Tcamera given by equation (10). The fourth row of this matrix is always (0, 0, 0, 1), so it may be suppressed for efficiency. As done with the Matrix4D structure shown in Listing 1, we can consider it to be a 3x4 matrix that behaves like a 4x4 matrix.
2. For a given object in the scene, calculate the matrix product TcameraTworld, where Tworld is the object to world transformation. Again, the fourth rows of these matrices may be ignored since they will always be (0, 0, 0, 1).
3. For each vertex V belonging to the object, transform the point from object space to camera space by calculating Vcamera = TcameraTworldV. Once again, there is no need to use the fourth coordinate of the vector V.
4. Clip all of the triangles belonging to the object (see the section entitled "Clipping").
5. For each vertex V that was transformed in step 3, calculate 1/w = d/Vz. Screen coordinates are given by equations (14) and (15) where xview = Vx/w and yview = Vy/w.
6. Pass the values xscreen, yscreen, Vz, and 1/w to the 3D acceleration API. (In QuickDraw 3D RAVE, these values correspond to the x, y, z, and invW fields of the TQAVGouraud and TQAVTexture structures.)
7. Repeat steps 2-6 for each object in the scene which is potentially visible (see the section entitled "Visibility Determination").

#### Listing 1: Useful structures

#### Vector3D

This structure is used to hold a 3D point or a direction vector. Operators are overloaded for addition, subtraction, scalar multiplication, and the dot product.

```cpp
struct Vector3D
{
float      x, y, z;
Vector3D() {}
Vector3D(float r, float s, float t)
{
x = r; y = s; z = t;
}
Vector3D operator +(const Vector3D& v) const
{
return (Vector3D(x + v.x, y + v.y, z + v.z));
}
Vector3D operator -(const Vector3D& v) const
{
return (Vector3D(x - v.x, y - v.y, z - v.z));
}
Vector3D operator *(float f) const
{
return (Vector3D(x * f, y * f, z * f));
}
// Dot product
float operator *(const Vector3D& v) const
{
return (x * v.x + y * v.y + z * v.z);
}
};
```

#### Matrix4D

This structure is used to hold a 4x4 transformation matrix whose fourth row is always (0, 0, 0, 1). Multiplication operators are overloaded for the matrix-vector product and the matrix-matrix product. The Inverse function calculates the inverse of a matrix.

```cpp
struct Matrix4D
{
float      n[3][4];
Matrix4D() {}
Matrix4D(float n00, float n01, float n02, float n03,
float n10, float n11, float n12, float n13,
float n20, float n21, float n22, float n23)
{
n[0][0] = n00; n[0][1] = n01; n[0][2] = n02;
n[0][3] = n03; n[1][0] = n10; n[1][1] = n11;
n[1][2] = n12; n[1][3] = n13; n[2][0] = n20;
n[2][1] = n21; n[2][2] = n22; n[2][3] = n23;
}

Vector3D operator *(const Vector3D& v) const;
Matrix4D operator *(const Matrix4D& m) const;
};

// Matrix-vector product
Vector3D Matrix4D::operator *(const Vector3D& v) const
{
return (Vector3D(n[0][0] * v.x + n[0][1] * v.y +
n[0][2] * v.z + n[0][3],
n[1][0] * v.x + n[1][1] * v.y +
n[1][2] * v.z + n[1][3],
n[2][0] * v.x + n[2][1] * v.y +
n[2][2] * v.z + n[2][3]));
}

// Matrix-matrix product
Matrix4D Matrix4D::operator *(const Matrix4D& m) const
{
return (Matrix4D(
n[0][0] * m.n[0][0] + n[0][1] * m.n[1][0] +
n[0][2] * m.n[2][0],
n[0][0] * m.n[0][1] + n[0][1] * m.n[1][1] +
n[0][2] * m.n[2][1],
n[0][0] * m.n[0][2] + n[0][1] * m.n[1][2] +
n[0][2] * m.n[2][2],
n[0][0] * m.n[0][3] + n[0][1] * m.n[1][3] +
n[0][2] * m.n[2][3] + n[0][3],
n[1][0] * m.n[0][0] + n[1][1] * m.n[1][0] +
n[1][2] * m.n[2][0],
n[1][0] * m.n[0][1] + n[1][1] * m.n[1][1] +
n[1][2] * m.n[2][1],
n[1][0] * m.n[0][2] + n[1][1] * m.n[1][2] +
n[1][2] * m.n[2][2],
n[1][0] * m.n[0][3] + n[1][1] * m.n[1][3] +
n[1][2] * m.n[2][3] + n[1][3],
n[2][0] * m.n[0][0] + n[2][1] * m.n[1][0] +
n[2][2] * m.n[2][0],
n[2][0] * m.n[0][1] + n[2][1] * m.n[1][1] +
n[2][2] * m.n[2][1],
n[2][0] * m.n[0][2] + n[2][1] * m.n[1][2] +
n[2][2] * m.n[2][2],
n[2][0] * m.n[0][3] + n[2][1] * m.n[1][3] +
n[2][2] * m.n[2][3] + n[2][3]));
}

Matrix4D Inverse(const Matrix4D& m)
{
float n00 = m.n[0][0];
float n01 = m.n[0][1];
float n02 = m.n[0][2];
float x = m.n[0][3];
float n10 = m.n[1][0];
float n11 = m.n[1][1];
float n12 = m.n[1][2];
float y = m.n[1][3];
float n20 = m.n[2][0];
float n21 = m.n[2][1];
float n22 = m.n[2][2];
float z = m.n[2][3];
float t = 1.0F / (n00 * (n11 * n22 - n12 * n21) -
n01 * (n10 * n22 - n12 * n20) +
n02 * (n10 * n21 - n11 * n20));
return (Matrix4D(
(n11 * n22 - n12 * n21) * t,
(n02 * n21 - n01 * n22) * t,
(n01 * n12 - n02 * n11) * t,
((n12 * n21 - n11 * n22) * x +
(n01 * n22 - n02 * n21) * y +
(n02 * n11 - n01 * n12) * z) * t,
(n12 * n20 - n10 * n22) * t,
(n00 * n22 - n02 * n20) * t,
(n02 * n10 - n00 * n12) * t,
((n10 * n22 - n12 * n20) * x +
(n02 * n20 - n00 * n22) * y +
(n00 * n12 - n02 * n10) * z) * t,
(n10 * n21 - n11 * n20) * t,
(n01 * n20 - n00 * n21) * t,
(n00 * n11 - n01 * n10) * t,
((n11 * n20 - n10 * n21) * x +
(n00 * n21 - n01 * n20) * y +
(n01 * n10 - n00 * n11) * z) * t));
}
```

#### Vertex

This structure contains all of the information that is associated with a single vertex.

```cpp
struct Vertex
{
Vector3D      point;                 // Position
float         red, green, blue;      // Diffuse color
float         alpha;                 // Transparency
float         u, v;                  // Texture mapping
};
```

#### Triangle

This structure contains all of the information that is associated with a single triangle.

```cpp
struct Triangle
{
long         index[3];   // Indexes into vertex array
Vector3D     normal;       // Surface normal
};
```

#### Mesh

This structure contains an array of vertices and an array of triangles which compose a single mesh. The objectToWorldTransform matrix defines the mesh's position and orientation as given by equation (8). The bounding sphere information is used for visibility determination.

```cpp
struct Mesh
{
long            vertexCount;      // Number of vertices in mesh
long            triangleCount;    // Number of triangles in mesh
Vertex         *vertexArray;      // Pointer to vertex array
Triangle      *triangleArray;     // Pointer to triangle array
Matrix4D      objectToWorldTransform;
Vector3D      boundingSphereCenter;
float         boundingSphereRadius;
};
```

#### MyEngine

The MyEngine class encapsulates information about the camera configuration and holds pointers to storage for transformed vertices and clipped triangles. The functions which perform coordinate transformations, triangle clipping, and visibility determination are implemented as methods of this class.

```cpp
class MyEngine
{
private:
// Maximum z value
float         maxDepth;
// Distance from camera to view plane
float         viewPlaneDistance;
// Height to width ratio of the screen
float         heightOverWidth;
// Camera configuration
Vector3D      cameraPosition;      // World space
Vector3D      rightDirection;      // Should be unit length
Vector3D      downDirection;      // Should be unit length
Vector3D      viewDirection;      // Should be unit length
// Transformation from world space to camera space
Matrix4D      worldToCameraTransform;
// World space side plane normals (used for bounding sphere test)
Vector3D      leftPlaneNormal;
Vector3D      rightPlaneNormal;
Vector3D      topPlaneNormal;
Vector3D      bottomPlaneNormal;
// Number of transformed vertices in the vertex array
long            vertexCount;
// Number of clipped triangles in the triangle array
long            triangleCount;
// Preallocated storage for vertices and triangles
Vertex         *vertexArray;
Triangle      *triangleArray;
// Vertex interpolation function used for clipping
static void InterpolateVertices(const Vertex *v1,
const Vertex *v2, float t, Vertex *newVertex);
public:
MyEngine();
~MyEngine();
// Called before any rendering calculations (details below)
void PrepareToRender(void);
// Determine if a mesh's bounding sphere is visible
bool SphereVisible(const Vector3D& worldCenter,
float worldRadius);
// Transform vertices to camera space
void TransformVertices(long count,
const Vertex *vertex,
const Matrix4D& objectToWorldTransform);
// Clip triangles to view frustum
void ClipTriangle(const Triangle *triangle,
long clipFlags);
// Do everything necessary to render a mesh
void RenderMesh(const Mesh *mesh);
};

MyEngine::MyEngine()
{
// Allocate vertex and triangle arrays. Choose the constants kMaxVertices and
// kMaxTriangles so that these arrays are large enough to hold the largest single
// mesh that will be rendered. Be sure to include space for extra vertices and
// triangles that may be created during the clipping process.
vertexArray = new Vertex[kMaxVertices];
triangleArray = new Triangle[kMaxTriangles];
}

MyEngine::~MyEngine()
{
delete[] triangleArray;
delete[] vertexArray;
}
```

#### PrepareToRender

This function computes the world to camera transformation matrix and the world space normal vectors for the four side planes of the view frustum. It should be called once for each rendered frame before any RenderMesh calls are made.

```cpp
void MyEngine::PrepareToRender(void)
{
float recipMaxDepth = 1.0F / maxDepth;

// Compute world to camera transformation. See equation (10).
worldToCameraTransform.n[0][0] = rightDirection.x;
worldToCameraTransform.n[0][1] = rightDirection.y;
worldToCameraTransform.n[0][2] = rightDirection.z;
worldToCameraTransform.n[1][0] = downDirection.x;
worldToCameraTransform.n[1][1] = downDirection.y;
worldToCameraTransform.n[1][2] = downDirection.z;
worldToCameraTransform.n[2][0] =
viewDirection.x * recipMaxDepth;
worldToCameraTransform.n[2][1] =
viewDirection.y * recipMaxDepth;
worldToCameraTransform.n[2][2] =
viewDirection.z * recipMaxDepth;
float x = cameraPosition.x;
float y = cameraPosition.y;
float z = cameraPosition.z;
worldToCameraTransform.n[0][3] =
-worldToCameraTransform.n[0][0] * x -
worldToCameraTransform.n[0][1] * y -
worldToCameraTransform.n[0][2] * z;
worldToCameraTransform.n[1][3] =
-worldToCameraTransform.n[1][0] * x -
worldToCameraTransform.n[1][1] * y -
worldToCameraTransform.n[1][2] * z;
worldToCameraTransform.n[2][3] =
-worldToCameraTransform.n[2][0] * x -
worldToCameraTransform.n[2][1] * y -
worldToCameraTransform.n[2][2] * z;
// Compute unit-length normal vectors for the four side planes.
// These are used by the SphereVisible function.
float f = viewPlaneDistance * maxDepth;
float h = heightOverWidth;
// See the sidebar "Fast Square Roots" for the RSqrt function.
float g1 = RSqrt(1.0F + f * f);
float g2 = RSqrt(h * h + f * f);
leftPlaneNormal =
(viewDirection + rightDirection * f) * g1;
rightPlaneNormal =
(viewDirection - rightDirection * f) * g1;
topPlaneNormal =
(viewDirection * h + downDirection * f) * g2;
bottomPlaneNormal =
(viewDirection * h - downDirection * f) * g2;
}
```

#### TransformVertices

This function transforms an array of vertices from an object's local space into camera space. Each vertex's color, transparency, and mapping is simply copied to the array of transformed vertices here, but the routine could be modified so that a translation to the native vertex format of the 3D acceleration API being used is performed.

```cpp
void MyEngine::TransformVertices(long count,
const Vertex *vertex,
const Matrix4D& objectToWorldTransform)
{
vertexCount = count;
// Calculate object to camera transform.
Matrix4D m = worldToCameraTransform *
objectToWorldTransform;
// Transformed vertices get placed in private array.
Vertex *v = vertexArray;
for (long a = 0; a < count; a++)
{
// Transform vertex into camera space.
v->point = m * vertex->point;
v->red = vertex->red;
v->green = vertex->green;
v->blue = vertex->blue;
v->alpha = vertex->alpha;
v->u = vertex->u;
v->v = vertex->v;

v++;
vertex++;
}
}
```

### Clipping

Figure 2 illustrates the view frustum, the volume of space shaped like a truncated pyramid whose apex lies at the camera position, which coincides with the exact visible region for a given camera configuration. The view frustum is composed of four side planes which pass through the camera position, the front plane or near plane, which is usually placed at z = d (d being the view plane distance from the previous section), and the back plane or far plane, which is placed at z = 1.0.

Figure 2. The View Frustum.

After the three vertices used by a particular triangle have been transformed into camera space, there are three possibilities. The first is that all three vertices lie inside the view frustum. In this case, we say that the triangle is trivially accepted and pass it on unaltered to be rendered. The second possibility is that all three vertices lie outside the view frustum. When this happens, we say that the triangle is trivially rejected and do not consider it any further for rendering. The remaining possibility is that some of the vertices lie inside the view frustum and the rest lie outside. For this case, the triangle must be clipped into one or two smaller triangles which do lie completely inside the view frustum.

For each triangle, clipping occurs in camera space one plane at a time. If it is known that everything in the scene is within the distance zmax from the camera, then it is safe to ignore the back plane. For each plane, we consider the side facing the view frustum to be the positive side, and the side facing away to be the negative side. Thus a point is inside the view frustum if and only if it does not lie on the negative side of any of the six planes. A point lying exactly on one of the planes is still considered to be inside the view frustum.

For a given clipping plane, we need to classify the three vertices of a triangle as either lying on the positive side of the plane (which for our purposes includes vertices that lie exactly on the plane) or lying on the negative side of the plane. If it is ever the case that all three vertices lie on the negative side of any plane, then the triangle should be immediately discarded because it is not visible. If all three vertices lie on the positive side of the plane, then no clipping needs to occur for that plane and the triangle should be passed on to be checked against the other clipping planes. The remaining two possibilities are illustrated in Figure 3. If two vertices lie on the negative side of the plane, then the triangle needs to be clipped to coincide with the part of its area which lies on the positive side of the plane. If only one vertex lies on the negative side of the plane, then the polygon corresponding to the area lying on the positive side of the plane is a quadrilateral, and thus needs to be split into two triangles.

Figure 3. Clipped Triangles.

The method by which we classify vertices as lying on the positive or negative side of a clipping plane depends on which plane we are considering. For the front and back planes, we simply compare the z coordinate of a vertex to the distance separating the plane from the camera. Thus a vertex lies on the positive side of the front plane if z >= d, and a vertex lies on the positive side of the back plane if z <= 1.0. For the remaining four side planes, we can determine the location of a vertex relative to one of the planes by calculating the dot product between the vertex position and the plane's normal vector. A negative dot product indicates that the point lies on the negative side of the plane. The table below lists the inward-facing normal vectors for each plane of the view frustum where d is the view plane distance and h is the height to width ratio of the screen.

| Plane | Normal |
| --- | --- |
| Front | (0, 0, 1) |
| Back | (0, 0, -1) |
| Left | (d, 0, 1) |
| Right | (-d, 0, 1) |
| Top | (0, d, h) |
| Bottom | (0, -d, h) |
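
One possible way to seed the clipFlags parameter used later by ClipTriangle is to compute an outcode per vertex from these normals and OR the outcodes together over a whole mesh; a bit that never gets set means no triangle needs clipping against that plane. This is a sketch with illustrative names, not code from the article:

```cpp
#include <cassert>

// Bits mirror the clipFlag constants used by ClipTriangle in Listing 4.
const long kClipFront = 1, kClipBack = 2, kClipLeft = 4,
           kClipRight = 8, kClipTop = 16, kClipBottom = 32;

// Outcode for one camera-space vertex: set a plane's bit when the vertex
// lies on the plane's negative side, using the normals tabulated above
// (d = view plane distance, h = height to width ratio).
long ComputeClipFlags(float x, float y, float z, float d, float h)
{
    long flags = 0;
    if (z < d)                  flags |= kClipFront;    // behind z = d
    if (z > 1.0f)               flags |= kClipBack;     // beyond z = 1.0
    if (z + x * d < 0.0f)       flags |= kClipLeft;     // (d, 0, 1) . P < 0
    if (z - x * d < 0.0f)       flags |= kClipRight;    // (-d, 0, 1) . P < 0
    if (y * d + z * h < 0.0f)   flags |= kClipTop;      // (0, d, h) . P < 0
    if (z * h - y * d < 0.0f)   flags |= kClipBottom;   // (0, -d, h) . P < 0
    return flags;
}
```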

After determining that a triangle is split by a clipping plane (by observing that one or two vertices lie on the negative side of the plane), we must find the actual points of intersection along the triangle's edges. These points will serve as the vertices for the clipped triangle(s). The line segment connecting two vertices V1 and V2 can be described by the parametric equation

P(t) = V1 + t(V2 - V1)   (16)

where t ranges from 0.0 to 1.0. A plane is described by the equation

N · P = D   (17)

where N is the plane's normal vector and D is the distance from the origin to the plane along the direction of N. To determine at what value of t a line segment intersects a plane, we substitute the function P(t) from equation (16) for P in equation (17). This gives us

N · (V1 + t(V2 - V1)) = D   (18)

Solving this equation for t, we obtain

t = (D - N · V1) / (N · (V2 - V1))   (19)

The value of D is the view plane distance d for the front plane, 1.0 for the back plane, and 0.0 for the side planes since they pass through the camera space origin. Once the value of t has been calculated, we simply plug it into equation (16) to obtain the interpolated vertex position.

When implementing vertex interpolation for a clipping function, it is very important to consistently choose which vertex is to be V1 and which vertex is to be V2 for equation (19). Consider the case when there are two adjacent triangles which share an edge that intersects one of the clipping planes. If, when the first triangle is clipped, V1 is chosen to be the vertex lying on the positive side of the plane, and when the second triangle is clipped, V1 is chosen to be the vertex lying on the negative side of the plane, the resulting values of t may not be exactly the same due to different floating point round-off errors. This would cause the interpolated vertex for the first triangle to be slightly misaligned with the interpolated vertex for the second triangle, possibly causing visible artifacts along the clipped edge.
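
To see why the ordering matters, specialize equation (19) to a side plane (D = 0): t then depends only on the two signed distances N · V1 and N · V2. A sketch like the following (ClipParameter is an illustrative name) always passes the negative-side distance first, so two triangles sharing a clipped edge evaluate exactly the same expression and obtain bit-identical values of t:

```cpp
#include <cassert>

// Equation (19) for a side plane (D = 0): t = d1 / (d1 - d2), where
// d1 = N . V1 comes from the vertex on the negative side of the plane
// and d2 = N . V2 from the vertex on the positive side. Swapping the
// operands computes 1 - t with different floating point roundoff.
float ClipParameter(float dNegative, float dPositive)
{
    return dNegative / (dNegative - dPositive);
}
```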

The ClipTriangle function shown in Listing 4 demonstrates a good triangle clipping implementation. When the function needs to interpolate vertices, it always chooses the vertex lying on the negative side of a clipping plane as V1 for the application of equation (19). The clipFlags parameter indicates to the clipping function which planes actually need to be considered. The next section describes a method for selecting its initial value. Any triangle that is created during the clipping process is recursively clipped using a value for clipFlags which excludes any planes against which its corresponding original triangle has already been clipped.

#### Listing 4: Triangle clipping

#### InterpolateVertices

This routine produces a new vertex whose position, color, transparency, and texture mapping coordinates are the result of interpolating between two given vertices using a given parameter t, where 0.0 <= t <= 1.0.

```cpp
void MyEngine::InterpolateVertices(const Vertex *v1,
const Vertex *v2, float t, Vertex *newVertex)
{
newVertex->point.x = v1->point.x +
t * (v2->point.x - v1->point.x);
newVertex->point.y = v1->point.y +
t * (v2->point.y - v1->point.y);
newVertex->point.z = v1->point.z +
t * (v2->point.z - v1->point.z);
newVertex->red = v1->red + t * (v2->red - v1->red);
newVertex->green = v1->green + t * (v2->green - v1->green);
newVertex->blue = v1->blue + t * (v2->blue - v1->blue);
newVertex->alpha = v1->alpha + t * (v2->alpha - v1->alpha);
newVertex->u = v1->u + t * (v2->u - v1->u);
newVertex->v = v1->v + t * (v2->v - v1->v);
}
```

#### ClipTriangle

This routine clips a given triangle to the view frustum. It is assumed that all three vertices are closer to the camera than the back plane. The resulting clipped triangle(s) are added to the triangle array pointed to by MyEngine::triangleArray. The clipFlags parameter is a combination of the following bit flags, which indicate which planes need to be considered for the triangle.

```cpp
const long clipFlagNone = 0;
const long clipFlagFront = 1;
const long clipFlagBack = 2;
const long clipFlagLeft = 4;
const long clipFlagRight = 8;
const long clipFlagTop = 16;
const long clipFlagBottom = 32;
const long clipFlagAll = 63;
void MyEngine::ClipTriangle(const Triangle *triangle,
long clipFlags)
{
Triangle      t;
float d = viewPlaneDistance;
float h = heightOverWidth;
long i1 = triangle->index[0];
long i2 = triangle->index[1];
long i3 = triangle->index[2];
const Vertex *v1 = &vertexArray[i1];
const Vertex *v2 = &vertexArray[i2];
const Vertex *v3 = &vertexArray[i3];
if (clipFlags & clipFlagFront)      // Clip against front plane.
{
long b1 = (v1->point.z < d);      // Vertex 1 on negative side?
long b2 = (v2->point.z < d);      // Vertex 2 on negative side?
long b3 = (v3->point.z < d);      // Vertex 3 on negative side?
// Count number of vertices on negative side of plane.
long b = b1 + b2 + b3;
if (b != 0)
{
if (b == 1)   // One vertex is on the negative side.
{                  // Find out which one it is.
if (b1)
{
// Chop original triangle down to a smaller quadrilateral. Modify
// the original triangle to serve as one half of the quad and create
// a new triangle to fill in the other half.
i1 = vertexCount++;
long i = vertexCount++;
Vertex *v = &vertexArray[i1];
InterpolateVertices(v1, v2, (d - v1->point.z) /
(v2->point.z - v1->point.z), v);
InterpolateVertices(v1, v3, (d - v1->point.z) /
(v3->point.z - v1->point.z), &vertexArray[i]);
t.index[0] = i;
t.index[1] = i1;
t.index[2] = i3;
v1 = v;
}
else if (b2)
{
i2 = vertexCount++;
long i = vertexCount++;
Vertex *v = &vertexArray[i2];
InterpolateVertices(v2, v3, (d - v2->point.z) /
(v3->point.z - v2->point.z), v);
InterpolateVertices(v2, v1, (d - v2->point.z) /
(v1->point.z - v2->point.z), &vertexArray[i]);
t.index[0] = i1;
t.index[1] = i;
t.index[2] = i2;
v2 = v;
}
else
{
i3 = vertexCount++;
long i = vertexCount++;
Vertex *v = &vertexArray[i3];
InterpolateVertices(v3, v1, (d - v3->point.z) /
(v1->point.z - v3->point.z), v);
InterpolateVertices(v3, v2, (d - v3->point.z) /
(v2->point.z - v3->point.z), &vertexArray[i]);
t.index[0] = i3;
t.index[1] = i2;
t.index[2] = i;
v3 = v;
}
// Clip new triangle recursively to preserve order.
// New triangle doesn't need to be clipped against front plane again.
ClipTriangle(&t, clipFlags & (clipFlagLeft |
clipFlagRight | clipFlagTop | clipFlagBottom));
}
else if (b == 2)   // Two vertices are on the negative side.
{                        // Find out which one is NOT.
if (!b1)
{
// Chop original triangle down to smaller triangle.
i2 = vertexCount++;
i3 = vertexCount++;
Vertex *v = &vertexArray[i2];
Vertex *w = &vertexArray[i3];
InterpolateVertices(v2, v1, (d - v2->point.z) /
(v1->point.z - v2->point.z), v);
InterpolateVertices(v3, v1, (d - v3->point.z) /
(v1->point.z - v3->point.z), w);
v2 = v;
v3 = w;
}
else if (!b2)
{
i3 = vertexCount++;
i1 = vertexCount++;
Vertex *v = &vertexArray[i3];
Vertex *w = &vertexArray[i1];
InterpolateVertices(v3, v2, (d - v3->point.z) /
(v2->point.z - v3->point.z), v);
InterpolateVertices(v1, v2, (d - v1->point.z) /
(v2->point.z - v1->point.z), w);
v3 = v;
v1 = w;
}
else
{
i1 = vertexCount++;
i2 = vertexCount++;
Vertex *v = &vertexArray[i1];
Vertex *w = &vertexArray[i2];
InterpolateVertices(v1, v3, (d - v1->point.z) /
(v3->point.z - v1->point.z), v);
InterpolateVertices(v2, v3, (d - v2->point.z) /
(v3->point.z - v2->point.z), w);
v1 = v;
v2 = w;
}
}
else return;   // All three vertices on negative side - reject triangle.
}
}

    if (clipFlags & clipFlagLeft)   // Clip against left plane.
    {
        // Calculate dot product of each vertex position with plane's normal.
        float d1 = v1->point.z + v1->point.x * d;
        float d2 = v2->point.z + v2->point.x * d;
        float d3 = v3->point.z + v3->point.x * d;
        
        long b1 = (d1 < 0.0F);
        long b2 = (d2 < 0.0F);
        long b3 = (d3 < 0.0F);
        long b = b1 + b2 + b3;
        if (b != 0)
        {
            if (b == 1)
            {
                if (b1)
                {
                    i1 = vertexCount++;
                    long i = vertexCount++;
                    Vertex *v = &vertexArray[i1];
                    InterpolateVertices(v1, v2, d1 / (d1 - d2), v);
                    InterpolateVertices(v1, v3, d1 / (d1 - d3),
                        &vertexArray[i]);
                    t.index[0] = i;
                    t.index[1] = i1;
                    t.index[2] = i3;
                    v1 = v;
                }
                else if (b2)
                {
                    i2 = vertexCount++;
                    long i = vertexCount++;
                    Vertex *v = &vertexArray[i2];
                    InterpolateVertices(v2, v3, d2 / (d2 - d3), v);
                    InterpolateVertices(v2, v1, d2 / (d2 - d1),
                        &vertexArray[i]);
                    t.index[0] = i1;
                    t.index[1] = i;
                    t.index[2] = i2;
                    v2 = v;
                }
                else
                {
                    i3 = vertexCount++;
                    long i = vertexCount++;
                    Vertex *v = &vertexArray[i3];
                    InterpolateVertices(v3, v1, d3 / (d3 - d1), v);
                    InterpolateVertices(v3, v2, d3 / (d3 - d2),
                        &vertexArray[i]);
                    t.index[0] = i3;
                    t.index[1] = i2;
                    t.index[2] = i;
                    v3 = v;
                }
                
                ClipTriangle(&t, clipFlags & (clipFlagRight |
                    clipFlagTop | clipFlagBottom));
            }
            else if (b == 2)
            {
                if (!b1)
                {
                    i2 = vertexCount++;
                    i3 = vertexCount++;
                    Vertex *v = &vertexArray[i2];
                    Vertex *w = &vertexArray[i3];
                    InterpolateVertices(v2, v1, d2 / (d2 - d1), v);
                    InterpolateVertices(v3, v1, d3 / (d3 - d1), w);
                    v2 = v;
                    v3 = w;
                }
                else if (!b2)
                {
                    i3 = vertexCount++;
                    i1 = vertexCount++;
                    Vertex *v = &vertexArray[i3];
                    Vertex *w = &vertexArray[i1];
                    InterpolateVertices(v3, v2, d3 / (d3 - d2), v);
                    InterpolateVertices(v1, v2, d1 / (d1 - d2), w);
                    v3 = v;
                    v1 = w;
                }
                else
                {
                    i1 = vertexCount++;
                    i2 = vertexCount++;
                    Vertex *v = &vertexArray[i1];
                    Vertex *w = &vertexArray[i2];
                    InterpolateVertices(v1, v3, d1 / (d1 - d3), v);
                    InterpolateVertices(v2, v3, d2 / (d2 - d3), w);
                    v1 = v;
                    v2 = w;
                }
            }
            else return;
        }
    }

    if (clipFlags & clipFlagRight)   // Clip against right plane.
    {
        float d1 = v1->point.z - v1->point.x * d;
        float d2 = v2->point.z - v2->point.x * d;
        float d3 = v3->point.z - v3->point.x * d;
        
        long b1 = (d1 < 0.0F);
        long b2 = (d2 < 0.0F);
        long b3 = (d3 < 0.0F);
        long b = b1 + b2 + b3;
        if (b != 0)
        {
            if (b == 1)
            {
                if (b1)
                {
                    i1 = vertexCount++;
                    long i = vertexCount++;
                    Vertex *v = &vertexArray[i1];
                    InterpolateVertices(v1, v2, d1 / (d1 - d2), v);
                    InterpolateVertices(v1, v3, d1 / (d1 - d3),
                        &vertexArray[i]);
                    t.index[0] = i;
                    t.index[1] = i1;
                    t.index[2] = i3;
                    v1 = v;
                }
                else if (b2)
                {
                    i2 = vertexCount++;
                    long i = vertexCount++;
                    Vertex *v = &vertexArray[i2];
                    InterpolateVertices(v2, v3, d2 / (d2 - d3), v);
                    InterpolateVertices(v2, v1, d2 / (d2 - d1),
                        &vertexArray[i]);
                    t.index[0] = i1;
                    t.index[1] = i;
                    t.index[2] = i2;
                    v2 = v;
                }
                else
                {
                    i3 = vertexCount++;
                    long i = vertexCount++;
                    Vertex *v = &vertexArray[i3];
                    InterpolateVertices(v3, v1, d3 / (d3 - d1), v);
                    InterpolateVertices(v3, v2, d3 / (d3 - d2),
                        &vertexArray[i]);
                    t.index[0] = i3;
                    t.index[1] = i2;
                    t.index[2] = i;
                    v3 = v;
                }
                
                ClipTriangle(&t, clipFlags & (clipFlagTop |
                    clipFlagBottom));
            }
            else if (b == 2)
            {
                if (!b1)
                {
                    i2 = vertexCount++;
                    i3 = vertexCount++;
                    Vertex *v = &vertexArray[i2];
                    Vertex *w = &vertexArray[i3];
                    InterpolateVertices(v2, v1, d2 / (d2 - d1), v);
                    InterpolateVertices(v3, v1, d3 / (d3 - d1), w);
                    v2 = v;
                    v3 = w;
                }
                else if (!b2)
                {
                    i3 = vertexCount++;
                    i1 = vertexCount++;
                    Vertex *v = &vertexArray[i3];
                    Vertex *w = &vertexArray[i1];
                    InterpolateVertices(v3, v2, d3 / (d3 - d2), v);
                    InterpolateVertices(v1, v2, d1 / (d1 - d2), w);
                    v3 = v;
                    v1 = w;
                }
                else
                {
                    i1 = vertexCount++;
                    i2 = vertexCount++;
                    Vertex *v = &vertexArray[i1];
                    Vertex *w = &vertexArray[i2];
                    InterpolateVertices(v1, v3, d1 / (d1 - d3), v);
                    InterpolateVertices(v2, v3, d2 / (d2 - d3), w);
                    v1 = v;
                    v2 = w;
                }
            }
            else return;
        }
    }

    if (clipFlags & clipFlagTop)   // Clip against top plane.
    {
        float d1 = v1->point.z * h + v1->point.y * d;
        float d2 = v2->point.z * h + v2->point.y * d;
        float d3 = v3->point.z * h + v3->point.y * d;
        
        long b1 = (d1 < 0.0F);
        long b2 = (d2 < 0.0F);
        long b3 = (d3 < 0.0F);
        long b = b1 + b2 + b3;
        if (b != 0)
        {
            if (b == 1)
            {
                if (b1)
                {
                    i1 = vertexCount++;
                    long i = vertexCount++;
                    Vertex *v = &vertexArray[i1];
                    InterpolateVertices(v1, v2, d1 / (d1 - d2), v);
                    InterpolateVertices(v1, v3, d1 / (d1 - d3),
                        &vertexArray[i]);
                    t.index[0] = i;
                    t.index[1] = i1;
                    t.index[2] = i3;
                    v1 = v;
                }
                else if (b2)
                {
                    i2 = vertexCount++;
                    long i = vertexCount++;
                    Vertex *v = &vertexArray[i2];
                    InterpolateVertices(v2, v3, d2 / (d2 - d3), v);
                    InterpolateVertices(v2, v1, d2 / (d2 - d1),
                        &vertexArray[i]);
                    t.index[0] = i1;
                    t.index[1] = i;
                    t.index[2] = i2;
                    v2 = v;
                }
                else
                {
                    i3 = vertexCount++;
                    long i = vertexCount++;
                    Vertex *v = &vertexArray[i3];
                    InterpolateVertices(v3, v1, d3 / (d3 - d1), v);
                    InterpolateVertices(v3, v2, d3 / (d3 - d2),
                        &vertexArray[i]);
                    t.index[0] = i3;
                    t.index[1] = i2;
                    t.index[2] = i;
                    v3 = v;
                }
                
                ClipTriangle(&t, clipFlags & clipFlagBottom);
            }
            else if (b == 2)
            {
                if (!b1)
                {
                    i2 = vertexCount++;
                    i3 = vertexCount++;
                    Vertex *v = &vertexArray[i2];
                    Vertex *w = &vertexArray[i3];
                    InterpolateVertices(v2, v1, d2 / (d2 - d1), v);
                    InterpolateVertices(v3, v1, d3 / (d3 - d1), w);
                    v2 = v;
                    v3 = w;
                }
                else if (!b2)
                {
                    i3 = vertexCount++;
                    i1 = vertexCount++;
                    Vertex *v = &vertexArray[i3];
                    Vertex *w = &vertexArray[i1];
                    InterpolateVertices(v3, v2, d3 / (d3 - d2), v);
                    InterpolateVertices(v1, v2, d1 / (d1 - d2), w);
                    v3 = v;
                    v1 = w;
                }
                else
                {
                    i1 = vertexCount++;
                    i2 = vertexCount++;
                    Vertex *v = &vertexArray[i1];
                    Vertex *w = &vertexArray[i2];
                    InterpolateVertices(v1, v3, d1 / (d1 - d3), v);
                    InterpolateVertices(v2, v3, d2 / (d2 - d3), w);
                    v1 = v;
                    v2 = w;
                }
            }
            else return;
        }
    }

    if (clipFlags & clipFlagBottom)   // Clip against bottom plane.
    {
        float d1 = v1->point.z * h - v1->point.y * d;
        float d2 = v2->point.z * h - v2->point.y * d;
        float d3 = v3->point.z * h - v3->point.y * d;
        
        long b1 = (d1 < 0.0F);
        long b2 = (d2 < 0.0F);
        long b3 = (d3 < 0.0F);
        long b = b1 + b2 + b3;
        if (b != 0)
        {
            if (b == 1)
            {
                if (b1)
                {
                    i1 = vertexCount++;
                    long i = vertexCount++;
                    InterpolateVertices(v1, v2, d1 / (d1 - d2),
                        &vertexArray[i1]);
                    InterpolateVertices(v1, v3, d1 / (d1 - d3),
                        &vertexArray[i]);
                    t.index[0] = i;
                    t.index[1] = i1;
                    t.index[2] = i3;
                }
                else if (b2)
                {
                    i2 = vertexCount++;
                    long i = vertexCount++;
                    InterpolateVertices(v2, v3, d2 / (d2 - d3),
                        &vertexArray[i2]);
                    InterpolateVertices(v2, v1, d2 / (d2 - d1),
                        &vertexArray[i]);
                    t.index[0] = i1;
                    t.index[1] = i;
                    t.index[2] = i2;
                }
                else
                {
                    i3 = vertexCount++;
                    long i = vertexCount++;
                    InterpolateVertices(v3, v1, d3 / (d3 - d1),
                        &vertexArray[i3]);
                    InterpolateVertices(v3, v2, d3 / (d3 - d2),
                        &vertexArray[i]);
                    t.index[0] = i3;
                    t.index[1] = i2;
                    t.index[2] = i;
                }
                
                ClipTriangle(&t, clipFlagNone);
            }
            else if (b == 2)
            {
                if (!b1)
                {
                    i2 = vertexCount++;
                    i3 = vertexCount++;
                    InterpolateVertices(v2, v1, d2 / (d2 - d1),
                        &vertexArray[i2]);
                    InterpolateVertices(v3, v1, d3 / (d3 - d1),
                        &vertexArray[i3]);
                }
                else if (!b2)
                {
                    i3 = vertexCount++;
                    i1 = vertexCount++;
                    InterpolateVertices(v3, v2, d3 / (d3 - d2),
                        &vertexArray[i3]);
                    InterpolateVertices(v1, v2, d1 / (d1 - d2),
                        &vertexArray[i1]);
                }
                else
                {
                    i1 = vertexCount++;
                    i2 = vertexCount++;
                    InterpolateVertices(v1, v3, d1 / (d1 - d3),
                        &vertexArray[i1]);
                    InterpolateVertices(v2, v3, d2 / (d2 - d3),
                        &vertexArray[i2]);
                }
            }
            else return;
        }
    }
    
    // Add the clipped triangle to the triangle array.
    Triangle *newTriangle = &triangleArray[triangleCount++];
    newTriangle->index[0] = i1;
    newTriangle->index[1] = i2;
    newTriangle->index[2] = i3;
}
```
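
The clipping code above leans on an InterpolateVertices helper that appears earlier in the listing. For readers following along, here is a minimal sketch of what such a helper typically does, assuming the vertex fields described at the start of the article (camera-space position, color, alpha, and texture coordinates); the field names here are illustrative rather than the article's exact ones.

```
struct Vector3D
{
    float x, y, z;
};

// Illustrative vertex layout based on the fields described in the text.
struct Vertex
{
    Vector3D point;     // Camera-space position.
    float red, green, blue, alpha;
    float u, v;         // Texture map coordinates.
};

// Linearly interpolate every attribute of two vertices. The parameter t is
// the fraction of the way from v1 to v2 at which the clip plane cuts the
// edge, exactly as computed by the clipping code above.
void InterpolateVertices(const Vertex *v1, const Vertex *v2,
        float t, Vertex *result)
{
    float s = 1.0F - t;
    result->point.x = s * v1->point.x + t * v2->point.x;
    result->point.y = s * v1->point.y + t * v2->point.y;
    result->point.z = s * v1->point.z + t * v2->point.z;
    result->red   = s * v1->red   + t * v2->red;
    result->green = s * v1->green + t * v2->green;
    result->blue  = s * v1->blue  + t * v2->blue;
    result->alpha = s * v1->alpha + t * v2->alpha;
    result->u = s * v1->u + t * v2->u;
    result->v = s * v1->v + t * v2->v;
}
```

Because the interpolation happens in camera space, before the perspective divide, plain linear interpolation of the position and texture coordinates along the edge is geometrically correct.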

### Visibility Determination

Visibility determination is the mechanism by which an engine figures out early in the rendering process what parts of a scene are potentially visible for a given camera configuration. Determining the potentially visible set (sometimes abbreviated PVS) of objects usually involves algorithms which are very specific to a particular engine, and the process often appears to be more of an art than a science. The highest level of visibility determination is heavily dependent on the type of environment an engine has to deal with. The methods that an engine would use to determine what interior walls of a castle are visible are completely different from the methods that would be used to determine the visibility of large objects in outer space. Discussing the complex systems in use by today's state-of-the-art engines to determine the visibility of large structures in a large environment is beyond the scope of this article. This section describes the lower levels of visibility determination and provides a method for quickly determining whether all or part of an object might intersect the view frustum (thus making it potentially visible).

We have already visited the lowest level of visibility determination in the previous section. Triangle-level visibility is handled in part by the clipping algorithm, which is able to determine whether a triangle is completely visible, partially visible, or not visible at all. Before a triangle is clipped, however, we should check whether it is actually facing the camera. As long as solid models are being used, any triangle that faces away from the camera is completely obscured by some other part of the model. The process of removing these triangles from the rendering pipeline is called backface culling. We can quickly determine whether a triangle faces away from the camera by calculating the dot product between the triangle's normal vector and the direction to the camera. If the dot product is positive, then the triangle is facing the camera; otherwise, it should be culled. The direction to the camera for a particular triangle can be obtained by subtracting any one of the triangle's three vertices from the camera position. However, the vertex positions are represented in a model's local coordinate space, while the camera position is represented in world space. Before performing any backface culling calculations for a particular model, we therefore need to transform the global camera position into the model's object space. This is easily accomplished by multiplying the camera position by the inverse of the model's object-to-world transformation matrix, as demonstrated in the RenderMesh function shown in Listing 6.
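
As a standalone illustration of this test, the following sketch uses minimal stand-in definitions for Vector3D and its dot-product operator; the engine's real versions come from its own math code, so the names here are only illustrative.

```
// Minimal stand-in for the engine's vector type.
struct Vector3D
{
    float x, y, z;
};

inline Vector3D operator -(const Vector3D& a, const Vector3D& b)
{
    return Vector3D{a.x - b.x, a.y - b.y, a.z - b.z};
}

// Dot product, matching the article's use of operator * between vectors.
inline float operator *(const Vector3D& a, const Vector3D& b)
{
    return (a.x * b.x + a.y * b.y + a.z * b.z);
}

// A triangle faces the camera when the direction from any one of its
// vertices to the object-space camera position makes a positive dot
// product with the triangle's normal.
bool FacesCamera(const Vector3D& localCameraPosition,
        const Vector3D& vertex, const Vector3D& normal)
{
    return ((localCameraPosition - vertex) * normal > 0.0F);
}
```

A triangle whose normal points back toward the camera passes the test; one whose normal points away is culled before any clipping work is spent on it.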

At the next higher level above individual triangles, we need to be able to determine the visibility of entire models such as characters and projectiles. Model-level visibility is the primary topic of this section and will be accomplished through the use of bounding volumes. Bounding volumes are shapes which have a simple geometry and completely encompass the models to which they correspond. The simplest geometry that can be used for a model's bounding volume is a sphere. The easiest way to obtain a decent bounding sphere for a triangle mesh is to calculate the average of the mesh's vertex coordinates to serve as the center and the maximum distance from any vertex to the center to serve as the radius. For most meshes it is possible to construct a bounding volume such as an ellipsoid or a box which contains less empty space around a model than a sphere does, but such a volume would require considerably more expensive visibility calculations. We therefore restrict our discussion to bounding spheres in the interests of simplicity and efficiency.
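
The average-center scheme just described can be sketched as follows; the Vector3D type and the function name are illustrative rather than the engine's actual API.

```
#include <cmath>

struct Vector3D
{
    float x, y, z;
};

// Compute a simple bounding sphere for a set of vertex positions: the
// center is the average of the positions and the radius is the distance
// to the farthest vertex, as described in the text. Tighter spheres
// exist, but they cost more to compute.
void ComputeBoundingSphere(const Vector3D *vertex, long count,
        Vector3D *center, float *radius)
{
    float cx = 0.0F, cy = 0.0F, cz = 0.0F;
    for (long a = 0; a < count; a++)
    {
        cx += vertex[a].x;
        cy += vertex[a].y;
        cz += vertex[a].z;
    }
    cx /= count; cy /= count; cz /= count;
    
    float r2 = 0.0F;   // Track squared radius; take one sqrt at the end.
    for (long a = 0; a < count; a++)
    {
        float dx = vertex[a].x - cx;
        float dy = vertex[a].y - cy;
        float dz = vertex[a].z - cz;
        float d2 = dx * dx + dy * dy + dz * dz;
        if (d2 > r2) r2 = d2;
    }
    
    center->x = cx; center->y = cy; center->z = cz;
    *radius = std::sqrt(r2);
}
```

This runs once when a mesh is loaded, so its cost is irrelevant to the per-frame rendering budget.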

Before we spend valuable processor time transforming a mesh's vertices into camera space and clipping its triangles to the view frustum, we want to make sure that the mesh's bounding sphere actually falls at least partially within the camera's field of view. We apply this test by measuring the distance from the bounding sphere's center to each of the view frustum's planes in world space. If the sphere's center falls on the negative side of any plane by a distance greater than or equal to its radius, then no part of the model surrounded by the sphere is visible, and the model should not be rendered. If the sphere's center falls on the positive side of a plane by a distance greater than or equal to its radius, then we know that no vertex in the mesh falls on the negative side of that plane. This information enables us to make an optimization later by excluding that plane when the mesh's triangles are clipped. The only bits that need to be set in the clipFlags parameter passed to the ClipTriangle function are those corresponding to planes whose distance from the bounding sphere's center is less than the sphere's radius.

In order to perform the bounding sphere calculations in world space, we need to know the world space normal vectors for each of the view frustum planes. The normal vector for the front plane is simply the camera's viewing direction. The normal vectors for the four side planes are calculated in the PrepareToRender function shown in Listing 2. The actual bounding sphere visibility test is implemented in the SphereVisible function shown in Listing 5. This function returns a boolean value indicating whether the sphere is at least partially visible, and if so, also returns a set of flags indicating which planes have to be checked during the clipping process.

#### Listing 5: SphereVisible (bounding sphere visibility test)

This function returns a boolean value indicating whether a mesh's bounding sphere is visible. If true, then the clipFlags parameter is set to indicate which planes the sphere intersects and thus need to be considered for clipping.

```
bool MyEngine::SphereVisible(const Vector3D& worldCenter,
        float radius, long *clipFlags)
{
    long flags = clipFlagNone;
    float f = viewPlaneDistance * maxDepth;
    
    Vector3D cameraRelativeCenter = worldCenter - cameraPosition;
    float d = cameraRelativeCenter * viewDirection;
    if (d < f - radius) return (false);
    if (d < f + radius) flags |= clipFlagFront;
    
    d = cameraRelativeCenter * leftPlaneNormal;
    if (d < -radius) return (false);
    if (d < radius) flags |= clipFlagLeft;
    
    d = cameraRelativeCenter * rightPlaneNormal;
    if (d < -radius) return (false);
    if (d < radius) flags |= clipFlagRight;
    
    d = cameraRelativeCenter * topPlaneNormal;
    if (d < -radius) return (false);
    if (d < radius) flags |= clipFlagTop;
    
    d = cameraRelativeCenter * bottomPlaneNormal;
    if (d < -radius) return (false);
    if (d < radius) flags |= clipFlagBottom;
    
    *clipFlags = flags;
    return (true);
}
```

### Summary

This article has covered several fundamental topics with which every 3D engine designer needs to be familiar. We now have all of the mathematical tools necessary to render an arbitrary triangle mesh on the screen. These tools are combined in the RenderMesh function shown in Listing 6. This function takes a mesh and first determines whether it is visible by calling the SphereVisible function. If it is visible, then the mesh's vertices are transformed into camera space with the TransformVertices function. Finally, backface culling and clipping are performed one triangle at a time.

This is as far as we can go and still maintain platform-independence. The only remaining step is to transform the vertices into screen space (by applying equations (14) and (15)) and send them to the 3D acceleration API. We could gain some efficiency by changing the vertexArray and triangleArray members of the MyEngine class to be arrays of the native vertex and triangle structures of the API being used (for instance, TQAVTexture and TQAIndexedTriangle under QuickDraw 3D RAVE). If this is done, then the TransformVertices function would have to be modified to store the camera space vertices in the API's native format, and the InterpolateVertices function would have to be modified to work with this format as well.
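
Equations (14) and (15) are not reproduced in this excerpt, so as a stand-in, here is a generic perspective projection for the y-down camera space described earlier; e, centerX, and centerY are assumed camera parameters (the view-plane distance and the pixel at which the viewing direction meets the screen), not names from the article.

```
struct Vector3D
{
    float x, y, z;
};

// Project a camera-space point to pixel coordinates. Because camera space
// is y-down like the screen, no sign flip is needed for the y coordinate.
void ProjectToScreen(const Vector3D& p, float e,
        float centerX, float centerY, float *screenX, float *screenY)
{
    float scale = e / p.z;    // Perspective divide by depth.
    *screenX = centerX + p.x * scale;
    *screenY = centerY + p.y * scale;
}
```

The camera-space z value is kept alongside the projected coordinates so that the hardware can perform depth buffering.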

There are, of course, many additional components to every 3D engine that this article has not explored such as lighting, animation, and collision detection, whose design depends more heavily on an engine's ultimate application than the material presented herein. The goal has been to provide a good starting point for those who wish to create their own 3D universe and to provide a strong foundation on which to build these higher-level systems.

#### Listing 6: RenderMesh (rendering a mesh)

This function does everything necessary to render a mesh. The PrepareToRender function should be called before any RenderMesh calls are made.

```
void MyEngine::RenderMesh(const Mesh *mesh)
{
    long clipFlags;
    
    // First check whether the bounding sphere is visible.
    if (SphereVisible(mesh->objectToWorldTransform *
        mesh->boundingSphereCenter, mesh->boundingSphereRadius,
        &clipFlags))
    {
        const Vertex *vertex = mesh->vertexArray;
        const Triangle *triangle = mesh->triangleArray;
        
        // Transform vertices into camera space.
        TransformVertices(mesh->vertexCount, vertex,
            mesh->objectToWorldTransform);
        
        // Calculate camera position in object space.
        Vector3D localCameraPosition =
            Inverse(mesh->objectToWorldTransform) * cameraPosition;
        
        // Perform backface culling and clipping for each triangle.
        triangleCount = 0;
        for (long a = 0; a < mesh->triangleCount; a++)
        {
            const Vertex *v = &vertex[triangle->index[0]];
            float d = (localCameraPosition - v->point) *
                triangle->normal;
            if (d > 0.0F) ClipTriangle(triangle, clipFlags);
            triangle++;
        }
        
        if (triangleCount != 0)
        {
            // At this point we can transform the vertices into screen space
            // using equations (14) and (15), and then submit them to the 3D
            // acceleration API for rendering.
        }
    }
}
```

### Bibliography

• Abrash, Michael. Zen of Graphics Programming. The Coriolis Group, 1996.
• Foley, James D., et al. Computer Graphics: Principles and Practice. Addison-Wesley, 1990.
• Paeth, Alan W., ed. Graphics Gems V. AP Professional, 1995.
• Thomas, Kas. "Poor Man's Bryce, Part II," MacTech Magazine.

Eric Lengyel has recently joined Apple Computer where he is working on graphics technology for MacOS X. He can be reached at lengyel@apple.com.