Cubby
Volume Number: 16 (2000)
Issue Number: 9
Column Tag: QuickDraw 3D Tricks
Cubby: Multiscreen Desktop VR Part I
by Tom Djajadiningrat and Maarten Gribnau
Multiple views and mirroring images in QuickDraw 3D
Summary
In a series of three articles we describe how to implement the visualization part of a Cubby, a desktop virtual reality system which uses three orthogonally placed head-tracked screens. This series will give you an understanding of how this type of three-dimensional display works and show you how easy it is to implement using off-the-shelf components. To facilitate implementation we use Apple's QuickDraw 3D API. Even if you are not interested in virtual reality display technology, you may still be interested in the QuickDraw 3D techniques presented here. The most important ones are multiple views on a single model and the mirroring of images without the use of offscreen GWorlds.
Introduction
Cubby is a desktop virtual reality system developed at Delft University of Technology (Djajadiningrat et al, 1997; Djajadiningrat, 1998). Cubby uses three orthogonally placed head-tracked screens which form a cubic display space (Figure 1). Through the coupling of the perspectives on the screens to the head-movements of the user in real-time, the illusion is created that a virtual scene stands inside the display space. Figure 2 shows a user in front of Cubby. Figure 3 shows a chair inside Cubby's display space from four perspectives as generated by four different head positions. Because of the way the screens are placed, Cubby allows the virtual scene to be viewed from a wide range of visual angles (see movie 'visualization.mov'). And since the virtual scene appears in front of rather than behind the screens, the user can get at the objects in the virtual scene with an instrument without the screens forming an obstruction. This makes it possible to manipulate objects by means of an instrument at the place where they appear (see movie 'manipulation.mov'). The 3D impression that Cubby creates is based purely on head-tracking. It does not use stereo, though of course this could be added.
Figure 1. The Cubby setup.
Figure 2. A user in front of Cubby.
Figure 3. Four perspectives on a desk chair as generated by different head positions.
Technically, Cubby is similar to a CAVE (Cruz-Neira et al., 1993). A CAVE is a virtual reality environment in which the walls and floor of a room form projection screens. A CAVE measures approximately 3x3x3 metres, while Cubby's display space is only 0.2x0.2x0.2 metres. Thus from a technical point of view you can think of Cubby as a miniature CAVE. From an application point of view, however, Cubby and CAVE are quite different. With a CAVE, the user is inside the cubic space while with a Cubby the user is outside it. This gives each system its pros and cons. With a CAVE, the virtual scene is all around the user. It is therefore well-suited to panoramic viewing and walkthrough or rollercoaster types of simulation. Because of its large size it is difficult to realize accurate head and hand tracking with the currently available tracking technology. With a Cubby the user looks upon the virtual scene as if it were an object. Cubby's workspace is much smaller than CAVE's, but because of this, it is possible to realize accurate tracking of head-position and instruments. Cubby is therefore well-suited to precision tasks such as surgical simulation and computer aided design. Other advantages of Cubby are that it consumes little space, is relatively low-cost, and can be built using consumer grade off-the-shelf technology.
This article describes how to implement the visualization part of a Cubby in QuickDraw 3D. It will give you a grasp of the technology behind multiple screen head-tracked displays such as Cubby and CAVE. Even if you are not interested in such technology, you may still be interested in the QuickDraw 3D techniques presented here. They are multiple views on a single model and the mirroring of images without the use of offscreen GWorlds. We also include a section troubleshooting, to help you get Cubby running, and a section Tidbits, with suggestions to further improve Cubby.
Cubby on Power Macintosh
What you should know
We assume that you are familiar with the basics of QuickDraw 3D programming. If you have not dealt with QuickDraw 3D before we suggest that you have a look at the introduction to QuickDraw 3D in Develop 22 (Fernicola and Thompson, 1995) or at chapter nine, 'QuickDraw 3D', of 'Tricks of the Mac Game Programming Gurus' (Greenstone, 1995).
We also assume that you are familiar with Part I and II of 'Desktop VR using QuickDraw 3D', two MacTech articles that appeared in July and August of 1998 (Djajadiningrat and Gribnau, 1998; Gribnau and Djajadiningrat, 1998).
Required hardware and software
To try out the QuickDraw 3D techniques in this article you need a Power Macintosh with an accelerated 3D graphics board and QuickDraw 3D 1.5.4 or later.
If you wish to build an actual Cubby you need additional electronic hardware such as a head-tracker, three projectors and possibly extra graphics boards and scan converters. As a head-tracker we use a Dynasight infra-red tracker by Origin Instruments. With regard to projectors, graphics boards and scan converters, your exact needs depend on which configuration you choose. We will discuss possible configurations in a minute. Of course, a Cubby consists of more than electronics and computer hardware alone. You also need to build a physical setup to position the projectors relative to the screens. For quick experimentation you can build the display space using cardboard and tracing paper or drafting foil and mount the projectors on tripods, provided they are not too big. Figure 4 shows what our first setup looked like. For a more permanent and robust setup you need to build the display space from 4-5mm thick projection material (available from professional photography labs) and mount the projectors on a table (Figure 5).
Figure 4. Our preliminary setup with a display space built from foamboard and drafting foil. The projectors are mounted on tripods.
Figure 5. A more robust setup. This table makes it possible to accurately line up projectors and screens.
Possible hardware configurations
Your Mac needs to generate three images, one per projection screen. You can send these images to the three projectors in several ways. You can either work with one graphics board or three graphics boards, and with either computer projectors or video projectors. (A computer projector is a projector that can directly accept the VGA output of the graphics board of your Mac; a video projector is one which only accepts a composite or S-video signal such as produced by your home video recorder. A projector which accepts VGA usually accepts S-video too, but not all video projectors accept VGA input.) This leads to four possible configurations (Figure 6).
Figure 6. A 2x2 matrix leading to four different configurations.
Figure 7. The four different configurations a-d (top to bottom).
The simplest configuration is to work with a single graphics board, use a multiplier to split the signal of this board into three identical signals, and feed each of these identical signals to a computer beamer (Figure 7a). Each projector is placed in such a way relative to its screen that the relevant part of the image appears on the screen, while the two remaining images are simply discarded by being projected off screen.
The second way is similar to the first, with the difference that we use a scan converter to convert the signal of the graphics board into a video signal, which is then split up by a video multiplier and fed to three video projectors (Figure 7b). While this results in lower image quality, it is likely to be less expensive: the extra cost of the scan converter is less than what you save by using consumer grade video projectors instead of computer beamers.
The disadvantage of using a single graphics board is that you do not make full use of the resolution of the projectors, as approximately three quarters of the display area is discarded. A way to overcome this is to use three graphics boards instead of just one. This leads to a setup in which there is one graphics board per projector, so no splitter box is necessary (Figure 7c). Of course, this assumes that your Mac has enough PCI slots free to add extra graphics boards.
The last configuration is similar to the third, the difference being that it uses video projectors rather than computer projectors (Figure 7d).
As always, there are clever ways to cut costs. For example, you can avoid the extra cost of one or more scan converters by using graphics boards which have both an output for a conventional monitor and an S-video output. Some desktop Macs and PowerBooks even have S-video outputs as part of their standard configuration.
A Look at the Cubby App
Before we dive into an explanation of the computer graphics behind Cubby, let's run the application to see what we are aiming for. Don't worry about connecting a head-tracker to the Mac; for the moment we run under mouse control. Start up the application 'Cubby' and open the model 'espresso.3df'. On screen you see an L-shape with an espresso pot (Figure 8). The L-shape comprises three QuickDraw 3D panes placed within a single window which covers the whole screen except for the menu bar. (The advantage of using three panes within a single window instead of one window per view is that the single window acts as a black backdrop to the panes. This prevents the Macintosh desktop from appearing at the outer edges of Cubby's screens.) While the espresso pot was loaded from disk, the background planes with the marbled texture form part of the Cubby application.
Now try moving the mouse. On the conventional 2D monitor that you are using now, the three perspective views in the L-shape look strangely distorted. When the views are folded into a cube, as in Cubby's display space, and coupled to the user's head-position, the espresso pot appears to stand within Cubby's space.
Choose Mirrored from the Images menu. This mirrors each image horizontally (Figure 9). By looking at the frames-per-second counter in the status bar while choosing Normal and Mirrored repeatedly, you can see that mirroring does not impose much of a speed penalty.
Figure 8. A screen shot of the Cubby application (non-mirrored).
Figure 9. A screen shot of the Cubby application (mirrored).
Multiple Head-Tracked Displays
The easiest way to think of Cubby is as three head-tracked displays. Each of the back-projection screens is a head-tracked display. For an extensive discussion of head-tracked displays please refer to MacTech July 1998 (Djajadiningrat & Gribnau, 1998). In that issue we explained that a head-tracked display needs an off-axis rather than an on-axis camera. In QuickDraw 3D speak this means a view plane camera rather than an aspect ratio camera. For Cubby we use three QuickDraw 3D views, each of which has its own view plane camera. Figures 10 to 12 each show one view plane camera, its coordinates, its viewing pyramid and its line of sight. The point at which Cubby's three screens meet is placed at the origin of the world coordinate system. Notice how the three view plane cameras share the same location C with coordinates Cx, Cy and Cz. Later we will couple the camera's location to the head-position of the user. While the three cameras share the same location, each one has a different orientation, as the line of sight of each camera is at right angles to its screen. The orientation of a view plane camera is determined by its location and the point of interest, point P in the figures. Figure 10 shows the camera for the screen in the Z=0 plane with the line of sight parallel to the z-axis. Figure 11 shows the camera for the screen in the X=0 plane with the line of sight parallel to the x-axis. Finally, Figure 12 shows the camera for the screen in the Y=0 plane with its line of sight parallel to the vertical y-axis.
Figure 10. The camera for the Z=0 plane.
Figure 11. The view plane camera for the X=0 plane.
Figure 12. The view plane camera for the Y=0 plane.
Mirroring without GWorlds
To get a convincing depth impression in Cubby the three screens need to form a single, seamless display. Therefore we cannot use conventional monitors or flat-panel displays, which always have a frame around their imageable area that would result in a disturbing seam (Figure 13). Instead we have to use projection screens. Since we are projecting on the back of the screens, we need to mirror the images that are sent to the projectors. So, how do we do this? Some projectors allow mirroring in hardware. But many projectors, especially the low-end ones, lack this feature, which means that you have to mirror the images in software. The conventional approach to software mirroring would go like this. First, we would render an image to an offscreen GWorld. Then we would copy this image to a second offscreen GWorld so that the image is mirrored. This could be done by copying the n-th one-pixel-wide column of the first GWorld to the (width-1-n)-th column of the second GWorld. Finally, we would copy this second GWorld to the screen. There are two problems with this approach. The first is that many graphics boards do not support hardware acceleration when rendering to offscreen GWorlds with QuickDraw 3D, which would make rendering slow. The second is that the copying between the GWorlds takes too much time. And to get a convincing depth impression with a head-tracked display we need all the speed we can get.
Figure 13. We cannot use ordinary monitors to build Cubby's display space because of the borders around the imageable areas.
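To make the cost of the conventional approach concrete, here is a minimal sketch of the copying step described above, assuming classic QuickDraw GWorlds and a hypothetical helper name. It is shown only to illustrate why this route is slow; Cubby does not use it.

#include <QDOffscreen.h>

// Hypothetical helper: mirror inSrc horizontally into inDst, one pixel-wide
// column at a time, using CopyBits.
static void MirrorGWorldHorizontally(GWorldPtr inSrc, GWorldPtr inDst,
	short inWidth, short inHeight)
{
	CGrafPtr theSavedPort;
	GDHandle theSavedDevice;
	Rect theSrcRect, theDstRect;
	short n;

	GetGWorld(&theSavedPort, &theSavedDevice);
	SetGWorld(inDst, NULL);
	// Black-on-white avoids CopyBits colorizing the pixels.
	ForeColor(blackColor);
	BackColor(whiteColor);
	if (LockPixels(GetGWorldPixMap(inSrc)) &&
		LockPixels(GetGWorldPixMap(inDst)))
	{
		// Copy the n-th column of the source to the (width-1-n)-th
		// column of the destination.
		for (n = 0; n < inWidth; n++)
		{
			SetRect(&theSrcRect, n, 0, n + 1, inHeight);
			SetRect(&theDstRect, inWidth - 1 - n, 0, inWidth - n, inHeight);
			CopyBits((BitMap *) *GetGWorldPixMap(inSrc),
				(BitMap *) *GetGWorldPixMap(inDst),
				&theSrcRect, &theDstRect, srcCopy, NULL);
		}
	}
	UnlockPixels(GetGWorldPixMap(inSrc));
	UnlockPixels(GetGWorldPixMap(inDst));
	SetGWorld(theSavedPort, theSavedDevice);
}

At 640x480 this is 640 separate CopyBits calls per frame per screen, on top of the unaccelerated offscreen rendering, which is exactly the kind of overhead we want to avoid.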
Luckily, there is another way to mirror the images which does not require offscreen GWorlds. This involves mirroring the virtual scene, the lights and each camera to a different octant and reversing the orientation style of the scene. Remember that three-dimensional space is divided into eight octants by the three perpendicular coordinate planes. Figures 14 to 16 show the mirroring of the camera and the background planes (the mirroring of the model and the lights is not shown). Each camera is mirrored in the plane for which it is intended. Let's consider each camera in turn. The original camera position is C, the mirrored position C'. Figure 14 shows how the camera for the Z=0 plane is mirrored in the Z=0 plane. Figure 15 shows how the camera for the X=0 plane is mirrored in the X=0 plane. Finally, Figure 16 shows how the camera for the Y=0 plane is mirrored in the Y=0 plane. Enough talk, let's look at the code. We split it up into two parts: the code concerning multiple views and the code for mirroring.
Figure 14. Mirroring in the Z=0 plane.
Figure 15. Mirroring in the X=0 plane.
Figure 16. Mirroring in the Y=0 plane.
Multiple Views
In this section we discuss:
- creating multiple views
- adjusting the view plane cameras
- submitting the views for rendering
Creating multiple views
So, how do we go about creating the three views? We use a global struct gDoc of type DocumentRec whose fields fView1, fView2 and fView3 hold the three views of Cubby. The DocumentRec struct is defined in Shell.h. You will see that we use this struct a lot. It is a simple way of passing around often-used variables such as view objects and matrices.
From Init in Shell.c (Listing 1) we call InitDocumentData in (De)Init.c, passing the global struct gDoc as a parameter.
Listing 1: Shell.c
Init
InitDocumentData(&gDoc, gWindow);
In InitDocumentData we fill in the fields fView1, fView2 and fView3 of the ioDoc parameter by calling newView in ViewCreation.c three times and passing the rectangles of the panes within our window (Listing 2).
Listing 2: (De)Init.c
InitDocumentData(DocumentPtr ioDoc, WindowPtr inWindow)
Rect theRect1={kPane1T, kPane1L, kPane1B, kPane1R};
Rect theRect2={kPane2T, kPane2L, kPane2B, kPane2R};
Rect theRect3={kPane3T, kPane3L, kPane3B, kPane3R};
ioDoc->fView1=newView(ioDoc, inWindow, &theRect1);
ioDoc->fView2=newView(ioDoc, inWindow, &theRect2);
ioDoc->fView3=newView(ioDoc, inWindow, &theRect3);
The routine newView should need no further explanation as it forms a standard part of each QuickDraw 3D application. It calls newDrawContext and passes in a rectangle to set up a pane within a window (Listing 3).
Listing 3: ViewCreation.c
newDrawContext (WindowPtr inWindow, Rect *inRect)
theDrawContextData.paneState = kQ3True;
theDrawContextData.pane.min.x = inRect->left;
theDrawContextData.pane.min.y = inRect->top;
theDrawContextData.pane.max.x = inRect->right;
theDrawContextData.pane.max.y = inRect->bottom;
The routine newDrawContext returns a new draw context to newView, which then goes on to create and set a renderer, a view plane camera and a light group. As you can see, creating multiple views is quite simple. It is a matter of repeating the view object creation found in every basic QuickDraw 3D application.
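The camera creation itself is not listed, but a minimal sketch of what it might look like follows. The helper name NewCubbyCamera and the placement values are hypothetical; they only serve to give the view a valid view plane camera, which is re-adjusted to the head position every frame anyway.

#include <QD3D.h>
#include <QD3DCamera.h>

// Hypothetical helper: create a view plane camera with placeholder values.
static TQ3CameraObject NewCubbyCamera(void)
{
	TQ3ViewPlaneCameraData theData;

	// Placement: an arbitrary position in the positive octant, looking at
	// its projection on the Z=0 plane, with a conventional up vector.
	Q3Point3D_Set(&theData.cameraData.placement.cameraLocation, 1.0, 1.0, 1.0);
	Q3Point3D_Set(&theData.cameraData.placement.pointOfInterest, 1.0, 1.0, 0.0);
	Q3Vector3D_Set(&theData.cameraData.placement.upVector, 0.0, 1.0, 0.0);

	// Range and view port. The view port covers the whole view plane area.
	theData.cameraData.range.hither = 0.05;
	theData.cameraData.range.yon = 10.0;
	theData.cameraData.viewPort.origin.x = -1.0;
	theData.cameraData.viewPort.origin.y = 1.0;
	theData.cameraData.viewPort.width = 2.0;
	theData.cameraData.viewPort.height = 2.0;

	// View plane specific fields. Cubby's screens are square, so the half
	// width and half height at the view plane are equal.
	theData.viewPlane = 1.0;
	theData.halfWidthAtViewPlane = 1.0;
	theData.halfHeightAtViewPlane = 1.0;
	theData.centerXOnViewPlane = 0.0;
	theData.centerYOnViewPlane = 0.0;

	return Q3ViewPlaneCamera_New(&theData);
}

newView would then hand the camera to the view with Q3View_SetCamera and dispose of its own reference with Q3Object_Dispose.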
Adjusting the three view plane cameras
Now we come to the most interesting part of the code: adjusting the view plane camera of each view to the current position of the head-tracker (Listing 4). We need to convert the raw head-position, expressed in the head-tracker's own coordinate system, to the world coordinate system. This is done by calling Q3Point3D_Transform with the calibration matrix in the fCalMatrix field of our struct gDoc, which is passed to AdjustCameras as the inDoc parameter. We will discuss how to create this matrix in the third part of this series.
Once we have a camera position in world coordinates, we need to ensure that it stays within the positive XYZ octant. If the camera comes too close to one of the screen planes, its hither plane is pushed through the background plane, causing the latter to disappear. This is a disturbing effect which we can avoid by carefully tuning the minimum allowed distance between camera and background plane to the hither value of the camera. Here we set the minimum allowed distance between camera and background plane to kHither+kMargin QuickDraw 3D units. A user of Cubby who comes closer to the screens than the real-world equivalent of kHither+kMargin QuickDraw 3D units will notice that the perspectives do not completely follow his head movements, but at least the background plane on that screen does not completely disappear. In the section Tidbits we discuss a more elegant way to solve this problem.
AdjustCameras finishes by calling AdjustOneCamera for each view and passing the camera position as a parameter.
Listing 4: ViewPlaneCamera.c
AdjustCameras
TQ3Point3D H, C;
// apply the calibration matrix to convert the
// raw head position to a camera position
// in world coordinates.
Q3Point3D_Transform(&H,
&inDoc->fCalMatrix,
&C);
// Limit the camera position so
// we do not move beyond the screens.
if (C.x <= kHither + kMargin) C.x = kHither + kMargin;
if (C.y <= kHither + kMargin) C.y = kHither + kMargin;
if (C.z <= kHither + kMargin) C.z = kHither + kMargin;
// Adjust the camera of each view.
AdjustOneCamera(inDoc, &C, kView1);
AdjustOneCamera(inDoc, &C, kView2);
AdjustOneCamera(inDoc, &C, kView3);
return kQ3Success ;
bail:
return kQ3Failure ;
Let's look at AdjustOneCamera (Listing 6). In this routine we adjust a view plane camera of a single view by getting the camera object from the view and updating its parameters. Our objective is to be able to fill in the TQ3ViewPlaneCameraData struct as defined in QD3DCamera.h (Listing 5). This means that we also need to be able to fill in a TQ3CameraData struct, a TQ3CameraPlacement struct and a TQ3CameraRange struct. Luckily, some variables do not change as we adjust our cameras. The halfWidthAtViewPlane, halfHeightAtViewPlane and viewPort variables stay the same during the execution of our program.
Listing 5
struct TQ3ViewPlaneCameraData {
TQ3CameraData cameraData;
float viewPlane;
float halfWidthAtViewPlane;
float halfHeightAtViewPlane;
float centerXOnViewPlane;
float centerYOnViewPlane;
};
struct TQ3CameraData {
TQ3CameraPlacement placement;
TQ3CameraRange range;
TQ3CameraViewPort viewPort;
};
struct TQ3CameraPlacement {
TQ3Point3D cameraLocation; TQ3Point3D pointOfInterest;
TQ3Vector3D upVector;
};
struct TQ3CameraRange {
float hither;
float yon;
};
We start by making a local copy C of the camera location parameter inC that was passed to AdjustOneCamera. We also set the point of interest P to inC. (Notice that the term point of interest is somewhat misleading: it determines the direction in which the camera is pointing, but with a view plane camera it need not be visible as part of our image.)
The next thing is to determine which view we are dealing with. We do this through a switch statement which looks at the inViewNumber parameter. In each case of the switch statement we get the camera object from the view, adjust the point of interest P, and set the distance to the view plane. We discuss the cases one by one.
If the inViewNumber equals kView1, we are dealing with the view on the Z=0 plane (Figure 14). First we get the camera object from the view by calling Q3View_GetCamera. As the point of interest for the camera of this view is the projection of the camera location on the Z=0 plane, we set P.z to zero. To describe the settings of a view plane camera we also need the distance from the camera to the view plane, held in theViewPlane variable. The camera for kView1 looks parallel to the Z-axis so the distance equals the z-coordinate of the camera location.
If the inViewNumber equals kView2, we are dealing with the view on the X=0 plane (Figure 15). Again we start by getting the camera object from the view by calling Q3View_GetCamera. As the point of interest for the camera of this view is the projection of the camera location on the X=0 plane, we set P.x to zero. The camera for kView2 looks parallel to the X-axis so the distance from the camera to the view plane equals the x-coordinate of the camera location.
Finally, if the inViewNumber equals kView3, we are dealing with the view on the Y=0 plane (Figure 16). Again we start by getting the camera object from the view by calling Q3View_GetCamera. As the point of interest for the camera of this view is the projection of the camera location on the Y=0 plane, we set P.y to zero. The camera for kView3 looks parallel to the Y-axis so the distance from the camera to the view plane equals the y-coordinate of the camera location. With this view we need to take care of one more thing. As the camera is looking down on the Y=0 plane as if it were a photographic enlarger, we need to change the up vector of the camera from (0,1,0) to (0,0,-1). The up vector for this view's camera points in the direction of the negative z-axis.
Now we need to look at the centerXOnViewPlane and centerYOnViewPlane settings of the view plane camera. These determine the centre of the part of the view plane that we are interested in, indicated by a Q in Figures 14 to 16. In world coordinates point Q does not change, but as centerXOnViewPlane and centerYOnViewPlane are expressed in the camera's coordinate system, they need to be adjusted whenever the camera moves. (For the moment we only consider non-mirrored cameras; we will discuss what happens when we mirror the cameras later. So ignore the if (!gMirrored) statements for now, we will return to them.)
centerXOnViewPlane is the x-coordinate of the vector PQ on the view plane, expressed in the camera's coordinate system. Likewise, centerYOnViewPlane is the y-coordinate of the vector PQ on the view plane, expressed in the camera's coordinate system. Let's look at centerXOnViewPlane and centerYOnViewPlane for each of the three views.
Figure 14 shows that for kView1, looking at the plane Z=0:
theCenterX = -C.x + kHalfWidthAtViewPlane;
theCenterY = -C.y + kHalfWidthAtViewPlane;
In Listing 6 the code says:
theCenterX = -C.x + kHalfWidthAtViewPlane + kFO;
theCenterY = -C.y + kHalfWidthAtViewPlane + kFO;
So what is this mysterious kFO? Well, this is a Fiddle Offset constant. If you look in the header MyDefines.h you will see that it equals 0.001. Set it to 0, compile and run the Cubby app. You will see that suddenly the backgrounds start to flicker. Obviously, QuickDraw 3D is not very happy if the point of interest is a perfect projection on the view plane and it has to render polygons which lie exactly in the view plane. This is why we added the Fiddle Offset. kFO is so small that it does not cause any distortion but it does get rid of the flickering.
Figure 15 shows that for kView2, looking at the plane X=0:
theCenterX = +C.z - kHalfWidthAtViewPlane + kFO;
theCenterY = -C.y + kHalfWidthAtViewPlane + kFO;
Finally, Figure 16 tells us that for kView3, looking at the plane Y=0:
theCenterX = -C.x + kHalfWidthAtViewPlane + kFO;
theCenterY = +C.z - kHalfWidthAtViewPlane + kFO;
The last variables that we have to adjust are the hither and yon planes. Remember that we set a minimum distance of kHither+kMargin QuickDraw 3D units for the camera to approach the X=0, Y=0 and Z=0 planes. Here you see what the kMargin constant is good for: we avoid pushing the hither plane through the background planes by a margin of kMargin. A background plane is not only clipped if it lies between the camera location and the hither plane, but also if it lies beyond the yon plane. We prevent this from happening by setting the yon value to the view plane distance plus kMargin QuickDraw 3D units.
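To make the constants concrete, MyDefines.h might contain something along these lines. Only the value of kFO (0.001) is given above; the other values are illustrative guesses, not necessarily the ones used in the actual Cubby application.

#define kFO                    0.001f   // fiddle offset against background flicker
#define kHither                0.05f    // hypothetical hither distance
#define kMargin                0.01f    // hypothetical safety margin
#define kHalfWidthAtViewPlane  1.0f     // hypothetical: half the edge length of a screen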
We complete the routine AdjustOneCamera by disposing the camera object through Q3Object_Dispose to balance the reference count.
Listing 6: ViewPlaneCamera.c
AdjustOneCamera
void AdjustOneCamera(DocumentPtr inDoc,
TQ3Point3D *inC,
short inViewNumber)
{
TQ3CameraObject theCamera ;
TQ3CameraPlacement theCameraPlacement;
TQ3Point3D C, P;
TQ3Vector3D theUp = { 0.0, 1.0, 0.0 };
float theViewPlane;
float theCenterX;
float theCenterY;
TQ3CameraRange theRange;
// Make a local copy of the camera location.
Q3Point3D_Set(&C, inC->x, inC->y, inC->z);
// Determine the point of interest.
// It is derived from the camera location,
// so start with a copy of the camera location.
Q3Point3D_Set(&P, inC->x, inC->y, inC->z);
switch (inViewNumber)
{
case kView1:
// Get the camera from the view
Q3View_GetCamera(inDoc->fView1, &theCamera);
P.z = 0.0;
theViewPlane = C.z;
break;
case kView2:
// Get the camera from the view
Q3View_GetCamera(inDoc->fView2, &theCamera);
P.x = 0.0;
theViewPlane = C.x;
break;
case kView3:
// Get the camera from the view
Q3View_GetCamera(inDoc->fView3, &theCamera);
P.y = 0.0;
theViewPlane = C.y;
// As this camera is pointing parallel to
// the Y-axis, we need to change the theUp vector.
Q3Vector3D_Set(&theUp, 0, 0, -1);
break;
}
if (!gMirrored)
{
switch (inViewNumber)
{
case kView1:
theCenterX = -C.x + kHalfWidthAtViewPlane + kFO;
theCenterY = -C.y + kHalfWidthAtViewPlane + kFO;
break;
case kView2:
theCenterX = +C.z - kHalfWidthAtViewPlane + kFO;
theCenterY = -C.y + kHalfWidthAtViewPlane + kFO;
break;
case kView3:
theCenterX = -C.x + kHalfWidthAtViewPlane + kFO;
theCenterY = +C.z - kHalfWidthAtViewPlane + kFO;
break;
}
}
else
{
switch (inViewNumber)
{
case kView1:
// Mirror in Z=0 plane.
C.z = -C.z;
theCenterX = +C.x - kHalfWidthAtViewPlane + kFO;
theCenterY = -C.y + kHalfWidthAtViewPlane + kFO;
break;
case kView2:
// Mirror in X=0 plane.
C.x = -C.x;
theCenterX = -C.z + kHalfWidthAtViewPlane + kFO;
theCenterY = -C.y + kHalfWidthAtViewPlane + kFO;
break;
case kView3:
// Mirror in Y=0 plane.
C.y = -C.y;
theCenterX = +C.x - kHalfWidthAtViewPlane + kFO;
theCenterY = +C.z - kHalfWidthAtViewPlane + kFO;
break;
}
}
// Adjust the hither and yon planes.
theRange.hither = kHither;
theRange.yon = theViewPlane + kMargin;
// Fill in the camera placement.
theCameraPlacement.cameraLocation = C;
theCameraPlacement.pointOfInterest = P;
theCameraPlacement.upVector = theUp;
// Fill in the fields of the camera
Q3Camera_SetPlacement(theCamera, &theCameraPlacement);
Q3ViewPlaneCamera_SetViewPlane (theCamera, theViewPlane);
Q3ViewPlaneCamera_SetCenterX (theCamera, theCenterX);
Q3ViewPlaneCamera_SetCenterY (theCamera, theCenterY);
Q3Camera_SetRange(theCamera, &theRange);
// Dispose of the camera object
Q3Object_Dispose( theCamera ) ;
}
Submitting three views for rendering
Now that we have adjusted the view plane camera of all three views, we can submit the views for rendering. Have a look at Listing 7, which shows the routine SubmitViews. The routine starts by calling Q3View_Sync for all three views. This forces all three views to finish rendering the previous frame before we start rendering the next one. Again, ignore the if (gMirrored) statements for the moment. The remaining part of SubmitViews calls SubmitOneView in a rendering loop for each of the three views.
Listing 7: Rendering.c
SubmitViews
TQ3Status SubmitViews( DocumentPtr inDoc )
{
TQ3Status theStatus ;
Q3View_Sync(inDoc->fView1);
Q3View_Sync(inDoc->fView2);
Q3View_Sync(inDoc->fView3);
if (gMirrored)
inDoc->fMirrorMatrix = inDoc->fMatrixMirrorZ0;
// The rendering loop for fView1
Q3View_StartRendering( inDoc->fView1 );
do
{
theStatus = SubmitOneView(inDoc, inDoc->fView1);
}
while ( Q3View_EndRendering(inDoc->fView1)
== kQ3ViewStatusRetraverse );
if (gMirrored)
inDoc->fMirrorMatrix = inDoc->fMatrixMirrorX0;
// The rendering loop for fView2
Q3View_StartRendering( inDoc->fView2 );
do
{
theStatus = SubmitOneView(inDoc, inDoc->fView2);
}
while ( Q3View_EndRendering(inDoc->fView2)
== kQ3ViewStatusRetraverse );
if (gMirrored)
inDoc->fMirrorMatrix = inDoc->fMatrixMirrorY0;
// The rendering loop for fView3
Q3View_StartRendering( inDoc->fView3 );
do
{
theStatus = SubmitOneView(inDoc, inDoc->fView3);
}
while ( Q3View_EndRendering(inDoc->fView3)
== kQ3ViewStatusRetraverse );
return theStatus ;
}
So we turn our attention to SubmitOneView (Listing 8) which, you guessed it, submits a single view for rendering. Again we ignore the if (gMirrored) statement for the moment. The most important aspect of this routine is that we first submit the display group fDisplaySpace, which contains the background planes, then the shader object, and finally the model which was loaded from disk. If we submitted the background planes after the shader, the lighting on the background planes would change with the user's head-position. By submitting the background planes first we avoid such disturbing shading effects. The model which was loaded from disk is shaded as normal, as it is submitted after the shader.
Listing 8: Rendering.c
SubmitOneView
TQ3Status SubmitOneView( DocumentPtr inDoc,
TQ3ViewObject inView)
{
// if we wish to mirror the image, we need
// to submit a mirroring matrix and change
// the orientation style from counter clockwise
// to clockwise.
if (gMirrored)
{
Q3MatrixTransform_Submit( &inDoc->fMirrorMatrix,
inView) ;
Q3Style_Submit( inDoc->fOrientationStyle,
inView);
}
Q3Style_Submit(inDoc->fInterpolation, inView);
Q3Style_Submit(inDoc->fBackFacing, inView);
Q3Style_Submit(inDoc->fFillStyle, inView);
// submit the background planes which form
// the cubic display space before we submit
// the shader. That way we get evenly lit
// background planes, regardless of where the
// cameras are.
Q3DisplayGroup_Submit( inDoc->fDisplaySpace, inView);
// submit shader and styles
Q3Shader_Submit(inDoc->fIlluminationShader, inView);
// fit the model to the cubic display space
Q3MatrixTransform_Submit(&inDoc->fModelMatrix, inView);
// submit the model which was loaded from disk
Q3DisplayGroup_Submit( inDoc->fModel, inView);
return kQ3Success ;
}
Mirroring the Views
If your projectors do not feature hardware mirroring, you have to mirror the images in software. Perfect mirroring not only requires mirroring the camera and the virtual scene (including the background planes), but also mirroring the lights and changing the orientation style. The order in which we discuss things is:
- creating the mirroring matrices
- setting the mirroring global
- mirroring the lights
- mirroring the three view plane cameras
- mirroring the virtual scene
- adjusting the orientation style
Creating the mirroring matrices
As we saw at the beginning of this article we need to do mirroring in the X=0, Y=0 and Z=0 planes. Listing 9 shows a code snippet for creating the three matrices that we need for these mirroring operations.
Listing 9 (De)Init.c
InitDocumentData
Q3Matrix4x4_SetScale(&ioDoc->fMatrixMirrorX0, -1, 1, 1);
Q3Matrix4x4_SetScale(&ioDoc->fMatrixMirrorY0, 1, -1, 1);
Q3Matrix4x4_SetScale(&ioDoc->fMatrixMirrorZ0, 1, 1, -1);
We use the QuickDraw 3D routine Q3Matrix4x4_SetScale to create each matrix. Scaling by -1 along one axis results in mirroring in the perpendicular plane. For example, scaling by -1 along the x-axis results in mirroring in the plane X=0.
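As a quick sanity check, transforming a point with one of these matrices should flip exactly one coordinate (ioDoc as in InitDocumentData):

TQ3Point3D thePoint = { 2.0f, 3.0f, 4.0f };
TQ3Point3D theMirrored;
// Mirroring in the X=0 plane negates only the x-coordinate.
Q3Point3D_Transform(&thePoint, &ioDoc->fMatrixMirrorX0, &theMirrored);
// theMirrored is now (-2.0, 3.0, 4.0).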
Setting the mirroring global
Next we look at what happens when the user chooses Normal or Mirrored from the Images menu. Menu interaction is handled by the routine DoMenuCommand in Shell.c. Listing 10 shows an excerpt for the Images menu. First HandleMenuCheckedItem is called, which handles the checkmark in front of the menu items. Then the boolean gMirrored, which determines whether or not to mirror the images, is toggled. Finally, we call the routine mirrorLights for each view and pass the appropriate mirroring matrix as a parameter.
Listing 10 Shell.c
DoMenuCommand
case mImages:
switch (item)
{
case iNormal:
if (gMirrored == true)
{
HandleMenuCheckedItem(item);
gMirrored = false;
mirrorLights(gDoc.fView1, gDoc.fMatrixMirrorZ0);
mirrorLights(gDoc.fView2, gDoc.fMatrixMirrorX0);
mirrorLights(gDoc.fView3, gDoc.fMatrixMirrorY0);
}
break;
case iMirrored:
if (gMirrored == false)
{
HandleMenuCheckedItem(item);
gMirrored = true;
mirrorLights(gDoc.fView1, gDoc.fMatrixMirrorZ0);
mirrorLights(gDoc.fView2, gDoc.fMatrixMirrorX0);
mirrorLights(gDoc.fView3, gDoc.fMatrixMirrorY0);
}
break;
}
Mirroring the lights
We need to make sure that we mirror all the lights that belong to a view. When we created the view (newView in ViewCreation.c) we put all the lights in a light group object. What we have to do now is to obtain the light group from the view, traverse the light group, determine the type of each light and mirror the aspects relevant to that light. This is detailed in Listing 11.
We use the QuickDraw 3D routine Q3Light_GetType to determine the type of light. Through a switch statement we get cases for all possible types of light: ambient lights, point lights, directional lights and spot lights. An ambient light needs no mirroring as it has neither a location nor a direction. A point light has a location but no direction so we only need to mirror the former. A directional light can be thought of as the opposite of a point light: it has no location but it does have a direction so we only need to mirror the latter. Finally, there are spot lights which have both a location and a direction, so we need to mirror both.
Listing 11 MirrorLights.c
mirrorLights
TQ3Status mirrorLights(TQ3ViewObject inView,
TQ3Matrix4x4 inMatrix)
{
TQ3GroupObject theGroup; // the view's light group
TQ3GroupPosition thePos; // a group position
TQ3Object theLight; // a light
TQ3Status theResult; // a result code
TQ3ObjectType theType;
TQ3Point3D theLoc;
TQ3Vector3D theDir;
// Get the light group from the view.
theResult = Q3View_GetLightGroup(inView, &theGroup);
if (theResult == kQ3Failure) goto bail;
// Traverse the light group and mirror the positions
// and directions of all light types as needed.
for ( Q3Group_GetFirstPosition(theGroup, &thePos);
thePos != NULL;
Q3Group_GetNextPosition(theGroup, &thePos))
{
theResult = Q3Group_GetPositionObject(theGroup,
thePos,
&theLight);
if (theResult == kQ3Failure) goto bail;
theType = Q3Light_GetType(theLight);
// What we mirror depends on the type of light.
switch (theType)
{
case kQ3LightTypeAmbient: break;
case kQ3LightTypePoint:
Q3PointLight_GetLocation(theLight, &theLoc);
Q3Point3D_Transform(&theLoc, &inMatrix, &theLoc);
Q3PointLight_SetLocation(theLight, &theLoc);
break;
case kQ3LightTypeDirectional:
Q3DirectionalLight_GetDirection(theLight, &theDir);
Q3Vector3D_Transform(&theDir,
&inMatrix,
&theDir);
Q3DirectionalLight_SetDirection(theLight, &theDir);
break;
case kQ3LightTypeSpot:
Q3SpotLight_GetLocation(theLight, &theLoc);
Q3Point3D_Transform(&theLoc, &inMatrix, &theLoc);
Q3SpotLight_SetLocation(theLight, &theLoc);
Q3SpotLight_GetDirection(theLight, &theDir);
Q3Vector3D_Transform( &theDir,
&inMatrix,
&theDir);
Q3SpotLight_SetDirection(theLight, &theDir);
break;
}
// balance reference count of light
Q3Object_Dispose(theLight);
}
Q3Object_Dispose(theGroup);
return(kQ3Success);
bail:
return(kQ3Failure);
}
Mirroring the three view plane cameras
We now return to AdjustOneCamera (Listing 6) to look at what happens if we wish to mirror the cameras. Look for the else-branch of the conditional statement if (!gMirrored). First we determine which of the three views we are dealing with. Then we mirror the location of the camera and adjust the centerXOnViewPlane and centerYOnViewPlane values. By now you should be able to relate this code to the mirrored cameras in Figures 14 to 16.
Mirroring the virtual scene
Of course we also need to mirror the model which we loaded from disk and the background planes. Look at the conditional statements if(gMirrored) in SubmitViews in Rendering.c. Just before we call SubmitOneView for each view, we set the field fMirrorMatrix of the struct gDoc to the mirroring matrix for that view. In SubmitOneView we submit the matrix in the fMirrorMatrix field before anything else. This is what mirrors the background planes and the model.
Adjusting the orientation style
Think about what we did when we mirrored the scene. We mirrored our model, so we mirrored its polygons and its vertices. That means we reversed the direction in which the vertices of a polygon are listed. This is important, as the direction in which the vertices of a polygon are listed determines which side of the polygon is considered the front face, which in turn influences shading. We can change what QuickDraw 3D considers to be the front face by changing the orientation style. The default is kQ3OrientationStyleCounterClockwise, which means that the front face is the side from which the vertices are listed counterclockwise. By changing the orientation style to kQ3OrientationStyleClockwise the front and back faces are flipped. This is what happens in Listing 8, SubmitOneView in Rendering.c. Notice that we do not explicitly toggle between kQ3OrientationStyleCounterClockwise and kQ3OrientationStyleClockwise. In non-mirrored mode we simply rely on the default. Only in mirrored mode do we actually submit an orientation style object, using Q3Style_Submit. The style is created in InitDocumentData in (De)Init.c through the call Q3OrientationStyle_New (Listing 12). You may wish to read up on the orientation style object in the QuickDraw 3D 1.5.4 manual (Chapter 6, page 550).
Listing 12 (De)Init.c
InitDocumentData
ioDoc->fOrientationStyle = Q3OrientationStyle_New(kQ3OrientationStyleClockwise);
Troubleshooting
Make sure you have a look at the troubleshooting sections of our previous articles in MacTech July 1998 and August 1998. In addition, check whether your problem is among the following.
One of the background planes remains untextured
Explanation: You may have run out of VRAM. Running at a high resolution (say 1024x768 or above), in millions of colours, and with a couple of textures may simply be too much for the VRAM of your graphics board, especially if it has only 4 MB or 8 MB.
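As a rough, hypothetical estimate: at 1024x768 in millions of colours (4 bytes per pixel), the front buffer, the back buffer and a 32-bit Z-buffer each occupy about 1024 x 768 x 4 bytes, roughly 3 MB, so around 9 MB of VRAM is spoken for before a single texture is uploaded. That already exceeds an 8 MB board. The exact figures depend on the board, but the arithmetic shows how quickly VRAM disappears.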
Solution: Try running in thousands rather than millions of colours, reduce the sizes of your textures or use fewer textures. If these solutions are out of the question, consider a graphics board with more VRAM.
Tidbits
As always there are ways to improve the code. Here are some things you may wish to look at.
Delays between views
In the MacTech article on single-screen head-tracked displays we mentioned the problem caused by delay: the perspective shown on the screen does not correspond to the head position of the user, causing the virtual scene to appear distorted. Cubby's three views cause an additional delay-related problem. Because the three views do not render equally fast, we can get discontinuities at the edge between two views. This effect becomes particularly noticeable during quick camera movements. One way to eliminate this problem would be to use a TQ3PixmapDrawContext: all three views would be rendered to an offscreen GWorld, and this GWorld would then be copied to the screen in one go. The awkward thing is that not all 3D graphics boards support accelerated rendering to offscreen GWorlds. For graphics accelerators by ATI you can get code from ATI developer support which makes accelerated offscreen rendering possible.
Limiting the camera position
We discussed the problem that we had with the hither plane of a camera being pushed through the background planes, causing the latter to disappear. Like the delays between the views, this can be fixed by using a TQ3PixmapDrawContext. A 2D background texture is copied to a GWorld which is used as the pixmap. The virtual scene is then rendered over this background texture. So instead of making the background planes part of the virtual scene, the background becomes a simple 2D texture.
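Both tidbits hinge on rendering into a TQ3PixmapDrawContext instead of a window draw context. As a minimal sketch, assuming a 32-bit deep GWorld whose pixels have already been locked, and using a hypothetical helper name, such a draw context could be set up as follows.

#include <QDOffscreen.h>
#include <QD3DDrawContext.h>

// Hypothetical helper: wrap an offscreen GWorld in a pixmap draw context.
static TQ3DrawContextObject MyNewPixmapDrawContext(GWorldPtr inGWorld)
{
	TQ3PixmapDrawContextData theData;
	PixMapHandle thePix = GetGWorldPixMap(inGWorld);
	Rect theBounds = (**thePix).bounds;

	// Generic draw context settings. The pane and mask fields are ignored
	// because their state flags are set to false.
	theData.drawContextData.clearImageMethod = kQ3ClearMethodWithColor;
	Q3ColorARGB_Set(&theData.drawContextData.clearImageColor,
		1.0, 0.0, 0.0, 0.0);
	theData.drawContextData.paneState = kQ3False;
	theData.drawContextData.maskState = kQ3False;
	theData.drawContextData.doubleBufferState = kQ3False;

	// Describe the GWorld's pixels (32 bits deep, big-endian).
	theData.pixmap.image = GetPixBaseAddr(thePix);
	theData.pixmap.width = theBounds.right - theBounds.left;
	theData.pixmap.height = theBounds.bottom - theBounds.top;
	theData.pixmap.rowBytes = (**thePix).rowBytes & 0x3FFF;
	theData.pixmap.pixelSize = 32;
	theData.pixmap.pixelType = kQ3PixelTypeRGB32;
	theData.pixmap.bitOrder = kQ3EndianBig;
	theData.pixmap.byteOrder = kQ3EndianBig;

	return Q3PixmapDrawContext_New(&theData);
}

The three views would then render into such a draw context instead of the window, and after each frame the GWorld would be copied to the screen, mirrored if necessary, in a single blit.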
Conclusions
By now you should have a good idea of how the visualization part of Cubby works. You learnt how to configure three views with view plane cameras. You also learnt how to mirror the resulting images. But before you can enjoy a virtual scene in Cubby there's some more work to do. You need to know how to build an InputSprocket driver and how to calibrate the head-tracker so that the user's head movements give the correct perspectives in Cubby. Tune in next month and we'll show you how.
Bibliography and References
- Cruz-Neira, C., Sandin, D.J., & DeFanti, T.A. (1993). Surround-screen projection based virtual reality: The design and implementation of the CAVE. Proceedings of SIGGRAPH'93, 135-142.
- Fernicola, P., & Thompson, N. (1995, June). QuickDraw 3D: a new dimension in Macintosh graphics. Develop, 22, 6-28.
- Djajadiningrat, J.P. (1998). Cubby: What you see is where you act. Interlacing the display and manipulation spaces. Doctoral dissertation, Delft University of Technology, Delft.
- Djajadiningrat, J.P., Smets, G.J.F., & Overbeeke, C.J. (1997). Cubby: a multiscreen movement parallax display for direct manual manipulation. Displays, 17, 191-197.
- Djajadiningrat, J.P., & Gribnau, M.W. (1998, July). Desktop VR using QuickDraw 3D, Part I. Using the View Plane Camera to implement a head-tracked display. MacTech, 14(7), 32-43.
- Greenstone, B. (1995). QuickDraw 3D. In McCornack et al. (eds.), Tricks of the Mac Game Programming Gurus (pp. 491-546). Indianapolis, IN: Hayden Books.
- Gribnau, M.W., & Djajadiningrat, J.P. (1998, August). Desktop VR using QuickDraw 3D, Part II. Using the Pointing Device Manager to implement a head-tracked display. MacTech, 14(8), 26-34.
Tom has calculated that if he could convince the current top contestants to refrain from taking part in the Programmer's Challenge and he were to submit a proposal for a Challenge every month, he could be leading the pack by as early as Christmas 2010. Just in case this clever ploy to achieve fame fails, he continues to hone his personal collection of irreconcilable skills.
Maarten lives in a binary world. His research is about two-handed interaction with 3D graphics. But recently, he realized that his bimanual interfaces fell short when his twins started to interact with his computer. He is now thinking about rewriting Nanosaur and implementing four-handed interaction.