OpenGL supports stereo viewing, with left and right versions of the front and back buffers. In normal, non-stereo viewing, the default buffer is the left one for both front and back buffers. Since OpenGL is window system independent, it provides no interfaces for configuring stereo glasses or other stereo viewing devices. That functionality is part of the OpenGL/window system interface library, and the style of support varies widely.
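For example, with the GLUT interface library, a quad-buffered stereo visual can be requested at window creation time. A minimal sketch (the window name and callback details are incidental):

#include <GL/glut.h>

void display(void);  /* renders both eye views; sketched below */

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    /* Request a double buffered RGB visual with left and right
       back buffers; GLUT reports an error if the display cannot
       provide one. */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_STEREO);
    glutCreateWindow("stereo");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}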
In order to render a frame in stereo:

- The display must be configured to run in stereo mode.
- The left eye view for each frame must be generated in the left back buffer.
- The right eye view for each frame must be generated in the right back buffer.
- The back buffers must be displayed properly, according to the needs of the stereo viewing hardware.
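Under those assumptions (GLUT again), a per-frame skeleton might look like the following, where drawLeftEyeView() and drawRightEyeView() are hypothetical helpers that apply the per-eye transforms described below:

void display(void)
{
    /* Left eye view into the left back buffer. */
    glDrawBuffer(GL_BACK_LEFT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawLeftEyeView();

    /* Right eye view into the right back buffer. */
    glDrawBuffer(GL_BACK_RIGHT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    drawRightEyeView();

    /* Swaps both the left and right front/back buffer pairs. */
    glutSwapBuffers();
}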
Stereo viewing uses two eye positions separated by an interocular distance (IOD); the distance at which the two lines of sight converge is called the fusion distance. Instead of assigning units to it, think of the fusion distance as a dimensionless quantity, defined relative to the locations of the front and back clipping planes. For example, you may want to set the fusion distance to be halfway between the front and back clipping planes. This way it is independent of the application's coordinate system, which makes it easier to position objects appropriately in the scene.
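The halfway choice might be computed like this (nearZ and farZ are assumed to hold the application's clipping plane distances):

float fusionDistance = nearZ + 0.5f * (farZ - nearZ);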
To model viewer attention realistically, the fusion distance should be adjusted to match the object in the scene that the viewer is looking at, which requires knowing where the viewer is looking. If head and eye tracking equipment is being used in the application, finding the center of interest is straightforward. A more indirect approach is to have the user consciously designate the object being viewed. Clever psychology can sometimes substitute for eye tracking hardware: if the animated scene is designed to draw the viewer's attention in a predictable way, or if the scene is very sparse, intelligent guesses can be made about the viewer's center of interest.
The two viewpoints are located along a line perpendicular to both the direction of view and the ``up'' direction, and the fusion distance is measured along the view direction. The position of the viewer can be defined to be at one of the eye points, or halfway between them; in either case, the left and right eye locations are positioned relative to it.
If the viewer is taken to be halfway between the stereo eye positions, and assuming gluLookAt() has been called to put the viewer position at the origin in eye space, then the fusion distance is measured along the negative Z axis (like the near and far clipping planes), and the two viewpoints are on either side of the origin along the X axis, at (-IOD/2, 0, 0) and (IOD/2, 0, 0).
The order of matrix operations should be:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity(); /* the default matrix */

glPushMatrix();
glDrawBuffer(GL_BACK_LEFT);
glTranslatef(-IOD/2.f, 0.f, 0.f);
glRotatef(-angle, 0.f, 1.f, 0.f);
<viewing transforms>
<modeling transforms>
draw();
glPopMatrix();

glPushMatrix();
glDrawBuffer(GL_BACK_RIGHT);
glTranslatef(IOD/2.f, 0.f, 0.f);
glRotatef(angle, 0.f, 1.f, 0.f);
<viewing transforms>
<modeling transforms>
draw();
glPopMatrix();

Where angle is the inverse tangent of the ratio between half of the interocular distance and the fusion distance.
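In code, the angle might be computed as follows; note that atan() returns radians while glRotatef() expects degrees (IOD and fusionDistance are assumed application parameters):

#include <math.h>

float angle = (float)(atan((IOD / 2.0) / fusionDistance) * 180.0 / M_PI);

For example, with IOD = 0.1 and a fusion distance of 1.0 in eye-space units, the angle is arctan(0.05), about 2.9 degrees.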
Another approach to implementing the stereo transforms is to change the viewing transform directly. Instead of adding an extra rotation and translation, use a separate call to gluLookAt() for each eye view. Move the fusion distance along the viewing direction from the viewer position, and use that point as the center of interest for both eyes. Translate the eye position to the appropriate eye, then render the stereo view into the corresponding buffer.
The difficulty with this technique is finding the left/right eye axis to translate along from the viewer position to the left and right eye viewpoints. Since you're now computing the left/right eye axis in object space, it is no longer constrained to be the X axis. Find the left/right eye axis in object space by taking the cross product of the direction of view and the up vector.
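A sketch of this approach, with the vector math written out inline; pos, dir, and up are assumed arrays holding the viewer position, a normalized view direction, and a normalized up vector, and draw() stands in for the modeling transforms and drawing as before:

#include <GL/glu.h>
#include <math.h>

/* Eye axis = normalize(dir x up): the left/right axis in object space. */
float ax = dir[1]*up[2] - dir[2]*up[1];
float ay = dir[2]*up[0] - dir[0]*up[2];
float az = dir[0]*up[1] - dir[1]*up[0];
float len = sqrtf(ax*ax + ay*ay + az*az);
ax /= len; ay /= len; az /= len;

/* Both eyes look at the point fusionDistance along the view direction. */
float cx = pos[0] + fusionDistance * dir[0];
float cy = pos[1] + fusionDistance * dir[1];
float cz = pos[2] + fusionDistance * dir[2];

/* Left eye: offset the viewer position by -IOD/2 along the eye axis. */
glDrawBuffer(GL_BACK_LEFT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(pos[0] - ax*IOD/2.f, pos[1] - ay*IOD/2.f, pos[2] - az*IOD/2.f,
          cx, cy, cz, up[0], up[1], up[2]);
draw();

/* Right eye: offset by +IOD/2 and repeat. */
glDrawBuffer(GL_BACK_RIGHT);
glLoadIdentity();
gluLookAt(pos[0] + ax*IOD/2.f, pos[1] + ay*IOD/2.f, pos[2] + az*IOD/2.f,
          cx, cy, cz, up[0], up[1], up[2]);
draw();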
Although shearing each eye's view instead of rotating it is less physically accurate, sheared stereo views can make it easier for viewers to achieve stereo fusion, because the two eye views have the same orientation and lighting.
For objects that are far from the eye, the differences between the two approaches are small.
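A sketch of the sheared approach, using an asymmetric frustum for each eye; fov (the vertical field of view, in radians), aspect, nearZ, farZ, IOD, and fusionDistance are assumed application parameters:

#include <math.h>

float wd2   = nearZ * tanf(fov / 2.f);               /* half window height at near plane */
float shift = (IOD / 2.f) * nearZ / fusionDistance;  /* per-eye frustum shift */

/* Left eye: same translation as in the code above, but shear the
   frustum instead of rotating. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-aspect*wd2 - shift, aspect*wd2 - shift, -wd2, wd2, nearZ, farZ);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(-IOD/2.f, 0.f, 0.f);
/* <viewing transforms> <modeling transforms> draw() into GL_BACK_LEFT */

/* Right eye: negate both the shift and the translation, then draw
   into GL_BACK_RIGHT. */

Each frustum is shifted so that the two view windows coincide at the fusion distance; an object at the fusion distance therefore has zero parallax and appears at the same screen position in both views.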