Elisha Peterson, United States Military Academy

**Abstract:** This preliminary note will describe how I go about rendering points, curves, and surfaces in three dimensions for blaise. See http://blaisemath.googlecode.com/ for the complete source code.

# Scene Modeling and Projections

In what follows, we use lowercase (*d*) for scalars, uppercase (*P*) for points in space, lowercase bold (**x**) for arbitrary vectors, and uppercase bold (**T**) for unit vectors.

## Camera Setup

Let *C* be the camera location, with orientation **T**, **N**, **B**, where **T** is the direction the camera is facing, **N** is the "up" direction, and **B** = **T** × **N** is the perpendicular direction. Assume the viewing screen is located at a distance *d* from the camera, and is centered relative to the camera. Thus, we assume the viewscreen is perpendicular to **T** and that the center *V* of the viewscreen satisfies $\overrightarrow{CV} = d \vec{\mathbf{T}}$.

As indicated above, the center of the viewscreen is $V=C+d\vec{\mathbf{T}}$. Since the screen is parallel to **B** and **N**, an arbitrary point on the screen may be written as *V*+*x***B**+*y***N**. In this case, we call $\langle x,y\rangle_\pi$ the **viewscreen coordinates**. Let *w* be the width of the viewscreen and *h* the height; the viewscreen's four corner points are $V\pm\frac{w}{2}\vec{\mathbf{B}}\pm\frac{h}{2}\vec{\mathbf{N}}$. In general, a vector **y** starting at the camera may be expressed as

$$\vec{\mathbf{y}} = (\vec{\mathbf{y}}\cdot\vec{\mathbf{T}})\vec{\mathbf{T}} + (\vec{\mathbf{y}}\cdot\vec{\mathbf{B}})\vec{\mathbf{B}} + (\vec{\mathbf{y}}\cdot\vec{\mathbf{N}})\vec{\mathbf{N}}. \qquad (1)$$

If the vector ends on the viewscreen, its viewscreen coordinates are therefore $\langle\vec{\mathbf{y}}\cdot\vec{\mathbf{B}}, \vec{\mathbf{y}}\cdot\vec{\mathbf{N}}\rangle_\pi$.
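As a minimal illustration, the viewscreen coordinates of such a vector are two dot products. The frame vectors and the function name below are illustrative assumptions, not taken from the blaise source:

```python
import numpy as np

# Hypothetical camera frame: T (facing), N (up), B = T x N, as in the text.
T = np.array([0.0, 0.0, 1.0])
N = np.array([0.0, 1.0, 0.0])
B = np.cross(T, N)  # the perpendicular direction

def viewscreen_coords(y):
    """Viewscreen coordinates <y.B, y.N> of a vector y from the camera
    whose tip lies on the viewscreen."""
    return (np.dot(y, B), np.dot(y, N))
```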

## Projection of an Arbitrary Point

Let *A _{π}* denote the projection of a point *A* onto the viewscreen; this is where the viewscreen intersects the line between *A* and *C*. Let $\vec{\mathbf{x}}=\overrightarrow{CA}$ be the vector pointing from the camera *C* to the point. We can express this point as $A_\pi = C+\lambda\vec{\mathbf{x}}$ for some λ. The value λ is easiest to compute by focusing on the **T** direction. Since *d* is the distance from *C* to the plane, and $\vec{\mathbf{x}}\cdot\vec{\mathbf{T}}$ is the distance in the same direction from the camera to the point, one has simply $\lambda=d/\vec{\mathbf{x}}\cdot\vec{\mathbf{T}}$. So

$$A_\pi = C + \left(\frac{d}{\vec{\mathbf{x}}\cdot\vec{\mathbf{T}}}\right)\vec{\mathbf{x}}. \qquad (2)$$

In viewscreen coordinates, one considers the vector $\overrightarrow{CA_\pi}=\left(\frac{d}{\vec{\mathbf{x}}\cdot\vec{\mathbf{T}}}\right)\overrightarrow{CA}$, so that the point *A* has representation in viewscreen coordinates as follows:

$$A_\pi = \left\langle \frac{d\,(\vec{\mathbf{x}}\cdot\vec{\mathbf{B}})}{\vec{\mathbf{x}}\cdot\vec{\mathbf{T}}},\ \frac{d\,(\vec{\mathbf{x}}\cdot\vec{\mathbf{N}})}{\vec{\mathbf{x}}\cdot\vec{\mathbf{T}}} \right\rangle_\pi. \qquad (3)$$

## Pixel Conversion

As a final step, one must convert the viewscreen coordinates $\langle x,y\rangle_\pi$ to window coordinates. If the window has pixels ranging from $(0,0)$ in the upper-lefthand corner to $(W,H)$ in the lower-righthand corner (the usual way of marking window coordinates), then the pixel coordinates of this point are

$$\left(\frac{W}{w}\left(x+\frac{w}{2}\right),\ \frac{H}{h}\left(\frac{h}{2}-y\right)\right). \qquad (4)$$

Note that the ratios $\frac{W}{w}$ and $\frac{H}{h}$ will typically be the same and represent *the number of pixels per unit length*. If this is represented by a constant factor η, the formula becomes $\eta\left(x+\frac{w}{2},-y+\frac{h}{2}\right) = \left(\frac{W}{2} + \eta x, \frac{H}{2}-\eta y\right)$.

Combining the above discussion, if **x** represents the vector from *C* to *A*, then the pixel coordinates of *A*, as it appears on the viewscreen, should be given by the mapping

$$\vec{\mathbf{x}} \mapsto \left(\frac{W}{2} + \frac{\eta\, d\,(\vec{\mathbf{x}}\cdot\vec{\mathbf{B}})}{\vec{\mathbf{x}}\cdot\vec{\mathbf{T}}},\ \frac{H}{2} - \frac{\eta\, d\,(\vec{\mathbf{x}}\cdot\vec{\mathbf{N}})}{\vec{\mathbf{x}}\cdot\vec{\mathbf{T}}}\right). \qquad (5)$$
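The full chain from a spatial point to its pixel might be sketched as below; the frame, η, *d*, *W*, and *H* are sample values assumed for illustration, not blaise's actual defaults:

```python
import numpy as np

# Illustrative camera frame and window parameters.
T, N = np.array([0., 0., 1.]), np.array([0., 1., 0.])
B = np.cross(T, N)
d, eta = 1.0, 100.0     # viewscreen distance; pixels per unit length
W, H = 800, 600         # window size in pixels

def point_to_pixels(x):
    """Pixel coordinates of a point A, where x is the vector from C to A."""
    lam = d / np.dot(x, T)
    return (W / 2 + eta * lam * np.dot(x, B),
            H / 2 - eta * lam * np.dot(x, N))
```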

## Inverse Transformation

A point $(x_p, y_p)$ in pixel coordinates transforms in viewscreen coordinates to $\left\langle\frac{x_p-W/2}{\eta}, -\frac{y_p-H/2}{\eta}\right\rangle = \left\langle\frac{x_p}{\eta}-\frac{w}{2}, -\frac{y_p}{\eta}+\frac{h}{2}\right\rangle$. Hence, the corresponding point on the viewscreen is

$$P_\pi = V + \left(\frac{x_p}{\eta}-\frac{w}{2}\right)\vec{\mathbf{B}} + \left(\frac{h}{2}-\frac{y_p}{\eta}\right)\vec{\mathbf{N}}. \qquad (6)$$

The full line of potential inverse mappings is

$$\left\{\, C + t\,\overrightarrow{CP_\pi} \;:\; t > 0 \,\right\}. \qquad (7)$$
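The inverse mapping can be sketched as follows; the frame and the constants are again illustrative assumptions rather than blaise's values:

```python
import numpy as np

# Illustrative camera frame and window parameters.
T, N = np.array([0., 0., 1.]), np.array([0., 1., 0.])
B = np.cross(T, N)
C = np.zeros(3)             # camera location
d, eta = 1.0, 100.0
W, H = 800, 600
w, h = W / eta, H / eta     # viewscreen dimensions in scene units

def pixel_to_ray(xp, yp):
    """Return (origin, direction) of the line of points projecting to (xp, yp):
    the set C + t * direction for t > 0."""
    x_view = (xp - W / 2) / eta
    y_view = -(yp - H / 2) / eta
    V = C + d * T                        # center of viewscreen
    P = V + x_view * B + y_view * N      # corresponding point on viewscreen
    return C, P - C
```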

# Multiple Viewpoints & Anaglyphs

Anaglyphs are images rendered separately for each eye, typically with a different color so they can be seen using 3D glasses. In this case, one must use two different projections to obtain the images seen by the two cameras.

We will look at how the above formulas change for a camera *C _{ε}* that is shifted from the original location so that $C_\varepsilon = C + \vec{\varepsilon} = C + \varepsilon_x\vec{\mathbf{B}} + \varepsilon_y\vec{\mathbf{N}}$ (a translation parallel to the viewscreen). Let *A _{πε}* represent the point on the viewscreen as seen by the new camera. Note that $V_\varepsilon = V + \vec{\varepsilon}$ as well, since the translation is parallel to the viewscreen. One therefore has

$$A_{\pi\varepsilon} = \left\langle \frac{d\,(\vec{\mathbf{x}}_\varepsilon\cdot\vec{\mathbf{B}})}{\vec{\mathbf{x}}_\varepsilon\cdot\vec{\mathbf{T}}},\ \frac{d\,(\vec{\mathbf{x}}_\varepsilon\cdot\vec{\mathbf{N}})}{\vec{\mathbf{x}}_\varepsilon\cdot\vec{\mathbf{T}}} \right\rangle, \qquad (8)$$

where $\vec{\mathbf{x}}_\varepsilon=\overrightarrow{C_\varepsilon A}$ and the coordinates are measured relative to the shifted center $V_\varepsilon$.

This can be further simplified. Since $\vec{\mathbf{x}}_\varepsilon=\overrightarrow{C_\varepsilon A}=\overrightarrow{CA}-\mathbf{\vec\varepsilon}=\vec{\mathbf{x}}-\mathbf{\vec\varepsilon}$, we have $\vec{\mathbf{x}}_\varepsilon\cdot\vec{\mathbf{T}}=\vec{\mathbf{x}}\cdot\vec{\mathbf{T}}$, $\vec{\mathbf{x}}_\varepsilon\cdot\vec{\mathbf{B}}=\vec{\mathbf{x}}\cdot\vec{\mathbf{B}}-\varepsilon_x$, and $\vec{\mathbf{x}}_\varepsilon\cdot\vec{\mathbf{N}}=\vec{\mathbf{x}}\cdot\vec{\mathbf{N}}-\varepsilon_y$. This shows that

$$A_{\pi\varepsilon} = \left\langle \lambda(\vec{\mathbf{x}}\cdot\vec{\mathbf{B}}) + (1-\lambda)\varepsilon_x,\ \lambda(\vec{\mathbf{x}}\cdot\vec{\mathbf{N}}) + (1-\lambda)\varepsilon_y \right\rangle_\pi, \qquad (9)$$

where $\lambda=d/\vec{\mathbf{x}}\cdot\vec{\mathbf{T}}$ and the coordinates are now measured relative to the original center *V*. Therefore, the parallel camera which is shifted by **ε** will give points that are shifted by (1-λ)**ε** (with the vector **ε** considered in the proper coordinate system).

When reduced to the window's coordinate system, recall that $\langle x,y\rangle_\pi \leftrightarrow (\frac{W}{2},\frac{H}{2}) + \eta(x,-y)$, where $\eta=\frac{W}{w}=\frac{H}{h}$ is the number of pixels per unit length. Therefore, one has

$$A_{\pi\varepsilon} \leftrightarrow \left(\frac{W}{2},\frac{H}{2}\right) + \eta\left(x + (1-\lambda)\varepsilon_x,\ -y - (1-\lambda)\varepsilon_y\right). \qquad (10)$$

So the endstate of a camera shift is a change of

$$\eta(1-\lambda)\left(\varepsilon_x,\ -\varepsilon_y\right) \qquad (11)$$

in window coordinates.

A typical two-camera construction (e.g. our eyes) will have cameras at *C* ± *ε***B** for a scalar *ε* representing one half the distance between the eyes. So in the viewscreen coordinates, the *x* values are shifted by ±(1-λ)*ε*. And in the window coordinates, the values are shifted by ±η(1-λ)*ε*.
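The resulting horizontal pixel shift for each eye can be computed as in this sketch (the function name and sample η are illustrative assumptions):

```python
def anaglyph_shift(lam, eps, eta=100.0):
    """Horizontal pixel offsets for the two cameras at C -/+ eps*B.
    Each eye's image is shifted by -/+ eta*(1 - lam)*eps in window x,
    where lam = d / (x . T) for the point being drawn."""
    dx = eta * (1.0 - lam) * eps
    return -dx, +dx
```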

## Color Filtering

Two-color 3d images work by providing two images with different color schemes, which depend upon the colors of the lenses in a pair of 3d glasses. When looking through a red lens, one cannot see the color red; therefore, the image seen through the red lens should be composed of colors using only the green and blue channels. When looking through a second lens (e.g. cyan), one may use the red channel. For example, if the left lens is red, the image projected by the left camera (left eye) should be colored with only green and blue. Similarly, if the right lens is cyan, the image projected by the right camera (right eye) should be colored with red only.
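A minimal sketch of the channel filtering just described (the function name is hypothetical):

```python
def split_anaglyph(rgb):
    """Split an (r, g, b) color into the left-eye color (red lens: keep only
    green and blue) and right-eye color (cyan lens: keep only red)."""
    r, g, b = rgb
    return (0, g, b), (r, 0, 0)
```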

# Rotations

Rotations are achieved by the following formula [1]:

$$\vec{\mathbf{r}}\,' = \vec{\mathbf{r}}\cos\Phi + (\vec{\mathbf{n}}\times\vec{\mathbf{r}})\sin\Phi + \vec{\mathbf{n}}\,(\vec{\mathbf{n}}\cdot\vec{\mathbf{r}})(1-\cos\Phi). \qquad (12)$$

Here, the vector **r** is being rotated by an angle of Φ around the axis **n**.

The motion for rotating the figure is "dragging" the screen. The natural interaction is to treat the rendered scene as existing inside a ball, and rotating that ball around its center to see how the image changes. In our case, the center of the ball will be denoted by *S*=*C*+Δ**T**, where Δ>*d*, and the radius of that ball will be Δ–*d*. (Recall that *d* is the distance to the viewscreen. Think of Δ as the distance to the center of the scene of interest, since it defines the point of rotation.) When the user drags the mouse, points are created for the start and end of that motion, call them *R*_{1} and *R*_{2}. Let **r**_{i} be the unit vector $\overrightarrow{SR_i}/\left\|\overrightarrow{SR_i}\right\|$ formed by normalizing the vector from *S* to *R*_{i}. Then the axis of rotation is the unit vector

$$\vec{\mathbf{n}} = \frac{\vec{\mathbf{r}}_1\times\vec{\mathbf{r}}_2}{\left\|\vec{\mathbf{r}}_1\times\vec{\mathbf{r}}_2\right\|} \qquad (13)$$

and the angle of rotation is given by the dot product formula

$$\cos\Phi = \vec{\mathbf{r}}_1\cdot\vec{\mathbf{r}}_2. \qquad (14)$$

The Euler rotation formula can then be used to transform the camera's **T**,**N**,**B** frame, and thereby recenter the camera to the location *S*–Δ**T** (in the transformed **T**).

In some cases, it may be preferable to allow the user to make an image rotate faster or slower. This is easily accomplished by scaling the rotation angle Φ by a constant factor. One may also animate a rotation by keeping the axis of rotation fixed and increasing the rotation by a small angle dΦ at each time step.
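The rotation formula and the drag-derived axis and angle can be sketched as follows (function names are illustrative, not from the blaise source):

```python
import numpy as np

def rotate(r, n, phi):
    """Rotate vector r by angle phi about the unit axis n, via Euler's formula:
    r' = r cos(phi) + (n x r) sin(phi) + n (n . r)(1 - cos(phi))."""
    return (r * np.cos(phi)
            + np.cross(n, r) * np.sin(phi)
            + n * np.dot(n, r) * (1 - np.cos(phi)))

def drag_rotation(r1, r2):
    """Axis (unit vector) and angle of the rotation taking unit vector r1
    toward r2, from the normalized cross product and the dot product."""
    axis = np.cross(r1, r2)
    angle = np.arccos(np.clip(np.dot(r1, r2), -1.0, 1.0))
    return axis / np.linalg.norm(axis), angle
```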

# Source Code

*Coming soon… see page history for old source code.*

# References
