3D computer graphics have many uses -- from games to data visualization, virtual reality, and beyond. More often than not, speed is of prime importance, making specialized software and hardware a must to get the job done. Special-purpose graphics libraries provide a high-level API, but hide how the real work is done. As nose-to-the-metal programmers, though, that's not good enough for us! We're going to put the API in the closet and take a behind-the-scenes look at how images are actually generated -- from the definition of a virtual model to its actual rendering onto the screen.

We'll be looking at a fairly specific subject: generating and rendering terrain maps, such as the surface of Mars or a few atoms of gold. Terrain-map rendering can be used for more than just aesthetic purposes -- many data-visualization techniques produce data that can be rendered as terrain maps. My intentions are, of course, entirely artistic, as you can see by the picture below! Should you so desire, the code that we will produce is general enough that with only minor tweaking it can also be used to render 3D structures other than terrains.

Click here to view and manipulate the terrain applet.

In preparation for our discussion today, I suggest that you read June's "Draw textured spheres" if you haven't already done so. That article demonstrates a ray-tracing approach to rendering images (firing rays into a virtual scene to produce an image). In this article, we'll be rendering scene elements directly onto the display. Although we're using two different techniques, the first article contains some background material on the `java.awt.image` package that I will not rehash in this discussion.

### Terrain maps

Let's start by defining a *terrain map*. A terrain map is a function that maps a 2D coordinate *(x,y)* to an altitude *a* and a color *c*. In other words, a terrain map is simply a function that describes the topography of a small area.

Let us define our terrain as an interface:

```java
public interface Terrain {
  public double getAltitude (double i, double j);
  public RGB getColor (double i, double j);
}
```

For the purpose of this article we will assume that *0.0 <= i,j,altitude <= 1.0*. This is not a requirement, but will give us a good idea where to find the terrain that we'll be viewing.
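To see the interface in action, here is a minimal sketch of a constant-altitude implementation. `FlatTerrain` is a hypothetical example of my own, not part of the article's code, and the `Terrain` interface and a stub `RGB` are repeated so the snippet stands alone:

```java
// Terrain and a minimal RGB stub, repeated so this sketch is self-contained.
interface Terrain {
  double getAltitude (double i, double j);
  RGB getColor (double i, double j);
}

class RGB {
  double r, g, b;
  RGB (double r, double g, double b) { this.r = r; this.g = g; this.b = b; }
}

// A hypothetical flat terrain: the same altitude everywhere in the
// 0.0 <= i,j <= 1.0 domain, with a uniform mid-green color.
public class FlatTerrain implements Terrain {
  private final double altitude;

  public FlatTerrain (double altitude) { this.altitude = altitude; }

  public double getAltitude (double i, double j) { return altitude; }

  public RGB getColor (double i, double j) { return new RGB (0.0, 0.5, 0.0); }
}
```

Any class satisfying these two methods can be dropped into the rendering pipeline that follows.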

The color of our terrain is described simply as an RGB triplet. To produce more interesting images we might consider adding other information such as the surface shininess, etc. For now, however, the following class will do:

```java
public class RGB {
  private double r, g, b;

  public RGB (double r, double g, double b) {
    this.r = r;
    this.g = g;
    this.b = b;
  }

  public RGB add (RGB rgb) {
    return new RGB (r + rgb.r, g + rgb.g, b + rgb.b);
  }

  public RGB subtract (RGB rgb) {
    return new RGB (r - rgb.r, g - rgb.g, b - rgb.b);
  }

  public RGB scale (double scale) {
    return new RGB (r * scale, g * scale, b * scale);
  }

  private int toInt (double value) {
    return (value < 0.0) ? 0 : (value > 1.0) ? 255 : (int) (value * 255.0);
  }

  public int toRGB () {
    return (0xff << 24) | (toInt (r) << 16) | (toInt (g) << 8) | toInt (b);
  }
}
```

The `RGB` class defines a simple color container. We provide some basic facilities for performing color arithmetic and converting a floating-point color to packed-integer format.
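The packed-integer conversion is worth a quick sanity check. This stand-alone sketch copies the `toInt()`/`toRGB()` logic (`RGBPacking` is a hypothetical helper name, not part of the article's code):

```java
// Stand-alone copy of the float-to-packed-int conversion used by RGB.toRGB().
public class RGBPacking {
  static int toInt (double value) {
    // Clamp out-of-range channels, then scale to 0..255.
    return (value < 0.0) ? 0 : (value > 1.0) ? 255 : (int) (value * 255.0);
  }

  static int toRGB (double r, double g, double b) {
    // Alpha is fixed at 0xff (fully opaque), matching the java.awt
    // packed-ARGB pixel layout.
    return (0xff << 24) | (toInt (r) << 16) | (toInt (g) << 8) | toInt (b);
  }
}
```

Note that the clamping means over-bright colors (from strong lighting, later) saturate rather than wrap around.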

### Transcendental terrains

We'll start by looking at a transcendental terrain -- fancyspeak for a terrain computed from sines and cosines:

```java
public class TranscendentalTerrain implements Terrain {
  private double alpha, beta;

  public TranscendentalTerrain (double alpha, double beta) {
    this.alpha = alpha;
    this.beta = beta;
  }

  public double getAltitude (double i, double j) {
    return .5 + .5 * Math.sin (i * alpha) * Math.cos (j * beta);
  }

  public RGB getColor (double i, double j) {
    return new RGB (.5 + .5 * Math.sin (i * alpha), .5 - .5 * Math.cos (j * beta), 0.0);
  }
}
```

Our constructor accepts two values that define the frequency of our terrain. We use these to compute altitudes and colors using `Math.sin()` and `Math.cos()`. Remember, those functions return values *-1.0 <= sin(),cos() <= 1.0*, so we must adjust our return values accordingly.
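That adjustment (*.5 + .5 * ...*) is what keeps the altitude inside *[0,1]*. A quick stand-alone check of the same arithmetic (`TranscendentalCheck` is a hypothetical helper, not the article's class):

```java
// Copy of the TranscendentalTerrain.getAltitude() arithmetic, checked over a
// sweep of the 0..1 domain: since |sin * cos| <= 1, the result can never
// leave [0, 1].
public class TranscendentalCheck {
  static double altitude (double i, double j, double alpha, double beta) {
    return .5 + .5 * Math.sin (i * alpha) * Math.cos (j * beta);
  }

  static boolean inRange (double alpha, double beta) {
    for (double i = 0.0; i <= 1.0; i += 0.05)
      for (double j = 0.0; j <= 1.0; j += 0.05) {
        double a = altitude (i, j, alpha, beta);
        if (a < 0.0 || a > 1.0)
          return false;
      }
    return true;
  }
}
```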

### Fractal terrains

Simple mathematical terrains are no fun. What we want is something that looks at least passably real. We could use real topography files as our terrain map (the San Francisco Bay or the surface of Mars, for example). While this is easy and practical, it's somewhat dull. I mean, we've *been* there. What we really want is something that looks passably real *and* has never been seen before. Enter the world of fractals.

A fractal is something (a function or object) that exhibits *self-similarity*. For example, the Mandelbrot set is a fractal function: if you magnify the Mandelbrot set greatly, you will find tiny internal structures that resemble the main Mandelbrot set itself. A mountain range is also fractal, at least in appearance. From close up, small features of an individual mountain resemble large features of the mountain range, even down to the roughness of individual boulders. We will follow this principle of self-similarity to generate our fractal terrains.

Essentially what we'll do is generate a coarse, initial random terrain. Then we'll recursively add additional random details that mimic the structure of the whole, but on increasingly smaller scales. The actual algorithm that we will use, the Diamond-Square algorithm, was originally described by Fournier, Fussell, and Carpenter in 1982 (see Resources for details).

These are the steps we'll work through to build our fractal terrain:

- First, assign a random height to each of the four corner points of the grid.
- Take the average of these four corners, add a random perturbation, and assign this to the midpoint of the grid (*ii* in the following diagram). This is called the *diamond* step because we are creating a diamond pattern on the grid. (At the first iteration the diamonds don't look like diamonds because they are at the edge of the grid; but if you look at the diagram you'll understand what I'm getting at.)
- Take each of the diamonds that we have produced, average the four corners, add a random perturbation, and assign this to the diamond midpoint (*iii* in the following diagram). This is called the *square* step because we are creating a square pattern on the grid.
- Next, reapply the diamond step to each square that we created in the square step, then reapply the *square* step to each diamond that we created in the diamond step, and so on until our grid is sufficiently dense.

An obvious question arises: How much do we perturb the grid? The answer is that we start out with a roughness coefficient *0.0 < roughness < 1.0*. At iteration *n* of our Diamond-Square algorithm we add a random perturbation to the grid: *-roughness^{n} <= perturbation <= roughness^{n}*. Essentially, as we add finer detail to the grid, we reduce the scale of changes that we make. Small changes at a small scale are fractally similar to large changes at a larger scale.

If we choose a small value for *roughness*, then our terrain will be very smooth -- the changes will very rapidly diminish to zero. If we choose a large value, then the terrain will be very rough, as the changes remain significant at small grid divisions.
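The decay of the perturbation bound is just *roughness* raised to the iteration number, which a one-method sketch makes concrete (`RoughnessDecay` is a hypothetical helper, not part of the article's code):

```java
// The perturbation bound at iteration n of Diamond-Square is roughness^n:
// a small roughness coefficient damps detail quickly (smooth terrain), a
// large one keeps perturbations significant at fine scales (rough terrain).
public class RoughnessDecay {
  static double bound (double roughness, int iteration) {
    return Math.pow (roughness, iteration);
  }
}
```

For example, at iteration 3 a roughness of 0.5 allows perturbations of at most 0.125, while a roughness of 0.9 still allows 0.729.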

Here's the code to implement our fractal terrain map:

```java
import java.util.Random;

public class FractalTerrain implements Terrain {
  private double[][] terrain;
  private double roughness, min, max;
  private int divisions;
  private Random rng;

  public FractalTerrain (int lod, double roughness) {
    this.roughness = roughness;
    this.divisions = 1 << lod;
    terrain = new double[divisions + 1][divisions + 1];
    rng = new Random ();
    terrain[0][0] = rnd ();
    terrain[0][divisions] = rnd ();
    terrain[divisions][divisions] = rnd ();
    terrain[divisions][0] = rnd ();
    double rough = roughness;
    for (int i = 0; i < lod; ++ i) {
      int q = 1 << i, r = 1 << (lod - i), s = r >> 1;
      for (int j = 0; j < divisions; j += r)
        for (int k = 0; k < divisions; k += r)
          diamond (j, k, r, rough);
      if (s > 0)
        for (int j = 0; j <= divisions; j += s)
          for (int k = (j + s) % r; k <= divisions; k += r)
            square (j - s, k - s, r, rough);
      rough *= roughness;
    }
    min = max = terrain[0][0];
    for (int i = 0; i <= divisions; ++ i)
      for (int j = 0; j <= divisions; ++ j)
        if (terrain[i][j] < min) min = terrain[i][j];
        else if (terrain[i][j] > max) max = terrain[i][j];
  }

  private void diamond (int x, int y, int side, double scale) {
    if (side > 1) {
      int half = side / 2;
      double avg = (terrain[x][y] + terrain[x + side][y] +
                    terrain[x + side][y + side] + terrain[x][y + side]) * 0.25;
      terrain[x + half][y + half] = avg + rnd () * scale;
    }
  }

  private void square (int x, int y, int side, double scale) {
    int half = side / 2;
    double avg = 0.0, sum = 0.0;
    if (x >= 0) { avg += terrain[x][y + half]; sum += 1.0; }
    if (y >= 0) { avg += terrain[x + half][y]; sum += 1.0; }
    if (x + side <= divisions) { avg += terrain[x + side][y + half]; sum += 1.0; }
    if (y + side <= divisions) { avg += terrain[x + half][y + side]; sum += 1.0; }
    terrain[x + half][y + half] = avg / sum + rnd () * scale;
  }

  private double rnd () {
    return 2.0 * rng.nextDouble () - 1.0;
  }

  public double getAltitude (double i, double j) {
    double alt = terrain[(int) (i * divisions)][(int) (j * divisions)];
    return (alt - min) / (max - min);
  }

  private RGB blue = new RGB (0.0, 0.0, 1.0);
  private RGB green = new RGB (0.0, 1.0, 0.0);
  private RGB white = new RGB (1.0, 1.0, 1.0);

  public RGB getColor (double i, double j) {
    double a = getAltitude (i, j);
    if (a < .5)
      return blue.add (green.subtract (blue).scale ((a - 0.0) / 0.5));
    else
      return green.add (white.subtract (green).scale ((a - 0.5) / 0.5));
  }
}
```

In the constructor, we specify both the roughness coefficient `roughness` and the level of detail `lod`. The level of detail is the number of iterations to perform -- for a level of detail *n*, we produce a grid of *(2^{n}+1) x (2^{n}+1)* samples. For each iteration, we apply the diamond step to each square in the grid and then the square step to each diamond. Afterwards, we compute the minimum and maximum sample values, which we'll use to scale our terrain altitudes.

To compute the altitude of a point, we scale and return the *closest* grid sample to the requested location. Ideally, we would actually interpolate between surrounding sample points, but this method is simpler, and good enough at this point. In our final application this issue will not arise because we will actually match the locations where we sample the terrain to the level of detail that we request. To color our terrain, we simply return a value between blue, green, and white, depending upon the altitude of the sample point.
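If you do want smoother sampling than the nearest grid point, the usual fix is bilinear interpolation between the four surrounding samples. Here is a hedged sketch of that alternative (`BilinearSample` is a hypothetical helper, not part of the article's code):

```java
// Bilinear interpolation within a (divisions+1) x (divisions+1) sample grid
// at normalized coordinates 0.0 <= i,j <= 1.0. The article's
// FractalTerrain.getAltitude() simply truncates to the nearest sample instead.
public class BilinearSample {
  static double sample (double[][] grid, double i, double j) {
    int divisions = grid.length - 1;
    double x = i * divisions, y = j * divisions;
    // Clamp the cell index so i == 1.0 or j == 1.0 stays in bounds.
    int x0 = Math.min ((int) x, divisions - 1);
    int y0 = Math.min ((int) y, divisions - 1);
    double fx = x - x0, fy = y - y0;
    // Blend along x on both edges of the cell, then blend along y.
    double top = grid[x0][y0] * (1 - fx) + grid[x0 + 1][y0] * fx;
    double bottom = grid[x0][y0 + 1] * (1 - fx) + grid[x0 + 1][y0 + 1] * fx;
    return top * (1 - fy) + bottom * fy;
  }
}
```

The extra cost is four lookups and a few multiplies per sample, which is why nearest-sample lookup is good enough here.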

### Tessellating our terrain

We now have a terrain map defined over a square domain. We need to decide how we are going to actually draw this onto the screen. We could fire rays into the world and try to determine which part of the terrain they strike, as we did in the previous article. This approach would, however, be extremely slow. What we'll do instead is approximate the smooth terrain with a bunch of connected triangles -- that is, we'll tessellate our terrain.

*Tessellate: to form into or adorn with mosaic (from the Latin* tessellatus*).*

To form the triangle mesh, we will evenly sample our terrain into a regular grid and then cover this grid with triangles -- two for each square of the grid. There are many interesting techniques that we could use to simplify this triangle mesh, but we'd only need those if speed was a concern.

The following code fragment populates the elements of our terrain grid with fractal terrain data. We scale down the vertical axis of our terrain to make the altitudes a bit less exaggerated.

```java
double exaggeration = .7;
int lod = 5;
int steps = 1 << lod;
Triple[][] map = new Triple[steps + 1][steps + 1];
RGB[][] colors = new RGB[steps + 1][steps + 1];
Terrain terrain = new FractalTerrain (lod, .5);
for (int i = 0; i <= steps; ++ i) {
  for (int j = 0; j <= steps; ++ j) {
    double x = 1.0 * i / steps, z = 1.0 * j / steps;
    double altitude = terrain.getAltitude (x, z);
    map[i][j] = new Triple (x, altitude * exaggeration, z);
    colors[i][j] = terrain.getColor (x, z);
  }
}
```

You may be asking yourself: So why triangles and not squares? The problem with using the squares of the grid is that they're not flat in 3D space. If you consider four random points in space, it's extremely unlikely that they'll be coplanar. So instead we decompose our terrain to triangles because we can guarantee that any three points in space will be coplanar. This means that there'll be no gaps in the terrain that we end up drawing.

```java
import java.awt.Color;

public class Triangle {
  /* fields are package-accessible so the mesh-building and lighting code
     below can fill them in directly */
  int[] i = new int[3];
  int[] j = new int[3];
  Triple n;
  RGB[] rgb = new RGB[3];
  Color color;

  public Triangle (int i0, int j0, int i1, int j1, int i2, int j2) {
    i[0] = i0; i[1] = i1; i[2] = i2;
    j[0] = j0; j[1] = j1; j[2] = j2;
  }
}
```

Our `Triangle` data structure stores each vertex as an index *(i,j)* into the grid points of our terrain, along with normal and color information that we will fill in later. We create our array of triangles with the following piece of code:

```java
int numTriangles = (steps * steps * 2);
Triangle[] triangles = new Triangle[numTriangles];
int triangle = 0;
for (int i = 0; i < steps; ++ i) {
  for (int j = 0; j < steps; ++ j) {
    triangles[triangle ++] = new Triangle (i, j, i + 1, j, i, j + 1);
    triangles[triangle ++] = new Triangle (i + 1, j, i + 1, j + 1, i, j + 1);
  }
}
```

We'll look shortly at how to actually render these triangles onto the screen. But first, some lighting issues.

### Lighting and shadows

Okay, so we have a colored terrain. Not much to boast about. Without performing some lighting and shadow computation we would have a picture that looks like this:

We'll do two things to spruce this up. First, we'll add ambient and diffuse light sources so that elements of our landscape are lit according to their orientation relative to the sun. Then, we'll add shadows so that virtual matter actually impedes the progress of virtual photons.

**Lighting**

As you recall, we thoroughly covered ambient and diffuse lighting in June's article. In a nutshell, if a diffuse (matte) surface is facing a light source, the amount of reflected light will be proportional to the cosine of the angle of incidence of the light on the surface. Light falling directly on a surface will be strongly reflected; light falling at an oblique angle will be weakly reflected. In addition, all surfaces will be evenly lit by an amount of ambient, directionless light.

We will apply this simple lighting model to our terrain. One option is to take each point on the terrain and compute whether the point is facing the sun, and if so, what the angle of incidence is; the cosine of the angle of incidence is equal to the dot product of the terrain normal and sunlight direction. Of course, doing this for every point in the scene is rather expensive. Instead, we'll perform this calculation once for every triangle. The result should be plenty good enough.

Unlike my last article, we'll consider the sun to be a point this time around. Let's forget that it's about 100 times the size of the earth and allow it to be positioned anywhere in the scene. First, we'll compute a vector from the sun to our terrain:

Sun: *p_{sun} = (x_{sun}, y_{sun}, z_{sun})*

Terrain point: *p = (x, y, z)*

Light vector: *v_{light} = p - p_{sun} = (x - x_{sun}, y - y_{sun}, z - z_{sun})*

To aid in these computations, we'll use a helper class:

```java
public class Triple {
  /* fields are package-accessible: later code reads x, y, and z directly */
  double x, y, z;

  public Triple (double x, double y, double z) {
    this.x = x;
    this.y = y;
    this.z = z;
  }

  public Triple add (Triple t) {
    return new Triple (x + t.x, y + t.y, z + t.z);
  }

  public Triple subtract (Triple t) {
    return new Triple (x - t.x, y - t.y, z - t.z);
  }

  public Triple cross (Triple t) {
    return new Triple (y * t.z - z * t.y, z * t.x - x * t.z, x * t.y - y * t.x);
  }

  public double dot (Triple t) {
    return x * t.x + y * t.y + z * t.z;
  }

  public double length2 () {
    return dot (this);
  }

  public Triple normalize () {
    return scale (1.0 / Math.sqrt (length2 ()));
  }

  public Triple scale (double scale) {
    return new Triple (x * scale, y * scale, z * scale);
  }
}
```

The `Triple` class represents a point or vector in space, and we provide methods that allow various mathematical operations to be performed. Note that we store information to double precision; if speed were more important, we'd use `float` elements. Similarly, note that our computations result in many temporary `Triple` objects being created. Again, if speed were important, we would (as the Java 3D API does) allow these objects to be reused.
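A stand-alone check of the vector operations we'll rely on: the cross product of two vectors lying in the xz-plane points along the y axis, which is exactly the "up" normal we want for a flat patch of terrain (`TripleDemo` is a hypothetical helper whose arithmetic mirrors `Triple`):

```java
// Plain-array versions of Triple.cross() and Triple.dot(), for a quick check.
public class TripleDemo {
  static double[] cross (double[] a, double[] b) {
    return new double[] {
      a[1] * b[2] - a[2] * b[1],
      a[2] * b[0] - a[0] * b[2],
      a[0] * b[1] - a[1] * b[0]
    };
  }

  static double dot (double[] a, double[] b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  }
}
```

For example, crossing (0,0,1) with (1,0,0) -- two edges of a flat triangle lying in the xz-plane -- yields (0,1,0), straight up. Note that swapping the operands flips the normal downward, which is why consistent vertex winding matters when we build the mesh.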

Next, we need the surface normal. What we want to calculate is a vector pointing straight up from each triangle. If we take two (non-parallel) vectors in the plane of the triangle and compute their *cross product*, we have the surface normal. And, to select two vectors in the plane of the triangle, we can just take two sides of the triangle:

Triangle: *T = [p_{0}, p_{1}, p_{2}]*

Sides: *v_{a} = (p_{1} - p_{0}) = (x_{1} - x_{0}, y_{1} - y_{0}, z_{1} - z_{0}); v_{b} = (p_{2} - p_{0}) = (x_{2} - x_{0}, y_{2} - y_{0}, z_{2} - z_{0})*

Surface normal: *n_{T} = v_{a} x v_{b} = (y_{a}z_{b} - z_{a}y_{b}, z_{a}x_{b} - x_{a}z_{b}, x_{a}y_{b} - y_{a}x_{b})*

Next, we need to remember that the strength of the light drops off in proportion to the inverse square of its distance from its source. This makes sense if you remember that the surface area of a sphere is *4.pi.r^{2}*, so at a distance *d* from the sun, the light emitted from the sun will be spread over an area of *4.pi.d^{2}*. For us on earth, it doesn't really matter; the strength of light from the sun is about one one-hundredth of a percent weaker on the far side of the earth than the near (okay, okay, so it's dark on the far side, but if the earth wasn't there...). For our scene, however, the sun will be very close, so we care about this computation.

Now we're going to compute the illumination of a point on our terrain. We have our surface normal, the light direction, and the distance from the sun. The diffuse lighting is proportional to the cosine of the angle of the light's incidence, which is the dot product of two unit vectors (as we saw last time). The ambient lighting is a constant. So...

Ambient lighting: *a*

Sun's strength: *s*

Unit surface normal: *n' = n /* |*n*|

Light distance: *d =* |*v_{light}*|

Unit light vector: *v'_{light} = v_{light} / d*

Light incidence: *i = v'_{light} . n'*

Illumination facing the sun *(i < 0)*: *l = a - i . s / d^{2}*

Illumination facing away from the sun *(i >= 0)*: *l = a*

That's it. We can now compute the amount of light reflected by any triangle in our terrain map.
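Transcribed directly into code, those equations look like this (`Lighting` is a hypothetical helper; the real computation appears inline in the mesh-processing loop later):

```java
// The lighting equations above: ambient term a, sun strength s, unit surface
// normal n, and light vector from the sun to the surface point (as arrays of
// x, y, z components).
public class Lighting {
  static double illumination (double a, double s, double[] n, double[] light) {
    double d = Math.sqrt (light[0] * light[0] + light[1] * light[1] + light[2] * light[2]);
    double ix = light[0] / d, iy = light[1] / d, iz = light[2] / d;
    double i = ix * n[0] + iy * n[1] + iz * n[2]; // light incidence
    return (i < 0.0) ? a - i * s / (d * d)        // facing the sun
                     : a;                          // facing away: ambient only
  }
}
```

For a surface facing straight up with the sun directly overhead at distance 2, the light vector is (0,-2,0), the incidence is -1, and the illumination is *a + s/4*.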

**Shading**

An immediate fault apparent in the above image is that the triangles are *flat shaded*: each triangle has but a single color. This goes somewhat against the grain of our terrain definition -- our terrains are *D_{0}* continuous in altitude and color, but because we render each triangle in a single color, our image is not *D_{0}* continuous. Adjacent triangles have different colors, and our image shouldn't have any absolute color discontinuities.

To overcome this dilemma, we need to introduce a better shading model. Flat shading means that each triangle has a single color. *Gouraud shading*, however, allows us to compute a color for each vertex of each triangle and then smoothly interpolate between these vertex colors within the triangle.

The Gouraud shading model allows us to easily ensure *D _{0}* continuity. We simply compute a color at each grid point on our terrain, and then assign our triangle vertices the appropriate colors from the grid. If two triangles are adjacent, they will share the same vertices along their common edge, and so will share exactly the same color along that edge.
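The per-edge behavior that delivers this continuity is nothing more than a linear blend of the two shared vertex colors, so both triangles containing an edge produce identical values along it. A one-method sketch, per color channel (`GouraudEdge` is a hypothetical helper, not part of the article's code):

```java
// Gouraud interpolation along a triangle edge: t runs from 0 at vertex color
// c0 to 1 at vertex color c1 (a single color channel). Two triangles sharing
// the same endpoint colors compute the same value for every t, so no seam
// appears along their common edge.
public class GouraudEdge {
  static double lerp (double c0, double c1, double t) {
    return c0 + (c1 - c0) * t;
  }
}
```

Interpolating the interior of the triangle is the same idea applied twice: once along each edge of a scanline, and once across the scanline.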

To determine the color at a point on our grid, we use the exact computations outlined earlier. The only additional information that we need is the surface normal at our terrain vertices. We can compute this in a couple of ways. Given that we will be computing triangle surface normals anyway, one easy option is to simply average the surface normals of all triangles sharing the vertex. Another, slightly more complex option is to compute the average plane through the surrounding terrain vertices and to use the normal of this plane. For simplicity, we'll go with the first option.

The following code computes the normals and lighting information for our array of triangles:

```java
double ambient = .3;
double diffuse = 4.0;
Triple[][] normals = new Triple[steps + 1][steps + 1];
Triple sun = new Triple (3.6, 3.9, 0.6);
for (int i = 0; i <= steps; ++ i)
  for (int j = 0; j <= steps; ++ j)
    normals[i][j] = new Triple (0.0, 0.0, 0.0);
/* compute triangle normals and vertex averaged normals */
for (int i = 0; i < numTriangles; ++ i) {
  Triple v0 = map[triangles[i].i[0]][triangles[i].j[0]],
         v1 = map[triangles[i].i[1]][triangles[i].j[1]],
         v2 = map[triangles[i].i[2]][triangles[i].j[2]];
  Triple normal = v0.subtract (v1).cross (v2.subtract (v1)).normalize ();
  triangles[i].n = normal;
  for (int j = 0; j < 3; ++ j) {
    normals[triangles[i].i[j]][triangles[i].j[j]] =
      normals[triangles[i].i[j]][triangles[i].j[j]].add (normal);
  }
}
/* compute vertex colors and triangle average colors */
for (int i = 0; i < numTriangles; ++ i) {
  RGB avg = new RGB (0.0, 0.0, 0.0);
  for (int j = 0; j < 3; ++ j) {
    int k = triangles[i].i[j], l = triangles[i].j[j];
    Triple vertex = map[k][l];
    RGB color = colors[k][l];
    Triple normal = normals[k][l].normalize ();
    Triple light = vertex.subtract (sun);
    double distance2 = light.length2 ();
    double dot = light.normalize ().dot (normal);
    double lighting = ambient + diffuse * ((dot < 0.0) ? - dot : 0.0) / distance2;
    color = color.scale (lighting);
    triangles[i].rgb[j] = color;
    avg = avg.add (color);
  }
  triangles[i].color = new Color (avg.scale (1.0 / 3.0).toRGB ());
}
```

One obvious fault with this implementation is that we will have no sharp edges, even where there are meant to be sharp edges. For example, along the edge of a shadow there should be a sharp discontinuity. In our implementation, however, triangles on both sides of the shadow will blur the edge. This is even worse along a sharp color discontinuity in the terrain -- for example, where the blue sea meets the yellow sand. Solving this problem is simply a matter of adding more information to our model, and not sharing certain vertex colors. That, of course, is left as an exercise for the diligent reader.

**Shadows**

We've done almost all of the static precomputations we need to generate our terrain datastructure. One last thing I want you to notice is that when we compute whether a given part of our terrain is facing the sun or not, we don't check to see if the terrain element is shadowed by another part of the terrain.

Doing a fully accurate shadow computation would involve projecting our entire terrain back down onto itself, as seen from the sun, and then subdividing triangles into shaded and unshaded parts.

This process, however, is slow and difficult. Instead, we'll go with a stopgap measure that gives us some degree of realism at greatly reduced complexity. For every point on the terrain grid we will trace a line from the terrain to the sun, and determine whether this line intersects the terrain at any other point. If it does, our point is in shadow; if it does not, we compute lighting as usual.

Furthermore, in order to make this computation more efficient, we won't perform a proper terrain intersection test. Instead, we'll simply sample the line at various points along its length, and check whether the terrain altitude at these points is higher than the line.

The following code performs these shadow calculations. We use the computed shadow array in a slightly modified version of the earlier shadow computation: if a vertex is in shadow then there is no diffuse lighting; only ambient.

```java
double[][] shade = new double[steps + 1][steps + 1];
for (int i = 0; i <= steps; ++ i) {
  for (int j = 0; j <= steps; ++ j) {
    shade[i][j] = 1.0;
    Triple vertex = map[i][j];
    Triple ray = sun.subtract (vertex);
    double distance = steps * Math.sqrt (ray.x * ray.x + ray.z * ray.z);
    /* step along ray in horizontal units of grid width */
    for (double place = 1.0; place < distance; place += 1.0) {
      Triple sample = vertex.add (ray.scale (place / distance));
      double sx = sample.x, sy = sample.y, sz = sample.z;
      if ((sx < 0.0) || (sx > 1.0) || (sz < 0.0) || (sz > 1.0))
        break; /* stepped off terrain */
      double ground = exaggeration * terrain.getAltitude (sx, sz);
      if (ground >= sy) {
        shade[i][j] = 0.0;
        break;
      }
    }
  }
}

/* modified lighting computation:
   ...
   double shadow = shade[k][l];
   double lighting =
     ambient + diffuse * ((dot < 0.0) ? - dot : 0.0) / distance2 * shadow;
*/
```

I want to point out that this solution is not perfect. If there is a sharp ridge or peak in the terrain, we may find that some vertices in the shadow of the feature realize that they are in shadow, and others do not. The typical problem is that our line sampling for some vertices falls on the feature (we get shadow), and for others it just skips over the feature (we get light). To overcome this problem, we need a proper intersection test. Practically speaking, we just need to sample the shadow line more frequently. For the purposes of this article, we will step along the shadow line in jumps the size of our terrain grid. If you find shadow artifacts, reduce the step.

We're halfway there. We have a fractally generated, colored, lit, shaded, shadowed terrain. We have a screen. We need the twain to meet. To this end, we must accomplish two more tasks. First, we need to figure out where on the screen each of our terrain triangles fall. Then, we need to draw the triangles.

### Viewpoint calculation

To compute where triangles from the terrain should be drawn on the screen, we need to know the viewer's location in the virtual world and what direction he or she is looking in. We'll look at that problem in this section.

**Transforming the world**

Let's take a look at the real world for a moment. Assume that the *Z* axis is pointing straight out ahead of you. Take a step of plus one meter along this *Z* axis and see what you see. Step back to where you were. Now move the entire world minus one meter along the *Z* axis. You should see exactly the same thing as before. You can, in fact, repeat this for any movement that you make in your viewing position: a world, as seen from location *p*, looks exactly the same as a world translated by *-p* and viewed from location *(0,0,0)*.

Exactly the same applies to your viewing orientation: a world, as seen from a head rotation of *r*, looks exactly the same as a world rotated by *-r* and viewed straight ahead. And if we combine translation and rotation, we find that a world viewed from location *p* at rotation *r* looks exactly the same as a world translated by *-p*, rotated by *-r*, and viewed straight ahead from the origin.

You might wonder why I'm having you act all this out. Well, it turns out that it's much easier for us to draw a scene onto the screen if we are viewing it straight ahead from the origin than if we are positioned arbitrarily. So, to allow us to view our terrain from any position, we will first perform this inverse transformation of the world, and then perform the simplified world-to-screen transformation.

Let's define a few graphics-related buzzwords:

- *Worldspace* is the world in its original form -- it is defined with respect to an arbitrary coordinate system that we chose when we defined the terrain.
- *Eyespace* is the world as seen from your current viewing position -- after performing the aforementioned inverse transformation. If you take the viewer's location and orientation in worldspace, and then transform the world by the inverse of these values, you have the world in eyespace. The origin is at the eye, the *Z* axis sticks straight out from the eye, *Y* is up, and *X* is to the right.
- *Screenspace* is the eyespace world transformed into *(x,y)* pixel locations on the screen.

**Viewer representation**

To perform this worldspace-to-eyespace transformation, we need a way to represent the viewing location and orientation. Representing the location is easy: *p_{you} = (x_{you}, y_{you}, z_{you})*. Representing the viewing orientation is where the complexity lies: we need to choose among several possible representations.

Unfortunately, we can't simply use a vector to represent the direction in which you are looking, because this does not include information about your current *up* direction.

One option is to use three angles *(roll, pitch, yaw)* to represent the angles of rotation in three dimensions needed to transform from looking straight ahead to your current viewing direction. This is, however, inefficient and awkward to manipulate.

Another option is to use a 3x3 rotation matrix *M*. This matrix essentially consists of the unit vector facing along the direction that you are looking, the unit vector facing directly up, and the unit vector facing to your right. This representation is fairly common, as well as easy to manipulate and understand. It is, however, slow for some operations and subject to numerical error.

We will instead go with a third option: a rotational *quaternion* -- a four-element datastructure *(w, x, y, z)*. The *(x, y, z)* part represents a vector in space, and the *w* part represents a rotation about this vector. *Any* viewing direction can be represented by this compact, efficient structure.

**Quaternions for rotation**

Quaternions can be used for many purposes. However, we will consider them strictly for their application to three-dimensional rotations. Let's start with some general quaternion mathematics.

*q = (w, x, y, z)*

|*q*|*^{2} = w^{2} + x^{2} + y^{2} + z^{2}*

*q^{-1} = (w /* |*q*|*^{2}, -x /* |*q*|*^{2}, -y /* |*q*|*^{2}, -z /* |*q*|*^{2})*

*q_{a}q_{b} = (w_{a}w_{b} - x_{a}x_{b} - y_{a}y_{b} - z_{a}z_{b}, w_{a}x_{b} + x_{a}w_{b} + y_{a}z_{b} - z_{a}y_{b}, w_{a}y_{b} + y_{a}w_{b} + z_{a}x_{b} - x_{a}z_{b}, w_{a}z_{b} + z_{a}w_{b} + x_{a}y_{b} - y_{a}x_{b})*

Okay, we have some quaternion mathematics, but how the heck do we rotate a point using quaternions? Say we have a point *p* and a rotational quaternion *q_{r}*. To rotate the point, we simply convert it to a quaternion with a zero *w* coefficient, perform some multiplications, and convert it back:

*p = (x, y, z)*

*q_{p} = (0.0, x, y, z)*

*q'_{p} = q_{r}q_{p}q_{r}^{-1} = (w', x', y', z')*

*p' = (x', y', z')*

We have taken our rotational quaternion, multiplied it by the point to be transformed, and multiplied this by the inverse of the rotational quaternion. The result is our rotated point.

If speed is of prime importance, there is an alternative: convert from a rotational quaternion to a rotational matrix, and then rotate a series of points with this matrix. But let's not get too sidetracked!

I've already mentioned that a rotational quaternion is a rotation about a particular vector, or axis. So how do we calculate the quaternion for a particular rotation? Well, if we have a unit-vector *v*, and we want a quaternion to represent a rotation of *w* about this, we compute the quaternion as follows:

*Vector: v = (x, y, z);* |*v*| *= 1.0*

*Rotation angle: w*

*Rotational quaternion: (cos(w/2), x . sin(w/2), y . sin(w/2), z . sin(w/2))*

Note the half-angle: rotating by *w* uses *w/2* in the quaternion, which is why `newRotation()` below divides by 2.
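A quick numeric check of the axis-angle recipe: build the quaternion for a 90-degree rotation about the z axis and apply *q p q^{-1}* to the point (1,0,0); the result should be (0,1,0). (`QuatDemo` is a hypothetical stand-alone helper, with the half-angle as used in `Quaternion.newRotation()` below; it mirrors, but is not, the article's class.)

```java
// Quaternions as (w, x, y, z) arrays.
public class QuatDemo {
  // Hamilton product, matching the multiplication formula above.
  static double[] mul (double[] a, double[] b) {
    return new double[] {
      a[0] * b[0] - a[1] * b[1] - a[2] * b[2] - a[3] * b[3],
      a[0] * b[1] + a[1] * b[0] + a[2] * b[3] - a[3] * b[2],
      a[0] * b[2] + a[2] * b[0] + a[3] * b[1] - a[1] * b[3],
      a[0] * b[3] + a[3] * b[0] + a[1] * b[2] - a[2] * b[1]
    };
  }

  // Rotate point p by angle r (radians) about the unit axis (x, y, z).
  static double[] rotate (double[] p, double r, double x, double y, double z) {
    double s = Math.sin (r / 2.0), c = Math.cos (r / 2.0);
    double[] q = { c, x * s, y * s, z * s };
    double[] qInv = { c, -x * s, -y * s, -z * s }; // inverse of a unit quaternion
    double[] qp = { 0.0, p[0], p[1], p[2] };       // point as a w=0 quaternion
    double[] result = mul (mul (q, qp), qInv);
    return new double[] { result[1], result[2], result[3] };
  }
}
```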

Next, what if we have one rotation that we wish to concatenate with another? We could either explicitly perform the two rotations in sequence on our points, or we can simply multiply the two quaternions and use the result as the combined rotation operation. Remember, however, that *q _{a}q_{b} != q_{b}q_{a}*.

For example, start by looking straight forward. Then rotate your head down as far as it can go. Then, from this perspective, rotate your head right as far as it can go. You should end up looking to the right. Now, start again by looking straight forward. Rotate your head right as far as it can go. Then, from this perspective, rotate your head down as far as it can go. You should end up looking down. We performed the same operations -- look down and look right -- just in different orders, and we ended up looking in completely different directions.

The following code implements the basic quaternion mathematics that we will need:

```java
public class Quaternion {
    private double w, x, y, z;
    private Quaternion inverse;

    public Quaternion (double w, double x, double y, double z) {
        this.w = w;
        this.x = x;
        this.y = y;
        this.z = z;
        this.inverse = null;
    }

    public Quaternion inverse () {
        double scale = 1.0 / (x * x + y * y + z * z + w * w);
        return new Quaternion (w * scale, - x * scale, - y * scale, - z * scale);
    }

    public Quaternion multiply (Quaternion q) {
        double qx = q.x, qy = q.y, qz = q.z, qw = q.w;
        double rw = w * qw - x * qx - y * qy - z * qz;
        double rx = w * qx + x * qw + y * qz - z * qy;
        double ry = w * qy + y * qw + z * qx - x * qz;
        double rz = w * qz + z * qw + x * qy - y * qx;
        return new Quaternion (rw, rx, ry, rz);
    }

    public Triple rotate (Triple t) {
        if (inverse == null)
            inverse = inverse ();
        double iw = inverse.w, ix = inverse.x, iy = inverse.y, iz = inverse.z;
        double tx = t.x, ty = t.y, tz = t.z;
        // a = this * (0, t)
        double aw = - x * tx - y * ty - z * tz;
        double ax = w * tx + y * tz - z * ty;
        double ay = w * ty + z * tx - x * tz;
        double az = w * tz + x * ty - y * tx;
        // b = a * inverse; the w coefficient of b is discarded
        double bx = aw * ix + ax * iw + ay * iz - az * iy;
        double by = aw * iy + ay * iw + az * ix - ax * iz;
        double bz = aw * iz + az * iw + ax * iy - ay * ix;
        return new Triple (bx, by, bz);
    }

    public static Quaternion newRotation (double r, double x, double y, double z) {
        double len = Math.sqrt (x * x + y * y + z * z);
        double sin = Math.sin (r / 2.0);
        double cos = Math.cos (r / 2.0);
        double tmp = sin / len;
        return new Quaternion (cos, x * tmp, y * tmp, z * tmp);
    }
}
```

**Worldspace to eyespace**

We now have the basic mathematics we need to make the transformation from worldspace to eyespace. Now, let's perform the computation itself:

```java
Triple loc = new Triple (0.5, 3.0, -2.0);
Quaternion rot = Quaternion.newRotation (-.82, 1.0, 0.0, 0.0);
Triple[][] eyeMap = new Triple[steps + 1][steps + 1];
for (int i = 0; i <= steps; ++ i) {
    for (int j = 0; j <= steps; ++ j) {
        Triple p = map[i][j];
        Triple t = p.subtract (loc); // translate relative to the viewer
        Triple r = rot.rotate (t);   // rotate into the viewer's orientation
        eyeMap[i][j] = r;
    }
}
```

**Eyespace to screenspace**

We have the world transformed so that we can view it looking straight ahead from the origin. Now we need to transform this new world onto our screen. Recall from the last article that we considered the screen to be a window in space; we fired rays from the eye onto the pixels in this window and onwards into the virtual scene. We determined what each ray intersected and this was how we computed pixel colors.

In this application we're performing the exact same calculation, but in reverse. Again, the screen is a window in space between the viewer and the scene. This time, however, we will draw a line between each vertex of our terrain and the viewer, and then determine where this line intersects the viewing window. By using this approach, we can determine the closest screen pixel that corresponds to each vertex point, and then use these pixel locations to draw the terrain triangles on screen.

This calculation is actually quite straightforward. We must first decide how far away our virtual window is from the origin. (We'll call this distance `hither`.) If any vertex falls on the near side of this plane, we won't transform it -- we're only drawing what we can see *through* the window. Otherwise, we take the vertex *x* and *y* coordinates, divide these by *z*, and then scale the result up to fit on the screen.

The amount we scale the result depends upon how wide a field of view (*fov*) we want. For a horizontal field of view of *fov*, we want a scale of *(width / 2) / tan(fov / 2)*. A commonly used field of view is about 60 degrees (stick your nose up to your monitor to get a somewhat geometrically accurate picture). The right value depends on how much of your visual field is taken up by the window displaying this applet.
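To make the arithmetic concrete, here is a tiny standalone sketch of the projection (the class and method names are mine, not the article's API). A vertex at eyespace *(1.0, 0.5, 5.0)*, viewed through a 400x300 window with a 60-degree field of view, lands at one specific pixel:

```java
// Sketch: perspective projection of one eyespace point onto the screen.
// Standalone illustration; names are hypothetical, not the article's API.
public class Project {
    // Returns {screenX, screenY}, or null if the point is nearer than 'hither'.
    public static int[] project (double x, double y, double z,
                                 int width, int height, double fov, double hither) {
        if (z < hither)
            return null; // on the near side of the window: not drawn
        double scale = width / 2 / Math.tan (fov / 2); // pixels per unit of x/z
        double tmp = scale / z;
        return new int[] { width / 2 + (int) (x * tmp),
                           height / 2 - (int) (y * tmp) }; // y flips: screen y grows down
    }
}
```

Note the sign flip on *y*: screen coordinates grow downwards, while eyespace *y* grows upwards.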

```java
double hither = .1;
double fov = Math.PI / 3;
XY[][] scrMap = new XY[steps + 1][steps + 1];
int width = getSize ().width, height = getSize ().height;
double scale = width / 2 / Math.tan (fov / 2);
for (int i = 0; i <= steps; ++ i) {
    for (int j = 0; j <= steps; ++ j) {
        Triple p = eyeMap[i][j];
        double x = p.x, y = p.y, z = p.z;
        if (z >= hither) {
            double tmp = scale / z;
            scrMap[i][j] = new XY (width / 2 + (int) (x * tmp),
                                   height / 2 - (int) (y * tmp));
        } else {
            scrMap[i][j] = null;
        }
    }
}
```

So we now have screenspace coordinates of each terrain vertex. This means that we have pixel locations for each triangle vertex of our scene. Now all we need do is draw these triangles!

### Rasterization

Rasterization means painting objects onto the screen. Some people call it rendering; others call it scan conversion. We'll stick with the big word for this section.

The simplest way for us to rasterize our terrain is to call `Graphics.fillPolygon()` for each triangle in the scene. This is not, however, particularly useful. We end up with flat-shaded triangles and, worse still, we end up with some triangles from the back of our terrain being drawn in front of triangles that should be at the front.

```java
public void paint (Graphics g) {
    for (int i = 0; i < numTriangles; ++ i) {
        XY xy0 = scrMap[triangles[i].i[0]][triangles[i].j[0]],
           xy1 = scrMap[triangles[i].i[1]][triangles[i].j[1]],
           xy2 = scrMap[triangles[i].i[2]][triangles[i].j[2]];
        // back-face cull: skip triangles facing away from the viewer
        double dot = - map[triangles[i].i[0]][triangles[i].j[0]]
            .subtract (loc).normalize ().dot (triangles[i].n);
        if ((dot > 0.0) && (xy0 != null) && (xy1 != null) && (xy2 != null)) {
            int[] x = { xy0.x, xy1.x, xy2.x },
                  y = { xy0.y, xy1.y, xy2.y };
            g.setColor (triangles[i].color);
            g.fillPolygon (x, y, 3);
        }
    }
}
```

You might think that we could simply sort our triangles and draw the furthest away first. As the following diagram shows, however, this approach doesn't always work. Some situations just can't be resolved with a simple depth sort. The two depicted triangles share a common edge. In each configuration of the following diagram, the third point of the smaller triangle (in relation to the Z axis) is behind this common edge and in front of the third point of the larger triangle -- so a depth sort won't change their ordering. In the configuration on the left, however, the small triangle is in front of its partner and, thus, should be drawn last. In the configuration on the right, it is behind and, thus, should be drawn first. To resolve this type of situation we need a more comprehensive algorithm.

There are a variety of solutions to this problem. One solution is to build span lists describing the portion of each scan line (row of pixels) on the screen that is covered by each triangle, and resolve simple overlaps using this list (Romney, Watkins and Evans, 1969). Instead, however, we will go with the *z-buffer* solution.

When drawing our triangles into an image buffer, in addition to storing the red, green, and blue color coefficients of each pixel, we will also store (in a separate z-buffer) the pixel's depth in the scene. Before we draw any pixel to the screen, we first consult this z-buffer to see if we've already drawn a closer point. If we have, then we don't draw the pixel; otherwise, we store the color and depth information as usual. In this way, we resolve any triangle overlap to pixel precision.
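A minimal sketch of that per-pixel test might look like the following (the field and method names are my own; the article's actual `Rasterizer` class is more elaborate):

```java
// Sketch: a z-buffered setPixel. A pixel is written only if it is nearer
// than whatever was drawn there before. Hypothetical minimal version.
public class ZBuffer {
    private final int width;
    private final int[] pixels;   // packed RGB colors
    private final double[] depth; // z of the nearest point drawn so far

    public ZBuffer (int width, int height) {
        this.width = width;
        this.pixels = new int[width * height];
        this.depth = new double[width * height];
        // everything starts infinitely far away
        java.util.Arrays.fill (depth, Double.MAX_VALUE);
    }

    public void setPixel (int x, int y, double z, int rgb) {
        int idx = y * width + x;
        if (z < depth[idx]) { // nearer than the existing pixel: overwrite
            depth[idx] = z;
            pixels[idx] = rgb;
        }
    }

    public int getPixel (int x, int y) {
        return pixels[y * width + x];
    }
}
```

The cost is one extra array of depths and one comparison per pixel; in exchange, overlapping triangles resolve correctly no matter what order they are drawn in.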

**Rasterizing a triangle**

We now have some triangle coordinates on-screen, and an algorithm for resolving polygon overlap. All we need to do now is draw the triangles. For each triangle, we'll use a simple algorithm that sorts the triangle vertices vertically, then steps from top to bottom, computing the left and right edges for each scan line of the triangle and drawing the spans using a simple `setPixel()` method. We'll draw our triangles into a 32-bit RGB memory buffer and display the result using the `java.awt.Image` class, just as we did last time.

To further simplify things, we are going to implement an algorithm that assumes that a triangle always has either a flat top or a flat bottom. If a triangle does not have a flat top or bottom, we simply divide it into two triangles, as in the following illustration.
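The split point is easy to compute: with the vertices sorted by *y*, interpolate along the long edge (from the top vertex to the bottom vertex) to find the *x* value level with the middle vertex. A small standalone sketch, with a hypothetical helper name:

```java
// Sketch: find the x coordinate where a flat-top/flat-bottom split occurs.
// Vertices are assumed sorted so that y0 <= y1 <= y2; we slice the long
// edge (v0 to v2) at the middle vertex's height y1. Hypothetical helper.
public class TriangleSplit {
    public static double splitX (double x0, double y0,
                                 double x1, double y1,
                                 double x2, double y2) {
        double t = (y1 - y0) / (y2 - y0); // fraction of the way down the long edge
        return x0 + t * (x2 - x0);
    }
}
```

The original middle vertex and this new point form the shared flat edge of the two resulting triangles.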

Remember, however, that our triangles are not flat shaded. We must compute a smooth fade between the colors (and depths) of each vertex. This is actually quite easy to resolve: we compute horizontal and vertical color and depth differentials. These are the amounts by which the color and depth change for every pixel that we step along a span, and for every pixel that we step down the left edge. Because our triangles are flat, these values are constant over the triangle. We can then simply modify our rasterizing algorithm to add these differentials during the final drawing process.
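The same forward-differencing idea applies to every interpolated quantity. As a sketch (with names of my own, not the article's rasterizer), fading a value such as depth or one color channel across a single span means computing one differential up front and then adding it once per pixel:

```java
// Sketch: forward differencing along one scan-line span. Instead of
// recomputing an interpolation at each pixel, we add a constant delta.
public class SpanInterp {
    // Fill out[xLeft..xRight] with values fading linearly from vLeft to
    // vRight. Assumes xRight > xLeft.
    public static void interpolate (double[] out, int xLeft, int xRight,
                                    double vLeft, double vRight) {
        double delta = (vRight - vLeft) / (xRight - xLeft); // per-pixel differential
        double v = vLeft;
        for (int x = xLeft; x <= xRight; ++ x) {
            out[x] = v;
            v += delta;
        }
    }
}
```

Replacing a division per pixel with an addition per pixel is exactly the kind of inner-loop saving that mattered on 1998-era CPUs, and it still matters today.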

The code for this rasterizer is fairly long, and in the interest of brevity, I'll let you examine it on your own time (see Resources for the complete source). The `Rasterizer` class allows us to rasterize Gouraud-shaded triangles into a z-buffered memory array for display.

### Finishing touches

That's it. We've built, lit, shaded, and shadowed some terrains; transformed them according to an arbitrary viewing location; and then rasterized the resulting triangles to screen. What more could we want?

Well, it turns out that there are a couple of things we can do to make our pictures a bit more interesting. Consider, for example, the image at the very start of this article. There is a deep blue space/sky background and a fractal planet. How did I generate these?

First, I modified the rasterizer class to load a pregenerated background image instead of always clearing to black. Then, I used the fractal terrain generator to generate a terrain that faded from blue to black, and changed my display code to view this from above. Presto, a deep blue (unrealistic) space background. A tiny change and we have clouds. If we want the clouds to distort in perspective, as if curving around a planet's horizon, we just need to define an appropriate terrain function.

The next change I made was to add oceans. For this, I took the fractal generator and modified the `getAltitude()` method to apply a fixed cutoff, so that any altitude below a certain value was reported as a constant sea level (and blue). For interest's sake, I actually allowed a small amount of variation in sea level. I then used this new fractal terrain to generate a planetary surface from above.

Stick this fractal terrain image in my last article as a texture map and we have a planet. Finally, combine the spacescape and planet as a background, draw an orange-red terrain in front, and voilà, the picture is complete. Well, after adding another light source to soften the shadows...

### Conclusion

Computer graphics is a fascinating topic with many different facets. We've looked at two ways of drawing pictures. Last time around, we fired rays into the scene, tried to determine what the rays hit, and drew our picture from the outside in. This time around, we took the opposite approach. We took the elements of our picture and drew them one by one to the screen. The more you strive for speed and realism, the more you will end up drawing on these and other techniques. The possibilities are practically limitless and, thanks to current CPU speeds, much more accessible than a few years ago, when custom graphics hardware was almost vital.

Some might say that going to the depth I have in this article -- covering details of viewer transformations and rasterizing individual triangles -- is unnecessary; the Java 3D API does it all for you. And, if your goal is simply to produce images, then they would be right. Some of us, however, can't keep our hands out of the mix; we need to know what is going on, right down to the metal.

### Learn more about this topic

- JavaSoft's Java 3D API, a comprehensive graphics library http://java.sun.com/products/java-media/3D/
- UCB CS184: Using Quaternions to Represent Rotation http://www.cs.berkeley.edu/~laura/cs184/quat/quaternion.html
- Stop by Michael Garland's terrain simplification page, which includes sample terrains http://www.cs.cmu.edu/~garland/scape/
- Read Ken Musgrave's "Building Fractal Planets" http://www.seas.gwu.edu/faculty/musgrave/article.html
- Read "Computer Rendering of Stochastic Models" by Alain Fournier, Don Fussell, and Loren Carpenter (Communications of the ACM, June 1982, p. 371)
- Download the complete source for this article as a gzipped tar file http://www.javaworld.com/jw-08-1998/step/jw-08-step.tar.gz
- Download the complete source for this article as a zip file http://www.javaworld.com/jw-08-1998/step/jw-08-step.zip
- Read all the previous **Step by Step** articles from Shoffner and Hughes http://www.javaworld.com/topicalindex/jw-ti-step.html
