3D Graphic Java: Render fractal landscapes

Get a behind-the-scenes look at 3D graphics rendering with this hands-on discussion of fractals, quaternion transformations, shadows, rasterization, and Gouraud shading

Page 2 of 4
public class Triangle {
  /* package access: the rendering code reads these fields directly */
  int[] i = new int[3];
  int[] j = new int[3];
  Triple n;
  RGB[] rgb = new RGB[3];
  Color color;

  public Triangle (int i0, int j0, int i1, int j1, int i2, int j2) {
    i[0] = i0;
    i[1] = i1;
    i[2] = i2;
    j[0] = j0;
    j[1] = j1;
    j[2] = j2;
  }
}

Our Triangle data structure stores each vertex as an index (i,j) into the grid points of our terrain, along with normal and color information that we will fill in later. We create our array of triangles with the following piece of code:

int numTriangles = (steps * steps * 2);
Triangle[] triangles = new Triangle[numTriangles];
int triangle = 0;
for (int i = 0; i < steps; ++ i) {
  for (int j = 0; j < steps; ++ j) {
    /* two triangles per grid square */
    triangles[triangle ++] = new Triangle (i, j, i + 1, j, i, j + 1);
    triangles[triangle ++] = new Triangle (i + 1, j, i + 1, j + 1, i, j + 1);
  }
}

We'll look shortly at how to actually render these triangles onto the screen. But first, some lighting issues.

Lighting and shadows

Okay, so we have a colored terrain. Not much to boast about. Without performing some lighting and shadow computation we would have a picture that looks like this:

A terrain without lighting

We'll do two things to spruce this up. First, we'll add ambient and diffuse light sources so that elements of our landscape are lit according to their orientation relative to the sun. Then, we'll add shadows so that virtual matter actually impedes the progress of virtual photons.


As you recall, we thoroughly covered ambient and diffuse lighting in June's article. In a nutshell, if a diffuse (matte) surface is facing a light source, the amount of reflected light will be proportional to the cosine of the angle of incidence of the light on the surface. Light falling directly on a surface will be strongly reflected; light falling at an oblique angle will be weakly reflected. In addition, all surfaces will be evenly lit by an amount of ambient, directionless light.

A diffusely lit terrain

We will apply this simple lighting model to our terrain. One option is to take each point on the terrain and compute whether the point is facing the sun, and if so, what the angle of incidence is; the cosine of the angle of incidence is equal to the dot product of the terrain normal and sunlight direction. Of course, doing this for every point in the scene is rather expensive. Instead, we'll perform this calculation once for every triangle. The result should be plenty good enough.
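As a sketch of that per-triangle test, using plain double arrays (the article's Triple class, introduced below, packages these same operations), the cosine of the angle of incidence is just a dot product of unit vectors:

```java
// Sketch: is a surface facing the sun, and how strongly?
// Plain arrays stand in for a proper vector class.
public class Incidence {
  static double[] sub(double[] a, double[] b) {
    return new double[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
  }
  static double dot(double[] a, double[] b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  }
  static double[] normalize(double[] a) {
    double len = Math.sqrt(dot(a, a));
    return new double[] { a[0] / len, a[1] / len, a[2] / len };
  }
  // cosine of the angle of incidence between the surface normal
  // and the direction from the surface point toward the sun
  static double cosIncidence(double[] normal, double[] point, double[] sun) {
    double[] toSun = normalize(sub(sun, point));
    return dot(normalize(normal), toSun);
  }
  public static void main(String[] args) {
    // sun directly overhead an upward-facing surface: cosine = 1
    double c = cosIncidence(new double[] {0, 1, 0},
                            new double[] {0, 0, 0},
                            new double[] {0, 5, 0});
    System.out.println(c); // prints 1.0
  }
}
```

A cosine of 1 means light falling directly on the surface; a cosine near 0 means an oblique, weakly lit surface.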

Unlike my last article, we'll consider the sun to be a point this time around. Let's forget that it's about 100 times the size of the earth and allow it to be positioned anywhere in the scene. First, we'll compute a vector from the sun to our terrain:

Sun: psun = (xsun, ysun, zsun)

Terrain point: p = (x, y, z)

Light vector: vlight = p - psun = (x - xsun, y - ysun, z - zsun)

To aid in these computations, we'll use a helper class:

public class Triple {
  /* package access: the rendering code reads these fields directly */
  double x, y, z;

  public Triple (double x, double y, double z) {
    this.x = x;
    this.y = y;
    this.z = z;
  }
  public Triple add (Triple t) {
    return new Triple (x + t.x, y + t.y, z + t.z);
  }
  public Triple subtract (Triple t) {
    return new Triple (x - t.x, y - t.y, z - t.z);
  }
  public Triple cross (Triple t) {
    return new Triple (y * t.z - z * t.y, z * t.x - x * t.z,
      x * t.y - y * t.x);
  }
  public double dot (Triple t) {
    return x * t.x + y * t.y + z * t.z;
  }
  public double length2 () {
    return dot (this);
  }
  public Triple normalize () {
    return scale (1.0 / Math.sqrt (length2 ()));
  }
  public Triple scale (double scale) {
    return new Triple (x * scale, y * scale, z * scale);
  }
}

The Triple class represents a point or vector in space, and we provide methods that allow various mathematical operations to be performed. Note that we store information to double precision; if speed were more important, we'd use float elements. Similarly, note that our computations result in many temporary Triple objects being created. Again, if speed were important, we would (as the Java 3D API does) allow these objects to be reused.
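To illustrate that reuse point, here is a minimal sketch of what a mutable, allocation-free variant might look like (the class and method names here are illustrative, not from this article or the Java 3D API):

```java
// Sketch: a mutable vector that writes results into itself,
// avoiding a temporary object per operation.
public class MutableTriple {
  double x, y, z;

  MutableTriple(double x, double y, double z) {
    this.x = x;
    this.y = y;
    this.z = z;
  }
  // result is accumulated into this object; no allocation occurs
  MutableTriple addInPlace(MutableTriple t) {
    x += t.x;
    y += t.y;
    z += t.z;
    return this;
  }
  public static void main(String[] args) {
    MutableTriple a = new MutableTriple(1, 2, 3);
    a.addInPlace(new MutableTriple(4, 5, 6));
    System.out.println(a.x + " " + a.y + " " + a.z); // prints 5.0 7.0 9.0
  }
}
```

In a tight rendering loop, reusing one such object can save the garbage collector a great deal of work; the cost is that the code becomes harder to read than the immutable style used here.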

Next, we need the surface normal. What we want to calculate is a vector pointing straight up from each triangle. If we take two (non-parallel) vectors in the plane of the triangle and compute their cross product, we have the surface normal. And, to select two vectors in the plane of the triangle, we can just take two sides of the triangle:

Triangle: T = [p0, p1, p2]

Sides: va = (p1 - p0) = (x1 - x0, y1 - y0, z1 - z0); vb = (p2 - p0) = (x2 - x0, y2 - y0, z2 - z0)

Surface normal: nT = va × vb = (ya zb - za yb, za xb - xa zb, xa yb - ya xb)
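The cross-product formula above, as a standalone sketch (plain double arrays for brevity; the terrain code later performs the same computation with the Triple class):

```java
// Sketch: surface normal of a triangle from the cross product
// of two of its edges.
public class Normal {
  static double[] sub(double[] a, double[] b) {
    return new double[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
  }
  static double[] cross(double[] a, double[] b) {
    return new double[] {
      a[1] * b[2] - a[2] * b[1],   // ya zb - za yb
      a[2] * b[0] - a[0] * b[2],   // za xb - xa zb
      a[0] * b[1] - a[1] * b[0]    // xa yb - ya xb
    };
  }
  static double[] surfaceNormal(double[] p0, double[] p1, double[] p2) {
    // two sides of the triangle span its plane; their cross
    // product is perpendicular to that plane
    return cross(sub(p1, p0), sub(p2, p0));
  }
  public static void main(String[] args) {
    // a triangle lying flat in the x-z plane has a normal along the y axis
    double[] n = surfaceNormal(new double[] {0, 0, 0},
                               new double[] {0, 0, 1},
                               new double[] {1, 0, 0});
    System.out.println(n[0] + " " + n[1] + " " + n[2]); // prints 0.0 1.0 0.0
  }
}
```

Note that swapping the order of the two edges flips the normal's direction, so vertex ordering matters.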

Next, we need to remember that the strength of the light drops off in proportion to the inverse square of its distance from its source. This makes sense if you remember that the surface area of a sphere is 4πr2, so at a distance d from the sun, the light emitted from the sun will be spread over an area of 4πd2. For us on earth, it doesn't really matter; the strength of light from the sun is about one one-hundredth of a percent weaker on the far side of the earth than on the near side (okay, okay, so it's dark on the far side, but if the earth weren't there...). For our scene, however, the sun will be very close, so we care about this computation.

Computing the diffuse surface lighting

Now we're going to compute the illumination of a point on our terrain. We have our surface normal, the light direction, and the distance from the sun. The diffuse lighting is proportional to the cosine of the angle of the light's incidence, which is the dot product of two unit vectors (as we saw last time). The ambient lighting is a constant. So...

Ambient lighting: a

Sun's strength: s

Unit surface normal: n' = n / |n|

Light distance: d = |vlight|

Unit light vector: v'light = vlight / d

Light incidence: i = v'light . n'

Illumination facing the sun (i < 0): l = a - i . s / d2

Illumination facing away from the sun (i >= 0): l = a
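These formulas translate directly into code; here is a minimal sketch (the parameter names are illustrative):

```java
// Sketch of the per-point illumination formula above:
// l = a - i * s / d^2 when the surface faces the sun (i < 0),
// and l = a (ambient only) otherwise.
public class Illumination {
  static double illumination(double ambient, double strength,
                             double incidence, double distance) {
    if (incidence < 0.0)
      return ambient - incidence * strength / (distance * distance);
    return ambient;
  }
  public static void main(String[] args) {
    // facing the sun head-on (i = -1) at distance 2: a + s/4
    System.out.println(illumination(0.3, 4.0, -1.0, 2.0)); // prints 1.3
    // facing away from the sun: ambient light only
    System.out.println(illumination(0.3, 4.0, 0.5, 2.0));  // prints 0.3
  }
}
```

The sign convention looks odd at first: because vlight points from the sun toward the terrain, a surface facing the sun has a negative incidence, which is why the diffuse term subtracts i rather than adding it.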

That's it. We can now compute the amount of light reflected by any triangle in our terrain map.


An immediate fault apparent in the above image is that the triangles are flat shaded: each triangle has but a single color. This goes somewhat against the grain of our terrain definition -- our terrains are D0 continuous in altitude and color, but we render each triangle in a single color, so our image is not D0 continuous. Adjacent triangles have different colors, producing abrupt color discontinuities that our terrain definition says shouldn't be there.

To overcome this dilemma, we need to introduce a better shading model. Flat shading means that each triangle has a separate color. Gouraud shading, however, allows us to compute a color for each vertex of each triangle and then smoothly interpolate between these vertex colors within the triangle.

A Gouraud-shaded triangle

The Gouraud shading model allows us to easily ensure D0 continuity. We simply compute a color at each grid point on our terrain, and then assign our triangle vertices the appropriate colors from the grid. If two triangles are adjacent, they will share the same vertices along their common edge, and so will share exactly the same color along that edge.
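To make the interpolation concrete, here is a minimal sketch of the core Gouraud operation: linearly blending two vertex colors (plain double arrays stand in for the article's RGB class):

```java
// Sketch: linear interpolation between two vertex colors,
// the basic operation of Gouraud shading. A rasterizer applies
// this along triangle edges and then along each scanline.
public class Lerp {
  // t = 0 gives c0, t = 1 gives c1; components are in [0, 1]
  static double[] lerpColor(double[] c0, double[] c1, double t) {
    return new double[] {
      c0[0] + t * (c1[0] - c0[0]),
      c0[1] + t * (c1[1] - c0[1]),
      c0[2] + t * (c1[2] - c0[2])
    };
  }
  public static void main(String[] args) {
    double[] red = {1, 0, 0}, blue = {0, 0, 1};
    // halfway between red and blue: a dim purple
    double[] mid = lerpColor(red, blue, 0.5);
    System.out.println(mid[0] + " " + mid[1] + " " + mid[2]); // prints 0.5 0.0 0.5
  }
}
```

Because adjacent triangles interpolate from identical shared vertex colors, the blend is guaranteed to match exactly along their common edge.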

A terrain with Gouraud shading

To determine the color at a point on our grid, we use the exact computations outlined earlier. The only additional information that we need is the surface normal at our terrain vertices. We can compute this in a couple of ways. Given that we will be computing triangle surface normals anyway, one easy option is to simply average the surface normals of all triangles sharing the vertex. Another, slightly more complex option is to compute the average plane through the surrounding terrain vertices and to use the normal of this plane. For simplicity, we'll go with the first option.

Computing vertex normals

The following code computes the normals and lighting information for our array of triangles:

double ambient = .3;
double diffuse = 4.0;
Triple[][] normals = new Triple[steps + 1][steps + 1];
Triple sun = new Triple (3.6, 3.9, 0.6);
for (int i = 0; i <= steps; ++ i)
  for (int j = 0; j <= steps; ++ j)
    normals[i][j] = new Triple (0.0, 0.0, 0.0);
/* compute triangle normals and vertex averaged normals */
for (int i = 0; i < numTriangles; ++ i) {
  Triple v0 = map[triangles[i].i[0]][triangles[i].j[0]],
    v1 = map[triangles[i].i[1]][triangles[i].j[1]],
    v2 = map[triangles[i].i[2]][triangles[i].j[2]];
  Triple normal = v0.subtract (v1).cross (v2.subtract (v1)).normalize ();
  triangles[i].n = normal;
  for (int j = 0; j < 3; ++ j) {
    normals[triangles[i].i[j]][triangles[i].j[j]] =
      normals[triangles[i].i[j]][triangles[i].j[j]].add (normal);
  }
}
/* compute vertex colors and triangle average colors */
for (int i = 0; i < numTriangles; ++ i) {
  RGB avg = new RGB (0.0, 0.0, 0.0);
  for (int j = 0; j < 3; ++ j) {
    int k = triangles[i].i[j], l = triangles[i].j[j];
    Triple vertex = map[k][l];
    RGB color = colors[k][l];
    Triple normal = normals[k][l].normalize ();
    Triple light = vertex.subtract (sun);
    double distance2 = light.length2 ();
    double dot = light.normalize ().dot (normal);
    double lighting = ambient + diffuse * ((dot < 0.0) ? - dot : 0.0) / distance2;
    color = color.scale (lighting);
    triangles[i].rgb[j] = color;
    avg = avg.add (color);
  }
  triangles[i].color = new Color (avg.scale (1.0 / 3.0).toRGB ());
}

One obvious fault with this implementation is that we will have no sharp edges, even where there are meant to be sharp edges. For example, along the edge of a shadow there should be a sharp discontinuity. In our implementation, however, triangles on both sides of the shadow will blur the edge. This is even worse along a sharp color discontinuity in the terrain -- for example, where the blue sea meets the yellow sand. Solving this problem is simply a matter of adding more information to our model, and not sharing certain vertex colors. That, of course, is left as an exercise for the diligent reader.


We've done almost all of the static precomputations we need to generate our terrain data structure. One last thing I want you to notice is that when we compute whether a given part of our terrain is facing the sun or not, we don't check to see if the terrain element is shadowed by another part of the terrain.

Doing a fully accurate shadow computation would involve projecting our entire terrain back down onto itself, as seen from the sun, and then subdividing triangles into shaded and unshaded parts.

Computing accurate shadows

This process, however, is slow and difficult. Instead, we'll go with a stopgap measure that gives us some degree of realism at greatly reduced complexity. For every point on the terrain grid we will trace a line from the terrain to the sun, and determine whether this line intersects the terrain at any other point. If it does, our point is in shadow; if it does not, we compute lighting as usual.

A terrain with shadows

Furthermore, in order to make this computation more efficient, we won't perform a proper terrain intersection test. Instead, we'll simply sample the line at various points along its length, and check whether the terrain altitude at these points is higher than the line.

Computing simple shadows

The following code performs these shadow calculations. We use the computed shadow array in a slightly modified version of the earlier lighting computation: if a vertex is in shadow then there is no diffuse lighting; only ambient.

double[][] shade = new double[steps + 1][steps + 1];
for (int i = 0; i <= steps; ++ i) {
  for (int j = 0; j <= steps; ++ j) {
    shade[i][j] = 1.0;
    Triple vertex = map[i][j];
    Triple ray = sun.subtract (vertex);
    double distance = steps * Math.sqrt (ray.x * ray.x + ray.z * ray.z);
    /* step along ray in horizontal units of grid width */
    for (double place = 1.0; place < distance; place += 1.0) {
      Triple sample = vertex.add (ray.scale (place / distance));
      double sx = sample.x, sy = sample.y, sz = sample.z;
      if ((sx < 0.0) || (sx > 1.0) || (sz < 0.0) || (sz > 1.0))
        break; /* stepped off terrain */
      double ground = exaggeration * terrain.getAltitude (sx, sz);
      if (ground >= sy) {
        shade[i][j] = 0.0;
        break; /* in shadow; no need to sample further */
      }
    }
  }
}
/* modified lighting computation:
    double shadow = shade[k][l];
    double lighting = ambient + diffuse * ((dot < 0.0) ? - dot : 0.0) /
                                  distance2 * shadow;
 */

I want to point out that this solution is not perfect. If there is a sharp ridge or peak in the terrain, we may find that some vertices in the shadow of the feature realize that they are in shadow, and others do not. The typical problem is that our line sampling for some vertices falls on the feature (we get shadow), and for others it just skips over the feature (we get light). To overcome this problem, we need a proper intersection test. Practically speaking, we just need to sample the shadow line more frequently. For the purposes of this article, we will step along the shadow line in jumps the size of our terrain grid. If you find shadow artifacts, reduce the step.
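As a sketch of that tweak, here is the same shadow test with the sampling step pulled out as a parameter (the HeightField interface and all names here are illustrative stand-ins for the article's terrain object, not its actual API):

```java
// Sketch: the shadow-ray test with an adjustable sampling step.
// Smaller step values catch narrow ridges at the cost of more samples.
public class ShadowTest {
  interface HeightField { double altitude(double x, double z); }

  // returns true if the straight line from vertex to sun passes
  // below the terrain surface at any sampled point
  static boolean inShadow(double[] vertex, double[] sun,
                          HeightField terrain, double step) {
    double dx = sun[0] - vertex[0], dy = sun[1] - vertex[1],
           dz = sun[2] - vertex[2];
    for (double t = step; t < 1.0; t += step) {
      double x = vertex[0] + t * dx, y = vertex[1] + t * dy,
             z = vertex[2] + t * dz;
      if (x < 0.0 || x > 1.0 || z < 0.0 || z > 1.0)
        break;                       // stepped off the terrain
      if (terrain.altitude(x, z) >= y)
        return true;                 // the ray passes underground
    }
    return false;
  }
  public static void main(String[] args) {
    // a narrow bump at x = 0.5 shadows a vertex at the origin
    HeightField bump = (x, z) -> (Math.abs(x - 0.5) < 0.05) ? 0.8 : 0.0;
    System.out.println(inShadow(new double[] {0, 0, 0},
                                new double[] {1, 0.1, 0}, bump, 0.01));
  }
}
```

With a coarse step (say 0.1), a bump that narrow could fall between two samples and be missed entirely; halving the step halves the width of the features that can slip through.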

We're halfway there. We have a fractally generated, colored, lit, shaded, shadowed terrain. We have a screen. We need the twain to meet. To this end, we must accomplish two more tasks. First, we need to figure out where on the screen each of our terrain triangles fall. Then, we need to draw the triangles.

Viewpoint calculation

To compute where triangles from the terrain should be drawn on the screen, we need to know the viewer's location in the virtual world and what direction he or she is looking in. We'll look at that problem in this section.

Transforming the world
