Navigate through virtual worlds using Java 3D

Use level-of-detail and fly-through behaviors

Java 3D is a full-featured 3D graphics API that has been evolving over the past five years. Resources provides a link to Sun Microsystems' Website, where download and installation notes can be found. Java 3D supports a range of applications, including computer-aided design, Web advertising, motion picture special effects, and, of course, computer games. Java 3D uses conventions resembling OpenGL, but is actually a layer that sits atop low-level graphics APIs such as OpenGL and DirectX. Java 3D, like Java, is platform independent.

Geographic information system (GIS) developers wishing to expand into the 3D world and game developers looking for alternative ways to represent virtual worlds appreciate the difficulty of creating virtual worlds that accurately portray the real world and can be navigated interactively. Java 3D provides powerful and efficient mechanisms to display virtual worlds and convenient user interfaces to manipulate the worlds' views. In this article, I describe how to load and display data available from the US Geological Survey (USGS) in DEM (digital elevation model) format and how to create a user interface that allows keyboard and mouse controls to navigate through the virtual world. Figure 1 presents an aerial view of the Grand Canyon created by the application. The application's complete source code and Javadoc HTML files are available in Resources, as is a link to the USGS Website where the data resides.

Figure 1. Aerial view generated by demonstration application

To display a virtual world, a data file containing the coordinates of the hills, valleys, rivers, and oceans is necessary. You can either create this yourself or use an existing one. The USGS has such files available for download in DEM format. A digital elevation model consists of a sampled array of ground elevations (in meters) for positions at regularly spaced intervals. The basic elevation model is produced by the Defense Mapping Agency (DMA) and is distributed by the USGS and EROS (Earth Resources Observation System) Data Center in the DEM data record format. The 1-Degree DEM (3-by-3 arc-second data spacing) provides coverage in 1-by-1 degree blocks (about 68 miles by 68 miles) for the entire continental United States, Hawaii, and parts of Alaska. The 3-by-3 arc-second spacing results in a 1,201-by-1,201 array of elevations for each 1-degree block, or roughly one elevation measurement every 92 meters or 100 yards.

A newer SDTS (Spatial Data Transfer Standard) format is also available. These elevation readings can be used to create 3D representations of the landscape, referred to as terrain modeling. When combined with other data types, such as stream locations and weather data, these readings can also be used to assist in forest fire control, determine the volume of proposed reservoirs, calculate the amount of cut-and-fill materials, and assist in determining landslide probability. Along with these useful applications, DEM data can make for realistic virtual worlds for gaming.

Application architecture

Computer games and 3D graphical information systems typically allow the user to change what is displayed on the screen either by manipulating the location of objects in a scene or by moving through the scene, often referred to as flying through the scene.

To support fly-through and other real-time screen update sequences, developers must implement the most efficient data structures and scene-rendering techniques. One scene-rendering technique used to display virtual worlds is called level-of-detail (LOD) optimization. This technique draws objects close to the viewer in fine detail, and objects farther away in less detail. For example, when viewing a person's face up close, all the details of nose, eyes, mouth, teeth, and so on, would display on the screen. A simple oval may represent the same person viewed from afar. This can save much processor time, especially if you are depicting a stadium full of people!

Some graphics programmers have shunned Java for its perceived inefficiency and overhead in allocating objects. Java 3D gives the programmer numerous ways to represent a scene's geometry. For example, a coordinate with x, y, z values can be stored as 3 ints, 3 floats, 3 doubles, a Vector3f object, a Point3f object, or a Tuple3f object, to name just a few. A TriangleFanArray, TriangleArray, or TriangleStripArray can represent triangles whose attributes are passed by value or by reference, indexed or nonindexed, and interleaved (where coordinate, color, and texture data are stored in the same array) or noninterleaved (where coordinate, color, and texture data are stored in separate arrays).

Knowing I would be programming many data points, I tested the Java 3D geometric primitives to see which were the most efficient. I tested several different configurations comparing TriangleArrays versus TriangleStripArrays, interleaved versus noninterleaved attributes, and indexed versus nonindexed attributes. I found that the combination of TriangleStripArrays (explained in more detail below) with interleaved arrays of floating point data was by far the fastest configuration for rendering the screen and consumed the least space. This means not using low-level objects such as Color3f, Point3f, and Vector3f to store data. Instead, I use arrays of floats. I describe these arrays' formats in more detail later. Since I scaled the scene such that 1 unit equals 1 meter, floats give me more than enough precision; using doubles serves no purpose, as the documentation indicates that Java 3D converts everything to floats anyway.
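
To illustrate the difference, the fragment below (hypothetical, not taken from the demonstration application) stores two vertices first as Point3f objects and then as a flat float array of the kind the application uses. The array form avoids allocating one object per vertex, which matters when a 1,201-by-1,201 grid of elevations is involved:

import javax.vecmath.Point3f;

// One object allocated per vertex -- convenient, but costly for large grids
Point3f[] points = { new Point3f(0f, 1520f, 0f), new Point3f(92f, 1518f, 0f) };

// The same two vertices as a flat float array (x, y, z, x, y, z, ...) --
// no per-vertex objects, and the array can be passed to Java 3D by reference
float[] coords = { 0f, 1520f, 0f, 92f, 1518f, 0f };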

To demonstrate the fly-through, terrain-modeling, and level-of-detail optimizations, I created an application capable of reading in a DEM file and allowing the user to navigate through the world. The examples shown in this article use the Grand Canyon East dataset. Figure 2 presents the application's structure. The shaded ovals represent objects I created.

Figure 2. Application architecture

Object responsibilities

This section briefly overviews the functions performed by each of the high-level objects that make up my demonstration application.

  • Main: based on JFrame, creates the application's framework
  • InstructionPanel: based on JPanel, displays mouse and keyboard commands available to the user
  • View3DPanel: based on JPanel, responsible for creating the Java 3D environment (lights and view platforms) and terrain data
  • DemFile: based on ElevationFile, responsible for parsing and loading DEM data into memory
  • ElevationModel: based on BranchGroup, divides the terrain data into segments and creates a LODSegment for each
  • LODSegment: based on BranchGroup, responsible for creating ElevationSegments at various resolutions and setting up the level-of-detail optimization
  • ElevationSegment: based on Shape3D, creates the geometry for a given segment of the terrain data at a given resolution
  • InterleavedTriangleStripArray: based on TriangleStripArray, provides in-place generation of normal vectors

Implementation notes

The following sections describe how critical aspects of these objects were implemented and factors related to their design. In particular, I detail DEM data parsing, creation of Java 3D geometry objects, and user viewpoint manipulation.

Decoding the DEM files

DEM files are available from the USGS as compressed text files. Resources provides links for downloading these files and the file format specifications. Each file is composed of 1,024-byte records with fixed-length fields. The first record is referred to as a type "A" record and contains header information, including the quadrangle name, maximum and minimum elevations, latitude and longitude coordinates, and the number of data rows and data columns. One "B" record also exists for each column of data. B records hold the actual elevations (in meters). As a rule, the DEM files contain 1,201 rows by 1,201 columns of data. The specification also mentions an optional type "C" record that has information about data accuracy. Double-precision numbers are stored in the old FORTRAN format, which uses a "D" instead of an "E" as the exponent character, e.g., 0.1404000000000D+06. Therefore, as the code reads and parses the data, the D must be changed to an E.

I created two classes to handle terrain data. ElevationFile is an abstract class used as a basis for creating format-specific files. ElevationFile defines the minimum common fields needed for constructing 3D images from terrain data, including the minimum and maximum elevations, latitude and longitude of the ground coordinates being mapped, number of rows and columns of data, and a 2D array of elevations (row-column order). By generalizing the file's common aspects, I can more easily modify my application in the future to support other file formats. The source for ElevationFile is shown below:

/**
 * ElevationFile is an abstract base class used to define the interface
 * between files holding terrain data and the Java 3D classes that
 * convert the terrain data into geometric primitives.
 *
 * @author  Mark Pendergast
 * @version 1.0 February 2003
 * Known subclasses @see DemFile
*/
public abstract class ElevationFile{
/** Data file name. */
public String fileName;
/** Geographic name or title of the data. */
public String quadrangleName = "";
/** Minimum elevation in meters. */
public int minElevation = 0;
/** Maximum elevation in meters. */
public int maxElevation = 0;
/** Two-dimensional array of elevation data in meters, array represents equally 
    spaced data points across the groundCoordinates. The first dimension is 
    the row, the second, the column.*/
public int elevations[][] = null; // array of raw elevation data
/** Number of data rows. */
public int nRows;
/** Number of data columns. */
public int nColumns;
/** Holds ground coordinates of the four corners in arc seconds. */
public GroundCoordinates groundCoordinates = new GroundCoordinates();
 public ElevationFile()
 { }
}

The second class I created, DemFile, inherits from ElevationFile and provides code to handle DEM file format specifics. The constructor is passed a filename, which it opens as a BufferedReader. The entire A record is then read into a character array, and the desired fields are parsed out and stored in the appropriate class variables. I wrote the parseDemDouble() method to parse the old FORTRAN double-precision format. Once the A record is read in and processed, the code can read in and parse each B record. To ease this job, I wrapped the BufferedReader in a StreamTokenizer. Since the A record is read in its entirety, the stream is already positioned at the beginning of the first B record.

Now it is just a matter of using a set of nested for loops to read in the B records one at a time (each representing one column of data), and the row data itself. When the process is complete, the streams are closed. All elevation data is now stored in the 2D elevations array. The elevations array is an integer array since each elevation is to the nearest meter. The source for DemFile is shown below:

import java.io.*;
/**
 *  This class is a specialization of the ElevationFile class created
 * specifically to load DEM format data from the USGS archives.
 *
 * @author  Mark Pendergast
 * @version 1.0 February 2003
 *  @see ElevationFile
 */
public class DemFile extends ElevationFile {
public static final int ARECORD_LENGTH = 1024;
public static final int QUADRANGLE_NAME_LENGTH = 144;
public static final int MIN_ARECORD_TOKENS = 39;
/**
*  Create DemFile object from data contained in specified file.
*
*  @param aFileName name of the DEM file to load. File name should be a
*  fully qualified file name.
*  @exception IllegalArgumentException thrown whenever an invalid data
* file is given as an argument.
*/
public DemFile(String aFileName) throws IllegalArgumentException
{
  try{
      char[] Arecord = new char[ARECORD_LENGTH];
      fileName = new String(aFileName);
      FileReader file = new FileReader(fileName);   
      BufferedReader bReader = new BufferedReader(file);  
//
// Read and parse out A record.
//
      if(bReader.read(Arecord,0,ARECORD_LENGTH) == -1)
      {
       bReader.close();
       System.out.println("Invalid file format (bad arecord) : "+fileName);
       throw(new IllegalArgumentException("Invalid file format : "+ fileName));
     }
      quadrangleName = new String(Arecord, 0, QUADRANGLE_NAME_LENGTH);
      quadrangleName = quadrangleName.trim();
      minElevation = (int)parseDemDouble(new String(Arecord,738,24));
      maxElevation = (int)parseDemDouble(new String(Arecord,762,24));
      groundCoordinates.sw[GroundCoordinates.LONGITUDE] = 
         Math.abs(parseDemDouble(new String(Arecord,546,24)));
      groundCoordinates.sw[GroundCoordinates.LATITUDE] = 
         Math.abs(parseDemDouble(new String(Arecord,570,24)));
      groundCoordinates.nw[GroundCoordinates.LONGITUDE] = 
         Math.abs(parseDemDouble(new String(Arecord,594,24)));
      groundCoordinates.nw[GroundCoordinates.LATITUDE] =
         Math.abs(parseDemDouble(new String(Arecord,618,24)));
      groundCoordinates.ne[GroundCoordinates.LONGITUDE] =
         Math.abs(parseDemDouble(new String(Arecord,642,24)));
      groundCoordinates.ne[GroundCoordinates.LATITUDE] = 
         Math.abs(parseDemDouble(new String(Arecord,666,24)));
      groundCoordinates.se[GroundCoordinates.LONGITUDE] =
         Math.abs(parseDemDouble(new String(Arecord,690,24)));
      groundCoordinates.se[GroundCoordinates.LATITUDE] = 
         Math.abs(parseDemDouble(new String(Arecord,714,24)));
      nColumns = (int)parseDemDouble(new String(Arecord,858,6));
//
//  Use a StreamTokenizer to parse B records, one record for each column.
//  Set the StreamTokenizer to use a space as a delimiter and convert all
//  tokens to strings.
//
    StreamTokenizer st = new StreamTokenizer(bReader); // Stream prepositioned to start of B record.
    st.resetSyntax();
    st.whitespaceChars(' ',' ');
    st.wordChars(' '+1,'z');
    for(int column = 0; column < nColumns; column++)
    {
     int ttype;
     double rowCoordinateLat, rowCoordinateLong;
     st.nextToken(); // Skip row ID.
     st.nextToken(); // Skip column ID.
     ttype = st.nextToken(); // Number of rows.
     nRows = (int)parseDemInt(st.sval);
     if(elevations == null) // Allocate array if necessary.
       elevations = new int[nRows][nColumns];
     for(int i=0; i < 6; i++)  // Skip 6 fields.
       st.nextToken();
     for(int row = 0; row<nRows; row++) // Read in elevation data.
     {
       st.nextToken();
       elevations[row][column]=  parseDemInt(st.sval);
     }
    }
    bReader.close();
   } // End try.
   catch(IOException e){
       System.out.println("IE Exception when loading from [" + fileName + "] error: " + 
       e.getMessage());
       throw new IllegalArgumentException("File I/O failure : "+fileName);
   }
   catch(NumberFormatException e){
       System.out.println("NumberFormat Exception when loading from [" + fileName + "] ");
       throw new IllegalArgumentException("Invalid file format : "+fileName);
   }
   System.gc();  // Clean out memory.
}
/**
 * This method parses a double from a string.  Note, DEM data uses
 * the old FORTRAN notation for storing doubles using a 'D' instead of an
 * 'E'.
 * @param in  string to parse
 * @return double value from string
 * @exception NumberFormatException thrown when string is not a valid double
 */
public double parseDemDouble(String in) throws NumberFormatException
{
  String st = in.replace('D','E');  // Convert FORTRAN format to modern.
  return Double.parseDouble(st.trim());
}
public int parseDemInt(String in) throws NumberFormatException
{
  String st = in.replace('D','E');  // Convert FORTRAN format to modern.
  return Integer.parseInt(st.trim());
}
}

Create the geometry

Once the DEM file has been loaded, the elevation data can be converted into Java 3D geometry objects. As stated previously, to support fly-through and other real-time screen update sequences, we must use the most efficient data structures and level-of-detail optimizations. My experience has shown that TriangleStripArrays using interleaved, by-reference data handling are the most efficient in terms of memory and processor usage. While modeling the entire terrain as a single TriangleStripArray object is possible, that does not allow you to take full advantage of Java 3D's level-of-detail feature. For LOD to work, you must have some distant objects drawn at low resolution, and closer objects drawn at full resolution. Therefore, the region is divided into numerous segments; in my demonstration program, I divided the region into a six-by-six grid, as illustrated in Figure 3.

Figure 3. ElevationModel LOD grid

ElevationModel

The ElevationModel object is the top-level object in the terrain-modeling hierarchy. It divides the terrain data into a series of segments. Each segment is implemented as a LODSegment. Each LODSegment object creates three different ElevationSegment objects for its segment of the terrain. One segment is at full resolution, another plots every fifth elevation, and the third segment plots every tenth elevation. The system is created such that the segment where the viewer is located and the next adjacent segment are seen in full resolution, middle range segments are drawn at every fifth level, and distant segments are drawn at every tenth level. Figure 4 depicts what a cliff face looks like when viewed in the various levels of detail (from the same observation point).

Figure 4. Level-of-detail differences
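
The three resolutions are driven by an array named resolutions that the LODSegment code (shown later) indexes. Its declaration is not reproduced in this article, but given the behavior just described it would presumably look like this, where 1 means plot every elevation, 5 every fifth, and 10 every tenth:

private static final int[] resolutions = {1, 5, 10}; // assumed values: full, one-fifth, one-tenth detail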

The code segment below details how ElevationModel constructs the LODSegment objects. ElevationModel first determines how many segments are needed by dividing the SECONDS_PER_SEGMENT constant into the geographic length and width. A LODSegment array is then allocated to hold references to the segments. A set of nested for loops does all the work. During each iteration, a set of groundCoordinates, maximum and minimum x, z display coordinates, and the starting and stopping indexes into the elevation array are calculated. These are passed to the LODSegment constructor. An important note: the end of one segment matches the start of the next segment. This prevents visible seams in the display. For example, one segment's maxX and stopColumn equal the minX and startColumn of the segment to its right.

Once the LODSegment is created, as shown in the code below, it is added as a child to the ElevationModel object (recall, ElevationModel is a BranchGroup). Once all segments have been created, the normals along their edges are adjusted to remove seams, and the scene is compiled to enhance performance. Normals are vectors attached to each vertex indicating the surface's orientation for lighting purposes. Refer to the section describing the ElevationSegment for more information on normals. To make the display more interesting, the ElevationModel class uses an exaggeration factor to make the elevation differences more apparent:

//
 //   Create LODSegments
 //
   sColumns = (int)Math.ceil(groundCoordinates.lengthSeconds()/SECONDS_PER_SEGMENT);
   sRows = (int)Math.ceil(groundCoordinates.widthSeconds()/SECONDS_PER_SEGMENT);
   segments = new LODSegment[sRows][sColumns];
   GroundCoordinates gc = new GroundCoordinates();
   int rowRatio = (int) (1.0d*file.nRows/sRows);
   int colRatio = (int) (1.0d*file.nColumns/sColumns);
   deltaRow = (north_Z-south_Z)/sRows;
   deltaCol = (east_X-west_X)/sColumns;
   for(int row = 0; row < sRows; row++)
   {
     float minX, maxX, minZ, maxZ;
     int startRow, stopRow, startCol, stopCol;
     gc.sw[GroundCoordinates.LATITUDE] = groundCoordinates.sw[GroundCoordinates.LATITUDE] + 
        row*SECONDS_PER_SEGMENT;
     gc.se[GroundCoordinates.LATITUDE] = groundCoordinates.sw[GroundCoordinates.LATITUDE]+ 
        row*SECONDS_PER_SEGMENT;
     gc.nw[GroundCoordinates.LATITUDE] = groundCoordinates.sw[GroundCoordinates.LATITUDE] + 
        (row+1)*SECONDS_PER_SEGMENT;
     gc.ne[GroundCoordinates.LATITUDE] = groundCoordinates.sw[GroundCoordinates.LATITUDE]+ 
        (row+1)*SECONDS_PER_SEGMENT;
     minZ = south_Z + row*deltaRow;
     maxZ = south_Z +(row+1.0f)*deltaRow;
     startRow = row*(rowRatio);
     stopRow = (row+1)*(rowRatio);
     for(int col = 0 ; col < sColumns; col++)
     {
      if(stat != null)
         stat.setLabel2("Creating geometry segment ",row*sColumns+col+1,sRows*sColumns);
     minX = west_X + col*deltaCol;
     maxX = west_X +(col+1.0f)*deltaCol;
     startCol = col*(colRatio);
     stopCol = (col+1)*(colRatio);
     gc.sw[GroundCoordinates.LONGITUDE] = groundCoordinates.sw[GroundCoordinates.LONGITUDE] 
        - col*SECONDS_PER_SEGMENT;
     gc.nw[GroundCoordinates.LONGITUDE] = groundCoordinates.sw[GroundCoordinates.LONGITUDE] 
        -  col*SECONDS_PER_SEGMENT;
     gc.se[GroundCoordinates.LONGITUDE] = groundCoordinates.sw[GroundCoordinates.LONGITUDE]
        -  (col+1)*SECONDS_PER_SEGMENT;
     gc.ne[GroundCoordinates.LONGITUDE] = groundCoordinates.sw[GroundCoordinates.LONGITUDE]
        - (col+1)*SECONDS_PER_SEGMENT;
     segments[row][col] = new LODSegment(file.elevations,  startRow,startCol, stopRow,stopCol,
        minElevation, maxElevation, gc, exageration, minX, maxX, minZ, maxZ);
      addChild(segments[row][col]);
    }
 }
...
if(stat != null)
     stat.setLabel2("Compiling/Optimizing the geometry");
  compile(); // Compile the model

LODSegment

The LODSegment object creates the level-of-detail components for one segment of the terrain model. To implement a level of detail, we must create Switch and DistanceLOD objects. Switch provides the capability to selectively display one of its children (ElevationSegment). The DistanceLOD object is a behavior object that tells Switch which of its children to display. LODSegment is based on a BranchGroup so that it can be used as a single point of reference for the DistanceLOD, Switch, and all ElevationSegments.

The LODSegment constructor is shown in the code segment below. LODSegment first creates a Switch object, then creates three ElevationSegment objects at different resolutions, and adds them to Switch. LODSegment then creates a DistanceLOD object and initializes it with its position, distance array, and bounds. The position passed to the DistanceLOD object is calculated to be a location at the center of the region and at the model's highest elevation. The bounds passed to the DistanceLOD object are set to infinite so the segment can be seen from any location. The distance array, sized to have one fewer item than the resolution array, determines which segment will display. If the distance from the segment to the viewer is less than the first entry, then the first segment is used; if the distance from the segment to the viewer is less than the second entry, then the second segment is used; and so forth.

In my code, I calculated the distance array such that distances are based on twice a segment's length. This ensures that the segment where the viewer is currently located and the one immediately adjacent to it display in full detail. LODSegment then passes to the DistanceLOD object a reference to the Switch object that it will control. Finally, LODSegment must add both the DistanceLOD and the Switch objects as its children. Examine LODSegment below:

public LODSegment( int elevations[][], int startRow, int startColumn, int stopRow, int stopColumn,
        int minEl, int maxEl, GroundCoordinates gc ,float exageration, 
       float minX, float maxX, float minZ, float maxZ)
  {
   super();
   groundCoordinates = gc;
  //
  // Initialize the switch node and create the child segments in varying resolutions
  //
 switchNode.setCapability(Switch.ALLOW_SWITCH_WRITE);
 segments = new ElevationSegment[resolutions.length];
  for(int i = 0; i < resolutions.length; i++)
  {
   segments[i] = new ElevationSegment(elevations, startRow,startColumn, stopRow,stopColumn,
      minEl,maxEl,groundCoordinates,
      exageration,minX,maxX,minZ,maxZ,resolutions[i]);
  switchNode.addChild(segments[i]);
 }
 //
 // Set the position and bounds of the object
 //
 Point3f position = new Point3f((float)((maxX+minX)/2),  maxEl*exageration,(float)((maxZ+minZ)/2));
 Bounds bounds = new BoundingSphere(new Point3d(0,0,0),Double.MAX_VALUE);
 //
 //  Calculate distances based on size of segment (east-west length)
 //
 distances = new float[resolutions.length-1];
 for(int i=0; i < distances.length; i++)
      distances[i] = Math.abs((float)((i+1)*2*(maxX-minX)));
//
//  Create the distanceLOD object
//
  dLOD = new DistanceLOD(distances,position);
  dLOD.setSchedulingBounds(bounds);
  dLOD.addSwitch(switchNode);
//
// Add the switch and the distance LOD to this object
//
 addChild(dLOD);
 addChild(switchNode);
 }

ElevationSegment

ElevationSegment is based on the Shape3D Java 3D object. Shape3D is a leaf node object that contains the actual geometry displayed on the screen. This geometry is based on the TriangleStripArray using the interleaved and by-reference parameters. A TriangleStripArray is a geometric primitive consisting of an array of vertices that form a series of triangles. The set of vertices is divided into a number of strips; each strip holds many triangles, as shown in Figure 5.

Figure 5. Triangle strips

Vertices 0, 1, and 2 create the first triangle; 1, 2, and 3 make up the second; 2, 3, and 4 create the third; and so on. The interleaved parameter indicates that all data for each vertex is contained in the same array. In my demonstration application, this data includes color, normal, and coordinate. The data array contains the color for vertex 0, normal for vertex 0, coordinate for vertex 0, then the color for vertex 1, normal for vertex 1, coordinate for vertex 1, and so forth. Colors require three floats (red, green, blue), normals require three floats (x, y, z), and coordinates require three floats (x, y, z). Thus, each vertex requires nine float data items in the array. Using interleaved data complicates coding, but speeds rendering. The by-reference parameter indicates that the Java 3D rendering routines and the application code share the data, saving space and time.
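
The ElevationSegment constructor shown later refers to FLOATSPERVERTEX, COLOR_OFFSET, and COORD_OFFSET without showing their declarations. Based on the nine-float layout just described (color first, then normal, then coordinate), they would presumably be declared along these lines:

/** Floats per vertex: 3 for color, 3 for the normal, 3 for the coordinate. */
public static final int FLOATSPERVERTEX = 9;
/** Offset of the color components within a vertex. */
public static final int COLOR_OFFSET = 0;
/** Offset of the normal components within a vertex (filled in when normals are generated). */
public static final int NORMAL_OFFSET = 3;
/** Offset of the coordinate components within a vertex. */
public static final int COORD_OFFSET = 6;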

The ElevationSegment constructor's main task is to create the array of float data used as the basis for the TriangleStripArray. Class ElevationSegment is given the starting and stopping indexes into the elevations array, x and z resolutions, the y exaggeration amount, and ranges for the x and z coordinates. It then becomes a task of allocating the vertexData array to a proper size and filling in the values. My code completes this task with nested for loops, a strip at a time, a row at a time. Notice in the inner loop of the ElevationSegment constructor (shown below) that each row requires the calculation of two vertices; also, at this time, only the color and coordinate information is filled in. Normal vectors are calculated later.

Once the vertexData array has been generated, an InterleavedTriangleStripArray object can be created and attached to the Shape3D. I created InterleavedTriangleStripArray, a specialization of the Java 3D TriangleStripArray, because I wanted a reusable object that supported the in-place generation of normals for the vertices. Java 3D does provide a NormalGenerator utility capable of calculating normals for you. However, it does not use memory efficiently. In addition, since my terrain scene is divided into adjacent regions, it is desirable to have the normals set to the same values where the edges meet. This prevents visible seams from appearing along the joints. The algorithms used to calculate the normals and average them along the edges reach beyond this article's scope, but the code is in Resources for the interested reader.

Back to creating an InterleavedTriangleStripArray: First, you create an array indicating the number of vertices in each strip, then you create the InterleavedTriangleStripArray object itself using the interleaved and by-reference flags. Other flags indicate that colors, normals, and coordinates are all included in the interleaved data. You then tell the InterleavedTriangleStripArray to calculate the normals.

ElevationSegment's constructor also calls a method to set up Shape3D's appearance, including material color, shading, and lighting properties. These properties must be set for the scene lighting to work. In my example, I selected the SHADE_GOURAUD color attribute. This attribute causes Java 3D to use smooth shading to vary the colors across the face of a triangle based on the color specified for each vertex. Optionally, the code could have used SHADE_FLAT, in which case each triangle would be given a single color for its entire face.

To get interesting color variations, I used a simple ratio based on the vertex's elevation divided by (maxElevation - minElevation). For red and green values, I multiplied the material colors by this ratio, resulting in darker hues at lower elevations, as shown in Figure 6. I set the blue value to 1 - ratio, so more blue is present at lower elevations, such as lakes, rivers, and oceans. An alternate strategy would be to create a table of colors to be used based on the elevation. For example, elevations near the minimum could be shades of blue, those above the tree line could be gray, and those above the snow line could be white. The least complex method would be to just use the material color and allow the lighting calculation to account for all color variations.

Figure 6. Color variation by elevation
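
As a rough sketch of that color-table alternative (not part of the demonstration application; the elevation bands and colors are purely illustrative), the per-vertex color could be looked up from a small palette keyed to elevation bands:

// Hypothetical elevation bands (meters) and their colors: water, vegetation, rock, snow
private static final int[] bandCeilings = { 400, 2000, 3000, Integer.MAX_VALUE };
private static final float[][] bandColors = {
    {0.1f, 0.3f, 0.8f},   // blue
    {0.2f, 0.6f, 0.2f},   // green
    {0.5f, 0.5f, 0.5f},   // gray
    {1.0f, 1.0f, 1.0f} }; // white

/** Hypothetical alternative to setColor(): pick a color by elevation band. */
public void setBandedColor(int i, int elevation)
{
  int band = 0;
  while (elevation > bandCeilings[band]) // find the band this elevation falls in
    band++;
  vertexData[i]   = bandColors[band][0]; // red
  vertexData[i+1] = bandColors[band][1]; // green
  vertexData[i+2] = bandColors[band][2]; // blue
}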

The source code for the ElevationSegment object is shown below:

public ElevationSegment( int elevations[][],   int startRow, int startColumn, int stopRow, int stopColumn, 
        int minEl, int maxEl, GroundCoordinates gc,    float exageration, 
        float lowX, float highX, float lowZ, float highZ, int resolution)
{
//
// Save the ground coordinates
//
  groundCoordinates = gc;
 //
 // Set up material properties
 //
 setupAppearance();
//
// Process the 2D elevation array
//
  dRows = (int)Math.ceil((stopRow-startRow+1)/(double)resolution);
  dColumns = (int)Math.ceil((stopColumn-startColumn+1)/(double)resolution);
  xStart = lowX;
  zStart = lowZ;
  deltaX = (highX-lowX)/(stopColumn-startColumn);
  deltaZ = (highZ-lowZ)/(stopRow-startRow);
//
// First, create an interleaved array of colors, normals, and points
//
  vertexData = new float[FLOATSPERVERTEX*(dRows)*2*(dColumns-1)];
  if(vertexData == null)
  {
    System.out.println("Elevation segment: memory allocation failure");
    return;
  }
//
// Populate vertexData a strip at a time
//
  int row, col; // Used as indexes into the elevations array
  int i; // Used as an index into vertexData
  for( col = startColumn, i = 0; col <= stopColumn-resolution; col += resolution)
  {
   for(row = startRow; row <= stopRow; row += resolution)
   {
      if(row+resolution > stopRow) // Always use last data line to prevent seams
         row = stopRow;
     setColor(i+COLOR_OFFSET,elevations[row][col],minEl,maxEl);
      setCoordinate(elevations, i+COORD_OFFSET,row,col,startRow,startColumn,exageration);
      i += FLOATSPERVERTEX;
      int c = col;
      if(c+resolution > stopColumn-resolution) // Always use last data line to prevent seams
      c = stopColumn-resolution;
       setColor(i+COLOR_OFFSET,elevations[row][c+resolution],minEl,maxEl);
       setCoordinate(elevations, i+COORD_OFFSET,row,c+resolution,startRow,startColumn,exageration);
       i += FLOATSPERVERTEX;
   }
  }
//
// Create a stripCount array showing the number of vertices in each
// strip
//
 int[] stripCounts = new int[dColumns-1];
 for(int strip = 0; strip < dColumns-1; strip++)
   stripCounts[strip] = (dRows)*2;
//
// Create and set the geometry
//
tStrip = new InterleavedTriangleStripArray(vertexData.length/FLOATSPERVERTEX,
GeometryArray.COORDINATES|GeometryArray.COLOR_3|GeometryArray.NORMALS
         |GeometryArray.BY_REFERENCE|GeometryArray.INTERLEAVED, stripCounts);
 tStrip.setInterleavedVertices(vertexData);
 tStrip.generateNormals(true);
 setGeometry(tStrip);
}
/**
 *  Set up the material properties and coloring attributes
 *
 */
private void setupAppearance()
{
    Appearance app = new Appearance(); // Create an appearance
    Material mat = new Material(); // Create a material
    // Select shading
   ColoringAttributes ca = new ColoringAttributes(matColor,ColoringAttributes.SHADE_GOURAUD);
    app.setColoringAttributes(ca); // Add coloring attributes to the appearance
    mat.setLightingEnable(true); // Allow lighting
    mat.setDiffuseColor(matColor); // Set diffuse color (used by directional lights)
    mat.setAmbientColor(matColor); // Set ambient color (used by ambient lights)
    mat.setSpecularColor(0f,0f,0f); // No specular color
    mat.setShininess(1.0f); // No shininess
    mat.setLightingEnable(true); // Allow lighting
    app.setMaterial(mat); // Add the material to the appearance
    setAppearance(app); // Add the appearance to the object
    setCapability(Shape3D.ALLOW_GEOMETRY_WRITE); // Allows calls to setGeometry
    setCapability(Shape3D.ALLOW_GEOMETRY_READ);
}
/**
 * Store coordinate data into vertex data array
 *
 *  @param elevations array of elevations
 *  @param i index into vertexData to store coordinate
 *  @param row elevation row
 * @param col elevation column
 * @param startRow first row used in elevations
 *  @param startColumn first column used in elevations
 *  @param exageration elevation exageration factor
 */
 public void setCoordinate(int[][] elevations, int i, int row, int col, int 
   startRow, int startColumn, float exageration)
 {
    vertexData[i] = (float)(((col-startColumn)*deltaX)+xStart);
   vertexData[i+1] = elevations[row][col]*exageration;
   vertexData[i+2] = (float)(zStart+((row-startRow)*deltaZ));
 }
/**
 * Store color data into vertex data array, compute color based
 * on the elevation's distance between min and max elevations
 *
 *  @param i index into vertexData to store coordinate
 *  @param elevation  vertex elevation (no exageration)
 *  @param minElevation minimum elevation in model
 *  @param maxElevation maximum elevation in model
 */
public void setColor(int i, int elevation,int minElevation, int maxElevation)
{
  float ratio = ((float) elevation)/(float) (maxElevation-minElevation);
  vertexData[i] = matColor.x*ratio; // Set red
  vertexData[i+1] = matColor.y*ratio; // Set green
  vertexData[i+2] = (float)(1-ratio); // Trick to bring blue for the lowest elevations
}

Control the view

In Java 3D, the image painted on the Canvas3D object is created based on where the user is located in the virtual world, what direction she is looking, and her vision characteristics. A ViewPlatform object represents the user's location and orientation. Think of this object as an airplane's cockpit. The pilot (user) looks straight ahead, but the plane itself can rotate along any of the three axes and move anywhere in space. The ViewPlatform object has a built-in transformation object that controls this movement.

A second object that affects what displays on the screen is the View object. This object can be thought of as defining the pilot's vision characteristics. The View's important aspects are the field of view, front clipping distance, and back clipping distance. The field of view specifies how wide an angle the pilot sees; a small angle resembles a horse wearing blinders, a wide angle resembles a camera's fish-eye lens. The front clipping distance determines how close something can be and still be seen; likewise, the back clipping distance determines how far something can be and still be seen.

FlyingPlatform, an object based on the abstract ViewPlatformAWTBehavior, controls the interaction between the input devices (mouse and keyboard) and the ViewPlatform. This object is attached to both the ViewPlatform and Canvas3D objects, as depicted in Figure 7.

Figure 7. ViewPlatform object interactions

View3DPanel

View3DPanel is responsible for creating the Java 3D environment. This includes creating the canvas, universe, lights, and view platform, and loading the terrain data. Both the View and the ViewPlatform objects are created when the SimpleUniverse object is created.

The code segment below shows the portions of code that create the Canvas3D, lights, SimpleUniverse, and FlyingPlatform objects, as well as the lines that initialize the View object and link them together. In this case, I use a front clipping distance of 1 (meter) to ensure that the user can go right up beside a cliff and not have it disappear. I use a back clipping distance equal to twice the overall length (the model's east-to-west distance) so the user can look down on the scene from above and see the entire DEM data segment (approximately 100,000 meters wide). The textbooks and tutorials caution against a back-to-front clipping distance ratio greater than 3,000, as some of the low-level OpenGL drivers and graphics hardware may have a hard time dealing with it. I have not had a problem thus far with my system.

The field-of-view constant I use in the program is 45 degrees, as that provides a natural-looking translation onto the screen. I could have used different values to create special effects (e.g., looking through a periscope or telescope).

Following View's initialization, a FlyingPlatform is created. Its constructor is passed a reference to the Canvas3D object as well as a reference to the ElevationModel object holding the terrain data. The FlyingPlatform object needs the reference to the Canvas3D object to establish communications for mouse and keyboard events. A reference to the ElevationModel object is necessary so the FlyingPlatform can query it for elevations at specific points when the terrain-following function is enabled.

The next line of code sets an infinite scheduling bounds for the FlyingPlatform so it is always active; following that, the FlyingPlatform is set as the behavior for the ViewPlatform object. The FlyingPlatform.goHome() method is then called to set the ViewPlatform's initial position to a predictable location. The code segment shown below is from View3DPanel constructor:

//
// Add a Canvas to the center of the panel
//
    setLayout(new BorderLayout());
    GraphicsConfiguration config =
        SimpleUniverse.getPreferredConfiguration();
    canvas = new Canvas3D(config);
    canvas.stopRenderer();
    add("Center", canvas);
//
// Create the branch group to hold the world
//
    world = new BranchGroup();
    world.setCapability(Group.ALLOW_CHILDREN_EXTEND);
 //
 // Create the background
 //
      Background bg = new Background(backgroundColor);
      bg.setApplicationBounds(infiniteBounds);
      world.addChild(bg);
   //
   // Create lights
   //
      BranchGroup lights = new BranchGroup();
      // Create the ambient light
      AmbientLight ambLight = new AmbientLight(true,ambientColor);
      ambLight.setInfluencingBounds(infiniteBounds);
      ambLight.setCapability(Light.ALLOW_STATE_WRITE);
      ambLight.setEnable(true);
      lights.addChild(ambLight);
      // Create the directional lights
       Vector3f dir = new Vector3f(1,-1,1);
       DirectionalLight dirLight = new DirectionalLight(true,directionalColor, dir);
      dirLight.setCapability(Light.ALLOW_INFLUENCING_BOUNDS_WRITE);
      dirLight.setCapability(Light.ALLOW_STATE_WRITE);
      dirLight.setInfluencingBounds(infiniteBounds);
      dirLight.setEnable(true);
      lights.addChild(dirLight);
       world.addChild(lights);
//
// Create a universe and attach the branch group
//
   universe = new SimpleUniverse(canvas);
  universe.addBranchGraph(world);
   }
  /**
  *  Loads elevation data file, creates and initializes the view
  * and viewing platform to conform to the terrain model
  */
  public void load(String fileName,StatusWindow stat)
  {
    model = new ElevationModel(fileName,stat);
    world.addChild(model);
 //
//  Adjust the view based on the size of the model
//
  View view = universe.getViewer().getView();
  view.setFrontClipDistance(1);  // Allow user to get close to objects
  view.setBackClipDistance(model.length*2);  // Allow user to see far off objects
  view.setFieldOfView(Math.toRadians(FIELD_OF_VIEW));
//
//  Set up the viewing platform
//
  platform = new FlyingPlatform(canvas, model);
  platform.setSchedulingBounds(infiniteBounds);
  universe.getViewingPlatform().setViewPlatformBehavior(platform);
  platform.goHome();

FlyingPlatform

FlyingPlatform is based on the ViewPlatformAWTBehavior abstract object. As previously described, the FlyingPlatform object processes mouse and keyboard events generated in the Canvas3D object and converts them into changes to the transform function that controls the ViewPlatform object. In addition to processing keyboard and mouse input, FlyingPlatform also provides functionality for a pop-up menu, as shown in Figure 8, and a navigation control panel, illustrated in Figure 9.

Figure 8. Navigation control menu
Figure 9. Navigation control panel

FlyingPlatform's constructor first invokes the constructor of its base class, passing the reference to the Canvas3D object and flags indicating that it will process mouse events, mouse motion events, and keyboard events. These events are processed by routines that override those defined in ViewPlatformAWTBehavior. These routines include mouseDragged, mouseMoved, mouseClicked, and keyPressed, among others. FlyingPlatform then sets the input focus to the Canvas3D object, computes an initial elevation for home base and initial movement increments, and creates a Vector3f object to store the initial ViewPlatform position.

The navigation control dialog SettingsDialog is then constructed but not displayed. It only displays in response to a pop-up menu request. Note the while loop in the code segment below that searches up the chain of windows to find the parent frame. This allows the dialog box to be tied to the application, making it easier to position and control.

After the SettingsDialog is created, FlyingPlatform's constructor creates the pop-up menu components and attaches them to the Canvas3D object. Special note: I had to use AWT (Abstract Window Toolkit) PopupMenu objects because, try as I might, I could not get the Swing JPopupMenu object to display (I'd be interested in seeing a solution). ItemListeners and ActionListeners were added to the menu items. FlyingPlatform's source code is shown below:

public FlyingPlatform(Canvas3D aCanvas, ElevationModelInterface aModel)
{
  super(aCanvas,MOUSE_MOTION_LISTENER|MOUSE_LISTENER|KEY_LISTENER);
  aCanvas.requestFocus();  // Get the focus to the Canvas, allows keyboard inputs
  model = aModel;
  canvas = aCanvas;
  HOME_Y = model.getElevationAt(0,0)+INITIAL_TERRAIN_FOLLOW_ALTITUDE;
  moveAmt = Math.round(model.getModelLength()/100);
  platformVect = new Vector3f(HOME_X,HOME_Y,HOME_Z);
  Container c = canvas.getParent();
  while(c.getParent() != null)
      c= c.getParent();
  settingsDialog = new SettingsDialog((Frame)c);
  popupMenu.add(settingsMenu);
  popupMenu.add(levelOffMenu);
  popupMenu.add(terrainFollowMenu);
  popupMenu.addSeparator();
  popupMenu.add(aerialViewMenu);
  popupMenu.add(homeBaseMenu);
  canvas.add(popupMenu);
  terrainFollowMenu.addItemListener(this);
  settingsMenu.addActionListener(this);
  homeBaseMenu.addActionListener(this);
  levelOffMenu.addActionListener(this);
  aerialViewMenu.addActionListener(this);
}

The heart of FlyingPlatform's functionality lies in the class's ability to maintain the transformation that controls the ViewPlatform. FlyingPlatform has four class variables that hold a vector for the location and three rotation angles, one for each axis (x, y, z). Initially, these values are set to the home base values. Home base is defined as the center of the x, z plane, 100 meters above the ground, looking north. Each time the user changes the ViewPlatform's location or rotation, one or more of the class variables change and the integrateTransforms() method is called to reset the ViewPlatform's transformation. integrateTransforms() creates a separate Transform3D object for the location and for each rotation angle, then the Transform3D objects are multiplied together to create one transform representing their total effect on the ViewPlatform. The transformations take visual effect in the reverse of the order in which they are multiplied. In this case, we rotate on the z axis first, then the x axis, then the y axis, and lastly move to a location in space.

Special note: For this application, the rotation transformation order is important. The y rotation is performed last so it does not affect the z and x rotations. Once these operations are complete, setTransform is called on the ViewPlatform's TransformGroup to apply the new transform. Thus, any mouse or keyboard event-processing routine need only change one of these class variables and then call integrateTransforms() for the change to take effect. The following code segment is from FlyingPlatform:

 /** Holds view platform location*/
 private Vector3f platformVect;
/** Holds current X axis attitude */
 private float xAngle = HOME_XANGLE; // Degrees
/** Holds current Y axis attitude */
 private float yAngle = HOME_YANGLE; // Degrees
/** Holds current Z axis attitude */
 private float zAngle = HOME_ZANGLE; // Degrees
...
/**
 *  Reset the view platform transformation based on
 * the x, y, z rotation and location information
 *
 */
protected void integrateTransforms()
{
   Transform3D tVect = new Transform3D();
   Transform3D tXRot = new Transform3D();
   Transform3D tYRot = new Transform3D();
   Transform3D tZRot = new Transform3D();
   tVect.set(platformVect);
   tXRot.set(new AxisAngle4d(1.0,0.0,0.0,Math.toRadians(xAngle)));
   tYRot.set(new AxisAngle4d(0.0,1.0,0.0,Math.toRadians(yAngle)));
   tZRot.set(new AxisAngle4d(0.0,0.0,1.0,Math.toRadians(zAngle)));
   tVect.mul(tYRot);
   tVect.mul(tXRot);
   tVect.mul(tZRot);
   targetTransform = tVect;
   vp.getViewPlatformTransform().setTransform(tVect);
}

You'll find creating methods to listen for keyboard and mouse input, change a variable, and then call a method pretty straightforward. The more difficult part comes when you want to move forward/backward through a scene. What values are added to or subtracted from x, y, z when you want to move 20 meters forward and you are in a 33-degree nose-up position, with a 25-degree bank and headed northeast? Don't go hunting for your ninth grade trigonometry book and try to generate the proper equation. Instead, use the transformation functions built into Java 3D. Here's how:

The ViewPlatform is initially set at the origin pointing down the negative z axis. Therefore, moving forward 10 units requires decreasing z by 10. This movement needs to translate into movement in all three directions. moveForward(float amt) completes this task by creating a Transform3D object based on the x, y, z axis rotations applied in the correct order (same order used by integrateTransforms()), then applying this transformation to a Vector3f stored in the tv object based on the movement. moveForward() creates a set of transformations for rotations, multiplies them together, creates a Vector3f representing movement in the z direction, then calls the transform function to translate tv. In my example, the vector (0, 0, -10) translates into (5.930, 5.44, -5.93). This vector can now be added to the platformVect that maintains the ViewPlatform's location. If terrain-following is enabled, then platformVect's y coordinate is updated based on the ground elevation below the new x, z location and the terrain-following altitude. The code segment below performs the moveForward function of FlyingPlatform:

/**
 * Move the ViewPlatform forward by desired number of meters.
 * Forward implies in the direction that it is currently pointed.
 * If terrain-following is enabled, then keep the altitude a steady
 * amount above the ground.
 * @param amt number of meters to move forward
 */
  public void moveForward(float amt)
  {
  //
  //  Calculate x, y, z movement.
  //
  // Set up Transforms
     Transform3D tTemp = new Transform3D();
     Transform3D tXRot = new Transform3D();
     Transform3D tYRot = new Transform3D();
     Transform3D tZRot = new Transform3D();
     tXRot.set(new AxisAngle4d(1.0,0.0,0.0,Math.toRadians(xAngle)));
     tYRot.set(new AxisAngle4d(0.0,1.0,0.0,Math.toRadians(yAngle)));
     tZRot.set(new AxisAngle4d(0.0,0.0,1.0,Math.toRadians(zAngle)));
     tTemp.mul(tYRot);
     tTemp.mul(tXRot);
     tTemp.mul(tZRot);
  //
  // Move forward in z direction.
  // Implies decreasing z since we are looking at the origin from the pos z.
     Vector3f tv = new Vector3f(0,0,-amt);
     tTemp.transform(tv);  // Translates z movement into x, y, z movement.
  //
  //  Set new values for the platform location vector.
  // If terrain-following is on, then find the terrain elevation at the new x, z
  // coordinate and base the new altitude on that. Else, use the computed altitude.
  //
    if(followTerrain)
    {
     platformVect.y = model.getElevationAt(platformVect.x+tv.x,platformVect.z+tv.z)
                      +terrainFollowAltitude;
    }
    else
     platformVect.y += tv.y;
    platformVect.x += tv.x;
    platformVect.z += tv.z;
    integrateTransforms();  // Apply transformations.
  }

One final trick I use when implementing FlyingPlatform is adding a level of sensitivity to mouse moves. In the routines for processing mouse motions shown below, I use the variables oldx and oldy to store the location of the last mouse drag. If the mouse moves without a button pressed down (mouseMoved()), these variables are set to invalid values (-1). The first time the mouse moves with a button pressed down (mouseDragged()), the location is saved in oldx, oldy. Subsequent mouseDragged() calls compare the values in oldx, oldy with the current x, y values to determine the direction of motion (up, down, left, right). Changes to the ViewPlatform location and orientation are made based on the movement direction and which mouse button is pressed. A sensitivity value (3 pixels) causes the program to ignore small, perhaps unintentional movements. It is important to note that the x, y mouse coordinates have no relation to the x, y, z terrain coordinates. Mouse coordinates are in screen pixel units (with 0, 0 being at the top-left corner of the canvas). The code segment shown below contains the MouseEvent processing functions of FlyingPlatform:

public void mouseMoved(MouseEvent e)
 {
  oldx = -1;
  oldy = -1;
 }
public void mouseDragged(MouseEvent e)
{
  int mods = e.getModifiersEx();
  int x = e.getX();
  int y = e.getY();
  if(oldx < 0 || oldy < 0)
  {
   oldx = x;
   oldy = y;
   return;
  }
//
// Skip the event if it moved just a little
//
  if(Math.abs(y-oldy) < sensitivity &&
     Math.abs(x-oldx) < sensitivity)
     return;
//
// First, check to see if both buttons are down
//
   if((mods & MouseEvent.BUTTON1_DOWN_MASK) != 0
      && (mods & MouseEvent.BUTTON3_DOWN_MASK) != 0)
   {
     if(y > oldy+sensitivity)
       increaseXRotate(turnAmt);
     if(y < oldy-sensitivity)
       increaseXRotate(-turnAmt);
     return;
   }
//
// Process left only down
//
   if((mods & MouseEvent.BUTTON1_DOWN_MASK) != 0)
   {
     if(y > oldy+sensitivity) //Mouse moves down screen
       moveForward(-moveAmt);
     if(y < oldy-sensitivity) // Mouse moves up screen
       moveForward(moveAmt);
     if(x > oldx+sensitivity)
       increaseYRotate(-turnAmt);
     if(x < oldx-sensitivity)
       increaseYRotate(turnAmt);
   }
//
// Process right button down
//
   if((mods & MouseEvent.BUTTON3_DOWN_MASK) != 0)
    {
      if(y > oldy+sensitivity)// Mouse moves down screen
        increaseY(-moveAmt);
      if(y < oldy-sensitivity)// Mouse moves up screen
        increaseY(moveAmt);
      if(x > oldx+sensitivity)
        increaseZRotate(turnAmt);
      if(x < oldx-sensitivity)
        increaseZRotate(-turnAmt);
    }
   oldx = x;  // Save for comparison on next mouse move
   oldy = y;
 }

Lessons learned

Java 3D depends on the correct installation of the graphics card and its drivers. Before working with Java 3D, I suggest getting the latest version of the drivers and firmware for your system. In my case, I use a Pentium 3 with an ATI Radeon AGP graphics adapter running Windows 2000 Professional. For Java 3D to work correctly, I had to turn off some of the card's hardware acceleration (the symptom in my case was that the Canvas3D would not refresh when the window was resized).

Also, Java 3D can require a lot of memory. Depending on your system's configuration and the Java engine you run, you might (probably will) have to run applications using the -Xmx switch to increase the maximum amount of memory Java is allowed to allocate. I've set my system to always use -Xmx256m so that I don't have to worry about it. The default is only 64 megabytes.
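
For example, assuming the demonstration application's Main class is on the classpath, a launch command along these lines raises the limit to 256MB:

java -Xmx256m Main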

In addition, the demonstration program can be fine-tuned by changing the exaggeration constant to show greater differences in elevations and the SECONDS_PER_SEGMENT constant to have larger or smaller segments in ElevationModel. Also, the resolutions array initial values in LODSegment can be modified to give sharper pictures.

Some specific recommendations for developers:

  • Use the "strip" object forms (TriangleStripArray, LineStripArray, and so on) whenever possible. Interleaved and by-reference options provide the best use of memory and processor resources.
  • Since Java 3D internally converts geometry data to floats, the use of doubles is not warranted.
  • Call the garbage collector once the model has been created. Incurring this overhead predictably during initialization is better than having it start up during use. Opening and closing buffered files and creating geometry can leave a lot of trash.
  • Unless you need a particular method provided by Color3f, Point3f, or Vector3f, store coordinate and normal information as arrays of floats or other native data formats. Object-oriented purists could make a case for the use of standard objects over arrays of primitive data types on ideological grounds; however, doing so might compromise performance.
  • The use of indexed geometry is questionable. While it appears to save memory by not having to store vertex information more than once, the complexity of creating the index arrays (a separate one is required for coordinates, colors, and normals) and the fact that Java 3D converts the data in the indexed arrays to a nonindexed format anyway erase any perceived advantage. Future graphics hardware might support indexed geometry on a cross-platform basis.

Start navigating in Java 3D

In this article, I demonstrated how to efficiently create and navigate through 3D worlds using Java 3D and DEM data files available from USGS. The objects developed for the application demonstrate how to parse the DEM data and create geometry from it (DemFile, ElevationFile); how to create efficient Java 3D data structures using interleaved arrays and level-of-detail representation (ElevationModel, ElevationSegment, LODSegment); and how to create a user interface that allows the user to navigate through the virtual world (View3DPanel, FlyingPlatform). These objects are documented using javadoc and available for download in Resources.

The work presented here is part of a larger ongoing project; its goal is to create a large virtual world based on USGS mapping data that allows real-time display and fly-through capabilities. This virtual world can then be used as a basis for GIS applications in areas such as meteorology, flood and erosion control, wilderness fire-fighting, environmental and growth management, among others. My current work includes creating extensions to the LODSegment and DistanceLOD objects to allow segments to be swapped in and out of memory. This will allow mapping beyond the 1-degree-by-1-degree area to be created and interactively navigated.

Dr. Pendergast is an associate professor at Florida Gulf Coast University. He received an MS and PhD from the University of Arizona and a BSE in electrical computer engineering from the University of Michigan. He has worked as an engineer for Control Data Corporation, Harris Corporation, and Ventana Corporation, and has taught at the University of Florida and Washington State University. His works have appeared in books and journals, and he has presented his work at numerous conferences. He is a member of the ACM (Association for Computing Machinery) and IEEE (Institute of Electrical and Electronics Engineers) computer societies.

