Navigate through virtual worlds using Java 3D

Use level-of-detail and fly-through behaviors


Figure 5. Triangle strips

Vertices 0, 1, and 2 create the first triangle; 1, 2, and 3 make up the second; 2, 3, and 4 create the third; and so on. The interleaved parameter indicates that all data for each vertex is contained in the same array. In my demonstration application, this data includes color, normal, and coordinate. The data array contains the color for vertex 0, normal for vertex 0, coordinate for vertex 0, then the color for vertex 1, normal for vertex 1, coordinate for vertex 1, and so forth. Colors require three floats (red, green, blue), normals require three floats (x, y, z), and coordinates require three floats (x, y, z). Thus, each vertex requires nine float data items in the array. Using interleaved data complicates coding, but speeds rendering. The by-reference parameter indicates that the Java 3D rendering routines and the application code share the data, saving space and time.
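To make the interleaved layout concrete, here is a minimal plain-Java sketch (not the article's actual class; the offset constants are assumptions consistent with the color/normal/coordinate order described above):

```java
public class InterleavedLayout {
    // Per the text: 3 floats color + 3 floats normal + 3 floats coordinate
    static final int FLOATS_PER_VERTEX = 9;
    static final int COLOR_OFFSET = 0;  // color comes first in each vertex record
    static final int NORMAL_OFFSET = 3; // then the normal
    static final int COORD_OFFSET = 6;  // then the coordinate

    // Index of the first float of vertex v's coordinate within the interleaved array
    static int coordIndex(int v) {
        return v * FLOATS_PER_VERTEX + COORD_OFFSET;
    }

    // In a triangle strip, triangle t is formed by vertices t, t+1, and t+2
    static int[] triangleVertices(int t) {
        return new int[] { t, t + 1, t + 2 };
    }

    public static void main(String[] args) {
        System.out.println(coordIndex(0));
        System.out.println(coordIndex(2));
        int[] tri = triangleVertices(1); // second triangle: vertices 1, 2, 3
        System.out.println(tri[0] + "," + tri[1] + "," + tri[2]);
    }
}
```

The stride arithmetic is why interleaving complicates coding: every read or write must skip over the other attributes packed into the same vertex record.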

The ElevationSegment constructor's main task is to create the array of float data used as the basis for the TriangleStripArray. Class ElevationSegment is given the starting and stopping indexes into the elevations array, x and z resolutions, the y exaggeration amount, and ranges for the x and z coordinates. The task then becomes allocating the vertexData array to the proper size and filling in the values. My code completes this task with nested for loops, one strip at a time, one row at a time. Notice that in the inner loop of the ElevationSegment constructor (shown below), each row requires the calculation of two vertices; also, at this point only the color and coordinate information is filled in. Normal vectors are calculated later.
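The sizing arithmetic the constructor performs can be sketched on its own (a standalone illustration; the names mirror the article's code, and the 9-floats-per-vertex figure comes from the text):

```java
public class SegmentSizing {
    // Mirrors the constructor's sizing arithmetic: dRows/dColumns count the
    // grid points actually kept at the given resolution, and each of the
    // (dColumns - 1) strips holds 2 vertices per row.
    static int vertexDataLength(int startRow, int stopRow, int startColumn,
                                int stopColumn, int resolution, int floatsPerVertex) {
        int dRows = (int) Math.ceil((stopRow - startRow + 1) / (double) resolution);
        int dColumns = (int) Math.ceil((stopColumn - startColumn + 1) / (double) resolution);
        return floatsPerVertex * dRows * 2 * (dColumns - 1);
    }

    public static void main(String[] args) {
        // A 4x4 patch at full resolution, 9 floats per vertex:
        // dRows = 4, dColumns = 4 -> 9 * 4 * 2 * 3 floats
        System.out.println(vertexDataLength(0, 3, 0, 3, 1, 9));
    }
}
```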

Once the vertexData array has been generated, an InterleavedTriangleStripArray object can be created and attached to the Shape3D. I created InterleavedTriangleStripArray, a specialization of the Java 3D TriangleStripArray, because I wanted a reusable object that supported the in-place generation of normals for the vertices. Java 3D does provide a NormalGenerator object capable of calculating normals for you; however, it does not use memory efficiently. In addition, since my terrain scene is divided into adjacent regions, it is desirable to have the normals set to the same values where the edges meet. This prevents visible seams from appearing along the joints. The algorithms used to calculate the normals and average them along the edges reach beyond this article's scope, but the code is in Resources for the interested reader.

Back to creating an InterleavedTriangleStripArray: First, you create an array indicating the number of vertices in each strip, then you create the InterleavedTriangleStripArray object itself using the interleaved and by-reference flags. Other flags indicate that colors, normals, and coordinates are all included in the interleaved data. You then tell the InterleavedTriangleStripArray to calculate the normals.

ElevationSegment's constructor also calls a method to set up Shape3D's appearance, including material color, shading, and lighting properties. These properties must be set for the scene lighting to work. In my example, I selected the SHADE_GOURAUD color attribute. This attribute causes Java 3D to use smooth shading to vary the colors across the face of a triangle based on the color specified for each vertex. Optionally, the code could have used SHADE_FLAT, in which case each triangle would be given a single color for its entire face.

To get interesting color variations, I used a simple ratio based on the vertex's elevation divided by (maxElevation-minElevation). For the red and green values, I multiplied the material colors by this ratio, resulting in darker hues at lower elevations, as shown in Figure 6. I set the blue value to (1 - ratio), so more blue is present at lower elevations, such as lakes, rivers, and oceans. An alternative strategy would be to create a table of colors indexed by elevation: for example, elevations near the minimum could be shades of blue, those above the tree line gray, and those above the snow line white. The least complex method would be to just use the material color and allow the lighting calculation to account for all color variations.
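As a minimal standalone sketch of this coloring rule (matRed and matGreen are hypothetical stand-ins for the material color components):

```java
public class ElevationColor {
    // Sketch of the coloring rule described above: red and green scale with
    // the elevation ratio, while blue is (1 - ratio) so low elevations
    // (lakes, rivers, oceans) come out bluer.
    static float[] colorFor(int elevation, int minElevation, int maxElevation,
                            float matRed, float matGreen) {
        float ratio = ((float) elevation) / (maxElevation - minElevation);
        return new float[] { matRed * ratio, matGreen * ratio, 1f - ratio };
    }

    public static void main(String[] args) {
        float[] low = colorFor(0, 0, 1000, 1f, 1f);     // lowest point: pure blue
        float[] high = colorFor(1000, 0, 1000, 1f, 1f); // highest: full red/green, no blue
        System.out.println(low[2] + " " + high[2]);
    }
}
```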

Figure 6. Color variation by elevation

The source code for the ElevationSegment object is shown below:

public ElevationSegment(int elevations[][], int startRow, int startColumn,
        int stopRow, int stopColumn, int minEl, int maxEl, GroundCoordinates gc,
        float exageration, float lowX, float highX, float lowZ, float highZ,
        int resolution)
{
  //
  // Save the ground coordinates
  //
  groundCoordinates = gc;
  //
  // Set up material properties
  //
  setupAppearance();
  //
  // Process the 2D elevation array
  //
  dRows = (int)Math.ceil((stopRow-startRow+1)/(double)resolution);
  dColumns = (int)Math.ceil((stopColumn-startColumn+1)/(double)resolution);
  xStart = lowX;
  zStart = lowZ;
  deltaX = (highX-lowX)/(stopColumn-startColumn);
  deltaZ = (highZ-lowZ)/(stopRow-startRow);
  //
  // First, create an interleaved array of colors, normals, and points.
  // (In Java, new never returns null -- it throws OutOfMemoryError on
  // failure -- so the allocation failure is caught rather than null-checked.)
  //
  try
  {
    vertexData = new float[FLOATSPERVERTEX*dRows*2*(dColumns-1)];
  }
  catch(OutOfMemoryError e)
  {
    System.out.println("Elevation segment: memory allocation failure");
    return;
  }
  //
  // Populate vertexData a strip at a time
  //
  int row, col; // Used as indexes into the elevations array
  int i;        // Used as an index into vertexData
  for(col = startColumn, i = 0; col <= stopColumn-resolution; col += resolution)
  {
    for(row = startRow; row <= stopRow; row += resolution)
    {
      if(row+resolution > stopRow) // Always use the last data line to prevent seams
        row = stopRow;
      setColor(i+COLOR_OFFSET, elevations[row][col], minEl, maxEl);
      setCoordinate(elevations, i+COORD_OFFSET, row, col, startRow, startColumn, exageration);
      i += FLOATSPERVERTEX;
      int c = col;
      if(c+resolution > stopColumn-resolution) // Always use the last data line to prevent seams
        c = stopColumn-resolution;
      setColor(i+COLOR_OFFSET, elevations[row][c+resolution], minEl, maxEl);
      setCoordinate(elevations, i+COORD_OFFSET, row, c+resolution, startRow, startColumn, exageration);
      i += FLOATSPERVERTEX;
    }
  }
  //
  // Create a stripCounts array holding the number of vertices in each strip
  //
  int[] stripCounts = new int[dColumns-1];
  for(int strip = 0; strip < dColumns-1; strip++)
    stripCounts[strip] = dRows*2;
  //
  // Create and set the geometry
  //
  tStrip = new InterleavedTriangleStripArray(vertexData.length/FLOATSPERVERTEX,
      GeometryArray.COORDINATES|GeometryArray.COLOR_3|GeometryArray.NORMALS
      |GeometryArray.BY_REFERENCE|GeometryArray.INTERLEAVED, stripCounts);
  tStrip.setInterleavedVertices(vertexData);
  tStrip.generateNormals(true);
  setGeometry(tStrip);
}
/**
 * Set up the material properties and coloring attributes
 */
private void setupAppearance()
{
  Appearance app = new Appearance(); // Create an appearance
  Material mat = new Material();     // Create a material
  // Select smooth (Gouraud) shading
  ColoringAttributes ca = new ColoringAttributes(matColor, ColoringAttributes.SHADE_GOURAUD);
  app.setColoringAttributes(ca);  // Add coloring attributes to the appearance
  mat.setLightingEnable(true);    // Allow lighting
  mat.setDiffuseColor(matColor);  // Set diffuse color (used by directional lights)
  mat.setAmbientColor(matColor);  // Set ambient color (used by ambient lights)
  mat.setSpecularColor(0f,0f,0f); // No specular color
  mat.setShininess(1.0f);         // No shininess
  app.setMaterial(mat);           // Add the material to the appearance
  setAppearance(app);             // Add the appearance to the object
  // Allow calls to setGeometry (capability bits must be set one at a time)
  setCapability(Shape3D.ALLOW_GEOMETRY_WRITE);
  setCapability(Shape3D.ALLOW_GEOMETRY_READ);
}
/**
 * Store coordinate data into the vertex data array
 *
 * @param elevations array of elevations
 * @param i index into vertexData to store the coordinate
 * @param row elevation row
 * @param col elevation column
 * @param startRow first row used in elevations
 * @param startColumn first column used in elevations
 * @param exageration elevation exaggeration factor
 */
public void setCoordinate(int[][] elevations, int i, int row, int col,
    int startRow, int startColumn, float exageration)
{
  vertexData[i] = (float)(((col-startColumn)*deltaX)+xStart);
  vertexData[i+1] = elevations[row][col]*exageration;
  vertexData[i+2] = (float)(zStart+((row-startRow)*deltaZ));
}
/**
 * Store color data into the vertex data array; compute the color based
 * on the elevation's position between the min and max elevations
 *
 * @param i index into vertexData to store the color
 * @param elevation vertex elevation (without exaggeration)
 * @param minElevation minimum elevation in the model
 * @param maxElevation maximum elevation in the model
 */
public void setColor(int i, int elevation, int minElevation, int maxElevation)
{
  float ratio = ((float)elevation)/(float)(maxElevation-minElevation);
  vertexData[i] = matColor.x*ratio;   // Set red
  vertexData[i+1] = matColor.y*ratio; // Set green
  vertexData[i+2] = (float)(1-ratio); // More blue at the lowest elevations
}


Control the view

In Java 3D, the image painted on the Canvas3D object is created based on where the user is located in the virtual world, what direction she is looking, and her vision characteristics. A ViewPlatform object represents the user's location and orientation. Think of this object as an airplane's cockpit. The pilot (user) looks straight ahead, but the plane itself can rotate along any of the three axes and move anywhere in space. The ViewPlatform object has a built-in transformation object that controls this movement.

A second object that affects what displays on the screen is the View object. This object can be thought of as defining the pilot's vision characteristics. The View's important aspects are the field of view, front clipping distance, and back clipping distance. The field of view specifies how wide an angle the pilot sees; a small angle resembles a horse wearing blinders, a wide angle resembles a camera's fish-eye lens. The front clipping distance determines how close something can be and still be seen; likewise, the back clipping distance determines how far something can be and still be seen.

FlyingPlatform, an object based on the abstract ViewPlatformAWTBehavior, controls the interaction between the input devices (mouse and keyboard) and the ViewPlatform. This object is attached to both the ViewPlatform and Canvas3D objects, as depicted in Figure 7.

Figure 7. ViewPlatform object interactions

View3DPanel

View3DPanel is responsible for creating the Java 3D environment. This includes the canvas, universe, lights, and view platforms, as well as loading the terrain data. Both the View and the ViewPlatform objects are created when the SimpleUniverse object is created.

The code segment below shows the portions of code that create the Canvas3D, lights, SimpleUniverse, and FlyingPlatform objects, as well as the lines that initialize the View object and link them together. In this case, I use a front clipping distance of 1 (meter) to ensure that the user can go right up beside a cliff and not have it disappear. I use a back clipping distance equal to twice the overall length (the model's east-to-west distance) so the user can look down on the scene from above and see the entire DEM data segment (approximately 100,000 meters wide). The textbooks and tutorials caution against a back-to-front clipping-distance ratio greater than 3,000, as some low-level OpenGL drivers and graphics hardware may have a hard time dealing with it. I have not had a problem thus far with my system.
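For concreteness, the resulting clipping ratio can be computed directly (a standalone sketch; the 100,000-meter model length is the approximate figure quoted above):

```java
public class ClipRatio {
    public static void main(String[] args) {
        double modelLength = 100_000;      // approximate east-west extent in meters
        double frontClip = 1;              // 1 meter, as in the article
        double backClip = 2 * modelLength; // twice the model length
        double ratio = backClip / frontClip;
        System.out.println(ratio);
    }
}
```

Note that this ratio is far above the cautioned 3,000, which is exactly the situation the article reports encountering without problems.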

The field-of-view constant I use in the program is 45 degrees, as that provides a natural-looking translation onto the screen. I could have used different values to create special effects (e.g., looking through a periscope or telescope).

Following View's initialization, a FlyingPlatform is created. Its constructor is passed a reference to the Canvas3D object as well as a reference to the ElevationModel object holding the terrain data. The FlyingPlatform object needs the reference to the Canvas3D object to establish communications for mouse and keyboard events. A reference to the ElevationModel object is necessary so the FlyingPlatform can query it for elevations at specific points when the terrain-following function is enabled.

The next line of code sets an infinite scheduling bounds for the FlyingPlatform so it is always active; following that, the FlyingPlatform is set as the behavior for the ViewPlatform object. The FlyingPlatform.goHome() method is then called to set the ViewPlatform's initial position to a predictable location. The code segment shown below is from View3DPanel constructor:

//
// Add a Canvas to the center of the panel
//
    setLayout(new BorderLayout());
    GraphicsConfiguration config = SimpleUniverse.getPreferredConfiguration();
    canvas = new Canvas3D(config);
    canvas.stopRenderer();
    add("Center", canvas);
//
// Create the branch group to hold the world
//
    world = new BranchGroup();
    world.setCapability(Group.ALLOW_CHILDREN_EXTEND);
//
// Create the background
//
    Background bg = new Background(backgroundColor);
    bg.setApplicationBounds(infiniteBounds);
    world.addChild(bg);
//
// Create lights
//
    BranchGroup lights = new BranchGroup();
    // Create the ambient light
    AmbientLight ambLight = new AmbientLight(true, ambientColor);
    ambLight.setInfluencingBounds(infiniteBounds);
    ambLight.setCapability(Light.ALLOW_STATE_WRITE);
    ambLight.setEnable(true);
    lights.addChild(ambLight);
    // Create the directional light
    Vector3f dir = new Vector3f(1,-1,1);
    DirectionalLight dirLight = new DirectionalLight(true, directionalColor, dir);
    dirLight.setCapability(Light.ALLOW_INFLUENCING_BOUNDS_WRITE);
    dirLight.setCapability(Light.ALLOW_STATE_WRITE);
    dirLight.setInfluencingBounds(infiniteBounds);
    dirLight.setEnable(true);
    lights.addChild(dirLight);
    world.addChild(lights);
//
// Create a universe and attach the branch group
//
    universe = new SimpleUniverse(canvas);
    universe.addBranchGraph(world);
  }
  /**
   * Loads the elevation data file, then creates and initializes the view
   * and viewing platform to conform to the terrain model
   */
  public void load(String fileName, StatusWindow stat)
  {
    model = new ElevationModel(fileName, stat);
    world.addChild(model);
  //
  // Adjust the view based on the size of the model
  //
    View view = universe.getViewer().getView();
    view.setFrontClipDistance(1);             // Allow the user to get close to objects
    view.setBackClipDistance(model.length*2); // Allow the user to see far-off objects
    view.setFieldOfView(Math.toRadians(FIELD_OF_VIEW));
  //
  // Set up the viewing platform
  //
    platform = new FlyingPlatform(canvas, model);
    platform.setSchedulingBounds(infiniteBounds);
    universe.getViewingPlatform().setViewPlatformBehavior(platform);
    platform.goHome();
  }


FlyingPlatform

FlyingPlatform is based on the ViewPlatformAWTBehavior abstract object. As previously described, the FlyingPlatform object processes mouse and keyboard events generated in the Canvas3D object and converts them into changes to the transform function that controls the ViewPlatform object. In addition to processing keyboard and mouse input, FlyingPlatform also provides functionality for a pop-up menu, as shown in Figure 8, and a navigation control panel, illustrated in Figure 9.

Figure 8. Navigation control menu

Figure 9. Navigation control panel

FlyingPlatform's constructor first invokes the constructor of its base class, passing the reference to the Canvas3D object and flags indicating that it will process mouse events, mouse motion events, and keyboard events. These events are handled by routines that override those defined in ViewPlatformAWTBehavior, including mouseDragged, mouseMoved, mouseClicked, and keyPressed, among others. FlyingPlatform then sets the input focus to the Canvas3D object, computes an initial elevation for home base and the initial movement increments, and creates a Vector3f object to store the initial ViewPlatform position.

The navigation control dialog SettingsDialog is then constructed but not displayed. It only displays in response to a pop-up menu request. Note the while loop in the code segment below that searches up the chain of windows to find the parent frame. This allows the dialog box to be tied to the application, making it easier to position and control.

After the SettingsDialog is created, FlyingPlatform's constructor creates the pop-up menu components and attaches them to the Canvas3D object. Special note: I had to use AWT (Abstract Window Toolkit) PopupMenu objects because, try as I might, I could not get the Swing JPopupMenu object to display (I'd be interested in seeing a solution). ItemListeners and ActionListeners were added to the menu items. FlyingPlatform's source code is shown below:

public FlyingPlatform(Canvas3D aCanvas, ElevationModelInterface aModel)
{
  super(aCanvas,MOUSE_MOTION_LISTENER|MOUSE_LISTENER|KEY_LISTENER);
  aCanvas.requestFocus();  // Get the focus to the Canvas, allows keyboard inputs
  model = aModel;
  canvas = aCanvas;
  HOME_Y = model.getElevationAt(0,0)+INITIAL_TERRAIN_FOLLOW_ALTITUDE;
  moveAmt = Math.round(model.getModelLength()/100);
  platformVect = new Vector3f(HOME_X,HOME_Y,HOME_Z);
  Container c = canvas.getParent();
  while(c.getParent() != null)
      c= c.getParent();
  settingsDialog = new SettingsDialog((Frame)c);
  popupMenu.add(settingsMenu);
  popupMenu.add(levelOffMenu);
  popupMenu.add(terrainFollowMenu);
  popupMenu.addSeparator();
  popupMenu.add(aerialViewMenu);
  popupMenu.add(homeBaseMenu);
  canvas.add(popupMenu);
  terrainFollowMenu.addItemListener(this);
  settingsMenu.addActionListener(this);
  homeBaseMenu.addActionListener(this);
  levelOffMenu.addActionListener(this);
  aerialViewMenu.addActionListener(this);
}


The heart of FlyingPlatform's functionality lies in the class's ability to maintain the transformation that controls the ViewPlatform. FlyingPlatform has four class variables that hold a vector for the location and three rotation angles, one for each axis (x, y, z). Initially, these values are set to the home base values. Home base is defined as the center of the x, z plane, 100 meters above the ground, looking north. Each time the user changes the ViewPlatform's location or rotation, one or more of the class variables change and the integrateTransforms() method is called to reset the ViewPlatform's transformation. integrateTransforms() creates a separate Transform3D object for the location and for each rotation angle, then multiplies the Transform3D objects together to create one transform representing their total effect on the ViewPlatform. The transformations take visual effect in the reverse of their multiplication order: in this case, we rotate on the z axis first, then the x axis, then the y axis, and lastly move to a location in space.

Special note: For this application, the rotation transformation order is important. The y rotation is performed last so it does not affect the z and x rotations. Once these operations are complete, ViewPlatform.setTransform is called to apply the new transform. Thus, any mouse or keyboard event-processing routine need only change one of these class variables and then call integrateTransforms() to take effect. The following code segment is from FlyingPlatform:

/** Holds the view platform location */
private Vector3f platformVect;
/** Holds the current X axis attitude */
private float xAngle = HOME_XANGLE; // Degrees
/** Holds the current Y axis attitude */
private float yAngle = HOME_YANGLE; // Degrees
/** Holds the current Z axis attitude */
private float zAngle = HOME_ZANGLE; // Degrees
...
/**
 * Reset the view platform transformation based on
 * the x, y, z rotation and location information
 */
protected void integrateTransforms()
{
  Transform3D tVect = new Transform3D();
  Transform3D tXRot = new Transform3D();
  Transform3D tYRot = new Transform3D();
  Transform3D tZRot = new Transform3D();
  tVect.set(platformVect);
  tXRot.set(new AxisAngle4d(1.0,0.0,0.0,Math.toRadians(xAngle)));
  tYRot.set(new AxisAngle4d(0.0,1.0,0.0,Math.toRadians(yAngle)));
  tZRot.set(new AxisAngle4d(0.0,0.0,1.0,Math.toRadians(zAngle)));
  tVect.mul(tYRot);
  tVect.mul(tXRot);
  tVect.mul(tZRot);
  targetTransform = tVect;
  vp.getViewPlatformTransform().setTransform(tVect);
}
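Why does the multiplication order matter? Rotations do not commute, which a minimal plain-Java sketch (using bare 3x3 matrices as stand-ins for Transform3D) makes easy to verify:

```java
public class RotationOrder {
    // Minimal 3x3 rotation helpers, illustrating why integrateTransforms
    // must multiply Y, then X, then Z in a fixed order.
    static double[][] rotY(double deg) {
        double r = Math.toRadians(deg), c = Math.cos(r), s = Math.sin(r);
        return new double[][] {{c, 0, s}, {0, 1, 0}, {-s, 0, c}};
    }
    static double[][] rotX(double deg) {
        double r = Math.toRadians(deg), c = Math.cos(r), s = Math.sin(r);
        return new double[][] {{1, 0, 0}, {0, c, -s}, {0, s, c}};
    }
    static double[][] mul(double[][] a, double[][] b) {
        double[][] m = new double[3][3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    m[i][j] += a[i][k] * b[k][j];
        return m;
    }
    public static void main(String[] args) {
        // Y-then-X differs from X-then-Y: the [0][1] entries disagree
        double[][] yx = mul(rotY(90), rotX(90));
        double[][] xy = mul(rotX(90), rotY(90));
        System.out.println(yx[0][1] + " vs " + xy[0][1]);
    }
}
```

This is the same reason the article performs the y rotation last: applied in any other order, the heading rotation would tilt the pitch and bank axes.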


You'll find creating methods to listen for keyboard and mouse input, change a variable, and then call a method pretty straightforward. The more difficult part comes when you want to move forward or backward through a scene. What values are added to or subtracted from x, y, and z when you want to move 10 meters forward and you are in a 33-degree nose-up position, with a 25-degree bank, headed northeast? Don't go hunting for your ninth-grade trigonometry book and try to derive the proper equation. Instead, use the transformation functions built into Java 3D. Here's how:

The ViewPlatform is initially set at the origin pointing down the negative z axis. Therefore, moving forward 10 units requires decreasing z by 10. This movement needs to translate into movement in all three directions. moveForward(float amt) completes this task by creating a Transform3D object based on the x, y, z axis rotations applied in the correct order (same order used by integrateTransforms()), then applying this transformation to a Vector3f stored in the tv object based on the movement. moveForward() creates a set of transformations for rotations, multiplies them together, creates a Vector3f representing movement in the z direction, then calls the transform function to translate tv. In my example, the vector (0, 0, -10) translates into (5.930, 5.44, -5.93). This vector can now be added to the platformVect that maintains the ViewPlatform's location. If terrain-following is enabled, then platformVect's y coordinate is updated based on the ground elevation below the new x, z location and the terrain-following altitude. The code segment below performs the moveForward function of FlyingPlatform:

/**
 * Move the ViewPlatform forward by the desired number of meters.
 * Forward means the direction in which the platform is currently pointed.
 * If terrain-following is enabled, keep the altitude a steady
 * amount above the ground.
 * @param amt number of meters to move forward
 */
public void moveForward(float amt)
{
//
// Calculate the x, y, z movement.
//
// Set up the rotation transforms
  Transform3D tTemp = new Transform3D();
  Transform3D tXRot = new Transform3D();
  Transform3D tYRot = new Transform3D();
  Transform3D tZRot = new Transform3D();
  tXRot.set(new AxisAngle4d(1.0,0.0,0.0,Math.toRadians(xAngle)));
  tYRot.set(new AxisAngle4d(0.0,1.0,0.0,Math.toRadians(yAngle)));
  tZRot.set(new AxisAngle4d(0.0,0.0,1.0,Math.toRadians(zAngle)));
  tTemp.mul(tYRot);
  tTemp.mul(tXRot);
  tTemp.mul(tZRot);
//
// Move forward in the z direction.
// This means decreasing z, since we are looking at the origin from positive z.
  Vector3f tv = new Vector3f(0,0,-amt);
  tTemp.transform(tv); // Translates z movement into x, y, z movement
//
// Set new values for the platform location vector.
// If terrain-following is on, find the terrain elevation at the new x, z
// coordinate and base the new altitude on that. Otherwise, use the computed altitude.
//
  if(followTerrain)
  {
    platformVect.y = model.getElevationAt(platformVect.x+tv.x, platformVect.z+tv.z)
                     + terrainFollowAltitude;
  }
  else
    platformVect.y += tv.y;
  platformVect.x += tv.x;
  platformVect.z += tv.z;
  integrateTransforms(); // Apply transformations
}
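The yaw-only case of this math is easy to check by hand. Here's a minimal standalone sketch (not the article's code; it rotates (0, 0, -amt) about the y axis only, and the signs depend on the yaw convention chosen):

```java
public class ForwardVector {
    // Yaw-only version of the moveForward math: rotate the "straight ahead"
    // vector (0, 0, -amt) about the y axis by the heading angle.
    static float[] forward(float amt, double yawDegrees) {
        double r = Math.toRadians(yawDegrees);
        float x = (float) (-amt * Math.sin(r));
        float z = (float) (-amt * Math.cos(r));
        return new float[] { x, 0f, z };
    }

    public static void main(String[] args) {
        float[] v = forward(10f, 0);  // facing -z: straight ahead
        float[] w = forward(10f, 90); // yawed 90 degrees: movement along -x
        System.out.println(v[2] + " " + w[0]);
    }
}
```

With pitch and bank added (as in the full method), the same approach holds: build the rotations, multiply them in the integrateTransforms() order, and transform the forward vector.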


One final trick I use in implementing FlyingPlatform is adding a level of sensitivity to mouse moves. In the routines for processing mouse motions shown below, I use the variables oldx and oldy to store the location of the last mouse drag. If the mouse moves without a button pressed (mouseMoved()), these variables are set to invalid values (-1). The first time the mouse moves with a button pressed (mouseDragged()), the location is saved in oldx, oldy. Subsequent mouseDragged() calls compare the values in oldx, oldy with the current x, y values to determine the direction of motion (up, down, left, right). Changes to the ViewPlatform location and orientation are made based on the movement direction and which mouse button is pressed. A sensitivity value (3 pixels) causes the program to ignore small, perhaps unintentional, movements. It is important to note that the x, y mouse coordinates have no relation to the x, y, z terrain coordinates. Mouse coordinates are in screen pixel units, with (0, 0) at the top-left of the screen. The code segment shown below contains the MouseEvent processing functions of FlyingPlatform:

public void mouseMoved(MouseEvent e)
 {
  oldx = -1;
  oldy = -1;
 }
public void mouseDragged(MouseEvent e)
{
  int mods = e.getModifiersEx();
  int x = e.getX();
  int y = e.getY();
  if(oldx < 0 || oldy < 0)
  {
   oldx = x;
   oldy = y;
   return;
  }
//
// Skip the event if it moved just a little
//
  if(Math.abs(y-oldy) < sensitivity &&
     Math.abs(x-oldx) < sensitivity)
     return;
//
// First, check to see if both buttons are down
//
   if((mods & MouseEvent.BUTTON1_DOWN_MASK) != 0
      && (mods & MouseEvent.BUTTON3_DOWN_MASK) != 0)
   {
     if(y > oldy+sensitivity)
       increaseXRotate(turnAmt);
     if(y < oldy-sensitivity)
       increaseXRotate(-turnAmt);
     return;
   }
//
// Process left only down
//
   if((mods & MouseEvent.BUTTON1_DOWN_MASK) != 0)
   {
     if(y > oldy+sensitivity) //Mouse moves down screen
       moveForward(-moveAmt);
     if(y < oldy-sensitivity) // Mouse moves up screen
       moveForward(moveAmt);
     if(x > oldx+sensitivity)
       increaseYRotate(-turnAmt);
     if(x < oldx-sensitivity)
       increaseYRotate(turnAmt);
   }
//
// Process right button down
//
   if((mods & MouseEvent.BUTTON3_DOWN_MASK) != 0)
    {
      if(y > oldy+sensitivity)// Mouse moves down screen
        increaseY(-moveAmt);
      if(y < oldy-sensitivity)// Mouse moves up screen
        increaseY(moveAmt);
      if(x > oldx+sensitivity)
        increaseZRotate(turnAmt);
      if(x < oldx-sensitivity)
        increaseZRotate(-turnAmt);
    }
   oldx = x;  // Save for comparison on next mouse move
   oldy = y;
 }
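The deadzone test at the top of mouseDragged can be distilled into a small standalone helper (a sketch, not the article's code; `significant` is a hypothetical name):

```java
public class DragFilter {
    // Sketch of the sensitivity/deadzone logic used in mouseDragged:
    // ignore drags smaller than `sensitivity` pixels on both axes, and
    // treat the first sample after a plain mouse move (oldx/oldy == -1)
    // as a baseline rather than a movement.
    static boolean significant(int x, int y, int oldx, int oldy, int sensitivity) {
        if (oldx < 0 || oldy < 0) return false; // no baseline yet: just record it
        return Math.abs(x - oldx) >= sensitivity || Math.abs(y - oldy) >= sensitivity;
    }

    public static void main(String[] args) {
        System.out.println(significant(102, 101, 100, 100, 3)); // jitter: ignored
        System.out.println(significant(105, 100, 100, 100, 3)); // real move
        System.out.println(significant(105, 100, -1, -1, 3));   // no baseline yet
    }
}
```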


Lessons learned

Java 3D depends on the correct installation of your graphics card and its drivers. Before working with Java 3D, I suggest getting the latest version of the drivers and firmware for your system. In my case, I use a Pentium 3 with an ATI Radeon AGP graphics adapter running Windows 2000 Professional. For Java 3D to work correctly, I had to turn off some of the card's hardware acceleration (the symptom in my case was that the Canvas3D would not refresh when the window was resized).

Also, Java 3D can require a lot of memory. Depending on your system's configuration and the Java engine you run, you might (probably will) have to run applications using the -Xmx switch to increase the maximum amount of memory Java is allowed to allocate. I've set my system to always use -Xmx256m so that I don't have to worry about it. The default is only 64 megabytes.

In addition, the demonstration program can be fine-tuned by changing the exaggeration constant to show greater differences in elevations and the SECONDS_PER_SEGMENT constant to have larger or smaller segments in ElevationModel. Also, the resolutions array initial values in LODSegment can be modified to give sharper pictures.

Some specific recommendations for developers:

  • Use the "strip" object forms (TriangleStripArray, LineStripArray, and so on) whenever possible. Interleaved and by-reference options provide the best use of memory and processor resources.
  • Since Java 3D internally converts geometry data to floats, the use of doubles is not warranted.
  • Call the garbage collector once the model has been created. Incurring this overhead predictably during initialization is better than having it start up during use. Opening and closing buffered files and creating geometry can leave a lot of trash.
  • Unless you need a particular method provided by Color3f, Point3f, or Vector3f, store coordinate and normal information as arrays of floats or other primitive data formats. Object-oriented purists could make a case for standard objects over arrays of primitive types on ideological grounds; however, doing so might compromise performance.
  • The use of indexed geometry is questionable. While it appears to save memory by not having to store vertex information more than once, the complexity of creating the index arrays (a separate one is required for coordinates, colors, and normals) and the fact that Java 3D converts the data in the indexed arrays to a nonindexed format anyway erases any perceived advantage. Future graphics hardware might support indexed geometry on a cross-platform basis.


Start navigating in Java 3D

In this article, I demonstrated how to efficiently create and navigate through 3D worlds using Java 3D and DEM data files available from USGS. The objects developed for the application demonstrate how to parse the DEM data and create geometry from it (DemFile, ElevationFile); how to create efficient Java 3D data structures using interleaved arrays and level-of-detail representation (ElevationModel, ElevationSegment, LODSegment); and how to create a user interface that allows the user to navigate through the virtual world (View3DPanel, FlyingPlatform). These objects are documented using javadoc and available for download in Resources.

The work presented here is part of a larger ongoing project; its goal is to create a large virtual world based on USGS mapping data that allows real-time display and fly-through capabilities. This virtual world can then be used as a basis for GIS applications in areas such as meteorology, flood and erosion control, wilderness fire-fighting, environmental and growth management, among others. My current work includes creating extensions to the LODSegment and DistanceLOD objects to allow segments to be swapped in and out of memory. This will allow mapping beyond the 1-degree-by-1-degree area to be created and interactively navigated.

About the author

Dr. Pendergast is an associate professor at Florida Gulf Coast University. He received an MS and PhD from the University of Arizona and a BSE in electrical and computer engineering from the University of Michigan. He has worked as an engineer for Control Data Corporation, Harris Corporation, and Ventana Corporation, and has taught at the University of Florida and Washington State University. His work has appeared in books and journals, and he has presented at numerous conferences. He is a member of the ACM (Association for Computing Machinery) and IEEE (Institute of Electrical and Electronics Engineers) computer societies.



Resources