Why Java and VRML?

Understand why Java and VRML are uniquely suited to each other

When Java was just getting started, a certain group of marketeers had the bright idea to position VRML as a competing technology. Nothing could be further from the truth. In fact, the two technologies were made for each other. VRML (in case you have been living under a rock for the last year or so) stands for Virtual Reality Modeling Language. Unlike most computer terms, the name aptly describes what VRML does: it allows you to lay out 3D worlds in such a way that they can be read by any VRML-compliant browser on any platform, much like HTML. One notable feature of VRML 1.0 is its object-oriented nature.

When we embarked on our journey to bring 3D to Java, we were hesitant to jump immediately on the VRML bandwagon. That was almost 10 months ago, when VRML's future and support were far less certain. So we first created a 3D API for Java called ICE. The goal behind ICE was a highly portable library, with only the part that could not be done in Java (for performance reasons) written in C. This essentially mirrors the architecture of the AWT. We will get into ICE in later columns.

After completing that project, Chris Laurel started to take a look at VRML and immediately saw a synergy between it and Java. After a long conversation, we decided it was worth a shot to implement a VRML parser in Java. This is where the real power of Java started to show. A scant few days after Chris started, he had the workings of a VRML parser up and running. How is this possible? Well, for starters, Java has few of the pitfalls of C or C++; there are no memory errors to track down. In addition, Chris could leverage the base classes he had created to build functionality quickly and easily. For instance, let's take a look at SceneNode. Nodes are what we call objects in VRML, and a SceneNode provides the core functionality for a node in a scene, such as geometry information. Below is the class hierarchy for IndexedFaceSetNode. Notice that we are reusing quite a bit of code. Now realize that we have over 40 classes that reuse the functionality in SceneNode. This makes development far more productive, provided the class design is correct.

java.lang.Object
   |
   +----ice.scene.SceneNode
           |
           +----ice.scene.ShapeNode
                   |
                   +----ice.scene.IndexedShapeNode
                           |
                           +----ice.scene.IndexedFaceSetNode
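
To illustrate the kind of reuse that hierarchy buys, here is a minimal sketch of the pattern. The class and method names below are invented for this column and are not the actual ICE API; the point is simply that the base class owns the generic machinery while each subclass contributes only what is specific to it.

// Hypothetical illustration of the reuse pattern, not real ICE code.
abstract class DemoSceneNode
{
   // Generic machinery every node inherits: a uniform way to
   // print itself for debugging.
   public String describe() {
      return nodeName() + " [" + geometryInfo() + "]";
   }

   protected abstract String nodeName();
   protected abstract String geometryInfo();
}

class DemoSphereNode extends DemoSceneNode
{
   private final float radius;

   DemoSphereNode(float radius) { this.radius = radius; }

   // Only the sphere-specific pieces live here; describe() is inherited.
   protected String nodeName() { return "Sphere"; }
   protected String geometryInfo() { return "radius " + radius; }
}

With 40-odd node classes, a shared method like describe() is written once in the base class instead of 40 times, which is exactly the productivity win the SceneNode hierarchy delivers, provided the class design is right.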

Below is some code that will put a sphere on your screen. If you have a VRML-capable browser, you can see it here. Note: the code below is meant as an example only. There is additional code you need (such as lights and cameras) in order to see this in most VRML browsers, but I did not want to take up the space. You can see the full code here.

#VRML V1.0 ascii
Separator {
   Sphere { radius 1.1 }
}

So, you say, what's so special about that? The Sphere node has a corresponding Java class. As a matter of fact, every VRML node in Liquid Reality has a corresponding Java class. SphereNode inherits its interface from ShapeNode. For those asking themselves whether the braces surrounding a node mean the same thing they do in Java or other languages, give yourself a gold star. Those braces represent something called scope; in plain English, anything done inside the braces applies only to what is inside the braces. Ohhh, kind of like an object. Well, not exactly; they act more like scoping operators in VRML 1.0. This is one of the things being fixed in the VRML 2.0 specification and isn't worth getting into here. So let's take a look at how Separators act somewhat like objects. In the listing below, notice that the Translation applied to the sphere does not affect the cone beneath it. Why? Because the Translation is inside the Separator that contains the sphere. For those of you with VRML browsers, you can see this here. You may see the full code listing here.

#VRML V1.0 ascii
Separator {
   Separator {
      Translation { translation -3 0 0 }
      Sphere { radius 1.1 }
   }
   Separator {
      Material {
         ambientColor [ 0 0 0 ]
         diffuseColor [0.1 0.5 0.2 ]
         shininess 0.25
      }
      Cone{}
   }
}

So now you should be getting the idea. Separators allow you to define "objects" within which you can do anything without affecting the rest of the 3D objects in the scene. As a matter of fact, you can even define an "object" and then use it again somewhere else, but we will get to that in another article. So now you are ready to go off and create 3D worlds, but if you are like the rest of us, you will quickly realize that these worlds are not very exciting. There is an effort underway to fix this problem as part of the VRML 2.0 specification, which you have already heard, or will be hearing, a lot about this week.

So, what do you do if you want your 3D cat to chase a virtual ball of yarn rather than just sit there? You might have a look at how we extended VRML 1.0 to support this type of thing in Liquid Reality. We will support VRML 2.0, but like you and lots of other people on the Net, we couldn't wait.

Liquid Reality is a VRML toolkit based on ICE and is written entirely in Java (except for a small C library that is part of ICE). What makes Liquid Reality unique is its ability to dynamically extend its VRML capabilities, much like Netscape Navigator or HotJava can extend their HTML capabilities using Java. Liquid Reality actually runs inside Netscape Navigator or any other Java-capable browser, as well as standalone. Let's go back to our VRML example and add some things.

The first change you will notice is the addition of a rotor node. Those of you familiar with VRML are probably saying "rotor node? VRML doesn't have a rotor node." Give me a few sentences and it will all become clear.

A rotor node looks at the current time and produces a rotation matrix that changes with time. Every time the rotor node is asked to render itself, it checks the current time and returns a matrix that will rotate the other objects in the same separator node. I won't reprint all the VRML here, just the changes. For those of you who have Liquid Reality, here is a demo; click the motion button to see the action. For the full code, go here.

Clock{}
Separator {
   Rotor { rotation 1 0 0 0 }
   Translation { translation 0 6 0 }
   Cone {}
}

A rotor node takes two arguments: an axis-angle rotation (a fancy way of specifying the axis about which the object will spin, along with its initial orientation angle) and a speed value. Here it is using the default speed of 1. So how does all this work? First, Liquid Reality loads the VRML file and begins parsing through the objects. Eventually it comes to our rotor node and says, "what the heck is a rotor node?" Well, Liquid Reality doesn't know what a rotor node is, but it is smart enough to go and ask the server. So it creates a request to send to the server, looking for a class file that describes the rotor node; in effect, "get me RotorNode.class." The server happily hands back the class file containing the Java code that describes the node. Liquid Reality uses the Java interpreter to run the class file and, ta da, Liquid Reality now knows what a rotor node is. Of course, this is a rather simple example, but you can get much more complex. There are quite a few people writing extension nodes for Liquid Reality. Some of ours let you attach a node to any critter in your VRML world that makes it run away when the user gets too close. Other, more interesting, ones actually open a connection back to the server to receive real-time information.
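
For readers curious what "go and ask the server" can look like at the Java level, here is a minimal sketch of a browser-style class loader that fetches a node's .class file over HTTP and defines it at runtime. This is not Liquid Reality's actual loader (its internals are not shown here); the NodeClassLoader name and the base URL are assumptions for illustration only.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

// Hypothetical sketch: fetch a node class such as "ice.scene.RotorNode"
// from the server that hosts the VRML world and define it at runtime.
// NodeClassLoader is an invented name, not part of Liquid Reality's API.
public class NodeClassLoader extends ClassLoader
{
   private final URL base;  // e.g. the document base of the .wrl file (assumption)

   public NodeClassLoader(URL base) {
      this.base = base;
   }

   protected Class<?> findClass(String name) throws ClassNotFoundException {
      try {
         // Map the class name to a URL such as <base>/ice/scene/RotorNode.class
         URL classUrl = new URL(base, name.replace('.', '/') + ".class");
         InputStream in = classUrl.openStream();
         ByteArrayOutputStream out = new ByteArrayOutputStream();
         byte[] buf = new byte[4096];
         for (int n; (n = in.read(buf)) != -1; ) {
            out.write(buf, 0, n);
         }
         in.close();
         byte[] bytes = out.toByteArray();
         // Hand the raw bytes to the Java runtime; from here on the class
         // behaves like any built-in node class.
         return defineClass(name, bytes, 0, bytes.length);
      } catch (IOException e) {
         throw new ClassNotFoundException(name, e);
      }
   }
}

With a loader along these lines, encountering an unknown node type boils down to something like new NodeClassLoader(worldBase).loadClass("ice.scene.RotorNode") (worldBase being a hypothetical URL for the world's server), after which the parser can instantiate the node and treat it exactly like a built-in one.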

An example of how you might use one of these nodes would be a 3D map of a network where computers are represented by cubes connected by cylinders. The computers could spin according to how much processing they were doing, and the cylinders that connect them could show bits moving through them according to how much traffic was passing between the machines. By now you probably want to write one of these nodes yourself, so let's take a look at the Java code for the rotor node.

package ice.scene;

import ice.Matrix4;

// Rotor {
//    rotation 0 0 1 0   # SFRotation
//    speed    1         # SFFloat
// }
//
// Here we are extending the functionality of the
// ModelTransformationNode, which basically does something to the
// current geometry every time it is asked to render.
public class RotorNode extends ModelTransformationNode
{
   // Define the field names for the parser -- this is basic
   // boilerplate code for all LR nodes.
   static String fieldnames[] = {
      "rotation",
      "speed"
   };

   // Default values for the fields
   static NodeField defaults[] = {
      new SFRotation(0, 0, 1, 0),
      new SFFloat(1)
   };

   // Initialize the fields
   public SFRotation rotation =
      new SFRotation((SFRotation) defaults[0], this);
   public SFFloat speed =
      new SFFloat((SFFloat) defaults[1], this);

   NodeField fields[] = {
      rotation, speed
   };

   // Standard methods to access the static class member variables
   public int numFields() { return fields.length; }
   public String fieldName(int n) { return fieldnames[n]; }
   public NodeField getDefault(int n) { return defaults[n]; }
   public NodeField getField(int n) { return fields[n]; }

   // The guts: apply a time-varying rotation about the given axis.
   // The angle is the initial angle plus (speed * time), with the
   // speed term wrapped to the range 0..2*PI.
   public void applyModelTransformation(Action a)
   {
      a.state.rotateModelMatrix(rotation.getAxisX(),
         rotation.getAxisY(),
         rotation.getAxisZ(),
         (float) (rotation.getAngle() +
                  speed.getValue() * a.state.getTime() % (2 * Math.PI)));
   }
}

As you can see, state is the key here; the Action's state object contains all the state information for the current rendering action. Now you need to put this in your Web page. Liquid Reality is like any other applet; you need to put an APPLET tag in your HTML.

<applet code="dnx.lrbrowser.LRBrowser.class" width=500
height=300>
<param name=vrml value="sphereconelr.wrl">
</applet>
</body>
</html>

So there you have it: your first animated 3D object. Now let's try something a bit more complex. How about something that actually reacts to a user in a 3D world? As these behaviors become more complex, you start to realize how nice Java is, since you can hand the class file to someone who knows nothing about Java and they can add it to one of the critters in their VRML world as simply as you can add an applet to an HTML page.


In this particular world the spider is the critter who runs away.

package ice.scene;

import ice.Matrix4;

public class AvoidDNXXNode extends ModelTransformationNode
{
   static String fieldnames[] = {
      // how close the user has to get before the critter flees
      "fleeProximity",
      // how far the critter will flee
      "fleeDistance",
      // how fast it flees
      "speed",
      // the initial angle at which the critter will run
      "startAngle"
   };

   static NodeField defaults[] = {
      new SFFloat(1),
      new SFFloat(1),
      new SFFloat(1),
      new SFFloat(0)
   };

   public SFFloat fleeProximity =
      new SFFloat((SFFloat) defaults[0], this);
   public SFFloat fleeDistance =
      new SFFloat((SFFloat) defaults[1], this);
   public SFFloat speed =
      new SFFloat((SFFloat) defaults[2], this);
   public SFFloat startAngle =
      new SFFloat((SFFloat) defaults[3], this);

   NodeField fields[] = {
      fleeProximity,
      fleeDistance,
      speed,
      startAngle
   };

   private Matrix4 mat = new Matrix4();
   private boolean fleeing = false;
   private float x, y, z;                       // current resting position
   private float dest_x, dest_y, dest_z, angle; // where (and which way) we are fleeing
   private double flee_time;                    // when the current flight started

   public int numFields() { return fields.length; }
   public String fieldName(int n) { return fieldnames[n]; }
   public NodeField getDefault(int n) { return defaults[n]; }
   public NodeField getField(int n) { return fields[n]; }

   public void applyModelTransformation(Action a)
   {
      mat.identity();
      if (!fleeing) {
         // If we are not already fleeing for our lives,
         // check to see if anyone is too close for comfort.
         float distance = a.state.distanceToPoint(x, y, z);
         if (distance < fleeProximity.getValue()) {
            fleeing = true;
            flee_time = a.state.getTime();
            angle = (float) (Math.random() * 2 * Math.PI);
            dest_x = (float) Math.cos(angle) * fleeDistance.getValue();
            dest_y = 0;
            dest_z = (float) -Math.sin(angle) * fleeDistance.getValue();
         }
         mat.translate(x, y, z);
      } else {
         // We are already fleeing.
         if (a.state.getTime() >= flee_time + 1.0 / speed.getValue()) {
            // We have fled far enough; time to rest.
            x += dest_x; y += dest_y; z += dest_z;
            fleeing = false;
            mat.translate(x, y, z);
         } else {
            // Keep running!
            float t = (float) (a.state.getTime() - flee_time) *
               speed.getValue();
            mat.translate(x + t * dest_x, y + t * dest_y, z + t * dest_z);
         }
      }
      a.state.concatModelMatrix(mat);

      // Face the direction we are (or were last) fleeing.
      mat.identity();
      mat.yrotate(angle + startAngle.getValue());
      a.state.concatModelMatrix(mat);
   }
}

If you think that was interesting, think about this: we can give an object in a 3D world random behavior through a Java program. What if, rather than generating a random direction to flee, we were to open a connection to a server and tell it where we are in the VRML world? Then imagine that everyone connected to the same server is reporting their location, so you can "see" where they are.
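
To make the idea concrete, here is a hedged sketch of what the client side of such a node might do: open a socket to a shared server and report the critter's position. The host name, port, and one-line message format are invented for illustration; this is not an actual Liquid Reality or Dimension X protocol.

import java.io.IOException;
import java.io.PrintWriter;
import java.net.Socket;

// Hypothetical position reporter; the server address and the
// "user x y z" line format are assumptions, not a real protocol.
public class PositionReporter
{
   private final String host;
   private final int port;

   public PositionReporter(String host, int port) {
      this.host = host;
      this.port = port;
   }

   // Send one position update, e.g. each time the node moves.
   // (A real node would keep the connection open rather than
   // reconnecting for every update.)
   public void report(String user, float x, float y, float z) throws IOException {
      Socket s = new Socket(host, port);
      PrintWriter out = new PrintWriter(s.getOutputStream(), true);
      out.println(user + " " + x + " " + y + " " + z);
      out.close();
      s.close();
   }
}

A world-aware node could call something like new PositionReporter("world.example.com", 4000).report("karl", x, y, z) whenever it moves, and a matching server could broadcast those updates to every other connected client.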

Karl Jacob is the CEO of Dimension X, a San Francisco-based Internet entertainment company specializing in creating tools to blend technology and entertainment. Previously he worked at Sun Microsystems Inc. in the Nomadic Computing Group, where he oversaw several collaborative projects in Internet research and invented a wireless e-mail application. He left Sun to help start On Ramp Inc., one of the first Web site production houses. He then founded Dimension X with two partners, Brad Karns and Greg Fry, along with Chris Laurel, Scott Fraize, Ryan Watkins, and Lexi Sonnenberg. Today, Dimension X has grown to 22 people and is considered by many to be a leading Java software-development house.