I had originally planned on writing this month's article espousing the minutiae of implementing MOM with callback over RMI, using HTTP proxies, CGI bouncers, network PDUs, and other TLAs. Then, upon waking from twenty-six days of comatose inertia brought on by a massive overconsumption of purloined candy, I had but one thing on my mind: Easter eggs.

As a result, this article is instead about developing a Java applet that draws textured Easter eggs. The textures are just a tile pattern built from a straightforward mathematical function of sines and cosines. We will transform these planar textures onto a sphere's surface to produce our finished product. To quickly draw these images to the screen we will render them into Java `Image` objects using classes from the `java.awt.image` package, letting the browser take care of any issues involved in actually displaying the resulting pictures. See Resources for the complete source code.

I must admit that the original inspiration for this article comes from Clifford A. Pickover's *Computers, Pattern, Chaos and Beauty: Graphics from an Unseen World* (St. Martin's Press, ISBN: 031206179X). If pretty computer-generated pictures interest you, I recommend you pick up a copy of this book.

## Creating a texture

The first issue we encounter when generating our eggs is what texture to use. So as not to unduly restrict ourselves, I'm going to start by defining a generic `Texture` interface that can be supported by a variety of different texture functions.

```java
public interface Texture {
  public RGB getTexel (double i, double j);
}
```

An implementation of this interface must provide a method that returns the color of the texture element at the specified texture coordinate *(i,j)*. The texture coordinate will be a value *0.0 <= i,j < 1.0*, meaning that a texture function will define a texture over a square domain parameterized by *i* and *j*. The texture function should, however, accept values outside this range, clipping, replicating, or extending the texture as appropriate. The value returned from the `getTexel()` method is of type `RGB`:

```java
public class RGB {
  double r, g, b;

  public RGB (double r, double g, double b) {
    this.r = r;
    this.g = g;
    this.b = b;
  }

  public RGB (int rgb) {
    r = (double) (rgb >> 16 & 0xff) / 255;
    g = (double) (rgb >> 8 & 0xff) / 255;
    b = (double) (rgb >> 0 & 0xff) / 255;
  }

  public void scale (double scale) {
    r *= scale;
    g *= scale;
    b *= scale;
  }

  public void add (RGB texel) {
    r += texel.r;
    g += texel.g;
    b += texel.b;
  }

  public int toRGB () {
    return 0xff000000 |
      (int) (r * 255.99) << 16 |
      (int) (g * 255.99) << 8 |
      (int) (b * 255.99) << 0;
  }
}
```

Our `RGB` class is similar to the standard `Color` class, except that it stores RGB colors in double precision; the color components should have values *0.0 <= r,g,b <= 1.0*. We also provide some helper methods to convert, scale, and combine colors.

**An image-based texture**

This class implements a texture that uses an `Image` object as a source. We can use this class to map images onto a sphere by first converting the image into an array of integer RGB values (using the `java.awt.image.PixelGrabber` class) and then using this array to calculate texel values (as *pixel* is to picture element, so *texel* is to texture element).

```java
import java.awt.*;
import java.awt.image.*;

public class ImageTexture implements Texture {
  int[] imagePixels;
  int imageWidth, imageHeight;

  public ImageTexture (Image image, int width, int height)
      throws InterruptedException {
    PixelGrabber grabber =
      new PixelGrabber (image, 0, 0, width, height, true);
    if (!grabber.grabPixels ())
      throw new IllegalArgumentException ("Invalid image; pixel grab failed.");
    imagePixels = (int[]) grabber.getPixels ();
    imageWidth = grabber.getWidth ();
    imageHeight = grabber.getHeight ();
  }

  public RGB getTexel (double i, double j) {
    return new RGB (imagePixels[(int) (i * imageWidth % imageWidth) +
      imageWidth * (int) (j * imageHeight % imageHeight)]);
  }
}
```

Note that we simply convert the texture coordinate into an integer location on the surface of the image and then return the image color at that exact point. If the texture is sampled at a greater or lower frequency than the original image, the result will be jagged as pixels are skipped or replicated. Properly addressing this problem requires us to interpolate between colors of the image; however, such a task is difficult to do properly when we don't know where the texture will finally be displayed. Ideally, we would determine the amount of texture area covered by a single pixel on screen, and would then sample this amount of the actual texture. This approach is not practical, however, so we will not attempt to address it; supersampling, which we will examine later, is a much simpler way to reduce the effects of the problem.
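One partial remedy worth experimenting with is bilinear filtering: instead of snapping to the nearest pixel, blend the four pixels surrounding the sample point. The following helper is a sketch of mine, not part of the applet source; the class and method names are invented for illustration.

```java
// Sketch: bilinear filtering for an image texture. Blends the four
// pixels surrounding the sample point, weighted by proximity.
public class BilinearSample {
  // pixels is a packed ARGB image of size w by h; i and j are texture
  // coordinates in [0, 1). Returns a packed ARGB texel.
  public static int bilinearTexel (int[] pixels, int w, int h,
                                   double i, double j) {
    double x = i * w, y = j * h;
    int x0 = (int) Math.floor (x) % w, y0 = (int) Math.floor (y) % h;
    int x1 = (x0 + 1) % w, y1 = (y0 + 1) % h; // wrap at the edges
    double fx = x - Math.floor (x), fy = y - Math.floor (y);
    int rgb = 0;
    for (int shift = 16; shift >= 0; shift -= 8) { // r, g, b channels
      double c00 = pixels[x0 + w * y0] >> shift & 0xff;
      double c10 = pixels[x1 + w * y0] >> shift & 0xff;
      double c01 = pixels[x0 + w * y1] >> shift & 0xff;
      double c11 = pixels[x1 + w * y1] >> shift & 0xff;
      double c = c00 * (1 - fx) * (1 - fy) + c10 * fx * (1 - fy) +
                 c01 * (1 - fx) * fy + c11 * fx * fy;
      rgb = rgb << 8 | (int) Math.round (c);
    }
    return 0xff000000 | rgb;
  }
}
```

Sampling halfway between a black and a white pixel returns mid-gray, which is exactly the smoothing that nearest-pixel lookup lacks.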

**An algorithmic texture**

We may wish to experiment with an alternate texture, a completely artificial mathematical function. We could go with something like the Mandelbrot set or a Lyapunov function, but we'll instead go with a texture computed from the `sin()` function (described in Pickover's book).

```java
public class SineTexture implements Texture {
  double multiplier, scale;
  int modFunction;

  public SineTexture (double multiplier, double scale, int modFunction) {
    this.multiplier = multiplier;
    this.scale = scale;
    this.modFunction = modFunction;
  }

  public RGB getTexel (double i, double j) {
    i *= multiplier;
    j *= multiplier;
    double f = scale * (Math.sin (i) + Math.sin (j));
    return ((int) f % modFunction == 0) ?
      new RGB (1.0, 0.0, 0.0) : new RGB (0.0, 1.0, 0.0);
  }
}
```

This class computes a simple sinusoidal function of *(i,j)*. If the result, modulo a certain value, is *0*, it returns bright red; otherwise, it returns bright green. The function uses three constants that control details of the resulting texture.
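As an aside, any `Texture` can be sampled over the unit square into a packed ARGB pixel array, which is just what we will later hand to `java.awt.image.MemoryImageSource`. Here is a minimal, self-contained sketch of mine; the tiny `Texture` and `RGB` definitions simply mirror the article's so that the example stands alone.

```java
// Minimal stand-ins mirroring the article's Texture and RGB classes.
interface Texture { RGB getTexel (double i, double j); }

class RGB {
  double r, g, b;
  RGB (double r, double g, double b) { this.r = r; this.g = g; this.b = b; }
  int toRGB () {
    return 0xff000000 | (int) (r * 255.99) << 16 |
      (int) (g * 255.99) << 8 | (int) (b * 255.99);
  }
}

// Sketch: sample a Texture over the unit square into an ARGB buffer.
public class TextureRender {
  public static int[] render (Texture texture, int width, int height) {
    int[] pixels = new int[width * height];
    for (int y = 0; y < height; ++ y)
      for (int x = 0; x < width; ++ x)
        pixels[x + y * width] = texture.getTexel
          ((double) x / width, (double) y / height).toRGB ();
    return pixels;
  }
}
```

The resulting array can be displayed via `createImage (new MemoryImageSource (width, height, pixels, 0, width))`.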

## Mapping from sphere-space to texture-space

Now that we have a texture function, we must decide how to map a square, flat texture onto the closed surface of a sphere; or in other words, how to transform a point on the surface of the sphere into an *(i,j)* texture coordinate.

An obvious transformation is simply from longitude to *i* and latitude to *j*. The primary problem with this solution is that near the poles, the *i* coordinate will be greatly compressed: Walking around the earth at latitude 89 degrees North is a lot quicker than at latitude 0. In other words, our uniform flat texture will be squashed at the poles.

A nice solution to this problem is to use a 3D texture function, so instead of a function of *(i,j)* it becomes a function of *(i,j,k)*. We can then simply sample the texture at points on the surface of the sphere within three dimensions. In this way, we get an even application of our texture. A fine solution, except that we are limited to textures that can be expressed in three dimensions. While this is possible for the algorithmic texture described above, it is not generally a valid option; most textures are only described by two variables (for example, an image or the Mandelbrot set) and cannot be mapped into three dimensions.

For the purposes of this article, we will stick with 2D texture functions, which means that we will have to map longitude and latitude directly to *(i,j)*.
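This direct mapping is a one-liner in each coordinate. The sketch below is mine (the class name and the choice of coordinate ranges are assumptions): latitude runs from *-pi/2* to *pi/2* and longitude from *-pi* to *pi*, and both are rescaled into the *0.0 <= i,j < 1.0* domain our `Texture` interface expects.

```java
// Sketch: map spherical coordinates directly to texture coordinates.
public class SphereMap {
  // latitude in [-pi/2, pi/2], longitude in [-pi, pi); returns {i, j},
  // each in [0, 1), suitable for Texture.getTexel().
  public static double[] toTexture (double latitude, double longitude) {
    double i = (longitude + Math.PI) / (2 * Math.PI);
    double j = (latitude + Math.PI / 2) / Math.PI;
    return new double[] { i, j };
  }
}
```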

## Mapping from screen-space to sphere-space

Now that we can transform from points on the surface of a sphere into points on the texture function, we need to be able to transform from points on the screen into points on the surface of a sphere. We're going to keep things simple here, bypassing many of the complexities of a fully-generalized solution.

One simple way to draw our sphere is to iterate over a large number of latitude and longitude values, computing the corresponding texture values and then placing these on the screen. Transforming from sphere space to screen space is a simple matter of sines and cosines.

*y = r * sin(latitude)*

*x = r * cos(latitude) * cos(longitude)*

The problem with this type of transformation is that points are concentrated at the edge of the sphere compared with the center. This can lead to a particular problem: the center of our sphere may have cracks if we use too few latitude/longitude values, but if there are no cracks in the center we will be computing an excess number of values at the edge.

A much better solution is to simply iterate over points in our image, determining whether any point actually hits the sphere, and if it does, determining the corresponding latitude and longitude on the surface of the sphere. This provides us with sphere coordinates that we can use to compute a texture value. For such computations, however, we need some real mathematics.

**An egg without perspective**

Let us pretend, for a moment, that we're Bambuti Pygmies. For us, perspective doesn't exist. Having lived all our lives in a rainforest, we've never seen anything far away. As a result, things don't get smaller as they recede, so our task is quite simple: We iterate over points *(x,y)* on the screen and compute where on the sphere our point hits.

Let's look at what happens.

A point hits the sphere if *x^{2} + y^{2} < r^{2}*. If it hits the sphere, its latitude is

*latitude = arcsin(y / r)*.

The sphere's radius at this latitude is *r * cos(latitude)*. Therefore, the longitude is *longitude = arccos(x / (r * cos(latitude)))*.

Got all that?
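In code, the orthographic inverse mapping is just those two formulas plus the hit test. This helper is a sketch of mine, not part of the applet source:

```java
// Sketch of the no-perspective case: for a screen point (x, y) and a
// sphere of radius r centered on the view axis, recover latitude and
// longitude, or return null if the point misses the sphere.
public class OrthoSphere {
  public static double[] hit (double x, double y, double r) {
    if (x * x + y * y >= r * r)
      return null; // outside the sphere's silhouette
    double latitude = Math.asin (y / r);
    double longitude = Math.acos (x / (r * Math.cos (latitude)));
    return new double[] { latitude, longitude };
  }
}
```

Note that `Math.acos` only returns values in *[0, pi]*, so this sketch covers one hemisphere of longitude, which is all we can see anyway.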

Unfortunately, owing to the Western military industrial machine that controls our lives, all the rainforests have been cut down to grow cows and coffee and to print paper magazines (yay *JavaWorld*!), and so we *can* see things at a distance, and they *do* get smaller as they recede. If we draw our egg without perspective it looks all wrong. Being accustomed to looking at real Easter eggs, we're not fooled by this phony.

**An egg with perspective**

Perspective is our enemy. It complicates things. No longer can we simply take a point on the screen, orthogonally map it to the sphere, and perform an inverse transformation. We must now take the location of the viewer into account, fire a ray from their eye through the screen, and see where on the sphere it lands.

The basic setup for adding perspective is to choose a location for the viewer (we will assume just one eye) and pretend that the computer display is a window in space (every pixel is a point on the window). To see what color any given pixel is, we fire a ray from the viewer's eye through the pixel's location on the window and into our scene. We then compute which object this ray hits, and where, and from this we compute the pixel's color. We define an interface `Obj` that describes this simple behavior:

```java
public interface Obj {
  public RGB getIntersection (Vec ray);
}
```

This interface provides a method that should return the color of the object where the specified ray strikes it. Note that this is very simplistic; we don't provide intersection or distance tests or any of the other features that would be needed by a full raytracer.

```java
public class Vec {
  double x, y, z;

  public Vec (double x, double y, double z) {
    this.x = x;
    this.y = y;
    this.z = z;
  }
}
```

The `Vec` class describes a vector in space -- a simple triple *(x,y,z)* that represents a direction in three dimensions. We are assuming, for this article, that the viewer is at the origin *(0,0,0)*.

For our Easter egg, we will be computing the intersection between a ray and a sphere. The mathematics of this computation are fairly straightforward: The definition of the surface of a sphere with radius *r* and center *(x_{c},y_{c},z_{c})* is:

*(x - x_{c})^{2} + (y - y_{c})^{2} + (z - z_{c})^{2} = r^{2}*

The definition of a ray (line) starting from point *(x_{0},y_{0},z_{0})* and passing through point *(x_{1},y_{1},z_{1})* is:

*x = x_{0} + (x_{1} - x_{0}) * t*

*y = y_{0} + (y_{1} - y_{0}) * t*

*z = z_{0} + (z_{1} - z_{0}) * t*

To simplify things, our ray will start at the origin *(x_{0}=y_{0}=z_{0}=0)* and the sphere itself will be located along the *Z* axis *(x_{c}=y_{c}=0)*. Now the definition of our sphere is:

*x^{2} + y^{2} + (z - z_{c})^{2} = r^{2}*

And our ray:

*x = x_{1} * t*

*y = y_{1} * t*

*z = z_{1} * t*

To determine the intersection, we substitute the ray into the sphere definition.

*x_{1}^{2} * t^{2} + y_{1}^{2} * t^{2} + (z_{1} * t - z_{c})^{2} = r^{2}*

*t^{2} * (x_{1}^{2} + y_{1}^{2} + z_{1}^{2}) - t * 2 * z_{1} * z_{c} + z_{c}^{2} - r^{2} = 0*

This is a simple quadratic equation, the solution to which is:

*a = x_{1}^{2} + y_{1}^{2} + z_{1}^{2}*

*b = -2 * z_{c} * z_{1}*

*c = z_{c}^{2} - r^{2}*

*(t_{0},t_{1}) = [-b +- sqrt(b^{2} - 4 * a * c)] / (2 * a)*

Here, *(t_{0},t_{1})* are the two values of *t* for which the ray intersects the sphere. Note that *b^{2} - 4 * a * c < 0* if the ray does not hit the sphere.

Once we have values for *(t_{0},t_{1})*, we determine which point is closer to the viewer (the front side of the sphere) and then compute the location in space of this intersection. We now have an *(x,y,z)* location on the sphere that we can transform back into latitude and longitude as before, and from this we can compute the appropriate texture color.
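The derivation above codes up directly. The following helper is a sketch of mine, not part of the applet source; it returns the nearer intersection parameter *t* for a ray from the origin through *(x_{1},y_{1},z_{1})*, or NaN on a miss.

```java
// Sketch: solve the quadratic derived above for a ray from the origin
// through (x1, y1, z1) against a sphere of radius r centered at (0, 0, zc).
public class RaySphere {
  public static double intersect (double x1, double y1, double z1,
                                  double zc, double r) {
    double a = x1 * x1 + y1 * y1 + z1 * z1;
    double b = -2 * zc * z1;
    double c = zc * zc - r * r;
    double disc = b * b - 4 * a * c;
    if (disc < 0)
      return Double.NaN; // ray misses the sphere
    double sqrt = Math.sqrt (disc);
    double t0 = (-b - sqrt) / (2 * a); // nearer root: front of the sphere
    double t1 = (-b + sqrt) / (2 * a); // farther root: back of the sphere
    return (t0 > 0) ? t0 : t1;
  }
}
```

With *t* in hand, the intersection point is simply *(x_{1} * t, y_{1} * t, z_{1} * t)*, ready for the latitude/longitude transformation.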