Letters to the Editor

'Automate dependency tracking'

Part 1: Design an information model for automatically discovering dependencies in interactive object-oriented applications

Part 2: Automatic dependency tracking discovers dependencies at runtime and updates the user interface

Part 3: Create a rich, interactive experience for your user

Michael L. Perry

Why so many information layers?

Michael,

On the Visitor pattern implementation: If you make a standard of <classname> using "visit" + <classname>, you can dynamically complete the implementation in the base classes using reflection. For example:

 public class Device
  {
     // IDeviceVisitor interface method name standard is "visit"+<classname>
     public void accept(IDeviceVisitor visitor) throws Exception
     {
        // Invoke the right method of the visitor based on the subclass name,
        // with this instance as the argument.
        Class[] typeParms = { this.getClass() };
        Object[] thisInList = { this };
        Method m = visitor.getClass().getMethod(
          "visit" + this.getClass().getName(), typeParms);
        m.invoke(visitor, thisInList);
     }
     ...
  }

In addition, you can create and easily extend all the dynamic sentries using aspect-oriented programming, such as AspectJ. The easiest way would be to use a naming convention for the dynamic fields' getters and setters. The sentry would then be added to all getter and setter methods that satisfy the naming standard. Thus, the system requires no changes in the IM (information model); updating and adding classes is not a problem.

Setting up the automatic dependencies is a good concept, and you show its power. However, you introduced an extra level of complexity by adding additional information layers, like the located level. That extension implementation is what requires the Visitor pattern, since the class- and object-instance networks closely match in each level. However, this complexity is orthogonal to the dependencies concept and a bit hard to grasp. I suggest leaving that out since it doesn't affect the dependencies.

For added information layers, I would use an extensible property mechanism rather than shadowed networks. You can then use the IM level's dynamic proxies to standardize access to base or extended attributes. But that really is also orthogonal to the automatic update dependencies concepts.
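A dynamic proxy can sketch this extensible-property idea. Everything below (the Device interface, the property names) is invented for illustration, not taken from the articles; the point is that extended attributes live in a map behind a standardized accessor, so adding an information layer requires no new classes:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.HashMap;
import java.util.Map;

public class PropertyProxy {
    // Hypothetical IM interface: one base attribute plus open-ended extras
    public interface Device {
        String getName();
        Object getProperty(String key);
        void setProperty(String key, Object value);
    }

    public static Device newDevice(final String name) {
        final Map<Object, Object> properties = new HashMap<Object, Object>();
        return (Device) Proxy.newProxyInstance(
            Device.class.getClassLoader(),
            new Class[] { Device.class },
            new InvocationHandler() {
                public Object invoke(Object proxy, Method m, Object[] args) {
                    // Base attribute answered directly; anything else
                    // falls through to the extensible property map
                    if (m.getName().equals("getName")) return name;
                    if (m.getName().equals("getProperty")) return properties.get(args[0]);
                    if (m.getName().equals("setProperty")) {
                        properties.put(args[0], args[1]);
                        return null;
                    }
                    throw new UnsupportedOperationException(m.getName());
                }
            });
    }

    public static void main(String[] args) {
        Device d = newDevice("router-1");
        d.setProperty("location", "rack 12"); // extended attribute, no new class
        System.out.println(d.getName());
        System.out.println(d.getProperty("location"));
    }
}
```

The proxy standardizes access to base and extended attributes alike, which is exactly the tradeoff discussed here: the compiler can no longer check property names.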

This example also shows why people who build complex systems are fond of languages such as Common Lisp and CLOS (the Common Lisp Object System). These languages use the metaobject protocol, which can easily add the functionality you need without the extra information layers.

Mark

Mark, Thanks for the reference to AspectJ; I will definitely look into it. Your suggestion to use reflection certainly does reduce the amount of code needed to implement the Visitor pattern. However, I usually avoid using reflection, as it bypasses compile-time error checking. In this case, it would be possible to define a new derived class of the base class Device without also adding a method to the IDeviceVisitor interface. Without reflection, the compiler quickly reveals the error; but as you have written the code, the problem would not arise until runtime.

I gave a lot of thought to the insertion of the arguably extraneous location layer. I originally planned to make location part of the underlying IM for simplicity's sake. However, that would violate one of the first rules: that the IM is not designed for a UI (user interface), but for a client knowledgeable in the problem domain. Location is intended for UI consumption only. Your suggestion of using an extensible property mechanism certainly has merit. But again, this bypasses the compiler, so I opted not to use it. Perhaps a preprocessor such as the metaobject protocol you mention would allay those concerns.

The solution I chose to implement demonstrates the need for object recycling, which is core to automatic dependency tracking. If objects in the location layer don't recycle, the location of devices in the network is lost. I finally settled upon the location layer when I realized that it would help me make this point.

Thanks again for your comments and suggestions. I will research aspect-oriented programming. It sounds as if it has great merit, both in automatic dependency tracking and in other applications. Michael Perry
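The runtime-versus-compile-time tradeoff Michael describes can be demonstrated in a few self-contained lines. The class names here (Router, Printer, PrintVisitor) are invented for illustration; the visitor handles Router but not Printer, and the omission surfaces only when accept() actually runs:

```java
import java.lang.reflect.Method;

public class VisitorDemo {
    public interface IDeviceVisitor {
        void visitRouter(Router r);
        // No visitPrinter(Printer p): the compiler cannot catch this omission
    }

    public static abstract class Device {
        // Reflective dispatch: "visit" + simple class name
        public void accept(IDeviceVisitor visitor) {
            try {
                Method m = visitor.getClass().getMethod(
                    "visit" + getClass().getSimpleName(),
                    new Class[] { getClass() });
                m.invoke(visitor, new Object[] { this });
            } catch (Exception e) {
                // The missing-method error appears only here, at runtime
                throw new RuntimeException(e);
            }
        }
    }

    public static class Router extends Device { }
    public static class Printer extends Device { }

    public static class PrintVisitor implements IDeviceVisitor {
        public void visitRouter(Router r) { System.out.println("router"); }
    }

    // Returns true if the visitor could handle the device
    public static boolean tryVisit(Device d, IDeviceVisitor v) {
        try { d.accept(v); return true; }
        catch (RuntimeException e) { return false; }
    }

    public static void main(String[] args) {
        IDeviceVisitor v = new PrintVisitor();
        System.out.println(tryVisit(new Router(), v));
        System.out.println(tryVisit(new Printer(), v));
    }
}
```

With a hand-written Visitor interface listing every subclass, defining Printer without a matching visitPrinter() would fail to compile instead.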

'The myth of code-centricity'

Jack Harich

The software world doesn't revolve around code

Mr. Harich,

I believe you would do well to review the dictionary for the meaning of the word myth:

Main Entry: myth

Pronunciation: "mith"

Function: noun

Etymology: Greek mythos

Date: 1830

(1a) a usually traditional story of ostensibly historical events that serves to unfold part of the world view of a people or explain a practice, belief, or natural phenomenon; (1b) PARABLE, ALLEGORY; (2a) a popular belief or tradition that has grown up around something or someone; especially: one embodying the ideals and institutions of a society or segment of society (seduced by the American myth of individualism -- Orde Coombs); (2b) an unfounded or false notion; (3) a person or thing having only an imaginary or unverifiable existence; (4) the whole body of myths

Merriam-Webster's Collegiate Dictionary, Tenth Edition

(1993)

Your premise is that the industry "holds the false belief" that code is the center of the software world. I disagree. The majority of the industry's thinking doesn't revolve around code. The current emphasis is on components, modules, or simply objects.

Now one could say, "Well, components are simply code." But that would be akin to saying, well, a high-level language is just assembly, and assembly is just machine code, and machine code is just silicon, and so on. What are graphical tools? Simply complex systems of code used to generate code. Some aspect of code-centricity will always exist as long as we use current computational hardware technology. After all, processors execute instructions. To get away from computer code, we will need a fundamental paradigm shift, like using AI (artificial intelligence) as a technology base to build systems upon, for example. AI systems can be built using code, but they work in a fundamentally different way.

Extracting our tools out to further levels of abstraction to achieve congruence still leaves us with code at the base. I am sorry to say that I fail to see the existence of a myth here, only progress within our current processor-instruction-based paradigm.

I liked most of your article; I just took offense to the negative connotations associated with the word myth.

Tracy Eidenschink

Tracy, I still lean towards the software industry being code-centric, because software developers spend the majority of their time coding. Sorry the word myth rubbed you the wrong way. The industry is definitely on its way towards improvement with objects and visual tools, but these have yet to deliver the high impact promised. Jack Harich

Tips 'N Tricks

'Java Tip 118: Utilize the EjbProxy'

Gorsen Huang

Using the Reflection API with EJB: What's the tradeoff?

Gorsen,

Using the Java Reflection API with EJB (Enterprise JavaBeans) is a useful technique. It can remarkably improve developer productivity. I like it because it saves me a lot of development time. On the other hand, Java Reflection definitely adds overhead during runtime and therefore penalizes performance. How significant is this tradeoff?

Ken Wong

Ken, You are right, using Java Reflection will add overhead. But in some cases, this overhead might not be significant and can be ignored. Since invoking an EJB involves remote procedure calls, which usually take a relatively long time, this overhead is small by comparison. I have compared two ways of invoking an EJB, by Java Reflection and by directly calling the create() method; their differences are negligible (testing showed the differences average about 1 millisecond, or less than 2 percent). My simple testing compared the following two ways of invoking the EJB (repeating 50 times):

  1. Non-Reflection:
               YourEJBHome obHome = (YourEJBHome)PortableRemoteObject.narrow(home, YourEJBHome.class);
               YourEjb obj = (YourEjb)PortableRemoteObject.narrow(obHome.create(), YourEjb.class);

  2. Reflection:
               EJBHome obHome = (EJBHome)PortableRemoteObject.narrow(home, EJBHome.class);
               Method m = obHome.getClass().getDeclaredMethod("create", new Class[0]);
               YourEjb objRemote = (YourEjb)m.invoke(obHome, new Object[0]);

Generally, the process of design and programming is the process of making choices. Sometimes, the tradeoff is worth it, sometimes not. Gorsen Huang
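Gorsen's comparison can be approximated outside an application server. The sketch below uses invented stand-in classes (Home, Bean) rather than real EJB stubs, so the numbers show only the reflective overhead itself, not the remote-call cost that dwarfs it in practice:

```java
import java.lang.reflect.Method;

public class ReflectionOverhead {
    // Stand-ins for the home and remote interfaces (hypothetical names)
    public static class Bean { }
    public static class Home {
        public Bean create() { return new Bean(); }
    }

    // Time 50 direct create() calls, in nanoseconds
    public static long timeDirect(Home home) {
        long start = System.nanoTime();
        for (int i = 0; i < 50; i++) {
            Bean b = home.create();
        }
        return System.nanoTime() - start;
    }

    // Time 50 reflective create() calls: look up the Method, then invoke it
    public static long timeReflective(Home home) throws Exception {
        long start = System.nanoTime();
        for (int i = 0; i < 50; i++) {
            Method m = home.getClass().getMethod("create", new Class[0]);
            Bean b = (Bean) m.invoke(home, new Object[0]);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws Exception {
        Home home = new Home();
        System.out.println("direct:     " + timeDirect(home) + " ns");
        System.out.println("reflective: " + timeReflective(home) + " ns");
    }
}
```

In-process, reflection is measurably slower; against a remote EJB call measured in milliseconds, that difference becomes the "less than 2 percent" Gorsen reports.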

Java Q&A

'Double trouble'

Tony Sintes

Why did Sun omit parseDouble() before Java 1.2?

Tony,

The code's behavior is a byproduct of Sun's preference to "favor immutable objects" (Bloch, Gosling), similarly exhibited in classes like BigInteger and BigDecimal. The benefits are the following:

  • Ease of concurrent system development (immutable objects are inherently thread-safe)
  • A typical preference for composition over inheritance-based designs
  • The flexibility of static factory methods
  • Factory methods can limit the number of logically equivalent objects to the distinct ones

And finally ... immutability is simple to develop and simple to use as a client!

Lance

Lance, I'm not sure whether Sun leaving out the parseDouble() method before Java 1.2 is a byproduct of favoring immutable objects, since other 1.1 wrappers included similar parsing methods. That said, your explanation makes the most sense of any I have heard. Those that did include parsing methods in 1.1:

  • Byte has parseByte()
  • Integer has parseInt()
  • Long has parseLong()
  • Short has parseShort()

Those that didn't in 1.1:

  • Double
  • Float
  • BigDecimal
  • BigInteger

I dug up some old docs. Integer and Long have always had parsing methods, while Double and Float have always lacked them. Short and Byte were added in 1.1 with parsing methods. BigDecimal and BigInteger were added in 1.1, but lacked parsing methods.

I've had a few opportunities to speak with Joshua Bloch (though I'm sure he doesn't remember me). Joshua says that practices like favoring immutable objects were often learned over time. When Java was first developed, Sun didn't even know what the best practices were (thus, all kinds of ugliness in the older APIs, such as interface constants). Perhaps the parsing methods in Byte, Integer, Long, and Short are just historical anomalies; that is, they were created before Sun decided to favor immutability.

Now, the only thing that makes me uncomfortable with that theory is the fact that Float and Double do not have parsing methods in 1.0, while Integer and Long do. Why would Sun put the methods in Integer and Long but not Float and Double? Maybe the reason had something to do with the fact that Float and Double both dealt with floating-point. Maybe the classes' original implementations could not easily support direct parsing of a string through a static method. Tony Sintes
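For readers on both sides of the 1.2 boundary, the two routes to the same double are worth seeing side by side; parseOld shows the pre-1.2 workaround through the immutable wrapper:

```java
public class ParseDoubleDemo {
    // Pre-1.2: no Double.parseDouble(), so go through the immutable wrapper
    public static double parseOld(String s) {
        return Double.valueOf(s).doubleValue();
    }

    // 1.2 and later: the static parse method, matching parseInt()/parseLong()
    public static double parseNew(String s) {
        return Double.parseDouble(s);
    }

    public static void main(String[] args) {
        System.out.println(parseOld("3.14"));
        System.out.println(parseNew("3.14"));
    }
}
```

Both produce the same primitive value; parseDouble() merely skips the intermediate Double object.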

'The Lucene search engine: Powerful, flexible, and free'

Brian Goetz

Does Lucene search through database records?

Brian,

For my Java application, I am considering using the Lucene search engine to search records in a database table, specifically certain columns in the table, which may consist of text blocks. Is Lucene appropriate for this type of searching?

All the examples I find on Lucene (including your article) indicate that Lucene is primarily used for file searching, not searching through database records. However, your article makes some reference to database searching:

Data sources: Many search engines can only index files or Webpages. This handicaps applications where indexed data comes from a database, or where multiple virtual documents exist in a single file, such as a ZIP archive. Lucene allows developers to deliver the document to the indexer through a String or an InputStream, permitting the data source to be abstracted from the data. However, with this approach, the developer must supply the appropriate readers for the data.

I would greatly appreciate any resources or code examples you could provide on how to use Lucene to search database records.

Chris Bojrab

Chris, Lucene is appropriate for the type of searching you need. When you give a Lucene indexer a document to index, it expects a Reader. The Tokenizer/Analyzer will convert the character stream into a token stream. If you're indexing a file, you just use FileReader; if you're indexing a document that comes out of a database, you just create a class that implements Reader, which fetches the bytes out of the database (probably using the Java Database Connectivity's BLOB (binary large object) features). Lucene doesn't care where the data comes from -- it just draws characters from a Reader. You supply it with a Reader that knows where to get the characters. Brian Goetz
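Brian's advice reduces to a small class. This sketch is illustrative only: the JDBC plumbing is described in a comment (the column name "body" is invented), and the Lucene indexing call itself is omitted so the class compiles on its own:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

// A Reader over a database column's text. In a real application the String
// would come from JDBC, e.g. resultSet.getString("body"), or
// resultSet.getCharacterStream("body") for large CLOB columns. Lucene never
// knows the characters came from a table rather than a file.
public class DatabaseColumnReader extends Reader {
    private final Reader delegate;

    public DatabaseColumnReader(String columnText) {
        this.delegate = new StringReader(columnText);
    }

    // Reader's one abstract read method: delegate to the in-memory stream
    public int read(char[] cbuf, int off, int len) throws IOException {
        return delegate.read(cbuf, off, len);
    }

    public void close() throws IOException {
        delegate.close();
    }
}
```

You would then hand this Reader to Lucene exactly as you would a FileReader when building the document to index.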

'Internationalize your software'

Part 1. Learn how to develop software for the global marketplace

Part 2. Explore concepts of internationalization and localization; characters and character definition standards; locales; and resource bundles

Part 3. Explore dates, time zones, calendars, formatters, and international fonts

Jeff Friesen

Jeff presents a list of worldwide time zones

Jeff, For years I have been looking for a list of the worldwide standard time zones, with their complete names as used in the US -- for example, eastern standard time, Pacific standard time, and so on. My search brought me to this site where I found 33 time zones initialized. Could I get this list of time zones spelled out in full?

Arthur Jackson
