Letters to the Editor

'The myth of code-centricity'

Jack Harich

What is the declarative knowledge?

Jack,

First of all, great article! I completely agree with your views and have been trying to come to such a model myself for years. My main questions arise from the DK (declarative knowledge) -- and the inpin/outpin model in general. The fundamental question: What is the DK? That is, what is the fundamental language of SI (system imagery) in your model? It seems that XML is not enough on its own (even with schemas) because there must be a common language that every service, container, part, and so on, understands. I've been involved with software agent technology for some time as well, and even in that space, there is difficulty in coming to agreement on the message-passing language. Maybe you have some insights you could share. I'd love to have an SI process myself!

Scott A. Schell

Scott, Declarative knowledge is what we want to do, so to speak. A reusable part needs DK to vary its mission per reuse case. DK is given to parts in the form of nested key values. A key is a String, and a value is a String, a number, an image, even another set of key values. A value should not be code. Presently, XML serves as a good way to store and transport DK. But in SI (and the sample implementation, Visual Circuit Board (VCB)), parts should not receive XML directly. They should receive a higher-level representation, namely nested key values. This way, you aren't locked into XML, because there will be better XMLs, or ones more appropriate for a domain. DK is SI's fundamental language in my model, but it's stored and transported in whatever an assembly tool prefers, such as XML. You write:

It seems that XML is not enough on its own (even with schemas) because there must be a common language that every service, container, part, and so on, understands.

DK, parts, and links can accomplish everything code does. If you haven't written a DK-driven or configurable part, this might not be easy to see. The common language between parts is the data they send and receive, which is akin to class method calls. A link represents one object calling a method on another object (or itself). The data the parts send and receive are the method's arguments. You write:

There is difficulty in coming to agreement on the message-passing language.

To resolve that, introduce a totally domain-neutral way to communicate within your internal architecture, such as a hash-table-like message sent and received on pins. This makes standards choices much easier. You can also put a facade over the actual low-level protocol -- such as HTTP, FTP, RMI (Remote Method Invocation), sockets, and so on -- to insulate your investment from the "message-passing language" decision. No matter what external mechanism you pick, it will eventually change, or you will need multiple mechanisms to cope with different nodes. A facade can remain the same or evolve independently of the external mechanism, as you see fit. In the case of SI, the facade is in the engine. It can be pluggable, allowing you to use different facades. This has been done in VCB. Jack Harich
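
To make the nested key-values idea and the hash-table-like pin message concrete, here is a minimal sketch in plain Java; the names GreeterPart, configure(), and receive() are hypothetical illustrations, not SI or VCB APIs.

import java.util.HashMap;
import java.util.Map;

// Hypothetical DK-driven part; none of these names come from SI or VCB.
public class GreeterPart {
    private Map<String, Object> dk;   // nested key values: String keys, mixed values

    // The assembly tool hands the part its DK at configuration time.
    public void configure(Map<String, Object> dk) { this.dk = dk; }

    // Pins exchange hash-table-like messages, keeping the part domain neutral.
    public Map<String, Object> receive(Map<String, Object> inMessage) {
        String name = (String) inMessage.get("name");
        String greeting = (String) dk.get("greeting");   // varies per reuse case
        Map<String, Object> outMessage = new HashMap<String, Object>();
        outMessage.put("text", greeting + ", " + name + "!");
        return outMessage;
    }

    public static void main(String[] args) {
        Map<String, Object> dk = new HashMap<String, Object>();
        dk.put("greeting", "Welcome");                   // a String value
        dk.put("retries", Integer.valueOf(3));           // a number value
        Map<String, Object> style = new HashMap<String, Object>();
        style.put("font", "sans-serif");
        dk.put("style", style);                          // another set of key values

        GreeterPart part = new GreeterPart();
        part.configure(dk);

        Map<String, Object> in = new HashMap<String, Object>();
        in.put("name", "Scott");
        System.out.println(part.receive(in).get("text"));  // prints: Welcome, Scott!
    }
}

Whether the DK arrives as XML, a properties file, or something else is then a choice the assembly tool or facade makes; the part only ever sees nested key values.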

Is reusability closer to reality than we think?

Jack,

As I started to read the article, I felt you were describing some kind of utopia. But the more I read, the more I thought, "This isn't so far-fetched; why couldn't this happen? In fact, it might already have."

We are already beginning to see reusability with Web-based services and frameworks such as the Java 2 Platform, Enterprise Edition (J2EE). The Apache Software Foundation has a subproject called Commons, which is a repository of reusable components and frameworks, including a Workflow framework.

I definitely see reusability as the next step, and XML will have much to do with that step. However, in order to achieve this reusability, you must master the problem domain you're working with.

David Snyder

David, You provided many good insights and much helpful feedback. I'm glad to hear about Apache's project; I was unaware of it. I agree that XML will affect reusability. As I see it, XML has three main uses: as a better HTML, as a standard for data exchange, and as declarative knowledge. Better HTML was XML's first main use. Many industry standards and Microsoft's XP use XML as a data-exchange standard, its second use. The third use, DK, is what SI would use XML for. In the long run, DK is XML's most productive use. You write:

However, in order to achieve this reusability, you must master the problem domain you're working with.

In general, yes. But I expect that once SI is mature, expert SI users will build systems and composite parts as fast as they can learn about a new domain, which is currently unheard of. Jack Harich

We must focus on the physical world, not the business world

Jack,

This is the first time I've read an article that explicitly addresses tool congruence. I was disappointed to see that it mostly dealt with databases and business applications, and that it failed to mention HyperCard, SuperCard, ToolBook, Visio, Metropolis, or ScriptX. All these applications attempt to reuse code modules by modeling the problem domain for the user. I agree with most of your thoughts, but generalized business applications are the wrong place to start. We need to look for ways to build software for problem domains in the physical world, or at least where the business world meets the physical world.

Mark

Mark, Thanks for your thoughts. My intent was to deal with all domains. Perhaps the examples gave the wrong impression. You're quite right; there are many fine examples of code reuse where the user models the problem domain. But to my knowledge, they all have shortcomings that prevent them from becoming mainstream. The article on SI attempts to show a higher level that would allow such efforts to truly become mainstream. This will be a long and rocky road. It helps to have a roadmap that allows you to see how it looks from 30,000 feet. Jack Harich

Sensate developers versus perceptive developers

Jack,

In my experience, the major issue surrounding code-centricity seems to be that many developers are sensate rather than perceptive, and so prefer line-by-line text to a visual presentation. This factor leads people to prefer full-source-code views in their Java IDEs (integrated development environments) to the browser-view style IDEs. A good example of this is the difference between the coding environment presented by VisualAge for Java and all other full-source-view style IDEs (e.g., JBuilder). For us Smalltalkers, the browser and object inspectors seem much more natural than the traditional full-source-view style found in many Java IDEs.

In the same way, most developers do not make good object modelers. It seems that most developers are interested in the code layout and the idiomatic details thereof. So only a few happily and successfully become good object modelers and architects. Modeling and architecture are much more about visual/perceptive reasoning than line-by-line coding knowledge. This visual-model way of thinking about architecture is at odds with many shops that think they do real architecture, but instead focus on code-level mechanisms and ignore overall structure and architectural design patterns.

These tendencies and preferences only hint at the barriers to getting developers to accept the sort of paradigm shift your article describes. Sometimes, you can't teach a sensate dog new perceptive tricks -- you just have to wait for some fresh minds to come along. That has little to do with developers' ages and more to do with their personality traits (e.g., sensate versus perceptive).

The visual tools that you describe in the article resemble Prograph (http://www.pictorius.com), a purely visual programming language that was certainly ahead of its time. I started using Prograph in 1989, but when I would show it to developers, they just couldn't grok it. I believe the original company producing Prograph now just provides Web development consulting and uses Prograph as its underlying tool.

Prograph uses a visual metaphor in which an icon represents a method. Input objects flow to input pins on the icon, and output objects flow from output pins on the icon. It features visual decorations to indicate lists, logical conditions, and so on. The method icons are infinitely nestable so that any level of complexity can be expressed in one. Classes, instance variables, and methods are all constructed in a visual environment.

When I last used Prograph, the visual construction of user interfaces hadn't been perfected yet, but that wouldn't have been hard to fix.

Prograph was certainly an early forerunner of the approach that you describe, but obviously much more work has been done since then, as mentioned in your article.

Pat Podenski

Pat, You provide keen observations. I did look at Prograph years ago. It had some fine ideas, but appeared to be low level -- more at the code/logic level than at the part-assembly and configuration level. I like how you invoke the sensate mindset; its prevalence certainly indicates that we are still in the Stone Age when it comes to tool productivity. Developers are capable of moving to a visual parts-assembly style of work, but because we lack the tools and the underlying infrastructure to do it, we remain code-centric. For example, most developers can pick up whiteboard, high-level design skills fast, but the tools to do that and to integrate the results with coded reusables, or with code itself, are pathetic to nonexistent. Ninety-nine percent of developers are tool users, not tool builders. I wonder how some of Prograph's architects would react to the article. Jack Harich

'Accelerate your RMI programming'

Ashok Mathew and Mark Roulo

How do you use externalization?

Mark,

Because RMI is central to many applications I work on, I found your article useful and interesting. However, I do not understand externalization and how to use it. Please provide a more detailed example of using externalization with RMI and compare it with an example that uses serialization.

Ryan Wexler

Ryan, Both serialization and externalization are mechanisms to turn objects (and graphs of objects) into byte arrays. Serialization is easier to code, while externalization gives you more power. RMI uses both in exactly the same way, so you won't find any difference when coding RMI based on whether arguments and return values are serializable or externalizable. Stuart Halloway's "Improving Serialization Performance with Externalizable" (Java Developer Connection, 2000) goes into more detail on serialization and externalization and shows more externalization examples beyond the four in our JavaWorld article: http://developer.java.sun.com/developer/TechTips/2000/tt0425.html. The Sun documentation on serialization also has an examples section with some externalization examples: http://java.sun.com/j2se/1.3/docs/guide/serialization/index.html. Mark Roulo
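
For readers who want the comparison Ryan asked for, here is a minimal sketch showing the same data written both ways; SerializationDemo, PointSer, and PointExt are hypothetical names, and either class could equally be passed as an RMI argument or return value.

import java.io.*;

public class SerializationDemo {

    // Default serialization: the runtime discovers the fields via reflection.
    public static class PointSer implements Serializable {
        int x, y;
        public PointSer(int x, int y) { this.x = x; this.y = y; }
    }

    // Externalization: the class reads and writes its own fields explicitly,
    // trading convenience for control over exactly what bytes are written.
    public static class PointExt implements Externalizable {
        int x, y;
        public PointExt() {}                     // required: public no-arg constructor
        public PointExt(int x, int y) { this.x = x; this.y = y; }
        public void writeExternal(ObjectOutput out) throws IOException {
            out.writeInt(x);
            out.writeInt(y);
        }
        public void readExternal(ObjectInput in) throws IOException {
            x = in.readInt();
            y = in.readInt();
        }
    }

    // Both kinds are written to an ObjectOutputStream (and therefore sent
    // over RMI) in exactly the same way.
    public static void main(String[] args) throws IOException {
        ObjectOutputStream out =
            new ObjectOutputStream(new ByteArrayOutputStream());
        out.writeObject(new PointSer(1, 2));
        out.writeObject(new PointExt(3, 4));
        out.close();
    }
}

Externalizable gives up reflective field discovery in exchange for explicit control of the byte stream, which is typically where its performance advantage comes from.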

Java 101

'Object-oriented language basics, Part 6: Interfaces'

Jeff Friesen

What about empty interfaces?

Jeff,

Thanks for the candid explanation of the whys and hows of interfaces. I would love to know more about the usefulness of empty interfaces. The concept just does not sit well in my mind when I think of an interface as a set of method declarations that establishes a contract for any implementation. In the case of empty interfaces, I somehow get the notion of a missing contract. Of course, the implements clause does tie the two together. A little more light on the subject would be great.

Adarsh

Adarsh, To those new to Java, it seems that interfaces are one of the more troubling aspects of the language. In fact, developers debate over whether or not constants should be declared in interfaces. I have stayed out of that debate, as I do not have an opinion at this time. To address your question, I'll start by asking the following question: does an empty interface represent a contract? In my opinion, the answer is yes. For example, consider the empty Serializable interface. Classes that wish to participate in the serialization process implement that interface -- either directly through an implements clause or indirectly (by extending a superclass that implements Serializable). If you examine the source code to ObjectOutputStream's writeObject() method, you will find code that uses the instanceof operator to see if an object is a class instance that either directly or indirectly implements the Serializable interface. Basically, Serializable represents the following contract: any object created from a class that either directly or indirectly implements Serializable has the potential to be serialized. I say potential because you can prevent subclass objects from being serialized. That contract is fulfilled through the implements Serializable clause and verified through writeObject()'s instanceof Serializable logic.

In closing, classes represent entities capable of doing things, and interfaces represent contracts for one or more behaviors common to diverse entity groups. Many interfaces declare one or more methods that correspond to the various behaviors as defined by the interface's contract. If an interface represents a contract that defines a single behavior, the behavior can be expressed either as a single method signature or by leaving the interface empty. In my opinion, Java's designers might have implemented Serializable this way:

interface Serializable
{
     void setSerializable ();
}

However, this is overkill, and developers might forget to call setSerializable() -- also, from what part of their code would they call that method (most likely, the constructor)? To sum up, every interface represents a contract, whether or not that interface declares methods. Jeff Friesen
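
Jeff's point can also be seen in a small, self-contained example; Auditable, Invoice, and AuditLog are hypothetical names, used only to mirror the Serializable/writeObject() relationship he describes.

// Hypothetical empty (marker) interface, analogous to java.io.Serializable.
interface Auditable {
}

class Invoice implements Auditable {
}

class AuditLog {
    // The contract is verified the same way ObjectOutputStream's writeObject()
    // checks for Serializable: with an instanceof test.
    static void record(Object o) {
        if (!(o instanceof Auditable)) {
            throw new IllegalArgumentException(
                o.getClass().getName() + " is not Auditable");
        }
        System.out.println("audited: " + o.getClass().getName());
    }

    public static void main(String[] args) {
        record(new Invoice());    // accepted: Invoice implements the marker
        record("plain string");   // rejected: String does not, so this throws
    }
}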

'Master Merlin's new I/O classes'

Michael T. Nygard

Will the new I/O classes improve RMI performance?

Michael,

Do you think these new I/O classes can increase RMI (Remote Method Invocation) performance? If so, do you know whether any plans for that exist?

Tony Reix

Tony, That's an interesting idea. I hadn't given it any thought before this, but at first glance, buffer pooling would seem to be the first place to start optimizing RMI. Preconstructing buffers for common RMI operations would be a good start. Reusing a pool of fixed-size buffers would also be an excellent approach. I don't think asynchronous I/O would buy much, since RMI has strict blocking-call semantics. Michael T. Nygard
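
As a rough illustration of the fixed-size buffer pool Michael mentions, here is a minimal sketch using the new java.nio.ByteBuffer class; BufferPool and its methods are hypothetical and not part of RMI or java.nio.

import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical pool of preconstructed, fixed-size direct buffers.
public class BufferPool {
    private final Deque<ByteBuffer> free = new ArrayDeque<ByteBuffer>();
    private final int bufferSize;

    public BufferPool(int count, int bufferSize) {
        this.bufferSize = bufferSize;
        for (int i = 0; i < count; i++) {
            free.push(ByteBuffer.allocateDirect(bufferSize));  // allocated once, up front
        }
    }

    // Reuse a pooled buffer instead of allocating a new one per call.
    public synchronized ByteBuffer acquire() {
        return free.isEmpty() ? ByteBuffer.allocateDirect(bufferSize) : free.pop();
    }

    public synchronized void release(ByteBuffer buf) {
        buf.clear();      // reset position and limit for the next user
        free.push(buf);
    }
}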

'Test infect your Enterprise JavaBeans'

Michael T. Nygard and Tracie Karsjens

How does unit testing differ from Cactus?

Michael,

What is the main difference between your framework and Cactus? It seems that in Cactus, you cannot test EJBs (Enterprise JavaBeans) that use container services (which is a big limitation). Does your framework have this problem too (it doesn't seem to)? The main advantage I see with Cactus is that you can run it from the command line.

Navjeet

Navjeet, The biggest difference between the technique demonstrated in the article and Cactus is scope. The servlet we developed in the article is limited in its intent, whereas Cactus aims to provide a much more complete framework for unit testing Java 2 Platform, Enterprise Edition (J2EE) services. It will ultimately allow for in-container or mock-object testing. I like the direction that Cactus is going. In fact, if you are looking for a more complete framework, I would say Cactus is the way to go. Michael T. Nygard
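
For readers comparing the two approaches, here is a minimal sketch of the kind of in-container test-runner servlet the article's technique implies; TestRunnerServlet and MyBeanTest are hypothetical names, and this is not the article's actual code.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import junit.framework.TestResult;
import junit.framework.TestSuite;

// Hypothetical test-runner servlet: because it runs inside the container,
// the JUnit tests it drives can reach EJBs through the container's own
// JNDI context and other container services.
public class TestRunnerServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        TestResult result = new TestResult();
        new TestSuite(MyBeanTest.class).run(result);  // MyBeanTest: a JUnit 3-style suite
        res.setContentType("text/plain");
        PrintWriter out = res.getWriter();
        out.println("Tests run: " + result.runCount()
                    + ", failures: " + result.failureCount()
                    + ", errors: " + result.errorCount());
    }
}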