Build user interfaces for object-oriented systems, Part 2: The visual-proxy architecture

A scalable architecture for building object-oriented user interfaces

This installment of Java Toolbox presents the first of several examples of how to apply the object-oriented design principles I've outlined in previous articles to build a workable user interface. I'll show you a forms-based I/O system that implements an architecture I call "visual proxy." Subsequent articles will describe more complex systems, but I'll start with a simple one just to demonstrate the concepts.

"Build user interfaces for object-oriented systems": Read the whole series!

The problem

This month's column expands on the ideas presented in July's column, "Building user interfaces for object-oriented systems, Part 1," by actually implementing a simple system for doing forms-based I/O. The main principle that I'm demonstrating here is organizational: I'll show you how to organize your code in such a way that arbitrary forms can be constructed automatically, minimizing the coupling relationships between the GUI subsystem and the underlying logical model (the "business" objects, if you will).

One approach to this problem is to have all objects render themselves on the screen (with a draw_yourself(Graphics here) message). Though this sort of simplistic approach can work for trivial objects, it's not a realistic solution for several reasons. First, embedding graphical code in the methods of a model-level class is usually a bad idea for maintenance reasons: the graphical code ends up scattered throughout the methods of the object, and as a consequence even minor changes to the presentation become difficult to make. It's also very difficult for an object to display itself in more than one way. This month, I'll look at an approach that doesn't have the problems of the Model/View/Controller (MVC) architecture, but that accomplishes MVC's main goal: the decoupling of the abstraction (model) and presentation (view) layers.
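To make the objection concrete, here's a minimal sketch of the naive approach. The class, its fields, and the layout decisions are invented for illustration; they aren't taken from the system described later in this article.

    import java.awt.Graphics;

    // A naive "render yourself" design: presentation code lives inside
    // the model-level class.
    class Employee
    {   private String name   = "Fred";
        private double salary = 1000.0;

        public void draw_yourself( Graphics here )
        {   // Layout, fonts, and formatting decisions are all buried in the
            // business object. Changing the screen means editing (and
            // recompiling) the model, and there's no easy way to show the
            // same Employee two different ways.
            here.drawString( "Name: "   + name,   10, 20 );
            here.drawString( "Salary: " + salary, 10, 40 );
        }
    }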

Attributes

The main way to get around the problems caused by simply asking an object to display itself is to make a distinction between the entire object and its individual attributes. In the world of object-oriented design, an attribute falls under one of the following definitions:

  • A characteristic of some class of objects that serves to distinguish it from other classes of objects. For example, the notion of a salary distinguishes a class of objects (employees) from a broader class of objects (people in general). People without salaries are simply not employees -- they're volunteers, derelicts, CEOs of software startups, and the like, but they aren't employees. All employees, then, must have a salary, so salary is an attribute of an employee.
  • A characteristic of some object that serves to distinguish it from other objects. An employee's name, for example, serves to distinguish individual employees from each other. Two completely unrelated classes (person and microprocessor, for example) could have name attributes, so the class-based test above doesn't work, but the name does serve to distinguish one employee from another employee (or one microprocessor from another microprocessor).

The operations of a class are attributes as well: what an object can do certainly distinguishes one class of objects from another.

Attributes are not fields

An attribute is not the same thing as a field. A salary, for example, could be represented internally as a float, as a double, in binary-coded decimal, as an ASCII string, as an ASCII string holding the SQL needed to get the actual salary from a database, and so forth. It might not be stored at all, but could be computed at runtime from some other attribute, such as a U.S. Government salary "grade" (GS-3, for example). Even in the last case -- in which no internal storage occurs -- an employee still must have a salary. That is, attributes have nothing to do with implementation. If you get the attribute set wrong, you've blown it as a designer. Moreover, the odds of an attribute going away as the program evolves are minimal, though the implementation of that attribute could change radically, and a field that was once used in the implementation could indeed go away. (You may add an attribute, but that's typically not as difficult a maintenance problem as removing one.)
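As a hedged sketch of the point, both classes below have a salary attribute, but only the first stores it in a field. The class names and the grade table are invented, and the pay figures are placeholders.

    // Two implementations of the same "salary" attribute.
    class SalariedEmployee
    {   private double salary;              // the attribute is stored directly
        public  double salary(){ return salary; }
    }

    class GradedEmployee
    {   private int grade;                  // e.g., 3 for "GS-3"

        // Placeholder pay table, indexed by grade; real figures would
        // come from somewhere else entirely.
        private static final double[] pay_for_grade = { 0.0, 100.0, 200.0, 300.0 };

        // The salary attribute exists, but no salary field does: it's
        // computed on demand from another attribute (the grade).
        public double salary(){ return pay_for_grade[grade]; }
    }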

The main reason you need to identify the attributes is not to decide what the fields are; rather, you need them to decide which operations are relevant. If a class of objects has some attribute, then it's reasonable to ask the objects to do something with that attribute. To paraphrase the July Java Toolbox column: "Don't ask for the information that you need to do something; rather, ask the object that has that information to do the job for you." Object-oriented designers call this process delegation.
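Here's a minimal sketch of what delegation looks like in code; the method name print_salary_on() is my own invention for this example.

    import java.io.PrintWriter;

    class Employee
    {   private double salary;

        // Delegation: rather than handing out the raw salary so that some
        // other object can format it, the Employee does the work itself.
        // Callers never learn how the salary is represented.
        public void print_salary_on( PrintWriter out )
        {   out.printf( "%.2f%n", salary );
        }
    }

    // Typical use:
    //     employee.print_salary_on( new PrintWriter(System.out, true) );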

Display attributes, not objects

It's commonplace to need to display various attributes of an object in different combinations on different screens, but never to display all the attributes of a given object in one place. Sometimes you need to display only the name of an employee; sometimes you need to display the name, salary, and social security number of the employee. And all of these attributes are displayed in arbitrary places on a given screen or within a given form. That's why a generic display-yourself method (like an Employee.draw_yourself(Graphics g) message that causes an Employee object to print its state on some window) can't work in the general case. It's worth noting, though, that a given attribute will almost always be displayed in the same way. That is, a salary, when it's displayed, will always be shown as a fixed-point number with two decimal places. (Minor formatting variations can be handled by passing in a previously initialized NumberFormat object that controls the formatting.) In fact, this sort of consistency is usually considered essential in a well-designed UI. It's typically the mix of attributes that changes, not the way the attributes themselves are displayed.
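Here's one hedged sketch of what displaying attributes rather than whole objects might look like. The method name visual_proxy_for() and the use of AWT TextField widgets are assumptions made for this example, not the interface developed later in the article.

    import java.awt.Component;
    import java.awt.TextField;

    class Employee
    {   private String name   = "";
        private double salary = 0.0;

        // Hand back a ready-to-display widget for one named attribute.
        // The caller decides where the widget goes on the form; the
        // Employee decides how the attribute is rendered, so a salary
        // always looks the same wherever it appears.
        public Component visual_proxy_for( String attribute )
        {   if( attribute.equals("name") )
                return new TextField( name, 20 );
            if( attribute.equals("salary") )
                return new TextField( String.format("%.2f", salary), 10 );
            throw new IllegalArgumentException( "no such attribute: " + attribute );
        }
    }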

As I discussed in the July column, an MVC solution to this problem would be to separate the generation of the UI entirely from the object that's being displayed, with the controller building a view by accessing an object's state (either directly, by making the fields public, or indirectly, through accessor methods). Similarly, the MVC controller would intercept events coming in from the view and translate them into calls to set methods that modify an object's state. For reasons I discussed in the July column, this approach is simply not workable in an object-oriented system: the access to the data is too unrestricted, and the result is tight coupling between the model and the view side. The elimination of model/view coupling is the goal of MVC, but the coupling is inherent in the architecture. MVC can work admirably for small things like a checkbox, where the relationship between the model-level state (the boolean check state) and the view (the rendering of the checkbox) is so simple that the tight coupling is irrelevant. MVC doesn't scale well to the application level, however. The desired separation between model and view is essentially impossible to achieve using MVC.
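The coupling is easier to see in code. The following sketch of an accessor-based controller is mine, with invented class names; it isn't an excerpt from any particular MVC framework.

    import java.awt.TextField;

    // A minimal model exposing the accessors that the controller needs.
    class Employee
    {   private double salary;
        public  double getSalary()           { return salary; }
        public  void   setSalary( double s ) { salary = s;    }
    }

    // MVC-style controller: it pulls state out of the model and pushes it
    // into the view, so it's coupled to the representation used by both.
    class SalaryController
    {   void refresh( Employee model, TextField view )
        {   // The controller must know the salary is a double and how to
            // format it; change the representation and this code breaks.
            view.setText( String.valueOf(model.getSalary()) );
        }

        void commit( Employee model, TextField view )
        {   model.setSalary( Double.parseDouble(view.getText()) );
        }
    }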

My goal in the current system is to be able to arbitrarily change the look of the screen -- to add forms, remove forms, change the layout and contents of a form, and so on -- without having to modify the logical model at all. Similarly, I want to be able to make changes in the model-level implementation without impacting the I/O system. As a secondary goal, I want to be able to store a description of a form somewhere external to the program (in a database or a configuration file, for example) and be able to modify the form by modifying its description (without having to recompile). This decoupling of the model and view is also the goal of MVC, but as we saw two months ago, MVC can't really pull it off.

Issues of reuse

Before leaving MVC entirely, I need to discuss one other issue. Some people argue that the MVC grab-the-data-and-shove-it-into-the-view approach is necessary to get "pluggable components," originally called "software integrated circuits" by Brad Cox (who came up with the notion in the context of his Objective-C language). I strongly believe -- I can hear the flames coming now -- that the notion of programming by hooking together pluggable components doesn't work in practice. It's true that you can create a program by joining together "component" objects, but I don't believe these objects (with the obvious exception of trivial things like widgets -- buttons, lists, and so on) can ever be truly reusable. It's impossible to define a class that works in every possible context, and even if you could get such a thing to work, the class would be extremely bloated. The Ariane 5 Failure Report (see Resources) is an oft-cited example that documents in excruciating detail why using a "pluggable component" in a context other than its original design context is risky at best; it's worthwhile reading.

Let's look at the problem of reuse from an object modeler's perspective. The notion of a "problem domain" -- the context in which the program you're designing will be used -- is fundamental to object-oriented design. If you're writing an accounts-payable program, for example, then your problem domain is accounting. An object modeler begins by defining the problem solely in the context of that domain. The attributes (and operations, which are attributes after all) of the real-world thing you're modeling are irrelevant unless they relate to the problem at hand. For example, you typically wouldn't model hair color, or a body-odor quotient, in a Person class designed for use in a human resources application. Hair color, however, would be essential if you were modeling an appointment system for a hair stylist. The difference is the problem domain -- accounting in one case and hair care in the other. Trying to come up with a pluggable-component Person class that could be used in both applications is a waste of time.

For a more down-to-earth example, consider two business objects like "employee" and "manager." In most problem domains, there is only one class (Employee) that takes on the role of "manager" in some scenarios. Generic employees and their managers would have the same attributes (and operations), but they'd be used differently -- perhaps put into different lists. In other problem domains, a manager has all the attributes and behavior of an employee, but also has a few additional attributes. Here, a Manager class that derived from Employee might make sense. The most interesting case, though, is a problem domain like time-sheet authorization. In this problem domain, an employee fills out time sheets and a manager authorizes them; there is no overlapping functionality. The fact that the same physical person logs on to the system as a manager in some situations and an employee in others is irrelevant. Inside the program, Employee and Manager are completely different classes because they are defined by a nonoverlapping set of operations and attributes. Further, the operations supported by an employee in this last example (that is, fill out time sheet) are irrelevant in most other applications.
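For the time-sheet example, a sketch of the nonoverlapping types might look like the following; the interface and method names are invented here.

    // In the time-sheet-authorization domain, Employee and Manager share
    // no attributes or operations, so neither is derived from the other.
    class TimeSheet
    {   // hours worked, approval state, and so on
    }

    interface Employee
    {   TimeSheet fill_out_time_sheet();
    }

    interface Manager
    {   void authorize( TimeSheet sheet );
    }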

To summarize, object-oriented designers are not particularly interested in all the richness of the real world -- they focus on the narrow part of the real world that's relevant to the current problem. I contend that it's impossible to come up with a reasonable definition for an Employee class that can be "plugged" into all the situations discussed in the previous paragraph. If you did manage to write such a class, it would be an incoherent mess -- a bloated collection of unrelated operations and attributes. I can't imagine an interface that you could use in all possible situations.

If a pluggable component doesn't work, then what is reuse? To paraphrase the Gang of Four approach (see Resources), you get reuse by programming to interfaces rather than to classes. If all the arguments to a method are references to some known interface, implemented by classes you've never heard of, then that method can operate on objects whose classes didn't even exist when the code was written. Technically, it's the method that's reusable, not the objects that are passed to the method.
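A small sketch of that idea, with invented names: the method below is the reusable piece because it's written against an interface rather than against any concrete class.

    interface Authorizable
    {   void authorize();
    }

    class Batch_processor
    {   // Reusable method: it operates on any object that implements
        // Authorizable, including objects of classes that didn't exist
        // when this method was compiled.
        static void authorize_all( Authorizable[] items )
        {   for( int i = 0; i < items.length; ++i )
                items[i].authorize();
        }
    }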

Regarding user interfaces, some people don't believe that an object can create its own user interface, because such an object couldn't be reused as a "pluggable" component. Because you don't know the context in which the object is to be used, they argue, you can't invent a user interface for that object that will work in every context. While that argument is fine on the surface, it raises the larger question of whether it's desirable, or even possible, to construct such a generic beast.
