Conventional software vs. software as a service

Lines redrawn in the battle between browser-based and conventional software

September 5, 2005—When Peter Yared, CEO and founder of LAMP (Linux, Apache, MySQL, and Perl/PHP/Python) middleware startup ActiveGrid, realized he needed project management software to coordinate his company's development work, he tried Microsoft Project 2003.

The experiment didn't last long. "2003???" Yared asked incredulously on his Weblog. "This code hadn't been touched in 3 years!" What's more, the features and functions simply weren't compelling. So Yared switched to Basecamp, a Web-based application that also served as the proving ground for the now-popular Ruby on Rails Web application framework. The upshot? Basecamp "rocks," Yared wrote.

For a decade, we've understood the benefits of delivering software through the Web: zero installation, platform independence, anywhere/anytime access, continuous improvement. The drawbacks have been equally obvious, or so it has seemed. The browser-based application has not held a candle to conventional software's rich interactivity, split-second responsiveness, and offline-capable local datastore. The resurgence of interest in what is now called AJAX (asynchronous JavaScript and XML), however, has cast doubt on the first two objections. Browser-based software has always been capable of highly dynamic behavior, thanks to DHTML (dynamic HTML), and it has long been able to interact autonomously with remote XML services. These two capabilities, recently stabilized as de facto cross-browser standards, are powering a new generation of Web applications that are richer and more responsive than many thought possible. And although the lack of a model for offline use remains a drawback, it diminishes in importance as connectivity becomes ever more pervasive.
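The mechanics behind that second capability are simple enough to sketch. Here is a minimal illustration of the pattern in browser JavaScript circa 2005; the helper names (createRequest, fetchXml) are hypothetical, not drawn from any particular toolkit:

```javascript
// Sketch of the AJAX pattern: request XML asynchronously, then act on
// the response without reloading the page. Browser-side code; assumes
// either the standard XMLHttpRequest object or IE's older ActiveX form.
function createRequest() {
  if (typeof XMLHttpRequest !== "undefined") {
    return new XMLHttpRequest();                  // most browsers
  }
  return new ActiveXObject("Microsoft.XMLHTTP");  // IE 5/6 fallback
}

function fetchXml(url, onLoaded) {
  var req = createRequest();
  req.open("GET", url, true);                     // true = asynchronous
  req.onreadystatechange = function () {
    if (req.readyState === 4 && req.status === 200) {
      // hand back a parsed XML document if the browser provides one,
      // otherwise the raw response text
      onLoaded(req.responseXML || req.responseText);
    }
  };
  req.send(null);
}
```

The key is the third argument to open: the request runs asynchronously, so the page stays responsive while the XML arrives in the background.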

It's tempting to portray this as a battle between old-fashioned software, which must be laboriously installed and configured, and newfangled stuff that just flows where needed. The reality, of course, is more subtle. A powerful AJAX-style application, such as Google's Gmail, doesn't install itself in the conventional sense, but it does refresh its code on each use. What's the distinction between installing code and just caching it? It's fuzzy, and no less so for JavaScript and XML resources than for Java applets or .Net assemblies.
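The blurriness is visible at the HTTP level. In a hypothetical exchange like the following, the browser revalidates its cached copy of an application's JavaScript on each visit, and the server answers 304 Not Modified when nothing has changed; new code flows down only when it actually differs:

```
GET /app.js HTTP/1.1
Host: example.com
If-None-Match: "v42"

HTTP/1.1 304 Not Modified
ETag: "v42"
```

Is that cached script installed software or delivered software? The wire protocol doesn't care.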

Likewise, although there is no official model for local storage, various mechanisms exist. Internet Explorer 5 introduced a limited persistence capability called the userData behavior. Tibco's General Interface, a powerful AJAX toolkit, uses the browser's cache to store and retrieve XML files. And BEA's Project Alchemy, aimed at formalizing the idea of a browser-accessible XML datastore, has yet to emerge, but it's reasonable to suppose that it will in some form.
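For the curious, IE's userData behavior looks roughly like this. The snippet is a from-memory sketch of IE-only markup and script (the store and attribute names are invented), not portable code:

```html
<!-- IE 5+ only: attach the proprietary userData behavior to an element -->
<div id="store" style="behavior:url(#default#userData)"></div>
<script type="text/javascript">
  var store = document.getElementById("store");
  store.setAttribute("lastProject", "redesign");  // stage a value
  store.save("appState");                         // persist it across sessions
  // later, even after the browser restarts:
  store.load("appState");
  var value = store.getAttribute("lastProject");
</script>
```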

In the end, the various approaches—including browser-based applications, hybrids involving Java applets or Flash components, and on-demand technologies such as Java Web Start and .Net ClickOnce—must all deliver the same goods: universal reach, rich behavior, secure execution, and secure access to local storage. In this age-old battle on four fronts, from which no single victor is likely to emerge, the lines have recently been redrawn. The AJAX revolution of 2005 showed that the browser's unparalleled reach could be combined with unsuspected richness.

Of course, we've also seen old security issues resurface, as when the HTTP client capability of Firefox's Greasemonkey extension was found to be vulnerable and had to be temporarily neutered. There was nothing new here—and nothing specific to Firefox or the browser-based approach in general. If we want software as a service, and we most assuredly do, we'll continue to wrestle with the trade-offs between what partially trusted and demand-loaded software can do for us—and what it can do to us.

Jon Udell is lead analyst and blogger in chief at the InfoWorld Test Center.
