What is it about benchmarks that makes them both universally reviled and, at the same time, impossible to ignore? Whatever their attraction, benchmarks got a lot of J2EE (Java 2 Platform, Enterprise Edition) developer attention this past year, with a series of controversial, high-profile results that left some developers scratching their heads and asking themselves: what does it all mean?
For J2EE and .Net, the benchmarking battles kicked off in June 2001, when Oracle published benchmark numbers showing Oracle 8i's performance with an unnamed J2EE server on a modified version of Sun Microsystems' Pet Store Demo training application. Microsoft then published test results that, it claimed, showed a dramatic performance improvement over the Oracle numbers. Soon BEA and Macromedia had released Pet Store benchmarks of their own disputing Microsoft's findings, and Pet Store, despite having been created as a teaching tool rather than a performance benchmark suite, was fast becoming ground zero for the J2EE/.Net benchmarking wars.
Two months ago, J2EE training company the Middleware Company stepped in and attempted its own benchmark of the Pet Store Demo, inviting Microsoft to tune the .Net implementation while using its own engineers to tune the J2EE version.
That's when all hell broke loose.
The Middleware Company's numbers initially seemed like very bad news for the Java community: J2EE was consistently outperformed on two-, four-, and eight-way systems; it required seven times as many lines of code as .Net; and the unnamed app server the Middleware Company used for the test was twice as expensive as the Microsoft solution.
The Java community immediately shredded the results, arguing that the J2EE code was not properly optimized; that the benchmark should have used container-managed persistence (CMP) entity beans, or perhaps no Enterprise JavaBeans (EJB) at all, instead of bean-managed persistence (BMP) entity beans; and that the Middleware Company was wrong to allow only one vendor, Microsoft, to participate in the benchmark.
The Middleware Company now admits it made mistakes in its initial tests and is talking to some J2EE vendors about conducting a second benchmark. According to a company spokesperson, the Middleware Company now believes that "until alternative architectures such as CMP and no EJB at all can be tested, definitive conclusions about the performance of J2EE cannot be drawn."
But Gartner Group Research Director Mark Driver says there is at least one clear message in the benchmark numbers: the idea that .Net cannot scale is a myth. "The ironic thing was that it was Java bigots [who did the benchmarks]...Their purpose was to show that Java was faster, and they did everything that they could to level the playing field and get Java to perform."
Hotly debated benchmark numbers aside, .Net has made some significant strides this past year. In February 2002, Microsoft released its Visual Studio .Net development tool, along with the Common Language Runtime (CLR) and .Net class libraries that make up the .Net Framework. Driver says this software is getting decent reviews from the approximately 10 percent of Microsoft developers who have tried it.