J2ME devices: Real-world performance

Performance benchmarks can help device developers build better applications

J2ME (Java 2 Platform, Micro Edition) devices have invaded Asia and Europe and are starting to penetrate the US. Vendors have shipped millions of GSM (Global System for Mobile Communications) mobile phones and PDAs enabled with J2ME. No longer will communication devices, and eventually consumer devices in general, be constrained to basic functions. The possibilities are now endless. A Java solution lets a developer reach numerous devices with a single piece of code.

However, as these Java devices proliferate, we will see wide variation in how quickly different applications perform particular tasks, owing to differences in processor platforms, VM implementations, and device memory capabilities. Vendors who are not traditional VM providers may develop many of the VM implementations. Thus, characterizing device performance proves useful for writing applications: developers can better understand each device's limitations and strengths, and tailor applications accordingly.

Traditional benchmarking standards used in desktop and server environments, such as CaffeineMark, Linpack, SPECmarks, and Dhrystone, are comprehensive, but they address many performance aspects that are not directly relevant to J2ME devices. J2ME devices are inherently lightweight computing devices, not meant to provide the same level of functionality as traditional desktop or server machines. Thus, we must examine how to measure performance on a J2ME device.

With less emphasis on heavy computation and complexity, J2ME devices instead emphasize simplification of user tasks. Given the typically small screen size and constrained data input mechanisms, the user interface and the computations associated with a good user experience are paramount in any J2ME application.

In this article, we approach performance measurement by examining J2ME devices' performance from a user functionality perspective. Rather than compare what processor and operating system a particular device runs, the tests focus on determining whether a good user experience results when accessing a particular function.

Target platform

Since most J2ME-enabled devices currently on the market support only CLDC (Connected Limited Device Configuration) 1.0 and MIDP (Mobile Information Device Profile) 1.0, our performance benchmark is based on J2ME CLDC/MIDP 1.0. We will update our benchmark as real devices begin to support newer J2ME versions.

Test devices

As of May 15, 2002, more than 70 types of J2ME-enabled mobile devices were available in the market. The number is increasing dramatically. See Resources for a list of devices that support J2ME.

We use the following devices in our benchmark; here we classify the devices based on their functions (not performance):

  • General-purpose Java phone
    • Siemens SL45i/6688i
    • Siemens M50
  • Smart phone
    • Nokia 9210 Communicator
    • Nokia 7650
    • Motorola Accompli 008
    • Motorola 388
  • PDA
    • Palm m125
    • Handspring Treo 270 Communicator
    • iPAQ 3760
  • J2ME Wireless Toolkit (desktop emulator)

We test our benchmark applications with Sun Microsystems' J2ME Wireless Toolkit 1.04 beta (on Microsoft Windows 2000 with an Intel Pentium 4 1.5-GHz processor) for better comparison. Although the J2ME Wireless Toolkit 1.04 has functions for customizing a device, tracing exceptions, monitoring memory usage, and so on, we disable all those functions during benchmarking.

The iPAQ 3760 supports PersonalJava (via the Jeode VM) and J2ME (via the IBM J9 VM). J2ME applications can be transformed into PersonalJava applications and tested in the PersonalJava environment using ME4SE technology. We test our benchmark on the iPAQ 3760 in both the Jeode and IBM J9 environments.

For the Palm m125, we test two different J2ME-compatible virtual machines:

  1. Sun's MIDP for Palm OS 1.0, which can be downloaded from Sun's website
  2. IBM's J9 VM for Palm, which is installed with IBM WebSphere Studio Device Developer 4.0 (the evaluation version is free to download)

Here we list the minimum hardware requirements for J2ME-enabled (MIDP 1.0) mobile devices:

  • Java version: CLDC 1.0 + MIDP 1.0
  • Memory needed: 128 KB of nonvolatile memory for MIDP components, 8 KB of nonvolatile memory for application-created persistent data, and 32 KB of volatile memory for the Java runtime
  • Screen size: 96 x 54 pixels
  • Display depth: 1 bit
  • Pixel shape: 1:1
  • Networking: Two-way, wireless, with limited bandwidth
  • Input: One or more of the following: one-handed keyboard, two-handed keyboard, touch screen

Test procedure

Because different devices have different hardware, operating systems, and Java platform implementations, we provide a general benchmark standard and related applications to test the performance of various J2ME devices and compare them fairly. We define the benchmark at two levels: kernel level and application level.

  • Kernel level: We give a general idea of how fast a J2ME virtual machine executes common basic operations such as logical comparison, looping, and method invocation. Compared to API calls, these are low-level instructions. We consider speed the only performance standard here. The kernel-level test is application independent.
  • Application level: We give a general idea of how fast different J2ME devices execute common application interfaces, such as drawing a picture on the screen, opening an HTTP connection, storing data in the local record store, and parsing an XML document. Because some of these executions consume memory, we show memory usage information during benchmarking. However, memory usage is not considered a benchmark standard, since different J2ME devices differ little in that respect.

We only benchmark general J2ME platform performance. We do not benchmark an individual application's performance, nor do we locate an application's bottlenecks for optimization. But application developers may glean from our benchmark tests which parts of J2ME applications consume more processor time and heap space than others. These benchmarks should show developers which J2ME-enabled devices perform better in certain areas.

Many companies define benchmarking strategies for Java platforms (some are free, some are not). Most benchmarks test desktop and server-side environments. Some provide powerful tools, with, for example, support for thread monitoring. For our standard, we follow the common procedures for performing benchmarks and use typical test areas.

In the kernel-level test, we pick some of the common test areas from J2SE (Java 2 Platform, Standard Edition) benchmarks and add our own. In each test area, we execute test-specific code in a loop; the result is a speed, measured in loops per second. We built an application named JKernelMark (version 1.0, 10 KB) to perform the kernel-level benchmark.

In the application-level test, we first define the test areas. In each test area, we execute test-specific code in a method; the result is a time, measured in milliseconds per execution. We built different benchmark applications for different J2ME APIs: for the J2ME standard APIs, we built an application named JAppsMark (version 1.0, 14 KB); for third-party APIs (XML parsing, for example), we built the JXMLMark application (version 1.0, 21 KB) to test XML-parsing performance using kXML.
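
To make the two scoring modes concrete, the following minimal sketch shows how such measurements can be taken with System.currentTimeMillis(), the only timer CLDC 1.0 provides. The class and method names here are hypothetical illustrations, not the actual JKernelMark or JAppsMark internals; note that CLDC 1.0 has no floating point, so scores must be computed with integer arithmetic.

// Minimal timing sketch; the names are hypothetical, not the actual
// JKernelMark/JAppsMark internals. CLDC 1.0 provides
// System.currentTimeMillis() but no floating point, so scores are
// computed with integer arithmetic only.
public class TimingSketch {

    // Kernel-level style: run the test body in a loop, report loops per second.
    static long loopsPerSecond(int loops) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < loops; i++) {
            doWork(); // the test-specific code under measurement
        }
        long elapsed = System.currentTimeMillis() - start;
        return elapsed > 0 ? (loops * 1000L) / elapsed : -1;
    }

    // Application-level style: run the test body once, report milliseconds.
    static long millisPerExecution() {
        long start = System.currentTimeMillis();
        doWork();
        return System.currentTimeMillis() - start;
    }

    // Placeholder for a test body, e.g., simple string operations.
    private static void doWork() {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < 100; i++) {
            sb.append('x');
        }
    }

    // Entry point for desktop illustration; on a real device, a MIDlet
    // would invoke these methods from its user interface instead.
    public static void main(String[] args) {
        System.out.println("loops/sec: " + loopsPerSecond(10000));
        System.out.println("ms/exec:   " + millisPerExecution());
    }
}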

Benchmark details

For each benchmark, a MIDlet suite application performs the benchmark in the real device. The following sections explain what we test in each benchmark application.

JKernelMark

JKernelMark tests the following:

  • Sieve: An arithmetic algorithm that generates mathematical results from predefined expressions and conditions.
  • Loop: JKernelMark tests how VMs optimize loop operations. Again, this test is a mathematical algorithm that outputs a series of results from a mathematical expression. The benchmark puts the results into an array and reverses the array's sequence.
  • Logic: JKernelMark tests how fast the VM executes logic instructions. It creates many Boolean flags and then reverses those flag values in loops.
  • String: Here, we test how fast the virtual machine executes typical string operations. JKernelMark creates a StringBuffer, repeatedly appends a string to it in a loop, and then searches for a specified substring's location.
  • Method: JKernelMark tests how fast the VM handles method calls. It calculates the sum of integers using recursive function calls (see the sketch below).
  • Memory allocation and garbage collection: Here, JKernelMark tests memory allocation speed and how garbage collection affects performance. It continually creates new objects and new byte arrays (each loop may create around 20 KB of objects in the heap). When memory runs low, the system collects garbage, which noticeably slows the allocation of new objects.

JKernelMark score meaning: Loops per second; a higher score indicates better performance.
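
As an illustration of the Method test's flavor, a recursive integer sum might be timed as follows. This is a hedged reconstruction, since JKernelMark's source is not published here; the recursion depth and one-second measuring window are assumptions.

// Hypothetical reconstruction of a Method-style test: sum 1..n recursively.
// JKernelMark's actual source is not published; this only shows the idea.
public class MethodTest {

    // Each call adds a stack frame, so the VM's call overhead dominates.
    static int recursiveSum(int n) {
        return n <= 0 ? 0 : n + recursiveSum(n - 1);
    }

    public static void main(String[] args) {
        int loops = 0;
        long start = System.currentTimeMillis();
        // Assumed methodology: count how many runs fit into one second.
        while (System.currentTimeMillis() - start < 1000) {
            recursiveSum(100); // shallow depth to fit constrained stacks
            loops++;
        }
        System.out.println("Method test: " + loops + " loops/sec");
    }
}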

JAppsMark

JAppsMark tests the following:

  • Network communication: This test records the time a Java phone needs to make an HTTP connection to a Web-based system and read 200 bytes of information from the response. The execution time thus includes the network delay. If the test is unsuccessful, we record a test-failed status. Even though a mobile phone is J2ME enabled, it still needs a SIM (Subscriber Identity Module) card with either GSM data service or GPRS (General Packet Radio Service) activated. There are other ways to configure a J2ME device for this test, which we don't cover here; for example, you can connect the handheld directly to a PC.
  • Low-level GUI: JAppsMark tests a Java phone's performance in rendering graphics to the screen. The test loads an image file and paints it on a canvas at 250 random coordinates. The time required to execute this test is recorded for the different phone devices.
  • RMS (record management system): The RMS test uses a MIDP application that creates a record store, adds records, retrieves a sorted enumeration of the records, iterates through them, and finally deletes the record store (see the sketch below). The total execution time is recorded for phone devices from different manufacturers.
  • Thread-switching: JAppsMark tests how fast the VM handles thread-switching.

JAppsMark score meaning: Milliseconds per execution; a lower score means better performance.
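
The RMS test's steps map directly onto MIDP 1.0's javax.microedition.rms API. The following is a minimal sketch of such a test body, not the JAppsMark source; the store name, record count, and payloads are hypothetical, and passing a RecordComparator to enumerateRecords() would produce the sorted enumeration described above.

import javax.microedition.rms.RecordEnumeration;
import javax.microedition.rms.RecordStore;

// Hypothetical RMS test body; the store name, record count, and
// payloads are made up. Not the actual JAppsMark source.
public class RmsSketch {

    static long timeRmsTest() throws Exception {
        long start = System.currentTimeMillis();

        // Create (or open) a record store and add a few records.
        RecordStore rs = RecordStore.openRecordStore("bench", true);
        for (int i = 0; i < 10; i++) {
            byte[] data = ("record-" + i).getBytes();
            rs.addRecord(data, 0, data.length);
        }

        // Enumerate the records; passing a RecordComparator instead of
        // null here would yield a sorted enumeration.
        RecordEnumeration re = rs.enumerateRecords(null, null, false);
        while (re.hasNextElement()) {
            byte[] record = re.nextRecord(); // iterate through each record
        }
        re.destroy();

        // Close, then delete, the record store.
        rs.closeRecordStore();
        RecordStore.deleteRecordStore("bench");

        return System.currentTimeMillis() - start;
    }
}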

JXMLMark

Parsing XML in J2ME is interesting because J2ME has no standard XML-related APIs. Since a normal XML parser is too heavy for mobile devices, performance—which includes runtime performance, user perception, and deployment code size—proves important when parsing XML in J2ME.

As Jonathan Knudsen explains in "Parsing XML in J2ME" (Sun Microsystems, 2002), there are three ways to parse XML:

  1. "A model parser reads an entire document and creates a representation of that document in memory. Model parsers use significantly more memory than other types of parsers.
  2. A push parser reads through an entire document. As it encounters various parts of the document, it notifies a listener object. This is how the popular SAX (Simple API for XML) parser operates.
  3. A pull parser reads a little bit of a document at once. The application drives the parser through the document by repeatedly requesting the next piece.
  4. "

Although MIDP 1.0 does not include an XML parser, due to the importance of XML, Sun plans to add a small, efficient XML parser to enable platform-independent data exchange in MIDP 2.0. Please check JSR (Java Specification Request) 118: Mobile Information Device Profile 2.0 for more details.

Among lightweight XML parsers, kXML is one of the most popular APIs that can be used in the MIDP environment. Thus, we use kXML as the base API to benchmark XML-parsing speed. We use kXML 1.21, since kXML 2.0 was still in alpha when we designed our application.

Due to limited memory, J2ME mobile devices can only parse small XML files. We created the simple, small XML file shown below for our benchmark. A DTD (document type definition) is not needed, since kXML does not validate against DTDs (validation costs processor time and memory).

<?xml version="1.0" encoding="UTF-8" ?>
<message_list>
  <message>
    <header>
      <status>xxxStatus</status>
      <command>xxxCommand</command>
      <messageId>xxxMessageId</messageId>
      <processingRule>xxxProcessingRule</processingRule>
    </header>
    <body>
      <a>1</a>
      <b bvalue="123" />
      <c cvalue="1234">cvalue</c>
    </body>
  </message>
</message_list>

We parse the whole XML file from beginning to end in the usual way: walk through the entire document and read every tag, every attribute, and every value. See Jonathan Knudsen's article "Parsing XML in J2ME" for more details.
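
For readers unfamiliar with the pull model, a parsing loop along the following lines walks a document event by event. This is a sketch against the kXML 1.x pull API (org.kxml.parser.XmlParser) as documented for that release, not the JXMLMark source; reading the document from an in-memory String is an assumption for self-containment, and attribute handling is only noted in a comment.

import java.io.Reader;
import java.io.StringReader;

import org.kxml.Xml;
import org.kxml.parser.ParseEvent;
import org.kxml.parser.XmlParser;

// Sketch of a kXML 1.x pull-parsing loop; not the JXMLMark source.
public class ParseSketch {

    static long timeParse(String document) throws Exception {
        long start = System.currentTimeMillis();

        Reader reader = new StringReader(document);
        XmlParser parser = new XmlParser(reader);

        // Pull events until the document ends, touching every tag and value.
        // (The real benchmark also reads each start tag's attributes.)
        ParseEvent event;
        do {
            event = parser.read();
            switch (event.getType()) {
                case Xml.START_TAG:
                case Xml.END_TAG:
                    String name = event.getName();
                    break;
                case Xml.TEXT:
                    String text = event.getText();
                    break;
            }
        } while (event.getType() != Xml.END_DOCUMENT);

        return System.currentTimeMillis() - start;
    }
}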

We calculate the time (milliseconds) needed to parse the whole XML document and use the time as the score.

Score meaning: Milliseconds to parse the whole document; a lower score means better performance.

Benchmark application

The figures below show how to run the JKernelMark 1.0 benchmark application; JAppsMark and JXMLMark work similarly.

Figure 1. Select Start
Figure 2. Gauge indication of each test's progress and available memory display
Figure 3. The application shows the results

Benchmark results

We list our benchmark scores in the following tables. Notice that only the final scores are released.
