Three years ago, I wrote an article for JavaWorld called "Java Scripting Languages: Which Is Right for You?" When I collected the interpreters to compare, I tried to choose ones that seemed a good fit for a demanding commercial application. Ideally, I wanted an interpreter that would ease extension of the application's user interface, offer readable scripting code, and be reliable, fast, well supported, well documented, and complete. At that time, I narrowed the list down to Jacl, Jython, Rhino, and BeanShell.
A lot has changed over the last three years. Instead of a handful of choices, there are now more than a dozen scripting languages either under active development or already available for use. The list of solid choices is bigger than it was three years ago and now includes Groovy, JudoScript, Pnuts, and JRuby, in addition to Jacl, Jython, Rhino, and BeanShell. We could consider other scripting interpreters beyond this group, but this list is large enough for developers to find what they're looking for.
I wanted to benchmark all of these interpreters to see if the performance for Jacl, Jython, Rhino, and BeanShell had improved since 2002, and to find out how Groovy, JudoScript, JRuby, and Pnuts compare with them. I thought it might be interesting to see what's unique about the different scripting interpreters and if any have particular strengths or weaknesses.
In my previous article, I called out some of the well-known benefits of scripting interpreter integration and described the risks you take when integrating with one. Here, I boil that information down to the most important points and update the information based on my experience since writing the last article.
The benefits of Java scripting interpreters are substantial. For one thing, scripting languages can be simpler to code in than Java. Scripts also make it possible to drive and extend your program's application logic and user interface. They can be run directly against your Java application's class interfaces, which is very powerful. This lets you write test drivers for your program much more quickly than if you had to code and compile unit tests against the Java classes themselves. Also, if users take the time to extend your application by writing scripts for it, they're making an investment in your tool—and that can give your application an edge against the competition.
You do open yourself up to a certain amount of risk by integrating a Java scripting interpreter into your application, though. The two biggest risks are that the interpreter will be orphaned or that you will discover a fatal flaw in the interpreter after you ship a product with it.
Most of the interpreters are actively maintained and updated through an open source model, and, in those cases, you can probably rely on experts for help working around problems you find, patching the interpreter, or getting the bug fix you need included in a future release. That's a safe bet, but not a guarantee. If you are seriously considering using a specific interpreter, take a look at the activity on the development site to get a feel for how the code is evolving, and look at the message boards to see if user questions are getting answered. That will give you a feel for how well supported the code really is.
Another way to protect yourself is to thoroughly test any scripting interpreter you plan to use. The distributions for some interpreters include a set of unit tests. When you test the scripting interpreter integration with your application, those unit tests can serve as part of the larger test suite you'll want to put together. When you test the integration between the interpreter and the application, you have your work cut out for you, because scripting interpreters are so flexible and expose so much functionality to the developer. You're making an investment by spending time on quality assurance early on, instead of when the application is in production or when customers need a critical bug fixed.
The new list of contenders
The first benchmark: Performance
For the first benchmark, I wrote equivalent scripts for each of the interpreters to do a set of simple tasks and then timed how long it took each interpreter to run the scripts. My benchmark scripts stick to basic operations like looping, comparing integers against lots of other integers, and allocating and initializing large one- and two-dimensional arrays. The benchmarking scripts for each of the languages and the Java programs to run them can be downloaded from Resources.
The most useful information that comes out of the benchmarking tests is an apples-to-apples comparison of how quickly the interpreters complete some extremely simple tasks. If throughput is a high priority for you, then the benchmarking numbers are useful.
I tried to code each test as similarly as possible for each of the scripting languages. The tests were run using Java 1.4.2 on a Toshiba Tecra 8100 laptop with a 700 MHz Pentium III processor and 256 MB of RAM. I used the default heap size when invoking the Java Virtual Machine.
In the interest of giving you some perspective for how fast or slow these numbers are, I also coded the test cases in Java and ran them using Java 1.4.2.
Here are the four performance tests:
- Count from 1 to 1 million
- Compare 1 million integers for equality
- Allocate and initialize a 100,000 element array
- Allocate and initialize a 500-by-500 element array
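To make the four tasks concrete, here is a plain-Java sketch of what each one amounts to. The loop bounds match the list above, but the class and variable names are my own for illustration; the downloadable benchmark code in Resources may differ in detail.

```java
// Plain-Java sketch of the four benchmark tasks, timed as one batch.
public class JavaBench {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();

        // Task 1: count from 1 to 1 million.
        int count = 0;
        for (int i = 1; i <= 1000000; i++) {
            count++;
        }

        // Task 2: compare 1 million integers for equality.
        int matches = 0;
        for (int i = 0; i < 1000000; i++) {
            if (i == 500000) {
                matches++;
            }
        }

        // Task 3: allocate and initialize a 100,000 element array.
        int[] flat = new int[100000];
        for (int i = 0; i < flat.length; i++) {
            flat[i] = i;
        }

        // Task 4: allocate and initialize a 500-by-500 element array.
        int[][] grid = new int[500][500];
        for (int row = 0; row < 500; row++) {
            for (int col = 0; col < 500; col++) {
                grid[row][col] = row * col;
            }
        }

        long elapsed = System.currentTimeMillis() - start;
        System.out.println(count + " " + matches + " " + flat.length
                + " " + grid.length + " (" + elapsed + " ms)");
    }
}
```

The scripting versions of these tests are line-for-line translations of loops like these into each interpreter's syntax.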
Have things improved since 2002?
Before I show you which of the interpreters is the fastest, I want you to look at the bar chart in Figure 1 that lists results for the most time-consuming task: comparing 1 million integers for equality. For the four scripting interpreters covered in the 2002 article, I show the times with the old release of the interpreter running on the 1.3.1 JVM and the interpreter's most current release running on the 1.4.2 JVM. Interestingly, when I tested Jython, I used the same version of the scripting interpreter I tested in my previous article, but the results show about a 25 percent improvement in speed.
Since I ran the tests on the exact same hardware I used for my previous tests, I'd say that the 1.4.2 JVM substantially cut the time needed to run these benchmarks. Now look at what happened with Rhino, BeanShell, and Jacl. The newest release of Rhino running on 1.4.2 was 86 percent faster than the old release running on 1.3.1. For BeanShell, the improvement was about 70 percent, and for Jacl, about 76 percent. That's quite an improvement.
Total time for the four tasks
Since several of the interpreters closely resembled each other in terms of speed (at least for my benchmarks), I summed up the times for the interpreters on the four benchmark tasks and show the cumulative results in Figure 2.
The checkered flag
For these simple test cases, Rhino, Pnuts, and Jython were consistently the fastest, followed closely by Groovy, then JudoScript, and the others.
Whether these performance numbers matter to you depends on the kinds of things you want to do with your scripting language. If you have many hundreds of thousands of iterations to perform in a scripting function and users of your application will be waiting on the result, then you might want to either focus your attention on the interpreters at the fast end of the spectrum, or plan on implementing your most demanding algorithms in Java code instead of in scripting code. If your scripts have few repetitive functions to run, then the relative differences in speed between these interpreters are a lot less important. A faster computer can also make a big difference in these numbers.
Another thing worth pointing out is that even the fastest of the interpreters takes about 40 times as long to run these simple programs as compiled Java code does. If speed is really at a premium for you, you might decide that it makes more sense to code certain algorithms in Java instead of using scripting code.
Some scripting interpreters support the compilation of scripts directly down to bytecode. I was curious about how much of a performance difference this would make, so I tried another test. I used the script compiler for Rhino to turn the benchmark scripts into bytecode. Then I reran the whole benchmark suite 10 times using scripts and 10 times using scripts converted to bytecode. Surprisingly, compiling the scripts to bytecode only shaved about 10 percent off the time it took to run the four programs in the benchmark suite. I initially thought that the JVM invocation must be taking the lion's share of the time to run the benchmarks, but further examination showed that the invocation of the JVM itself only accounted for about 20 percent of the total time required to run the suite. It seems that compilation of simple scripts makes a positive difference, but isn't necessarily a silver bullet for dramatically improving performance. Perhaps with longer or more compute-intensive scripts, you would see different results.
The second benchmark: Integration difficulty
The integration benchmark covers two tasks. One task shows how much code it takes to instantiate the scripting language interpreter and run a scripting file. The name of the script to run is passed in as a command-line argument to the ScriptRunner class. This yields a straightforward but useful program for testing scripts. Most distributions for the interpreters include much nicer console applications for interactive testing of scripts. I wanted to write a simple program from scratch to see if the documentation made using the interpreter easy or hard.
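A hypothetical minimal ScriptRunner might look like the sketch below. In the article's time frame, each interpreter had its own embedding API, so the exact calls vary per interpreter; here I use the javax.script API (standardized later, in JSR 223) purely to illustrate the shape of the task. The engine name "javascript" is an assumption about what your JVM bundles.

```java
import java.io.FileReader;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

// Minimal sketch: instantiate an interpreter and run one script file,
// named by the first command-line argument.
public class ScriptRunner {
    public static void main(String[] args) throws Exception {
        if (args.length < 1) {
            System.err.println("usage: java ScriptRunner <script-file>");
            return;
        }
        ScriptEngineManager manager = new ScriptEngineManager();
        // "javascript" is an assumption; each interpreter registers
        // its own engine names with the manager.
        ScriptEngine engine = manager.getEngineByName("javascript");
        if (engine == null) {
            System.err.println("no JavaScript engine found on this JVM");
            return;
        }
        engine.eval(new FileReader(args[0]));
    }
}
```

With an interpreter's own API, the same three steps apply: create the interpreter, hand it a reader or file name, and evaluate; the line count for those steps is what this benchmark measures.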
The second task writes a script that instantiates a Java JFrame, populates it with a JTree, and displays the frame.
These tasks are simple but have some value since they show how easy it is to get started using the interpreters and also what a script written for the interpreter will look like when you use it to call Java class code. I present these examples as just one way of getting started. They aren't meant to be bulletproof or even particularly complete; they provide just the essentials to get something working in that scripting language. Once you have that going, you can really start investigating the features important to your application.
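As a point of reference, here is the plain-Java equivalent of the second task; a script for any of these interpreters drives these same Swing classes through the interpreter's Java bridge. The tree's node labels are my own invention for the example.

```java
import java.awt.GraphicsEnvironment;
import javax.swing.JFrame;
import javax.swing.JTree;
import javax.swing.tree.DefaultMutableTreeNode;

// Plain-Java version of the scripted task: build a JTree,
// put it in a JFrame, and show the frame.
public class TreeTask {
    public static void main(String[] args) {
        DefaultMutableTreeNode root = new DefaultMutableTreeNode("interpreters");
        root.add(new DefaultMutableTreeNode("Rhino"));
        root.add(new DefaultMutableTreeNode("Jython"));
        JTree tree = new JTree(root);
        System.out.println("rows: " + tree.getRowCount());

        // Only try to open a window when a display is available.
        if (!GraphicsEnvironment.isHeadless()) {
            JFrame frame = new JFrame("Scripted tree");
            frame.getContentPane().add(tree);
            frame.pack();
            frame.setVisible(true);
        }
    }
}
```

The interesting part of the benchmark is how much of this ceremony each scripting language lets you drop: property-style accessors, implicit constructors, and looser typing can shrink these dozen lines considerably.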