I am not as proficient with Scala as I am with Java, but I attempted to convert the code for the matrix multiplication benchmark directly from Java to Scala. The result is shown in Listing 3 below. When I ran the Scala version of the benchmark on my computer, it averaged 12.30 seconds, which puts the performance of Scala very close to that of Java with primitives. That outcome is much better than I expected and supports the claims about how Scala handles numeric types.

#### Listing 3. Multiplying two matrices in Scala

```
def multiply(a: Array[Array[Double]], b: Array[Array[Double]]): Array[Array[Double]] = {
  if (!checkArgs(a, b))
    throw new IllegalArgumentException("Matrices not compatible for multiplication")
  val nRows = a.length
  val nCols = b(0).length
  val result = Array.ofDim[Double](nRows, nCols)
  for (rowNum <- 0 until nRows) {
    for (colNum <- 0 until nCols) {
      var sum = 0.0  // must be var, not val, since it accumulates
      for (i <- 0 until a(0).length)
        sum += a(rowNum)(i) * b(i)(colNum)
      result(rowNum)(colNum) = sum
    }
  }
  result
}
```
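For comparison, the timing figures quoted throughout were gathered by running each version several times and averaging. A minimal Java driver in that spirit might look like the following sketch (the matrix size, run count, and helper names here are my own illustration, not the article's actual harness; `multiply` mirrors the Java-with-primitives version):

```java
import java.util.Random;

// Hypothetical benchmark driver: times multiply() over several runs
// and reports the average, as described in the text.
public class MatrixBenchmark {
    static double[][] multiply(double[][] a, double[][] b) {
        int nRows = a.length, nCols = b[0].length, inner = a[0].length;
        double[][] result = new double[nRows][nCols];
        for (int i = 0; i < nRows; ++i) {
            for (int j = 0; j < nCols; ++j) {
                double sum = 0.0;
                for (int k = 0; k < inner; ++k)
                    sum += a[i][k] * b[k][j];
                result[i][j] = sum;
            }
        }
        return result;
    }

    static double[][] randomMatrix(int rows, int cols, Random rng) {
        double[][] m = new double[rows][cols];
        for (int i = 0; i < rows; ++i)
            for (int j = 0; j < cols; ++j)
                m[i][j] = rng.nextDouble();
        return m;
    }

    public static void main(String[] args) {
        int n = 200;       // small size so the sketch runs quickly
        int runs = 3;      // the article averaged over several runs
        Random rng = new Random(42);
        double[][] a = randomMatrix(n, n, rng);
        double[][] b = randomMatrix(n, n, rng);
        long total = 0L;
        for (int r = 0; r < runs; ++r) {
            long start = System.nanoTime();
            multiply(a, b);
            total += System.nanoTime() - start;
        }
        System.out.printf("average: %.3f seconds%n", total / (runs * 1e9));
    }
}
```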

## Benchmarking C++

Since C++ runs directly on "bare metal" rather than in a virtual machine, one would naturally expect C++ to run faster than Java. Moreover, Java performance is reduced slightly by the fact that Java checks every array access to ensure that the index is within the bounds declared for the array, whereas C++ does not (a C++ *feature* that can lead to buffer overflows, which can be exploited by hackers). I found C++ somewhat more awkward than Java for working with basic two-dimensional arrays, but fortunately much of this awkwardness can be hidden inside the private parts of a class. For C++, I created a simple `Matrix` class and overloaded the `*` operator for multiplying two matrices, but the basic matrix multiplication algorithm was converted directly from the Java version. The C++ source code is shown in Listing 4.
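The bounds-checking difference is easy to see in miniature. This short demo is my own illustration, not code from the article: Java validates every array index at runtime and throws when one is out of range.

```java
// Java checks each array index at runtime; an out-of-range access
// raises ArrayIndexOutOfBoundsException instead of reading stray memory.
public class BoundsCheckDemo {
    public static void main(String[] args) {
        int[] data = new int[3];
        try {
            int x = data[3];   // one past the end: the JVM detects this
            System.out.println(x);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("out-of-range access caught by the JVM");
        }
        // In C++, data[3] would silently read adjacent memory (undefined
        // behavior), which is what makes buffer overflows exploitable.
    }
}
```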

#### Listing 4. Multiplying two matrices in C++

```
Matrix operator*(const Matrix& m1, const Matrix& m2) throw(invalid_argument)
{
    if (!Matrix::checkArgs(m1, m2))
        throw invalid_argument("matrices not compatible for multiplication");
    Matrix result(m1.nRows, m2.nCols);
    for (int i = 0; i < result.nRows; ++i)
    {
        for (int j = 0; j < result.nCols; ++j)
        {
            double sum = 0.0;
            for (int k = 0; k < m1.nCols; ++k)
                sum += m1.p[i][k] * m2.p[k][j];
            result.p[i][j] = sum;
        }
    }
    return result;
}
```

Using Eclipse CDT (Eclipse for C++ Developers) with the MinGW C++ compiler, it is possible to create both *debug* and *release* versions of an application. To test C++ I ran the *release* version several times and averaged the results. As expected, C++ ran noticeably faster on this simple benchmark, averaging 7.58 seconds on my computer. If raw performance is the primary factor in selecting a programming language, then C++ is the language of choice for numerically intensive applications.

## Benchmarking JavaScript

Okay, this one surprised me. Given that JavaScript is a very dynamic language, I expected its performance to be the worst of all, even worse than Java with wrapper classes. But in fact, JavaScript's performance was much closer to that of Java with primitives. To test JavaScript I installed Node.js, a JavaScript engine with a reputation for being very efficient. Listing 5 shows the JavaScript version of the matrix multiplication benchmark that I ran on Node.js; the results averaged 15.91 seconds.

#### Listing 5. Multiplying two matrices in JavaScript

```
function multiply(a, b)
{
    if (!checkArgs(a, b))
        throw new Error("Matrices not compatible for multiplication");
    var nRows = a.length;
    var nCols = b[0].length;
    var result = new Array(nRows);
    for (var rowNum = 0; rowNum < nRows; ++rowNum)
    {
        result[rowNum] = new Array(nCols);
        for (var colNum = 0; colNum < nCols; ++colNum)
        {
            var sum = 0;
            for (var i = 0; i < a[0].length; ++i)
                sum += a[rowNum][i] * b[i][colNum];
            result[rowNum][colNum] = sum;
        }
    }
    return result;
}
```

## In conclusion

When Java first arrived on the scene some 18 years ago, it wasn't the best language from a performance perspective for applications that are dominated by numerical calculations. But over time, with technological advancements in areas such as just-in-time (JIT) compilation (*aka* adaptive or dynamic compilation), Java's performance for these kinds of applications is now comparable to that of languages that are compiled into native code when primitive types are used.

Moreover, primitives require no heap allocation and thus create no work for the garbage collector, giving them another performance advantage over object types. Table 4 summarizes the runtime performance of the matrix multiplication benchmark on my computer. Other factors such as maintainability, portability, and developer expertise make Java a better choice for many such applications.
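The cost difference shows up even in a toy example. The sketch below (my own illustration, with an arbitrary array size) sums the same values through a primitive `double[]` and through a boxed `Double[]`, where every element is a separate heap object that the garbage collector must eventually reclaim:

```java
// Illustrative only: the same summation over primitives and wrapper objects.
public class BoxingDemo {
    static double sumPrimitive(double[] a) {
        double sum = 0.0;
        for (double v : a) sum += v;      // no allocation, no unboxing
        return sum;
    }

    static double sumBoxed(Double[] a) {
        double sum = 0.0;
        for (Double v : a) sum += v;      // each element unboxed from the heap
        return sum;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        double[] prim = new double[n];
        Double[] boxed = new Double[n];   // n references to n separate objects
        for (int i = 0; i < n; ++i) {
            prim[i] = i;
            boxed[i] = (double) i;        // autoboxing allocates a Double
        }
        System.out.println(sumPrimitive(prim) == sumBoxed(boxed));  // prints true
    }
}
```

Both methods compute the same result, but the boxed version pays for a pointer chase and an unboxing operation per element, plus the allocation and collection of a million small objects.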

As previously discussed, Oracle appears to be giving serious consideration to the removal of primitives in a future version of Java. Unless the Java compiler can generate code with performance comparable to that of primitives, I think that their removal from Java would preclude the use of Java for certain classes of applications; namely, those applications dominated by numerical calculations. In this article I have used a simple benchmark based on matrix multiplication and a more scientific benchmark, SciMark 2.0, to argue this point.

## About the Author

John I. Moore, Jr., Professor of Mathematics and Computer Science at The Citadel, has a wide range of experience in both industry and academia, with specific expertise in the areas of object-oriented technology, software engineering, and applied mathematics. For more than three decades he has designed and developed software using relational databases and several high-order languages, and he has worked extensively in Java since version 1.1. In addition, he has developed and taught numerous academic courses and industrial seminars on advanced topics in computer science.

## Further reading

- Paul Krill wrote about Oracle's long-range plans for Java in "Oracle lays out long-range Java intentions" (JavaWorld, March 2012). This article, along with the associated comments thread, motivated me to write this defense of primitives.
- Szymon Guz writes about his results in benchmarking primitive types and wrapper classes in "Primitives and objects benchmark in Java" (SimonOnSoftware, January 2011).
- On the support website for *Programming: Principles and Practice Using C++* (Addison-Wesley, 2009), C++ creator Bjarne Stroustrup provides an implementation for a matrix class that is much more complete than the one accompanying this article.
- John Rose, Brian Goetz, and Guy Steele discuss a concept called *value types* in "State of the Values" (OpenJDK.net, April 2014). Value types can be thought of as immutable user-defined aggregate types without identity, thereby combining properties of both objects and primitives. The mantra for value types is "codes like a class, works like an int."