Is your code ready for the next wave in commodity computing?

Prepare yourself for multicore processing

"Hardware is really just software crystallized early," says Alan C. Kay in his paper "The Early History of Smalltalk" (ACM, 1993). That quote captures the inspiration for this article. Software developers have always been at the mercy of hardware manufacturers, although we've had a pretty easy ride of it since the inception of the computing industry. From then until now, increasing speeds of every component in the standard Von Neumann architecture have given our software essentially free increases in performance.

No longer.

All of the main hardware manufacturers (Intel, IBM, Sun, and AMD) have recognized the problems inherent in jacking up CPU clock speeds and are rolling out a fundamental change in processor architecture: multicore units with more than one processing element, providing true hardware support for multiple threads of execution rather than the simulated support of the past.
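
To make that concrete in Java terms, here is a minimal sketch (my own illustration, not part of the hardware discussion above) that asks the JVM how many processing elements are available and sizes a thread pool to match, using the java.util.concurrent APIs introduced in Java 5:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CoreProbe {

    public static void main(String[] args) {
        // On a multicore CPU this reports the number of hardware-supported
        // threads of execution, not the number of physical chips.
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Processing elements available: " + cores);

        // Size a thread pool to the core count so the JVM can schedule
        // one task per processing element.
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        for (int i = 0; i < cores; i++) {
            final int id = i;
            pool.execute(new Runnable() {
                public void run() {
                    System.out.println("Task " + id + " running on "
                            + Thread.currentThread().getName());
                }
            });
        }
        pool.shutdown();
    }
}

On a single-core machine this behaves just like the threading we have always had; on a multicore machine, the tasks can genuinely run at the same time.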

Like every other change you have come across in your career as a Java programmer, this one brings both opportunities and threats. In this article, I highlight those opportunities and challenges, and detail how to address the main challenges identified.

Why parallelize?

Let's take a step back and examine the factors that have precipitated the advent of parallel computing hardware in all tiers of IT, as opposed to specialized high-end niches. Why would we want hardware that can execute software in true parallel mode? For two reasons: You need an application to run more quickly on a given dataset and/or you need an application to support more end users or a larger dataset.

And if we want a "faster" or "more powerful" application, where powerful means handling more and more users, then we have two options:

  1. Increase the power of the system resources
  2. Add more resources to the system

If we increase the power of the system resources (that is, replace or extend the system within its original boundaries), then we are scaling the system vertically; for example, replacing a 1-GHz Intel CPU with a pin-compatible 2-GHz version is a straight swap. If, however, we choose to add to the system resources such that we extend beyond the original boundaries of the system, then we are scaling the system horizontally; for example, adding another node to an Oracle 10g RAC cluster to improve overall system performance.

Finally, I'd like to make one more point on the advent of parallel hardware: you may not need it or even want it for your application, but you have no choice in the matter. CPUs that provide true hardware support for multiple threads are becoming the norm, not the exception. Some estimates indicate that 75 percent of the Intel CPUs shipping by the end of 2007 will be multicore; Intel itself estimates 25 percent by the end of 2006.

Let's switch gears for a moment and place ourselves in the shoes of a hardware designer. How can we feed the insatiable appetite of software programmers for more powerful hardware? We've spent the last 20 years delivering on the promise of Moore's Law, to the point where we are running into fundamental problems of physics. Continuing to increase the clock speed of single processing units is not a sustainable path forward because of the power required and the heat generated. The next logical move is to add more processing units to those self-same chips. But that means no more free lunch: software engineers will need to explicitly take advantage of the new hardware resources at their disposal, or they won't realize the benefit.
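
To illustrate the "no more free lunch" point, here is a sketch (the class and method names are mine, purely for illustration) that sums an array first sequentially and then with the work explicitly divided into one chunk per core. Only the second version can benefit from the extra processing elements, and only because the code asks for them:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {

    // Sequential version: uses one core no matter how many are present.
    static long sumSequential(int[] data) {
        long total = 0;
        for (int value : data) {
            total += value;
        }
        return total;
    }

    // Parallel version: one summing task per processing element.
    static long sumParallel(final int[] data) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        int chunk = (data.length + cores - 1) / cores;

        List<Future<Long>> partials = new ArrayList<Future<Long>>();
        for (int start = 0; start < data.length; start += chunk) {
            final int from = start;
            final int to = Math.min(start + chunk, data.length);
            partials.add(pool.submit(new Callable<Long>() {
                public Long call() {
                    long subtotal = 0;
                    for (int i = from; i < to; i++) {
                        subtotal += data[i];
                    }
                    return subtotal;
                }
            }));
        }

        long total = 0;
        for (Future<Long> partial : partials) {
            total += partial.get(); // blocks until that chunk completes
        }
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        int[] data = new int[10000000];
        for (int i = 0; i < data.length; i++) {
            data[i] = 1;
        }
        System.out.println("Sequential: " + sumSequential(data));
        System.out.println("Parallel:   " + sumParallel(data));
    }
}

The sequential sum gets no faster on a multicore box; the parallel sum can, but only because the work was partitioned and handed to the extra cores explicitly.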

Resources