Understanding actor concurrency, Part 1: Actors in Erlang

A new way to think about structuring concurrent applications

The hardware we rely on is changing rapidly as ever-faster chips are replaced by ever-increasing numbers of cores. As a result, concurrency and parallelism, niche concerns today, will soon be basic requirements for most software. Application developers seeking concurrency solutions have discovered that shared-state concurrency (employed by Java and other imperative languages) comes up short when compared to functional models that promise better reliability and scalability. In this two-part article, Alex Miller introduces the actor concurrency model, which was popularized by Erlang and has more recently been adopted by Scala. Part 1 covers the actor model and its benefits; Part 2 shows how to use it on the JVM. Level: Intermediate

As programmers we know that our software lives on top of hardware, which for decades has relied on chips getting faster at an unrelenting pace. Moore's Law famously states that the number of transistors on a chip will double approximately every 18 months. This exponential law has held true for nearly four decades, and progress has at times even exceeded that pace. Intel chips had a few thousand transistors in the early 1970s. The chip in the computer you're reading this article on probably has hundreds of millions of transistors, and newer quad-core chips have several billion.

Until recently, the increase in transistor counts (and the reduction in transistor size) made chips faster, increased the number of pipelines, and dramatically increased the number and size of on-chip caches. But hardware crossed a boundary in the early 2000s: chips got big enough and cycle speeds got fast enough that a signal can no longer reach the whole chip within a clock cycle -- a physical and architectural limit that will be difficult to breach. Yet transistors per chip continue their unrelenting exponential march. Instead of increased speed per core, we are now seeing an increase in the number of cores per chip. CPU clock speeds are likely to stay relatively flat, or possibly even decrease, in the near future in the service of reduced heat and power consumption (see Figure 1). The multicore era is well under way, and it will change the way we write programs -- a reality that many software developers have yet to face.

Figure 1. Transistor and MIPS (millions of instructions per second) trends over time.

Right now you probably use a dual-core machine for day-to-day work, or maybe a quad-core if you're lucky. You might be deploying your server application to an 8-core machine. The concurrency model that most popular languages use today -- shared-state concurrency -- can make good use of that many cores and result in fast and efficient software. But if the number of cores doubles every 18 months, in a decade we will be dealing with thousands of cores: ten years is six to seven doublings, a factor of roughly 100, turning today's 8- and 16-core servers into machines with a thousand or more cores. (This seems ridiculous, but exponential curves always seem ridiculous to our feeble wetware. Forty years ago, the growth in transistor counts to today's levels seemed just as ridiculous and unsustainable.) The prospect of multicore systems of this magnitude threatens the viability of the shared-state concurrency model. This article introduces you to a robust alternative -- the actor concurrency model -- and explains how it is implemented in a 20-plus-year-old yet increasingly relevant functional language: Erlang.
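To make the contrast concrete, here is a minimal sketch in the style Part 1 explores: an Erlang process that owns a counter and is reached only through message passing. The module name, function names, and message shapes are illustrative, not taken from the article's own examples.

    -module(counter).
    -export([start/0, loop/1]).

    %% Spawn a counter process and return its process id.
    start() -> spawn(counter, loop, [0]).

    %% The process is the sole owner of its state (N). Other processes
    %% can only send it messages, so no locks are needed.
    loop(N) ->
        receive
            increment   -> loop(N + 1);
            {get, From} -> From ! {count, N}, loop(N)
        end.

A caller interacts with this process purely by sending messages -- for example, Pid = counter:start(), then Pid ! increment, and Pid ! {get, self()} followed by a receive -- so the counter's state is never shared, which is the property that lets this model scale across many cores.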
