Secure your Java apps from end to end, Part 2

Don't let flaws compromise application security

While most software developers are aware of the threat posed by intentionally malicious or simply curious hackers, few developers understand the extent to which the flaws they introduce into their applications aid and abet those same hackers.

In January of this year, a German software developer identified a design flaw with serious security implications in the recently open-sourced Borland InterBase product. The flaw existed in versions of InterBase stretching back to 1994!

No one had maliciously added the flaw -- a back door in the form of a hardcoded name and password. Instead, the error resulted from an InterBase developer's poor design decision. The application used the hardcoded name and password to access a special InterBase access control database during authentication.

While this mistake represents an extreme case, the lesson it teaches is important: as developers, the way we design our software and implement those designs has a huge impact on the overall security of the applications we build. And, as the example above illustrates, the security flaws we introduce can affect our customers for years.

Looking back

Last month I argued that we must examine security in light of three different contexts. These contexts, though distinct, often require solutions that cut across all three. Java application developers must understand the implications their work will have on their solution's security from the perspective of each context.

Both Java and non-Java developers are familiar with the most well-known security context: virtual machine security, which encompasses the JVM and its supporting runtime environment. This familiarity stems from the vast amount of attention the JVM and runtime environment received throughout those technologies' formative years. Over the last several years, virtual machine security has solidified and is well respected.

The recent lack of visible flaws in the JVM combined with the fact that the average programmer has little control over JVM security has gradually shifted the focus of Java security from the virtual machine to the application running on top of the virtual machine. The average Java programmer can significantly affect security at this level. This context, the application security context, deals with the design decisions and the deliberate and accidental programming missteps that happen during software development. Of course, not all such flaws compromise application security. We are only interested in those that do.

The final context, the network security context, illuminates the security facets relevant between applications and application components interacting in a distributed environment over a network. Once again, not all flaws introduced here compromise security. We will focus on those that do.

Last month, I looked at the virtual machine security context and demonstrated how flaws in VM security manifest themselves as exploits. This month I will focus on application security. I will describe the most common classes of flaws with an eye toward helping you avoid them. I will conclude with a recent example found in Sun's own code.

It is impossible to specifically list all the ways that bad design and poor programming can compromise an application's security. The example in the introduction is only one route. Instead, I will define and describe a handful of categories from which most Java application security vulnerabilities arise. These categories fall into two larger camps: implementation-related and design-related flaws.

Implementation-related flaws

Implementation-related flaws are introduced into an application as its design is translated into code. Caused by careless coding, an insufficient understanding of requirements, or a general lack of skill, implementation-related errors remain hidden because of inadequate testing and review. However, if the design is sound, you can usually fix the flaw without changing the design.

Timing problems

The most pernicious timing problem is the race condition, which occurs when two threads that access the same resource aren't properly synchronized. Flaws arise either when interacting threads leave an object in an inconsistent or invalid state, or when malicious code takes advantage of improperly protected resources in use by another thread. Often the solution is as simple as adding proper synchronization.
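To make the point concrete, here is a minimal sketch (not from any real application) of a check-then-act race condition and its fix. The class name and numbers are hypothetical; the flaw and the cure are exactly as described above.

```java
// A check-then-act race condition and its fix via synchronization.
public class BalanceExample {
    private long balance = 100;

    // Broken: another thread can change balance between the check and
    // the update, so two threads can both pass the check and together
    // withdraw more than the balance holds.
    public void withdrawUnsafe(long amount) {
        if (balance >= amount) {
            balance -= amount;
        }
    }

    // Fixed: the check and the update execute atomically, so no
    // interleaving can drive the balance negative.
    public synchronized void withdrawSafe(long amount) {
        if (balance >= amount) {
            balance -= amount;
        }
    }

    public synchronized long getBalance() {
        return balance;
    }

    public static void main(String[] args) throws InterruptedException {
        BalanceExample account = new BalanceExample();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) account.withdrawSafe(1);
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // 2,000 withdrawal attempts against a balance of 100:
        // the synchronized version can never go below zero.
        System.out.println(account.getBalance());
    }
}
```

Note that the fix is purely an implementation change, as the section above suggests: the class's design and public interface are untouched.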

Insufficient input validation

Input supplied to a process must be validated before it is used. Although some input comes from trusted sources, for safety's sake, all input should be considered to have come from an untrusted, possibly compromised source. Failure to thoroughly validate input can lead to a number of serious security vulnerabilities.
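One common validation approach, sketched below with hypothetical names, is a whitelist: accept only input that matches a known-good pattern, rather than trying to enumerate every known-bad character. Here the input is a file name that will later be used to open a file, so path characters must never slip through.

```java
import java.util.regex.Pattern;

// Whitelist-style input validation: only simple names with a single
// extension are accepted; anything else -- including path-traversal
// attempts such as "../etc/passwd" -- is rejected.
public class InputValidator {
    private static final Pattern SAFE_NAME =
        Pattern.compile("[A-Za-z0-9_]{1,64}\\.[A-Za-z0-9]{1,8}");

    public static boolean isSafeFileName(String input) {
        return input != null && SAFE_NAME.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isSafeFileName("report.txt"));     // true
        System.out.println(isSafeFileName("../etc/passwd"));  // false
    }
}
```

The whitelist fails closed: input the author of the pattern never anticipated is rejected by default, which is the safe direction to err in.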

Inadequate randomness

Good cryptography requires a high-quality source of randomness. Many early Netscape Navigator vulnerabilities directly resulted from an insufficiently random source. Keys generated from this source had an effective key length much shorter than their advertised key length and were therefore easier to crack.

On a more mundane note, the predictability (lack of randomness) of many user-supplied passwords makes them easy to guess. Sometimes sound security consists of weeding out weak passwords.
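In Java, the practical consequence is simple: java.util.Random is a predictable generator and must never feed key material, while java.security.SecureRandom is designed for cryptographic use. The sketch below (method name is hypothetical) contrasts the two.

```java
import java.security.SecureRandom;
import java.util.Random;

// java.util.Random vs. java.security.SecureRandom for key material.
public class RandomnessExample {
    // Generate cryptographically strong key bytes.
    public static byte[] newSessionKey(int lengthInBytes) {
        SecureRandom secureRandom = new SecureRandom();
        byte[] key = new byte[lengthInBytes];
        secureRandom.nextBytes(key);
        return key;
    }

    public static void main(String[] args) {
        // java.util.Random seeded with a guessable value always yields
        // the same sequence -- an attacker who recovers the seed
        // recovers every "random" value derived from it.
        Random predictable = new Random(42);
        System.out.println(predictable.nextInt());

        byte[] key = newSessionKey(16);
        System.out.println(key.length);  // 16
    }
}
```

Seeding java.util.Random with the current time, a common habit, is exactly the kind of insufficiently random source that shortened Netscape's effective key lengths.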

Design flaws

Security vulnerabilities that arise from implementation flaws are bad enough, but even worse are vulnerabilities that surface from poor design decisions, lack of foresight, or insufficient understanding of the language or library features. These flaws often become so intertwined with the application logic that removing them without killing the patient is difficult. The InterBase flaw mentioned above provides an excellent design-flaw example.

Initialization issues

Java-savvy developers know that the new operator isn't the only way to create new instances. Both deserialization and cloning create fully functional instances. You can use either mechanism to subvert a class's carefully constructed security by introducing instances that don't play by the rules. For security-sensitive classes, it's wise to defeat both cloning and deserialization.
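The defensive idiom looks like this (the class and its security check are hypothetical). Both clone() and deserialization bypass the constructor, so a class whose constructor enforces a check must also block those two back doors.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// A class that refuses to yield constructor-bypassing instances.
public final class GuardedResource implements Serializable {
    public GuardedResource() {
        // Imagine a security check here that every instance must pass.
    }

    // Defeat cloning: no copies that skip the constructor's checks.
    @Override
    protected Object clone() throws CloneNotSupportedException {
        throw new CloneNotSupportedException();
    }

    // Defeat deserialization: refuse reconstitution from a byte stream.
    private void readObject(ObjectInputStream in) throws IOException {
        throw new IOException("Deserialization is not supported");
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new ObjectOutputStream(bytes).writeObject(new GuardedResource());
        try {
            new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray())).readObject();
        } catch (IOException expected) {
            System.out.println("deserialization blocked");
        }
    }
}
```

Declaring the class final, as here, also closes a third route: a subclass cannot re-enable cloning or loosen the checks.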

Visibility and extensibility issues

Visibility (whether a class or member is public or private) and extensibility (whether a class can be subclassed or a method overridden) are important tools in a software developer's tool chest. However, if not used properly, both can lead to subtle vulnerabilities.

In the case of subclassing, a subclass can alter the contract implied by the superclass. Existing code, with its implicit dependence on the implied contract, can break or malfunction in a way that affects the application's security. The solution: liberally use the final keyword, which prevents redefinition through subclassing.

In the same way, failure to protect a class's inner workings and state through judicious use of the private keyword can expose a class to unexpected modification by classes subsequently added to a package.
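Both recommendations can be shown in a few lines. The class below is hypothetical; final locks down the implied contract against malicious subclassing, and private keeps the state out of reach of any class later dropped into the same package.

```java
// final prevents a subclass from altering the class's implied
// contract; private keeps state invisible outside the class itself.
public final class AccessToken {             // cannot be subclassed
    private final String owner;              // inaccessible to other classes

    public AccessToken(String owner) {
        // The invariant enforced here can't be broken later, because
        // no subclass or outside code can touch the field directly.
        if (owner == null || owner.isEmpty()) {
            throw new IllegalArgumentException("owner required");
        }
        this.owner = owner;
    }

    public String getOwner() {               // read-only access
        return owner;
    }

    public static void main(String[] args) {
        System.out.println(new AccessToken("alice").getOwner());
    }
}
```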

Don't keep secrets

The InterBase design flaw provides an excellent example of why you shouldn't embed secrets in an application. Secrets, often in the form of passwords or encrypted data, are never completely safe from a determined attacker. And once the secret is compromised, the genie is out of the bottle.

Sun makes mistakes too

Through direct experience, I have come to respect many of the engineers at Sun. However talented they are, they make mistakes just like you and me.

On February 23, Sun announced its discovery of a security flaw in the JDK. In their words:

A vulnerability in certain versions of the Java Runtime Environment may allow malicious Java code to execute unauthorized commands. However, permission to execute at least one command must have been granted in order for this vulnerability to be exploited.

The flaw allowed untrusted Java code, executing within an otherwise secure JVM, to invoke any executable (for example, format) if, under certain circumstances, the code had been given the legitimate ability to invoke at least one executable (for example, echo). The bug very likely went undetected for so long because its exploitation relied on a race condition with a narrow window of opportunity.

The flaw was in the exec() method of the java.lang.Runtime class. Paraphrased slightly, the code was:

  public Process exec(String[] arstringCommand, String[] arstringEnvironment)
    throws IOException
  {
    // Ensure that the array parameters aren't null, their elements
    // aren't null, etc.
      .
      .
      .
    // Do some stuff.
      .
      .
      .
    // Get the security manager.
    SecurityManager securitymanager = System.getSecurityManager();
    // Check the first element of the command array -- which should
    // be the name of the executable to invoke.  Ensure that it has
    // executable privilege.
    if (securitymanager != null)
      securitymanager.checkExec(arstringCommand[0]);
    // Now, invoke the executable.
    return execInternal(arstringCommand, arstringEnvironment);
  }

Do you see the problem?

The error lies in the last three lines (comments and white space excluded). First, the security manager checks the executable's name to see if it has been granted execute permission in the policy file. Second, the code executes the command. Whoops! In a multithreaded environment, the contents of the parameter arrays can change between these two steps. Since the two input parameter arrays are used directly, the caller still holds references to them and can modify their contents.

The fix: immediately copy the two input arrays and operate on the copies instead of the originals.
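A simplified sketch of that fix follows; it is not Sun's actual patch, and the names are borrowed from the paraphrased code above. The essential move is to snapshot the caller-supplied array before the security check, so the caller's lingering reference can no longer swap in a different command afterward.

```java
// Defensive copying: check and use a private copy of the caller's
// array, so later mutations by the caller have no effect.
public class DefensiveCopyExample {
    public static String[] checkedCommand(String[] arstringCommand) {
        // Copy first; validate and use only the copy from here on.
        String[] copy = arstringCommand.clone();
        for (String element : copy) {
            if (element == null) throw new NullPointerException();
        }
        // ... securitymanager.checkExec(copy[0]) would go here,
        // followed by execInternal(copy, ...) on the same copy ...
        return copy;
    }

    public static void main(String[] args) {
        String[] original = { "echo", "hello" };
        String[] safe = checkedCommand(original);
        original[0] = "format";        // attacker swaps the command...
        System.out.println(safe[0]);   // ...but the copy still says "echo"
    }
}
```

The same pattern applies anywhere a security check and a subsequent use both read mutable data supplied by an untrusted caller.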

Return to best practices

You can detect many flaws that lead to security vulnerabilities through good old-fashioned software development best practices. Clear requirements, formal design reviews, formal code reviews, and thorough testing will uncover many flaws and improve overall software quality.

Next month, I will explore the final security context: network security.

Todd Sundsted has been writing programs since computers became available in desktop models. Though originally interested in building distributed applications in C++, Todd moved on to the Java programming language when it became the obvious choice for that sort of thing. In addition to writing, Todd is cofounder and chief architect of PointFire, Inc.
