Java programming fundamentals: Data structures and algorithms in Java

Java 101: Data structures and algorithms in Java, Part 1

Learn the fundamentals of data structures and algorithms in Java


Both complexity functions are based on the input size (n), which reflects the amount of input data to be processed. Consider the following pseudocode for array printing:

DECLARE INTEGER i, x[] = [ 10, 15, -1, 32 ]
FOR i = 0 TO LENGTH(x) - 1
   PRINT x[i]
NEXT i
END
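
Here is a minimal Java rendering of that pseudocode; the class name and sample values are my own, chosen just for illustration:

public class PrintArray
{
   public static void main(String[] args)
   {
      int[] x = { 10, 15, -1, 32 };
      // One print per element: printing n elements takes n print steps.
      for (int i = 0; i < x.length; i++)
         System.out.println(x[i]);
   }
}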

Time complexity and time-complexity functions

You can express the time complexity of this algorithm by specifying the time-complexity function t(n) = an+b, where a (a constant multiplier) represents the amount of time to complete one loop iteration, and b represents the algorithm's setup time. In this example, the time complexity is linear.

The t(n) = an+b function assumes that time complexity is measured in terms of a chronological value (such as seconds). Because you'll usually want to abstract away such machine-dependent details, you'll often express time complexity as the number of steps needed to complete the algorithm.

How we define "step" can vary from one algorithm to another. In this case, if you identified the single print instruction as the program's step, you would rewrite the time-complexity function in terms of the printing step:

t(n) = n; for n array elements, n steps are needed to print the array.

It's important to take care when defining an algorithm's steps, so that the definition is meaningful and correlates with the algorithm's input size. For example, it makes sense to define printing as the step for the array-printing algorithm, because printing dominates the runtime and depends on the input size (number of array elements to print).

It's also possible to define steps in terms of comparisons and exchanges. In a sorting algorithm, for instance, you might define steps in terms of comparisons if comparisons dominate the runtime or exchanges if exchanges dominate the runtime.

It's fairly easy to choose a time-complexity function for the array-printing example, but it can be more difficult to find this function for more complicated algorithms. Use the following rules of thumb to simplify this task (a short sketch illustrating them follows the list):

  • Algorithms with single loops are typically linear--their time-complexity functions are specified in terms of n.
  • Algorithms with two nested loops are typically quadratic--their time-complexity functions are specified in terms of n².
  • Algorithms with a triply-nested loop are typically cubic--their time-complexity functions are specified in terms of n³.
  • This pattern continues with quadruply and higher nested loops.
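
To see why these rules of thumb hold, consider this small sketch (the sample size n = 8 is arbitrary); it simply counts how many times the innermost statement of each loop structure executes:

public class LoopSteps
{
   public static void main(String[] args)
   {
      int n = 8; // arbitrary sample input size
      long single = 0, twoNested = 0, threeNested = 0;
      for (int i = 0; i < n; i++)
         single++;                        // runs n times (linear)
      for (int i = 0; i < n; i++)
         for (int j = 0; j < n; j++)
            twoNested++;                  // runs n*n times (quadratic)
      for (int i = 0; i < n; i++)
         for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
               threeNested++;             // runs n*n*n times (cubic)
      System.out.println(single + " " + twoNested + " " + threeNested); // 8 64 512
   }
}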

These rules of thumb work best when a loop executes n times (where n is the size of the input data). This isn't always the case, however, as demonstrated by the Selection Sort algorithm represented in pseudocode below:

DECLARE INTEGER i, min, pass
DECLARE INTEGER x[] = [ ... ]
FOR pass = 0 TO LENGTH(x) - 2
   min = pass
   FOR i = pass + 1 TO LENGTH(x) - 1
      IF x[i] LT x[min] THEN
         min = i
      END IF
   NEXT i
   IF min NE pass THEN
      EXCHANGE x[min], x[pass]
   END IF
NEXT pass
END

Because this algorithm consists of two nested loops, you might think that its performance is quadratic. That's only partially correct, however, because the algorithm's performance depends on whether you choose comparisons or exchanges as the algorithm's step:

  • If you choose an exchange as one step (because you think that exchanges dominate the runtime), you end up with a linear time-complexity function, because at most n-1 exchanges are required to sort n data items. This function is specified as t(n) = n-1.
  • If you choose a comparison as one step (because you think that comparisons dominate the runtime), you end up with t(n) = (n-1)+(n-2)+...+1, which shortens to t(n) = n²/2-n/2. Comparisons occur in the inner loop, which executes n-1 times for the first outer loop iteration, n-2 times for the second, and so on down to once for the final outer loop iteration. (The Java sketch following this list counts both kinds of steps.)
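
Here's one way to translate the Selection Sort pseudocode into Java, instrumented with counters so you can watch both step definitions at work; the class name, counters, and sample data are my additions:

public class SelectionSort
{
   public static void main(String[] args)
   {
      int[] x = { 22, 5, 17, 3, 9 };
      int comparisons = 0, exchanges = 0;
      for (int pass = 0; pass < x.length - 1; pass++)
      {
         int min = pass;
         for (int i = pass + 1; i < x.length; i++)
         {
            comparisons++;                // one comparison step
            if (x[i] < x[min])
               min = i;
         }
         if (min != pass)
         {
            exchanges++;                  // one exchange step
            int temp = x[min];
            x[min] = x[pass];
            x[pass] = temp;
         }
      }
      // For n = 5 elements: comparisons = 4+3+2+1 = 10 (that is, n*n/2 - n/2),
      // while exchanges never exceeds n-1 = 4.
      System.out.println("comparisons = " + comparisons + ", exchanges = " + exchanges);
   }
}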

Space complexity and space-complexity functions

An algorithm's space complexity indicates the amount of extra memory needed to accomplish its task. For printing an array, a constant amount of extra memory (for code storage, stack space to store the return address when PRINT is called, and space for variable i's value) is needed no matter how large the array.

You can express the array-printing algorithm's space complexity via space-complexity function s(n) = c, where c signifies how much constant additional space is required. This value represents overhead only; it doesn't include space for the data being processed. In this case, it doesn't include the array.

Space complexity is expressed in terms of machine-independent memory cells instead of machine-dependent bytes. A memory cell holds some kind of data. For the array-printing algorithm, i's memory cell stores an integer value.

Comparing algorithms

You use time-complexity and space-complexity functions to compare an algorithm to others of a similar nature (one sorting algorithm to another sorting algorithm, for example). To ensure a fair comparison, you must use the same definition for step and memory cell in each algorithm.

Even when you have chosen identical step and memory cell definitions, however, comparing algorithms can still prove tricky. Because complexities are often nonlinear, an algorithm's input size can greatly affect the comparison result. As an example, consider two time-complexity functions:

  • t1(n) = 10n²+15n
  • t2(n) = 150n+5

When n equals 1, t1 yields 25 steps, whereas t2 yields 155 steps; in this case, t1 is clearly better. This pattern continues until n equals 14, at which point t1 yields 2170 steps and t2 yields 2105 steps. For this and all larger values of n, t2 is the better choice.
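
A quick way to see where this crossover occurs is to evaluate both functions over a range of n values, as in the following sketch:

public class Crossover
{
   public static void main(String[] args)
   {
      for (int n = 1; n <= 20; n++)
      {
         long t1 = 10L * n * n + 15L * n;  // t1(n) = 10n^2 + 15n
         long t2 = 150L * n + 5;           // t2(n) = 150n + 5
         System.out.println(n + ": t1 = " + t1 + ", t2 = " + t2);
      }
      // The output shows t1 < t2 through n = 13 and t2 < t1 from n = 14 onward.
   }
}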

Using Big Oh to represent upper bounds

Computer scientists commonly compare algorithms as n tends to infinity; this is known as asymptotic analysis. Upper bounds on a complexity function's asymptotic behavior (as n approaches infinity) are expressed with a notation called Big Oh. Here's the formal definition for Big Oh:

f(n) is O(g(n)) if and only if there exist two positive constants, c and n0, such that f(n) <= c * g(n) whenever n >= n0.

Note: n, f(n), g(n), c, and n0 must be positive.

f(n) represents the algorithm's computing time. When we say that this function is "O(g(n))," we mean that (in terms of steps) it takes no longer than a constant multiplied by g(n) for this function to execute. For example, here are the Big Oh notations for the previous time-complexity functions:

t1(n) = O(n²)
t2(n) = O(n)

You read the first equation as "t1 is order n²" or "t1 is quadratic," and read the second equation as "t2 is order n" or "t2 is linear." The result is that you can think of time complexities as being linear, quadratic, and so on, because this is how the algorithms behave as n greatly increases.

To prove that t1(n) is O(n²), all you need to do is find any two constants c and n0 where the relation holds. For example, choosing 25 for c and 1 for n0 is sufficient because every value of 10n²+15n is less than or equal to 25n² for n >= 1. You can similarly prove that t2(n) is O(n).
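
The algebra is the actual proof (15n <= 15n² whenever n >= 1), but you can sanity-check the chosen constants with a sketch like this one, which spot-checks the relation over a range of n values:

public class BigOhCheck
{
   public static void main(String[] args)
   {
      int c = 25, n0 = 1;
      for (int n = n0; n <= 1000; n++)
      {
         long f = 10L * n * n + 15L * n;   // f(n) = 10n^2 + 15n
         long bound = (long) c * n * n;    // c * g(n) = 25n^2
         if (f > bound)
            throw new AssertionError("bound violated at n = " + n);
      }
      System.out.println("10n^2 + 15n <= 25n^2 for all checked n >= 1");
   }
}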

Comparing algorithms with Big Oh

Suppose the Selection Sort algorithm is followed by the Array Printing algorithm. Because each algorithm offers its own time-complexity function, what is the overall time-complexity function for both algorithms? The answer is governed by the following rule:

If f1(n) = O(g(n)) and f2(n) = O(h(n)) then:

  • (A) f1(n)+f2(n) = max(O(g(n)), O(h(n)))
  • (B) f1(n)*f2(n) = O(g(n)*h(n)).

Part A covers cases where algorithms follow each other sequentially. For the Selection Sort algorithm followed by the Array Printing algorithm, the overall time-complexity function is the maximum of each algorithm's time-complexity function, which happens to be O(n²) (assuming that comparisons are the dominant steps).

Part B covers cases where one algorithm nests inside another. For example, suppose the Array Printing algorithm is called after Selection Sort performs an exchange. Assuming that the sort's time-complexity function is O(n²) (comparisons are dominant), the overall time complexity changes to O(n³).

How do you choose an efficient algorithm that meets your application's needs? Start by obtaining the Big Oh-bounded time-complexity functions for the candidate algorithms, then decide the range of n values that will be input to these functions (and, hence, the algorithms).

Because it helps to see the impact of various n values in a tabular format, I've constructed a table that correlates the number of steps with common Big Oh-bounded time-complexity functions and various n values. This table is presented in Figure 3.

Figure 3. Correlating step counts with common Big Oh-bounded time-complexity functions and various n values (image: Jeff Friesen)

The Big Oh-bounded time-complexity functions are sorted from the most efficient function (constant) at the top to the least efficient function (exponential) at the bottom. As you move down the table, notice the functions becoming less efficient (with more steps to complete) for n values starting at 16.

It would be great if all algorithms were O(1) because they would all be equally efficient. Because this doesn't happen in the real world, you need to carefully choose the most efficient algorithm based on Big Oh-bounded time-complexity functions and the desired range of n values.

Keep in mind that more efficient algorithms may be harder to code than less efficient ones. If the range of n input values doesn't result in too many steps, you may find that it's better to use a less efficient algorithm with a smaller input range than a more efficient algorithm with a larger input range. You'll see an example of this in Part 2.

Conclusion to Part 1

Part 1 has introduced you to the fundamentals of data structures and algorithms. Part 2 moves from the theoretical to the practical, taking you on a tour of Java arrays and their algorithms.
