Java 101: Datastructures and algorithms in Java, Part 1

Learn the fundamentals of datastructures and algorithms in Java

Datastructures and algorithms in Java
Vinoth Chandar

Datastructures and algorithms are essential to computer science, which is the study of data, its representation in memory, and its transformation from one form to another. In programming, we use datastructures to store and organize data, and we use algorithms to manipulate the data in those structures. The more you understand about datastructures and algorithms, the more efficient your Java programs will be.

This article launches a three-part series introducing datastructures and algorithms. In Part 1, you'll learn what a datastructure is and how datastructures are classified. You'll also learn what an algorithm is, how algorithms are represented, and how to use time and space complexity functions to compare similar algorithms.

What is a datastructure?

Datastructures are based on abstract data types (ADTs). Wikipedia defines an ADT as a mathematical model for a data type, where the data type is defined by its behavior from the point of view of a user of the data: the possible values, the possible operations on data of that type, and the behavior of those operations.

An ADT doesn't care about the memory representation of its values or how its operations are implemented. It's like a Java interface, which is a data type that's disconnected from any implementation. In contrast, a datastructure is a concrete implementation of one or more ADTs, similar to how Java classes implement interfaces.

Examples of ADTs include Employee, Vehicle, Array, and List. Consider the List ADT (also known as the Sequence ADT), which describes an ordered collection of elements that share a common type. Each element in this collection has its own position, and duplicate elements are allowed. Basic operations supported by the List ADT include:

  • Creating a new and empty list
  • Appending a value to the end of the list
  • Inserting a value within the list
  • Deleting a value from the list
  • Iterating over the list
  • Destroying the list

Datastructures that can implement the List ADT include fixed-size and dynamically sized one-dimensional arrays and singly-linked lists. (You'll be introduced to arrays in Part 2, and linked lists in Part 3.)
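To make the ADT-versus-implementation distinction concrete, here is a minimal sketch of how the List ADT's operations might be captured as a Java interface. The name SimpleList and its method names are illustrative, not part of any standard API; any array-backed or linked implementation of this interface would be a datastructure:

public interface SimpleList<E>
{
   void append(E value);            // append a value to the end of the list
   void insert(int index, E value); // insert a value at the given position
   E delete(int index);             // delete and return the value at the given position
   E get(int index);                // read the value at a position (supports iteration)
   int size();                      // number of elements currently in the list
}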

Classifying datastructures

There are many kinds of datastructures, ranging from single variables to arrays or linked lists of objects containing multiple fields. All datastructures can be classified as primitives or aggregates, and some are classified as containers.

Primitives versus aggregates

The simplest kind of datastructure stores single data items; for example, a variable that stores a Boolean value or a variable that stores an integer. I refer to such datastructures as primitives.

Many datastructures are capable of storing multiple data items. For example, an array can store multiple data items in its various slots, and an object can store multiple data items via its fields. I refer to these datastructures as aggregates.
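A short sketch of both kinds follows; the variable names are purely illustrative, and java.awt.Point is used only because it exposes its fields directly:

import java.awt.Point;

public class PrimitivesAndAggregates
{
   public static void main(String[] args)
   {
      boolean done = false;          // primitive: stores a single data item
      int[] grades = { 86, 92, 75 }; // aggregate: stores multiple data items in its slots
      Point p = new Point(1, 2);     // aggregate: an object stores multiple data items via its fields (x and y)
      System.out.println(done + " " + grades.length + " " + p.x);
   }
}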

All of the datastructures we'll look at in this series are aggregates.

Containers

Anything in which data items are stored and retrieved could be considered a datastructure. Examples include the datastructures derived from the previously mentioned Employee, Vehicle, Array, and List ADTs.

Many datastructures are designed to describe various entities. Instances of an Employee class are datastructures that exist to describe various employees, for instance. In contrast, some datastructures exist as generic storage vessels for other datastructures. For example, an array can store primitive values or object references. I refer to this latter category of datastructures as containers.
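Here is a small sketch of the difference, with Employee standing in for a hypothetical entity-describing class and arrays acting as containers:

public class ContainerDemo
{
   static class Employee // a datastructure that describes an entity
   {
      String name;
      Employee(String name) { this.name = name; }
   }

   public static void main(String[] args)
   {
      // Arrays are containers: generic storage vessels for other data.
      int[] counts = { 3, 1, 4 };                                      // container of primitive values
      Employee[] staff = { new Employee("Ada"), new Employee("Bob") }; // container of object references
      System.out.println(counts.length + " counts, " + staff.length + " employees");
   }
}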

As well as being aggregates, all of the datastructures we'll look at in this series are containers.

Design patterns and datastructures

It's become fairly common practice to use design patterns to introduce university students to datastructures. A Brown University paper surveys several design patterns that are useful for designing high-quality datastructures. Among other things, the paper demonstrates that the Adapter pattern is useful in the design of stacks and queues. The demonstration code is shown in Listing 1.

Listing 1. DequeStack.java

public class DequeStack implements Stack
{
   Deque D; // holds the elements of the stack

   public DequeStack()
   {
      D = new MyDeque();
   }

   @Override
   public int size()
   {
      return D.size();
   }

   @Override
   public boolean isEmpty()
   {
      return D.isEmpty();
   }

   @Override
   public void push(Object obj)
   {
      D.insertLast(obj); // push maps onto the deque's insertLast
   }

   @Override
   public Object top() throws StackEmptyException
   {
      try
      {
         return D.lastElement(); // top maps onto the deque's lastElement
      }
      catch(DequeEmptyException err)
      {
         throw new StackEmptyException();
      }
   }

   @Override
   public Object pop() throws StackEmptyException
   {
      try
      {
         return D.removeLast(); // pop maps onto the deque's removeLast
      }
      catch(DequeEmptyException err)
      {
         throw new StackEmptyException();
      }
   }
}

Listing 1 excerpts the Brown University paper's DequeStack class, which demonstrates the Adapter pattern. Note that Stack and Deque are interfaces that describe Stack and Deque ADTs. MyDeque is a class that implements Deque.

DequeStack adapts MyDeque so that it can implement Stack. All of DequeStack's methods are one-line calls to the Deque interface's methods. However, there is a small wrinkle: Deque exceptions are converted into Stack exceptions.
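As a quick usage sketch, assuming the paper's Stack, MyDeque, StackEmptyException, and related types are available on the classpath (the class name DequeStackDemo is just for illustration), client code works with the adapter exactly as it would with any other stack:

public class DequeStackDemo
{
   public static void main(String[] args) throws StackEmptyException
   {
      Stack s = new DequeStack();  // the adapter from Listing 1
      s.push("alpha");
      s.push("beta");
      System.out.println(s.pop()); // beta
      System.out.println(s.top()); // alpha
   }
}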

What is an algorithm?

Historically used as a tool for mathematical computation, algorithms are deeply connected with computer science, and with datastructures in particular. An algorithm is a sequence of instructions that accomplishes a task in a finite period of time. Qualities of an algorithm are as follows:

  • Receives zero or more inputs
  • Produces at least one output
  • Consists of clear and unambiguous instructions
  • Terminates after a finite number of steps
  • Is basic enough that a person can carry it out using a pencil and paper

Note that while programs may be algorithmic in nature, many programs do not terminate without external intervention.

Many code sequences qualify as algorithms. One example is a code sequence that prints a report. More famously, Euclid's algorithm is used to calculate the greatest common divisor of two integers. A case could even be made that a datastructure's basic operations (such as storing a value in an array slot) are algorithms. In this series, for the most part, I'll focus on higher-level algorithms used to process datastructures, such as the Binary Search and Matrix Multiplication algorithms.
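As a concrete example of a simple algorithm, Euclid's algorithm translates into just a few lines of Java. This is a sketch; the method name gcd is simply a convenient label:

public class Euclid
{
   // Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b);
   // when b reaches 0, a holds the greatest common divisor.
   static int gcd(int a, int b)
   {
      while (b != 0)
      {
         int remainder = a % b;
         a = b;
         b = remainder;
      }
      return a;
   }

   public static void main(String[] args)
   {
      System.out.println(gcd(1071, 462)); // prints 21
   }
}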

Representing algorithms

How do you represent an algorithm? Writing code before fully understanding its underlying algorithm can lead to bugs, so what's a better alternative? Two options are flowcharts and pseudocode.

Using flowcharts

A flowchart is a visual representation of an algorithm's control flow. This representation illustrates statements that need to be executed, decisions that need to be made, logic flow (for iteration and other purposes), and terminals that indicate start and end points. Figure 1 reveals the various symbols that flowcharts use to visualize algorithms.

[Image: flowchart symbols (credit: Jeff Friesen)]

Figure 1. Flowcharts use symbols to represent statements, decisions, logic flow, and terminals

Consider an algorithm that initializes a counter to 0, reads characters until a newline (\n) character is seen, increments the counter for each digit character that's been read, and prints the counter's value after the newline character has been read. The flowchart in Figure 2 illustrates this algorithm's control flow.

[Image: flowchart for the digit-counting algorithm (credit: Jeff Friesen)]

Figure 2. This flowchart illustrates the digit-counting algorithm

A flowchart's simplicity and its ability to present an algorithm's control flow visually (so that it is easy to follow) are its major advantages. Flowcharts also have several disadvantages, however:

  • It's easy to introduce errors or inaccuracies into highly detailed flowcharts because of the tedium associated with drawing them.
  • It takes time to position, label, and connect a flowchart's symbols, even using tools to speed up this process. This delay might slow your understanding of an algorithm.
  • Flowcharts belong to the structured programming era and aren't as useful in an object-oriented context. In contrast, the Unified Modeling Language (UML) is more appropriate for creating object-oriented visual representations.

Using pseudocode

An alternative to flowcharts is pseudocode, which is a textual representation of an algorithm that approximates the final source code. Pseudocode is useful for quickly writing down an algorithm's representation. Because syntax is not a concern, there are no hard-and-fast rules for writing pseudocode.

You should strive for consistency when writing pseudocode. Being consistent will make it much easier to translate the pseudocode into actual source code. For example, consider the following pseudocode representation of the previous counter-oriented flowchart:

DECLARE CHARACTER ch = ''
DECLARE INTEGER count = 0
DO
   READ ch
   IF ch GE '0' AND ch LE '9' THEN
      count = count + 1
   END IF
UNTIL ch EQ '\n'
PRINT count
END

The pseudocode first presents a couple of DECLARE statements that introduce the variables ch and count, initialized to '' and 0, respectively. It then presents a DO loop that executes UNTIL ch contains \n (the newline character), at which point the loop ends and a PRINT statement outputs count's value.

For each loop iteration, READ causes a character to be read from the keyboard (or perhaps a file--in this case it doesn't matter what constitutes the underlying input source) and assigned to ch. If this character is a digit (one of 0 through 9), count is incremented by 1.
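As a sanity check, here is one way the pseudocode might translate into Java. This is a sketch that reads from standard input; the class name CountDigits is arbitrary, and end-of-input is treated like a newline so the loop always terminates:

import java.io.IOException;

public class CountDigits
{
   public static void main(String[] args) throws IOException
   {
      int count = 0;
      int ch;
      do
      {
         ch = System.in.read();       // READ ch
         if (ch >= '0' && ch <= '9')  // IF ch GE '0' AND ch LE '9' THEN
            count++;                  //    count = count + 1
      }
      while (ch != '\n' && ch != -1); // UNTIL ch EQ '\n' (-1 signals end of input)
      System.out.println(count);      // PRINT count
   }
}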

Choosing the right algorithm

The datastructures and algorithms you use critically affect two factors in your applications:

  1. Memory usage (for datastructures).
  2. CPU time (for algorithms that interact with those datastructures).

It follows that you should be especially mindful of the algorithms and datastructures you use for applications that will process lots of data. These include applications used for big data and the Internet of Things.

Measuring algorithm efficiency

Some algorithms perform better than others. For example, the Binary Search algorithm is almost always more efficient than the Linear Search algorithm--something you'll see for yourself in Part 2. You want to choose the most efficient algorithm for your application's needs, but that choice might not be as obvious as you would think.

For example, what does it mean, from an efficiency perspective, for the Selection Sort algorithm (also introduced in Part 2) to take 0.4 seconds to sort 10,000 integers on a given machine? That benchmark is only valid for the machine on which the algorithm's implementation runs, for the implementation itself, and for the size of the input data.

A computer scientist measures an algorithm's efficiency in terms of its time complexity and space complexity by using complexity functions to abstract implementation and runtime environment details. Complexity functions reveal the variance in an algorithm's time and space requirements based on the amount of input data:

  • A time-complexity function measures an algorithm's time complexity--meaning how long an algorithm takes to complete.
  • A space-complexity function measures an algorithm's space complexity--meaning the amount of memory overhead required by the algorithm to perform its task.

Both complexity functions are based on the size of input (n), which somehow reflects the amount of input data. Consider the following pseudocode for array printing:

DECLARE INTEGER i, x[] = [ 10, 15, -1, 32 ]
FOR i = 0 TO LENGTH(x) - 1
   PRINT x[i]
NEXT i
END
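In Java, the same loop might look like this (a sketch; note that the loop body executes once per array element, so the work grows with the array's length n):

public class PrintArray
{
   public static void main(String[] args)
   {
      int[] x = { 10, 15, -1, 32 };
      for (int i = 0; i < x.length; i++) // one PRINT per element: n iterations in total
         System.out.println(x[i]);
   }
}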