
Jump into Java microframeworks, Part 3: Spark

An extra-lightweight, flexible, and scalable architecture for single-page web apps


Spark makes fewer assumptions than the other microframeworks introduced in this short series, and it is also the most lightweight of the three stacks. Spark distills request handling to its simplest form, and it supports a variety of view templates. In Part 1 you set up a Spark project in your Eclipse development environment, loaded some dependencies via Maven, and learned Spark programming basics with a simple example. Now we'll extend the Spark Person application, adding persistence and other capabilities that you would expect from a production-ready web app.

Data persistence in Spark

If you followed my introduction to Ninja, you'll recall that Ninja uses Guice for persistence instrumentation, with JPA/Hibernate being the default choice. Spark makes no such assumptions about the persistence layer. You can choose from a wide range of options, including JDBC, Ebean, and JPA. In this case we'll use JDBC, which I'm choosing for its openness (it won't limit our choice of database) and scalability. As I did with the Ninja example app, I'm using a MariaDB instance on localhost. Listing 1 shows the database schema for the Person application that we started developing in Part 1.

Listing 1. Simple database schema for a Spark app



create table person (
    first_name varchar(200),
    last_name varchar(200),
    id int not null auto_increment primary key
);



CRUD (create, read, update, delete) capabilities are the heart of object-oriented persistence, so we'll begin by setting up the Person app's create-person functionality. Instead of coding the CRUD operations straightaway, we'll start with some back-end infrastructure. Listing 2 shows a basic DAO layer interface for Spark.

Listing 2. DAO.java interface



package org.mtyson.dao;

import java.util.Map;

public interface DAO {

	boolean addPerson(Map<String, Object> data);

}



Next we'll add the JdbcDAO implementation. For now we're just blocking out a stub that accepts a map of data and returns success. Later we'll use that data to define the entity fields.

Listing 3. JdbcDAO.java implementation



package org.mtyson.dao;

import java.util.Map;

public class JdbcDAO implements DAO {

	@Override
	public boolean addPerson(Map<String, Object> data) {
		// Stub: pretend the insert succeeded; we'll use the data map later
		return true;
	}

}



We'll also need a Controller class that takes the DAO as an argument. The Controller in Listing 4 is a stub that returns a JSON string describing success or failure.

Listing 4. A stub Controller



package org.mtyson.service;

import java.util.HashMap;
import java.util.Map;

import org.mtyson.dao.DAO;

public class Controller {

	private DAO dao;

	public Controller(DAO dao) {
		super();
		this.dao = dao;
	}

	public String addPerson(String body) {
		Map<String, Object> data = new HashMap<String, Object>(); // the request body isn't parsed yet
		if (dao.addPerson(data)) {
			return "{\"message\":\"Added a person!\"}";
		} else {
			return "{\"message\":\"Failed to add a person\"}";
		}
	}

}



Now we can reference the new controller and DAO layers in App.java, the main class for our Spark application:

Listing 5. App.java



import org.mtyson.dao.DAO;
import org.mtyson.dao.JdbcDAO;
import org.mtyson.service.Controller;

import spark.Spark;

public class App {

	private final static DAO dao = new JdbcDAO();
	private final static Controller controller = new Controller(dao);

	public static void main(String[] args) {
		//...
		Spark.post("/person", (req, res) -> { return controller.addPerson(req.body()); }); // 1
	}

}



Notice the line in Listing 5 commented with the number 1. You'll recall from Part 1 that this is how we handle a route in Spark. In the route-handler lambda we access the App.controller member (lambdas have full access to the enclosing class context) and call its addPerson() method, passing in the request body via req.body(). The request body is expected to be JSON containing the fields for the new Person entity.
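
If the lambda syntax is new to you, note that it is just shorthand for implementing Spark's Route interface. Purely for illustration, here is an equivalent anonymous-class version of the same route (the app keeps the lambda form; spark.Route, spark.Request, and spark.Response would need to be imported):

// Equivalent to the lambda route in Listing 5 (Spark 2.x), spelled out
// as an anonymous implementation of Spark's Route interface
Spark.post("/person", new Route() {
	@Override
	public Object handle(Request req, Response res) throws Exception {
		return controller.addPerson(req.body());
	}
});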

If we now hit the POST /person URL (using Postman, which I introduced in Part 2), we'll get back a message indicating success. At this point, though, the response is just a canned message; nothing has actually been saved. For that we need to populate our database.

Populating the database

We'll use JdbcDAO to add a row or two to our database. To set this up, we first need to add some items to pom.xml, the application's Maven dependency file. The updated POM in Listing 6 includes a MySQL JDBC driver and Apache DBUtils, a simple wrapper library that saves us from managing the JDBC plumbing ourselves. I've also included Boon, a JSON project that is reputed to be one of the fastest ways to process JSON in Java. If you're familiar with Jackson or GSON, Boon does the same thing with a similar syntax. We'll put Boon to use shortly.

Listing 6. Add MySQL, DBUtils, and Boon to Maven POM



<dependency>
	<groupId>mysql</groupId>
	<artifactId>mysql-connector-java</artifactId>
	<version>5.1.37</version>
</dependency>
<dependency>
	<groupId>commons-dbutils</groupId>
	<artifactId>commons-dbutils</artifactId>
	<version>1.6</version>
</dependency>
<dependency>
	<groupId>io.fastjson</groupId>
	<artifactId>boon</artifactId>
	<version>0.33</version>
</dependency>



Now, change JdbcDAO to look like Listing 7. The addPerson() method takes the first_name and last_name values from the map argument and uses them to insert a Person into the database.

Listing 7. Add a Person to the database



package org.mtyson.dao;

import java.sql.SQLException;
import java.util.Map;

import org.apache.commons.dbutils.QueryRunner;

import com.mysql.jdbc.jdbc2.optional.MysqlDataSource;

public class JdbcDAO implements DAO {

	private static MysqlDataSource dataSource;

	static {
		try {
			dataSource = new MysqlDataSource();
			dataSource.setUser("root");
			dataSource.setPassword("password");
			dataSource.setServerName("localhost");
			dataSource.setDatabaseName("spark_app");
		} catch (Exception e) {
			throw new ExceptionInInitializerError(e);
		}
	}

	@Override
	public boolean addPerson(Map<String, Object> data) {
		QueryRunner run = new QueryRunner(dataSource);
		try {
			// DBUtils handles connection, statement, and cleanup for us
			run.update("INSERT INTO person (first_name, last_name) VALUES (?,?)",
					data.get("first_name"), data.get("last_name"));
		} catch (SQLException sqle) {
			throw new RuntimeException("Problem updating", sqle);
		}
		return true;
	}

}



In Listing 7 we obtained a JDBC dataSource instance, which we'll use when connecting to the database instance running on localhost. In a true production scenario we'd need to do something about connection pooling, but we'll side-step that for the present. (Note that you'll want to change the root and password placeholders above to something unique for your own implementation.)
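
If you do want pooling later, one drop-in option is Apache Commons DBCP2. Here's a minimal sketch, assuming you add the org.apache.commons:commons-dbcp2 dependency to the POM (the pool size and URL here are illustrative, not part of the example app):

import org.apache.commons.dbcp2.BasicDataSource;

public class JdbcDAO implements DAO {

	// Pooled replacement for the MysqlDataSource in Listing 7;
	// requires the commons-dbcp2 dependency in pom.xml
	private static BasicDataSource dataSource;

	static {
		dataSource = new BasicDataSource();
		dataSource.setUrl("jdbc:mysql://localhost/spark_app");
		dataSource.setUsername("root");      // change for your environment
		dataSource.setPassword("password");  // change for your environment
		dataSource.setMaxTotal(10);          // cap the pool at ten connections
	}

	// ... addPerson() is unchanged
}

Because QueryRunner accepts any javax.sql.DataSource, the rest of JdbcDAO doesn't change.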

Updating the controller

Now let's return to the controller and update it. The updated controller shown in Listing 8 takes a JSON String and converts it into a Map, which can be passed to the DAO. Here is where Boon lives up to its reputation, because the String argument will be a bit of JSON from the UI.

Listing 8. Controller converts a JSON String to a Java Map



package org.mtyson.service;

import java.util.Map;

import org.boon.json.JsonFactory;
import org.boon.json.ObjectMapper;
import org.mtyson.dao.DAO;

public class Controller {

	private DAO dao;

	ObjectMapper mapper = JsonFactory.create(); // 1

	public Controller(DAO dao) {
		super();
		this.dao = dao;
	}

	public String addPerson(String json) {
		Map<String, Object> data = mapper.readValue(json, Map.class); // 2
		if (dao.addPerson(data)) { // 3
			return "{\"message\":\"Added a person!\"}";
		} else {
			return "{\"message\":\"Failed to add a person\"}";
		}
	}

}



The line marked 1 creates a mapper that we can use to convert JSON (it's a class member -- this ObjectMapper is designed to be reused). The line marked 2 uses the mapper to parse the string into a Java Map. Finally, in line 3, the map is passed into the DAO.

Now if we send a POST request with the JSON body shown in Listing 9, our new Person will be added to the database. Remember that the primary key is an auto-increment field, so it isn't included in the request body.

Listing 9. JSON body for the create Person POST



{"first_name":"David","last_name":"Gilmour"}



Here's the request displayed in Postman:

Figure 1. Creating a Person from Postman

The statically typed data layer

So far I've demonstrated a dynamically typed approach to creating the Spark data layer, modeling with maps of data rather than explicitly defined classes. If we wanted to push further in the dynamic direction, we could insert a single add(String type, Map data) method in the DAO, which would programmatically persist a given type. For this approach we'd need to write a layer to map from Java to SQL types.
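
To make the idea concrete, here is a hypothetical sketch of such a method as it might appear in JdbcDAO; the method and its naming are my own assumption, not part of the example app:

// Hypothetical generic insert: builds an INSERT statement from the map's
// keys. Caution: 'type' becomes the table name, so it must be validated
// against a whitelist; never splice raw user input into SQL.
public boolean add(String type, Map<String, Object> data) {
	String columns = String.join(", ", data.keySet());
	String placeholders = data.keySet().stream()
			.map(k -> "?")
			.collect(java.util.stream.Collectors.joining(", "));
	String sql = "INSERT INTO " + type + " (" + columns + ") VALUES (" + placeholders + ")";
	try {
		new QueryRunner(dataSource).update(sql, data.values().toArray());
		return true;
	} catch (SQLException sqle) {
		throw new RuntimeException("Problem inserting " + type, sqle);
	}
}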

The more common approach to persistence is to use model classes, so let's take a quick look at how that would work in Spark. Then we'll wrap up the remaining Person CRUD.

Persistence with a model class

For a more traditional, statically typed approach to the data layer, we start by adding a Person class to the original stub application, as seen in Listing 10. This will be our model.

Listing 10. Person model



package org.mtyson.model;

import org.boon.json.annotations.JsonProperty;

public class Person {

	private Long id;

	@JsonProperty("first_name")
	private String firstName;

	@JsonProperty("last_name")
	private String lastName;

	public Long getId() {
		return id;
	}

	public void setId(Long id) {
		this.id = id;
	}

	public String getFirstName() {
		return firstName;
	}

	public void setFirstName(String firstName) {
		this.firstName = firstName;
	}

	public String getLastName() {
		return lastName;
	}

	public void setLastName(String lastName) {
		this.lastName = lastName;
	}

}



The @JsonProperty annotation in Listing 10 tells Boon to map JSON's underscore format (which corresponds to the HTML form fields) to the camel-cased fields of the Java class. If you're familiar with Jackson, you'll notice that Boon has borrowed some of its annotations. Also notice the modified addPerson() method on the controller below, which shows how the JSON String is converted into the object.

Listing 11. JSON-to-Java Person instance conversion



public String addPerson(String json) {
	Person person = mapper.fromJson(json, Person.class); // Here's where we get our Person instance
	if (dao.addPerson(person)) {
		return "{\"message\":\"Added a person!\"}";
	} else {
		return "{\"message\":\"Failed to add a person\"}";
	}
}
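
For this to compile, the DAO interface's addPerson() signature must change to accept a Person as well:

package org.mtyson.dao;

import org.mtyson.model.Person;

public interface DAO {

	// The map-based signature from Listing 2 gives way to the model class
	boolean addPerson(Person person);

}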



In this case we aren't doing anything but persisting the Person object, but we can now use the model instance in whatever business logic we please. In Listing 12 I've updated the JdbcDAO.addPerson() method to use the Person class. The difference here is that the first and last names are now pulled from the Person getters, rather than from the Map used in Listing 7.

Listing 12. JdbcDAO with Person class



@Override
public boolean addPerson(Person person) {
	QueryRunner run = new QueryRunner(dataSource);
	try {
		run.update("INSERT INTO person (first_name, last_name) VALUES (?,?)",
				person.getFirstName(), person.getLastName());
	} catch (SQLException sqle) {
		throw new RuntimeException("Problem updating", sqle);
	}
	return true;
}



The Person application's request-processing infrastructure now consists of three layers (route handler, controller, and DAO), successively converting request data from a JSON string into a Java object that is finally persisted to the database.

Developing the Spark UI

We have our model and a way to persist it. Next we'll begin developing a UI to save and view objects in the database. In Spark, this means adding static JavaScript resources to use in the template.html page.

To start, create a src/main/resources/public folder to hold the new resources, as shown in Figure 2.

Figure 2. Adding src/main/resources/public to the Eclipse project

Integrating jQuery

For our JavaScript tool we'll use jQuery, which is especially useful for Ajax and DOM handling. If you don't have it already, download the latest version of jQuery (2.1.4 as of this writing) and place it in your new public folder; you can either create a file in public and paste the jQuery source into it, or download the file itself and copy it into the directory.

Next, using the same process, add the Serialize Object jQuery plugin. This plugin will manage the process of converting the HTML form into a JSON format that the server can understand. (Recall that the addPerson() method from Listing 8 expects a JSON string.)

Finally, add a file called app.js into the same directory. As you can see in Listing 13, app.js contains simple controls for the template.html.
