All of the enterprise Java applications I've developed executed SQL directly. Early applications used SQL exclusively, whereas the later ones, which used a persistence framework, used SQL in a few components. Initially, I used plain JDBC to execute the SQL statements, but later on I often ended up writing mini-frameworks to handle the more tedious aspects of using JDBC. I even briefly used Spring's JDBC classes, which eliminate much of the boilerplate code. But neither the homegrown frameworks nor the Spring classes addressed the problem of mapping between Java classes and SQL statements, which is why I was excited to come across iBATIS.
In addition to completely insulating the application from connections and prepared statements, iBATIS maps JavaBeans to SQL statements using XML descriptor files. It uses JavaBean introspection to map bean properties to prepared statement placeholders and to construct beans from a ResultSet. It also includes support for database-generated primary keys, automatic loading of related objects, caching, and lazy loading.
In this way, iBATIS eliminates much of the drudgery of executing SQL statements: instead of writing a lot of low-level JDBC code, you write an XML descriptor file and make a few calls to the iBATIS API.
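To make this concrete, here is a hedged sketch of what the iBATIS 2.x approach might look like. The Order bean, the statement ids (getOrder, updateOrder), and the file names are illustrative assumptions rather than code from this article; the sketch assumes a sqlMap descriptor that defines those statements and a result map for Order.

    import java.io.IOException;
    import java.io.Reader;
    import java.sql.SQLException;

    import com.ibatis.common.resources.Resources;
    import com.ibatis.sqlmap.client.SqlMapClient;
    import com.ibatis.sqlmap.client.SqlMapClientBuilder;

    // Hypothetical JavaBean whose properties iBATIS binds to statement
    // placeholders and populates from ResultSet columns.
    class Order {
        private long orderId;
        private int version;
        private String status;

        public long getOrderId() { return orderId; }
        public void setOrderId(long orderId) { this.orderId = orderId; }
        public int getVersion() { return version; }
        public void setVersion(int version) { this.version = version; }
        public String getStatus() { return status; }
        public void setStatus(String status) { this.status = status; }
    }

    public class OrderDao {
        private final SqlMapClient sqlMapClient;

        public OrderDao() throws IOException {
            // Load the top-level configuration, which lists the sqlMap
            // descriptor files (the file name is an assumption).
            Reader reader = Resources.getResourceAsReader("sqlMapConfig.xml");
            sqlMapClient = SqlMapClientBuilder.buildSqlMapClient(reader);
        }

        // Assumes the descriptor defines a <select id="getOrder"> statement
        // with a result map that constructs an Order bean from the row.
        public Order findOrder(long orderId) throws SQLException {
            return (Order) sqlMapClient.queryForObject("getOrder", new Long(orderId));
        }

        // Assumes an <update id="updateOrder"> statement whose placeholders
        // are bound to the Order bean's properties.
        public void saveOrder(Order order) throws SQLException {
            sqlMapClient.update("updateOrder", order);
        }
    }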
Of course, iBATIS cannot address the overhead of developing and maintaining SQL or its lack of portability. To avoid those problems you need to use a persistence framework. A persistence framework maps domain objects to the database. It provides an API for creating, retrieving, and deleting objects. It automatically loads objects from the database as the application navigates relationships between objects and updates the database at the end of a transaction. A persistence framework automatically generates SQL using the object/relational mapping, which is typically specified by an XML document that defines how classes are mapped to tables, how fields are mapped to columns, and how relationships are mapped to foreign keys and join tables.
EJB 2 had its own limited form of persistence framework: entity beans. However, EJB 2 entity beans have so many deficiencies, and developing and testing them is so tedious, that they should rarely be used. What's more, it is unclear how some of their deficiencies will be addressed by EJB 3.
The two most popular lightweight persistence frameworks are JDO, which is a Sun standard, and Hibernate, which is an open source project. They both provide transparent persistence for POJO classes. You can develop and test your business logic using POJO classes without worrying about persistence, then map the classes to the database schema. In addition, they both work inside and outside the application server, which simplifies development further. Developing with Hibernate and JDO is so much more pleasurable than with old-style EJB 2 entity beans.
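As an illustration of what transparent persistence looks like in practice, here is a hedged sketch using Hibernate 3's Session API (JDO's PersistenceManager plays the analogous role). The Order POJO, its approve() method, and the configuration file name are assumptions; the object/relational mapping itself would live in a separate mapping document.

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;
    import org.hibernate.cfg.Configuration;

    public class OrderService {

        // Assumes hibernate.cfg.xml lists the mapping documents for the POJOs.
        private final SessionFactory sessionFactory =
                new Configuration().configure().buildSessionFactory();

        public void approveOrder(Long orderId) {
            Session session = sessionFactory.openSession();
            Transaction tx = session.beginTransaction();
            try {
                // Hibernate generates the SELECT from the mapping metadata...
                Order order = (Order) session.get(Order.class, orderId);
                order.approve(); // plain POJO business logic, no persistence code
                // ...and generates the UPDATE when the transaction commits.
                tx.commit();
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            } finally {
                session.close();
            }
        }
    }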
In addition to deciding how to access the database, you must decide how to handle database concurrency. Let's look at why this is important as well as the available options.
Almost all enterprise applications have multiple users and background threads that concurrently update the database. It's quite common for two database transactions to access the same data simultaneously, which can make the database inconsistent or cause the application to misbehave. Consequently, most applications must handle concurrent access to shared data, a requirement that can affect the design of both the business and persistence tiers.
Applications must, of course, handle concurrent access to shared data regardless of whether they are using lightweight frameworks or EJB. However, unlike EJB 2 entity beans, which required you to use vendor-specific extensions, JDO and Hibernate directly support most of the concurrency mechanisms. What's more, using them is either a simple configuration issue or requires only a small amount of code.
In this section, you will get a brief overview of the different options for handling concurrent updates in database transactions, which are transactions that do not involve any user input. In the next section, I briefly describe how to handle concurrent updates in longer application-level transactions, which are transactions that involve user input and consist of a sequence of database transactions.
Sometimes you can simply rely on the database to handle concurrent access to shared data. Databases can be configured to execute database transactions that are, in database-speak, isolated from one another. Don't worry if you are not familiar with this concept; for now the key thing to remember is that if the application uses fully isolated transactions, then the net effect of executing two transactions simultaneously will be as if they were executed one after the other.
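For concreteness, here is a hedged sketch of what requesting full isolation might look like at the JDBC level. The TRANSACTION_SERIALIZABLE constant is standard JDBC; how the DataSource is obtained and what SQL runs inside the transaction are left as assumptions.

    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class FullyIsolatedTransaction {

        public void execute(DataSource dataSource) throws SQLException {
            Connection connection = dataSource.getConnection();
            try {
                // Ask the database to run this transaction as if it were the
                // only one touching the data.
                connection.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
                connection.setAutoCommit(false);

                // ... execute the transaction's SQL statements here ...

                connection.commit();
            } catch (SQLException e) {
                connection.rollback();
                throw e;
            } finally {
                connection.close();
            }
        }
    }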
On the surface this sounds extremely simple, but the problem with fully isolated transactions is that, because of how the database implements them, they can reduce performance to an unacceptable degree. For this reason, many applications avoid them and instead use what is termed optimistic or pessimistic locking, both of which are described a bit later.
One way to handle concurrent updates is to use optimistic locking. Optimistic locking works by having the application check whether the data it is about to update has been changed by another transaction since it was read. One common way to implement optimistic locking is to add a version column to each table, which is incremented by the application each time it changes a row. Each UPDATE statement's WHERE clause checks that the version number has not changed since it was read. An application can determine whether the UPDATE statement succeeded by checking the row count returned by PreparedStatement.executeUpdate(). If the row has been updated or deleted by another transaction, the application can roll back the transaction and start over.
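A hedged sketch of this version-check idiom in plain JDBC follows; the ORDERS table, its VERSION column, and the method signature are illustrative.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class OptimisticLockingDao {

        // Returns false if another transaction updated or deleted the row
        // since it was read, in which case the caller can roll back the
        // transaction and start over.
        public boolean updateOrderStatus(Connection connection, long orderId,
                                         int versionWhenRead, String newStatus)
                throws SQLException {
            PreparedStatement stmt = connection.prepareStatement(
                    "UPDATE ORDERS SET STATUS = ?, VERSION = VERSION + 1 "
                  + "WHERE ORDER_ID = ? AND VERSION = ?");
            try {
                stmt.setString(1, newStatus);
                stmt.setLong(2, orderId);
                stmt.setInt(3, versionWhenRead);
                // A row count of 1 means the version check passed; 0 means the
                // row was changed or deleted by another transaction.
                return stmt.executeUpdate() == 1;
            } finally {
                stmt.close();
            }
        }
    }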
It is quite easy to implement an optimistic locking mechanism in an application that executes SQL statements directly. But it is even easier when using persistence frameworks such as JDO and Hibernate, because they provide optimistic locking as a configuration option. Once it is enabled, the persistence framework automatically generates SQL UPDATE statements that perform the version check.
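With Hibernate, for example, enabling this typically amounts to giving the POJO a version property and declaring it in the mapping document. The sketch below is an assumption about how that might look for the hypothetical Order class, not code from this article.

    // Declared in the Hibernate mapping document with something like
    // <version name="version" column="VERSION"/>. Hibernate then increments
    // the value on every change and adds "AND VERSION = ?" to the UPDATE
    // statements it generates.
    public class Order {
        private Long id;
        private int version;   // managed by the persistence framework
        private String status;

        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }
        public int getVersion() { return version; }
        public void setVersion(int version) { this.version = version; }
        public String getStatus() { return status; }
        public void setStatus(String status) { this.status = status; }
    }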
Optimistic locking derives its name from the fact that it assumes concurrent updates are rare and that, instead of preventing them, the application detects and recovers from them. An alternative approach, pessimistic locking, assumes that concurrent updates will occur and must be prevented.
With pessimistic locking, a transaction acquires locks on the rows when it reads them, which prevents other transactions from accessing those rows. The details depend on the database, and unfortunately not all databases support pessimistic locking. If the database does support it, it is quite easy to implement a pessimistic locking mechanism in an application that executes SQL statements directly. However, as you would expect, using pessimistic locking in a JDO or Hibernate application is even easier: JDO provides pessimistic locking as a configuration option, and Hibernate provides a simple programmatic API for locking objects.
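In SQL terms this usually means issuing the SELECT with a FOR UPDATE clause; in Hibernate 3 the equivalent can be requested programmatically with a LockMode, as in the following hedged sketch (the Order class is again an assumption).

    import org.hibernate.LockMode;
    import org.hibernate.Session;

    public class PessimisticLockingExample {

        // Loads the order and, on databases that support it, appends a
        // FOR UPDATE clause to the generated SELECT so that other
        // transactions block until this one commits or rolls back.
        public Order loadOrderForUpdate(Session session, Long orderId) {
            return (Order) session.get(Order.class, orderId, LockMode.UPGRADE);
        }
    }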
In addition to handling concurrency within a single database transaction, you must often handle concurrency across a sequence of database transactions.
Isolated transactions, optimistic locking, and pessimistic locking only work within a single database transaction. However, many applications have use-cases that are long running and that consist of multiple database transactions that read and update shared data. For example, suppose a use-case describes how a user edits an order (the shared data). This is a relatively lengthy process, which might take as long as several minutes and consists of multiple database transactions. Because data is read in one database transaction and modified in another, the application must handle concurrent access to shared data differently. It must use the Optimistic Offline Lock pattern or the Pessimistic Offline Lock pattern, two more patterns described by Fowler in Patterns of Enterprise Application Architecture.
One option is to extend the optimistic locking mechanism described earlier and check in the final database transaction of the editing process that the data has not changed since it was first read. You can, for example, do this by using a version number column in the shared data's table. At the start of the editing process, the application stores the version number in the session state. Then, when the user saves their changes, the application makes sure that the saved version number matches the version number in the database.
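A hedged sketch of that flow in a web application follows; the HttpSession attribute name and the OrderDao methods are illustrative helpers, not APIs from this article.

    import javax.servlet.http.HttpSession;

    public class EditOrderWorkflow {

        // First database transaction: read the order and remember its version
        // in the user's session state.
        public void beginEdit(HttpSession session, OrderDao orderDao, long orderId) {
            Order order = orderDao.findOrder(orderId);
            session.setAttribute("orderVersion", new Integer(order.getVersion()));
        }

        // Final database transaction: save only if the version is unchanged.
        // Returns false if another user changed the order while it was being
        // edited, in which case the user must start over.
        public boolean saveEdit(HttpSession session, OrderDao orderDao, Order editedOrder) {
            int versionWhenEditingBegan =
                    ((Integer) session.getAttribute("orderVersion")).intValue();
            // The DAO's UPDATE uses a "... WHERE ORDER_ID = ? AND VERSION = ?"
            // clause like the earlier optimistic locking sketch.
            return orderDao.updateIfVersionMatches(editedOrder, versionWhenEditingBegan);
        }
    }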
Because the Optimistic Offline Lock pattern only detects changes when the user tries to save their work, it works well only when starting over is not a burden on the user. For use-cases where the user would be extremely annoyed by having to discard several minutes' work, a much better option is the Pessimistic Offline Lock pattern.
The Pessimistic Offline Lock pattern handles concurrent updates across a sequence of database transactions by locking the shared data at the start of the editing process, which prevents other users from editing it. It is similar to the pessimistic locking mechanism described earlier except that the locks are implemented by the application rather than the database. Because only one user at a time is able to edit the shared data, they are guaranteed to be able to save their changes.
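One common way to implement such application-level locks is a lock-owner column (or a separate lock table) that is claimed with a single UPDATE. The sketch below assumes an ORDERS table with a LOCK_OWNER column, which is an illustration rather than a prescribed schema.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class OfflineLockManager {

        // Claims an application-level lock on the order by writing the user's
        // id into the LOCK_OWNER column. The UPDATE only succeeds if no one
        // currently owns the lock, so at most one user at a time can edit the
        // order. Returns false if another user already holds the lock.
        public boolean acquireEditLock(Connection connection, long orderId, String userId)
                throws SQLException {
            PreparedStatement stmt = connection.prepareStatement(
                    "UPDATE ORDERS SET LOCK_OWNER = ? "
                  + "WHERE ORDER_ID = ? AND LOCK_OWNER IS NULL");
            try {
                stmt.setString(1, userId);
                stmt.setLong(2, orderId);
                return stmt.executeUpdate() == 1;
            } finally {
                stmt.close();
            }
        }

        // Releases the lock when the user saves or cancels their changes.
        public void releaseEditLock(Connection connection, long orderId, String userId)
                throws SQLException {
            PreparedStatement stmt = connection.prepareStatement(
                    "UPDATE ORDERS SET LOCK_OWNER = NULL "
                  + "WHERE ORDER_ID = ? AND LOCK_OWNER = ?");
            try {
                stmt.setLong(1, orderId);
                stmt.setString(2, userId);
                stmt.executeUpdate();
            } finally {
                stmt.close();
            }
        }
    }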