The streaming approach can also be used when sending message data, which avoids buffering large chunks of data in memory. To do this, the message content is transferred during the method call by means of an InputStream or a ReadableByteChannel. After the message header is written, the body data is transferred from the body stream or channel. Listing 7 shows how implicit output streaming works; in this case the output streaming is managed by the network library. To perform the HTTP call, the user passes in a channel object that represents the handle of a streamable resource.
HttpClient httpClient = new HttpClient();

// open a channel over the file whose content will form the request body
File file = new File("rfc2616.html");
FileChannel fc = new RandomAccessFile(file, "r").getChannel();
HttpRequest req = new HttpRequest("POST", "http://localhost:80/upload/rfc2616.html", "text/html", fc);

// response handler
IHttpResponseHandler responseHandler = new IHttpResponseHandler() {
    public void onResponse(HttpResponse resp) throws IOException {
        int status = resp.getStatus();
        // ...
    }
    // ...
};

// send the request by input streaming (this also works for the call method)
httpClient.send(req, responseHandler);
// ...
In some use cases the output (or body) streaming should be managed by application-level user code. An explicit, user-managed streaming approach requires that the user retrieve an output channel to write the body data. In Listing 8 a message header object, rather than a complete message object, is passed to the send() method. The call returns immediately with an output body channel object, which the application code uses to write the body data. The message-send procedure is finalized by calling the body channel's close() method.
HttpClient httpClient = new HttpClient();

// create an HTTP message header
HttpRequestHeader reqHdr = new HttpRequestHeader("POST", "http://localhost:80/upload/greeting", "text/plain");

// response handler
IHttpResponseHandler responseHandler = new IHttpResponseHandler() {
    public void onResponse(HttpResponse resp) throws IOException {
        int status = resp.getStatus();
        // ...
    }
    // ...
};

// send the message header (instead of the complete message)
WritableByteChannel outputBodyChannel = httpClient.send(reqHdr, responseHandler);

// write the message body data
outputBodyChannel.write(ByteBuffer.wrap(new byte[] { 45, 78, 56 }));
// ...

// close the request
outputBodyChannel.close();
Both approaches, streaming input data and streaming output data, read and write data as soon as it becomes available. However, streaming doesn't mean that data is read from or written to the network directly. All read and write operations work on internal socket buffers. When a write method is called, the operating system kernel copies the data into the socket's send buffer. A successful return from the write operation only means that the data has been copied into this low-level send buffer; it does not mean that the peer has received the data.
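This buffering behavior is easy to observe with Java NIO. The sketch below uses a Pipe (a pair of in-process channels with a kernel-side buffer, standing in for a socket) in non-blocking mode: with no one reading, write() keeps accepting data into the buffer until it is full, then returns 0. The class and method names are illustrative, not part of any library discussed here.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;

public class SendBufferDemo {

    // Writes into a pipe with no reader until the kernel-side buffer is full.
    // Returns how many bytes write() accepted before it started returning 0.
    static long fillWithoutReader() throws Exception {
        Pipe pipe = Pipe.open();
        pipe.sink().configureBlocking(false);   // non-blocking, like an NIO socket

        ByteBuffer chunk = ByteBuffer.allocate(8 * 1024);
        long totalAccepted = 0;
        int written;
        do {
            chunk.clear();
            written = pipe.sink().write(chunk); // returns 0 once the buffer is full
            totalAccepted += written;
        } while (written > 0);

        pipe.sink().close();
        pipe.source().close();
        return totalAccepted;
    }

    public static void main(String[] args) throws Exception {
        // every accepted byte was merely buffered; no peer ever read it
        System.out.println("accepted without any read: " + fillWithoutReader() + " bytes");
    }
}
```

The bytes counted here were all "successfully written" from the caller's point of view, yet none of them were consumed by a peer, which is exactly the distinction the paragraph above makes for socket send buffers.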
All of the examples in the previous sections show different ways of handling messages and content on the client side. As you will see later in Listing 10, it is possible to use the same programming style, as well as the same input and output message object representations, on the server side in a very seamless way. When you develop server-side HTTP-based applications, however, you must give consideration to the Java Servlet API.
The Servlet API defines a standard programming approach for handling HTTP requests on the server side. Unfortunately, the
current Servlet API 2.5 supports neither non-blocking data streaming nor asynchronous message handling. When you implement
a servlet's service method such as doPost() or doGet(), the application-specific servlet code will read the request data, perform the implemented business logic, and return the
response. To simplify writing servlets, the Servlet API uses a single-threaded programming approach. The servlet developer
doesn't have to deal with threading issues such as starting or joining threads. Thread management is part of the servlet engine's
responsibilities. Upon receiving an HTTP request the servlet engine uses a (pooled) worker thread to call the servlet's service
method.
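The dispatching model described above can be sketched outside any servlet engine with a plain thread pool. In this sketch, handleRequest stands in for a servlet's service method and the ExecutorService plays the role of the engine's worker-thread pool; none of these names are Servlet API types.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadPerRequestDemo {

    // stand-in for a servlet's service method: it runs entirely on the
    // worker thread that the "engine" assigned to this request
    static String handleRequest(String request) {
        // read the request data, run the business logic, build the response
        return "echo:" + request + " (handled on " + Thread.currentThread().getName() + ")";
    }

    public static void main(String[] args) throws Exception {
        // the "engine": a pool of worker threads dispatching requests
        ExecutorService workers = Executors.newFixedThreadPool(4);

        Future<String> response = workers.submit(() -> handleRequest("GET /index.html"));
        System.out.println(response.get()); // response is only available synchronously

        workers.shutdown();
    }
}
```

Note that the caller still has to block on Future.get(): within this model the request is handled start to finish on one worker thread, which is the synchronous constraint the Servlet API 2.5 imposes.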
The downside of the Servlet API 2.5 is that it only allows for handling messages in a synchronous way. The HTTP request and response objects have to be accessed within the scope of the request-handling thread. This message-handling approach is sufficient for most classic use cases. When you begin working with event-driven architectures such as Comet or middleware components such as HTTP proxies, however, asynchronous message handling becomes a very important feature.
When implementing an HTTP proxy, for instance, a request message has to be forwarded, and the response message has to be returned without wasting a request-handling thread for each open call. When you implement an HTTP proxy based on the current Servlet API, each open call requires one worker thread. The number of concurrent proxied connections is restricted to the number of possible worker threads.
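The thread cost of such a blocking proxy can be illustrated with a small sketch. No real HTTP is involved: proxyCall is a hypothetical stand-in for a synchronous proxied request, and the CountDownLatch stands in for the backend's pending response. Each open call pins one thread for as long as the backend has not answered.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class BlockingProxyDemo {
    static final AtomicInteger threadsWaiting = new AtomicInteger();

    // stand-in for a synchronous proxy call: the worker thread blocks
    // until the backend "responds" (the latch is released)
    static void proxyCall(CountDownLatch backendResponse) throws InterruptedException {
        threadsWaiting.incrementAndGet();   // this thread is now tied up
        backendResponse.await();            // blocked until the peer answers
        threadsWaiting.decrementAndGet();
    }

    public static void main(String[] args) throws Exception {
        int openCalls = 50;
        CountDownLatch backendResponse = new CountDownLatch(1);

        Thread[] workers = new Thread[openCalls];
        for (int i = 0; i < openCalls; i++) {
            workers[i] = new Thread(() -> {
                try { proxyCall(backendResponse); } catch (InterruptedException ignored) { }
            });
            workers[i].start();
        }

        // wait until every worker has reached the blocking call
        while (threadsWaiting.get() < openCalls) Thread.sleep(10);
        System.out.println(openCalls + " open calls -> " + threadsWaiting.get() + " blocked threads");

        backendResponse.countDown();        // backend responds; the threads are freed
        for (Thread t : workers) t.join();
    }
}
```

Fifty slow backend responses pin fifty worker threads doing nothing but waiting, which is why the number of concurrent proxied connections is bounded by the size of the thread pool.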