End-to-end internationalization of Web applications

Going beyond the JDK

A typical Web application workflow involves a user loading one of your Webpages into her browser, filling out HTML form parameters, and submitting data back to the server. The server makes decisions based on this data, sends the data to other components such as databases and Web services, and renders a response back to the browser. At each step along the way, a globally aware application must pay attention to the user's locale and the text's character encoding.

The JDK provides many facilities to enable an internationalized workflow from within your Java code, and the Apache Struts framework extends them even further. However, you must still take particular care when managing how data gets into your application code and how your application interacts with other components in an internationalized manner. It is at these interfaces that internationalization is only thinly documented and supported.

In this article, you explore what you need to accomplish when developing an internationalized Web application. You also learn some best practices that will make your global applications successful.

A refresher on character encoding

Depending on what article, book, or standard you read, you'll notice subtle differences in the use of the terms character set and character encoding. Loosely speaking, a character set is a collection of the atomic letters, numbers, punctuation marks, and dingbats used to construct textual documents for one or more locales. A character encoding defines a mapping of numbers to the members of a character set. Although not truly synonymous, the terms are often used interchangeably.
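
To make the distinction concrete, here is a minimal Java sketch (standard JDK calls only, exception handling omitted) showing the same character mapped to different byte values by two encodings:

   // The character \u00e9 ('e' with an acute accent) maps to different
   // numeric values under different encodings.
   String text = "caf\u00e9";
   byte[] latin1 = text.getBytes( "ISO-8859-1" );  // 4 bytes; \u00e9 becomes 0xE9
   byte[] utf8   = text.getBytes( "UTF-8" );       // 5 bytes; \u00e9 becomes 0xC3 0xA9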

The familiar 7-bit US-ASCII encoding maps a Latin character set suitable for American users, but it proves unsuitable for global applications. To accommodate additional characters, ligatures, and diacritics, the 8-bit ISO-8859 series of encodings was created. These standards augment US-ASCII by extending the encodings to include 128 additional characters. The most common encoding (and, for many browsers and application servers, the default) is ISO-8859-1, or Latin Alphabet No. 1, which supports Western European character sets. Other encodings include ISO-8859-7 for Greek characters and ISO-8859-10 for Nordic languages.

Many applications are built solely around the ISO-8859-1 encoding. Although this encoding accommodates a wide scope of users—and might prove sufficient for many applications—it is not a complete character set. An application could, of course, select an appropriate ISO-8859 encoding based on the user's locale, but that approach creates a good deal of grief. One problem is that the byte-sized ISO-8859 encodings cannot coexist on the same page because the upper halves of their encoding spaces map numbers to different characters. Another headache comes from receiving HTML form input from users using different encodings. When this data is stored in a database using byte-size characters, you also need to store the encoding associated with the field.

The final blow that knocks ISO-8859 out of the realm of fully internationalized applications is its lack of support for multibyte characters such as those found in Asian languages. Although wider character encodings and modal 8-bit encodings support these character sets, they also cannot coexist with other encodings.

For this reason, the Unicode Consortium developed the Unicode Standard. Unicode was created to be a character set of all characters and can represent millions of characters. One encoding for Unicode is the variable-width UTF-8 encoding. UTF-8 is compatible with US-ASCII—the first 128 characters overlap precisely. Any character supported by the US-ASCII encoding is encoded into a single byte in UTF-8 using the same US-ASCII encoding value. UTF-8 indicates the presence of a multibyte encoding by setting the most significant bit of the first byte. The UTF-16 encoding is similar, but all characters are at least two bytes wide.

To be fully internationalized—and avoid headaches—pick a UTF encoding and use it throughout your application. Both UTF-8 and UTF-16 provide precisely the same support, although documents with characters taken predominantly from the US-ASCII encoding and encoded in UTF-8 will be about half the size of a UTF-16-encoded document because the default character width is one byte instead of two.
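
To illustrate the size difference, a minimal sketch using the JDK's string-to-byte conversion (exception handling omitted):

   String ascii = "Hello, world";                  // 12 US-ASCII characters
   byte[] utf8  = ascii.getBytes( "UTF-8" );       // 12 bytes: one byte per character
   byte[] utf16 = ascii.getBytes( "UTF-16" );      // 26 bytes: two bytes per character plus a 2-byte byte-order mark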

The right input requires the right output

Text is both sent and received by Web applications, so you must address the character encoding of user-submitted text as carefully as the encoding of your Website's pages.

If your Website collects user input through an HTML form text field, you must know the character encoding used by the browser submitting the form. First, let's start with the bad news: the browser probably won't tell you what encoding it used. Some browsers may indicate the encoding in an HTTP header, and some browser-specific mechanisms exist to indicate encoding, but you must still deal with the reality that many browsers simply won't tell you how the data was encoded.

The HTML 4.0 standard introduced the accept-charset attribute on the <form> element to indicate what character encodings the server must accept. Unfortunately, the browser may disregard this value altogether, thus rendering this construct essentially useless for controlling character encoding.

What you can do consistently with common modern browsers is assume the text's character encoding in a form submission is the same as the page encoding of the HTML containing the submitted form. Thus, if the form is contained on a page rendered with UTF-8, you can assume the submitted form text content is also UTF-8-encoded.

One caveat is that many browsers, including Internet Explorer and Netscape, allow the user to change which encoding is used to interpret the page after the page has loaded. A user could request the browser to display a UTF-8-encoded document as if it were actually ISO-8859-1-encoded. If the page contains only US-ASCII characters, the page will not look different to the user. However, any submitted form text will be encoded differently than what the server anticipates. Again, if the submitted text is US-ASCII compatible, the server won't be any wiser. However, if any of the submitted text is in the upper end of the ISO-8859-1 encoding space, it will not be decoded properly—the server will view it as garbage.

This risk arises only when a user forces the page to be interpreted with an encoding for which it was not intended. In general, assuming the submitted text uses the same encoding as the form page is perfectly reasonable.

As noted earlier, there are problems associated with applications that render different pages using different encodings—and needing to know the browser's character encoding only adds to the mess. The character encoding used to decode submitted text must be set by calling setCharacterEncoding() on the ServletRequest object before calling getParameter(). Hence, you cannot embed the page encoding in a hidden form field unless you bypass the Servlet API (which is not recommended). Your best solution is to pick a single UTF encoding, such as UTF-8, and use it consistently throughout your application.

Controlling output character encoding

Because the output character encoding controls input character encoding, you must ensure the pages sent to your user are encoded as you intended.

You have several options for controlling output character encoding in a J2EE application. If you're writing a servlet, you can set the content type directly on the ServletResponse object. In doing so, however, be sure to use the java.io.PrintWriter to render your output. If you write directly to the java.io.OutputStream, your response will not be encoded as you intended:

   ServletResponse response = getServletResponse();
   // Always set the content type before getting the PrintWriter
   response.setContentType( "text/html; charset=UTF-8" );
   // Now, get the writer that will handle your output
   PrintWriter writer = response.getWriter();

Setting the content type directly on the response object in a servlet is essentially the same as using a JSP (JavaServer Pages) page directive like this:

   <%@ page contentType="text/html; charset=UTF-8" %>

Both methods set the output response encoding, but they have a shortcoming. If you use the same page encoding throughout your Web application, you'll need to replicate this code across all of your application's servlets and JSP pages. Are you certain you, or another developer on your team, won't forget this subtle one-liner somewhere? If you set the encoding in the servlet, you can, of course, encapsulate this behavior in a common base class for all of your servlets. However, this approach isn't recommended; it prevents you from subclassing other framework-related base classes because Java restricts you to single inheritance of implementation.

If you're using Struts, you're in luck. The contentType attribute on the controller element in your struts-config.xml file can be used to set your responses' default character encoding:

   <controller contentType="text/html; charset=UTF-8" />

This attribute only sets the default encoding type. A JSP page directive setting the content type, or setting the content type on the response object, overrides this setting.

If your application has workflows that pass through plain servlets, or that go directly to JSP pages without first passing through the Struts controller, this configuration setting won't help.

Also, if your application contains static HTML documents, the problem proves even more difficult. You can use an http-equiv setting in an HTML <meta> tag to specify an output encoding, but that doesn't mean the editor really used that encoding to save the file! (I talk more about conflicting encoding information later.)

The broader solution for controlling output encoding for JSP pages, servlets, and static HTML in a single place is to add a javax.servlet.Filter implementation to your application. First, implement a filter that wraps the servlet response object:

   public class UTF8EncodingFilter implements javax.servlet.Filter
   {
      public void init( FilterConfig filterConfig )  throws ServletException
      {
         // This would be a good place to collect a parameterized
         // default encoding type.  For brevity, we're going to
         // use a hard-coded value in this example.
      }
      public void doFilter( ServletRequest request,
                            ServletResponse response,
                            FilterChain filterChain )
                                     throws IOException, ServletException
      {
         // Wrap the response object.  You should create a mechanism 
         // to ensure the response object only gets wrapped once.
         // In this example, the response object will inappropriately
         // get wrapped multiple times during a forward.
         response = new UTF8EncodingServletResponse( (HttpServletResponse) response );
         filterChain.doFilter( request, response );
      }
      public void destroy()
      {
         // no-op
      }
   }
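
For the filter to take effect, it must also be declared and mapped in web.xml. A minimal sketch—the package name and URL pattern here are illustrative, not prescribed:

   <filter>
      <filter-name>UTF8EncodingFilter</filter-name>
      <filter-class>com.example.filter.UTF8EncodingFilter</filter-class>
   </filter>
   <filter-mapping>
      <filter-name>UTF8EncodingFilter</filter-name>
      <url-pattern>/*</url-pattern>
   </filter-mapping>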

The servlet response wrapper should set the default content type before the application attempts to read the submitted form parameters. Here, we override the call to setContentType(), which will be called at least once during the request (by the application server). If no explicit character encoding is specified—for example, the content type is simply set to "text/html" instead of "text/html; charset=ISO-8859-1"—we'll set the encoding to UTF-8, as shown in the code below. It's important, however, to make sure you only do this to text documents and not images or similar binary files.

   public class UTF8EncodingServletResponse
                     extends javax.servlet.http.HttpServletResponseWrapper
   {
      private boolean encodingSpecified = false;
      public UTF8EncodingServletResponse( HttpServletResponse response )
      {
         super( response );
      }
      public void setContentType( String type )
      {
         String explicitType = type;
         // If a specific encoding has not already been set by the app,
         // let's see if this is a call to specify it.  If the content
         // type doesn't explicitly set an encoding, make it UTF-8.
         if (!encodingSpecified)
         {
            String lowerType = type.toLowerCase();
            // See if this is a call to explicitly set the character encoding.
            if (lowerType.indexOf( "charset" ) < 0)
            {
               // If no character encoding is specified, we still need to
               // ensure the app is specifying text content.
               if (lowerType.startsWith( "text/" ))
               {
                  // App is sending a text response, but no encoding
                  // is specified, so we'll force it to UTF-8.
                  explicitType = type + "; charset=UTF-8";
               }
            }
            else
            {
               // App picked a specific encoding, so let's make
               // sure we don't override it.
               encodingSpecified = true;
            }
         }
         // Delegate to supertype to record encoding.
         super.setContentType( explicitType );
      }
   }

Pitfalls in controlling output character encoding

Now that you've picked a uniform character encoding for your entire application and you've implemented a mechanism to manage it, what could go wrong?

All the mechanisms I've discussed indicate the response character encoding in an HTTP header. This is actually quite enough for a browser to know how to handle your documents. However, a browser can get confused if it finds multiple encoding specifications for the same response that conflict with one another.

For example, suppose you've implemented a servlet filter to set a common default character encoding of UTF-8, but your application returns a static HTML document that contains an http-equiv setting in a <meta> tag specifying ISO-8859-1. Consider the HTML fragment:

   <html>
      <head>
         <title>My confusing page</title>
         <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
      </head>
      . . .

How could this happen? Some HTML editors insert this <meta> tag to indicate the encoding used to store the file on the local file system. This tag can also be introduced if an XSL (Extensible Stylesheet Language) transformation produced your HTML. The XSL standard requires the inclusion of the content type setting used for the transformer output when rendering a <head> element.

Unfortunately, both the static HTML document and the XSL output document can be transformed from their original encoding through any of the mechanisms I've just discussed. This transformation does not change the content type setting in the <meta> tag, and, as a result, the browser sees a value that conflicts with the HTTP header. How this is handled is browser-dependent, so you want to avoid this situation at all costs, but how?

If the problem involves static HTML, you could simply delete the <meta> tag. However, you'll want to ensure the document's true encoding matches the default character encoding assumed by the application server for reading local files (this is application server-dependent). Alternatively, you can set this tag's value to the target application response encoding.
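
For example, if the application-wide response encoding is UTF-8, the tag would become:

   <meta http-equiv="content-type" content="text/html; charset=UTF-8">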

For XSL output, removing the <meta> tag is not an option unless you find a nonstandard transformer. Your only choice is to explicitly set the output encoding in your stylesheet to the target application response encoding like this:

   <xsl:output method="html" encoding="UTF-8" />

Controlling input character decoding

We've seen how to influence which character encoding the browser uses on HTML form submissions by properly specifying the character encoding of the HTML form page. However, how do we properly decode the text our Web application receives? If we've rendered all of our pages with the same encoding, we simply decode all of our input with the same strategy. Nevertheless, we must still tell our J2EE container what encoding to use. How do we do that?

If you use ISO-8859-1 throughout your application, you may be in luck because most application servers default to it. However, since we've already noticed that ISO-8859-1 is insufficient for a truly global application, we'll want to set up our application server to accept a Unicode encoding such as UTF-8. Unfortunately, there's no standard way to do this. The simplest solution may be to refer to your application server documentation for a vendor-specific mechanism.

There are, however, portable standards-based solutions. Ideally, your input-decoding solution complements your output-encoding mechanism. If you control character encoding directly within a servlet, you can specify the character encoding directly on the ServletRequest object:

   ServletRequest request = getServletRequest();
   // Always set the character encoding before getting parameters
   request.setCharacterEncoding( "UTF-8" );
   // Now, you can get properly decoded parameters
   String firstName = request.getParameter( "firstName" );

However, the javax.servlet.Filter implementation described previously is probably the simplest solution. We can modify that code with one new line to set the input decoding strategy:

   public class UTF8EncodingFilter implements javax.servlet.Filter
   {
      public void init( FilterConfig filterConfig )  throws ServletException
      {
         // This would be a good place to collect a parameterized
         // default encoding type.  For brevity, we're going to
         // use a hard-coded value in this example.
      }
      public void doFilter( ServletRequest request,
                            ServletResponse response,
                            FilterChain filterChain )
                                     throws IOException, ServletException
      {
         // Wrap the response object.  You should create a mechanism 
         // to ensure the response object only gets wrapped once.
         // In this example, the response object will inappropriately
         // get wrapped multiple times during a forward.
         response = new UTF8EncodingServletResponse( (HttpServletResponse) response );
         // Specify the encoding to assume for the request so
         // the parameters can be properly decoded.
         request.setCharacterEncoding( "UTF-8" );
         
         filterChain.doFilter( request, response );
      }
      public void destroy()
      {
         // no-op
      }
   }

You should be warned that, even if you've rendered all of your pages in ISO-8859-1, most browsers won't stop a user from entering unencodable content such as Asian characters. How this data ends up being encoded is browser-dependent and certain to give your Web application grief when it attempts to decode the input. If you're lucky, an exception will be thrown. At worst, you may end up with garbage passed straight through to your database. Do you need any more encouragement to just go straight to Unicode?

Processing the input

Once you've successfully decoded the input and you're within the context of your Java code, you can stop worrying about character encoding. Sure, the issue will come up again when you send data off to a database, an authentication server, or a flat file—but, within the world of Java code, your text is encoding-neutral when encapsulated within a java.lang.String object.

Before you start processing, your input data may have to go through one more transformation. Although your input might be a properly decoded string, you might actually need it as a java.util.Date, a double, or some other locale-sensitive object.

You can reasonably expect browsers to send you a locale identifier along with a request (when they don't, you must pick a default). While java.util.ResourceBundle and Struts org.apache.struts.util.MessageResources instances are generally associated with selecting localized output, these are also great places to store initialization parameters for java.text.Format instances you'll use for localized input parsing.
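
As a minimal sketch of the idea (the bundle name, resource key, and pattern values are illustrative, and exception handling is omitted), a per-locale date pattern stored in a resource bundle can drive input parsing:

   // MyResources_en_US.properties:  format.date=M/d/y
   // MyResources_de_DE.properties:  format.date=d.M.y
   ResourceBundle bundle = ResourceBundle.getBundle( "MyResources", request.getLocale() );
   SimpleDateFormat parser = new SimpleDateFormat( bundle.getString( "format.date" ) );
   parser.setLenient( false );
   Date birthday = parser.parse( request.getParameter( "birthday" ) );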

Note
Did you know the java.text.DateFormat object isn't thread-safe? This has always been the case, but it wasn't mentioned in the JDK Javadoc until J2SE 1.4. Therefore, don't create a shared format object as a servlet or Struts org.apache.struts.action.Action attribute. Consider creating a new instance for every request or storing a shared instance as ThreadLocal data.
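
A minimal sketch of the ThreadLocal approach (pre-generics style, matching the J2SE 1.4-era code in this article):

   private static final ThreadLocal dateFormatHolder = new ThreadLocal()
   {
      protected Object initialValue()
      {
         // Each thread lazily gets its own DateFormat instance.
         return new SimpleDateFormat( "M/d/y" );
      }
   };
   . . .
   // Safe to use without synchronization; no other thread shares this instance.
   DateFormat format = (DateFormat) dateFormatHolder.get();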

Although Struts has numerous mechanisms for localizing output, its support for localizing input is less thorough. For this reason, never map a form property to a type of java.lang.Double—you won't have an opportunity to select a locale-specific parsing format. Instead, map it to a java.lang.String and select the parsing format within your org.apache.struts.action.Action.
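
As a hedged sketch of that approach (the MyForm bean and its amount property are hypothetical):

   public ActionForward execute( ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request,
                                 HttpServletResponse response ) throws Exception
   {
      // The form declares "amount" as a java.lang.String, not a Double.
      String amountText = ((MyForm) form).getAmount();
      // Parse it with a format appropriate to the request locale.
      NumberFormat parser = NumberFormat.getNumberInstance( request.getLocale() );
      double amount = parser.parse( amountText ).doubleValue();
      . . .
   }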

Another helpful workaround to know in Struts involves the use of the Validator framework. The Validator framework is an invaluable tool for flagging missing or incorrectly formatted user data and reporting it to the user before it ever reaches your business logic. However, how can you validate that the user entered a properly formatted date or decimal value in a locale-aware manner?

The Validator framework is locale-aware in the sense it can render locale-specific error messages. You can also specify locale-specific form validation configurations using the language attribute on your <formset> element. Specifying a new form-set for every locale can get unwieldy, however, because much of the content is often identical. Reliably maintaining multiple form-sets where much of the content is simply cut and pasted across elements is tedious and error prone (unless, of course, the locale-specific forms radically differ—I discuss this later with respect to postal addresses).

If only one field on a form requires validation against a localized format or mask string, it's far more convenient to implement a solution that addresses only that field by obtaining the parsing format string from the org.apache.struts.util.MessageResources. Unfortunately, a generic field validator cannot easily access a locale-specific org.apache.struts.util.MessageResources instance. An easy way to address this issue (which, of course, involves implementing your own validation method and hooking it into the Validator framework) is to embed the parsing parameters as hidden fields in the form. For example, consider this JSP page:

   <%@ taglib uri="struts-html.tld" prefix="html" %>
   . . .  open HTML form
   <html:text property="birthday" />
   <html:hidden property="dateFormat" />

Prior to rendering the HTML form, prepopulate the "dateFormat" form property with the locale-specific date format. Alternatively, you can set the format value through another request property in the org.apache.struts.action.Action that forwards to this page. The net effect is this HTML fragment:

   <input type="text" name="birthday">
   <input type="hidden" name="dateFormat" value="M/d/y">

You can now validate the user-supplied date in the "birthday" property against the locale-specific format in the hidden field from within the Validator framework using a custom validation method, as shown in the following code. The method below expects a variable named "format" to be specified in the form validation configuration. The value of the "format" variable is the name of the hidden form property containing the parsing format. If the user-supplied date string can be successfully parsed using a java.text.SimpleDateFormat object and specified parsing format, the date is considered valid.

   import javax.servlet.http.HttpServletRequest;
   import org.apache.commons.validator.Validator;
   import org.apache.commons.validator.GenericValidator;
   import org.apache.commons.validator.ValidatorAction;
   import org.apache.commons.validator.ValidatorUtil;
   import org.apache.commons.validator.Field;
   import org.apache.struts.validator.Resources;
   import org.apache.struts.action.ActionErrors;
   import java.text.SimpleDateFormat;
   import java.text.ParseException;
   
   . . .
   static public boolean validateDate( Object bean,
                                       ValidatorAction va,
                                       Field field,
                                       ActionErrors errors,
                                       HttpServletRequest request )
   {
      String dateString;
      String dateFormat;
      // Get the date as a string
      if ((bean == null) || (bean instanceof String))
      {
         dateString = (String) bean;
      }
      else
      {
         dateString = ValidatorUtil.getValueAsString( bean, field.getProperty() );
      }
      if (GenericValidator.isBlankOrNull(dateString))  return true;
      // Get the locale-specific date format
      String dateFormatField = field.getVarValue("format");
      if ((dateFormatField == null) || (dateFormatField.length() == 0))  return true;
      dateFormat = ValidatorUtil.getValueAsString( bean, dateFormatField );
      if ((dateFormat == null) || (dateFormat.length() == 0))  return true;
      try
      {
         SimpleDateFormat parser = new SimpleDateFormat( dateFormat );
         parser.setLenient(false);
         parser.parse( dateString );
      }
      catch( ParseException exc )
      {
         // If parser threw an exception, the user entered invalid data
         errors.add( field.getKey(), Resources.getActionError(request, va, field) );
         return false;
      }
      return true;
   }

This validation rule is made available in the Validator configuration like this:

   <validator name="date"
              classname="org.gavaghan.validator.FieldChecks"
              method="validateDate"
              methodParams="java.lang.Object,
                            org.apache.commons.validator.ValidatorAction,
                            org.apache.commons.validator.Field,
                            org.apache.struts.action.ActionErrors,
                            javax.servlet.http.HttpServletRequest"
              depends=""
              msg="errors.date"/>

Finally, in your form validation rules, you configure the form like this:

   <form name="birthdayForm">
      <field property="birthday" depends="date">
         <var>
            <var-name>format</var-name>
            <var-value>dateFormat</var-value>
         </var>
      </field>
   </form>

The construct above says the form property named "birthday" must be parseable using the java.text.SimpleDateFormat parsing pattern found in the form field named "dateFormat".

This approach works great when individual, standalone fields require localized parsing. However, some forms may require more substantial variations during localization. Collecting a user's mailing or billing address, for example, is rather complicated considering the wide variation of postal addressing schemes across the globe. Your HTML select box of U.S. state abbreviations won't work well in France, will it?

At some point, particularly when the presence—not just the format—of a particular field is locale-sensitive, locale transformations simply can't be narrowly encapsulated. It may prove necessary to conditionally render entire forms—or entire pages—based on locale. At a minimum, you may need to create locale-specific servlets and Struts org.apache.struts.action.Action classes to address these tedious situations. Nevertheless, isn't the point of all of these internationalization and localization mechanisms the ability to deal with locale-specific content without rewriting entire components? Yes, but sometimes the maintenance of such a brute force approach is cheaper than attempting to force fit a tool that really can't be made to work.

What's the encoding of your input files?

One final note about getting the input to your Java application involves text files that are part of your Web archive. Do you know how they're encoded in the file system? You must know something about the editor you're using to determine this. Ensure the file encoding matches the encoding your java.io.Reader is expecting. The java.util.Properties and Struts org.apache.struts.util.MessageResources classes only deal in ISO-8859-1, so your files must be saved in this format, and any characters outside the supported character set must be properly escaped.
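
The JDK ships with the native2ascii tool for exactly this purpose; a minimal example, assuming the source file was saved as UTF-8 (the file names are illustrative):

   native2ascii -encoding UTF-8 messages_ja.properties.utf8 messages_ja.properties

The resulting file contains only US-ASCII characters, with everything else written as \uXXXX escapes that java.util.Properties can read.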

The ISO-8859-1 limitation on property files is clumsy, but at least it's predictable. XML files can support any character encoding, but you must be careful that the file is truly saved in the format it advertises. Suppose the XML declaration on the first line of a document identified the file as being encoded in UTF-8:

   <?xml version="1.0" encoding="UTF-8"?>

When this file is first created, a sophisticated XML editor will probably recognize this directive and save the file in the proper encoding. However, what if another developer edits this file using some generic text editor that doesn't understand XML? The updated file might be resaved as ISO-8859-1. Of course, this isn't a problem if the file only contains US-ASCII-compatible characters, but your XML parser will not properly decode any other characters at runtime.

Other considerations

The Java code in a Web application seldom stands alone. A database, authentication server, or other architectural component that supports your application must also be evaluated for its internationalization support. You may have gone to great lengths to ensure the use of multibyte Unicode across all your HTTP transmissions, but it is for naught if your database can only store 8-bit ISO-8859-1.

Understand these components and their requirements. Suppose your Web application must communicate with another server using a raw byte stream over a TCP/IP socket. The burden is on you to know what character encoding the other system expects to consume and to render your output accordingly. Simply calling getBytes() on a java.lang.String instance uses the platform default encoding (which probably isn't what you want!). Look at the version of getBytes( String enc ) that accepts a specific character encoding. Better yet, look at java.io.OutputStreamWriter and the constructor that accepts a character encoding.
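
A minimal sketch of the preferred approach—the host, port, and message here are illustrative, and exception handling is omitted:

   Socket socket = new Socket( "partner.example.com", 9000 );
   // Declare the encoding the peer expects instead of relying on the platform default.
   Writer out = new OutputStreamWriter( socket.getOutputStream(), "UTF-8" );
   out.write( message );
   out.flush();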

Localizing the output

I began this discussion on end-to-end internationalization by talking about encoding our output to support all the character sets required by our global application. I haven't yet discussed how the actual content is localized—and content localization completes the circle from browser, to Web application, and back, while touching other components and databases along the way.

The JDK has numerous mechanisms for selecting localized content, including the java.util.ResourceBundle class and various other classes that are aware of, and deal with, java.util.Locale. Classes exist for locale-specific dates, currency, decimal values, and other concepts. The java.io.Reader and java.io.Writer classes elegantly handle the transformations to and from characters and encoded byte streams. Internationalization support built into the JDK is even further leveraged and augmented by Struts and related technologies like Validator and Tiles. Because many fine discussions already tackle these topics (see Resources), I won't rehash them all here. However, any discussion that claims to cover end-to-end internationalization must at least mention these thorough and clever concepts fundamental to the Java language.

The only remaining pitfall with localized output involves content that simply cannot be rendered on a generic page using placeholders for appropriately translated text. Imagine accommodating languages rendered left-to-right and right-to-left using the same template. This might be something you can simply design around in your page layouts. If that's too constraining, you can implement separate pages for certain categories of locales. This approach resembles the form-input localization I already discussed. The built-in localization framework is powerful, but no benefit arises from forcing it where it just can't work.

Conclusion

Internationalization of Web applications is not a trivial task, and it's not something that can easily be added to an existing component as an afterthought. You need to do more than simply create multilanguage translations of your ResourceBundle files to take a single-language application into a global marketplace. You must understand from the design stage what locales you need to support, how external components such as databases support internationalization, and how to encapsulate all of your localized content in a manner that allows your locale-neutral components to be reused. You also must identify when localization of input forms goes beyond simple locale-specific validation. Once all of your localization is narrowly encapsulated, your business logic may be reused across all locales—ensuring locale-independent behavior for all of your users. Internationalization is challenging, but, if the solution is designed with enough forethought, your investment will pay off with faster development, reduced maintenance, and delighted global customers.

Mike Gavaghan is a Sun Certified Enterprise Architect and Web Component developer who has been providing software services for companies in the Dallas-Fort Worth area in Texas for more than 10 years. He is presently leading efforts at a major wireless Internet company to provide international support for its online customer applications. His fascination with the challenges of developing internationalized software began in 1999 during a visit to a Korean brokerage firm in Seoul to perform analysis on the Korean Stock Exchange and KOSDAQ. In 2001, he was reminded again of the importance and complexities of accommodating global users when visiting an Indian outsourcing firm in Delhi. He has discovered that the work is always exciting, often frustrating, and very relevant to the global marketplace.

