Get ready for advanced multimedia on your Java mobile platform

A tour of the features in the upcoming Advanced Multimedia Supplements for J2ME API

Java-enabled devices are rapidly evolving into full-fledged multimedia platforms. Features once available on separate devices such as cameras, radios, and audio processing are now being combined. Currently the developer must program to many different operating systems and APIs to take advantage of these advanced multimedia features—but that is about to change.

In this article, I present an overview of the Advanced Multimedia Supplements Specification (AMMS), explaining how it fits with the other APIs available on J2ME, and giving a series of code samples demonstrating some of the new features.

The AMMS builds on the Mobile Media API for J2ME (MMAPI) and therefore inherits the concepts of Players, to play both sound and video; the Manager, to create Players; and Controls, to interact with the various types of Players. AMMS adds many new Controls and also a GlobalManager, which creates new objects for the effects network and image processing functionality. It is a small-footprint API targeted for J2ME devices running Connected Limited Device Configuration (CLDC) or Connected Device Configuration (CDC), and profiles such as the Mobile Information Device Profile (MIDP). You can download the public review draft of the specification from the Java Community Process Website.
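Concretely, the division of labor looks like this: MMAPI's Manager creates the Player, and AMMS Controls are then fetched from it by name. A minimal sketch of that pattern (the fully qualified control name follows the draft spec's package layout and is an assumption here):

   // MMAPI creates the Player; AMMS supplies the additional Controls
   Player player = Manager.createPlayer("capture://video");
   player.realize(); // Controls become available once the Player is realized
   CameraControl camera = (CameraControl)
      player.getControl("javax.microedition.amms.control.camera.CameraControl");
   if(camera == null) {
      // This device does not expose the camera capability
   }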


AMMS implementations consist of five building blocks, or capabilities:

  • Camera
  • Image post processing
  • Tuner
  • Music
  • 3D Audio

At least one of these capabilities must be supported for a device to be "AMMS-compliant." Each of these capabilities defines the programming artifacts available to the developer, some of which are mandatory and some optional ("must" and "may" are used throughout the rest of this article to indicate this). Of course, the devices running this software will vary greatly in the level of sophistication they offer—think of all the kinds of mobile phones you can buy—so the developer will have to design the program to cope with what is available. The following sections describe these five capabilities and show some of the interfaces and methods available.
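Because any one device may implement only a subset of the capabilities, it is worth probing for them at run time before relying on one. A minimal sketch, assuming the "supports.mediacapabilities" system property defined by the specification (optional Controls should additionally be checked for null when fetched):

   // Returns a space-separated list such as "camera music audio3d", or null
   String caps = System.getProperty("supports.mediacapabilities");
   boolean hasCamera = (caps != null && caps.indexOf("camera") != -1);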


Camera

Although you can take a picture with the MMAPI's VideoControl, AMMS gives you much more control over the way the picture is taken and what is done with it—think digital camera. In the examples that follow, I have already created a Player using Manager.createPlayer("capture://video"). Here we use the CameraControl (which must be supported in this capability) to enable the audio/visual shutter feedback, check if the camera is rotated to be in portrait mode (some advanced devices may be able to tell which way the camera is facing), and then allow the user to choose an exposure mode and image resolution:

   CameraControl camera = (CameraControl)
      player.getControl("javax.microedition.amms.control.camera.CameraControl");
   camera.enableShutterFeedback(true);
   int rotation = camera.getCameraRotation();
   boolean portrait = false;
   if(CameraControl.ROTATE_LEFT==rotation ||
      CameraControl.ROTATE_RIGHT==rotation) {
      portrait = true; // And then perhaps do something different with the image
   }
   String[] exposureModes = camera.getSupportedExposureModes();
   camera.setExposureMode(exposureModes[2]); // Pick one
   int[] resolutions = camera.getSupportedStillResolutions();
   camera.setStillResolution(1); // Pick the second (w, h) pair in the resolutions array

We can also set the flash to the mode we want using FlashControl (which must be supported). AMMS contains a predefined list of modes, some or all of which may be available. Here we assume that auto with red-eye reduction is in the list of modes:

   FlashControl flash = (FlashControl)
      player.getControl("javax.microedition.amms.control.camera.FlashControl");
   int[] modes = flash.getSupportedModes();
   flash.setMode(FlashControl.AUTO_WITH_REDEYEREDUCE); // Assumed to be among the supported modes

White balance can be changed using WhiteBalanceControl (which may be supported). The presets available depend on the device:

   WhiteBalanceControl white = (WhiteBalanceControl)
      player.getControl("javax.microedition.amms.control.camera.WhiteBalanceControl");
   if(white != null) {
      String[] presets = white.getPresetNames();
      white.setPreset("tungsten"); // Picked from list
      int kelvin = white.getColorTemp(); // For display to user
   }

We can zoom in to frame the subject using ZoomControl (which may be supported if the camera has a zoom function). We can use optical and digital zoom if they are available. The zooms have a set of levels that they can be set to, and, since our base configuration is CLDC 1.0, we use ints to represent fractions—100 means 1x, 150 means 1.5x, etc. Here we find out what levels are available, choose one, and zoom in by one level:

   ZoomControl zoom = (ZoomControl)
      player.getControl("javax.microedition.amms.control.camera.ZoomControl");
   if(zoom != null) {
      int max = zoom.getMaxOpticalZoom(); // e.g., 200 for 2x
      int levels = zoom.getOpticalZoomLevels(); // e.g., 3 levels - 1x, 1.5x and 2x
      zoom.setOpticalZoom(140); // Request the closest level to 1.4x, which will be 1.5x
      zoom.setOpticalZoom(ZoomControl.NEXT); // Zoom in to 2x
   }

To manually set the exposure settings, we can use ExposureControl (which may be supported). Again we use ints to represent fractions—an f-stop of 280 means f2.8. Since changing the optical zoom can affect the f-stop, we should set the f-stop after changing the zoom (f-stop measures the size of the aperture on a lens; the numbers get smaller as the aperture gets bigger):

   ExposureControl exposure = (ExposureControl)
      player.getControl("javax.microedition.amms.control.camera.ExposureControl");
   if(exposure != null) {
      exposure.setFStop(280); // f2.8; set after the zoom has been chosen
      exposure.setExposureTime(2000); // Microseconds (1/500th second)
   }

We can focus using the FocusControl (which must be supported):

   FocusControl focus = (FocusControl)
      player.getControl("javax.microedition.amms.control.camera.FocusControl");
   if(focus.isAutoFocusSupported()) {
      focus.setFocus(FocusControl.AUTO);
   } else {
      // Otherwise, try the "mountain" or infinity setting. Find out what was actually set.
      int focusSet = focus.setFocus(Integer.MAX_VALUE);
   }

Now we can take a photo or a series of photos in burst-shooting mode. PlayerListener from MMAPI can be used to listen for shooting events if we want to initiate shooting and then immediately continue processing in this thread, or if we want to give the user the option of previewing the picture. (Note: VideoControl from MMAPI can also be used but is not as sophisticated.) Here we set up the filename(s) to be saved and then either take up to 20 pictures in burst mode, or take one and allow the user to keep or discard it:

   SnapshotControl snapshot = (SnapshotControl)
      player.getControl("javax.microedition.amms.control.camera.SnapshotControl");
   snapshot.setFilePrefix("photo");
   snapshot.setFileSuffix(".jpg");
   if(burstShooting) {
      snapshot.start(20); // Start burst shooting, maximum 20 pictures
   } else {
      // Take one picture and allow the user to keep or discard it
      snapshot.start(SnapshotControl.FREEZE_AND_CONFIRM);
      // ...
      // PlayerListener got a WAITING_UNFREEZE event and the user chose to discard the picture
      snapshot.unfreeze(false); // Do not save the frozen image
   }

Image post processing

The image post-processing capability allows us to manipulate images after they have been taken. Some digital cameras offer options to resize, rotate, change to black-and-white, and so on. We use the GlobalManager to create a MediaProcessor for this task. Here we use the ImageEffectControl (which may be supported) to change the image to monochrome, and the ImageFormatControl (which must be supported) to save the image in jpeg format. A MediaProcessorListener and processor.start() can be used to listen for process-completion events instead of using the blocking processor.complete() call:

   MediaProcessor processor = GlobalManager.createMediaProcessor("image/raw"); // Input content type
   InputStream inputStream = // ... Create an InputStream that contains the source image
   OutputStream outputStream = // ... Create an OutputStream that will receive the resulting image
   processor.setInput(inputStream, MediaProcessor.UNKNOWN);
   processor.setOutput(outputStream);
   ImageEffectControl imageEffect = (ImageEffectControl)
      processor.getControl("javax.microedition.amms.control.imageeffect.ImageEffectControl");
   if(imageEffect != null) imageEffect.setPreset("monochrome");
   ImageFormatControl imageFormat = (ImageFormatControl)
      processor.getControl("javax.microedition.amms.control.ImageFormatControl");
   imageFormat.setFormat("image/jpeg");
   imageFormat.setParameter("quality", 80);
   processor.complete(); // Blocks until processing is done


Tuner

Access to the device's radio function is through a call to Manager.createPlayer("capture://radio"), and then by getting Controls on the Player returned. RDS (Radio Data System) functions can also be used if the device supports them. Here we use the TunerControl (which must be supported) to find an FM radio station above 97 MHz, set the station to play in stereo, save it in a preset slot, and then switch to another preset:

   TunerControl tuner = (TunerControl)
      player.getControl("javax.microedition.amms.control.tuner.TunerControl");
   int frequencyFound = tuner.seek(970000, TunerControl.MODULATION_FM, true); // Search upward from 97 MHz (units of 100 Hz)
   tuner.setStereoMode(TunerControl.STEREO);
   tuner.setPreset(1); // Save the current station in preset slot 1
   tuner.setPresetName(1, "Radio 1");
   if(tuner.getNumberOfPresets()>=2) {
      tuner.usePreset(2); // Switch to another preset
      int secondFrequency = tuner.getFrequency(); // For display to user
      String modulation = tuner.getModulation(); // For display to user
   }

RDSControl may be supported and gives access to the RDS data on the selected FM frequency. Here we extract some information for display to the user and then turn off the automatic traffic announcement switching:

   RDSControl rds = (RDSControl)
      player.getControl("javax.microedition.amms.control.tuner.RDSControl");
   if(rds != null) {
      Date date = rds.getCT(); // Clock time and date
      boolean ta = rds.getTA(); // Is a traffic announcement playing?
      String ps = rds.getPS(); // Programme service name, e.g., the station name
      String pty = rds.getPTYString(true); // Programme type, e.g., "News"
      rds.setAutomaticTA(false); // Turn off automatic traffic-announcement switching
   }


Music

The music capability provides Controls to modify how the audio sounds when listening to music—equalization and panning. VolumeControl from MMAPI is also available. Here we fetch the equalization presets available and let the user choose one:

   EqualizerControl equalizer = (EqualizerControl)
      player.getControl("javax.microedition.amms.control.audioeffect.EqualizerControl");
   String[] presets = equalizer.getPresetNames();
   equalizer.setPreset("rock"); // Pick one

More sophisticated settings are available—for example, bass and treble levels. Here we set the bass level to flat/normal (50) and then turn the treble all the way up (100):
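A sketch of those two calls, assuming the same EqualizerControl instance as above (both methods take a level between 0 and 100 and return the level actually set):

   int bass = equalizer.setBass(50); // Flat/normal
   int treble = equalizer.setTreble(100); // All the way up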


For more fine-grained control over the sound, the device may support multiband equalization—how many bands are available and what they can be set to is up to the device:

   int numberOfBands = equalizer.getNumberOfBands();
   int minLevel = equalizer.getMinBandLevel();
   int maxLevel = equalizer.getMaxBandLevel();

Next we can find out the frequencies of the first two bands for display to the user:

   int firstBandFrequency = equalizer.getCenterFreq(0);
   int secondBandFrequency = equalizer.getCenterFreq(1);

And if we want to turn the voice frequencies up to the maximum, we find the band that has the most effect on 3 kHz and turn it up:

   int bandNumber = equalizer.getBand(3000000); // 3 kHz, expressed in millihertz
   if(bandNumber!=-1) {
      equalizer.setBandLevel(maxLevel, bandNumber);
   }

3D audio

The 3D audio capability is probably the most difficult to understand since it is not something commonly used (unlike a digital camera or an equalizer), and the programming itself is quite complex. The API allows the programmer to construct a network of Players that can be combined and fed into effects, and then output to the user. Traditional 2D effects like chorus and equalization are supplemented by 3D effects that attempt to place sound sources in a virtual 3D space around the listener.

Note that there is some discussion going on in the expert group about this part of the specification and an alternative proposal exists, which is included with the draft specification download.

In the following examples, I have created Players p1, p2, and p3 with different sounds using calls to Manager.createPlayer("..."). First, we set up p1 as a 3D sound source using SoundSource3D and move it in 3D space using LocationControl (which must be supported), then set how the sound attenuates as it travels through space to our listening position, which we'll define later:

   SoundSource3D source = GlobalManager.createSoundSource3D();
   source.addPlayer(p1);
   LocationControl locationSource = (LocationControl)
      source.getControl("javax.microedition.amms.control.audio3d.LocationControl");
   locationSource.setCartesian(0, 0, -10000); // 10 meters in front (negative z axis, units of millimeters)
   DistanceAttenuationControl distanceSource = (DistanceAttenuationControl)
      source.getControl("javax.microedition.amms.control.audio3d.DistanceAttenuationControl");
   distanceSource.setParameters(10, 50000, true, 1000); // Min/max distance, mute beyond max, rolloff factor

Last, we define how much of this sound is fed to the reverb effect—in this case -2 dB (ReverbSourceControl may be supported):
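A sketch of that last step, assuming ReverbSourceControl is fetched from the SoundSource3D created above and that levels are given in millibels (so -2 dB is -200); the control's package name is also an assumption based on the draft spec's layout:

   ReverbSourceControl reverbSource = (ReverbSourceControl)
      source.getControl("javax.microedition.amms.control.audioeffect.ReverbSourceControl");
   if(reverbSource != null) {
      reverbSource.setRoomLevel(-200); // -2 dB of this source fed to the reverb effect
   }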
