Importing 3D Models into Mobile Java Devices

Here's a step-by-step guide for importing 3D models from PCs into cell phones.


October 05, 2006
URL: http://www.drdobbs.com/mobile/importing-3d-models-into-mobile-java-dev/193104855

Tom is a Systems and Applications Engineer for Freescale Semiconductor. He can be contacted at [email protected].


Fueling the market growth for mobile devices is that many of them implement the Java Micro Edition (ME) platform (formerly known as "J2ME"). Java ME provides an array of APIs that lets you write lightweight, yet versatile, Java ME applications (MIDlets), including those that require 3D graphics support.

The Mobile 3D Graphics (M3G) API (jcp.org/en/jsr/detail?id=184) provides an interface for rendering 3D graphics, plus it describes an object file format for importing, exporting, or transferring 3D model data. The file format permits 3D models to be designed outside of Java ME, then later integrated into a MIDlet. Graphic designers can therefore use PC-based commercial 3D authoring applications to construct models, add color and textures, and animate them. The PC does the heavy lifting of generating the model's geometric data. All of this information is subsequently exported into an M3G file. The final step is to add the M3G file as a resource to the MIDlet's JAR file. Alternatively, the models can be saved in another file format, which a translator program then converts either to M3G format or into data arrays that store the geometric and visual data. The MIDlet then uses the content of these arrays to generate the model on the fly.

In theory, you can use a PC to create and add awesome 3D models to M3G-capable games and other mobile applications. However, there are hurdles in the migration process. In this article, I show how to use freeware code and 3D authoring applications to make 3D models, apply texture maps, and add them to your MIDlet either as data arrays or as an M3G file. The source code and related files that implement these techniques are available electronically; see www.ddj.com/code/.

Tripping Over First Hurdles

For a recent project, I had to demonstrate M3G's capabilities by writing a MIDlet that would display 3D models and let users interact with them. The MIDlet had to function identically on several types of handsets. My initial plan was to download a model in M3G format and add it to an M3G-capable MIDlet I wrote. The MIDlet would load the 3D model and display it on-screen. By pressing buttons on the phone's keypad, the MIDlet would make M3G API calls that manipulated the model's orientation.

After conducting Google searches and code experiments, I discovered that there weren't any M3G model files available anywhere. I would have to make the model myself, then somehow export it into a format or data arrays useable by M3G. The project was turning ugly.

This isn't to say a migration path wasn't in place when the M3G specification was implemented on vendors' phones in 2005. Hi Corporation, the company that did the M3G implementation for Sony Ericsson and Sprint-badged phones, provides plug-ins that let you export 3D models from several high-end commercial 3D authoring applications—Autodesk's 3ds Max (formerly Discrete's 3D Studio Max), Newtek's LightWave, Autodesk's Maya (formerly Alias), and Softimage's SOFTIMAGE|XSI. There are also utilities such as M3GExport Limited and M3GExporter that translate Maya and 3ds Max files, respectively.

But there's a catch. The price tags for commercial 3D authoring programs start at $500 and go as high as $4000! The plug-in situation also rules out capable shareware and freeware 3D authoring applications. For developers on a tight budget, migrating a 3D model from a PC to a MIDlet amounted to the adage "you can't get there from here."

Eventually, I discovered ways to accomplish the migration. The technique I describe here exports a model's geometry and appearance as data arrays. (In a future article, I will describe how to export the model in M3G file format.)

Key Classes and Rendering Modes

The key M3G classes involved with 3D object creation and rendering include:

- Object geometry: Mesh, VertexArray, VertexBuffer, and IndexBuffer (with its subclass TriangleStripArray)
- Rendering attributes: Appearance, Material, Texture2D, and Background
- Scene-graph support: World, Camera, and Light
- Utility: Graphics3D, Transform, and Loader

Here, I concentrate on the object geometry and rendering attributes classes because they handle the model's construction and appearance. Figure 1 illustrates how these two groups of classes display a 3D model. The Camera and Light scene-graph classes figure in all rendering operations.


Figure 1: How the M3G classes describe a 3D model and its visual attributes (appearance). The M3G rendering engine uses data from these objects to generate the image.

Two Rendering Modes

The presence of scene-graph support classes underscores that M3G supports "immediate" and "retained" rendering modes. In immediate mode, objects are rendered at once, applying the lighting conditions from an array of current Lights and a current Camera that determines the final image's POV. Objects rendered in immediate mode can be portions of a scene graph or individual meshes. Immediate mode offers low-level control of the rendering operations.

In retained mode, rendering operations are managed at a higher level. Each model's geometric data and appearance attributes are encapsulated in instances of scene-graph objects stored within a scene graph. A scene graph consists of a tree structure of data that describes all of the objects in a 3D world, along with its Lights and Cameras. These classes contain visibility flags and instances of them can be made visible or invisible under program control. In retained mode, scenes aren't rendered all at once. Instead, the rendering engine traverses the scene-graph tree, using the visibility flags, object positions, and POV to cull surfaces before rendering final images. This lets the rendering operation be optimized for performance. Only one Camera can be active at a time. Choosing a different Camera and making it the active one lets you immediately change the scene's POV.
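
As a rough sketch (world is a hypothetical World instance already populated with the scene's Meshes, Lights, and Cameras, and g is the Graphics object passed to the Canvas's paint() method; all M3G classes live in javax.microedition.m3g), a retained-mode render reduces to a single render() call:

    Graphics3D g3d = Graphics3D.getInstance();
    g3d.bindTarget(g);        // attach the renderer to the Canvas
    try {
        g3d.render(world);    // engine traverses the entire scene graph
    }
    finally {
        g3d.releaseTarget();
    }
    // Changing the scene's POV is a single call in retained mode:
    world.setActiveCamera(anotherCamera);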

Because the immediate-mode operation allows access to low-level primitives, you can define a model's surfaces through the use of algorithms or VertexArrays. However, for complex models it's easier and less memory-intensive to have Loader read the model from a scene graph stored in an M3G file.
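
For instance, assuming a scene graph has been saved as /scene.m3g in the MIDlet's JAR (a hypothetical resource name), loading it takes only a few lines:

    try {
        // Loader returns the file's root-level objects; for a complete
        // scene graph, the first is typically the World
        Object3D[] roots = Loader.load("/scene.m3g");
        World world = (World) roots[0];
    }
    catch (java.io.IOException e) {
        // the resource is missing or isn't a valid M3G file
    }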

Making the Immediate Mode Model

Since I couldn't generate M3G files, the best solution for me was to assemble the model as a Mesh using VertexArrays, then perform an immediate-mode render on it. My new plan still required building the model, then either finding a plug-in or writing a translator program to convert the model's data into triangle strips.

I chose Art of Illusion (www.artofillusion.org) as the 3D authoring application because it has an option for converting models into triangle strips, a feature that might simplify writing a translator program. It's also cross-platform, which meant I could press-gang my Apple PowerBook into doing the work while I wrote the Java ME code on Windows.

Because the project already involved rocket science, I designed a retro-style Buck Rogers rocketship as the 3D model. Thus, the model "brocket" was born (Figure 2). I made the texture file, brocket2.png, to apply a simple color pattern on brocket's hull. I had Art of Illusion save the model in OBJ format, a 3D file format used by many authoring programs. The resulting files were brocket.obj, which contained the model's data, and brocket.mtl, which stored some appearance information and a reference to the texture map file.


Figure 2: Using Art of Illusion to make the 3D model, brocket.

Before attempting to write a translator, I took one last stroll through Google to see if something similar was available. Luckily, I stumbled onto "Loading OBJ Models into M3G" (http://fivedots.coe.psu.ac.th/~ad/jg/objm3g/), where Andrew Davison describes his ObjView program—a Java utility that reads an OBJ file, renders the 3D objects with Java 3D, and writes the object data out as M3G VertexArrays. The source code was also available. This was exactly what I needed.

To use ObjView, you need the Java SE 1.4.x SDK or later installed on your PC or Mac, along with Java 3D, which implements a 3D API for Java desktop applications. Originally developed by Sun, Java 3D is an open-source technology. For Windows, Java 3D is available online (https://java3d.dev.java.net/binary-builds.html). For Mac OS X 10.4.x, Java 3D comes preinstalled. For Mac OS X 10.3.x, you can download Java 3D (www.apple.com/downloads/macosx/apple/java3dandjavaadvancedimagingupdate.html).

First, I compiled the ObjView source files, per directions in objM3G.pdf. Next, I had Art of Illusion save the model in OBJ format, along with the .mtl file to preserve texture information. I copied the resulting brocket.obj, brocket.mtl, and brocket2.png files to the directory with the ObjView classes. Using the console window, I switched to that directory and had Java execute ObjView and read the OBJ file.

ObjView rendered the brocket model in Figure 3, complete with the texture image. Clicking and dragging the mouse rotates the model within the window so that you can inspect your handiwork.


Figure 3: ObjView rendering the brocket model, with a texture applied.

One problem ObjView's display revealed was that Art of Illusion could mangle the texture maps. This is probably my fault, as I found the application's interface for applying a texture map around a complex object difficult to use. To apply the texture to the model successfully, I used Milkshape (www.milkshape3d.com/), a 3D authoring program that has versatile texture-mapping tools and can import/export a variety of 3D file formats (Figure 4).


Figure 4: Applying the texture to the brocket model with Milkshape.

When you're satisfied with the result, exit ObjView and it writes the model's geometry out as several M3G VertexArrays. These arrays describe the model's surfaces (as triangle strips), its surface normals (for calculating lighting effects), and its texture coordinates (which map a 2D image onto the surfaces). These VertexArrays appear as Java source code in the output file ObjView generates, examObj.txt. ObjView also writes some Appearance information and other visual attributes to this file, which are accessed through the code's built-in function setMatColours(). This file's contents can be copied and pasted directly into your application's display class, which is typically a Canvas.

Listing One shows the output ObjView generates, with the VertexArrays abbreviated for clarity. The data is actually stored as plain Java arrays; the code's built-in methods getVerts(), getNormals(), and getTexCoords() load this data into VertexArrays. The built-in function setMatColours() sets the model's overall color to a light gray and its shininess to the maximum value of 128.0, which causes any specular highlights to be concentrated, rather than diffused.

// Methods for model in brocket.obj
  private short[] getVerts()
  // return an array holding Vertexes [2457 values / 3 = 819 points]
  {
    short[] vals = {
       15,-3,-56,     16,-3,-56,     16,-4,-60,
       16,-4,-60,     26,-8,-70,     15,-3,-56,
       ...
      -11,-2,-41,    -11,-5,-40,    -14,-4,-26 };
    return vals;
  }  // end of getVerts()

  private byte[] getNormals()
  // return an array holding Normals [2457 values / 3 = 819 points]
  {
    byte[] vals = {
      -110,-66,-6,   -70,105,-23,   -78,70,-73,
       -78,70,-73,   -89,-77,-51,  -110,-66,-6,
       ...
       92,85,23,     120,39,21,     99,79,19 };
    return vals;
  }  // end of getNormals()

  private short[] getTexCoords()
  // return an array holding TexCoords [1638 values / 2 = 819 points]
  {
    short[] vals = {
      160,72,   163,72,   164,68,
      164,68,   191,58,   160,72,
      ...
       86,87,    86,88,    78,102 };
    return vals;
  }  // end of getTexCoords()

  private int[] getStripLengths()
  // return an array holding the lengths of each triangle strip
  {
    int[] lens = {
      44, 28, 40, 76, 31, 69, 15, 85, 36, 8, 21, 22, 156, 44, 28, 40, 76
    };
    return lens;
  }  // end of getStripLengths()

  private Material setMatColours()
  // set the material's colour and shininess values
  {
    Material mat = new Material();
    mat.setColor(Material.AMBIENT,  0x00000000);
    mat.setColor(Material.EMISSIVE, 0x00000000);
    mat.setColor(Material.DIFFUSE,  0xFFCDCDCD);
    mat.setColor(Material.SPECULAR, 0x00000000);
    mat.setShininess(128.0f);
    return mat;
  } // end of setMatColours()
Listing One

Building the Model and Scene

Now it was time to write the MIDlet code. The makeGeometry() method (Listing Two) assembles the data into a model. It invokes the built-in methods to load the positions, normals, and texture coordinates into VertexArrays va1, normArray1, and texArray, respectively. Next, it adds the VertexArrays to a VertexBuffer. Finally, the code makes a TriangleStripArray, idxBuf1, which indexes the triangle strips stored in the VertexBuffer.

    private IndexBuffer idxBuf1;    // Indices to the model's triangle strips
    private VertexBuffer vertBuf1;  // The model's vertices, normals, and texture coords
    private void makeGeometry()
    {
/*  Read in the values for the vertices, normals, and texture coords.
    Create a VertexBuffer object for these arrays. Next read in
    the strip lengths array and place the indexes in the IndexBuffer.
    These point to the triangle strips stored in the vertex buffer.
    Depending on the model, there may not be any texture coordinates,
    and so the calls related to them should be commented out.
*/
// Make vertices from coordinate data
        short[] verts = getVerts();
        VertexArray va1 = new VertexArray(verts.length/3, 3, 2);
        va1.set(0, verts.length/3, verts);
// Make normals from data
        byte[] norms = getNormals();
        VertexArray normArray1 = new VertexArray(norms.length/3, 3, 1);
        normArray1.set(0, norms.length/3, norms);
// Make texture coords -- comment out this code if the model has none
        short[] tcs = getTexCoords();
        VertexArray texArray = new VertexArray(tcs.length/2, 2, 2);
        texArray.set(0, tcs.length/2, tcs);
// ------ Make the VertexBuffer for this model -------
        vertBuf1 = new VertexBuffer();
// fix the scale and bias to create points in range [-1 to 1]
        float[] pbias = {(1.0f/255.0f), (1.0f/255.0f), (1.0f/255.0f)};
                                                      // Set scaling
        vertBuf1.setPositions(va1, (2.0f/255.0f), pbias); // set up geometry
        vertBuf1.setNormals(normArray1);                  // Set up normals
// Comment out the following line if the model has no texture coords
        vertBuf1.setTexCoords(0, texArray, (1.0f/255.0f), null);
// Create the index buffer for the model. This info tells M3G how to
// parse triangle strips from the contents of the vertex buffer.
        idxBuf1 = new TriangleStripArray(0, getStripLengths() );
    }  // end makeGeometry()
Listing Two

However, all I've done so far is set up the model's surfaces and texture coordinates. Now I must describe the model's appearance and the scene it resides in. Starting with the appearance, the method setAppearance() (Listing Three, available electronically) first loads the texture image into an instance of Texture2D. Next, it specifies the mapping characteristics that control how M3G applies the image to the model's surfaces. The code finishes up by making an instance of Appearance and invoking setMatColours() to load the attributes generated by ObjView into it. This completes the model's assembly and appearance.
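
Listing Three isn't reproduced here, but a minimal sketch of such a method might look like the following. The filtering, wrapping, and blending settings are my illustrative assumptions, not necessarily those of the listing; note that M3G requires the texture image's width and height to be powers of two. (Image comes from javax.microedition.lcdui.)

    private Appearance appearance;
    private void setAppearance()
    {
        appearance = new Appearance();
        appearance.setMaterial(setMatColours());  // ObjView's Material
        try {
            // Load the texture image from the JAR and wrap it in an Image2D
            Image img = Image.createImage("/brocket2.png");
            Texture2D tex = new Texture2D(new Image2D(Image2D.RGB, img));
            // Specify how M3G maps the image onto the model's surfaces
            tex.setFiltering(Texture2D.FILTER_NEAREST, Texture2D.FILTER_NEAREST);
            tex.setWrapping(Texture2D.WRAP_CLAMP, Texture2D.WRAP_CLAMP);
            tex.setBlending(Texture2D.FUNC_MODULATE);
            appearance.setTexture(0, tex);
        }
        catch (java.io.IOException e) {
            // texture image missing; the model renders untextured
        }
    }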

To build a scene, you must establish a POV, set up the lighting, and add the model; see the createScene() method in Listing Four (available electronically). You can trace the steps that create a Camera and set its location in space. More M3G API calls create a Light, then set its type, color, and direction. (Rotating the Light to set its direction is only necessary for spotlights.) This method also initializes the model's transform, modelTrans, which rotates or translates (moves) the model. With the scene illuminated and its POV established, the code invokes makeGeometry() and setAppearance() to add the model. The last bit of code sets the background to a medium blue color.
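
Listing Four isn't shown here either, but its sequence can be sketched as follows. The field names, camera distance, light position and intensity, and colors are my assumptions:

    private Camera cam;
    private Light light;
    private Background background;
    private Transform camTrans   = new Transform();
    private Transform lightTrans = new Transform();
    private Transform modelTrans = new Transform();
    private void createScene()
    {
        // Establish the POV: a perspective Camera pulled back along +Z
        cam = new Camera();
        cam.setPerspective(60.0f,                     // field of view
          (float) getWidth() / (float) getHeight(),   // aspect ratio
          0.1f, 100.0f);                              // near/far clip planes
        camTrans.postTranslate(0.0f, 0.0f, 4.0f);
        // Create a white omnidirectional Light off to one side
        light = new Light();
        light.setMode(Light.OMNI);
        light.setColor(0xFFFFFF);
        light.setIntensity(1.25f);
        lightTrans.postTranslate(2.0f, 2.0f, 2.0f);
        // Assemble the model's geometry and appearance
        makeGeometry();
        setAppearance();
        // Set the background to a medium blue
        background = new Background();
        background.setColor(0x4080C0);
    }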

Displaying the Model

To manipulate the model, the method updatModelTrans() takes the cumulative rotation values in the variables xDegrees and yDegrees and applies a rotation transformation to the model's transform. This method also uses the cumulative values in xTrans, yTrans, and zTrans to apply a translation to the transform. Both the rotation and translation variables are modified by Canvas key event methods, as sketched below. Pressing the proper key on the phone's keypad increments or decrements the rotation and translation values. Pressing the FIRE game key performs a "reset" that clears these variables and returns the model to the origin.
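
A minimal sketch of that key handling, assuming int fields xDegrees and yDegrees and float fields xTrans, yTrans, and zTrans (the key assignments and step sizes are my choices):

    protected void keyPressed(int keyCode)
    {
        switch (getGameAction(keyCode)) {
            case LEFT:   yDegrees -= 10; break;   // rotate about the Y axis
            case RIGHT:  yDegrees += 10; break;
            case UP:     xDegrees -= 10; break;   // rotate about the X axis
            case DOWN:   xDegrees += 10; break;
            case GAME_A: zTrans += 0.5f; break;   // move the model closer
            case GAME_B: zTrans -= 0.5f; break;   // move the model away
            case FIRE:                            // reset to the origin
                xDegrees = yDegrees = 0;
                xTrans = yTrans = zTrans = 0.0f;
                break;
        }
        repaint();
    }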

To display the model, I use Canvas's paint() method to handle screen updates (Listing Five, available electronically). For 3D rendering, paint() first attaches (binds) the rendering engine's output to the Canvas, sets the camera and lighting, invokes Graphics3D's immediate-mode render() method on the model, and releases the Canvas. A set of M3G API calls positions the Camera and adds any active Lights. The model's orientation is updated with updatModelTrans(). The final step is to call render(), providing the rendering engine with the data sources in the model's VertexBuffer, IndexBuffer, Appearance, and transform.
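
Skipping error handling, the skeleton of such a paint() method looks roughly like this (lightTrans is the hypothetical Transform holding the Light's position):

    protected void paint(Graphics g)
    {
        Graphics3D g3d = Graphics3D.getInstance();
        g3d.bindTarget(g);                    // attach renderer to this Canvas
        try {
            g3d.clear(background);
            g3d.setCamera(cam, camTrans);     // establish the POV
            g3d.resetLights();
            g3d.addLight(light, lightTrans);  // add the active Light(s)
            updatModelTrans();                // fold in rotations/translations
            // Immediate-mode render of the model
            g3d.render(vertBuf1, idxBuf1, appearance, modelTrans);
        }
        finally {
            g3d.releaseTarget();              // always release the Canvas
        }
    }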

Lessons Learned

On my first attempt, I collided with Java ME's 32-KB code limit when I tried to display the model as a high-resolution (high polygon) model. I had to generate a low-resolution model to get the array sizes below this limit. You can use either Milkshape's DirectX Mesh Tools or Blender3D's Decimate modifier to reduce the polygon count of a model. Using both of these tools, I trimmed the original brocket model from 1728 surfaces to 518 surfaces before the demo MIDlet worked. When compared to the brocket model rendered in a 3D authoring program, the resulting display of the brocket model on the phone looks pretty good (Figure 5).


Figure 5: The brocket model rendered in Blender3D (left), and the same model data rendered in M3G 1.1 on a Java ME phone (right).

The fewer polygons a model uses, the faster it renders, leaving more processor cycles available for other game logic. In the interest of performance, it becomes a matter of "how low can you go" in shaving off polygons until the model's geometry becomes distorted. I was able to reduce brocket to 240 polygons, which is a huge savings in processor cycles when you consider that rendering each polygon requires many floating-point operations.

I've written a demo MIDlet (available electronically) that displays brocket in immediate mode using the techniques described here. The MIDlet also presents a menu so that you can pick a different POV, or switch to a second light with a different color and position. Because the model and environment are represented in code, the only resource I had to add to the MIDlet was the texture image, brocket2.png. The MIDlet has been tested on a Sony Ericsson K700, a Samsung A940, and a Sanyo MM-9000. The demo's source code comes with the ObjView output of the medium-polygon model (brocket16TriStrip.txt) and the low-polygon model (brocket18TriStrip.txt) for performance experiments.
