The Android Mobile Phone Platform

Android is an open-source mobile phone stack developed by Google for the Open Handset Alliance.


September 04, 2008
URL:http://www.drdobbs.com/mobile/the-android-mobile-phone-platform/210300551

"I have always wished that my computer would be as easy to use as my telephone. My wish has come true. I no longer know how to use my telephone."

—Bjarne Stroustrup

At 25 years of age, you might think that the mobile phone has grown up. However, it's about to undergo some dramatic changes—a delayed adolescence, if you will. Specifically, its software is about to undergo an overhaul. New, upstart players in the market want to revamp its entire software stack, with attention on its UI. Such changes will be welcomed by mobile phone users, because—as Bjarne Stroustrup attests—many of these devices have uniformly awful UIs. It's not altruism on these companies' part that motivates this effort, but a lucrative market of more than a billion subscribers that's growing rapidly.

With the launch of its iPhone in 2007, Apple quick-started the mobile phone's makeover. While the existing players—Nokia's Symbian, Sun's Java Micro Edition (Java ME, formerly known as "Java 2 Micro Edition," or J2ME), and Microsoft's Windows Mobile—aren't standing still, such a disruptive change also provides an opportunity for new players to enter the field. Google has done just that with Android, its own mobile phone platform (code.google.com/android). Android uses Java and its application frameworks are Java-based, which offers the intriguing possibility of porting field-tested Java ME applications to it.

This Is the Droid You're Looking For

Android is a mobile phone stack developed by Google for the Open Handset Alliance (www.openhandsetalliance.com). Android is open source, and draws on open-source projects such as Linux, WebKit, SQLite, and FreeType to implement its core services.

Figure 1 shows the Android software stack. At the stack's bottom, closest to the silicon, a Linux 2.6 kernel provides preemptive multitasking and system services such as threading, networking, and memory and process management. It manages all low-level drivers and acts as a Hardware Abstraction Layer (HAL). Any low-level device access should go through the Android frameworks rather than Linux command-line utilities.

Above the kernel is the Android runtime and support libraries layer. The runtime consists of the Dalvik Virtual Machine (DVM) and core libraries. The DVM is a bytecode interpreter that provides device independence for Android applications, similar in purpose to Java ME's JVM. Like the JVM, Google's DVM is optimized for embedded systems. For example, it uses register-based calls and storage, reducing the overhead incurred using stack operations. Furthermore, the core libraries that implement Java capabilities are written in native code. Importantly, a device can execute multiple instances of the DVM, with each running an Android process. Because each Android application executes in its own process, a device can therefore execute multiple Android applications concurrently. However, it's important to note that Android is optimized for use on a single screen.

Figure 1: Android software stack.

The support libraries are written in C/C++ and provide a range of system services. All are from open-source projects: SQLite provides lightweight relational database services, SGL handles 2D vector graphics, and WebKit (which is at the heart of Apple's Safari browser) implements a web browser engine. An Android application can access the stack's filesystem, but not the files used by other Android applications.

The Application Frameworks occupy the next layer on the stack. These are Java-based, and provide a rich set of services for a mobile application. For example, there are Views for building a UI (lists, buttons, text boxes, and an embeddable browser), Content Providers that let applications share data, a Resource Manager that handles strings, graphics, and layout files, a Notification Manager that posts alerts to the status bar, and an Activity Manager that controls the application lifecycle.

These frameworks let you, through inheritance, either modify a class's behavior or extend its capabilities.

Android's Application Layer will be populated with applications, including a web browser, calendar, e-mail client, and map display—all written in Java.

Mobile Java: The Next Generation

When you compare the lifecycle of Java ME Mobile Information Device Profile (MIDP) applications to Android applications, differences stand out.

Figure 2 shows the lifecycle for a MIDP application (a "MIDlet"). A typical mobile phone of that era might have a black-and-white 96×54 pixel screen, with 128 KB of RAM and 256 KB of nonvolatile memory. On such a constrained platform, only one MIDlet executes at a time. A MIDlet spends most of its lifecycle shuttling between two states—active, where it does work, and paused, where the phone switches the MIDlet out so that it can handle a call. However, a MIDlet isn't completely inactive in the paused state: it can use timers to schedule work, and create background threads to execute tasks that don't require the UI.


Figure 2: Lifecycle for a MIDP application.

One problem when writing a MIDlet is that its startApp() method is called every time the phone returns it to the active state. This complicates the MIDlet's design, because you often place initialization code in startApp(). The reason for locating that code there (rather than in the MIDlet's constructor) is that the specification requires the stack to throw a fatal exception if an error occurs in the constructor. This behavior rules out recovering gracefully or displaying a message that tells users what went wrong. Many MIDlets therefore set a boolean flag so that subsequent calls to startApp() bypass the initialization code. Another complication is that there's no set mechanism to preserve the state of a complex MIDlet when it is paused. You must save the values of key variables into its Record Management System (RMS) yourself when pauseApp() is invoked, then restore those values when the MIDlet becomes active.
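The flag-guarded initialization just described can be sketched as follows. Note that the MIDP base class is stubbed out here (an assumption, so the example stands alone); a real MIDlet extends javax.microedition.midlet.MIDlet and obtains its Display the usual way.

```java
// Sketch of the guarded-initialization idiom for startApp(). The MIDP
// base class is a stub so the example is self-contained; a real MIDlet
// extends javax.microedition.midlet.MIDlet instead.
abstract class MidletStub {
    protected abstract void startApp();
    protected abstract void pauseApp();
}

class GuardedMidlet extends MidletStub {
    private boolean initialized = false;
    int initCount = 0;  // exposed only so the demo can observe the guard

    @Override
    protected void startApp() {
        if (!initialized) {
            // One-time setup: load resources, build screens, open the RMS.
            // Doing it here (not in the constructor) lets the MIDlet show
            // an error screen instead of dying with a fatal exception.
            initCount++;
            initialized = true;
        }
        // Per-activation work: repaint the screen, restart timers, etc.
    }

    @Override
    protected void pauseApp() {
        // Save key variables to the RMS here.
    }
}
```

Because the guard trips after the first call, a pause/resume cycle reinvokes startApp() without repeating the setup.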

Figure 3 shows the lifecycle of an Android application, termed an "Activity." The diagram reflects the more powerful capabilities of the hardware platform. Nowadays, high-end phones often ship with 1 GB of RAM and up to 16 GB of Flash memory. They have larger color screens (240×320 is now common), faster processors, and features like cameras and GPS. The extra states in an Activity's lifecycle not only address some limitations of the MIDP specification, but also support the concurrent execution of Activities.


Figure 3: Lifecycle of an Android application.

An Activity's lifecycle starts with a call to onCreate(), which handles set-up chores and static global initializations. The onStart() method begins the Activity's visible lifetime, where it presents its UI window on-screen. Invoking onResume() has the Activity begin interactions with users. The application is in the foreground and active state. If another application must come to the foreground, Android calls the onFreeze() method to save the Activity's state. To this end, Android provides support methods that let you easily store and retrieve various data elements in objects called "Bundles." After you save the Activity's context, onPause() is called. The Activity stops any animations, commits its variables to persistent storage, and enters the paused state. At this point, the other Activity's UI window can come to the foreground and interact with users.

What happens to the Activity next depends on several things. If the foreground application only partially obscures the screen, the Activity hovers in the paused state, as users might bring it back to the foreground at any moment. If the foreground application's window completely covers the screen, onStop() is called and the Activity enters the stopped state. This state is similar to the paused state, except that for low-memory situations, Android might dispose of its window. If the Activity returns to the foreground later, calls to onRestart() and onStart() restore the window and UI. In very low memory situations, Android might kill the Activity to release memory. If users select the Activity again, it must be completely restarted and its context restored from saved Bundles. As Figure 3 illustrates, the additional methods and states let the Activity cleanly resume execution whether it's been paused, stopped, or destroyed.
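The sequence of callbacks described above can be sketched as plain Java. The ActivityStub base class and the driver are stand-ins (assumptions) for android.app.Activity, which normally invokes these methods itself; the method names match the early-SDK lifecycle discussed here.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the early-SDK Activity lifecycle callbacks. ActivityStub and
// the driver below are stubs; on a device, the Android framework calls
// these methods on your Activity subclass.
abstract class ActivityStub {
    final List<String> log = new ArrayList<String>();
    protected void onCreate()  { log.add("onCreate");  }  // set-up chores
    protected void onStart()   { log.add("onStart");   }  // window becomes visible
    protected void onResume()  { log.add("onResume");  }  // foreground, active
    protected void onFreeze()  { log.add("onFreeze");  }  // save state to a Bundle
    protected void onPause()   { log.add("onPause");   }  // paused state
    protected void onStop()    { log.add("onStop");    }  // stopped state
    protected void onRestart() { log.add("onRestart"); }  // returning from stopped
}

public class LifecycleDemo {
    public static List<String> run() {
        ActivityStub a = new ActivityStub() {};
        a.onCreate(); a.onStart(); a.onResume();   // launch
        a.onFreeze(); a.onPause();                 // another Activity comes forward
        a.onStop();                                // its window covers the screen
        a.onRestart(); a.onStart(); a.onResume();  // user returns to this Activity
        return a.log;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Running the driver prints the callback order for one pause/stop/restart round trip, mirroring the transitions in Figure 3.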

Test Ride

The Android SDK is a free 80-MB download (code.google.com/android/documentation.html). The preferred developer toolset is Eclipse 3.3 (Europa), which also requires the JDT plug-in, Java Development Kit (JDK) 5 or 6, and optionally the Web Standard Tools (WST) plug-in.

I installed the software on a Dell Latitude with a 2.4-GHz Mobile Pentium 4 CPU equipped with 512 MB of RAM. The SDK documentation has a step-by-step procedure for installing the necessary software and getting the requisite "Hello World!" sample Android application up and running. The integration between the Eclipse IDE and the Dalvik emulator that implements the Android environment and executes your Android application code is very good.

Because I'm always dealing with software rocket science, an application that moved a rocket around the screen seemed an ideal project to explore Android's features and quirks. Google's example Android code was a trove of information. I borrowed some code from its Lunar Lander game and other examples to get started.

To write Android applications, you extend an Activity and add code that implements your design (as in Java ME). You soon appreciate onCreate(), where you can place initialization code with error recovery. This is also where you create an instance of a View that manages your application's window with its UI.

However, you can't do much with a View until you understand how to use Android's XML layout file to create the visual objects and interactive widgets that are children of the View. This layout file contains custom XML elements that represent widgets—buttons, text-entry boxes, and labels. Because widget placement and the flow of events among widgets are hierarchical, you arrange the screen layout and the position of its widgets by nesting the XML tags. The XML is saved into a file in the res/layout/ directory. When you build the Android application, it compiles the layout file's XML elements into resources that you reference by ID number.
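A minimal layout file of this kind might look like the sketch below. The widget IDs and text strings are made up for illustration; the element and attribute names follow the early SDK's conventions.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/layout/main.xml: a vertical column holding a label above a button.
     The nesting order controls screen position; each android:id value
     becomes an integer resource ID that code can look up. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <TextView
        android:id="@+id/status_label"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Ready" />

    <Button
        android:id="@+id/launch_button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Launch" />
</LinearLayout>
```

In the Activity, setContentView() inflates a layout resource such as this one, and findViewById() retrieves an individual widget by its generated ID.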

To create its UI, the Activity first assigns its visual content to a View that uses your XML file. Then it makes an instance of the View, using ID numbers to obtain the appropriate widget resources that comprise the layout. Android's use of XML to organize the UI layout lets you—within limits—tinker with the interface without modifying code in the View. It would be nice to have a visual editor that gave you a better idea of how changes to the XML nesting affect the UI's layout.

I made a simple layout that filled the screen and presented a background image. Using Blender3D, I took the 3D model of the Dr. Dobb's rocket that I used for a previous article (www.ddj.com/mobile/193104855) and rendered it into a 2D PNG bitmap image. Processing key presses with Android's key event handlers felt similar to those in a MIDlet, where only the method names and some arguments had changed. It didn't take long to have a rocket scooting around the screen. The program, SpaceActivity, is available online at www.ddj.com/code/.

One Android shift that tripped me up was getting the screen's size—values that are often crucial when designing a program's gameplay. In a MIDlet's constructor, you simply called getWidth() and getHeight() for this information. Calling the same methods in Android's View constructor always retrieved values of zero. The reason is that View doesn't know how the screen is oriented (which affects the View's dimensions) until onStart() makes it visible. When this occurs, Android immediately invokes an onSizeChanged() method. This method provides values for the screen's visible dimensions in its new orientation. You override this method when you want to force the View to redraw the screen for the new orientation, but for my application I just grabbed the height and width values. Listing One compares how to determine the mobile phone's screen dimensions in Java ME and Android.

Listing One
(a) Java ME

class DrawScreen extends Canvas implements CommandListener {
    private int width;
    private int height;
    DrawScreen() {  // Constructor
        width = getWidth();
        height = getHeight();
    }
}

(b) Android


class SpaceView extends View {
    public SpaceView(Context context, AttributeSet attrs, Map inflateParams) {
        super(context, attrs, inflateParams);
    } // constructor

    // This method is called at startup or when the screen changes.
    // w, h -- current screen's width and height
    // oldw, oldh -- previous screen's width and height
    @Override
    protected void onSizeChanged(int w, int h, int oldw, int oldh) {
        super.onSizeChanged(w, h, oldw, oldh);
        mX = w;  // Get screen's width
        mY = h;  // Get screen's height
    } // onSizeChanged
}

My second programming test was to port a basic Java ME scribbling program that uses the touchscreen. The Java ME program starts by displaying a blank image. When you touch and draw on the screen with a stylus, the program draws a colored line that tracks the stylus. The MIDlet uses pointerPressed(), pointerDragged(), and pointerReleased() to implement the tracking/drawing operations. A key press paints the image buffer with white, erasing it.

The first difference in porting to Android was writing an XML layout file to support the View's generation of a UI window. The second was managing the Bitmap object that both stored and displayed the results of the scribbling. I overrode the onSizeChanged() method so that if the screen's orientation changed, a new Bitmap would be allocated. Also, to guarantee that the work of art you're creating is safely preserved when another application seizes the foreground, I added code to the onFreeze() method to save the Bitmap into a Bundle, termed an "icicle."

The third difference was that Android uses one lone method, onTouchEvent(), to handle all touchscreen interactions. Fortunately, you can easily break out the types of stylus actions—down, up, and drag—with a switch statement. I used the switch statement to call slightly modified pointerPressed(), pointerDragged(), and pointerReleased() methods. Although onTouchEvent() places all motion events into an array that lets you replay the pointer's entire path, the original pointerDragged() worked by capturing the new point and drawing a line to the previously saved point, and this behavior worked in Android on the first try. The only changes I made were to how colors were assigned to the event. See Listing Two online for a comparison between the Java ME and Android touchscreen code. The original Java ME program, Simple_Scribble, and the Android application, ScribbleActivity, are also available online.
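The switch-based dispatch just described can be sketched as follows. The MotionEventStub class and its action codes are stand-ins (assumptions) for android.view.MotionEvent, which defines ACTION_DOWN, ACTION_MOVE, and ACTION_UP constants similarly; the handler bodies are reduced to markers so the routing itself is visible.

```java
// Sketch: routing Android's single onTouchEvent() callback to MIDP-style
// pointerPressed/pointerDragged/pointerReleased handlers. MotionEventStub
// is a stub for android.view.MotionEvent so the example runs standalone.
class MotionEventStub {
    static final int ACTION_DOWN = 0;
    static final int ACTION_UP   = 1;
    static final int ACTION_MOVE = 2;
    final int action;
    final float x, y;
    MotionEventStub(int action, float x, float y) {
        this.action = action; this.x = x; this.y = y;
    }
    int getAction() { return action; }
}

class ScribbleDispatcher {
    String lastHandler = "";  // recorded so the demo can observe the routing

    // One entry point replaces MIDP's three pointer callbacks.
    public boolean onTouchEvent(MotionEventStub event) {
        switch (event.getAction()) {
            case MotionEventStub.ACTION_DOWN:
                pointerPressed((int) event.x, (int) event.y);
                return true;
            case MotionEventStub.ACTION_MOVE:
                pointerDragged((int) event.x, (int) event.y);
                return true;
            case MotionEventStub.ACTION_UP:
                pointerReleased((int) event.x, (int) event.y);
                return true;
        }
        return false;  // event not consumed
    }

    // In the real port these draw into the Bitmap; here they just record.
    private void pointerPressed(int x, int y)  { lastHandler = "pressed";  }
    private void pointerDragged(int x, int y)  { lastHandler = "dragged";  }
    private void pointerReleased(int x, int y) { lastHandler = "released"; }
}
```

Returning true from onTouchEvent() tells Android the event was consumed, so no further dispatch is needed.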

Lessons Learned And Looking Forward

If you have Java ME experience, you'll find a gentle learning curve when writing Android code. Most of the curve lies in discovering which classes have familiar methods, and in learning Android's quirks. Using XML for screen layout, which reduces the edit-build-execute cycles needed to get the UI looking right, is a big plus. Android, like the iPhone, has the potential to change the way we work with mobile phones—maybe even make all of their features useful.


Tom is the head of Proactive Support for embedded products at Freescale Semiconductor. He has written documentation and demo programs for several mobile APIs. He can be reached at [email protected].
