Java and Web-Executable Object Security

Our authors examine the general problems with Web-embedded executable objects and how Java's security model attempts to resolve them.


November 01, 1996


How safe is safe?

Michael Shoffner and Merlin Hughes

Michael and Merlin are developers for prominence.com. They can be contacted at shoffner@prominence.com and [email protected], respectively.


Even though most users don't write the software they use, they nonetheless trust their programs to do what they're supposed to do. Unfortunately, this assumption may not be valid when it comes to executable objects embedded in web documents. The Web represents a new distribution channel and execution environment (see Figure 1) and, as such, introduces an unfamiliar trust model and a new class of potential security risks for users.

Despite uncertainties, the potential of the web software-distribution model is practically limitless. Live web objects make possible everything from flashy animations to powerful client/server collaborative systems. Furthermore, users can access all of these services from their browsers without having to download separate software such as helper apps or plug-ins.

In this article, we'll focus on how Java's security model attempts to resolve general problems with Web-embedded executable objects. It's important to note that potential problems are not restricted to Java alone; any powerful "live" language for the Web raises the same concerns. It's also important to realize that, while Java's security model is very sophisticated, implementation bugs and design problems remain. Independent teams of security researchers, most notably the Princeton group (Drew Dean, Ed Felten, and Dan Wallach) and David Hopwood of Oxford University, are subjecting Java to formal scrutiny, with interesting results. Consequently, we'll cover some of the problems with the Java model that they have demonstrated.

Intrinsic Java Security

The Java language itself has intrinsic layers of compile-time and run-time security that prevent standalone applications and web-embedded objects (applets) from exhibiting undesirable and/or unauthorized low-level system behavior. This front-line security mechanism is complemented by additional restrictions imposed by the SecurityManager class in the run-time environment of Java-enabled Web browsers. The SecurityManager controls access to higher-level client resources such as the local file system and the network.

Java's intrinsic language and compiler layers of security address program validity. They attempt to ensure that all operations are valid and that code can't read or modify any information to which it shouldn't have access.

The Java language specification includes security provisions that are enforced by the compiler and subsequently verified by the run-time system. The responsibility of language-level security is to make it impossible for a piece of code to crash the run-time system or to access memory that has not been correctly allocated. Of course, it will always be possible to write incorrect code. It should not, however, be possible for this incorrect code to compromise the run-time environment.

Rather than requiring (or allowing) you to deal with the physical aspects of memory management associated with the use of pointers and direct memory allocation, Java abstracts references to objects into symbolic handles. The only way you can access an object is through a symbolic reference that is null or refers to an object that has been properly and successfully allocated. The absence of pointer arithmetic solves many of C and C++'s major reliability and security problems and protects you from many of the common errors associated with direct memory access. Java's object-reference design also prevents malicious programmers from abusing knowledge of the memory layout of certain systems.

Java's strict type checking makes it impossible to use a reference to an object of one type in place of a reference to an object of another, incompatible type. This prevents programmer errors and prevents code from accessing unallocated memory by claiming that a string is a huge array, for example. Similarly, Java's strong type checking ensures that it's impossible to turn an integer number into an object reference, or forge object reference types and perform incorrect accesses. This is of particular concern in cases such as a workstation setup, where the screen display is mapped into a fixed readable location in memory. If an address could be supplied directly or an array could be indexed beyond its allocated limit, it would be possible for an application to illicitly read the console screen. Many machines have sensitive information such as stacks, video, and I/O (for example, the keyboard buffer) publicly available in memory, so this is a serious concern.

All array indexing in Java is strictly bounds-checked, which prevents the all-too-familiar "undefined" behavior associated with overrunning array bounds. Writing over the end of an array and clobbering memory will always result in an error being reported in Java; in C or C++, the results are unpredictable and frequently hard to diagnose. Similarly, if a reference is cast to an incompatible type in Java, a run-time ClassCastException is thrown, whereas in C++ the results of such a misuse are unpredictable.
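To make the contrast concrete, here is a minimal sketch (the class and variable names are our own) showing how both checks turn would-be memory corruption into ordinary, catchable exceptions:

    // BoundsAndCasts: a minimal sketch (our own) showing how Java turns
    // would-be memory corruption into catchable exceptions.
    public class BoundsAndCasts {
        public static void main(String[] args) {
            int[] table = new int[4];
            try {
                table[10] = 42;                      // out of bounds: nothing is clobbered
            } catch (ArrayIndexOutOfBoundsException e) {
                System.out.println("Caught: " + e);
            }

            Object boxed = new Integer(7);
            try {
                String s = (String) boxed;           // incompatible cast, detected at run time
                System.out.println(s);
            } catch (ClassCastException e) {
                System.out.println("Caught: " + e);
            }
        }
    }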

Java's language specification has several other important characteristics that prevent potentially dangerous incorrect code. All new variables are initialized to a default state to hide existing memory contents. It is, however, a compile-time error to attempt to read from a demonstrably uninitialized local variable. Similarly, it's a compile-time error to include code that is unreachable by any feasible path of execution. Memory is automatically garbage collected when a program no longer holds any references to it, preventing memory leaks and removing the need for you to try to free unused memory.
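The small sketch below (again our own) illustrates these rules; the commented-out lines are ones javac would reject outright:

    // InitAndReachability: a small sketch (ours) of the rules just described; the
    // commented-out lines are ones the compiler would reject.
    public class InitAndReachability {
        static int counter;                  // fields start in a default state (0)

        public static void main(String[] args) {
            int local;
            // System.out.println(local);    // compile-time error: variable may be uninitialized
            local = counter + 1;
            System.out.println(local);

            // return;
            // System.out.println("gone");   // compile-time error: unreachable statement

            byte[] scratch = new byte[1024]; // newly allocated memory is cleared
            System.out.println(scratch[0]);  // always prints 0, never leftover contents
            scratch = null;                  // no references remain, so the garbage
                                             // collector reclaims the array automatically
        }
    }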

In addition to curtailing incorrect code, Java protects users: Since it's impossible to exploit the binding of object references to actual locations in memory, sensitive local system characteristics can't be determined by examining pointer values. More importantly, it's not possible to access memory that hasn't been correctly allocated. Also, any memory allocated by Java is automatically cleared, thereby removing sensitive information that might remain in newly allocated memory.

The Java Run-Time System

To achieve platform independence, Java programs are compiled to bytecode, the machine language of a virtual platform known as the "Java Virtual Machine." The Java VM is a run-time system consisting of a verifier, a ClassLoader, and an execution engine, all of which run on top of the local OS. For the sake of security, it is insufficient for the compiler and language specification to require that programs be safe for execution; a malicious or incorrect compiler could quite easily generate bytecode that violates the security dictates of the language. This is where Java's second layer of intrinsic security, the run-time system, comes in:

The Verifier. The Java run-time system will not execute arbitrary bytecodes. Once the binary data that constitutes a class has been loaded (from a local disk or the network), it's put through a screening process called the "bytecode verifier." The verifier checks that the bytecode does not violate any of the security tests specified by the language. In a correct implementation, the bytecode verifier can't be bypassed or altered. Using formal dataflow techniques, it checks that the bytecode is correct and therefore performs no unauthorized memory accesses, breaks no access restrictions, and performs no illegal type casting. The verifier also ensures that all necessary run-time checks are present in the bytecode, including stack and type cast checks.

Ideally, the bytecode verifier guarantees that the run-time system can safely execute any approved bytecode. Although the verification process is nontrivial, it need only be performed when a class is first loaded. Subsequently, the bytecode is guaranteed to be safe, and Java's memory-access restrictions mean that it is impossible for any Java program to modify approved bytecodes. The end result is that accepted bytecode is subject to all of the language-level security restrictions provided by a correct compiler.
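From the programmer's point of view, bytecode that fails verification never runs at all; the attempt surfaces as a java.lang.VerifyError. A minimal sketch (the class name com.example.Tampered is purely hypothetical) of what that looks like:

    // LoadChecked: a sketch (the class name com.example.Tampered is hypothetical)
    // showing that bytecode rejected by the verifier never executes; loading it
    // surfaces as a java.lang.VerifyError instead.
    public class LoadChecked {
        public static void main(String[] args) {
            try {
                Class<?> c = Class.forName("com.example.Tampered");  // verified as it loads
                System.out.println("Loaded " + c.getName());
            } catch (VerifyError e) {
                System.out.println("Rejected by the verifier: " + e.getMessage());
            } catch (ClassNotFoundException e) {
                System.out.println("No such class on the class path");
            }
        }
    }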

Unfortunately, the verifier is not as secure in practice as the Java specification asserts. The Princeton group has documented several significant problems and noted that the lack of a formal description of Java's type system makes it impossible to formally prove the correctness of the run-time's type verifier. Since the verifier can't be proven correct, its exact behavior for every possible set of bytecodes is uncertain. Of particular interest is an attack David Hopwood has described that makes it possible for a subclass of any nonfinal class to be constructed without security checks.

The ClassLoader. This is the final layer of intrinsic security invoked before the run time executes bytecodes. Every class in the Java language hierarchy fits into a naming scheme that guarantees it a unique name based on its source. For example, run-time restrictions on applets are enforced by a local class called SecurityManager. At execution time, the ClassLoader checks bytecodes to make sure they don't violate namespace restrictions, which means that the local SecurityManager class is distinguished from any class loaded from the network that calls itself SecurityManager. Otherwise, the network might serve a class intended as a spoof of a critical local system class.

The ClassLoader also partitions namespaces between classes from different network sources. The result is that classes from different sites can't interfere with each other as a result of name collisions. Without the ClassLoader, it might be possible for the compiler and bytecode verifier to ensure correct class usage within an application from one site, then have the run-time system resolve the class to one with the same name from another site, which would short-circuit the built-in language-integrity mechanisms.
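The following simplified sketch, written by us against the later findClass/defineClass ClassLoader API rather than the original one, is meant only to illustrate the idea of refusing to let downloaded bytecode stand in for system classes:

    import java.io.*;

    // GuardedClassLoader: a simplified sketch (ours, using the later
    // findClass/defineClass API) of the namespace rule described above. Names that
    // belong to the local system hierarchy are never resolved to downloaded bytecode.
    public class GuardedClassLoader extends ClassLoader {
        private final File codeDir;   // stand-in here for a network code source

        public GuardedClassLoader(File codeDir) {
            this.codeDir = codeDir;
        }

        protected Class<?> findClass(String name) throws ClassNotFoundException {
            if (name.startsWith("java.")) {
                // Core classes such as java.lang.SecurityManager must come from the
                // local system loader, never from bytecode we fetched ourselves.
                throw new ClassNotFoundException("refusing to shadow system class " + name);
            }
            try {
                File f = new File(codeDir, name.replace('.', '/') + ".class");
                byte[] code = new byte[(int) f.length()];
                DataInputStream in = new DataInputStream(new FileInputStream(f));
                in.readFully(code);
                in.close();
                return defineClass(name, code, 0, code.length);  // verification happens on use
            } catch (IOException e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }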

The ClassLoader has had its share of implementation problems as well. For example, David Hopwood found that the ClassLoader would load arbitrary code from an absolute path on the local file system if the first component of the package name was a "/". An attacker who could place code somewhere on the local file system could then get the run time to execute that code in trusted mode, since the verifier saw it as originating on the local file system. Trusted code is able to load and execute DLLs.

The most interesting application of this attack involved Netscape's use of a file-system cache. If a DLL is downloaded by an applet as data and Netscape caches that data on disk, an applet able to determine the relevant cache filename could then execute the imported machine code directly. This bug was fixed in JDK 1.0.1 and Navigator 2.01.

The Princeton group demonstrated a combination verifier/ClassLoader attack based on a shortcoming in Java's static typing mechanism. The verifier would not reject code that created a subclass of the ClassLoader and caught the resultant SecurityException. The modified ClassLoader was then able to load another class from the network that resolved to the system SecurityManager but had more permissive access modifiers.

Figure 2 illustrates the Princeton "Bypassed ClassLoader Attack." This attack works by creating a ClassLoader that resolves two references to the same class name to different classes: the "real" class and a "spoof" class with more permissive access modifiers. The two references are made in two separate classes. When an object of the "real" class is allocated in the first class and then passed to the second, the second class treats it as being of the "spoof" type. This allows an attacker to change the run-time hierarchy and call native code, a rather large hole indeed. A workaround for this bug was issued in Atlas Pr2 (3.0b3). JavaSoft indicates that this bug is fixed in JDK 1.0.2.

The Execution Engine. Because of the security checks in the compiler, verifier, and ClassLoader, the execution engine doesn't actively monitor code safety and so can operate relatively quickly. Since many checks simply can't be performed at compile time (array bounds checks, certain type casts, stack usage for recursive procedures, and so forth), the bytecode verifier ensures that the code that the execution engine executes contains all these self-checks.

The run-time execution engine was originally an interpreter, but just-in-time (JIT) compilers have recently become available as an alternative. JIT compilers translate bytecodes into optimized native machine code on the fly to increase execution speed. The Java VM is also being implemented in hardware by several manufacturers. Java's security model (see Figure 3) asserts that as long as bytecodes are executed by any correct means, security will still be maintained.

Applets, Web Browsers, and Potential Dangers

Java is currently used primarily to create applets for the Web. A new applet is created by subclassing the Applet class, compiling the new classes into bytecode files, and embedding them in an HTML page with the APPLET tag. Java-enabled browsers (such as JavaSoft's HotJava, IBM's Web Explorer, Netscape's Navigator 2.0 and later, and Microsoft's Internet Explorer 3.0) instantiate and execute the embedded applet inline in the browser. The applet then runs in the browser environment, subject to security restrictions built into the browser's Java run-time system.

Java's SecurityManager object determines the level of I/O access for a given Java object. A standalone Java application can do whatever you want in terms of networking and file access because the standalone has a null SecurityManager. An applet embedded in a Web page is an entirely different matter because applets are instantiated within the browser's Java run-time object hierarchy. For this reason, the browser's built-in SecurityManager determines the applet's I/O capabilities and the applet programmer has no means to override the browser's settings. This property proves useful because it allows browser developers to set applet I/O policies to handle the potential threats inherent in unrestricted I/O and resource access.
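A minimal sketch of that arrangement (the class and method names here are our own, not part of any browser API): code consults whatever SecurityManager is installed before touching a protected resource, and a standalone application simply finds none installed.

    // GuardedOps: a sketch (class, method, path, and host names are ours) of the
    // pattern the core libraries follow. Before touching a protected resource, code
    // asks the installed SecurityManager; a standalone application finds none (null),
    // while an applet runs under the browser's SecurityManager, which may refuse.
    public class GuardedOps {
        static void readFile(String path) {
            SecurityManager sm = System.getSecurityManager();
            if (sm != null) {                 // null for a typical standalone application
                sm.checkRead(path);           // throws SecurityException if disallowed
            }
            System.out.println("Read access to " + path + " permitted");
            // ...the actual file I/O would happen here...
        }

        static void connect(String host, int port) {
            SecurityManager sm = System.getSecurityManager();
            if (sm != null) {
                sm.checkConnect(host, port);  // the browser's networking policy applies here
            }
            System.out.println("Connection to " + host + ":" + port + " permitted");
        }

        public static void main(String[] args) {
            readFile("/tmp/example.txt");     // hypothetical path, for illustration only
            connect("www.example.com", 80);
        }
    }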

Sockets and Networking. Like any network-oriented language, Java provides mechanisms to create and utilize TCP/IP sockets. Allowing nontrusted software access to sockets is a tricky proposition at best, so browser SecurityManagers impose varying restrictions on socket use. The classic worry in this area is the SMTP port and its associated fakemail and denial-of-service weaknesses, but more sophisticated distributed attacks also are possible if no restrictions are imposed on sockets.

In a distributed SMTP attack, for example, a cracklet could check a random IP on a given subnet for an SMTP port, and if there is one, probe for known bugs based on the MTA's version number. A similar cracklet could connect to a host's port 23 to try root login with random passwords. If a hidden (or Trojan) cracklet loads on a different client with every hit on a high-traffic page, there could conceivably be tens of thousands of copies simultaneously hammering a given subnet from a variety of places, probing for weaknesses and reporting back to an arbitrary host. Such attacks would be hard to trace, since they would appear to originate from all over the network.

The masquerading problem is a related threat. A Trojan applet could serve as a trail fixer for a cracker by loading in a browser, alerting the cracker at his machine, and proceeding to bounce a Telnet session from the cracker to a target machine. When the target's log entries are examined, the machine running the browser shows up as the guilty party, and the target site has the log entries to prove it.

A less malevolent and far more intriguing hidden or Trojan applet could use client machines to crunch numbers for other distributed tasks. Obviously, the server would incur coordination overhead, but for certain classes of problems, such an approach might prove fruitful. These attacks may seem a bit esoteric, but on a network as large as the Internet, one has to keep such possibilities in mind, even though they are not representative of the general case.

Sockets and Firewalls. Applets with unrestricted socket privileges also hold dangers for firewalls that allow unrestricted outbound connections. When a cracklet comes across a firewall through a proxy and runs, it could try to open a socket back across the firewall. If this operation succeeds, the cracklet could start "wardialing" behind the firewall, checking for interesting information and forwarding it to some arbitrary host in the outside world.

Threads. Access to client threads is another potential security problem. Applets obviously shouldn't be able to alter system threads. Likewise, applets should have no modify access to the threads of other applets; otherwise, those threads are vulnerable to degradation. For instance, available electronically is an applet that exploits Java's unprotected execution space: When the program runs, it kills all other active applets while protecting itself against being killed.
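The deliberately harmless sketch below (our own) shows the check that is supposed to prevent such tampering: before a thread can be modified, Thread.checkAccess consults the installed SecurityManager, which can veto the attempt with a SecurityException.

    // ThreadProbe: a deliberately harmless sketch (ours) of the thread-access check.
    // It enumerates every thread it can see and asks, via Thread.checkAccess, whether
    // the current code would be allowed to modify it; under a browser's SecurityManager
    // the answer for system and foreign threads should be a SecurityException.
    public class ThreadProbe {
        public static void main(String[] args) {
            ThreadGroup root = Thread.currentThread().getThreadGroup();
            while (root.getParent() != null) {
                root = root.getParent();             // climb to the top-level thread group
            }
            Thread[] threads = new Thread[root.activeCount() * 2];
            int n = root.enumerate(threads, true);   // list all threads visible to us
            for (int i = 0; i < n; i++) {
                try {
                    threads[i].checkAccess();        // consults SecurityManager.checkAccess(Thread)
                    System.out.println("May modify: " + threads[i].getName());
                } catch (SecurityException e) {
                    System.out.println("Protected:  " + threads[i].getName());
                }
            }
        }
    }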

File systems. It's pretty clear that an applet with unlimited write permissions on a file system is unsafe, but what about read-only access? The friendly applet that displays an LED sign, for example, could easily contain code that tries to read sensitive files, such as /etc/passwd on UNIX systems. If a user carelessly runs the browser as root, the contents of the entire file system are available to be streamed off to the outside world.
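A hedged sketch of that scenario (the applet name is ours): inside a browser, the FileReader constructor ends up in SecurityManager.checkRead and the attempt fails with a SecurityException; run the same code with no SecurityManager installed and the file comes right back.

    import java.applet.Applet;
    import java.io.*;

    // SnoopApplet: a sketch (ours) of the read-only threat described above. Inside a
    // browser, constructing the FileReader triggers SecurityManager.checkRead and the
    // read fails with a SecurityException; without a SecurityManager, the same code
    // happily returns the file's contents.
    public class SnoopApplet extends Applet {
        public void start() {
            try {
                BufferedReader in = new BufferedReader(new FileReader("/etc/passwd"));
                showStatus("First line: " + in.readLine());
                in.close();
            } catch (SecurityException e) {
                showStatus("Blocked by the browser: " + e.getMessage());
            } catch (IOException e) {
                showStatus("I/O error: " + e.getMessage());
            }
        }
    }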

System Resources. Denial-of-service attacks based on hogging resources (such as memory), usually mounted by what are called "hostile applets," are possible as well. These attacks are hard to prevent because they involve abuse of legitimate resources. Luckily, they don't generally do any damage more serious than ruining the running copy of the browser, although in extreme cases they may crash the machine before they can be stopped.
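A deliberately tiny sketch (our own) of why such attacks are hard to distinguish from legitimate work; every statement here is an ordinary allocation:

    import java.util.Vector;

    // MemoryHog: a deliberately tiny sketch (ours) of why resource-based denial of
    // service is hard to police: every statement here is an ordinary, legitimate
    // allocation, yet together they exhaust the heap of the hosting run time.
    public class MemoryHog {
        public static void main(String[] args) {
            Vector blocks = new Vector();
            try {
                while (true) {
                    blocks.addElement(new byte[1 << 20]);  // keep 1-MB blocks reachable
                }
            } catch (OutOfMemoryError e) {
                int allocated = blocks.size();
                blocks.removeAllElements();                // free the heap so we can report
                System.out.println("Heap exhausted after roughly " + allocated + " MB");
            }
        }
    }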

Security Policies of Java-Enabled Browsers

By now it's more than obvious that resource access for applets has to be handled carefully: Too much restriction means useless applets, but too little means potentially dangerous ones. Java-enabled browsers walk this line by implementing security policies of varying "paranoia levels." Regardless of the paranoia level, no browser currently allows applets from the Net to load libraries, define native method calls, modify system threads, or start processes on the client.

Netscape Navigator 2.0 and Atlas (3.0). Netscape's Navigator/Atlas browser is extremely stringent with regard to I/O permissions. URLs and sockets may be opened only to the server that is the applet's source (as specified in the CODEBASE portion of the APPLET tag). Navigator turns off all disk access and allows read-only access to nonsensitive system properties. Applets don't persist on the client; they die when the browser's thread dies. Netscape's security restrictions may mean "lame" applets, but at least they're fairly harmless.
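As a concrete, hedged sketch of what this policy means to an applet programmer (the applet name and port 7777 are our own), the following applet connects only back to the host named in its own CODEBASE and treats a SecurityException as a policy refusal rather than a programming error:

    import java.applet.Applet;
    import java.io.*;
    import java.net.Socket;

    // PhoneHomeApplet: a hedged sketch (the applet name and port 7777 are ours) of
    // the one connection Navigator permits: a socket back to the applet's own
    // CODEBASE host. Any other destination makes the Socket constructor throw
    // SecurityException.
    public class PhoneHomeApplet extends Applet {
        public void start() {
            String home = getCodeBase().getHost();   // the server the applet was loaded from
            try {
                Socket s = new Socket(home, 7777);   // SecurityManager.checkConnect runs here
                PrintStream out = new PrintStream(s.getOutputStream());
                out.println("hello from the applet");
                s.close();
            } catch (SecurityException e) {
                showStatus("Refused by the browser's security policy");
            } catch (IOException e) {
                showStatus("Network error: " + e.getMessage());
            }
        }
    }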

HotJava. JavaSoft's HotJava takes a less severe approach. HotJava's paranoia level is user-configurable via an interface screen. Applet network permissions can be set to one of three security modes: "No access," "Applet host," or "Unrestricted." No access turns off applet networking completely. Applet host allows applets to connect to their originating servers. Unrestricted does the obvious, which is strongly discouraged.

An interesting configuration option that JavaSoft is investigating for a future release is the "Firewall" setting, which would allow users to specify a firewall host boundary. Applets from within this scope would be able to access any URL, but applets outside of the firewall could access only URLs which are also outside. HotJava also implements Access Control Lists (ACLs) to set read/write permission on files. If the file is not on the client ACL, applets may not access it in any way.

Bugs Galore?

JavaSoft and Netscape, private security consultants, and independent researchers have been working to expose problems with Java's security model. An impressive analysis from the Princeton group identifies several important implementation bugs as well as design problems. The most fundamental shortcoming that Princeton points out is that a given implementation can't be verified because there is no formal security specification. In their paper presented to the IEEE, the Princeton group calls for a significant low-level redesign of the language based on a formal specification.

Future Directions

Signature/authentication APIs for Java are on the way from JavaSoft. Authentication will solve a lot of trust-related problems, because the browser will be able to determine the applet's true source and issue permissions accordingly, as shown in Figure 1. Properly authenticated applets from trusted hosts will be able to enjoy more privileges than untrusted applets. Interestingly, the Java development team has stated that even with the introduction of trusted applets, they will still work to "expand the functionality of unauthenticated applets without compromising security."

Conclusion

A stated guiding principle of Java's development engineers is that "using a Java enhanced browser should be no more risky than using a non-Java enhanced browser." Java and Java-enabled Web browsers strive to protect users while still enabling useful applications.

The algorithms and source for Java's security mechanisms are open to the scrutiny of the Net at large, so that when security bugs or other flaws exist, they can be exposed and remedied quickly. Java's security model is impressive, but analysis has revealed the need for improvements. Hopefully JavaSoft will quickly take the steps necessary to make Java a more airtight system.

Figure 1: Models of software distribution.

Figure 2: The Princeton Bypassed ClassLoader Attack.

Figure 3: The Java security model.
