Porting UNIX to the 386: Language Tools Cross Support
by William Frederick Jolitz and Lynne Greer Jolitz

Bill was the principal developer of 2.8 and 2.9BSD and was the chief architect of National Semiconductor's GENIX project, the first virtual memory microprocessor-based UNIX system. Prior to establishing TeleMuse, a market research firm, Lynne was vice president of marketing at Symmetric Computer Systems. Bill and Lynne conduct seminars on BSD, ISDN, and TCP/IP. Send e-mail questions or comments to lynne@berkeley.edu. Copyright (c) 1991 TeleMuse.


We stated last month that "Projects of great complexity are always uncertain" and then we went on to develop our standalone system. Now we must examine what we accomplished. Recall that last month, we started with an empty 386 residing in protected mode without one shred of reliable code: just three little PC utilities to facilitate software loading and bootstrap operation. Using our protected-mode program loader, we created a minimal 80386 protected-mode standalone C programming environment for operating systems kernel development work. Then we wrote prototype code for various kernel hardware support facilities. Finally, we used our standalone programming environment as a testbed to shake out the bugs in our first-cut implementation of kernel 386 machine-dependent code in preparation for incorporation in the BSD kernel. Following our specification methodology, we created a suitable standalone system and conquered a number of latent software bugs and misconceptions.

With our standalone system, we have essentially established the "base camp" on our 386 expedition. We now possess much of the "gear" (utilities, compiler and assembler, and other equipment) required for such an adventure, but we must check it out and test it prior to actual use. As any good mountaineer knows, thorough knowledge of your equipment could save your life. In this case, an adherence to appropriate testing and coding procedures could save a project.

As we stated earlier, the standalone system can be viewed at this stage as if it were the kernel itself, with the extensions as the basis of our prototype kernel code. We now continue up the base of the mountain, furthering our initial utilities development through the creation of a stable cross-tools environment.

Why Develop Cross-Tools?

We have mentioned little about our protected-mode software generation mechanism in previous articles. In this article, we describe our set of tools that allows us to port 386BSD. Since we don't have 386BSD to generate 386BSD (yet), we must use another UNIX host to run the tools and generate protected-mode software; this "cross" mode operation is part of the means by which we bootstrap 386BSD. In our case, the cross-host that runs the software generating 386 code isn't even a 386!

Because the computer we use to generate the software is not the one that runs it, we will need a means to load files and programs over Ethernet and serial lines onto the target 386 system. We will then focus on proving GCC itself valid for cross-support purposes. The mechanisms used for this "first assault" will be of great importance until we have developed a stable native environment. Careful preparation in this area will allow us to weather the blinding "blizzards" of bugs which will inevitably arise on our way to the top.

386BSD Cross-Tools Goals

A proper evaluation of our cross-tools was crucial to the successful generation of the earliest version of 386BSD--before the system had the ability to generate its own binaries. While everyone always wants to use the very best tools possible in all cases, we decided that what we wanted from our cross-tools was simply to be able to generate enough of an operational BSD kernel and utilities to run our language tools in a native environment. Ultimately, we want to use native tools because they are more convenient, have a shorter "compile-edit-debug" cycle, are easier to support (for example, just one architecture to worry about), and use much of the traditional program development aids provided in BSD UNIX.

In a nutshell, BSD, like most UNIX systems, expects to be developed in a native environment.

As such, our principal concern at this stage is with correctness, not optimization. Performance considerations arise only after we achieve an operational system that can be refined using traditional means. This first "bootstrap" version of utilities and kernel is compromised in areas where our cross-support mechanisms are weakest. However, if these areas are carefully chosen, we can jettison them when we "go native."


Both the kernel and early utilities are predominantly written in C, with some assembler support. Before a self-supporting kernel exists, approximately 250,000 lines of C code must be made operational via the cross-support. The chance of encountering compiler bugs, or cross-support-induced bugs, along the way is a near certainty.

What's in the Tool Chest?

Our tool chest of cross-tools consists of the following:

The C Compiler: The bulk of our effort is organized around the C compiler. For the 386BSD project, we relied upon the Free Software Foundation's (FSF) GCC compiler, version 1.34. At the beginning of this port, we had little familiarity with the strengths and weaknesses of GCC. We were also uncertain about its usefulness as an operating systems development tool, as it had appeared primarily alongside other 386 C compilers on extant System V UNIX systems. Unfortunately, we cannot supply code fragments from GCC (or any other FSF software) in this article due to constraints of the "copyleft" (see the accompanying text box entitled "Brief Notes: Copyrights, Copylefts, and Competitive Advantage").

386 Protected-mode Assembler: The remaining 386 code, particularly the code used for interfacing to non-C mechanisms and data structures needed to support i386 and ISA hardware functionality, was written in assembly language. The FSF's GAS assembler was used for this purpose, more out of need than preference. The great majority of problems we encountered with the port were traced to "hidden surprises" and "features" in GAS, which we bypassed with clever use of inline code and other contrivances. GAS is functional and proven, if not pretty.

Linker-Loader: Object modules created by GAS were linked together by an object module linkage editor. We had a wide variety of candidates available from BSD, FSF, and others from which to choose. However, because our object file format exactly matched that of our cross-host (a National Series 32000 machine), we put off the ultimate decision by using the cross-host's native UNIX ld command. This worked, without modification, to our satisfaction.

Communications and File Transfer: We needed a way to get programs and files created on our cross-host transferred to our 386 PC. Many cross-host to PC communications programs are available, and we settled on Kermit and NCSA Telnet (ftp) to do the job.

Protected-mode Loader: Once we had transferred the programs to the PC, we used our protected-mode loader program (see "Three Initial PC Utilities" in DDJ, February 1991) to load the programs and execute them in 386 protected mode.

Ancillary Tools: In addition to the heavy hitters, various minor commands are also needed to create and organize the object libraries. Commands such as ar, ranlib, nm, and lorder were required. Again, like the ld command above, we were able to use the cross-host's native commands due to the identical executable format and byte order of the cross-host and our 386.

In addition to these programs, our cross-support facility must have the following data objects present to build kernel and utilities:

Object Libraries: The standalone system (libsa.a) and utilities (libc.a and others) make great use of their respective library calls. These libraries satisfy, on the average, a few hundred of the function entry and data structure references invoked by various BSD utility programs. Most of the machine-dependent portions of BSD utilities are located in the libraries, so the majority of effort expended in porting the utilities is focused on the libraries. Over the course of the 386BSD project, we wrote the machine-dependent code into the libraries to get a given utility operational only as needed, rather than writing it all at once. Incremental coding provided a tactical advantage, because by the time we needed to wrestle with the most difficult code, we had quite a bit of seasoned experience with the 386.

Include Files: In addition to object libraries, we must provide a complete set of include files for use with our cross-support package. A simple approach might be to have all references to include files directed to a separate i386 include directory, but this would interfere with the pathnames invoked by a variety of makefiles and shell scripts, not to mention all the embedded references in the source code itself. After finding over a hundred references to absolute pathnames, with no end in sight, we gave up on this approach and did the unspeakable--installed all the 386 include files in place on the cross-host. By virtue of the shell commands to386 and back2normal, we could switch our cross-host back and forth in this manner. Thank goodness, no other users needed to compile native programs at the time; they would have been somewhat surprised!

Cross-Support Methodology

We can employ several standard methods to aid in our cross-support effort: regression testing, divide and conquer, consistency checks, and defeating optimizations.

Regression testing is used to probe for the presence of induced bugs at every step along the way to proving our cross-tools. Prior to creating our cross-compiler, we generate our early test files from a known-good, tested implementation (in the case of 386BSD, a Sequent 386 UNIX system). Compiler output for selected unmodified portions of the compiler and the operating system kernel is kept as reference assembly language files, for comparison against the output of subsequent compiler versions compiling the same files. An induced error would show up as a difference in the comparison of the two. As an example, a whole group of instructions might be missing, signifying a dropped expression left uncompiled by a buggy compiler. In a similar fashion, a group of object files from the assembler is also created to compare with those created by the assembler on our 386.

In addition to this set of test files, a record is kept of every kind of induced bug and the source code which generated it. Thus, common bugs which are inadvertently reintroduced periodically can be caught without needing to be debugged a second time (or a third ... ).

This mechanism for tracking compiler bugs is not a panacea--it is vulnerable to error in two major ways: It does nothing to aid detection of "latent" bugs in the "good" version we started with; and it becomes useless if modifications to the compiler result in widespread changes in the output code, thus obscuring "bug" changes. However, it proved adequate for the short period (one to two months) it took to reliably compile code in native 386BSD.

"Divide and conquer" is used to isolate the effect of multiple bugs appearing as a single impossible-to-find bug. It is a very powerful tool for use in certain unpleasant predicaments. For example, during the 386BSD project, we detected the presence of a kernel bug, a compiler bug, and a library bug all hitting at the same point, at a time when we did not yet have an operable debugger to sort out the mess. After isolating the problem with blitheringly primitive printfs, we tried porting similar, related programs, until we found a program that isolated the library bug and the compiler bug at separate times. Once we fixed these bugs, we recompiled the entire set of kernel and applications programs. The remaining kernel bug was then obvious to see and correct. Divide and conquer allowed us to solve an "unsolvable" problem.

Consistency checks are implemented in the drivers and trap/system call handlers to detect "impossible" conditions, such as returning to a user program with interrupts off, a completely invalid user stack pointer, and so forth. At one point, we even had them in library code and inline in the C compiler's assembly language output. Throughout the 386BSD development cycle, consistency checks provided a mechanism to detect a problem before it became terminal and untraceable. For example, when we converted 386BSD from 4.3BSD-Tahoe to 4.3BSD-Reno, consistency checks detected a disastrous problem caused by a side effect of the context-switch code. Consistency checks have their downside, however: performance degrades when they are used in speed-sensitive areas such as system call handling. Resist temptation, however, and don't take them out just for convenience. Otherwise, mysterious problems will reappear and drive you crazy.
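As a sketch of what such a check might look like (the names and constants here are hypothetical illustrations, not code from the 386BSD sources), a trap handler can verify the saved state before resuming the user program:

    extern void panic(const char *msg);

    #define PSL_I    0x00000200     /* EFLAGS interrupt-enable bit */
    #define USRSTACK 0xFDBFE000     /* hypothetical top of user stack */

    struct trapframe {              /* registers saved on kernel entry */
        unsigned tf_eflags;         /* saved flags */
        unsigned tf_esp;            /* saved user stack pointer */
        /* ... remaining saved state ... */
    };

    /* Consistency checks before returning to a user program: these
     * conditions are "impossible," so any hit means kernel damage. */
    void
    check_return(struct trapframe *tf)
    {
        if ((tf->tf_eflags & PSL_I) == 0)
            panic("trap return: interrupts off");
        if (tf->tf_esp > USRSTACK)
            panic("trap return: invalid user stack pointer");
    }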

Another type of seemingly benign tinkering which results in disaster comes when one tries various performance optimizations too early in the game. We ran into problems every time we tried jumping ahead by improving our early development code before it was fully reliable. It is better to "comment out" performance improvements, compiler optimization, and "short circuit" code evaluation, until the code and compiler are somewhat shopworn. It is very frustrating when you have found a mechanism for a section of code that might improve performance by an order of magnitude or more, but only at the risk of upsetting the kernel operation itself. Be wary of such improvements--patience is definitely a virtue in a systems project.

Which C Standard?

In the early days of Berkeley UNIX (pre-Version 6), C was not yet standardized. For example, types such as "unsigned" did not even exist--instead, arithmetic was done on "char*" types. Partly as a result of early portability experiments, Bell Labs eventually revised C to conform to a definition devised by Brian Kernighan and Dennis Ritchie (K&R), two Bell Labs scientists. Their book, The C Programming Language (Prentice-Hall, 1978), defined what C was for almost the next ten years. Berkeley then adopted this new "standard" for all related prior code and all new code when it began to put a serious effort into developing new UNIX functionality. As the use of C has grown, its popularity has necessitated the evolution and solidification of an ANSI specification of the language and its semantics. Pre-K&R adherents to C, ideological to a fault, have frequently found much amusement in this obsession with standards. After all, they originally had to fight management and funding group opposition to its use (partly on the grounds of "standardization") in many major projects for which it was well suited, and had to live with the barrage of Fortran, Pascal, and then Ada efforts to displace C as the preeminent systems programming language of the day. Perhaps those groups might finally agree that C will be around for yet a few years to come!

What does this have to do with 386BSD? Plenty! It seems that some believe it is time to move BSD, kicking and screaming, into the ANSI C world, but others are still adherents of the K&R viewpoint. Since the K&R portable C compiler is still used for slowly dying architectures and is yet a force to be reckoned with, 386BSD must find a middle course. 386BSD has an eye towards the future, however, so a concerted effort has been made to write the 386-dependent code in the new ANSI C format, while remaining compatible with K&R C in common code by virtue of #ifdefs.
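One such arrangement lets a single declaration serve both camps; a macro along these lines appears in BSD's <sys/cdefs.h> (the example function name below is ours, purely for illustration):

    /* Declarations carry full prototypes under an ANSI compiler and
     * collapse to old-style declarations under a K&R compiler. */
    #if defined(__STDC__)
    #define __P(protos)    protos      /* ANSI: keep the prototype */
    #else
    #define __P(protos)    ()          /* K&R: old-style declaration */
    #endif

    int copyout __P((void *kaddr, void *uaddr, unsigned len));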

GCC attempts to remedy this conflict by providing a traditional mode, but this is inadequate for our needs. GCC, it turns out, is not perfectly "traditional," as it favors ANSI semantics. (This should actually be no surprise, as it is difficult to be complete in this regard.) As such, it is another source of "silent" bugs to be aware of, because the majority of the BSD code was written to older standards.
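A small example of the kind of silent difference at issue (our illustration, not a case from the BSD sources): traditional pre-ANSI compilers and ANSI compilers promote unsigned char differently, so the same comparison can flip meaning between the two.

    /* Traditional "unsigned-preserving" rules promote an unsigned char
     * to unsigned int, so len - 1 wraps to a huge positive value and
     * the test below is always false. ANSI "value-preserving" rules
     * promote it to (signed) int, so for len == 0 the test is true.
     * The code compiles silently either way. */
    int
    underflows(unsigned char len)
    {
        return (len - 1 < 0);
    }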

Other Cross-Support Issues

In the area of cross-host communications, a few amusing irritants developed. When we first used Kermit and ordinary serial lines for the early standalone system and kernel work, the few minutes of download delay to MS-DOS were livable, given that the debugging time required for each cycle was usually about 20 minutes. As we got more proficient with the 386, however, and as we reached the limits of our documentation on 386 features, our debug sessions became shorter than the download time. Also, downloading a kernel (100 to 200 Kbytes) or a filesystem (1 to 5 Mbytes) began to occur more frequently, thus eating up even more time. Finally, with the help of a cheap (approximately $100) Ethernet card, we migrated to NCSA Telnet. This change cut the download time to a more reasonable number.

Success frequently results in its own problems; we rapidly filled our tiny 40-Mbyte drive. It became increasingly difficult to manage slightly different versions of utilities, and the cheap and clever tricks we had used to bypass some development steps were themselves becoming stumbling blocks. Because we were sharing the disk with MS-DOS and using MS-DOS utilities to communicate with the outside world, files had to fit in the MS-DOS partition. By this time, it was clear that the tenuous partnership between MS-DOS and BSD was drawing to an end.

Validating GCC for Use in a Cross-Environment

We found GCC to have many fine qualities--unfortunately, cross-support operation was not one of them. From its inception, GCC has traditionally been run on the host on which it was compiled, and little thought has been put into preserving its ability to run on a machine vastly different from that host. In addition, some architectures supported under GCC relied to some degree on the presence of a preexisting native compiler to compile GCC and parts of its own compiler support libraries. To be fair, the compiler itself is quite capable of compiling and supporting itself. However, as originally configured, neither cross-support nor compiler bootstrapping is very satisfying.

Other hurdles which we had to surmount included locating host compiler bugs upon compiling GCC. Unlike other compiler writers, who attempt to minimize the use of arbitrary C features in their code, GCC's creators revel in them. As a result, compiling GCC itself constitutes an excellent test of a compiler because of its rich use of the language and the impressive demands (macros, pointer dereferencing) it places on the said compiler. While this style of implementation is at loggerheads with practical portability in our compromised "real" world, we must admit that the creators of GCC show fearless, if not reckless, faith in their compiler. No one else exploits the C language so completely, at the price of having to provide faultless support for such extensive use of it. The intellectual honesty required for such an implementation has received its fair portion of praise.

In the course of attempting to qualify a cross-host, we attempted to compile GCC on many machines. One less than serious attempt was made to compile portions of GCC on MS-DOS using various common PC C compilers. As expected, we got dismal results. We found that to compile GCC on MS-DOS, we would have to extensively rewrite the code, and also use some manner of MS-DOS extender--an effort not compatible with our specification goals. We did consider using the standalone library (see "The Standalone System" in DDJ, March 1991) to run GCC in native mode after compiling GCC on a borrowed 386 system elsewhere, but gave up on this when our cross-host version of GCC stabilized. We worried that these two PC-hosted approaches would not only require a great deal of additional work, but also require us to maintain them in the future for avid users. Perhaps a fate worse than death?

Our intended cross-host, a UNIX machine, had many problems in compiling GCC, even though the compiler has been part of a stable production system for many years. However, consistency checks within GCC itself allowed us to locate the nature of the problem to within a few thousand instructions, whereupon we would tediously single-step to the problem with a debugger. Since we could not fix the cross-host's native compiler (frequently this would mean exchanging the bug you know for the bugs generated by the fix that you don't know), we mauled GCC itself and defeated portions of the compiler in a successful attempt to avoid code that the native compiler would mishandle. Due to the nature of the native compiler bug (an obscure pointer aliasing problem), this was the only way we could convince ourselves that we were not just migrating the bug. As you might expect from our mention above, one of the best tests of our then-generated cross-compiler was GCC itself.

Another aspect of running GCC in a cross-environment is dealing with an internal support library known as gnulib.a. GCC is arranged so that machine-dependent operations the compiler does not implement itself by issuing assembly code are instead implemented by a subroutine call to a gnulib.a entry point. To fill in these missing areas, one creates gnulib.a by compiling, with the native host's compiler (not GCC), source code that encapsulates each missing feature, relying on that compiler to implement the feature as it sees fit. Here's an example. Suppose we have the C expression:

  if (a != b) ...

Let's assume the compiler does not know how to handle !=. It could generate code to call a gnulib entry point:

      ...
    pushl _a
    pushl _b
    call noteqsi2
      ...

The gnulib would contain code compiled with a native compiler other than GCC, one that can deal with a != expression:

  noteqsi2(n, m)
  {
      return (n != m);
  }

This is a sneaky way to leverage an existing native compiler to fill voids in GCC. Surprisingly, this works with our cross-host in most cases. We implemented replacements for gnulib entries only as needed (few are ever called).

We ran into an entertaining problem when we first moved the compiler onto the 386. Because we no longer needed our cross-host modifications to GCC, we started recompiling the stock version of GCC, including gnulib, with the only compiler we had on our nascent BSD UNIX system, namely GCC. GCC generated code that would call the support library, which in turn would then call itself to implement the same support, and so on ad infinitum! This is another minor example of the lack of native support for GCC in the then-standard release. It is expected that GCC 2.0 and later versions will better address these and other cross-support issues.
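In terms of the hypothetical noteqsi2 example above, the failure looks like this:

    /* If gnulib's own source is compiled by a GCC that still lowers !=
     * into a support call, the routine compiles into a call to itself: */
    noteqsi2(n, m)
    {
        return (n != m);    /* emitted as "call noteqsi2": recurses forever */
    }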

GCC Support Calls to Replace GNULIB

In addition to the normal subroutine libraries found with BSD, two support subroutines are needed. GCC handles all ANSI C operations by generating the appropriate 386 instructions, with the exception of floating point conversion to signed and unsigned integers. In Listing One (below), fixdfsi() takes a double-precision floating point argument (a df) and turns it into a signed integer (an si, an integer that fits within a machine word). In Listing Two (below), fixunsdfsi() likewise takes a double-precision floating point argument and returns an unsigned (uns) integer. These functions use the 386 numeric processor's integer truncation features to return the appropriate values. Because there is no direct method to convert a floating point number to unsigned format, we detect the out-of-range condition (a value above the most positive signed number possible), reduce the value prior to conversion (so it fits into a signed value), then add back what we subtracted after conversion, thus avoiding overflow.
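Restated in C, the overflow-avoidance logic of Listing Two looks like the following sketch (the function name and the C rendition are ours; the actual routine is the assembly in Listing Two):

    /* Illustrative C restatement of Listing Two's logic; 2^31 is the
     * smallest value that no longer fits in a signed 32-bit integer. */
    unsigned long
    fixunsdfsi_sketch(double d)
    {
        if (d < 2147483648.0)           /* fits in a signed long? */
            return ((long)d);           /* plain signed conversion */
        /* Too large: subtract 2^31 so it fits, convert as signed,
         * then add the bias back as an unsigned quantity. */
        return ((long)(d - 2147483648.0) + 2147483648UL);
    }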

Choosing a Sensible Cross-Host

Our ad hoc modifications of GCC resulted in a cross-compiler that would provide a considerable amount of language support, but it had limits. We also needed to consider the following: include file differences, byte sex, floating point format, inline assembly code, table generation programs, hardware page size, and object libraries. Some of these areas were so pervasive and important that they were primary considerations when we selected our cross-host.

By selecting an appropriate cross-host, we minimized a number of problems through compatible byte sex, structure data alignment, program size limits, and an existing tool set. Floating point data format turned out to be a minor concern because few programs in the early utilities group require it. Thanks to the IEEE floating point standard, this is easy, as most post-VAX-era processors support the same format (modulo byte order). Obviously, our job would have been simpler if we already had 386BSD up and running and then had to port it, so what we looked for in a cross-host was something very similar. Oddly enough, a C compiler hides most of the native machine's instruction set, so the least important part is the cross-host's processor architecture. Operating system version and program development tool similarity count for much more.
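As an aside, byte sex is trivially checked on a candidate cross-host with a test program of this sort (our sketch, purely illustrative):

    #include <stdio.h>

    /* Print the byte order of the machine this runs on: store a known
     * word and inspect its lowest-addressed byte. */
    int
    main(void)
    {
        unsigned long word = 0x04030201;
        unsigned char *p = (unsigned char *)&word;

        printf("%s-endian\n", *p == 0x01 ? "little" : "big");
        return (0);
    }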

Those more dogmatic, gutsy, or energetic might say that we simply avoided the hard parts. They are quite correct. What hardships we did endure in cross-tools were more than enough for us.

Where Do We Go From Here?

Now that we have created a stable cross-tools environment, we can get on to the last of our initial utilities--the initial root filesystem. In our next article, we will examine the minimum requirements which must be met to run a UNIX system, and the interrelationships between different UNIX files and utilities needed during the various stages of our 386BSD port. We then create a root filesystem containing, among others, /etc/init, /bin/sh, /dev/console, and /bin/ls (a token program), and debug it via the standalone utilities. We also discuss some of the problems encountered in filesystem downloading and validation procedures.

Brief Notes: Copyrights, Copylefts, and Competitive Advantage

Usually when we discuss a piece of software, we attempt to enhance our understanding with a program or fragment of code which illustrates the topic. Therefore, it is quite frustrating to discuss as major a tool as GCC, where the code is available to anyone upon request but we are prevented by the "copyleft" from showing you any code fragments. As such, we feel it important to examine the history and some effects of the copyleft.

The copyleft on GNU software was born out of rather turbulent circumstances. In the mid-1980s, a number of commercial entities made a practice of "appropriating" software developed at MIT and other universities and placing their own copyright on it. Richard Stallman, then (and still) at MIT, was involved with some early LISP software development, and experienced firsthand the ruthless and bloody battle between Symbolics and LMI over LISP software enhancements. At the same time, AT&T was at the forefront of developing license agreements for UNIX, though not investing much at that time in the development of UNIX itself. This obvious (and still successful) locking up of research led Stallman and others to work on software projects which would be unencumbered by licenses, copyrights, and other restrictive means. Stallman's EMACS for the PDP-10 was one of the first visual editors available without those restrictions.

While commendable in theory, the practice was quickly thwarted by the success of Gosling's EMACS, a C-based version of Stallman's EMACS which ran under UNIX. As more use was made of Gosling's EMACS, companies began to support it, add new features, and so forth, until finally it was locked up by the vendors. Of course, the changes to the code and new features were not returned to Stallman's group for updates, since that would have impacted a vendor's perceived competitive advantage.

Basically, the copyleft was an extreme response to the excesses of a cutthroat market. While permitting redistribution, the copyleft attempts to maintain access to and control of changes in code, by requiring that source modifications be returned to the FSF for redistribution and by demanding that the source with these modifications be made available from that company to anyone for essentially a "copying" fee. A liberal reading of the license makes it practically impossible for a company to easily lock up the software. It also prevents a company from easily recouping its investment in further software development, enhancements, or support by eliminating its competitive advantage over its competitors. A large company can avoid this by developing or licensing needed software tools, but a small business or individual developer does not have access to these resources.

Finally, the copyleft attempts to exert control over any discussion and analysis of the code itself in any printed medium, and states in part: "...The 'Program,' below, refers to any such program or work, and a 'work based on the Program' means either the Program or any work containing the Program or a portion of it, either verbatim or with modifications ...."

Thus, according to the copyleft, a written examination of GCC, which utilizes some of the code itself for purposes of discussion, falls under the copyleft itself. This is a condition unacceptable to authors and publishers, because they make their income only from the publishing and distribution of written works, and not necessarily from software products. Perhaps this was an unintended side effect of the copyleft, but attempts to narrow it have been to no avail.

The headlong rush towards "open standards," an oxymoron worthy of the military, is no solution either, but merely an effort to mask the implicit control, development, and innovation of a proprietary object by a vested interest by calling it "open." The only open standard is one that has an openly accessible model or example of the standard itself. Just as a mathematical formula in physics is meaningless without example problems and solutions, a standard based on a proprietary object is also meaningless without code solutions which justify its worthiness --and the code answer book to this open standard should not be subject to ransom through the use of "licensing" fees and anticompetitive product controls. Such a standard must also be equally accessible to those developing proprietary and nonproprietary works. This not only mitigates the inherent competitive disadvantage for the small innovator, but is also a disincentive to the development of proprietary "copycat" standards alongside the open standard, in an attempt to undermine its use.

Recently, the trend at many universities and research institutions has been to permit access to university-developed code through simple copyright procedures which permit modification and redistribution with attribution. The copyright used by TeleMuse, for example, is similar to the University of California at Berkeley (UCB) copyright and is designed to be simple and direct; see Figure 1.

Figure 1: The copyright used by TeleMuse in the 386BSD article series

  /* Copyright (c) date, name-of-author.  All rights reserved.
   * Written by name-of-author, date-written.
   * Redistribution and use in source and binary forms are freely permitted
   * provided that the above copyright notice and attribution and date of work
   * and this paragraph are duplicated in all such forms.
   * THIS SOFTWARE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR
   * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
   * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
   */

In addition, UCB copyrights currently prohibit use of the UCB name in products incorporating the software to avoid the appearance of an endorsement.

According to Marshall Kirk McKusick, UCB CSRG research computer scientist and president of USENIX: "We have the capitalists with their copyright and the radicals with their copyleft. We are at the 'copycenter,' since we allow redistribution with credit to the authors. Our goal is to have as many people as possible use our software." In January of 1991, CMU adopted a variant of the UCB copyright for the Mach operating system.

This different approach to copyright does not attempt to regulate the development and distribution of code as does the copyleft. Instead, software is made available with the full knowledge that it will be incorporated into many different projects. These projects, in turn, will ultimately enhance the international competitiveness of the computer industry itself, by allowing individuals and small businesses the same access to these development tools as large corporations. After all, it is the individual and small business which are the sources of innovation in our society. Anything less (including the copyleft) results in a competitive advantage only for large companies with a vested interest in the status quo.

The Free Software Foundation deserves high praise for leading the fight against locked-up software. Some GNU packages, such as GCC and EMACS, have been used by small firms and research groups to develop innovative and unique software and products, which would not otherwise have been feasible for these economically strapped entities. Even 386BSD might not have been possible had we not been able to leverage other resources like GCC. However, as the climate in which the copyleft was developed has moderated, we hope that the FSF will moderate its stand as well, and at the very least permit unfettered discussion and analysis of the code in print. We have every confidence that there will continue to be a flow of new software back to the source from companies, individuals and research groups.

It is time vested interest started offering innovative and competitive works and stopped preventing innovation through the "anticompetitive" use of copylefts, open standards, and licensing. Those who maintain a competitive advantage through the inappropriate use of these methods, instead of through true innovation, have done so at the cost of the competitiveness of the entire domestic computer industry. --L.J.




[LISTING ONE]

/* fixdfsi.s: Copyright (c) 1990 William Jolitz. All rights reserved.
 * Written by William Jolitz 1/90
 * Redistribution and use in source and binary forms are freely permitted
 * provided that the above copyright notice and attribution and date of work
 * and this paragraph are duplicated in all such forms.
 * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
 * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
 * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
 * GCC compiler support function, truncates a double float into a signed long.
 */

    .globl ___fixdfsi
___fixdfsi:
    pushl   $0xe7f      /* truncate, long real, mask all */
    fnstcw  2(%esp)     /* save my old control word */
    fldcw   (%esp)      /* load truncating one */

    fldl    8(%esp)     /* load double */
    fistpl  8(%esp)     /* store back as an integer */
    fldcw   2(%esp)     /* load prior control word */
    popl    %eax        /* discard control word storage */
    movl    4(%esp),%eax    /* fetch the converted integer */
    ret




[LISTING TWO]

/* fixunsdfsi.s: Copyright (c) 1990 William Jolitz. All rights reserved.
 * Written by William Jolitz 4/90
 * Redistribution and use in source and binary forms are freely permitted
 * provided that the above copyright notice and attribution and date of work
 * and this paragraph are duplicated in all such forms.
 * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
 * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
 * WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
 * GCC compiler support function, truncates a double float into unsigned long.
 */

    .globl ___fixunsdfsi
___fixunsdfsi:
    pushl   $0xe7f          /* truncate, long real, mask all */
    fnstcw  2(%esp)         /* save my old control word */
    fldcw   (%esp)          /* load truncating one */
    fldl    8(%esp)         /* argument double to accum stack */
    frndint             /* create integer */
    fcoml   fbiggestsigned      /* bigger than biggest signed? */
    fstsw   %ax         /* copy FPU status word to ax */
    sahf                /* ... and into the CPU flags */
    jnb 1f              /* >= 2^31? reduce before converting */

    fistpl  8(%esp)         /* store back as an integer */
    fldcw   2(%esp)         /* load prior control word */
    popl    %eax            /* discard control word storage */
    movl    4(%esp),%eax        /* fetch the converted integer */
    ret

1:  fsubl   fbiggestsigned      /* reduce for proper conversion */
    fistpl  8(%esp)         /* convert */
    fldcw   2(%esp)         /* load prior control word */
    popl    %eax            /* discard control word storage */
    movl    4(%esp),%eax        /* fetch the converted integer */
    addl    $2147483648,%eax    /* restore bias of 2^31 */
    ret

fbiggestsigned: .double 0r2147483648.0  /* 2^31 */


Copyright © 1991, Dr. Dobb's Journal

